# 14.5: Metabolism and Signaling: The Steady State, Adaptation and Homeostasis


## Introduction

We have studied binding interactions in Chapter 5, kinetics in Chapter 6, and principles of metabolic control in this chapter. We've learned the following:

__Binding Reactions__

- for simple binding of a ligand to a macromolecule, graphs of fractional saturation of the macromolecule vs free ligand concentration are hyperbolic and demonstrate saturation binding. In the initial part of the binding curve, when [L] << K_{D}, the fractional saturation shows a linear dependence on free ligand concentration. Figure \(\PageIndex{1}\) shows [ML] vs L, which is the same basic equation as a plot of Y vs L.

Figure \(\PageIndex{1}\)

- for allosteric binding of a ligand to a multimeric protein, graphs of fractional saturation vs free ligand concentration are sigmoidal and also display saturation binding. In the first part of the binding curve, the fractional saturation is much more sensitive to ligand concentration than in simple binding of a ligand to a macromolecule with one binding site. Figure \(\PageIndex{2}\) below shows graphs for the allosteric binding of a ligand to a macromolecule using the Hill equation (instead of the MWC equation we used to model O_{2} binding to tetrameric hemoglobin).

Figure \(\PageIndex{2}\)

In these two plots, the system (in this case a single macromolecule) displays different sensitivities to ligand concentration, allowing the system to have different responses to changes in physiological conditions.

__Binding and Chemical Reactions__

As with the case for binding interactions, we have seen hyperbolic and sigmoidal plots of initial velocity (v_{0}) vs [substrate] for enzyme-catalyzed reactions. These also allow appropriate responses to a single substrate in a physiological setting.

But what if you put the same macromolecule and ligand into a larger metabolic or signal transduction pathway in vivo? What kinds of responses would they make to a change in input? As we have just seen in our discussion of the steady state, the ligand or substrate concentration might not change at all as flux continues through the pathway. One could imagine a lot of scenarios with different inputs and different optimal outputs. For example, what if the input (a reactant or small signaling molecule) comes in pulses? Ultimately a system should return to its basal state since a prolonged response (such as cell proliferation) could be detrimental to the health of the organism.

Let's look at some simple examples and see how different inputs lead to specific outputs. We'll just construct some very simple reaction diagrams in Vcell and see how varying them leads to different outputs. Here are two simple cases for **isolated** chemical species and reactions, analogs to the simple binding reactions described above.

**Linear Response: A Signal S and a Response R; S → R**

If no enzyme is involved, the rate doubles as the signal (substrate) doubles since dR/dt = k[S] for the first-order reaction. If S is the stimulus and R is the response, a plot of R vs S is linear. Hence the __system responds linearly__ with increasing S. Here is the simple chemical equation

\begin{equation}

\mathrm{S} \underset{\mathrm{k}_2}{\stackrel{\mathrm{k}_1}{\rightleftarrows}} \mathrm{R}

\end{equation}

As a concrete example, consider the synthesis and degradation of a protein, characterized by the following equation derived from mass action.

\begin{equation}

\frac{d R}{d t}=k_0+k_1 S-k_2 R

\end{equation}

where S is the signal (e.g., the concentration of an mRNA) and R is the response (e.g., the concentration of the translated protein). A constant k_{0} has been added to account for any basal rate of the reaction. (This is a vastly oversimplified way to model a complex process like mRNA translation into a protein, as it omits hundreds of steps.)

Here is the simplified derivation under steady state (SS) conditions typically found for enzymes embedded in a pathway.

\begin{equation}

\begin{gathered}

\frac{d R_{S S}}{d t}=k_0+k_1 S-k_2 R=0 \\

R_{S S}=\frac{k_0+k_1 S}{k_2}

\end{gathered}

\end{equation}

The steady-state response R_{SS} is a linear function of S.
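The linear dependence is easy to verify numerically. Below is a minimal Python sketch; the rate constants k0, k1, and k2 are arbitrary illustrative values, not taken from any particular system.

```python
# Steady-state response of the linear synthesis/degradation circuit
# dR/dt = k0 + k1*S - k2*R. Rate constants are illustrative only.
def r_ss(S, k0=1.0, k1=2.0, k2=4.0):
    """Return R_SS = (k0 + k1*S) / k2."""
    return (k0 + k1 * S) / k2

# Equal steps in S give equal steps in R_SS: a linear response.
responses = [r_ss(S) for S in (0.0, 1.0, 2.0, 3.0)]
```

Successive differences between the entries of `responses` are all equal, which is exactly what "the system responds linearly" means here.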

**Hyperbolic Response: E+S ↔ ES → E + R **

In a simple enzyme-catalyzed reaction with a fixed concentration of enzyme, as S increases the initial velocity saturates. Hence there is a limit on the response, so the response R is a hyperbolic function of S. Increasing S ever more after saturation won't lead to more R (in a given amount of time).

As a concrete example of this, consider the phosphorylation/dephosphorylation of a protein R. R_{P} represents the phosphorylated and active form of the protein R, with concentration [R_{P}]. The reaction is simply written as R ↔ R_{P}, where R_{P} is the response. Conservation of mass gives the total amount of R as R_{T} = R + R_{P}. A simple mass action equation can be derived.

Here is the chemical equation

\begin{equation}

\mathrm{R}+\mathrm{S} \underset{\mathrm{k}_2}{\stackrel{\mathrm{k}_1}{\rightleftarrows}} \mathrm{R}_{\mathrm{P}}

\end{equation}

Here is the math equation, again for the steady state (SS), when dR_{P}/dt = 0. (We derived the same equation for the steady-state version of the Michaelis-Menten equation in Chapter 6.)

\begin{equation}

\frac{d R_P}{d t}=k_1 S\left(R_T-R_P\right)-k_2 R_P

\end{equation}

Click below to see the derivation.

**Derivation**

\begin{equation}

\frac{d R_P}{d t}=k_1 R[S]-k_2 R_P

\end{equation}

Substituting R = R_{T} - R_{P}, in the steady state:

\begin{equation}

\begin{gathered}

\frac{d R_P}{d t}=k_1 S\left(R_T-R_P\right)-k_2 R_P=0 \\

k_2 R_{P, S S}=k_1 S\left(R_T\right)-k_1 S\left(R_{P, S S}\right) \\

k_2 R_{P, S S}+k_1 S\left(R_{P, S S}\right)=k_1 S\left(R_T\right) \\

R_{P, S S}\left(k_2+k_1 S\right)=k_1 S\left(R_T\right)

\end{gathered}

\end{equation}

Finally, we get

\begin{equation}

R_{P, S S}=\frac{k_1 S\left(R_T\right)}{\left(k_2+k_1 S\right)}=\frac{\left(R_T\right) S}{\left(\frac{k_2}{k_1}+S\right)}

\end{equation}

In the steady state, dR_{P}/dt = 0, and the steady state equation can be written as:

\begin{equation}

R_{P, S S}=\frac{k_1 S\left(R_T\right)}{\left(k_2+k_1 S\right)}=\frac{\left(R_T\right) S}{\left(\frac{k_2}{k_1}+S\right)}

\end{equation}
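As a quick numerical check of the hyperbolic form (a sketch with illustrative constants; the function name `rp_ss` is our choice), the response half-saturates at S = k_{2}/k_{1} and approaches R_{T} at high S:

```python
# Steady-state phosphorylated protein for R <-> R_P driven by signal S:
# R_P,SS = R_T*S / (k2/k1 + S). Constants are illustrative only.
def rp_ss(S, RT=1.0, k1=1.0, k2=0.5):
    return RT * S / (k2 / k1 + S)

half = rp_ss(0.5)      # S = k2/k1 gives half-saturation (0.5*RT)
high = rp_ss(1000.0)   # large S approaches RT: saturation
```

Doubling S far above saturation barely changes the response, which is the "limit on the response" described above.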

**Sigmoidal Response**

Consider this simple reaction for a homotetramer in which each monomer can bind a substrate S: nS + E_{n} ↔ E_{n}S_{n} → E_{n} + nR. If E_{n} is a multimeric allosteric enzyme, the initial velocity again saturates as S increases, but the response R is a sigmoidal function of S (in analogy to the binding example above). The equation is too complicated to derive here, but the result is a sigmoidal steady-state curve, much as the Hill equation gives for cooperative binding.
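Although the full allosteric steady-state expression is not derived here, a Hill function reproduces the qualitative sigmoidal behavior. A minimal sketch, assuming an illustrative Hill coefficient n = 4 and half-saturation constant K = 1:

```python
# Hill-type sigmoidal response: R = RT * S^n / (K^n + S^n).
# n and K are illustrative; any n > 1 produces the sigmoidal shape.
def hill(S, n=4, K=1.0, RT=1.0):
    return RT * S**n / (K**n + S**n)

# Near S = K the sigmoidal (n = 4) response is far more sensitive to S
# than the hyperbolic (n = 1) response over the same interval.
sig_gain = hill(1.5, n=4) - hill(0.5, n=4)
hyp_gain = hill(1.5, n=1) - hill(0.5, n=1)
```

The comparison of `sig_gain` and `hyp_gain` quantifies the extra sensitivity around the half-saturation point that makes sigmoidal systems good switches.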

## Adaptation and Homeostasis

The above examples show that the response of proteins or enzymes to increasing levels of a stimulus such as a ligand or a substrate can be linear, hyperbolic, or sigmoidal, with quite a varied set of outcomes. However, under many biological conditions, an ever-increasing, or increasing and plateauing, response might be too much. The cell needs a way to turn off the response and settle back to a basal state, even in the presence of constant or changing stimuli. This allows the **adaptation** of a system to a stimulus and the maintenance of **homeostasis**. Every system needs to be able to respond and then return to a homeostatic basal level. The maintenance of homeostasis is critical to life.

The American Society for Biochemistry and Molecular Biology (ASBMB) describes both homeostasis and evolution as key underlying concepts for all of biology. Homeostasis shapes both form and function from the molecular to the organismal level. Homeostasis is needed to maintain biological balance. The steady state in metabolic and signaling pathways, from the molecular to the organismal level, is a hallmark of homeostasis. Here are the learning goals for homeostasis designated by the ASBMB:

**1. Biological need for homeostasis**

Biological homeostasis is the ability to maintain relative stability and function as changes occur in the internal or external environment. Organisms are viable under a relatively narrow set of conditions. As such, there is a need to tightly regulate the concentrations of metabolites and small molecules at the cellular level to ensure survival. To optimize resource use, and to maintain conditions, the organism may sacrifice efficiency for robustness. The breakdown of homeostatic regulation can contribute to the cause or progression of disease or lead to cell death.

**2. Link steady-state processes and homeostasis**

A system that is in a steady state remains constant over time, but that constant state requires continual work. A system in a steady state has a higher level of energy than its surroundings. Biochemical systems maintain homeostasis via the regulation of gene expression, metabolic flux, and energy transformation but are never at equilibrium.

**3. Quantifying homeostasis**

Multiple reactions with intricate networks of activators and inhibitors are involved in biological homeostasis. Modifications of such networks can lead to the activation of previously latent metabolic pathways or even to unpredicted interactions between components of these networks. These pathways and networks can be mathematically modeled and correlated with metabolomics data and kinetic and thermodynamic parameters of individual components to quantify the effects of changing conditions related to either normal or disease states.

**4. Control mechanisms**

Homeostasis is maintained by a series of control mechanisms functioning at the organ, tissue, or cellular level. These control mechanisms include substrate supply, activation or inhibition of individual enzymes and receptors, synthesis and degradation of enzymes, and compartmentalization. The primary components responsible for the maintenance of homeostasis can be categorized as stimulus, receptor, control center, effector, and feedback mechanism.

**5. Cellular and organismal homeostasis**

Homeostasis in an organism or colony of single-celled organisms is regulated by secreted proteins and small molecules often functioning as signals. Homeostasis in the cell is maintained by regulation and by the exchange of materials and energy with its surroundings.

In the rest of this chapter section, we will describe chemically and mathematically simple circuits/motifs that allow perfect or near-perfect adaptation to a stimulus, a hallmark of homeostasis. We will define adaptation as a complete or almost complete return to a basal state after the introduction of a stimulus. In all the cases below we will consider not a single application of a stimulus but a pulsed application (a repetitive step-wave function). The pulsed stimuli could be of constant magnitude or an increasing/decreasing pulse of a signal such as a substrate. All responses must be transient to avoid uncontrolled responses such as proliferation (a hallmark of tumor cells) or cell death.

Adaptation is commonly found in sensory systems like vision, hearing, pressure, and taste. Think of eating your favorite cookie. The first bite is delicious, but by the tenth bite there is significant attenuation of the positive sensory response, which helps keep most of us from continually gaining weight.

Ma et al. conducted simulations on three-component (node) systems (proteins, enzymes) to see which might display perfect or near-perfect adaptation. The simple 3-component motifs or circuits were modeled using simple mass action kinetic equations, ordinary differential equations (which we learned to write in Chapter 6.2), or a combination of both. The systems that displayed adaptation had to conform to three criteria:

- The stimulus had to initially induce a response of high magnitude
- The system had to return to a basal or near basal state.
- The return to a basal state had to be mostly parameter-independent. That is, the return to the basal state must occur for many different combinations of parameters.

The possible 3-component components (nodes) and the links among the nodes are shown in Figure \(\PageIndex{3}\) below.

Figure \(\PageIndex{3}\): Possible 3-component nodes and the links among the nodes. After Ma et al. Cell, 138, 760-773 (2009). https://www.cell.com/fulltext/S0092-8674(09)00712-0. DOI: https://doi.org/10.1016/j.cell.2009.06.013

Out of over 16,000 models, several hundred were found that met the criteria. Most were variations of simple motifs that we will show below. The most common motifs were the **negative feedback loop** and the **incoherent feedforward system**.

Much of the discussions, models, and equations used below are from two articles:

- John J Tyson, Katherine C Chen, Bela Novak, Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell, Current Opinion in Cell Biology, Volume 15, Issue 2, 2003, Pages 221-231, https://doi.org/10.1016/S0955-0674(03)00017-6.
- James E. Ferrell, Perfect and Near-Perfect Adaptation in Cell Signaling, Cell Systems, Volume 2, Issue 2, 2016, Pages 62-67, https://doi.org/10.1016/j.cels.2016.02.006.

By adding a third component to form a mini-pathway, we can now change the response R to a stimulus S from linear, hyperbolic, or sigmoidal in the steady state to one that exhibits perfect or near-perfect adaptation. Again, we see this kind of response in signaling pathways in sensation and also in responses like chemotaxis, in which a cell moves toward a stimulus (a chemoattractant molecule).

### Simple 3-node motif/circuit for perfect adaptation

Figure \(\PageIndex{4}\) below shows our first example of a 3-component system that displays perfect or near-perfect adaptation. The right-hand side shows a Vcell reaction diagram. In this example, a stimulus S (which could be a reactant, a neurotransmitter, an mRNA, etc.) leads to the synthesis of X and also of R, a response molecule. Both X and R get degraded. The yellow squares represent the nodes through which the flux of S to X and R proceeds. Each node has an equation for the flux, J, through the node. The left part of Figure 4 shows the periodic pulses of the stimulus S, which increase the concentration of S from an initial value of S_{0} = 1 uM by 0.2 uM with each step. Note that the flux equations for J are very simple and are based on mass action; they are not derived from Michaelis-Menten kinetic equations.

Figure \(\PageIndex{4}\): Simple 3-component system that displays perfect or near-perfect adaptation.

Note that S, the stimulus (or substrate, for example), is delivered as a square-wave step function over the time interval shown in the graph. The dotted blue line simply shows when each pulse is delivered. The initial S concentration is 1 uM and increases by 0.2 uM with each step (as shown by the gray line). Hence S increases in a stepwise fashion.

Figure \(\PageIndex{5}\) below is a time course graph that shows the stepwise (0.2 uM per step) increase in S from 1 uM and the concentration of R (the response) over 20 seconds. Even though S continues to increase in a stepwise fashion, R rises substantially only on the initial input of S (1 uM); the subsequent increments of S are damped out!

Figure \(\PageIndex{5}\): Time course for a 3-Component Perfect Response system. Model by ModelBrick from VCell: CM-PM12648679_MB4:Perfect_Adaptation; Biomodel 188456707

The present version of Vcell release (as of 4/28/23) does not yet allow the export of a file compatible with the software used to run simulations with this book. The Vcell model includes an "event" which allows for the production of stepwise changes in stimuli. A future release will allow users to run the simulations within this book (as is the case for the other Vcell simulations throughout the book).
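Although the VCell file itself cannot yet be exported, the qualitative behavior in Figure 5 can be sketched with the classic two-variable "sniffer" equations of Tyson et al. (dR/dt = k1·S - k2·X·R; dX/dt = k3·S - k4·X), whose steady-state response R_SS = k1·k4/(k2·k3) is independent of S. The rate constants and Euler step below are illustrative choices, not the values in the VCell model:

```python
# Tyson-style "sniffer": X tracks S and degrades R, so R adapts perfectly.
#   dR/dt = k1*S - k2*X*R
#   dX/dt = k3*S - k4*X
# Steady state: R_SS = k1*k4/(k2*k3), independent of S.
# Constants and step size are illustrative, not from the VCell model.
def run_sniffer(R, X, S, t, dt=1e-3, k1=2.0, k2=2.0, k3=1.0, k4=1.0):
    peak = R
    for _ in range(int(t / dt)):
        R += (k1 * S - k2 * X * R) * dt
        X += (k3 * S - k4 * X) * dt
        peak = max(peak, R)
    return R, X, peak

R, X, _ = run_sniffer(0.0, 0.0, S=1.0, t=20.0)   # settle at the first input
R2, X2, peak = run_sniffer(R, X, S=2.0, t=20.0)  # double the stimulus
# R spikes transiently but returns to the same steady state.
```

With these constants R_SS = 1 at any constant S, so each step in S produces only a transient spike in R, mirroring the damped responses in Figure 5.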

### Negative Feedback Loop

The negative feedback loop is one of the simplest circuits/motifs that generate perfect or near-perfect adaptation. It has only two nodes (yellow dots) and two proteins. An example is bacterial chemotaxis. Figure \(\PageIndex{6}\) below shows a Vcell reaction diagram (left), another representation (middle), and the time course graphs for all species. This model works especially well with certain assigned parameter values.

Figure \(\PageIndex{6}\): Near-Perfect Adaptation from Negative Feedback. Adapted from Ferrell (ibid)

The gray line in the graph is the stimulus S (substrate). The blue line is the response, designated in this model as A. B acts as an inhibitor (note the dotted line to the input node in the left diagram and the blunt-ended red bar in the middle diagram). Note that the stimulus goes from 0.2 uM (initial concentration) at t=0 to 1 uM (a 5-fold increase) at 40 seconds, but the response A increases at most from 0.4 (initial condition) to 0.5 (a 1.25-fold increase).

If we take [A] as the output, then the differential equation for dA/dt is given by

\begin{equation}

\frac{d A}{d t}=k_1 \operatorname{S} \cdot(1-A)-k_2 A \cdot B

\end{equation}

dB/dt is given by

\begin{equation}

\frac{d B}{d t}=k_3 A \frac{1-B}{K_3+1-B}-k_4 \frac{B}{K_4+B}

\end{equation}

The constants for the graph (right) produced by the Vcell model are:

- k_{1} = k_{2} = 200
- k_{3} = 10; k_{4} = 4
- K_{3} = K_{4} = 0.01
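A direct Euler integration of the two ODEs with these constants (the step size and run times are our choices, not the VCell settings) reproduces the near-perfect adaptation: a 5-fold step in S moves the steady-state A only from about 0.4 to about 0.5.

```python
# Euler integration of the negative feedback loop:
#   dA/dt = k1*S*(1-A) - k2*A*B
#   dB/dt = k3*A*(1-B)/(K3+1-B) - k4*B/(K4+B)
# Rate constants are those quoted above; dt and run times are our choices.
def run_nfb(A, B, S, t, dt=1e-4,
            k1=200.0, k2=200.0, k3=10.0, k4=4.0, K3=0.01, K4=0.01):
    peak = A
    for _ in range(int(t / dt)):
        dA = k1 * S * (1 - A) - k2 * A * B
        dB = k3 * A * (1 - B) / (K3 + 1 - B) - k4 * B / (K4 + B)
        A += dA * dt
        B += dB * dt
        peak = max(peak, A)
    return A, B, peak

A0, B0, _ = run_nfb(0.0, 0.0, S=0.2, t=10.0)   # settle at basal stimulus
A1, B1, peak = run_nfb(A0, B0, S=1.0, t=10.0)  # 5-fold step in S
# A0 ~ 0.4 and A1 ~ 0.5: a 5-fold stimulus gives only a ~1.25-fold response.
```

Because k_{1} and k_{2} are large, A equilibrates almost instantly to A ≈ S/(S + B), while the slower, nearly zero-order B equations pin A near k_{4}/k_{3} = 0.4; this is why the response barely moves despite the large change in S.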

### Incoherent Feedforward Systems

In this circuit/motif, the stimulus S increases the concentration of A (the output) but also forms a negative modulator, B, which, with a bit of a time lag, decreases the concentration of A through inhibition. There is no feedback inhibition from A in this simple system. If you're reading carefully, you'll see that the reaction scheme and inhibition are the same as in the first circuit/motif we introduced. Here we simplify the diagram and give the motif its official name. The word __incoherent__ in the name makes sense since the stimulus S is converted both to the output A and to the inhibitor B, which on the surface seems like a counterproductive thing to do.

Figure \(\PageIndex{7}\) below shows the Vcell reaction diagram (left), a more classical reaction diagram (middle), and progress curves showing S, the stimulus; A, the output or response; and B, the inhibitor. The dashed line in the left diagram from B to the reaction node for the S → A reaction shows that B affects the rate of that reaction. The equations used account for the inhibitory effect of B.

Figure \(\PageIndex{7}\): Near-Perfect Adaptation from an Incoherent Feedforward System. Adapted from Ferrell (ibid)

Note that the response A goes up or down a bit with each new step in concentration of S but to a very minimal degree. The system is certainly almost perfectly adapted.

The differential equation for dA/dt (where A is the response) is

\begin{equation}

\frac{d A}{d t}=k_1 \operatorname{S} \cdot(1-A)-k_2 A \cdot B

\end{equation}

The equation for dB/dt (the inhibitor generated from A) is

\begin{equation}

\frac{d B}{d t}=k_3 \text { S } \frac{1-B}{K_3+1-B}-k_4 B

\end{equation}

The constants for the graph (right) produced by the Vcell model are:

- k_{1} = 10; k_{2} = 100
- k_{3} = 0.1; k_{4} = 1
- K_{3} = 0.001
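Integrating these equations with the constants above (the Euler step and run times are our choices) shows essentially perfect adaptation: at steady state B is proportional to S (B ≈ (k_{3}/k_{4})S), so the S-dependence cancels in A ≈ k_{1}S/(k_{1}S + k_{2}B) and A always returns to about 0.5.

```python
# Euler integration of the incoherent feedforward loop:
#   dA/dt = k1*S*(1-A) - k2*A*B
#   dB/dt = k3*S*(1-B)/(K3+1-B) - k4*B
# At steady state B ~ (k3/k4)*S, so A_SS ~ k1/(k1 + k2*k3/k4) = 0.5,
# independent of S. Constants are those quoted above; dt is our choice.
def run_iffl(A, B, S, t, dt=1e-3,
             k1=10.0, k2=100.0, k3=0.1, k4=1.0, K3=0.001):
    peak = A
    for _ in range(int(t / dt)):
        dA = k1 * S * (1 - A) - k2 * A * B
        dB = k3 * S * (1 - B) / (K3 + 1 - B) - k4 * B
        A += dA * dt
        B += dB * dt
        peak = max(peak, A)
    return A, B, peak

A0, B0, _ = run_iffl(0.0, 0.0, S=0.2, t=20.0)   # settle at basal stimulus
A1, B1, peak = run_iffl(A0, B0, S=1.0, t=20.0)  # 5-fold step in S
# A spikes transiently, then returns to ~0.5: near-perfect adaptation.
```

The transient spike arises because A responds to the step in S faster than the inhibitor B can accumulate; once B catches up, A settles back to the same value.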

### State-Dependent Inactivation Systems

There are two simple circuits/motifs in this system; they were found after the initial analyses of all possible interactions in a 3-component system (see Figure 3). The motif was patterned after the inhibition of proteins in neuronal stimulation, specifically ion channels in neural cell membranes that open on a change in the transmembrane potential but then close again quickly to avoid constant neuronal stimulation (or inhibition). In the Na^{+} ion channel, there are both fast (1-2 ms) and slow (100 ms) inactivation mechanisms. The fast one allows for repetitive firing, the development of action potentials, and the control of the excitation of neurons and of the neuromuscular junction. Neuronal signaling is discussed in Chapter 28.9. Figure \(\PageIndex{8}\) below shows a simplified model for one type of inactivation of the Na^{+} ion channel.

Figure \(\PageIndex{8}\): Simplified state transition model of voltage-gated sodium channels featuring closed, open, and inactivated states. Zybura, A. et al. *Cells* **2021**, *10*, 1595. https://doi.org/10.3390/cells10071595. Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The figure implies that there are at least three conformational states of the channel, so the inactivation of the channel, and the circuit/motif for adaptation we will now discuss, are called **state-dependent inactivations**. The slow return to the original state is observed in many ion channels, as well as in the return of G protein-coupled receptors to the normal state after their desensitization. Also, some protein kinases (kinases that use ATP to phosphorylate protein substrates) can be inactivated by internalization of the membrane-bound kinase into vesicles, from which it can be reactivated and returned to the plasma membrane in a slow process.

For the construction of a perfect or near-perfect adaptation state, we will assume the protein A exists in an off state (A_{off}), which binds the stimulus (B or S); an on state (A_{on}), which is viewed as the response (or A produces the response); and an inactivated state (A_{in}), which slowly reverts to the A_{off} state, which can be activated again. The inactive state can be produced by conformational transitions within the protein itself or by another molecule produced downstream of it in a metabolic or signaling pathway. For example, a GPCR could be phosphorylated or bind to another species to produce an inactive state.

There are two different circuits/motifs that can produce state-dependent inactivation. We'll refer to these as Type A and Type B.

**Type A**

Figure \(\PageIndex{9}\) shows the Vcell reaction diagram (top left), a classical reaction diagram (bottom left), and time course graphs for Type A state-dependent inactivation.

Figure \(\PageIndex{9}\): Perfect Adaptation for Type A State-Dependent Inactivation. Adapted from Ferrell (ibid).

A_{on} represents the active state of the protein. This mechanism applies well to the Na^{+} channel. The differential equations for dA_{on}/dt and dA_{off}/dt are shown below.

For dA_{on}/dt

\begin{equation}

\frac{d A_{o n}}{d t}=k_1 \operatorname{Input} \cdot\left(1-A_{o n}-A_{i n}\right)-k_2 A_{o n}

\end{equation}

For dA_{in}/dt

\begin{equation}

\frac{d A_{i n}}{d t}=k_2 A_{o n}

\end{equation}

with constants k_{1} = k_{2} = 1.
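With k_{1} = k_{2} = 1 and a sustained input, these two linear ODEs can in fact be solved exactly: A_{on}(t) = t·e^{-t}, which peaks at 1/e ≈ 0.37 at t = 1 and then decays to zero as A_{in} absorbs the whole pool. A short Euler check (the step size and run time are our choices):

```python
# Type A state-dependent inactivation, Euler integration of
#   dA_on/dt = k1*Input*(1 - A_on - A_in) - k2*A_on
#   dA_in/dt = k2*A_on
# With k1 = k2 = Input = 1 the exact solution is A_on(t) = t*exp(-t):
# a transient peak of 1/e at t = 1, then complete adaptation to zero.
def run_type_a(inp, t, dt=1e-3, k1=1.0, k2=1.0):
    A_on = A_in = 0.0
    peak = 0.0
    for _ in range(int(t / dt)):
        dA_on = k1 * inp * (1 - A_on - A_in) - k2 * A_on
        dA_in = k2 * A_on
        A_on += dA_on * dt
        A_in += dA_in * dt
        peak = max(peak, A_on)
    return A_on, A_in, peak

A_on, A_in, peak = run_type_a(inp=1.0, t=30.0)
# peak ~ 1/e; A_on decays to ~0 while A_in -> 1 (the inactivated pool).
```

Note that the adaptation here is perfect but irreversible on this timescale: the response to a sustained input always dies out because every activated channel eventually lands in the inactivated pool.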

Again, as with the other cases, the stimulus S is pulsed. The different colors in the bottom left reaction diagram denote the off and inactive states (red) and the active state (green), each with a different conformation. The graphs were produced using Vcell. There is a slight anomaly in the graph of A_{on}, which shows two additional small peaks as the system returns to the basal state. This contrasts with the single peak, returning to the basal state in a simple exponential fashion, described in the Ferrell paper. We are uncertain of the source of the discrepancy.

**Type B**

In this case, the periodic stimulus, abbreviated as B, is a binding partner for A_{off}, which produces an active complex B-A_{on}. Figure \(\PageIndex{10}\) below shows the Vcell reaction diagram (top left), a classical reaction diagram (bottom left), and time course graphs for Type B state-dependent inactivation.

Figure \(\PageIndex{10}\): Perfect Adaptation for Type B State-Dependent Inactivation. Adapted from Ferrell (ibid)

BA_{on} represents the active state of the protein bound to B while BA_{in} represents the inactive complex.

The equation for dBA_{on}/dt for the formation of the active state is

\begin{equation}

\frac{d B A_{o n}}{d t}=k_1\left(B_{t o t}-B A_{o n}-B A_{i n}\right) *\left(1-B A_{o n}-B A_{i n}\right)-k_2 B A_{o n}

\end{equation}

and the equation for dBA_{in}/dt for the formation of the inactive state is

\begin{equation}

\frac{d B A_{i n}}{d t}=k_2 B A_{o n}

\end{equation}

with constants k_{1} = k_{2} = 4.
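A minimal Euler sketch of these two equations (B_tot, the step size, and the run time are our illustrative choices) confirms the adaptation: BA_{on} spikes transiently and decays back toward zero while BA_{in} accumulates until the limiting B pool is consumed.

```python
# Type B state-dependent inactivation, Euler integration of
#   dBA_on/dt = k1*(B_tot - BA_on - BA_in)*(1 - BA_on - BA_in) - k2*BA_on
#   dBA_in/dt = k2*BA_on
# k1 = k2 = 4 as quoted above; B_tot and dt are illustrative choices.
def run_type_b(B_tot, t, dt=1e-4, k1=4.0, k2=4.0):
    BA_on = BA_in = 0.0
    peak = 0.0
    for _ in range(int(t / dt)):
        d_on = k1 * (B_tot - BA_on - BA_in) * (1 - BA_on - BA_in) - k2 * BA_on
        d_in = k2 * BA_on
        BA_on += d_on * dt
        BA_in += d_in * dt
        peak = max(peak, BA_on)
    return BA_on, BA_in, peak

BA_on, BA_in, peak = run_type_b(B_tot=0.5, t=10.0)
# BA_on rises transiently and returns to ~0; BA_in -> B_tot = 0.5.
```

As in Type A, the response is a self-limiting transient: each pulse of B is eventually funneled into the inactive complex, so BA_{on} returns to the basal state.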

The graphs (note the different time and concentration scales on the left) show a fairly quick return to the basal state after each pulse of the stimulus (B).