If we then want to find the equilibria, \(\hat{x}_1,\hat{x}_2, ..., \hat{x}_n\), we set all these equations to 0 and solve for one variable at a time. Let's assume we've already done this and now want to know when an equilibrium is stable.
In the linear multivariate case we determined stability with the eigenvalues of a matrix \(\textbf{M}\) of parameters. Unfortunately, we cannot generally write a system of nonlinear equations in matrix form with a matrix composed only of parameters. In order to use what we've learned about eigenvalues and eigenvectors, we first need to linearize the system so that the corresponding matrix does not contain variables.
As we saw with nonlinear univariate models, one useful way to linearize a system is to measure it relative to the equilibrium, \(\epsilon = x - \hat{x}\). Then, assuming the deviation from the equilibrium, \(\epsilon\), is small, we can use a Taylor series expansion to approximate the nonlinear system with a linear one. To do that with multivariate models we'll need to take the Taylor series expansion of a multivariate function.
Taylor series expansion of a multivariate function
Taking the series of a function of \(n\) variables, \(f(x_1, x_2, ..., x_n)\), around \(x_1=a_1\), \(x_2=a_2\), ..., \(x_n=a_n\) gives
\[f(x_1, x_2, ..., x_n) = f(a_1, a_2, ..., a_n) + \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigg|_{x_1=a_1, ..., x_n=a_n} (x_i - a_i) + \text{higher-order terms},\]
where \(\frac{\partial f}{\partial x_i}\) is the "partial derivative" of \(f\) with respect to \(x_i\), meaning that we treat all the other variables as constants when taking the derivative. Considering only the first two terms gives a linear approximation.
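For example, with two variables the linear approximation is
\[f(x_1, x_2) \approx f(a_1, a_2) + \frac{\partial f}{\partial x_1}\bigg|_{x_1=a_1, x_2=a_2} (x_1 - a_1) + \frac{\partial f}{\partial x_2}\bigg|_{x_1=a_1, x_2=a_2} (x_2 - a_2).\]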
So let \(\epsilon_i = x_i - \hat{x}_i\) be the deviation of variable \(x_i\) from its equilibrium value, \(\hat{x}_i\), and write a system of equations describing the change in the deviations for all of our variables,
\[\frac{\mathrm{d}\epsilon_i}{\mathrm{d}t} = \frac{\mathrm{d}x_i}{\mathrm{d}t} = f_i(x_1, x_2, ..., x_n), \quad i = 1, 2, ..., n.\]
We can take a Taylor series around \(x_1=\hat{x}_1, x_2=\hat{x}_2, ..., x_n=\hat{x}_n\) to get a linear approximation of our system near the equilibrium,
\[\frac{\mathrm{d}\epsilon_i}{\mathrm{d}t} \approx f_i(\hat{x}_1, \hat{x}_2, ..., \hat{x}_n) + \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j}\bigg|_{x_1=\hat{x}_1, ..., x_n=\hat{x}_n} (x_j - \hat{x}_j).\]
Each of the partial derivatives \(\frac{\partial f_i}{\partial x_j}\) is evaluated at the equilibrium, so these are constants. The first term is zero, \(f_i(\hat{x}_1, \hat{x}_2, ..., \hat{x}_n) = 0\), because the \(\hat{x}_i\) are an equilibrium. And \(x_j - \hat{x}_j = \epsilon_j\). So we now have a linear system,
\[\frac{\mathrm{d}\vec{\epsilon}}{\mathrm{d}t} = \textbf{J} \vec{\epsilon},\]
where \(\vec{\epsilon} = (\epsilon_1, \epsilon_2, ..., \epsilon_n)^{\mathrm{T}}\) and
\[\textbf{J} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}\]
is the Jacobian matrix.
If any of the equations, \(f_i\), are nonlinear then \(\textbf{J}\) depends on the variables. To determine local stability we therefore find the eigenvalues of the Jacobian evaluated at an equilibrium: as in the linear case, the equilibrium is locally stable if all of those eigenvalues have negative real parts.
Example: epidemiology
Let's return to our model of disease spread from the previous lecture. The rates of change in the numbers of susceptibles and infecteds are
\[\begin{aligned}
\frac{\mathrm{d}S}{\mathrm{d}t} &= \theta - \beta S I - d S + \gamma I \\
\frac{\mathrm{d}I}{\mathrm{d}t} &= \beta S I - (d + v) I - \gamma I.
\end{aligned}\]
We found two equilibria, the disease-free equilibrium,
\[\hat{S} = \frac{\theta}{d}, \quad \hat{I} = 0,\]
and the endemic equilibrium,
\[\hat{S} = \frac{d+v+\gamma}{\beta}, \quad \hat{I} = \frac{\beta\theta - d(d+v+\gamma)}{\beta(d+v)}.\]
The Jacobian of this system is
\[\textbf{J} = \begin{pmatrix} -\beta I - d & -\beta S + \gamma \\ \beta I & \beta S - (d+v+\gamma) \end{pmatrix}.\]
Evaluated at the disease-free equilibrium this becomes
\[\begin{pmatrix} -d & -\beta\theta/d + \gamma \\ 0 & \beta\theta/d - (d+v+\gamma) \end{pmatrix}.\]
This is an upper triangular matrix, so the eigenvalues are just the diagonal elements, \(\lambda = -d, \beta\theta/d-(d+v+\gamma)\). Because all the parameters are rates they are all non-negative, and therefore the only eigenvalue that can have a positive real part (and so cause instability) is \(\lambda=\beta\theta/d-(d+v+\gamma)\). The equilibrium is unstable when this is positive, \(\beta\theta/d-(d+v+\gamma)>0\). Because this equilibrium has no infected individuals, instability in this case means the infected individuals will increase in number from rare -- i.e., the disease can invade.
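If you want to double-check this algebra, a few lines of symbolic computation will do it. Here is a minimal sketch in Python, assuming SymPy is available; it just rebuilds the Jacobian and evaluates it at the disease-free equilibrium.

```python
# Symbolic check of the disease-free equilibrium (sketch; assumes SymPy is installed)
import sympy as sp

S, I = sp.symbols('S I')
theta, beta, d, v, gamma = sp.symbols('theta beta d v gamma', positive=True)

# dS/dt and dI/dt from the model above
dS = theta - beta*S*I - d*S + gamma*I
dI = beta*S*I - (d + v)*I - gamma*I

# Jacobian with respect to (S, I), evaluated at S = theta/d, I = 0
J = sp.Matrix([dS, dI]).jacobian([S, I])
J_dfe = J.subs({S: theta/d, I: 0})

print(J_dfe)                    # upper triangular
print(list(J_dfe.eigenvals()))  # simplifies to -d and beta*theta/d - (d + v + gamma)
```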
We can rearrange the instability condition to get a little more intuition. The disease can invade whenever
\[\frac{\beta\theta/d}{d+v+\gamma} > 1.\]
The numerator is \(\beta\) times the number of susceptibles at the disease-free equilibrium, \(\hat{S}=\theta/d\). This is the rate at which a rare disease infects new individuals. The denominator is the rate at which the disease is removed from the population. Therefore a rare disease that infects faster than it is removed can spread. This ratio, in our case \(\frac{\beta\theta/d}{d+v+\gamma}\), turns out to be the expected number of new infections per infection when the disease is rare. This is termed \(R_0\), a key epidemiological quantity (you may remember estimates of \(R_0\) in the news from a certain recent virus...).
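Giving that ratio its usual symbol,
\[R_0 = \frac{\beta\hat{S}}{d+v+\gamma} = \frac{\beta\theta}{d(d+v+\gamma)},\]
and the disease-free equilibrium is unstable (the disease invades) exactly when \(R_0 > 1\).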
Now for the endemic equilibrium. Plugging these values into the Jacobian and simplifying gives
\[\begin{pmatrix} -\frac{\beta\theta - d\gamma}{d+v} & -(d+v) \\ \frac{\beta\theta - d(d+v+\gamma)}{d+v} & 0 \end{pmatrix}.\]
Here, instead of calculating the eigenvalues explicitly, we can use the Routh-Hurwitz stability criteria for a 2x2 matrix: a negative trace and a positive determinant. The determinant is \(\beta \theta - d (d+v+\gamma)\). For this to be positive we need \(\beta \theta/d > d+v+\gamma\), which was the instability condition for the disease-free equilibrium (\(R_0>1\)). The trace is \(-\frac{\beta \theta - d \gamma}{d+v}\). For this to be negative we need \(\beta \theta/d > \gamma\), which is guaranteed whenever the determinant is positive. So, in conclusion, the endemic equilibrium is valid and stable whenever the disease can invade, \(R_0>1\).
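The same kind of symbolic check works here too. A sketch, again assuming SymPy and repeating the setup so it runs on its own:

```python
# Symbolic check of the endemic equilibrium (sketch; assumes SymPy is installed)
import sympy as sp

S, I = sp.symbols('S I')
theta, beta, d, v, gamma = sp.symbols('theta beta d v gamma', positive=True)

dS = theta - beta*S*I - d*S + gamma*I
dI = beta*S*I - (d + v)*I - gamma*I
J = sp.Matrix([dS, dI]).jacobian([S, I])

# Endemic equilibrium: S_hat from dI/dt = 0 (with I > 0), I_hat from dS/dt = 0
S_hat = (d + v + gamma)/beta
I_hat = (beta*theta - d*(d + v + gamma))/(beta*(d + v))
J_end = J.subs({S: S_hat, I: I_hat})

print(sp.simplify(J_end.trace()))  # equals -(beta*theta - d*gamma)/(d + v)
print(sp.simplify(J_end.det()))    # equals beta*theta - d*(d + v + gamma)
```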
2. Discrete time
Note
The short version of this section is that we can do the same thing in discrete time -- local stability is determined by the eigenvalues of the Jacobian, where the functions in that Jacobian are now our recursions, \(x_i(t+1) = f_i(x_1(t), x_2(t), ..., x_n(t))\), and stability requires all of those eigenvalues to have absolute value less than one (rather than negative real part).
We can do something very similar for nonlinear multivariate models in discrete time,
\[x_i(t+1) = f_i(x_1(t), x_2(t), ..., x_n(t)), \quad i = 1, 2, ..., n.\]
Now the equilibria are found by setting all \(x_i(t+1) = x_i(t) = \hat{x}_i\) and solving for the \(\hat{x}_i\) one at a time. Let's assume we've done this.
To linearize the system around an equilibrium we again measure the system in terms of deviation from the equilibrium, \(\epsilon_i(t) = x_i(t) - \hat{x}_i\), giving
\[\epsilon_i(t+1) = x_i(t+1) - \hat{x}_i = f_i(x_1(t), x_2(t), ..., x_n(t)) - \hat{x}_i.\]
Then taking the Taylor series of each \(f_i\) around \(x_1(t) = \hat{x}_1, ..., x_n(t) = \hat{x}_n\) we can approximate our system near the equilibrium as
\[\epsilon_i(t+1) \approx \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j}\bigg|_{x_1=\hat{x}_1, ..., x_n=\hat{x}_n} \epsilon_j(t),\]
where the constant terms cancel because \(f_i(\hat{x}_1, ..., \hat{x}_n) = \hat{x}_i\) at an equilibrium. This is again a linear system, \(\vec{\epsilon}(t+1) = \textbf{J}\vec{\epsilon}(t)\), with the Jacobian \(\textbf{J}\) evaluated at the equilibrium, and local stability requires every eigenvalue of \(\textbf{J}\) to have absolute value less than one.
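Numerically, that check is one line once you have the Jacobian at the equilibrium. A minimal sketch in Python/NumPy, using a made-up 2x2 Jacobian purely for illustration:

```python
# Discrete-time local stability check: every eigenvalue of the Jacobian at the
# equilibrium must have absolute value < 1 (sketch; assumes NumPy is installed)
import numpy as np

def spectral_radius(J):
    """Largest absolute value among the eigenvalues of J."""
    return max(abs(np.linalg.eigvals(J)))

# Hypothetical Jacobian evaluated at some equilibrium (made-up numbers, illustration only)
J_hat = np.array([[0.5, 0.2],
                  [0.1, 0.8]])

print(spectral_radius(J_hat) < 1)  # True, so this equilibrium would be locally stable
```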