Now that we've covered linear multivariate models, we turn our attention to the most common type of model: a nonlinear one, in which multiple variables interact.
For example, a model of the number of susceptible, \(S\), and infected, \(I\), individuals often includes the interaction between these two variables, \(SI\), describing the rate at which these two classes of individuals meet one another (and potentially spread disease).
Similarly, models of predator, \(P\), and prey, \(N\), abundance often include terms like \(NP\) describing the rate at which predators encounter prey (and potentially eat them).
Let's first see how to deal with these models in general and then apply those techniques to specific circumstances like those mentioned above (in the next two lectures).
1. Continuous time
In general, if we have \(n\) interacting variables, \(x_1, x_2, ..., x_n\), we can write any continuous time model as

\[
\frac{dx_i}{dt} = f_i(x_1, x_2, ..., x_n), \qquad i = 1, 2, ..., n.
\]
If we then want to find the equilibria, \(\hat{x}_1,\hat{x}_2, ..., \hat{x}_n\), we set all these equations to 0 and solve for one variable at a time (note that solving for the equilibrium is not always possible in nonlinear models!).
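To make the equilibrium-finding step concrete, here is a short sketch using sympy. The Lotka-Volterra-style predator-prey model and its parameter names are illustrative assumptions, not taken from the text:

```python
import sympy as sp

# Illustrative Lotka-Volterra predator-prey model (an assumption for this sketch):
#   dN/dt = r*N - a*N*P  (prey reproduce, get eaten at rate a*N*P)
#   dP/dt = b*N*P - d*P  (predators convert prey into offspring, die at rate d)
N, P = sp.symbols('N P')
r, a, b, d = sp.symbols('r a b d', positive=True)

f_N = r*N - a*N*P
f_P = b*N*P - d*P

# Set both rates of change to zero and solve for N and P simultaneously
equilibria = sp.solve([f_N, f_P], [N, P], dict=True)
print(equilibria)
```

Here `solve` returns both the trivial equilibrium \((0, 0)\) and the interior one \((\hat{N}, \hat{P}) = (d/b, r/a)\); as noted above, such closed-form solutions are not guaranteed for every nonlinear model.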
Now note that we can no longer write this system of equations in matrix form with a matrix composed only of parameters.
In order to use what we've learned about eigenvalues and eigenvectors we're first going to have to linearize the system so that the corresponding matrices do not contain variables.
As we saw in nonlinear univariate models, one useful way to linearize a system is to measure the system relative to equilibrium, \(\epsilon = n - \hat{n}\).
Then assuming that the deviation from equilibrium, \(\epsilon\), is small, we used a Taylor series expansion to approximate the nonlinear system with a linear system.
To do that with multivariate models we'll need to know how to take a Taylor series expansion of multivariate functions.
Taylor series expansion of a multivariate function
Taking the Taylor series of \(f\) around \(x_1=a_1\), \(x_2=a_2\), ..., \(x_n=a_n\) gives

\[
f(x_1, ..., x_n) = f(a_1, ..., a_n) + \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} (x_i - a_i) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 f}{\partial x_i \partial x_j} (x_i - a_i)(x_j - a_j) + \cdots,
\]

with each derivative evaluated at \(x_1=a_1\), ..., \(x_n=a_n\),
where \(\frac{\partial f}{\partial x_i}\) is the "partial derivative" of \(f\) with respect to \(x_i\), meaning that we treat all the other variables as constants when taking the derivative.
Then when the difference between each variable and its value, \(x_i-a_i\), is small enough we can ignore all the terms with a \((x_i-a_i)(x_j-a_j)\), and we are left with a linear approximation of \(f\).
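As a quick check with one of the interaction terms from the introduction, take \(f(S, I) = SI\) and expand around the equilibrium values \(\hat{S}\) and \(\hat{I}\):

\[
SI = \hat{S}\hat{I} + \hat{I}(S - \hat{S}) + \hat{S}(I - \hat{I}) + (S - \hat{S})(I - \hat{I}),
\]

which is exact here because \(f\) is quadratic. Dropping the last term, a product of two small deviations, leaves the linear approximation \(SI \approx \hat{S}\hat{I} + \hat{I}(S - \hat{S}) + \hat{S}(I - \hat{I})\).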
So let \(\epsilon_i = x_i - \hat{x}_i\) be the deviation of variable \(x_i\) from its equilibrium value, \(\hat{x}_i\).
Then we can write a system of equations describing the change in the deviations for all of our variables,

\[
\frac{d\epsilon_i}{dt} = \frac{d(x_i - \hat{x}_i)}{dt} = \frac{dx_i}{dt} = f_i(x_1, x_2, ..., x_n),
\]

since each \(\hat{x}_i\) is a constant.
And then we can take a Taylor series around \(x_1=\hat{x}_1, x_2=\hat{x}_2, ..., x_n=\hat{x}_n\) to get a linear approximation of our system near the equilibrium,

\[
\frac{d\epsilon_i}{dt} \approx f_i(\hat{x}_1, ..., \hat{x}_n) + \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j} (x_j - \hat{x}_j) = \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j} (x_j - \hat{x}_j),
\]

where the constant term vanishes because \(f_i(\hat{x}_1, ..., \hat{x}_n) = 0\) at the equilibrium.
Each of the partial derivatives \(\frac{\partial f_i}{\partial x_j}\) is evaluated at the equilibrium, so these are constants. And \(x_j - \hat{x}_j = \epsilon_j\). So we now have a linear system,

\[
\frac{d\epsilon_i}{dt} \approx \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j} \epsilon_j,
\]

or in matrix form \(\frac{d\vec{\epsilon}}{dt} \approx \mathbf{J} \vec{\epsilon}\), where the "Jacobian" \(\mathbf{J}\) is the matrix with entry \(\frac{\partial f_i}{\partial x_j}\) in row \(i\) and column \(j\), evaluated at the equilibrium.
And now that we have a linear system around an equilibrium, we can assess its local stability just as we did with linear multivariate models (see Summary).
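The whole continuous-time recipe can be sketched in a few lines of sympy. The model (predator-prey with logistic prey growth) and the parameter values are illustrative assumptions chosen so that an interior equilibrium exists:

```python
import sympy as sp

# Illustrative model (an assumption, not from the text):
#   dN/dt = r*N*(1 - N/K) - a*N*P
#   dP/dt = b*N*P - d*P
N, P = sp.symbols('N P')
r, K, a, b, d = 1, 10, 1, sp.Rational(1, 2), 1

f = sp.Matrix([r*N*(1 - N/K) - a*N*P,
               b*N*P - d*P])
J = f.jacobian([N, P])  # matrix of partial derivatives df_i/dx_j

# Interior equilibrium: predator zero-growth gives N_hat = d/b, and prey
# zero-growth then gives P_hat = r*(1 - N_hat/K)/a
N_hat = d / b
P_hat = r * (1 - N_hat / K) / a
J_hat = J.subs({N: N_hat, P: P_hat})  # Jacobian evaluated at the equilibrium

# Continuous-time local stability: every eigenvalue has negative real part
eigs = list(J_hat.eigenvals())
stable = all(sp.re(lam) < 0 for lam in eigs)
print(J_hat, eigs, stable)
```

With these values both eigenvalues are complex with real part \(-1/10\), so small deviations spiral in toward the equilibrium.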
2. Discrete time
Note
The short version of this section is that we can do the same thing in discrete time -- local stability is determined by the eigenvalues of the Jacobian, where the functions in that Jacobian are now our recursions, \(x_i(t+1) = f_i(x_1(t), x_2(t), ..., x_n(t))\).
We can do something very similar for nonlinear multivariate models in discrete time,

\[
x_i(t+1) = f_i(x_1(t), x_2(t), ..., x_n(t)), \qquad i = 1, 2, ..., n.
\]
Now the equilibria are found by setting all \(x_i(t+1) = x_i(t) = \hat{x}_i\) and solving for the \(\hat{x}_i\) one at a time.
To linearize the system around an equilibrium we again measure the system in terms of deviation from the equilibrium, \(\epsilon_i(t) = x_i(t) - \hat{x}_i\), giving

\[
\epsilon_i(t+1) = x_i(t+1) - \hat{x}_i = f_i(x_1(t), x_2(t), ..., x_n(t)) - \hat{x}_i.
\]
Then taking the Taylor series of each \(f_i\) around \(x_1(t) = \hat{x}_1, ..., x_n(t) = \hat{x}_n\), and using the fact that \(f_i(\hat{x}_1, ..., \hat{x}_n) = \hat{x}_i\) at the equilibrium, we can approximate our system near the equilibrium as

\[
\epsilon_i(t+1) \approx \sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j} \epsilon_j(t),
\]

or in matrix form \(\vec{\epsilon}(t+1) \approx \mathbf{J} \vec{\epsilon}(t)\), with each partial derivative in the Jacobian \(\mathbf{J}\) evaluated at the equilibrium. The equilibrium is then locally stable when all eigenvalues of \(\mathbf{J}\) have absolute value less than one.
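The discrete-time criterion can be sketched the same way. As an illustration (an assumption, not a model from the text) we can use the classic Nicholson-Bailey host-parasitoid recursions, whose interior equilibrium is famously unstable:

```python
import numpy as np
import sympy as sp

# Illustrative discrete-time model (an assumption for this sketch):
#   N(t+1) = R*N*exp(-a*P),  P(t+1) = c*N*(1 - exp(-a*P))
N, P = sp.symbols('N P')
R, a, c = 2, 1, 1  # parameter values chosen for illustration

f = sp.Matrix([R*N*sp.exp(-a*P),
               c*N*(1 - sp.exp(-a*P))])
J = f.jacobian([N, P])  # Jacobian of the recursions

# Interior equilibrium: N(t+1) = N forces exp(-a*P_hat) = 1/R, so
# P_hat = log(R)/a, and P(t+1) = P then gives N_hat = P_hat/(c*(1 - 1/R))
P_hat = sp.log(R) / a
N_hat = P_hat / (c * (1 - sp.Rational(1, R)))

J_hat = np.array(J.subs({N: N_hat, P: P_hat}).evalf().tolist(), dtype=float)

# Discrete-time local stability: every eigenvalue magnitude below one
eigs = np.linalg.eigvals(J_hat)
stable = bool(np.all(np.abs(eigs) < 1))
print(np.abs(eigs), stable)
```

Here the eigenvalues are a complex pair with magnitude about 1.18, so the equilibrium is unstable: deviations grow as oscillations of increasing amplitude.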