7.5 Homogeneous Linear Systems with Constant Coefficients
We will concentrate most of our attention on systems of homogeneous linear equations with constant coefficients; that is, systems of the form
x′ = Ax,   (1)
where A is a constant n × n matrix. Unless stated otherwise, we will assume further that all the elements of A are real (rather than complex) numbers.
If n = 1, then the system reduces to a single first order equation

dx/dt = ax,   (2)
whose solution is x = ce^{at}. In Section 2.5 we noted that x = 0 is the only equilibrium solution if a ≠ 0. Other solutions approach x = 0 if a < 0, and in this case we say that x = 0 is an asymptotically stable equilibrium solution. On the other hand, if a > 0, then x = 0 is unstable, since other solutions depart from it. For higher order systems the situation is somewhat analogous, but more complicated. Equilibrium solutions are found by solving Ax = 0. We assume that det A ≠ 0, so x = 0 is the only equilibrium solution. An important question is whether other solutions approach this equilibrium solution or depart from it as t increases; in other words, is x = 0 asymptotically stable or unstable? Or are there still other possibilities?
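The scalar case is easy to check directly. The short Python sketch below (the function name is ours, not the text's) evaluates the solution x = ce^{at} and shows decay toward the equilibrium for a < 0 and departure from it for a > 0.

```python
import math

def scalar_solution(c, a, t):
    """Value at time t of the solution x = c*exp(a*t) of dx/dt = a*x."""
    return c * math.exp(a * t)

# a < 0: asymptotically stable; the solution decays toward the equilibrium x = 0.
print(abs(scalar_solution(5.0, -1.0, 10.0)) < 1e-3)   # True

# a > 0: unstable; the same initial value is carried away from x = 0.
print(scalar_solution(5.0, 1.0, 10.0) > 1e4)          # True
```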
The case n = 2 is particularly important and lends itself to visualization in the x₁x₂-plane, called the phase plane. By evaluating Ax at a large number of points and plotting the resulting vectors, one obtains a direction field of tangent vectors to solutions of the system of differential equations. A qualitative understanding of the behavior of solutions can usually be gained from a direction field. More precise information results from including in the plot some solution curves, or trajectories. A plot that shows a representative sample of trajectories for a given system is called a phase portrait. Examples of direction fields and phase portraits occur later in this section.
To construct the general solution of the system (1) we proceed by analogy with the treatment of second order linear equations in Section 3.1. Thus we seek solutions of Eq. (1) of the form

x = ξe^{rt},   (3)

where the exponent r and the constant vector ξ are to be determined. Substituting from Eq. (3) for x in the system (1) gives rξe^{rt} = Aξe^{rt}. Upon canceling the nonzero scalar factor e^{rt} we obtain Aξ = rξ, or

(A − rI)ξ = 0,   (4)

where I is the n × n identity matrix. Thus, to solve the system of differential equations (1), we must solve the system of algebraic equations (4). This latter problem is precisely the one that determines the eigenvalues and eigenvectors of the matrix A. Therefore the vector x given by Eq. (3) is a solution of Eq. (1) provided that r is an eigenvalue and ξ an associated eigenvector of the coefficient matrix A.
The following two examples illustrate the solution procedure in the case of 2 × 2 coefficient matrices. We also show how to construct the corresponding phase portraits.
Later in the section we return to a further discussion of the general n × n system.
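Numerically, the whole procedure — find the eigenpairs of A, then superpose terms ξe^{rt} — takes only a few lines. The sketch below (our own helper names, assuming A has a full set of eigenvectors) uses NumPy and verifies the constructed x(t) against x′ = Ax by a finite difference.

```python
import numpy as np

def eigen_terms(A):
    """Return the (r_i, xi_i) eigenpairs used by the eigenvalue method for x' = A x."""
    r, Xi = np.linalg.eig(A)            # eigenvalues r, eigenvectors in columns of Xi
    return list(zip(r, Xi.T))

def solution(A, c, t):
    """Evaluate x(t) = sum_i c_i * xi_i * exp(r_i * t)."""
    return sum(ci * xi * np.exp(ri * t) for ci, (ri, xi) in zip(c, eigen_terms(A)))

A = np.array([[1.0, 1.0], [4.0, 1.0]])      # coefficient matrix of Example 1
c = [1.0, 1.0]

# Finite-difference check that x(t) satisfies x' = A x at t = 0:
h = 1e-6
deriv = (solution(A, c, h) - solution(A, c, -h)) / (2 * h)
print(np.allclose(deriv, A @ solution(A, c, 0.0), atol=1e-4))   # True
```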
Chapter 7. Systems of First Order Linear Equations
EXAMPLE 1

Consider the system

x′ = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix} x.   (5)
Plot a direction field and determine the qualitative behavior of solutions. Then find the general solution and draw several trajectories.
A direction field for this system is shown in Figure 7.5.1. From this figure it is easy to see that a typical solution departs from the neighborhood of the origin and ultimately has a slope of approximately 2 in either the first or third quadrant.
FIGURE 7.5.1 Direction field for the system (5).
To find solutions explicitly we assume that x = ξe^{rt}, and substitute for x in Eq. (5).
We are led to the system of algebraic equations
\begin{pmatrix} 1-r & 1 \\ 4 & 1-r \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.   (6)
Equations (6) have a nontrivial solution if and only if the determinant of coefficients is zero. Thus allowable values of r are found from the equation
\begin{vmatrix} 1-r & 1 \\ 4 & 1-r \end{vmatrix} = (1 − r)² − 4 = r² − 2r − 3 = 0.   (7)
Equation (7) has the roots r₁ = 3 and r₂ = −1; these are the eigenvalues of the coefficient matrix in Eq. (5). If r = 3, then the system (6) reduces to the single equation

−2ξ₁ + ξ₂ = 0.   (8)

Thus ξ₂ = 2ξ₁, and the eigenvector corresponding to r₁ = 3 can be taken as

ξ^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.   (9)
Similarly, corresponding to r₂ = −1, we find that ξ₂ = −2ξ₁, so the eigenvector is

ξ^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.   (10)
The corresponding solutions of the differential equation are

x^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{3t},   x^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix} e^{-t}.   (11)
The Wronskian of these solutions is

W[x^{(1)}, x^{(2)}](t) = \begin{vmatrix} e^{3t} & e^{-t} \\ 2e^{3t} & -2e^{-t} \end{vmatrix} = -4e^{2t},   (12)
which is never zero. Hence the solutions x^{(1)} and x^{(2)} form a fundamental set, and the general solution of the system (5) is

x = c₁x^{(1)}(t) + c₂x^{(2)}(t) = c₁ \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{3t} + c₂ \begin{pmatrix} 1 \\ -2 \end{pmatrix} e^{-t},   (13)

where c₁ and c₂ are arbitrary constants.
To visualize the solution (13) it is helpful to consider its graph in the x₁x₂-plane for various values of the constants c₁ and c₂. We start with x = c₁x^{(1)}(t), or in scalar form

x₁ = c₁e^{3t},   x₂ = 2c₁e^{3t}.

By eliminating t between these two equations, we see that this solution lies on the straight line x₂ = 2x₁; see Figure 7.5.2a. This is the line through the origin in the direction of the eigenvector ξ^{(1)}. If we look on the solution as the trajectory of a moving particle, then the particle is in the first quadrant when c₁ > 0 and in the third quadrant when c₁ < 0. In either case the particle departs from the origin as t increases.
Next consider x = c₂x^{(2)}(t), or

x₁ = c₂e^{-t},   x₂ = −2c₂e^{-t}.

This solution lies on the line x₂ = −2x₁, whose direction is determined by the eigenvector ξ^{(2)}. The solution is in the fourth quadrant when c₂ > 0 and in the second quadrant when c₂ < 0, as shown in Figure 7.5.2a. In both cases the particle moves toward the origin as t increases. The solution (13) is a combination of x^{(1)}(t) and x^{(2)}(t). For large t the term c₁x^{(1)}(t) is dominant and the term c₂x^{(2)}(t) becomes negligible. Thus all solutions for which c₁ ≠ 0 are asymptotic to the line x₂ = 2x₁ as t → ∞. Similarly, all solutions for which c₂ ≠ 0 are asymptotic to the line x₂ = −2x₁ as t → −∞. The graphs of several solutions are shown in Figure 7.5.2a. The pattern of trajectories in this figure is typical of all second order systems x′ = Ax for which the eigenvalues are real and of opposite signs. The origin is called a saddle point in this case. Saddle points are always unstable because almost all trajectories depart from them as t increases.
In the preceding paragraph we have described how to draw by hand a qualitatively correct sketch of the trajectories of a system such as Eq. (5), once the eigenvalues and eigenvectors have been determined. However, to produce a detailed and accurate drawing, such as Figure 7.5.2 and other figures that appear later in this chapter, a computer is extremely helpful, if not indispensable.
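A computer also makes it easy to confirm the hand calculation of Example 1. The NumPy sketch below (our own check, not part of the original text) recovers the eigenvalues 3 and −1, the eigenvector directions (1, 2) and (1, −2), and the value −4 of the Wronskian (12) at t = 0.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])
r, Xi = np.linalg.eig(A)
print(np.allclose(sorted(r), [-1.0, 3.0]))       # True: eigenvalues -1 and 3

# Compare eigenvector directions with (1, 2) and (1, -2); we scale the first
# component to 1 because eigenvectors are determined only up to a constant factor.
for ri, xi in zip(r, Xi.T):
    expected = [1.0, 2.0] if np.isclose(ri, 3.0) else [1.0, -2.0]
    print(np.allclose(xi / xi[0], expected))     # True (printed twice)

# Wronskian (12) at t = 0 is det [[1, 1], [2, -2]] = -4.
print(np.isclose(np.linalg.det([[1.0, 1.0], [2.0, -2.0]]), -4.0))   # True
```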
FIGURE 7.5.2 (a) Trajectories of the system (5); the origin is a saddle point. (b) Plots of x1 versus t for the system (5).
As an alternative to Figure 7.5.2a, one can also plot x₁ or x₂ as a function of t; some typical plots of x₁ versus t are shown in Figure 7.5.2b, and those of x₂ versus t are similar. For certain initial conditions it follows that c₁ = 0 in Eq. (13), so that x₁ = c₂e^{-t} and x₁ → 0 as t → ∞. One such graph is shown in Figure 7.5.2b, corresponding to a trajectory that approaches the origin in Figure 7.5.2a. For most initial conditions, however, c₁ ≠ 0 and x₁ is given by x₁ = c₁e^{3t} + c₂e^{-t}. Then the presence of the positive exponential term causes x₁ to grow exponentially in magnitude as t increases. Several graphs of this type are shown in Figure 7.5.2b, corresponding to trajectories that depart from the neighborhood of the origin in Figure 7.5.2a. It is important to understand the relation between parts (a) and (b) of Figure 7.5.2 and other similar figures that appear later, since one may want to visualize solutions either in the x₁x₂-plane or as functions of the independent variable t.
EXAMPLE 2

Consider the system

x′ = \begin{pmatrix} -3 & \sqrt{2} \\ \sqrt{2} & -2 \end{pmatrix} x.   (14)
Draw a direction field for this system; then find its general solution and plot several trajectories in the phase plane.
The direction field for the system (14) in Figure 7.5.3 shows clearly that all solutions approach the origin. To find the solutions, assume that x = ξe^{rt}; then we obtain the algebraic system
\begin{pmatrix} -3-r & \sqrt{2} \\ \sqrt{2} & -2-r \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.   (15)
FIGURE 7.5.3 Direction field for the system (14).
The eigenvalues satisfy (−3 − r)(−2 − r) − 2 = r2 + 5r + 4 = (r + 1)(r + 4) = 0, (16)
so r₁ = −1 and r₂ = −4. For r = −1, Eq. (15) becomes
\begin{pmatrix} -2 & \sqrt{2} \\ \sqrt{2} & -1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.   (17)
Hence ξ₂ = √2 ξ₁, and the eigenvector ξ^{(1)} corresponding to the eigenvalue r₁ = −1 can be taken as

ξ^{(1)} = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix}.   (18)
Similarly, corresponding to the eigenvalue r₂ = −4, we have ξ₁ = −√2 ξ₂, so the eigenvector is

ξ^{(2)} = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix}.   (19)
Thus a fundamental set of solutions of the system (14) is

x^{(1)}(t) = \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix} e^{-t},   x^{(2)}(t) = \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix} e^{-4t},   (20)
and the general solution is

x = c₁x^{(1)}(t) + c₂x^{(2)}(t) = c₁ \begin{pmatrix} 1 \\ \sqrt{2} \end{pmatrix} e^{-t} + c₂ \begin{pmatrix} -\sqrt{2} \\ 1 \end{pmatrix} e^{-4t}.   (21)
Graphs of the solution (21) for several values of c₁ and c₂ are shown in Figure 7.5.4a. The solution x^{(1)}(t) approaches the origin along the line x₂ = √2 x₁, while the solution x^{(2)}(t) approaches the origin along the line x₁ = −√2 x₂. The directions of these lines are determined by the eigenvectors ξ^{(1)} and ξ^{(2)}, respectively. In general, we have a combination of these two fundamental solutions. As t → ∞, the solution x^{(2)}(t) is negligible compared to x^{(1)}(t). Thus, unless c₁ = 0, the solution (21) approaches the origin tangent to the line x₂ = √2 x₁. The pattern of trajectories shown in Figure 7.5.4a is typical of all second order systems x′ = Ax for which the eigenvalues are real, different, and of the same sign. The origin is called a node for such a system.
If the eigenvalues were positive rather than negative, then the trajectories would be similar but traversed in the outward direction. Nodes are asymptotically stable if the eigenvalues are negative and unstable if the eigenvalues are positive.
Although Figure 7.5.4a was computer-generated, a qualitatively correct sketch of the trajectories can be drawn quickly by hand, based on a knowledge of the eigenvalues and eigenvectors.
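Example 2's node can be confirmed numerically as well (again a NumPy sketch of ours): both eigenvalues are negative, and the slow direction, the one generic trajectories become tangent to as they approach the origin, is (1, √2).

```python
import numpy as np

s2 = np.sqrt(2.0)
A = np.array([[-3.0, s2], [s2, -2.0]])
r, Xi = np.linalg.eig(A)

print(np.allclose(sorted(r), [-4.0, -1.0]))    # True: both eigenvalues negative

# The eigenvector for the slow eigenvalue r = -1 points along (1, sqrt(2)),
# the line that generic solutions approach the origin tangent to.
slow = Xi[:, np.argmax(r)]
print(np.allclose(slow / slow[0], [1.0, s2]))  # True
```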
Some typical plots of x₁ versus t are shown in Figure 7.5.4b. Observe that each of the graphs approaches the t-axis asymptotically as t increases, corresponding to a trajectory that approaches the origin in Figure 7.5.4a. The behavior of x₂ as a function of t is similar.
FIGURE 7.5.4 (a) Trajectories of the system (14); the origin is a node. (b) Plots of x₁ versus t for the system (14).
The two preceding examples illustrate the two main cases for 2 × 2 systems having eigenvalues that are real and different: Either the eigenvalues have opposite signs (Example 1) or the same sign (Example 2). The other possibility is that zero is an eigenvalue, but in this case it follows that det A = 0, which violates the assumption made at the beginning of this section.
Returning to the general system (1), we proceed as in the examples. To find solutions of the differential equation (1) we must find the eigenvalues and eigenvectors of A from the associated algebraic system (4). The eigenvalues r₁, . . . , rₙ (which need not all be different) are roots of the nth degree polynomial equation

det(A − rI) = 0.   (22)
The nature of the eigenvalues and the corresponding eigenvectors determines the nature of the general solution of the system (1). If we assume that A is a real-valued matrix, there are three possibilities for the eigenvalues of A:

1. All eigenvalues are real and different from each other.
2. Some eigenvalues occur in complex conjugate pairs.
3. Some eigenvalues are repeated.
If the eigenvalues are all real and different, as in the two preceding examples, then associated with each eigenvalue rᵢ is a real eigenvector ξ^{(i)}, and the set of n eigenvectors ξ^{(1)}, . . . , ξ^{(n)} is linearly independent. The corresponding solutions of the differential system (1) are

x^{(1)}(t) = ξ^{(1)}e^{r_1 t}, . . . , x^{(n)}(t) = ξ^{(n)}e^{r_n t}.   (23)
To show that these solutions form a fundamental set, we evaluate their Wronskian:

W[x^{(1)}, . . . , x^{(n)}](t) = \begin{vmatrix} \xi_1^{(1)}e^{r_1 t} & \cdots & \xi_1^{(n)}e^{r_n t} \\ \vdots & & \vdots \\ \xi_n^{(1)}e^{r_1 t} & \cdots & \xi_n^{(n)}e^{r_n t} \end{vmatrix} = e^{(r_1+\cdots+r_n)t} \begin{vmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{vmatrix}.   (24)
First, we observe that the exponential function is never zero. Next, since the eigenvectors ξ^{(1)}, . . . , ξ^{(n)} are linearly independent, the determinant in the last term of Eq. (24) is nonzero. As a consequence, the Wronskian W[x^{(1)}, . . . , x^{(n)}](t) is never zero; hence x^{(1)}, . . . , x^{(n)} form a fundamental set of solutions. Thus the general solution of Eq. (1) is

x = c₁ξ^{(1)}e^{r_1 t} + · · · + cₙξ^{(n)}e^{r_n t}.   (25)
If A is real and symmetric (a special case of Hermitian matrices), recall from Section 7.3 that all the eigenvalues r₁, . . . , rₙ must be real. Further, even if some of the eigenvalues are repeated, there is always a full set of n eigenvectors ξ^{(1)}, . . . , ξ^{(n)} that are linearly independent (in fact, orthogonal). Hence the corresponding solutions of the differential system (1) given by Eq. (23) again form a fundamental set of solutions, and the general solution is again given by Eq. (25). The following example illustrates this case.
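This guarantee for symmetric matrices can be seen numerically with NumPy's `eigh`, which returns real eigenvalues and orthonormal eigenvectors even when an eigenvalue repeats. The sketch below (ours, using the coefficient matrix of Example 3 below) illustrates the point.

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])      # real and symmetric
r, Xi = np.linalg.eigh(A)            # eigh is specialized for symmetric/Hermitian A

print(np.allclose(r, [-1.0, -1.0, 2.0]))    # True: r = -1 has multiplicity 2
# The columns of Xi are orthonormal, hence linearly independent, so the
# solutions xi * exp(r t) still form a fundamental set despite the repeat.
print(np.allclose(Xi.T @ Xi, np.eye(3)))    # True
```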
EXAMPLE 3

Find the general solution of

x′ = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} x.   (26)
Observe that the coefficient matrix is real and symmetric. The eigenvalues and eigenvectors of this matrix were found in Example 5 of Section 7.3, namely,

r₁ = 2,   ξ^{(1)} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix};   (27)

r₂ = −1, r₃ = −1;   ξ^{(2)} = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},   ξ^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.   (28)
Hence a fundamental set of solutions of Eq. (26) is

x^{(1)}(t) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2t},   x^{(2)}(t) = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} e^{-t},   x^{(3)}(t) = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} e^{-t},   (29)
and the general solution is

x = c₁ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2t} + c₂ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} e^{-t} + c₃ \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} e^{-t}.   (30)
This example illustrates the fact that even though an eigenvalue (r = −1) has multiplicity 2, it may still be possible to find two linearly independent eigenvectors ξ^{(2)} and ξ^{(3)} and, as a consequence, to construct the general solution (30).
The behavior of the solution (30) depends critically on the initial conditions. For large t the first term on the right side of Eq. (30) is the dominant one; therefore, if c₁ ≠ 0, all components of x become unbounded as t → ∞. On the other hand, for certain initial points c₁ will be zero. In this case, the solution involves only the negative exponential terms and x → 0 as t → ∞. The initial points that cause c₁ to be zero are precisely those that lie in the plane determined by the eigenvectors ξ^{(2)} and ξ^{(3)} corresponding to the two negative eigenvalues. Thus, solutions that start in this plane approach the origin as t → ∞, while all other solutions become unbounded.
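The split between decaying and unbounded solutions can be computed directly: the constants in Eq. (30) solve the linear system Ξc = x(0), where the columns of Ξ are the eigenvectors (a NumPy sketch; the helper name is ours).

```python
import numpy as np

Xi = np.array([[1.0,  1.0,  0.0],
               [1.0,  0.0,  1.0],
               [1.0, -1.0, -1.0]])   # columns: xi(1), xi(2), xi(3) of Example 3

def constants(x0):
    """Solve Xi c = x(0) for the constants c1, c2, c3 in Eq. (30)."""
    return np.linalg.solve(Xi, np.asarray(x0, dtype=float))

# An initial point in the plane spanned by xi(2) and xi(3), here
# 2*xi(2) + 1*xi(3) = (2, 1, -3): c1 = 0, so the solution decays to 0.
print(np.allclose(constants([2.0, 1.0, -3.0]), [0.0, 2.0, 1.0]))   # True

# A generic initial point has c1 != 0, and the e^{2t} term makes x unbounded.
print(abs(constants([1.0, 0.0, 0.0])[0]) > 1e-12)                  # True
```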
If some of the eigenvalues occur in complex conjugate pairs, then there are still n linearly independent solutions of the form (23), provided that all the eigenvalues are different. Of course, the solutions arising from complex eigenvalues are complex-valued. However, as in Section 3.4, it is possible to obtain a full set of real-valued solutions. This is discussed in Section 7.6.
More serious difficulties can occur if an eigenvalue is repeated. In this event the number of corresponding linearly independent eigenvectors may be smaller than the multiplicity of the eigenvalue. If so, the number of linearly independent solutions of the form ξe^{rt} will be smaller than n. To construct a fundamental set of solutions it is then necessary to seek additional solutions of another form. The situation is somewhat analogous to that for an nth order linear equation with constant coefficients; a repeated root of the characteristic equation gave rise to solutions of the form e^{rt}, te^{rt}, t²e^{rt}, . . . .
The case of repeated eigenvalues is treated in Section 7.8.
Finally, if A is complex, then complex eigenvalues need not occur in conjugate pairs, and the eigenvectors are normally complex-valued even though the associated eigenvalue may be real. The solutions of the differential equation (1) are still of the form (23), provided that the eigenvalues are distinct, but in general all the solutions are complex-valued.
PROBLEMS
In each of Problems 1 through 6 find the general solution of the given system of equations and describe the behavior of the solution as t → ∞. Also draw a direction field and plot a few trajectories of the system.
1. x′ = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix} x
2. x′ = \begin{pmatrix} 1 & -2 \\ 3 & -4 \end{pmatrix} x
3. x′ = \begin{pmatrix} 2 & -1 \\ 3 & -2 \end{pmatrix} x
4. x′ = \begin{pmatrix} 1 & 1 \\ 4 & -2 \end{pmatrix} x
5. x′ = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x
6. x′ = \begin{pmatrix} 5/4 & 3/4 \\ 3/4 & 5/4 \end{pmatrix} x

In each of Problems 7 and 8 find the general solution of the given system of equations. Also draw a direction field and a few of the trajectories.

7. x′ = \begin{pmatrix} 4 & -3 \\ 8 & -6 \end{pmatrix} x
8. x′ = \begin{pmatrix} 3 & 6 \\ -1 & -2 \end{pmatrix} x
In each of Problems 9 through 14 find the general solution of the given system of equations.

9. x′ = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix} x
10. x′ = \begin{pmatrix} 2 & 2+i \\ -1 & -1-i \end{pmatrix} x
11. x′ = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 1 \end{pmatrix} x
12. x′ = \begin{pmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{pmatrix} x
13. x′ = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} x
14. x′ = \begin{pmatrix} 1 & -1 & 4 \\ 3 & 2 & -1 \\ 2 & 1 & -1 \end{pmatrix} x
In each of Problems 15 through 18 solve the given initial value problem. Describe the behavior of the solution as t → ∞.

15. x′ = \begin{pmatrix} 5 & -1 \\ 3 & 1 \end{pmatrix} x,   x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}
16. x′ = \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} x,   x(0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}
17. x′ = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} x,   x(0) = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}
18. x′ = \begin{pmatrix} 0 & 0 & -1 \\ 2 & 0 & 0 \\ -1 & 2 & 4 \end{pmatrix} x,   x(0) = \begin{pmatrix} 7 \\ 5 \\ 5 \end{pmatrix}
19. The system tx′ = Ax is analogous to the second order Euler equation (Section 5.5). Assuming that x = ξt^r, where ξ is a constant vector, show that ξ and r must satisfy (A − rI)ξ = 0 in order to obtain nontrivial solutions of the given differential equation.
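Problem 19's claim is easy to sanity-check numerically: for an eigenpair (r, ξ) of A, the function x(t) = ξt^r satisfies tx′ = Ax. Below is a sketch of our own, using the eigenpair r = 3, ξ = (1, 2) of the matrix from Example 1.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])
r, xi = 3.0, np.array([1.0, 2.0])       # eigenpair of A: A @ xi == r * xi

def x(t):
    """Trial solution x(t) = xi * t^r of the Euler system t x' = A x."""
    return xi * t ** r

t, h = 2.0, 1e-6
xprime = (x(t + h) - x(t - h)) / (2 * h)   # central-difference derivative
print(np.allclose(t * xprime, A @ x(t), atol=1e-3))   # True
```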
Referring to Problem 19, solve the given system of equations in each of Problems 20 through 23. Assume that t > 0.
20. tx′ = \begin{pmatrix} 2 & -1 \\ 3 & -2 \end{pmatrix} x
21. tx′ = \begin{pmatrix} 5 & -1 \\ 3 & 1 \end{pmatrix} x
22. tx′ = \begin{pmatrix} 4 & -3 \\ 8 & -6 \end{pmatrix} x
23. tx′ = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix} x
In each of Problems 24 through 27 the eigenvalues and eigenvectors of a matrix A are given.
Consider the corresponding system x = Ax.
(a) Sketch a phase portrait of the system.
(b) Sketch the trajectory passing through the initial point (2, 3).
(c) For the trajectory in part (b) sketch the graphs of x₁ versus t and of x₂ versus t on the same set of axes.
24. r₁ = −1, ξ^{(1)} = \begin{pmatrix} -1 \\ 2 \end{pmatrix};   r₂ = −2, ξ^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
25. r₁ = 1, ξ^{(1)} = \begin{pmatrix} -1 \\ 2 \end{pmatrix};   r₂ = −2, ξ^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
26. r₁ = −1, ξ^{(1)} = \begin{pmatrix} -1 \\ 2 \end{pmatrix};   r₂ = 2, ξ^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
27. r₁ = 1, ξ^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix};   r₂ = 2, ξ^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}
28. Consider a 2 × 2 system x′ = Ax. If we assume that r₁ ≠ r₂, the general solution is x = c₁ξ^{(1)}e^{r_1 t} + c₂ξ^{(2)}e^{r_2 t}, provided that ξ^{(1)} and ξ^{(2)} are linearly independent. In this problem we establish the linear independence of ξ^{(1)} and ξ^{(2)} by assuming that they are linearly dependent, and then showing that this leads to a contradiction.
(a) Note that ξ^{(1)} satisfies the matrix equation (A − r₁I)ξ^{(1)} = 0; similarly, note that (A − r₂I)ξ^{(2)} = 0.
(b) Show that (A − r₂I)ξ^{(1)} = (r₁ − r₂)ξ^{(1)}.
(c) Suppose that ξ^{(1)} and ξ^{(2)} are linearly dependent. Then c₁ξ^{(1)} + c₂ξ^{(2)} = 0 and at least one of c₁ and c₂ is not zero; suppose that c₁ ≠ 0. Show that (A − r₂I)(c₁ξ^{(1)} + c₂ξ^{(2)}) = 0, and also show that (A − r₂I)(c₁ξ^{(1)} + c₂ξ^{(2)}) = c₁(r₁ − r₂)ξ^{(1)}. Hence c₁ = 0, which is a contradiction. Therefore ξ^{(1)} and ξ^{(2)} are linearly independent.
(d) Modify the argument of part (c) in case c₁ is zero but c₂ is not.
(e) Carry out a similar argument for the case in which the order n is equal to 3; note that the procedure can be extended to cover an arbitrary value of n.
29. Consider the equation

ay″ + by′ + cy = 0,   (i)

where a, b, and c are constants. In Chapter 3 it was shown that the general solution depended on the roots of the characteristic equation

ar² + br + c = 0.   (ii)

(a) Transform Eq. (i) into a system of first order equations by letting x₁ = y, x₂ = y′. Find the system of equations x′ = Ax satisfied by x = (x₁, x₂)ᵀ.
(b) Find the equation that determines the eigenvalues of the coefficient matrix A in part (a). Note that this equation is just the characteristic equation (ii) of Eq. (i).
30. The two-tank system of Problem 21 in Section 7.1 leads to the initial value problem

x′ = \begin{pmatrix} -1/10 & 3/40 \\ 1/10 & -1/5 \end{pmatrix} x,   x(0) = \begin{pmatrix} -17 \\ -21 \end{pmatrix},

where x₁ and x₂ are the deviations of the salt levels Q₁ and Q₂ from their respective equilibria.
(a) Find the solution of the given initial value problem.
(b) Plot x₁ versus t and x₂ versus t on the same set of axes.
(c) Estimate the time T such that |x₁(t)| ≤ 0.5 and |x₂(t)| ≤ 0.5 for all t ≥ T.
31. Consider the system

x′ = \begin{pmatrix} -1 & -1 \\ -\alpha & -1 \end{pmatrix} x.

(a) Solve the system for α = 1/2. What are the eigenvalues of the coefficient matrix? Classify the equilibrium point at the origin as to type.
(b) Solve the system for α = 2. Answer the same questions as in part (a).
(c) In parts (a) and (b) solutions of the system exhibit two quite different types of behavior. Find the eigenvalues of the coefficient matrix in terms of α and determine the value of α between 1/2 and 2 where the transition from one type of behavior to the other occurs.
Electric Circuits. Problems 32 and 33 are concerned with the electric circuit described by the system of differential equations in Problem 20 of Section 7.1:

d/dt \begin{pmatrix} I \\ V \end{pmatrix} = \begin{pmatrix} -R_1/L & -1/L \\ 1/C & -1/(CR_2) \end{pmatrix} \begin{pmatrix} I \\ V \end{pmatrix}.   (i)

32. (a) Find the general solution of Eq. (i) if R₁ = 1 ohm, R₂ = 3/5 ohm, L = 2 henrys, and C = 2/3 farad.
(b) Show that I(t) → 0 and V(t) → 0 as t → ∞, regardless of the initial values I(0) and V(0).
33. Consider the preceding system of differential equations (i).
(a) Find a condition on R₁, R₂, C, and L that must be satisfied if the eigenvalues of the coefficient matrix are to be real and different.
(b) If the condition found in part (a) is satisfied, show that both eigenvalues are negative. Then show that I(t) → 0 and V(t) → 0 as t → ∞, regardless of the initial conditions.
(c) If the condition found in part (a) is not satisfied, then the eigenvalues are either complex