3.3 Linear Independence and the Wronskian
The representation of the general solution of a second order linear homogeneous differential equation as a linear combination of two solutions whose Wronskian is not zero is intimately related to the concept of linear independence of two functions. This is a very important algebraic idea and has significance far beyond the present context; we briefly discuss it in this section.
We will refer to the following basic property of systems of linear homogeneous algebraic equations. Consider the two-by-two system

a11 x1 + a12 x2 = 0,
a21 x1 + a22 x2 = 0,        (1)

and let Δ = a11 a22 − a12 a21 be the corresponding determinant of coefficients. Then x1 = 0, x2 = 0 is the only solution of the system (1) if and only if Δ ≠ 0. Further, the system (1) has nonzero solutions if and only if Δ = 0.
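This criterion is easy to illustrate numerically. The following is a minimal sketch in Python with NumPy, using arbitrary sample matrices that are not taken from the text:

```python
import numpy as np

# Hypothetical coefficient matrix with nonzero determinant:
# the only solution of A x = 0 should then be x = 0.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # a11*a22 - a12*a21
print(det)  # 5.0

# Solve A x = 0; since det != 0, the unique solution is the zero vector.
x = np.linalg.solve(A, np.zeros(2))
print(x)  # [0. 0.]

# A singular matrix (det == 0) admits nonzero solutions,
# e.g. x = (2, -1) satisfies B x = 0 for the matrix below.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0])  # 0.0
print(B @ np.array([2.0, -1.0]))              # [0. 0.]
```
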
Two functions f and g are said to be linearly dependent on an interval I if there exist two constants k1 and k2, not both zero, such that

k1 f(t) + k2 g(t) = 0        (2)

for all t in I. The functions f and g are said to be linearly independent on an interval I if they are not linearly dependent; that is, Eq. (2) holds for all t in I only if k1 = k2 = 0. In Section 4.1 these definitions are extended to an arbitrary number of functions. Although it may be difficult to determine whether a large set of functions is linearly independent or linearly dependent, it is usually easy to answer this question for a set of only two functions: they are linearly dependent if they are proportional to each other, and linearly independent otherwise. The following examples illustrate these definitions.
EXAMPLE 1
Determine whether the functions sin t and cos(t − π/2) are linearly independent or linearly dependent on an arbitrary interval.
Chapter 3. Second Order Linear Equations
The given functions are linearly dependent on any interval, since

k1 sin t + k2 cos(t − π/2) = 0

for all t if we choose k1 = 1 and k2 = −1; recall the identity cos(t − π/2) = sin t.
EXAMPLE 2
Show that the functions e^t and e^{2t} are linearly independent on any interval.

To establish this result we suppose that

k1 e^t + k2 e^{2t} = 0        (3)

for all t in the interval; we must then show that k1 = k2 = 0. Choose two points t0 and t1 in the interval, where t1 ≠ t0. Evaluating Eq. (3) at these points, we obtain

k1 e^{t0} + k2 e^{2t0} = 0,
k1 e^{t1} + k2 e^{2t1} = 0.        (4)

The determinant of coefficients is

e^{t0} e^{2t1} − e^{2t0} e^{t1} = e^{t0} e^{t1} (e^{t1} − e^{t0}).

Since this determinant is not zero for t1 ≠ t0, it follows that the only solution of Eqs. (4) is k1 = k2 = 0. Hence e^t and e^{2t} are linearly independent.
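The two-point argument of Example 2 can be carried out symbolically. Here is a sketch in Python with SymPy, choosing the sample points t0 = 0 and t1 = 1 (any two distinct points would serve):

```python
import sympy as sp

k1, k2 = sp.symbols('k1 k2')
# Evaluate k1*e^t + k2*e^{2t} = 0 at the sample points t0 = 0 and t1 = 1.
eqs = [k1 * sp.exp(0) + k2 * sp.exp(0),   # t = 0
       k1 * sp.exp(1) + k2 * sp.exp(2)]   # t = 1
sol = sp.linsolve(eqs, k1, k2)
print(sol)  # {(0, 0)}: only the trivial solution, so e^t and e^{2t} are independent

# The determinant of coefficients matches e^{t0} e^{t1} (e^{t1} - e^{t0}):
det = sp.exp(0) * sp.exp(2) - sp.exp(0) * sp.exp(1)
assert sp.simplify(det - sp.exp(0) * sp.exp(1) * (sp.exp(1) - sp.exp(0))) == 0
```
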
The following theorem relates linear independence and dependence to the Wronskian.
Theorem 3.3.1
If f and g are differentiable functions on an open interval I and if W(f, g)(t0) ≠ 0 for some point t0 in I, then f and g are linearly independent on I. Moreover, if f and g are linearly dependent on I, then W(f, g)(t) = 0 for every t in I.
To prove the first statement in Theorem 3.3.1, consider a linear combination k1 f(t) + k2 g(t), and suppose that this expression is zero throughout the interval. Evaluating the expression and its derivative at t0, we have

k1 f(t0) + k2 g(t0) = 0,
k1 f'(t0) + k2 g'(t0) = 0.        (5)

The determinant of coefficients of Eqs. (5) is precisely W(f, g)(t0), which is not zero by hypothesis. Therefore, the only solution of Eqs. (5) is k1 = k2 = 0, so f and g are linearly independent.
The second part of Theorem 3.3.1 follows immediately from the first. Let f and g be linearly dependent, and suppose that the conclusion is false; that is, W(f, g) is not everywhere zero in I. Then there is a point t0 such that W(f, g)(t0) ≠ 0; by the first part of Theorem 3.3.1 this implies that f and g are linearly independent, which is a contradiction, thus completing the proof.
We can apply this result to the two functions f(t) = e^t and g(t) = e^{2t} discussed in Example 2. For any point t0 we have

W(f, g)(t0) = | e^{t0}    e^{2t0}  |
              | e^{t0}   2e^{2t0}  | = e^{3t0} ≠ 0.        (6)

The functions e^t and e^{2t} are therefore linearly independent on any interval.
You should be careful not to read too much into Theorem 3.3.1. In particular, two functions f and g may be linearly independent even though W ( f, g)(t) = 0 for every t in the interval I . This is illustrated in Problem 28.
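A standard illustration of this caveat uses the pair f(t) = t² and g(t) = t|t|, which agree for t ≥ 0 but differ in sign for t < 0; they are linearly independent on any interval containing the origin, yet their Wronskian vanishes everywhere. (This pair is offered here only as a representative example; the functions in Problem 28 may differ.) A sketch in SymPy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
f = t**2
g = t * sp.Abs(t)   # equals t^2 for t >= 0 and -t^2 for t < 0

# Wronskian W(f, g) = f g' - f' g
W = f * sp.diff(g, t) - sp.diff(f, t) * g

# W vanishes at every sample point ...
for val in [-2, -1, sp.Rational(1, 2), 3]:
    assert W.subs(t, val).simplify() == 0

# ... yet f and g are not proportional: the ratio g/f changes sign.
print((g / f).subs(t, 2))   # 1
print((g / f).subs(t, -2))  # -1
```
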
Now let us examine further the properties of the Wronskian of two solutions of a second order linear homogeneous differential equation. The following theorem, perhaps surprisingly, gives a simple explicit formula for the Wronskian of any two solutions of any such equation, even if the solutions themselves are not known.
Theorem 3.3.2 (Abel's Theorem)4
If y1 and y2 are solutions of the differential equation

L[y] = y'' + p(t)y' + q(t)y = 0,        (7)

where p and q are continuous on an open interval I, then the Wronskian W(y1, y2)(t) is given by

W(y1, y2)(t) = c exp[ −∫ p(t) dt ],        (8)

where c is a certain constant that depends on y1 and y2, but not on t. Further, W(y1, y2)(t) is either zero for all t in I (if c = 0) or else is never zero in I (if c ≠ 0).
To prove Abel's theorem we start by noting that y1 and y2 satisfy

y1'' + p(t)y1' + q(t)y1 = 0,
y2'' + p(t)y2' + q(t)y2 = 0.        (9)

If we multiply the first equation by −y2, the second by y1, and add the resulting equations, we obtain

(y1 y2'' − y1'' y2) + p(t)(y1 y2' − y1' y2) = 0.        (10)

Next, we let W(t) = W(y1, y2)(t) = y1 y2' − y1' y2 and observe that

W' = y1 y2'' − y1'' y2.        (11)

Then we can write Eq. (10) in the form

W' + p(t)W = 0.        (12)
4 The result in Theorem 3.3.2 was derived by the Norwegian mathematician Niels Henrik Abel (1802–1829) in 1827 and is known as Abel’s formula. Abel also showed that there is no general formula for solving a quintic, or fifth degree, polynomial equation in terms of explicit algebraic operations on the coefficients, thereby resolving a question that had been open since the sixteenth century. His greatest contributions, however, were in analysis, particularly in the study of elliptic functions. Unfortunately, his work was not widely noticed until after his death.
The distinguished French mathematician Legendre called it a “monument more lasting than bronze.”
Equation (12) can be solved immediately since it is both a first order linear equation (Section 2.1) and a separable equation (Section 2.2). Thus

W(t) = c exp[ −∫ p(t) dt ],        (13)

where c is a constant. The value of c depends on which pair of solutions of Eq. (7) is involved. However, since the exponential function is never zero, W(t) is not zero unless c = 0, in which case W(t) is zero for all t, which completes the proof of Theorem 3.3.2.
Note that the Wronskians of any two fundamental sets of solutions of the same differential equation can differ only by a multiplicative constant, and that the Wronskian of any fundamental set of solutions can be determined, up to a multiplicative constant, without solving the differential equation.
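Abel's formula is easy to verify symbolically for a concrete equation. The sketch below uses the sample equation y'' + 3y' + 2y = 0 (chosen here for illustration, not taken from the text), whose characteristic roots −1 and −2 give the solutions e^{−t} and e^{−2t}:

```python
import sympy as sp

t = sp.symbols('t')
# Sample equation y'' + 3y' + 2y = 0, so p(t) = 3, with solutions:
y1 = sp.exp(-t)
y2 = sp.exp(-2 * t)

# Wronskian computed directly from the definition W = y1 y2' - y1' y2:
W = sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)
print(W)  # -exp(-3*t)

# Abel's formula (13): W(t) = c * exp(-∫ p dt) = c * e^{-3t}; here c = -1.
abel = -sp.exp(-sp.integrate(sp.Integer(3), t))
assert sp.simplify(W - abel) == 0
```
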
EXAMPLE 3
In Example 5 of Section 3.2 we verified that y1(t) = t^{1/2} and y2(t) = t^{−1} are solutions of the equation

2t² y'' + 3t y' − y = 0,    t > 0.        (14)

Verify that the Wronskian of y1 and y2 is given by Eq. (13).

From the example just cited we know that W(y1, y2)(t) = −(3/2)t^{−3/2}. To use Eq. (13) we must write the differential equation (14) in the standard form with the coefficient of y'' equal to 1. Thus we obtain

y'' + (3/(2t))y' − (1/(2t²))y = 0,

so p(t) = 3/(2t). Hence

W(y1, y2)(t) = c exp[ −∫ (3/(2t)) dt ] = c exp[ −(3/2) ln t ] = c t^{−3/2}.        (15)

Equation (15) gives the Wronskian of any pair of solutions of Eq. (14). For the particular solutions given in this example we must choose c = −3/2.
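The computation in Example 3 can be checked with SymPy; a minimal sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # the equation is posed for t > 0
y1 = sp.sqrt(t)
y2 = 1 / t

# Direct Wronskian for the solutions of 2t^2 y'' + 3t y' - y = 0:
W = sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)
print(W)  # should equal -(3/2) * t^(-3/2)

# Abel's formula with p(t) = 3/(2t) gives c * t^{-3/2}; matching forces c = -3/2.
abel = sp.Rational(-3, 2) * sp.exp(-sp.integrate(3 / (2 * t), t))
assert sp.simplify(W - abel) == 0
```
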
A stronger version of Theorem 3.3.1 can be established if the two functions involved are solutions of a second order linear homogeneous differential equation.
Theorem 3.3.3
Let y1 and y2 be solutions of Eq. (7),

L[y] = y'' + p(t)y' + q(t)y = 0,

where p and q are continuous on an open interval I. Then y1 and y2 are linearly dependent on I if and only if W(y1, y2)(t) is zero for all t in I. Alternatively, y1 and y2 are linearly independent on I if and only if W(y1, y2)(t) is never zero in I.
Of course, we know by Theorem 3.3.2 that W(y1, y2)(t) is either everywhere zero or nowhere zero in I. In proving Theorem 3.3.3, observe first that if y1 and y2 are linearly dependent, then W(y1, y2)(t) is zero for all t in I by Theorem 3.3.1. It remains to prove the converse; that is, if W(y1, y2)(t) is zero throughout I, then y1 and y2 are linearly dependent. Let t0 be any point in I; then necessarily W(y1, y2)(t0) = 0. Consequently, the system of equations

c1 y1(t0) + c2 y2(t0) = 0,
c1 y1'(t0) + c2 y2'(t0) = 0        (16)

for c1 and c2 has a nontrivial solution. Using these values of c1 and c2, let φ(t) = c1 y1(t) + c2 y2(t). Then φ is a solution of Eq. (7), and by Eqs. (16) φ also satisfies the initial conditions

φ(t0) = 0,    φ'(t0) = 0.        (17)

Therefore, by the uniqueness part of Theorem 3.2.1, or by Example 2 of Section 3.2, φ(t) = 0 for all t in I. Since φ(t) = c1 y1(t) + c2 y2(t) with c1 and c2 not both zero, this means that y1 and y2 are linearly dependent. The alternative statement of the theorem follows immediately.
We can now summarize the facts about fundamental sets of solutions, Wronskians, and linear independence in the following way. Let y1 and y2 be solutions of Eq. (7),

y'' + p(t)y' + q(t)y = 0,

where p and q are continuous on an open interval I. Then the following four statements are equivalent, in the sense that each one implies the other three:

1. The functions y1 and y2 are a fundamental set of solutions on I.
2. The functions y1 and y2 are linearly independent on I.
3. W(y1, y2)(t0) ≠ 0 for some t0 in I.
4. W(y1, y2)(t) ≠ 0 for all t in I.
It is interesting to note the similarity between second order linear homogeneous differential equations and two-dimensional vector algebra. Two vectors a and b are said to be linearly dependent if there are two scalars k1 and k2, not both zero, such that k1 a + k2 b = 0; otherwise, they are said to be linearly independent. Let i and j be unit vectors directed along the positive x and y axes, respectively. Since k1 i + k2 j = 0 only if k1 = k2 = 0, the vectors i and j are linearly independent. Further, we know that any vector a with components a1 and a2 can be written as a = a1 i + a2 j, that is, as a linear combination of the two linearly independent vectors i and j. It is not difficult to show that any vector in two dimensions can be expressed as a linear combination of any two linearly independent two-dimensional vectors (see Problem 14). Such a pair of linearly independent vectors is said to form a basis for the vector space of two-dimensional vectors.
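Expressing a vector in terms of two linearly independent vectors amounts to solving a 2×2 system of the form (1) with a nonzero right-hand side. A sketch with arbitrary sample vectors (not from the text):

```python
import numpy as np

# Express a = (5, 1) in terms of two sample linearly independent
# vectors u = (1, 1) and v = (1, -1): solve [u v] k = a for the coefficients.
u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])
a = np.array([5.0, 1.0])

M = np.column_stack([u, v])   # basis vectors as columns
k = np.linalg.solve(M, a)     # solvable because det(M) != 0
print(k)  # [3. 2.]  i.e. a = 3u + 2v

assert np.allclose(k[0] * u + k[1] * v, a)
```
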
The term vector space is also applied to other collections of mathematical objects that obey the same laws of addition and multiplication by scalars that geometric vectors do. For example, it can be shown that the set of functions that are twice differentiable on the open interval I forms a vector space. Similarly, the set V of functions satisfying Eq. (7) also forms a vector space.
Since every member of V can be expressed as a linear combination of two linearly independent members y1 and y2, we say that such a pair forms a basis for V. This leads to the conclusion that V is two-dimensional; therefore, it is analogous in many respects to the space of geometric vectors in a plane. Later we find that the set of solutions of an nth order linear homogeneous differential equation forms a vector space of dimension n, and that any set of n linearly independent solutions of the differential equation forms a basis for the space. This connection between differential equations and vectors constitutes a good reason for the study of abstract linear algebra.
PROBLEMS
In each of Problems 1 through 8 determine whether the given pair of functions is linearly independent or linearly dependent.