1.3 Classification of Differential Equations
The main purpose of this book is to discuss some of the properties of solutions of differential equations, and to describe some of the methods that have proved effective in finding solutions, or in some cases approximating them. To provide a framework for our presentation we describe here several useful ways of classifying differential equations.
Ordinary and Partial Differential Equations.
One of the more obvious classifications is based on whether the unknown function depends on a single independent variable or on several independent variables. In the first case, only ordinary derivatives appear in the differential equation, and it is said to be an ordinary differential equation. In the second case, the derivatives are partial derivatives, and the equation is called a partial differential equation.
All the differential equations discussed in the preceding two sections are ordinary differential equations. Another example of an ordinary differential equation is

L d²Q(t)/dt² + R dQ(t)/dt + (1/C)Q(t) = E(t),   (1)

for the charge Q(t) on a capacitor in a circuit with capacitance C, resistance R, and inductance L; this equation is derived in Section 3.8. Typical examples of partial differential equations are the heat conduction equation
α² ∂²u(x,t)/∂x² = ∂u(x,t)/∂t,   (2)
and the wave equation

a² ∂²u(x,t)/∂x² = ∂²u(x,t)/∂t².   (3)
Here, α2 and a2 are certain physical constants. The heat conduction equation describes the conduction of heat in a solid body and the wave equation arises in a variety of problems involving wave motion in solids or fluids. Note that in both Eqs. (2) and (3) the dependent variable u depends on the two independent variables x and t.
Systems of Differential Equations.
Another classification of differential equations depends on the number of unknown functions that are involved. If there is a single function to be determined, then one equation is sufficient. However, if there are two or more unknown functions, then a system of equations is required. For example, the Lotka–Volterra, or predator–prey, equations are important in ecological modeling.
They have the form

dx/dt = ax − αxy,
dy/dt = −cy + γxy,   (4)
where x(t) and y(t) are the respective populations of the prey and predator species.
The constants a, α, c, and γ are based on empirical observations and depend on the particular species being studied. Systems of equations are discussed in Chapters 7 and 9; in particular, the Lotka–Volterra equations are examined in Section 9.5. It is not unusual in some areas of application to encounter systems containing a large number of equations.
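Although systems like (4) are taken up in later chapters, a short numerical sketch shows how they can be explored on a computer. The coefficient values and initial populations below are hypothetical, chosen only for illustration; the integrator is a plain fixed-step fourth order Runge–Kutta scheme.

```python
# Numerical sketch of the Lotka-Volterra system (4):
#   dx/dt = a*x - alpha*x*y,   dy/dt = -c*y + gamma*x*y
# All coefficient values and initial populations are hypothetical.

def lotka_volterra(t, state, a=1.0, alpha=0.5, c=0.75, gamma=0.25):
    x, y = state
    return (a * x - alpha * x * y, -c * y + gamma * x * y)

def rk4_step(f, t, state, h):
    # One classical fourth order Runge-Kutta step of size h.
    k1 = f(t, state)
    k2 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k1)])
    k3 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k2)])
    k4 = f(t + h, [s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
            for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4)]

def solve(f, state, t0, t1, n):
    # March from t0 to t1 in n equal steps; return the final state.
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        state = rk4_step(f, t, state, h)
        t += h
    return state

x, y = solve(lotka_volterra, [4.0, 2.0], 0.0, 10.0, 2000)
```

With these illustrative values the computed populations remain positive and bounded, oscillating about an equilibrium rather than settling down, which is the qualitative behavior examined in Section 9.5.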
Order.
The order of a differential equation is the order of the highest derivative that appears in the equation. The equations in the preceding sections are all first order equations, while Eq. (1) is a second order equation. Equations (2) and (3) are second order partial differential equations. More generally, the equation

F[t, u(t), u'(t), . . . , u(n)(t)] = 0   (5)

is an ordinary differential equation of the nth order. Equation (5) expresses a relation between the independent variable t and the values of the function u and its first n derivatives u', u'', . . . , u(n). It is convenient and customary in differential equations to write y for u(t), with y', y'', . . . , y(n) standing for u'(t), u''(t), . . . , u(n)(t). Thus Eq. (5) is written as

F(t, y, y', . . . , y(n)) = 0.   (6)
For example,

y''' + 2e^t y'' + yy' = t⁴   (7)
is a third order differential equation for y = u(t). Occasionally, other letters will be used instead of t and y for the independent and dependent variables; the meaning should be clear from the context.
We assume that it is always possible to solve a given ordinary differential equation for the highest derivative, obtaining

y(n) = f(t, y, y', y'', . . . , y(n−1)).   (8)
We study only equations of the form (8). This is mainly to avoid the ambiguity that may arise because a single equation of the form (6) may correspond to several equations of the form (8). For example, the equation

(y')² + ty' + 4y = 0   (9)

leads to the two equations

y' = (−t + √(t² − 16y))/2   or   y' = (−t − √(t² − 16y))/2.   (10)
Linear and Nonlinear Equations.
A crucial classification of differential equations is whether they are linear or nonlinear. The ordinary differential equation F(t, y, y', . . . , y(n)) = 0 is said to be linear if F is a linear function of the variables y, y', . . . , y(n); a similar definition applies to partial differential equations. Thus the general linear ordinary differential equation of order n is

a₀(t)y(n) + a₁(t)y(n−1) + · · · + aₙ(t)y = g(t).   (11)
Most of the equations you have seen thus far in this book are linear; examples are the equations in Sections 1.1 and 1.2 describing the falling object and the field mouse population. Similarly, in this section, Eq. (1) is a linear ordinary differential equation and Eqs. (2) and (3) are linear partial differential equations. An equation that is not of the form (11) is a nonlinear equation. Equation (7) is nonlinear because of the term yy'. Similarly, each equation in the system (4) is nonlinear because of the terms that involve the product xy.
A simple physical problem that leads to a nonlinear differential equation is the oscillating pendulum. The angle θ that an oscillating pendulum of length L makes with the vertical direction (see Figure 1.3.1) satisfies the equation

d²θ/dt² + (g/L) sin θ = 0,   (12)

whose derivation is outlined in Problem 29. The presence of the term involving sin θ makes Eq. (12) nonlinear.
The mathematical theory and methods for solving linear equations are highly developed. In contrast, for nonlinear equations the theory is more complicated and methods of solution are less satisfactory. In view of this, it is fortunate that many significant problems lead to linear ordinary differential equations or can be approximated by linear equations. For example, for the pendulum, if the angle θ is small, then sin θ ≅ θ and Eq. (12) can be approximated by the linear equation

d²θ/dt² + (g/L)θ = 0.   (13)
This process of approximating a nonlinear equation by a linear one is called linearization, and it is an extremely valuable way to deal with nonlinear equations. Nevertheless, there are many physical phenomena that simply cannot be represented adequately by linear equations; to study these phenomena it is essential to deal with nonlinear equations.

FIGURE 1.3.1 An oscillating pendulum.
In an elementary text it is natural to emphasize the simpler and more straightforward parts of the subject. Therefore the greater part of this book is devoted to linear equations and various methods for solving them. However, Chapters 8 and 9, as well as parts of Chapter 2, are concerned with nonlinear equations. Whenever it is appropriate, we point out why nonlinear equations are, in general, more difficult, and why many of the techniques that are useful in solving linear equations cannot be applied to nonlinear equations.
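How good the linearization (13) is for small angles can be checked directly. The sketch below integrates the nonlinear equation (12) numerically and compares the result with the exact solution of the linear equation (13); the values of g, L, and the initial angle are assumptions made only for this example.

```python
import math

# Compare the nonlinear pendulum (12) with its linearization (13)
# for a small initial angle. g, L, and theta0 are illustrative values.
g, L = 9.8, 1.0

def nonlinear(theta, omega):
    # First order system equivalent to Eq. (12):
    # theta' = omega,  omega' = -(g/L) sin(theta)
    return omega, -(g / L) * math.sin(theta)

def integrate(theta0, t1, n=20000):
    # Fixed-step RK4, starting from rest at angle theta0.
    h = t1 / n
    th, om = theta0, 0.0
    for _ in range(n):
        k1 = nonlinear(th, om)
        k2 = nonlinear(th + h / 2 * k1[0], om + h / 2 * k1[1])
        k3 = nonlinear(th + h / 2 * k2[0], om + h / 2 * k2[1])
        k4 = nonlinear(th + h * k3[0], om + h * k3[1])
        th += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        om += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return th

theta0 = 0.05                 # about 3 degrees; "small" in the sense above
t1 = 2.0
# Exact solution of the linear equation (13) with theta(0)=theta0, theta'(0)=0:
exact_linear = theta0 * math.cos(math.sqrt(g / L) * t1)
nonlinear_value = integrate(theta0, t1)
```

For this small initial angle the two answers agree to better than one part in a thousand of the amplitude, which is why the linearization is so useful.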
Solutions.
A solution of the ordinary differential equation (8) on the interval α < t < β is a function φ such that φ', φ'', . . . , φ(n) exist and satisfy

φ(n)(t) = f[t, φ(t), φ'(t), . . . , φ(n−1)(t)]   (14)

for every t in α < t < β. Unless stated otherwise, we assume that the function f of Eq. (8) is a real-valued function, and we are interested in obtaining real-valued solutions y = φ(t).
Recall that in Section 1.2 we found solutions of certain equations by a process of direct integration. For instance, we found that the equation

dp/dt = 0.5p − 450   (15)

has the solution

p = 900 + ce^(t/2),   (16)
where c is an arbitrary constant. It is often not so easy to find solutions of differential equations. However, if you find a function that you think may be a solution of a given equation, it is usually relatively easy to determine whether the function is actually a solution simply by substituting the function into the equation. For example, in this way it is easy to show that the function y1(t) = cos t is a solution of

y'' + y = 0   (17)

for all t. To confirm this, observe that y1'(t) = −sin t and y1''(t) = −cos t; then it follows that y1''(t) + y1(t) = 0. In the same way you can easily show that y2(t) = sin t is also a solution of Eq. (17). Of course, this does not constitute a satisfactory way to solve most differential equations because there are far too many possible functions for you to have a good chance of finding the correct one by a random choice. Nevertheless, it is important to realize that you can verify whether any proposed solution is correct by substituting it into the differential equation. For a problem of any importance this can be a very useful check and is one that you should make a habit of considering.
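The substitution check can even be mechanized. The sketch below approximates y'' by a central difference and confirms numerically that y1(t) = cos t leaves essentially no residual in Eq. (17); the step size and sample points are arbitrary choices made for the example.

```python
import math

# Checking a proposed solution by substitution, done numerically.
# A central difference approximates y'', so the residual y'' + y of
# Eq. (17) should be near zero at every sample point if y is a solution.

def second_derivative(f, t, h=1e-4):
    # Standard central-difference approximation to f''(t).
    return (f(t - h) - 2.0 * f(t) + f(t + h)) / h**2

def residual(f, t):
    # Left-hand side of Eq. (17), y'' + y, evaluated at t.
    return second_derivative(f, t) + f(t)

# Sample the residual of y1(t) = cos t at many points.
max_residual = max(abs(residual(math.cos, 0.1 * k)) for k in range(1, 60))
```

The residual is limited only by the accuracy of the difference formula and of floating-point arithmetic; a function that is not a solution would produce residuals of order one instead.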
Some Important Questions.
Although for the equations (15) and (17) we are able to verify that certain simple functions are solutions, in general we do not have such solutions readily available. Thus a fundamental question is the following: Does an equation of the form (8) always have a solution? The answer is “No.” Merely writing down an equation of the form (8) does not necessarily mean that there is a function y = φ(t) that satisfies it. So, how can we tell whether some particular equation has a solution? This is the question of existence of a solution, and it is answered by theorems stating that under certain restrictions on the function f in Eq. (8), the equation always has solutions. However, this is not a purely mathematical concern, for at least two reasons. If a problem has no solution, we would prefer to know that fact before investing time and effort in a vain attempt to solve the problem. Further, if a sensible physical problem is modeled mathematically as a differential equation, then the equation should have a solution. If it does not, then presumably there is something wrong with the formulation. In this sense an engineer or scientist has some check on the validity of the mathematical model.
Second, if we assume that a given differential equation has at least one solution, the question arises as to how many solutions it has, and what additional conditions must be specified to single out a particular solution. This is the question of uniqueness. In general, solutions of differential equations contain one or more arbitrary constants of integration, as does the solution (16) of Eq. (15). Equation (16) represents an infinity of functions corresponding to the infinity of possible choices of the constant c. As we saw in Section 1.2, if p is specified at some time t, this condition will determine a value for c; even so, we have not yet ruled out the possibility that there may be other solutions of Eq. (15) that also have the prescribed value of p at the prescribed time t.
The issue of uniqueness also has practical implications. If we are fortunate enough to find a solution of a given problem, and if we know that the problem has a unique solution, then we can be sure that we have completely solved the problem. If there may be other solutions, then perhaps we should continue to search for them.
A third important question is: Given a differential equation of the form (8), can we actually determine a solution, and if so, how? Note that if we find a solution of the given equation, we have at the same time answered the question of the existence of a solution. However, without knowledge of existence theory we might, for example, use a computer to find a numerical approximation to a “solution” that does not exist.
On the other hand, even though we may know that a solution exists, it may be that the solution is not expressible in terms of the usual elementary functions—polynomial, trigonometric, exponential, logarithmic, and hyperbolic functions. Unfortunately, this is the situation for most differential equations. Thus, while we discuss elementary methods that can be used to obtain solutions of certain relatively simple problems, it is also important to consider methods of a more general nature that can be applied to more difficult problems.
Computer Use in Differential Equations.
A computer can be an extremely valuable tool in the study of differential equations. For many years computers have been used to execute numerical algorithms, such as those described in Chapter 8, to construct numerical approximations to solutions of differential equations. At the present time these algorithms have been refined to an extremely high level of generality and efficiency. A few lines of computer code, written in a high-level programming language and executed (often within a few seconds) on a relatively inexpensive computer, suffice to solve numerically a wide range of differential equations. More sophisticated routines are also readily available. These routines combine the ability to handle very large and complicated systems with numerous diagnostic features that alert the user to possible problems as they are encountered.
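As a minimal illustration of such an algorithm, far simpler than the production routines just described, here is Euler's method (Sections 2.7 and 8.1) applied to Eq. (15). The initial value p(0) = 850 is hypothetical, chosen so that the result can be compared with the exact solution (16).

```python
import math

# The simplest numerical algorithm, Euler's method, applied to
# Eq. (15): dp/dt = 0.5*p - 450, whose exact solution is Eq. (16).

def euler(f, t0, y0, t1, n):
    # Advance y from t0 to t1 in n equal tangent-line steps.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, p: 0.5 * p - 450.0
p0 = 850.0                        # hypothetical initial population
approx = euler(f, 0.0, p0, 2.0, 10000)
# Eq. (16) with c = p0 - 900 determined by the initial condition:
exact = 900.0 + (p0 - 900.0) * math.exp(0.5 * 2.0)
```

With 10,000 steps the Euler approximation agrees with the exact solution to within a small fraction of a unit; refined integrators achieve far better accuracy with far fewer steps.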
The usual output from a numerical algorithm is a table of numbers, listing selected values of the independent variable and the corresponding values of the dependent variable. With appropriate software it is easy to display the solution of a differential equation graphically, whether the solution has been obtained numerically or as the result of an analytical procedure of some kind. Such a graphical display is often much more illuminating and helpful in understanding and interpreting the solution of a differential equation than a table of numbers or a complicated analytical formula. There are on the market several well-crafted and relatively inexpensive special-purpose software packages for the graphical investigation of differential equations. The widespread availability of personal computers has brought powerful computational and graphical capability within the reach of individual students. You should consider, in the light of your own circumstances, how best to take advantage of the available computing resources. You will surely find it enlightening to do so.
Another aspect of computer use that is very relevant to the study of differential equations is the availability of extremely powerful and general software packages that can perform a wide variety of mathematical operations. Among these are Maple, Mathematica, and MATLAB, each of which can be used on various kinds of personal computers or workstations. All three of these packages can execute extensive numerical computations and have versatile graphical facilities. In addition, Maple and Mathematica also have very extensive analytical capabilities. For example, they can perform the analytical steps involved in solving many differential equations, often in response to a single command. Anyone who expects to deal with differential equations in more than a superficial way should become familiar with at least one of these products and explore the ways in which it can be used.
For you, the student, these computing resources have an effect on how you should study differential equations. To become confident in using differential equations, it is essential to understand how the solution methods work, and this understanding is achieved, in part, by working out a sufficient number of examples in detail. However, eventually you should plan to delegate as many as possible of the routine (often repetitive) details to a computer, while you focus more attention on the proper formulation of the problem and on the interpretation of the solution. Our viewpoint is that you should always try to use the best methods and tools available for each task. In particular, you should strive to combine numerical, graphical, and analytical methods so as to attain maximum understanding of the behavior of the solution and of the underlying process that the problem models. You should also remember that some tasks can best be done with pencil and paper, while others require a calculator or computer. Good judgment is often needed in selecting a judicious combination.
PROBLEMS
In each of Problems 1 through 6 determine the order of the given differential equation; also state whether the equation is linear or nonlinear.
In each of Problems 7 through 14 verify that the given function or functions is a solution of the differential equation.
7. y'' − y = 0; y1(t) = e^t, y2(t) = cosh t
9. ty' − y = t²; y = 3t + t²
10. y'''' + 4y''' + 3y = t; y1(t) = t/3, y2(t) = e^(−t) + t/3
11. 2t²y'' + 3ty' − y = 0, t > 0; y1(t) = t^(1/2), y2(t) = t^(−1)
12. t²y'' + 5ty' + 4y = 0, t > 0; y1(t) = t^(−2), y2(t) = t^(−2) ln t
13. y'' + y = sec t, 0 < t < π/2; y = (cos t) ln cos t + t sin t
In each of Problems 15 through 18 determine the values of r for which the given differential equation has solutions of the form y = ert .
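The method is the same in each such problem: substituting y = e^rt into a linear equation with constant coefficients reduces it to a polynomial equation for r. As a worked sketch (the equation below is a hypothetical example, not one of Problems 15 through 18):

```latex
% Substituting y = e^{rt}, so y' = r e^{rt} and y'' = r^2 e^{rt}, into the
% hypothetical equation y'' + y' - 6y = 0, and using e^{rt} \neq 0:
(r^2 + r - 6)\,e^{rt} = 0
\quad\Longrightarrow\quad
(r - 2)(r + 3) = 0, \qquad r = 2 \ \text{or} \ r = -3 .
```

Thus, for this example, solutions of the assumed form exist precisely for r = 2 and r = −3.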
In each of Problems 25 through 28 verify that the given function or functions is a solution of the given partial differential equation.
25. u_xx + u_yy = 0; u1(x, y) = cos x cosh y, u2(x, y) = ln(x² + y²)
27. a²u_xx = u_tt; u1(x, t) = sin λx sin λat, u2(x, t) = sin(x − at), λ a real constant
28. α²u_xx = u_t; u = (π/t)^(1/2) e^(−x²/4α²t), t > 0
29. Follow the steps indicated here to derive the equation of motion of a pendulum, Eq. (12)
in the text. Assume that the rod is rigid and weightless, that the mass is a point mass, and that there is no friction or drag anywhere in the system.
(a) Assume that the mass is in an arbitrary displaced position, indicated by the angle θ.
Draw a free-body diagram showing the forces acting on the mass.
(b) Apply Newton’s law of motion in the direction tangential to the circular arc on which the mass moves. Then the tensile force in the rod does not enter the equation. Observe that you need to find the component of the gravitational force in the tangential direction. Observe also that the linear acceleration, as opposed to the angular acceleration, is Ld2θ/dt2, where L is the length of the rod.
(c) Simplify the result from part (b) to obtain Eq. (12) of the text.
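The computation in parts (a) through (c) can be summarized as follows:

```latex
% Tangential component of Newton's second law for the pendulum bob:
% the arc length is s = L\theta, so the tangential acceleration is
% L\,d^2\theta/dt^2, and the tangential component of gravity is -mg\sin\theta.
m L \frac{d^2\theta}{dt^2} = -m g \sin\theta
\quad\Longrightarrow\quad
\frac{d^2\theta}{dt^2} + \frac{g}{L}\sin\theta = 0 .
```

Note that the mass m cancels, so the motion of an ideal pendulum does not depend on the mass of the bob.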
1.4 Historical Remarks
Without knowing something about differential equations and methods of solving them, it is difficult to appreciate the history of this important branch of mathematics. Further, the development of differential equations is intimately interwoven with the general development of mathematics and cannot be separated from it. Nevertheless, to provide some historical perspective, we indicate here some of the major trends in the history of the subject, and identify the most prominent early contributors. Other historical information is contained in footnotes scattered throughout the book and in the references listed at the end of the chapter.
The subject of differential equations originated in the study of calculus by Isaac Newton (1642–1727) and Gottfried Wilhelm Leibniz (1646–1716) in the seventeenth century. Newton grew up in the English countryside, was educated at Trinity College, Cambridge, and became Lucasian Professor of Mathematics there in 1669. His epochal discoveries of calculus and of the fundamental laws of mechanics date from 1665. They were circulated privately among his friends, but Newton was extremely sensitive to criticism, and did not begin to publish his results until 1687 with the appearance of his most famous book, Philosophiae Naturalis Principia Mathematica. While Newton did relatively little work in differential equations as such, his development of the calculus and elucidation of the basic principles of mechanics provided a basis for their applications in the eighteenth century, most notably by Euler. Newton classified first order differential equations according to the forms d y/dx = f (x), dy/dx = f (y), and d y/dx = f (x,y). For the latter equation he developed a method of solution using infinite series when f (x,y) is a polynomial in x and y. Newton’s active research in mathematics ended in the early 1690s except for the solution of occasional challenge problems and the revision and publication of results obtained much earlier. He was appointed Warden of the British Mint in 1696 and resigned his professorship a few years later. He was knighted in 1705 and, upon his death, was buried in Westminster Abbey.
Leibniz was born in Leipzig and completed his doctorate in philosophy at the age of 20 at the University of Altdorf. Throughout his life he engaged in scholarly work in several different fields. He was mainly self-taught in mathematics, since his interest in this subject developed when he was in his twenties. Leibniz arrived at the fundamental results of calculus independently, although a little later than Newton, but was the first to publish them, in 1684. Leibniz was very conscious of the power of good mathematical notation, and our notation for the derivative, d y/dx, and the integral sign are due to him. He discovered the method of separation of variables (Section 2.2) in 1691, the reduction of homogeneous equations to separable ones in 1691, and the procedure for solving first order linear equations (Section 2.1) in 1694. He spent his life as ambassador and adviser to several German royal families, which permitted him to travel widely and to carry on an extensive correspondence with other mathematicians, especially the Bernoulli brothers. In the course of this correspondence many problems in differential equations were solved during the latter part of the seventeenth century.
The brothers Jakob (1654–1705) and Johann (1667–1748) Bernoulli of Basel did much to develop methods of solving differential equations and to extend the range of their applications. Jakob became professor of mathematics at Basel in 1687, and Johann was appointed to the same position upon his brother's death in 1705. Both men were quarrelsome, jealous, and frequently embroiled in disputes, especially with each other. Nevertheless, both also made significant contributions to several areas of mathematics. With the aid of calculus they solved a number of problems in mechanics by formulating them as differential equations. For example, Jakob Bernoulli solved the differential equation y' = [a³/(b²y − a³)]^(1/2) in 1690 and in the same paper first used the term "integral" in the modern sense. In 1694 Johann Bernoulli was able to solve the equation dy/dx = y/ax. One problem to which both brothers contributed, and which led to much friction between them, was the brachistochrone problem (see Problem 33 of Section 2.3). The brachistochrone problem was also solved by Leibniz and Newton in addition to the Bernoulli brothers. It is said, perhaps apocryphally, that Newton learned of the problem late in the afternoon of a tiring day at the Mint, and solved it that evening after dinner. He published the solution anonymously, but on seeing it, Johann Bernoulli exclaimed, "Ah, I know the lion by his paw."
Daniel Bernoulli (1700–1782), son of Johann, migrated to St. Petersburg as a young man to join the newly established St. Petersburg Academy, but returned to Basel in 1733 as professor of botany, and later, of physics. His interests were primarily in partial differential equations and their applications. For instance, it is his name that is associated with the Bernoulli equation in fluid mechanics. He was also the first to encounter the functions that a century later became known as Bessel functions (Section 5.8).
The greatest mathematician of the eighteenth century, Leonhard Euler (1707–1783), grew up near Basel and was a student of Johann Bernoulli. He followed his friend Daniel Bernoulli to St. Petersburg in 1727. For the remainder of his life he was associated with the St. Petersburg Academy (1727–1741 and 1766–1783) and the Berlin Academy (1741–1766). Euler was the most prolific mathematician of all time; his collected works fill more than 70 large volumes. His interests ranged over all areas of mathematics and many fields of application. Even though he was blind during the last 17 years of his life, his work continued undiminished until the very day of his death. Of particular interest here is his formulation of problems in mechanics in mathematical language and his development of methods of solving these mathematical problems. Lagrange said of Euler’s work in mechanics, “The first great work in which analysis is applied to the science of movement.” Among other things, Euler identified the condition for exactness of first order differential equations (Section 2.6) in 1734–35, developed the theory of integrating factors (Section 2.6) in the same paper, and gave the general solution of homogeneous linear equations with constant coefficients (Sections 3.1, 3.5, and 4.2) in 1743. He extended the latter results to nonhomogeneous equations in
1750–51. Beginning about 1750, Euler made frequent use of power series in solving differential equations. He also proposed a numerical procedure (Sections 2.7 and 8.1) in 1768–69, made important contributions in partial differential equations, and gave the first systematic treatment of the calculus of variations.
Joseph-Louis Lagrange (1736–1813) became professor of mathematics in his native Turin at the age of 19. He succeeded Euler in the chair of mathematics at the Berlin Academy in 1766, and moved on to the Paris Academy in 1787. He is most famous for his monumental work Me´canique analytique, published in 1788, an elegant and comprehensive treatise of Newtonian mechanics. With respect to elementary differential equations, Lagrange showed in 1762–65 that the general solution of an nth order linear homogeneous differential equation is a linear combination of n independent solutions (Sections 3.2, 3.3, and 4.1). Later, in 1774–75, he gave a complete development of the method of variation of parameters (Sections 3.7 and 4.4). Lagrange is also known for fundamental work in partial differential equations and the calculus of variations.
Pierre-Simon de Laplace (1749–1827) lived in Normandy as a boy but came to Paris in 1768 and quickly made his mark in scientific circles, winning election to the Acade´mie des Sciences in 1773. He was preeminent in the field of celestial mechanics; his greatest work, Traite´ de me´canique ce´leste, was published in five volumes between 1799 and 1825. Laplace’s equation is fundamental in many branches of mathematical physics, and Laplace studied it extensively in connection with gravitational attraction.
The Laplace transform (Chapter 6) is also named for him, although its usefulness in solving differential equations was not recognized until much later.
By the end of the eighteenth century many elementary methods of solving ordinary differential equations had been discovered. In the nineteenth century interest turned more toward the investigation of theoretical questions of existence and uniqueness and to the development of less elementary methods such as those based on power series expansions (see Chapter 5). These methods find their natural setting in the complex plane. Consequently, they benefitted from, and to some extent stimulated, the more or less simultaneous development of the theory of complex analytic functions. Partial differential equations also began to be studied intensively, as their crucial role in mathematical physics became clear. In this connection a number of functions, arising as solutions of certain ordinary differential equations, occurred repeatedly and were studied exhaustively. Known collectively as higher transcendental functions, many of them are associated with the names of mathematicians, including Bessel, Legendre, Hermite, Chebyshev, and Hankel, among others.
The numerous differential equations that resisted solution by analytical means led to the investigation of methods of numerical approximation (see Chapter 8). By 1900 fairly effective numerical integration methods had been devised, but their implementation was severely restricted by the need to execute the computations by hand or with very primitive computing equipment. In the last 50 years the development of increasingly powerful and versatile computers has vastly enlarged the range of problems that can be investigated effectively by numerical methods. During the same period extremely refined and robust numerical integrators have been developed and are readily available.
Versions appropriate for personal computers have brought the ability to solve a great many significant problems within the reach of individual students.
Another characteristic of differential equations in the twentieth century has been the creation of geometrical or topological methods, especially for nonlinear equations. The goal is to understand at least the qualitative behavior of solutions from a geometrical, as well as from an analytical, point of view. If more detailed information is needed, it can usually be obtained by using numerical approximations. An introduction to these geometrical methods appears in Chapter 9.
Within the past few years these two trends have come together. Computers, and especially computer graphics, have given a new impetus to the study of systems of nonlinear differential equations. Unexpected phenomena (Section 9.8), referred to by terms such as strange attractors, chaos, and fractals, have been discovered, are being intensively studied, and are leading to important new insights in a variety of applications. Although it is an old subject about which much is known, differential equations at the dawn of the twenty-first century remains a fertile source of fascinating and important unsolved problems.
REFERENCES
Computer software for differential equations changes too fast for particulars to be given in a book such as this. A good source of information is the Software Review and Computer Corner sections of The College
Mathematics Journal, published by the Mathematical Association of America.
There are a number of books that deal with the use of computer algebra systems for differential equations.
For further reading in the history of mathematics see books such as those listed below:
Kline, M., Mathematical Thought from Ancient to Modern Times (New York: Oxford University Press, 1972).
A useful historical appendix on the early development of differential equations appears in:
Ince, E. L., Ordinary Differential Equations (London: Longmans, Green, 1927; New York: Dover, 1956).
An encyclopedic source of information about the lives and achievements of mathematicians of the past is:
Gillispie, C. C., ed., Dictionary of Scientific Biography (15 vols.) (New York: Scribner's, 1971).
Classification of Differential Equations
(c) You have invited several dozen friends to a pool party that is scheduled to begin in 4 hr.
(e) Find the flow rate that is sufficient to achieve the concentration 0.02 g/gal within 4 hr.
1.3 Classification of Differential Equations
The main purpose of this book is to discuss some of the properties of solutions of differential equations, and to describe some of the methods that have proved effective in finding solutions, or in some cases approximating them. To provide a framework for our presentation we describe here several useful ways of classifying differential equations.
Ordinary and Partial Differential Equations.
One of the more obvious classifications is based on whether the unknown function depends on a single independent variable or on several independent variables. In the first case, only ordinary derivatives appear in the differential equation, and it is said to be an ordinary differential equation. In the second case, the derivatives are partial derivatives, and the equation is called a partial differential equation.
All the differential equations discussed in the preceding two sections are ordinary differential equations. Another example of an ordinary differential equation is

L \frac{d^2 Q(t)}{dt^2} + R \frac{dQ(t)}{dt} + \frac{1}{C} Q(t) = E(t), \qquad (1)

for the charge Q(t) on a capacitor in a circuit with capacitance C, resistance R, and inductance L; this equation is derived in Section 3.8. Typical examples of partial differential equations are the heat conduction equation

\alpha^2 \frac{\partial^2 u(x,t)}{\partial x^2} = \frac{\partial u(x,t)}{\partial t}, \qquad (2)

and the wave equation

a^2 \frac{\partial^2 u(x,t)}{\partial x^2} = \frac{\partial^2 u(x,t)}{\partial t^2}. \qquad (3)
Here, α2 and a2 are certain physical constants. The heat conduction equation describes the conduction of heat in a solid body and the wave equation arises in a variety of problems involving wave motion in solids or fluids. Note that in both Eqs. (2) and (3) the dependent variable u depends on the two independent variables x and t.
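Solutions of Eqs. (2) and (3) can be checked symbolically. The sketch below, assuming SymPy is available, substitutes one illustrative solution of each type; the particular functions u are hypothetical choices for demonstration, not taken from the text.

```python
import sympy as sp

x, t, alpha, a = sp.symbols("x t alpha a", positive=True)

# A classical solution of the heat equation alpha^2 u_xx = u_t
# (an illustrative choice, not from the text):
u_heat = sp.exp(-alpha**2 * t) * sp.sin(x)
heat_residual = alpha**2 * sp.diff(u_heat, x, 2) - sp.diff(u_heat, t)
print(sp.simplify(heat_residual))  # 0

# A traveling-wave solution of the wave equation a^2 u_xx = u_tt:
u_wave = sp.sin(x - a * t)
wave_residual = a**2 * sp.diff(u_wave, x, 2) - sp.diff(u_wave, t, 2)
print(sp.simplify(wave_residual))  # 0
```

A residual that simplifies to zero confirms that the candidate function satisfies the equation identically in x and t.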
Systems of Differential Equations.
Another classification of differential equations depends on the number of unknown functions that are involved. If there is a single function to be determined, then one equation is sufficient. However, if there are two or more unknown functions, then a system of equations is required. For example, the Lotka–Volterra, or predator–prey, equations are important in ecological modeling.
They have the form

\frac{dx}{dt} = ax - \alpha xy, \qquad \frac{dy}{dt} = -cy + \gamma xy, \qquad (4)
where x(t) and y(t) are the respective populations of the prey and predator species.
The constants a, α, c, and γ are based on empirical observations and depend on the particular species being studied. Systems of equations are discussed in Chapters 7 and 9; in particular, the Lotka–Volterra equations are examined in Section 9.5. It is not unusual in some areas of application to encounter systems containing a large number of equations.
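A system such as (4) is easy to explore numerically. The following sketch, assuming SciPy is available, integrates the Lotka–Volterra equations; the coefficient values and initial populations are hypothetical choices for illustration only.

```python
from scipy.integrate import solve_ivp

# Hypothetical coefficient values and initial populations, for illustration only.
a, alpha, c, gamma = 1.0, 0.5, 0.75, 0.25

def lotka_volterra(t, z):
    x, y = z  # x: prey population, y: predator population
    return [a * x - alpha * x * y, -c * y + gamma * x * y]

sol = solve_ivp(lotka_volterra, (0.0, 20.0), [4.0, 2.0], max_step=0.05)
print(sol.y[:, -1])  # prey and predator populations at t = 20
```

With these values the two populations oscillate around the equilibrium (c/γ, a/α) = (3, 2), the cyclic behavior characteristic of predator–prey models.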
Order.
The order of a differential equation is the order of the highest derivative that appears in the equation. The equations in the preceding sections are all first order equations, while Eq. (1) is a second order equation. Equations (2) and (3) are second order partial differential equations. More generally, the equation

F[t, u(t), u'(t), \ldots, u^{(n)}(t)] = 0 \qquad (5)

is an ordinary differential equation of the nth order. Equation (5) expresses a relation between the independent variable t and the values of the function u and its first n derivatives u', u'', \ldots, u^{(n)}. It is convenient and customary in differential equations to write y for u(t), with y', y'', \ldots, y^{(n)} standing for u'(t), u''(t), \ldots, u^{(n)}(t). Thus Eq. (5) is written as

F(t, y, y', \ldots, y^{(n)}) = 0. \qquad (6)
For example,

y''' + 2e^t y'' + y y' = t^4 \qquad (7)

is a third order differential equation for y = u(t). Occasionally, other letters will be used instead of t and y for the independent and dependent variables; the meaning should be clear from the context.
We assume that it is always possible to solve a given ordinary differential equation for the highest derivative, obtaining

y^{(n)} = f(t, y, y', y'', \ldots, y^{(n-1)}). \qquad (8)
We study only equations of the form (8). This is mainly to avoid the ambiguity that may arise because a single equation of the form (6) may correspond to several equations of the form (8). For example, the equation

(y')^2 + t y' + 4y = 0 \qquad (9)

leads to the two equations

y' = \frac{-t + \sqrt{t^2 - 16y}}{2} \quad \text{or} \quad y' = \frac{-t - \sqrt{t^2 - 16y}}{2}. \qquad (10)
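The two branches in Eq. (10) can be recovered mechanically by treating Eq. (9) as a quadratic in y'. A minimal sketch, assuming SymPy:

```python
import sympy as sp

t, y, yp = sp.symbols("t y yp")  # yp stands for y'

# Treat Eq. (9), (y')^2 + t*y' + 4y = 0, as a quadratic in y' and solve:
roots = sp.solve(sp.Eq(yp**2 + t * yp + 4 * y, 0), yp)
for r in roots:
    print(r)  # the two branches of Eq. (10)
```

Each root, substituted back into the quadratic, yields zero, confirming that the two expressions in Eq. (10) are exactly the solutions of Eq. (9) for y'.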
Linear and Nonlinear Equations.
A crucial classification of differential equations is whether they are linear or nonlinear. The ordinary differential equation F(t, y, y', \ldots, y^{(n)}) = 0 is said to be linear if F is a linear function of the variables y, y', \ldots, y^{(n)}; a similar definition applies to partial differential equations. Thus the general linear ordinary differential equation of order n is

a_0(t) y^{(n)} + a_1(t) y^{(n-1)} + \cdots + a_n(t) y = g(t). \qquad (11)
Most of the equations you have seen thus far in this book are linear; examples are the equations in Sections 1.1 and 1.2 describing the falling object and the field mouse population. Similarly, in this section, Eq. (1) is a linear ordinary differential equation and Eqs. (2) and (3) are linear partial differential equations. An equation that is not of the form (11) is a nonlinear equation. Equation (7) is nonlinear because of the term y y'. Similarly, each equation in the system (4) is nonlinear because of the terms that involve the product xy.
A simple physical problem that leads to a nonlinear differential equation is the oscillating pendulum. The angle θ that an oscillating pendulum of length L makes with the vertical direction (see Figure 1.3.1) satisfies the equation

\frac{d^2\theta}{dt^2} + \frac{g}{L} \sin\theta = 0, \qquad (12)

whose derivation is outlined in Problem 29. The presence of the term involving sin θ makes Eq. (12) nonlinear.
The mathematical theory and methods for solving linear equations are highly developed. In contrast, for nonlinear equations the theory is more complicated and methods of solution are less satisfactory. In view of this, it is fortunate that many significant problems lead to linear ordinary differential equations or can be approximated by linear equations. For example, for the pendulum, if the angle θ is small, then sin θ ≈ θ and Eq. (12) can be approximated by the linear equation

\frac{d^2\theta}{dt^2} + \frac{g}{L} \theta = 0. \qquad (13)
This process of approximating a nonlinear equation by a linear one is called linearization and it is an extremely valuable way to deal with nonlinear equations. Nevertheless, there are many physical phenomena that simply cannot be represented adequately
FIGURE 1.3.1 An oscillating pendulum.
by linear equations; to study these phenomena it is essential to deal with nonlinear equations.
In an elementary text it is natural to emphasize the simpler and more straightforward parts of the subject. Therefore the greater part of this book is devoted to linear equations and various methods for solving them. However, Chapters 8 and 9, as well as parts of Chapter 2, are concerned with nonlinear equations. Whenever it is appropriate, we point out why nonlinear equations are, in general, more difficult, and why many of the techniques that are useful in solving linear equations cannot be applied to nonlinear equations.
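One way to see how well the linearization (13) tracks the nonlinear equation (12) is to integrate both numerically from the same small initial angle. The sketch below assumes SciPy is available; the values of g, L, and the initial data are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.8, 1.0  # illustrative values: gravitational acceleration, rod length

def pendulum(t, z):
    """Eq. (12) as a first order system: theta'' = -(g/L) sin(theta)."""
    theta, omega = z
    return [omega, -(g / L) * np.sin(theta)]

def linearized(t, z):
    """Eq. (13): theta'' = -(g/L) theta."""
    theta, omega = z
    return [omega, -(g / L) * theta]

z0 = [0.1, 0.0]                      # small initial angle (radians), at rest
ts = np.linspace(0.0, 5.0, 200)
nonlin = solve_ivp(pendulum, (0.0, 5.0), z0, t_eval=ts, rtol=1e-8)
lin = solve_ivp(linearized, (0.0, 5.0), z0, t_eval=ts, rtol=1e-8)

# For a 0.1 rad initial angle the two solutions stay close together.
print(np.max(np.abs(nonlin.y[0] - lin.y[0])))
```

Repeating the experiment with a large initial angle (say 2 radians) shows the two solutions drifting apart quickly, which is why the linearization is trusted only for small θ.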
Solutions.
A solution of the ordinary differential equation (8) on the interval α < t < β is a function φ such that φ', φ'', \ldots, φ^{(n)} exist and satisfy

\varphi^{(n)}(t) = f[t, \varphi(t), \varphi'(t), \ldots, \varphi^{(n-1)}(t)] \qquad (14)
for every t in α < t < β. Unless stated otherwise, we assume that the function f of Eq.
(8) is a real-valued function, and we are interested in obtaining real-valued solutions y = φ(t).
Recall that in Section 1.2 we found solutions of certain equations by a process of direct integration. For instance, we found that the equation

\frac{dp}{dt} = 0.5p - 450 \qquad (15)

has the solution

p = 900 + c e^{t/2}, \qquad (16)
where c is an arbitrary constant. It is often not so easy to find solutions of differential equations. However, if you find a function that you think may be a solution of a given equation, it is usually relatively easy to determine whether the function is actually a solution simply by substituting the function into the equation. For example, in this way it is easy to show that the function y_1(t) = \cos t is a solution of

y'' + y = 0 \qquad (17)

for all t. To confirm this, observe that y_1'(t) = -\sin t and y_1''(t) = -\cos t; then it follows that y_1''(t) + y_1(t) = 0. In the same way you can easily show that y_2(t) = \sin t is also a solution of Eq. (17). Of course, this does not constitute a satisfactory way to solve most differential equations because there are far too many possible functions for you to have a good chance of finding the correct one by a random choice. Nevertheless, it is important to realize that you can verify whether any proposed solution is correct by substituting it into the differential equation. For a problem of any importance this can be a very useful check and is one that you should make a habit of considering.
Some Important Questions.
Although for the equations (15) and (17) we are able to verify that certain simple functions are solutions, in general we do not have such solutions readily available. Thus a fundamental question is the following: Does an equation of the form (8) always have a solution? The answer is “No.” Merely writing down an equation of the form (8) does not necessarily mean that there is a function y = φ(t) that satisfies it. So, how can we tell whether some particular equation has a solution? This is the question of existence of a solution, and it is answered by theorems stating that under certain restrictions on the function f in Eq. (8), the equation always has solutions. However, this is not a purely mathematical concern, for at least two reasons. If a problem has no solution, we would prefer to know that fact before investing time and effort in a vain attempt to solve the problem. Further, if a sensible physical problem is modeled mathematically as a differential equation, then the equation should have a solution. If it does not, then presumably there is something wrong with the formulation. In this sense an engineer or scientist has some check on the validity of the mathematical model.
Second, if we assume that a given differential equation has at least one solution, the question arises as to how many solutions it has, and what additional conditions must be specified to single out a particular solution. This is the question of uniqueness. In general, solutions of differential equations contain one or more arbitrary constants of integration, as does the solution (16) of Eq. (15). Equation (16) represents an infinity of functions corresponding to the infinity of possible choices of the constant c. As we saw in Section 1.2, if p is specified at some time t, this condition will determine a value for c; even so, we have not yet ruled out the possibility that there may be other solutions of Eq. (15) that also have the prescribed value of p at the prescribed time t.
The issue of uniqueness also has practical implications. If we are fortunate enough to find a solution of a given problem, and if we know that the problem has a unique solution, then we can be sure that we have completely solved the problem. If there may be other solutions, then perhaps we should continue to search for them.
A third important question is: Given a differential equation of the form (8), can we actually determine a solution, and if so, how? Note that if we find a solution of the given equation, we have at the same time answered the question of the existence of a solution. However, without knowledge of existence theory we might, for example, use a computer to find a numerical approximation to a “solution” that does not exist.
On the other hand, even though we may know that a solution exists, it may be that the solution is not expressible in terms of the usual elementary functions—polynomial, trigonometric, exponential, logarithmic, and hyperbolic functions. Unfortunately, this is the situation for most differential equations. Thus, while we discuss elementary methods that can be used to obtain solutions of certain relatively simple problems, it is also important to consider methods of a more general nature that can be applied to more difficult problems.
Computer Use in Differential Equations.
A computer can be an extremely valuable tool in the study of differential equations. For many years computers have been used to execute numerical algorithms, such as those described in Chapter 8, to construct numerical approximations to solutions of differential equations. At the present time these algorithms have been refined to an extremely high level of generality and efficiency. A few lines of computer code, written in a high-level programming language and executed (often within a few seconds) on a relatively inexpensive computer, suffice to solve numerically a wide range of differential equations. More sophisticated routines are also readily available. These routines combine the ability to handle very large and complicated systems with numerous diagnostic features that alert the user to possible problems as they are encountered.
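As a toy illustration of such a numerical algorithm, the sketch below tabulates a few steps of Euler's method (the simplest scheme of the kind treated in Chapter 8) applied to Eq. (15). The initial condition and step size are arbitrary choices for demonstration, not values from the text.

```python
# Euler's method applied to Eq. (15), dp/dt = 0.5 p - 450, with the
# illustrative initial condition p(0) = 850 and step size h = 0.5.

def euler(f, t0, y0, h, n):
    """Tabulate n Euler steps of size h for y' = f(t, y)."""
    t, y, rows = t0, y0, [(t0, y0)]
    for _ in range(n):
        y += h * f(t, y)
        t += h
        rows.append((t, y))
    return rows

table = euler(lambda t, p: 0.5 * p - 450, 0.0, 850.0, 0.5, 4)
for t, p in table:
    print(f"{t:4.1f}  {p:10.4f}")
```

The output is exactly the kind of table described above: selected values of the independent variable t alongside the corresponding approximate values of p.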
The usual output from a numerical algorithm is a table of numbers, listing selected values of the independent variable and the corresponding values of the dependent variable. With appropriate software it is easy to display the solution of a differential equation graphically, whether the solution has been obtained numerically or as the result of an analytical procedure of some kind. Such a graphical display is often much more illuminating and helpful in understanding and interpreting the solution of a differential equation than a table of numbers or a complicated analytical formula. There are on the market several well-crafted and relatively inexpensive special-purpose software packages for the graphical investigation of differential equations. The widespread availability of personal computers has brought powerful computational and graphical capability within the reach of individual students. You should consider, in the light of your own circumstances, how best to take advantage of the available computing resources. You will surely find it enlightening to do so.
Another aspect of computer use that is very relevant to the study of differential equations is the availability of extremely powerful and general software packages that can perform a wide variety of mathematical operations. Among these are Maple, Mathematica, and MATLAB, each of which can be used on various kinds of personal computers or workstations. All three of these packages can execute extensive numerical computations and have versatile graphical facilities. In addition, Maple and Mathematica also have very extensive analytical capabilities. For example, they can perform the analytical steps involved in solving many differential equations, often in response to a single command. Anyone who expects to deal with differential equations in more than a superficial way should become familiar with at least one of these products and explore the ways in which it can be used.
For you, the student, these computing resources have an effect on how you should study differential equations. To become confident in using differential equations, it is essential to understand how the solution methods work, and this understanding is achieved, in part, by working out a sufficient number of examples in detail. However, eventually you should plan to delegate as many as possible of the routine (often repetitive) details to a computer, while you focus more attention on the proper formulation of the problem and on the interpretation of the solution. Our viewpoint is that you should always try to use the best methods and tools available for each task. In particular, you should strive to combine numerical, graphical, and analytical methods so as to attain maximum understanding of the behavior of the solution and of the underlying process that the problem models. You should also remember that some tasks can best be done with pencil and paper, while others require a calculator or computer. Good judgment is often needed in selecting a judicious combination.
PROBLEMS
In each of Problems 1 through 6 determine the order of the given differential equation; also state whether the equation is linear or nonlinear.
In each of Problems 7 through 14 verify that the given function or functions is a solution of the differential equation.
7. y'' − y = 0; y_1(t) = e^t, y_2(t) = \cosh t
8. y'' + 2y' − 3y = 0; y_1(t) = e^{−3t}, y_2(t) = e^t
9. t y' − y = t^2; y = 3t + t^2
10. y'''' + 4y''' + 3y = t; y_1(t) = t/3, y_2(t) = e^{−t} + t/3
11. 2t^2 y'' + 3t y' − y = 0, t > 0; y_1(t) = t^{1/2}, y_2(t) = t^{−1}
12. t^2 y'' + 5t y' + 4y = 0, t > 0; y_1(t) = t^{−2}, y_2(t) = t^{−2} \ln t
13. y'' + y = \sec t, 0 < t < π/2; y = (\cos t) \ln \cos t + t \sin t
14. y' − 2t y = 1; y = e^{t^2} \int_0^t e^{−s^2} ds + e^{t^2}
In each of Problems 15 through 18 determine the values of r for which the given differential equation has solutions of the form y = ert .
In each of Problems 25 through 28 verify that the given function or functions is a solution of the given partial differential equation.
25. u_{xx} + u_{yy} = 0; u_1(x, y) = \cos x \cosh y, u_2(x, y) = \ln(x^2 + y^2)
26. \alpha^2 u_{xx} = u_t; u_1(x, t) = e^{−\alpha^2 t} \sin x, u_2(x, t) = e^{−\alpha^2 \lambda^2 t} \sin \lambda x, \lambda a real constant
27. a^2 u_{xx} = u_{tt}; u_1(x, t) = \sin \lambda x \sin \lambda a t, u_2(x, t) = \sin(x − at), \lambda a real constant
28. \alpha^2 u_{xx} = u_t; u = (\pi/t)^{1/2} e^{−x^2/4\alpha^2 t}, t > 0
29. Follow the steps indicated here to derive the equation of motion of a pendulum, Eq. (12)
in the text. Assume that the rod is rigid and weightless, that the mass is a point mass, and that there is no friction or drag anywhere in the system.
(a) Assume that the mass is in an arbitrary displaced position, indicated by the angle θ.
Draw a free-body diagram showing the forces acting on the mass.
(b) Apply Newton’s law of motion in the direction tangential to the circular arc on which the mass moves. Then the tensile force in the rod does not enter the equation. Observe that you need to find the component of the gravitational force in the tangential direction. Observe also that the linear acceleration, as opposed to the angular acceleration, is L\,d^2\theta/dt^2, where L is the length of the rod.
(c) Simplify the result from part (b) to obtain Eq. (12) of the text.
1.4 Historical Remarks
Without knowing something about differential equations and methods of solving them, it is difficult to appreciate the history of this important branch of mathematics. Further, the development of differential equations is intimately interwoven with the general development of mathematics and cannot be separated from it. Nevertheless, to provide some historical perspective, we indicate here some of the major trends in the history of the subject, and identify the most prominent early contributors. Other historical information is contained in footnotes scattered throughout the book and in the references listed at the end of the chapter.
The subject of differential equations originated in the study of calculus by Isaac Newton (1642–1727) and Gottfried Wilhelm Leibniz (1646–1716) in the seventeenth century. Newton grew up in the English countryside, was educated at Trinity College, Cambridge, and became Lucasian Professor of Mathematics there in 1669. His epochal discoveries of calculus and of the fundamental laws of mechanics date from 1665. They were circulated privately among his friends, but Newton was extremely sensitive to criticism, and did not begin to publish his results until 1687 with the appearance of his most famous book, Philosophiae Naturalis Principia Mathematica. While Newton did relatively little work in differential equations as such, his development of the calculus and elucidation of the basic principles of mechanics provided a basis for their applications in the eighteenth century, most notably by Euler. Newton classified first order differential equations according to the forms d y/dx = f (x), dy/dx = f (y), and d y/dx = f (x,y). For the latter equation he developed a method of solution using infinite series when f (x,y) is a polynomial in x and y. Newton’s active research in mathematics ended in the early 1690s except for the solution of occasional challenge problems and the revision and publication of results obtained much earlier. He was appointed Warden of the British Mint in 1696 and resigned his professorship a few years later. He was knighted in 1705 and, upon his death, was buried in Westminster Abbey.
Leibniz was born in Leipzig and completed his doctorate in philosophy at the age of 20 at the University of Altdorf. Throughout his life he engaged in scholarly work in several different fields. He was mainly self-taught in mathematics, since his interest in this subject developed when he was in his twenties. Leibniz arrived at the fundamental results of calculus independently, although a little later than Newton, but was the first to publish them, in 1684. Leibniz was very conscious of the power of good mathematical notation, and our notation for the derivative, d y/dx, and the integral sign are due to him. He discovered the method of separation of variables (Section 2.2) in 1691, the reduction of homogeneous equations to separable ones in 1691, and the procedure for solving first order linear equations (Section 2.1) in 1694. He spent his life as ambassador and adviser to several German royal families, which permitted him to travel widely and to carry on an extensive correspondence with other mathematicians, especially the Bernoulli brothers. In the course of this correspondence many problems in differential equations were solved during the latter part of the seventeenth century.
The brothers Jakob (1654–1705) and Johann (1667–1748) Bernoulli of Basel did much to develop methods of solving differential equations and to extend the range of their applications. Jakob became professor of mathematics at Basel in 1687, and Johann was appointed to the same position upon his brother’s death in 1705. Both men were quarrelsome, jealous, and frequently embroiled in disputes, especially with each other. Nevertheless, both also made significant contributions to several areas of mathematics. With the aid of calculus they solved a number of problems in mechanics by formulating them as differential equations. For example, Jakob Bernoulli solved the differential equation y' = [a^3/(b^2 y − a^3)]^{1/2} in 1690 and in the same paper first used the term “integral” in the modern sense. In 1694 Johann Bernoulli was able to solve the equation dy/dx = y/(ax). One problem to which both brothers contributed, and which led to much friction between them, was the brachistochrone problem (see Problem 33 of Section 2.3). The brachistochrone problem was also solved by Leibniz and Newton in addition to the Bernoulli brothers. It is said, perhaps apocryphally, that Newton learned of the problem late in the afternoon of a tiring day at the Mint, and solved it that evening after dinner. He published the solution anonymously, but on seeing it, Johann Bernoulli exclaimed, “Ah, I know the lion by his paw.”
Daniel Bernoulli (1700–1782), son of Johann, migrated to St. Petersburg as a young man to join the newly established St. Petersburg Academy, but returned to Basel in 1733 as professor of botany, and later, of physics. His interests were primarily in partial differential equations and their applications. For instance, it is his name that is associated with the Bernoulli equation in fluid mechanics. He was also the first to encounter the functions that a century later became known as Bessel functions (Section 5.8).
The greatest mathematician of the eighteenth century, Leonhard Euler (1707–1783), grew up near Basel and was a student of Johann Bernoulli. He followed his friend Daniel Bernoulli to St. Petersburg in 1727. For the remainder of his life he was associated with the St. Petersburg Academy (1727–1741 and 1766–1783) and the Berlin Academy (1741–1766). Euler was the most prolific mathematician of all time; his collected works fill more than 70 large volumes. His interests ranged over all areas of mathematics and many fields of application. Even though he was blind during the last 17 years of his life, his work continued undiminished until the very day of his death. Of particular interest here is his formulation of problems in mechanics in mathematical language and his development of methods of solving these mathematical problems. Lagrange said of Euler’s work in mechanics, “The first great work in which analysis is applied to the science of movement.” Among other things, Euler identified the condition for exactness of first order differential equations (Section 2.6) in 1734–35, developed the theory of integrating factors (Section 2.6) in the same paper, and gave the general solution of homogeneous linear equations with constant coefficients (Sections 3.1, 3.5, and 4.2) in 1743. He extended the latter results to nonhomogeneous equations in
1750–51. Beginning about 1750, Euler made frequent use of power series in solving differential equations. He also proposed a numerical procedure (Sections 2.7 and 8.1) in 1768–69, made important contributions in partial differential equations, and gave the first systematic treatment of the calculus of variations.
Joseph-Louis Lagrange (1736–1813) became professor of mathematics in his native Turin at the age of 19. He succeeded Euler in the chair of mathematics at the Berlin Academy in 1766, and moved on to the Paris Academy in 1787. He is most famous for his monumental work Mécanique analytique, published in 1788, an elegant and comprehensive treatise of Newtonian mechanics. With respect to elementary differential equations, Lagrange showed in 1762–65 that the general solution of an nth order linear homogeneous differential equation is a linear combination of n independent solutions (Sections 3.2, 3.3, and 4.1). Later, in 1774–75, he gave a complete development of the method of variation of parameters (Sections 3.7 and 4.4). Lagrange is also known for fundamental work in partial differential equations and the calculus of variations.
Pierre-Simon de Laplace (1749–1827) lived in Normandy as a boy but came to Paris in 1768 and quickly made his mark in scientific circles, winning election to the Académie des Sciences in 1773. He was preeminent in the field of celestial mechanics; his greatest work, Traité de mécanique céleste, was published in five volumes between 1799 and 1825. Laplace’s equation is fundamental in many branches of mathematical physics, and Laplace studied it extensively in connection with gravitational attraction.
The Laplace transform (Chapter 6) is also named for him although its usefulness in solving differential equations was not recognized until much later.
By the end of the eighteenth century many elementary methods of solving ordinary differential equations had been discovered. In the nineteenth century interest turned more toward the investigation of theoretical questions of existence and uniqueness and to the development of less elementary methods such as those based on power series expansions (see Chapter 5). These methods find their natural setting in the complex plane. Consequently, they benefitted from, and to some extent stimulated, the more or less simultaneous development of the theory of complex analytic functions. Partial differential equations also began to be studied intensively, as their crucial role in mathematical physics became clear. In this connection a number of functions, arising as solutions of certain ordinary differential equations, occurred repeatedly and were studied exhaustively. Known collectively as higher transcendental functions, many of them are associated with the names of mathematicians, including Bessel, Legendre, Hermite, Chebyshev, and Hankel, among others.
The numerous differential equations that resisted solution by analytical means led to the investigation of methods of numerical approximation (see Chapter 8). By 1900 fairly effective numerical integration methods had been devised, but their implementation was severely restricted by the need to execute the computations by hand or with very primitive computing equipment. In the last 50 years the development of increasingly powerful and versatile computers has vastly enlarged the range of problems that can be investigated effectively by numerical methods. During the same period extremely refined and robust numerical integrators have been developed and are readily available.
Versions appropriate for personal computers have brought the ability to solve a great many significant problems within the reach of individual students.
Another characteristic of differential equations in the twentieth century has been the creation of geometrical or topological methods, especially for nonlinear equations. The goal is to understand at least the qualitative behavior of solutions from a geometrical, as well as from an analytical, point of view. If more detailed information is needed, it can usually be obtained by using numerical approximations. An introduction to these geometrical methods appears in Chapter 9.
Within the past few years these two trends have come together. Computers, and especially computer graphics, have given a new impetus to the study of systems of nonlinear differential equations. Unexpected phenomena (Section 9.8), referred to by terms such as strange attractors, chaos, and fractals, have been discovered, are being intensively studied, and are leading to important new insights in a variety of applications. Although it is an old subject about which much is known, differential equations at the dawn of the twenty-first century remains a fertile source of fascinating and important unsolved problems.
REFERENCES
Computer software for differential equations changes too fast for particulars to be given in a book such as this. A good source of information is the Software Review and Computer Corner sections of The College
Mathematics Journal, published by the Mathematical Association of America.
There are a number of books that deal with the use of computer algebra systems for differential equations.
The following are associated with this book, although they can be used independently as well:
w York: Wiley, 1997) and w York: Wiley 1999).
For further reading in the history of mathematics see books such as those listed below: Kline, M., Mathematical Thought from Ancient to Modern Times (New York: Oxford University Press, 1972).
A useful historical appendix on the early development of differential equations appears in:
Ince, E. L., Ordinary Differential Equations (London: Longmans, Green, 1927; New York: Dover, 1956).
An encyclopedic source of information about the lives and achievements of mathematicians of the past is: Gillespie, C. C., ed., Dictionary of Scientific Biography (15 vols.) (New York: Scribner’s, 1971).