Higher-Order Differential Equations

Martha L. Abell, James P. Braselton, in Differential Equations with Mathematica (Fourth Edition), 2016

4.1.3 Fundamental Set of Solutions

Obtaining a collection of n linearly independent solutions to the nth-order linear homogeneous differential equation (4.5) is of great importance in solving it.

A nontrivial solution is a solution that is not identically the zero function.

Definition 13 Fundamental Set of Solutions

A set S of n linearly independent nontrivial solutions of the nth-order linear homogeneous equation (4.5) is called a fundamental set of solutions of the equation.

Example 4.1.4

Show that $S = \{e^{-5x}, e^{-x}\}$ is a fundamental set of solutions of the equation y″ + 6y′ + 5y = 0.

Solution

Because

$$\frac{d^2}{dx^2}\left(e^{-5x}\right) + 6\frac{d}{dx}\left(e^{-5x}\right) + 5e^{-5x} = 25e^{-5x} - 30e^{-5x} + 5e^{-5x} = 0$$

and

$$\frac{d^2}{dx^2}\left(e^{-x}\right) + 6\frac{d}{dx}\left(e^{-x}\right) + 5e^{-x} = e^{-x} - 6e^{-x} + 5e^{-x} = 0$$

each function is a solution of the differential equation. It follows that S is linearly independent because

$$W(S) = \begin{vmatrix} e^{-5x} & e^{-x} \\ -5e^{-5x} & -e^{-x} \end{vmatrix} = -e^{-6x} + 5e^{-6x} = 4e^{-6x} \neq 0,$$

so we conclude that S is a fundamental set of solutions of the equation.

Of course, we can perform the same steps with Mathematica. First, we define caps to be the set of functions S.

Clear[x, y, caps]
caps = {Exp[-5x], Exp[-x]};

To verify that each function in S is a solution of y″ + 6y′ + 5y = 0, we define a function f. f[y] computes and returns y″ + 6y′ + 5y. We then use Map (/@) to apply f to each function in caps to see that each function in caps is a solution of y″ + 6y′ + 5y = 0, confirming the result we obtained previously.

Clear[f]
f[y_] := D[y, {x, 2}] + 6 D[y, x] + 5 y // Simplify;

f/@caps

{0, 0}

Next, we form the Wronskian matrix $\begin{pmatrix} e^{-5x} & e^{-x} \\ -5e^{-5x} & -e^{-x} \end{pmatrix}$, naming it wmat, and display wmat in traditional row-and-column form with MatrixForm.

(wmat = {caps, D[caps, x]}) // MatrixForm

$$\begin{pmatrix} e^{-5x} & e^{-x} \\ -5e^{-5x} & -e^{-x} \end{pmatrix}$$

Wronskian is then used to compute W(S).

Wronskian[caps, x]

$4e^{-6x}$

We use a fundamental set of solutions to create a general solution of an nth-order linear homogeneous differential equation.

Theorem 4 Principle of Superposition

If $S = \{f_1(x), f_2(x), \ldots, f_k(x)\}$ is a set of solutions of the nth-order linear homogeneous equation (4.5) and $c_1, c_2, \ldots, c_k$ is a set of k constants, then

$$f(x) = c_1 f_1(x) + c_2 f_2(x) + \cdots + c_k f_k(x)$$

is also a solution of equation (4.5).

$f(x) = c_1 f_1(x) + c_2 f_2(x) + \cdots + c_k f_k(x)$ is called a linear combination of the functions in the set $S = \{f_1(x), f_2(x), \ldots, f_k(x)\}$. A consequence of this fact is that a linear combination of the functions in a fundamental set of solutions of the nth-order linear homogeneous differential equation (4.5) is also a solution of the differential equation; we call this linear combination a general solution of the differential equation.

Definition 14 General Solution

If $S = \{f_1(x), f_2(x), \ldots, f_n(x)\}$ is a fundamental set of solutions of the nth-order linear homogeneous equation

$$a_n(x)\,y^{(n)} + a_{n-1}(x)\,y^{(n-1)} + \cdots + a_1(x)\,y' + a_0(x)\,y = 0,$$

then a general solution of the equation is

$$f(x) = c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x),$$

where $c_1, c_2, \ldots, c_n$ is a set of n arbitrary constants.

In other words, if we have a fundamental set of solutions S, then a general solution of the differential equation is formed by taking the linear combination of the functions in S.
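In practice, Mathematica's DSolve carries out all of these steps at once. As a quick check (ours, not part of the text) for the equation of Example 4.1.4, we ask DSolve for a general solution directly.

DSolve[y''[x] + 6 y'[x] + 5 y[x] == 0, y[x], x]

The output is equivalent to $y = c_1 e^{-5x} + c_2 e^{-x}$, the linear combination of the fundamental set found above; Mathematica writes the arbitrary constants as C[1] and C[2].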

Example 4.1.5

Show that $S = \{\cos 2x, \sin 2x\}$ is a fundamental set of solutions of the second-order linear ordinary differential equation with constant coefficients y″ + 4y = 0.

Solution

First, we verify that both functions are solutions of y″ + 4y = 0. Note that we have defined caps to be the set of functions $S = \{\cos 2x, \sin 2x\}$. Now, we use Map to apply the function y″ + 4y to the functions in caps: given an argument #, the pure function D[#,{x,2}]+4#& computes the sum of the second derivative (with respect to x) of the argument and four times the argument, so the command Map[D[#,{x,2}]+4#&,caps] computes y″ + 4y for each function y in caps. Because the result is a list of two zeros, we conclude that both functions are solutions of y″ + 4y = 0.

caps = {Cos[2x], Sin[2x]}
Map[D[#, {x, 2}] + 4 # &, caps]

{Cos[2x], Sin[2x]}

{0, 0}

Next, we compute the Wronskian

Wronskian[caps, x]

2

to show that the functions in S are linearly independent.

By the Principle of Superposition, $y(x) = c_1\cos 2x + c_2\sin 2x$, where $c_1$ and $c_2$ are arbitrary constants, is also a solution of the equation. We now graph y(x) for various values of $c_1$ and $c_2$. After defining y, we use Table to create a list obtained by replacing c[1] in y[x] by −1, 0, and 1 and c[2] by −1, 0, and 1. We name the resulting list toplot. Note that toplot is a list of lists: toplot consists of three elements, each of which is a list of three functions.

Clear[y]
y[x_] = c[1] Cos[2x] + c[2] Sin[2x];
toplot = Table[y[x], {c[1], -1, 1}, {c[2], -1, 1}];

Last, we use Plot to graph the nine functions in toplot for 0 ≤ x ≤ 2π in Figure 4-7.

Figure 4-7. Graphs of various linear combinations of cos 2x and sin 2x

Plot[Evaluate[toplot], {x, 0, 2 Pi}, AxesLabel -> {x, y}]

Alternatively, we can show the graphs individually in a graphics array as shown in Figure 4-8.

Figure 4-8. Graphs of various linear combinations of cos 2x and sin 2x

toshow = Map[Plot[#, {x, 0, 2 Pi}, AxesLabel -> {x, y}] &, Flatten[toplot]];
Length[toshow]

9

Show[GraphicsGrid[Partition[toshow, 3]]]

The Principle of Superposition is a very important property of linear homogeneous equations; it is generally not valid for nonlinear equations and is never valid for nonhomogeneous equations.

Example 4.1.6

Is the Principle of Superposition valid for the nonlinear equation tx″ − 2xx′ = 0?

Solution

We see that DSolve is able to find a general solution of this nonlinear equation.

gensol = DSolve[t x''[t] - 2 x[t] x'[t] == 0, x[t], t]

$$\left\{\left\{x[t]\to \frac{1}{2}\left(-1+\sqrt{-1-8\,C[1]}\ \text{Tan}\left[\frac{1}{2}\left(\sqrt{-1-8\,C[1]}\;C[2]+\sqrt{-1-8\,C[1]}\;\text{Log}[t]\right)\right]\right)\right\}\right\}$$

Next, we define sol1 and sol2 to be two solutions to the equation.

sol1 = gensol[[1, 1, 2]]/.{C[1] -> -1/8, C[2] -> 0}

$-\frac{1}{2}$

sol2 = gensol[[1, 1, 2]]/.{C[1] -> -1/4, C[2] -> 1}

$\frac{1}{2}\left(-1+\text{Tan}\left[\frac{1}{2}\left(1+\text{Log}[t]\right)\right]\right)$

However, the sum of these two solutions is not a solution to the nonlinear equation because $t\,y'' - 2y\,y' \neq 0$; the Principle of Superposition is not valid for this nonlinear equation.

y[t_]=sol1+sol2

$-\frac{1}{2}+\frac{1}{2}\left(-1+\text{Tan}\left[\frac{1}{2}\left(1+\text{Log}[t]\right)\right]\right)$

t y''[t] - 2 y[t] y'[t] // Simplify

$$\frac{\text{Sec}\left[\frac{1}{2}\left(1+\text{Log}[t]\right)\right]^2}{4t}$$
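The failure of superposition can also be checked in a single step. The helper below is our own sketch (the name check is not from the text); it evaluates the left-hand side ty″ − 2yy′ for any candidate function.

check[f_] := Simplify[t D[f, {t, 2}] - 2 f D[f, t]]
check /@ {sol1, sol2} (* {0, 0}: each function alone is a solution *)
check[sol1 + sol2] (* nonzero, so the sum is not a solution *)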

URL: https://www.sciencedirect.com/science/article/pii/B9780128047767000048

Homogeneous Equation

Alberto Cabada, ... Lucía López-Somoza, in Maximum Principles for the Hill's Equation, 2018

2.2 Sturm Comparison Theory

We now present the Sturm comparison theory, establishing the main general properties of oscillation of solutions.

Theorem 6 Sturm's separation

Let $y_1$ and $y_2$ be two linearly independent solutions of (2.1). Then neither $y_1$ and $y_2$ nor $y_1'$ and $y_2'$ can have any zero in common. Moreover, $y_1$ vanishes exactly once between two consecutive zeros of $y_2$, and conversely.

Proof

Let $y_1$ and $y_2$ satisfy the conditions of the theorem. Since they are linearly independent solutions of the equation, their Wronskian

$$W(y_1, y_2) = y_1\,y_2' - y_2\,y_1'$$

is different from zero on $\mathbb{R}$ and so, being continuous, it has constant sign.

As an immediate consequence we deduce that neither $y_1$ and $y_2$ nor $y_1'$ and $y_2'$ can have any zero in common; otherwise the Wronskian would vanish at that point.

Consider now two consecutive zeros $t_1$ and $t_2$ of $y_2$. At such points the Wronskian takes the form $y_1\,y_2'$ and, since it is different from zero, both $y_1$ and $y_2'$ are different from zero at $t_1$ and $t_2$. In addition, $y_2'(t_1)$ and $y_2'(t_2)$ take opposite signs.

As the Wronskian has constant sign, $y_1(t_1)$ and $y_1(t_2)$ must also have different signs, from which we deduce, given the continuity of $y_1$, that it must vanish somewhere on $(t_1, t_2)$.

Moreover, $y_1$ has a unique zero between $t_1$ and $t_2$, since otherwise the previous argument would imply that $y_2$ takes the value zero between two consecutive zeros of $y_1$, which contradicts the hypothesis that $t_1$ and $t_2$ are consecutive zeros of $y_2$. □

The situation described in the previous theorem is illustrated in Fig. 2.1.

Figure 2.1. Two linearly independent solutions of (2.1).

We now simplify expression (2.1), showing that every equation of this form, in which p and q satisfy suitable regularity conditions, can be rewritten as a Hill's equation, also called the normal form of (2.1),

(2.2) $u''(t) + a(t)\,u(t) = 0$, a.e. $t \in \mathbb{R}$.

In order to write (2.1) in the normal form, we decompose $y(t) = u(t)\,v(t)$, so that $y' = u'v + uv'$ and $y'' = u''v + 2u'v' + uv''$. Substituting in (2.1), we obtain

$$v\,u'' + (2v' + p\,v)\,u' + (v'' + p\,v' + q\,v)\,u = 0.$$

Setting the coefficient of $u'$ equal to zero, we deduce that, for some $t_0 \in \mathbb{R}$,

$$v(t) = e^{-\frac{1}{2}\int_{t_0}^{t} p(s)\,ds}$$

reduces (2.1) to the normal form (2.2), with

$$a(t) = \frac{v''(t)}{v(t)} + p(t)\,\frac{v'(t)}{v(t)} + q(t).$$

Taking into account that

$$\frac{v'(t)}{v(t)} = -\frac{1}{2}\,p(t)$$

and

$$\frac{v''(t)}{v(t)} = -\frac{1}{2}\,p'(t) + \frac{1}{4}\,p^2(t),$$

we obtain

$$a(t) = q(t) - \frac{1}{4}\,p^2(t) - \frac{1}{2}\,p'(t).$$

We observe that, since v never takes the value zero, the transformation we have just made affects neither the zeros of the solutions nor their oscillation and sign.
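As a concrete illustration (ours, not from the text), consider $y'' + \frac{2}{t}\,y' + y = 0$ for $t > 0$, so that $p(t) = 2/t$ and $q(t) = 1$. Then

$$v(t) = e^{-\frac{1}{2}\int_{t_0}^{t}\frac{2}{s}\,ds} = \frac{t_0}{t}, \qquad a(t) = q - \frac{1}{4}\,p^2 - \frac{1}{2}\,p' = 1 - \frac{1}{t^2} + \frac{1}{t^2} = 1,$$

so the normal form is simply $u'' + u = 0$, and the oscillation of $y = uv$ is exactly that of $\sin$ and $\cos$.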

Consequently, from now on, we will work with the Hill's equation (2.2). We will be especially interested in those cases in which either a is a positive function or it changes sign. This is due to the following result.

Theorem 7

If $a < 0$ a.e. on $\mathbb{R}$, every nontrivial solution of (2.2) has at most one zero on $\mathbb{R}$.

Proof

Assume $a(t) < 0$ for a.e. $t \in \mathbb{R}$ in Eq. (2.2) and let u be a nontrivial solution.

If $t_0$ is a zero of u, since the solution is not trivial, necessarily $u'(t_0) \neq 0$. Suppose that $u'(t_0) > 0$. Then u takes positive values on some interval to the right of $t_0$ and, consequently, $u''(t) = -u(t)\,a(t) > 0$ for a.e. t on that interval. Therefore $u'$ is increasing to the right of $t_0$, which implies that u does not have any zero to the right of $t_0$. In the same way, u cannot have zeros to the left of $t_0$.

If $u'(t_0) < 0$, the argument is analogous. □

Thus, since for $a < 0$ a.e. on $\mathbb{R}$ the solutions do not oscillate, we will not consider this case. Nevertheless, the condition $a > 0$ a.e. on $\mathbb{R}$ does not ensure oscillation. In fact, let u be a nontrivial solution of (2.2) with $a > 0$ a.e. on $\mathbb{R}$. If we consider an interval on which $u > 0$, then $u''(t) = -a(t)\,u(t) < 0$ for a.e. t in that interval, that is, $u'$ is decreasing. If this slope becomes negative, u will have a zero somewhere to the right of the considered interval. However, if $u'$ decreases but remains positive, then u is strictly increasing and never takes the value zero.

We can find an example of this situation by considering Euler's equation

(2.3) $u''(x) + \dfrac{k}{x^2}\,u(x) = 0$, $x > 0$,

with k a positive constant.

The change of variable t = log ( x ) allows us to transform the previous equation into a homogeneous linear equation with constant coefficients. In fact, we have the following equalities

$$v(t) := u(e^t), \qquad v'(t) = u'(e^t)\,e^t, \qquad v''(t) = u''(e^t)\,e^{2t} + u'(e^t)\,e^t.$$

From (2.3), we reach the following equation with constant coefficients

$$v''(t) - v'(t) + k\,v(t) = 0.$$

This equation can be solved explicitly; its general solution is

$$v(t) = c_1 e^{\frac{1+\sqrt{1-4k}}{2}t} + c_2 e^{\frac{1-\sqrt{1-4k}}{2}t}, \quad \text{if } k < \tfrac{1}{4},$$

$$v(t) = c_1 e^{\frac{1}{2}t} + c_2\,t\,e^{\frac{1}{2}t}, \quad \text{if } k = \tfrac{1}{4},$$

$$v(t) = e^{\frac{1}{2}t}\left(c_1 \cos\frac{\sqrt{4k-1}}{2}t + c_2 \sin\frac{\sqrt{4k-1}}{2}t\right), \quad \text{if } k > \tfrac{1}{4}.$$

As a consequence, the solutions oscillate if and only if $k > \frac{1}{4}$.
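As a quick cross-check (ours, not part of the text), Mathematica solves Euler's equation (2.3) directly:

DSolve[u''[x] + k/x^2 u[x] == 0, u[x], x]

The exponents of x in the output are $\frac{1}{2}\left(1 \pm \sqrt{1-4k}\right)$, which under $t = \log x$ are exactly the exponents of $e^t$ above; for $k > \frac{1}{4}$ the square root becomes imaginary and the powers of x turn into the oscillatory factors $\sqrt{x}\,\cos\left(\frac{\sqrt{4k-1}}{2}\log x\right)$ and $\sqrt{x}\,\sin\left(\frac{\sqrt{4k-1}}{2}\log x\right)$.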

In connection with the preceding discussion, the following theorem gives a sufficient condition ensuring the oscillation of the solutions.

Theorem 8

Let u be a nontrivial solution of $u''(t) + a(t)\,u(t) = 0$, with $a \in L^1_{\mathrm{loc}}(\mathbb{R})$. Suppose that there exists $\bar{t} \in \mathbb{R}$ such that $a(t) > 0$ for a.e. $t > \bar{t}$ and

$$\int_{\bar{t}}^{\infty} a(t)\,dt = \infty.$$

Then u has an infinite number of zeros greater than $\bar{t}$.

Proof

Assume that u vanishes at, at most, a finite number of points on $(\bar{t}, \infty)$. Then there exists $t_0 > \bar{t}$ such that $u(t) \neq 0$ for $t \geq t_0$. Suppose that $u(t) > 0$ for all $t \geq t_0$. If we define $v(t) = \frac{u'(t)}{u(t)}$, then $v'(t) = -\left(a(t) + v^2(t)\right)$, and integrating from $t_0$ to t we obtain that

$$v(t) - v(t_0) = -\int_{t_0}^{t} a(s)\,ds - \int_{t_0}^{t} v^2(s)\,ds.$$

As by hypothesis we have that $\int_{t_0}^{\infty} a(t)\,dt = \infty$, we can affirm that $v(t) < 0$ for t large enough and that, for such values of t, $u(t)$ and $u'(t)$ have opposite signs. In particular, $u'(t)$ would be negative. In addition, $u'' = -a\,u < 0$ a.e. on $(t_0, \infty)$, so u is concave and decreasing. See Fig. 2.2.

Figure 2.2. Graph of u for t large enough.

This implies that u has a zero to the right of $t_0$. We thus reach a contradiction and conclude that u must have an infinite number of zeros on the positive axis.

If $u(t) < 0$ for $t \geq t_0$, the proof is analogous. □

Remark 6

The condition given in the previous theorem is sufficient but not necessary. To confirm this, it is enough to consider Eq. (2.3) defined on $(\bar{t}, \infty)$ for some $\bar{t} > 0$, for which we have that

$$\int_{\bar{t}}^{\infty} \frac{k}{x^2}\,dx = \frac{k}{\bar{t}}.$$

Nevertheless, as we have already commented, its solutions have an infinite number of zeros if and only if $k > \frac{1}{4}$.

Theorem 8 proves that, under certain conditions, a solution of (2.2) oscillates infinitely many times on an interval that is unbounded above. The following theorem shows that such a solution cannot oscillate infinitely many times on a compact interval; consequently, its oscillations must spread along the whole positive axis.

Theorem 9

Let u be a nontrivial solution of the equation $u''(t) + a(t)\,u(t) = 0$ on an interval $[c, d]$, with $a \in L^1([c, d])$. Then u has, at most, a finite number of zeros on this interval.

Proof

Suppose that u has infinitely many zeros on the interval $[c, d]$. Then there exist a point $t_0 \in [c, d]$ and a sequence of zeros $\{t_n\}_{n\in\mathbb{N}}$, with $t_n \neq t_0$, such that $\lim_{n\to\infty} t_n = t_0$. Since $u \in C^1[c, d]$, we have that

$$u(t_0) = \lim_{t_n \to t_0} u(t_n) = 0$$

and

$$u'(t_0) = \lim_{t_n \to t_0} \frac{u(t_n) - u(t_0)}{t_n - t_0} = 0.$$

As a consequence, $u(t_0) = u'(t_0) = 0$, and therefore u is the trivial solution. Thus, u cannot have an infinite number of zeros on the interval $[c, d]$. □

Having seen this, we may compare the way in which two different solutions of (2.2) oscillate. In this sense, Theorem 6 affirms that the zeros of every pair of nontrivial solutions of (2.2) either coincide or alternate, depending on whether the solutions are linearly dependent or independent. We can therefore affirm that all the solutions of (2.2) oscillate with the same speed, in the sense that, on a given interval, two solutions will have the same number of zeros or one of them will have exactly one more zero than the other.
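For instance (our illustration), for $u'' + u = 0$ the linearly independent solutions $\sin t$ and $\cos t$ have alternating zeros: on $[0, 2\pi]$, $\sin t$ vanishes at $0, \pi, 2\pi$ and $\cos t$ at $\pi/2, 3\pi/2$, so the counts differ by exactly one, while the proportional solutions $\sin t$ and $2\sin t$ have coinciding zeros.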

On the other hand, the following theorem describes the influence of the potential on the speed of oscillation of the solutions.

Theorem 10 Sturm's comparison

Let u and v be nontrivial solutions of

$$u''(t) + q(t)\,u(t) = 0, \quad \text{a.e. } t \in \mathbb{R}$$

and

$$v''(t) + r(t)\,v(t) = 0, \quad \text{a.e. } t \in \mathbb{R},$$

respectively, with $q, r \in L^1_{\mathrm{loc}}(\mathbb{R})$ such that $q > r$ a.e. on $\mathbb{R}$. Then u vanishes at least once between two consecutive zeros of v.

Proof

Let t 1 and t 2 be two consecutive zeros of v and suppose that u does not vanish on the interval ( t 1 , t 2 ) . By continuity, both u and v have constant sign on ( t 1 , t 2 ) and we will assume both are positive (in any other case the proof would be analogous).

Consider the Wronskian

$$W(u, v)(t) = u(t)\,v'(t) - v(t)\,u'(t).$$

Then

$$\frac{d\,W(u, v)}{dt} = u\,v'' - v\,u'' = u\,(-r\,v) - v\,(-q\,u) = (q - r)\,u\,v > 0$$

a.e. on the interval ( t 1 , t 2 ) .

If we integrate the previous expression between $t_1$ and $t_2$, we obtain that $W(u,v)(t_2) > W(u,v)(t_1)$. However, since $v(t_1) = v(t_2) = 0$ and $v(t) > 0$ on $(t_1, t_2)$, we have that $v'(t_1) \geq 0$ and $v'(t_2) \leq 0$, and so

$$W(u, v)(t_1) = u(t_1)\,v'(t_1) \geq 0$$

and

$$W(u, v)(t_2) = u(t_2)\,v'(t_2) \leq 0.$$

We reach a contradiction, and we conclude that u vanishes somewhere on the interval $(t_1, t_2)$. □

Remark 7

In particular, we can deduce from the previous theorem that, when $a(t) > k^2 > 0$ for a.e. $t \in \mathbb{R}$, every solution of Eq. (2.2) must have a zero between each pair of consecutive zeros of a solution $u(t) = \sin k(t - t_0)$ of the equation

$$u''(t) + k^2\,u(t) = 0,$$

that is, it must have a zero on every interval of length $\pi/k$.
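A quick numerical illustration (ours, not from the text), assuming Mathematica: take $a(t) = 4 + \cos t$, so that $a(t) > k^2$ with $k = 1$; Remark 7 then guarantees at least one zero of any solution in every interval of length π, hence at least 10 zeros on $[0, 10\pi]$.

sol = NDSolveValue[{u''[t] + (4 + Cos[t]) u[t] == 0, u[0] == 1, u'[0] == 0}, u, {t, 0, 10 Pi}];
(* each sign change of u marks a zero *)
Count[Differences[Sign[Table[sol[t], {t, 0., 10 Pi, 0.01}]]], Except[0]]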

URL: https://www.sciencedirect.com/science/article/pii/B9780128041178000023

Introduction

NAIL A. GUMEROV, RAMANI DURAISWAMI, in Fast Multipole Methods for the Helmholtz Equation in Three Dimensions, 2004

PROOF

The proof is similar to the proof of the corollary for Theorem 2.

One might then think that, by application of the curl, more linearly independent solutions can be generated. This is not true, since the operator $\nabla\times\nabla\times$ can be expressed via the Laplacian as:

(1.1.51) $\nabla\times\nabla\times(\nabla\psi\times\mathbf{r}) = -\nabla^2(\nabla\psi\times\mathbf{r}) = k^2\,(\nabla\psi\times\mathbf{r}).$

Thus the function $\hat{\mathbf{E}}^{(2)} = \nabla\times\nabla\times(\nabla\Psi\times\mathbf{r}) = k^2\,\hat{\mathbf{E}}^{(0)}$ depends linearly on $\hat{\mathbf{E}}^{(0)}$, where $\hat{\mathbf{E}}^{(0)} = \nabla\Psi\times\mathbf{r}$ is a solution of the Maxwell equations. This shows that all solutions produced by multiple application of the curl operator to $\nabla\Psi\times\mathbf{r}$ can be expressed via the two basic solutions $\nabla\Psi\times\mathbf{r}$ and $\nabla\times(\nabla\Psi\times\mathbf{r})$ and, generally, we can represent solutions of the Maxwell equations in the form:

(1.1.52) $\hat{\mathbf{E}} = \nabla\psi_1\times\mathbf{r} + \nabla\times(\nabla\psi_2\times\mathbf{r}),$

where $\psi_1$ and $\psi_2$ are two arbitrary scalar functions that satisfy Eqs. (1.1.43). Owing to identity (1.1.44), this can also be rewritten as:

(1.1.53) $\hat{\mathbf{E}} = \nabla\times\left[\mathbf{r}\,\psi_1 + \nabla\times(\mathbf{r}\,\psi_2)\right].$

Note that the above decomposition is centered at r = 0 (r = 0 is a special point). Obviously the center can be selected at an arbitrary point r = r *. For some problems it is more convenient to use a constant vector r * instead of r for the decomposition we used above. More generally, one can use a decomposition in the form:

(1.1.54) $\hat{\mathbf{E}} = \nabla\times\left\{(a_1\mathbf{r} + \mathbf{r}_{*1})\,\psi_1 + \nabla\times\left[(a_2\mathbf{r} + \mathbf{r}_{*2})\,\psi_2\right]\right\},$

which is valid for arbitrary constants $a_1$ and $a_2$ (which can be zero) and vectors $\mathbf{r}_{*1}$ and $\mathbf{r}_{*2}$, which can be selected as convenience dictates for the solution of a particular problem.

Consider now the phasor of the magnetic field vector. As follows from the first equation (1.1.36) it satisfies the equation:

(1.1.55) $i\omega\mu\,\hat{\mathbf{H}} = \nabla\times\hat{\mathbf{E}}.$

Substituting here decomposition (1.1.52) and using identities (1.1.44) and (1.1.51), we obtain

(1.1.56) $ic\mu k\,\hat{\mathbf{H}} = \nabla\times(\nabla\psi_1\times\mathbf{r}) + k^2\,(\nabla\psi_2\times\mathbf{r}) = \nabla\times\left[k^2\,\mathbf{r}\,\psi_2 + \nabla\times(\mathbf{r}\,\psi_1)\right].$

This form is similar to the representation of the phasor of the electric field (1.1.53) where functions ψ 1 and ψ 2 exchange their roles and some coefficients appear. In the case of more general decomposition (1.1.54) we have:

(1.1.57) $ic\mu k\,\hat{\mathbf{H}} = \nabla\times\left\{k^2\,(a_2\mathbf{r} + \mathbf{r}_{*2})\,\psi_2 + \nabla\times\left[(a_1\mathbf{r} + \mathbf{r}_{*1})\,\psi_1\right]\right\}.$

This shows that the solution of the Maxwell equations in the frequency domain is equivalent to the solution of two scalar Helmholtz equations. These equations can be considered independent, while their coupling occurs via the boundary conditions of particular problems.

Note that the wavenumber k in the scalar Helmholtz equations (1.1.43) is real. More complex models of the medium can be considered once we consider waves in a material medium rather than in vacuum (say, owing to the presence of particles of sizes much smaller than the wavelength, or for waves whose period is comparable with the periods of molecular relaxation or resonance). In such a medium one can expect effects of dispersion and dissipation, such as we considered for acoustic wave propagation in complex media. This will introduce a dispersion relationship k = k(ω) and a complex k.

URL: https://www.sciencedirect.com/science/article/pii/B9780080443713500053

Answers to Selected Exercises

Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fourth Edition), 2014

Exercises 4.2

1.

$y'' = 0$ has characteristic equation $r^2 = 0$, so $r = 0$ has multiplicity two. Two linearly independent solutions to the equation are $y_1 = 1$ and $y_2 = t$; a fundamental set of solutions is $S = \{1, t\}$; and a general solution is $y = c_1 + c_2 t$.

3.

$y'' + y' = 0$ has characteristic equation $r^2 + r = 0$, which has solutions $r_1 = 0$ and $r_2 = -1$. Two linearly independent solutions to the equation are $y_1 = 1$ and $y_2 = e^{-t}$; a fundamental set of solutions is $S = \{1, e^{-t}\}$; and a general solution is $y = c_1 + c_2 e^{-t}$.

5.

$y = c_1 e^{-6t} + c_2 e^{-2t}$.

6.

$6r^2 + 5r + 1 = (3r + 1)(2r + 1) = 0$, so $r_1 = -1/3$ and $r_2 = -1/2$. Thus, two linearly independent solutions to the equation are $y_1 = e^{-t/3}$ and $y_2 = e^{-t/2}$; a fundamental set of solutions is $S = \{e^{-t/3}, e^{-t/2}\}$; and a general solution is $y = c_1 e^{-t/3} + c_2 e^{-t/2}$.

7.

y = c 1et/4 + c 2et/2.

9.

$y = c_1\cos 4t + c_2\sin 4t$.

11.

$y = c_1\cos(\sqrt{7}\,t) + c_2\sin(\sqrt{7}\,t)$.

13.

y = c 1et + c2e3t/7.

15.

y = c 1e3t + c 2 te3t .

17.

General: y = c 1 + c 2e t/3; IVP: y = −21(1 −e t/3).

19.

General: y = c 1e3t + c 2e4t ; IVP: y = 14e3t − 11e4t .

21.

General: y = c 1e5t + c 2e2t ; IVP: y = e5t .

23.

General: $y = c_1\cos 10t + c_2\sin 10t$; IVP: $y = \cos 10t + \sin 10t$.

25.

General: $y = e^{-2t}(c_1 + c_2 t)$; IVP: $y = e^{-2t}(1 + 5t)$.

27.

General: $y = e^{2t}(c_1\cos 4t + c_2\sin 4t)$; IVP: $y = e^{2t}(2\cos 4t + \sin 4t)$.

29.

$y = e^{-t/2}\cos\frac{\sqrt{3}}{2}t + \frac{1}{\sqrt{3}}\,e^{-t/2}\sin\frac{\sqrt{3}}{2}t$.

31.

$y = \dfrac{e^{\left(\frac{1}{2}-\frac{\sqrt{5}}{2}\right)t} - e^{\left(\frac{1}{2}+\frac{\sqrt{5}}{2}\right)t}}{\sqrt{5}}$.

33.

y = 9et/3 − 8et/2.

35.

$y = \frac{3}{4}\,e^{2t}\sin(4t) + e^{2t}\cos(4t)$.

37.

(a) $y = c_1 t^{2/3} + c_2 t$; (b) $y = t\,(c_1 + c_2 \ln t)$.

41.

$y = Ce^{t}\sin 2t$, $y = 0$, $y = e^{t}\cos 2t$.

43.

$y = \left(\frac{3}{2}a + \frac{1}{2}b\right)e^{-t} - \left(\frac{1}{2}a + \frac{1}{2}b\right)e^{-3t}$, so $y' = -\left(\frac{3}{2}a + \frac{1}{2}b\right)e^{-t} + 3\left(\frac{1}{2}a + \frac{1}{2}b\right)e^{-3t}$; $y' = 0$ if $t = \frac{1}{2}\ln\frac{3(b+a)}{3a+b}$. For none, (b + a)(3a + b) ≤ 0, while for one, (b + a)(3a + b) > 0.

45.

(a) No. (b) To be a general solution, a fundamental set for the equation is $S = \{t\cos t, t\sin t\}$. Now substitute each of these functions into the differential equation and set the result equal to zero. Solve the resulting system for p(t) and q(t) to obtain $p(t) = -2/t$ and $q(t) = (t^2 + 2)/t^2$.
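Answers such as these can be spot-checked symbolically. A brief sketch (ours, not from the text), assuming Mathematica: the first line re-derives the general solution in answer 6, and the second solves for the coefficients p(t) and q(t) in answer 45(b).

DSolve[6 y''[t] + 5 y'[t] + y[t] == 0, y[t], t]
Solve[Table[D[f, {t, 2}] + p[t] D[f, t] + q[t] f == 0, {f, {t Cos[t], t Sin[t]}}], {p[t], q[t]}] // Simplify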

URL: https://www.sciencedirect.com/science/article/pii/B9780124172197000144

Boundary Value Problems with Circular Symmetry

Enrique A. González-Velasco, in Fourier Analysis and Boundary Value Problems, 1995

§10.5 Bessel Functions of the Second Kind

In order to obtain the general solution of (2) when p is a nonnegative integer, we still need a second linearly independent solution. We can try variation of constants and assume, following a classic method of d'Alembert, that it is of the form $y = cJ_p$, where c is a function of x. Substitution into (2) yields

$$c(x) = \int \frac{1}{x\,J_p^2(x)}\,dx.$$

Assuming now—as it turns out to be the case—that $1/J_p^2(x)$ can be expanded in a power series starting with a constant term, it follows that this second solution is a constant multiple of

$$J_p(x)\log x + w(x),$$

where w(x) still has to be determined. This was done in 1867 by Carl Gottfried Neumann (1832–1925), a professor at Leipzig, for p = 0, and then, after deriving a recursion formula, he obtained the solution for any p by induction. However, a more convenient second solution was found independently by Hermann Hankel (1839–1873), of the University of Tübingen, in his 1869 memoir Die Cylinderfunctionen erster und zweiter Art. His definition was not valid if 2p is an odd integer, and for this reason it was modified in 1872 by Ernst Heinrich Weber (1842–1913), then at Heidelberg but soon to become a full professor at Königsberg, where he was one of Hilbert's teachers, and by Ludwig Schläfli (1814–1895) of the University of Bern (published in 1873 and 1875, respectively). Although Weber defined his solution by means of an integral, it turns out that if p is not an integer it can be written in the form

$$Y_p = \frac{J_p\cos p\pi - J_{-p}}{\sin p\pi}.$$

We shall omit a detailed account of the work that led to this function and simply observe that it is indeed a solution of (2), since it is a linear combination of solutions. The notation $Y_p$ is due to Hankel and, except for a constant factor, this is the form proposed by Schläfli. Still, one may wonder about the reason for choosing this particular linear combination of $J_p$ and $J_{-p}$, since the stated quotient is undefined for every integral value of p. The reason will be stated at the end of §10.6, but note now that if n is an integer we can use l'Hôpital's rule to evaluate $\lim_{p\to n} Y_p$ from the expression above, and then define the resulting limit to be $Y_n$. We obtain

$$Y_n = \lim_{p\to n}\frac{-\pi J_p\sin p\pi + \cos p\pi\,\dfrac{\partial J_p}{\partial p} - \dfrac{\partial J_{-p}}{\partial p}}{\pi\cos p\pi} = \frac{1}{\pi}\left[\frac{\partial J_p}{\partial p} - (-1)^n\,\frac{\partial J_{-p}}{\partial p}\right]_{p=n}$$

since $\sin p\pi \to 0$ and $\cos p\pi \to (-1)^n$ as $p \to n$. The computations of the derivatives and limits in this expression were first carried out by Hankel, who also showed that the function so defined is a solution of (2) and obtained its expansion as an infinite series. However, these computations are rather long and tedious, and we shall also omit them. If we define

$$\varphi(k) = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}$$

for every positive integer k and then the so-called Euler constant by

$$\gamma = \lim_{k\to\infty}\left[\varphi(k) - \log k\right],$$

the results are

$$Y_0(x) = \frac{2}{\pi}\left(\log\frac{x}{2} + \gamma\right)J_0(x) - \frac{2}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k\,\varphi(k)}{(k!)^2}\left(\frac{x}{2}\right)^{2k}$$

and, if n is a positive integer,

$$Y_n(x) = \frac{2}{\pi}\left(\log\frac{x}{2} + \gamma\right)J_n(x) - \frac{1}{\pi}\sum_{k=0}^{n-1}\frac{(n-k-1)!}{k!}\left(\frac{x}{2}\right)^{2k-n} - \frac{\varphi(n)}{\pi\,n!}\left(\frac{x}{2}\right)^{n} - \frac{1}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^k\left[\varphi(k) + \varphi(k+n)\right]}{k!\,(k+n)!}\left(\frac{x}{2}\right)^{2k+n}.$$

Then term-by-term differentiation and substitution into the Bessel differential equation of order n would show that $Y_n$ is a solution for $n \geq 0$.

Definition 10.3

If n is a nonnegative integer, the function $Y_n : (0, \infty) \to \mathbb{R}$ defined by the two preceding equations is called the Bessel function of the second kind of order n.

The behavior of Yn near the origin is as follows.

Lemma 10.2

If n is a nonnegative integer, then $Y_n(x) \to -\infty$ as $x \to 0$.

Proof

For n = 0 this is clear from the expansion for $Y_0(x)$, since $J_0(x) \to 1$ as $x \to 0$. For n > 0, the well-known fact that $x^m\log x \to 0$ as $x \to 0$ for any positive integer m shows that the first term in the expansion for $Y_n(x)$ approaches zero as $x \to 0$. Then, for very small x,

$$Y_n(x) \approx -\frac{(n-1)!}{\pi}\left(\frac{x}{2}\right)^{-n},$$

and the result follows, Q.E.D.

Lemmas 10.1 and 10.2 show that $J_n$ and $Y_n$ are linearly independent for any nonnegative integer n. Since this is the only case we need to consider, we have found the general solution of (2) in the case excluded in Theorem 10.1.

Theorem 10.3

If n is a nonnegative integer, then the general solution of (2) on (0, ∞) is

$$y = A\,J_n + B\,Y_n,$$

where A and B are arbitrary constants.
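As a machine check (our sketch, not part of the text), Mathematica's built-in BesselJ and BesselY, which implement $J_n$ and $Y_n$, can be substituted into Bessel's equation of order n, and the residual simplifies to zero:

eq[w_] := x^2 D[w, {x, 2}] + x D[w, x] + (x^2 - n^2) w
FullSimplify[eq /@ {BesselJ[n, x], BesselY[n, x]}] (* {0, 0} *)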

URL: https://www.sciencedirect.com/science/article/pii/B9780122896408500101

Higher Order Equations

Martha L. Abell, James P. Braselton, in Introductory Differential Equations (Fourth Edition), 2014

Reduction of Order

In the next section, we learn how to find solutions of homogeneous equations with constant coefficients. In doing so, we will find it necessary to determine a second linearly independent solution from a known solution. We illustrate this procedure, called reduction of order, by considering the second-order equation in normal (or standard) form $y'' + p(t)y' + q(t)y = 0$ and assuming that we are given one solution, $y_1(t) = f(t)$. We know from our previous discussion that to find a general solution of this second-order equation, we must have two linearly independent solutions, so we must determine a second linearly independent solution. We accomplish this by attempting to find a solution of the form

$$y_2(t) = v(t)\,f(t)$$

One would never look for a second linearly independent solution of the form $y_2 = c\,f(t)$ for a constant c, because f(t) and c f(t) are linearly dependent.

and solving for v(t). Differentiating with the product rule, we obtain

$$y_2' = v'f + vf' \quad\text{and}\quad y_2'' = v''f + 2v'f' + vf''.$$

For convenience, we have omitted the argument of these functions. We now substitute $y_2$, $y_2'$, and $y_2''$ into the equation $y'' + p(t)y' + q(t)y = 0$. This gives us

$$y_2'' + p(t)y_2' + q(t)y_2 = v''f + 2v'f' + vf'' + p(t)(v'f + vf') + q(t)vf = \underbrace{(f'' + p(t)f' + q(t)f)}_{=0}\,v + v''f + 2v'f' + p(t)v'f = fv'' + (2f' + p(t)f)v'.$$

Therefore, we have the equation $fv'' + (2f' + p(t)f)v' = 0$, which can be written as a first-order equation by letting $w = v'$. Making this substitution gives us the linear first-order equation

$$fw' + (2f' + p(t)f)w = 0 \quad\text{or}\quad f\frac{dw}{dt} + (2f' + p(t)f)w = 0,$$

which is separable, so we obtain the separated equation

$$\frac{1}{w}\,dw = \left(-\frac{2f'}{f} - p(t)\right)dt.$$

We solve this equation by integrating both sides of the equation to yield

$$\ln|w| = \ln\frac{1}{f^2} - \int p(t)\,dt.$$

If $y_1 = f(t)$ is a known solution of the differential equation $y'' + p(t)y' + q(t)y = 0$, we can obtain a second linearly independent solution of the form $y_2 = v(t)f(t)$, where $v(t) = \int\frac{1}{(f(t))^2}\,e^{-\int p(t)\,dt}\,dt$.

This means that $w = \frac{1}{f^2}\,e^{-\int p(t)\,dt}$, so we have the formula $\frac{dv}{dt} = \frac{1}{f^2}\,e^{-\int p(t)\,dt}$ or

(4.4) $v(t) = \displaystyle\int\frac{1}{(f(t))^2}\,e^{-\int p(t)\,dt}\,dt.$

We leave the proof that

$y_1(t) = f(t)$ and

$$y_2(t) = v(t)f(t) = f(t)\int\frac{1}{(f(t))^2}\,e^{-\int p(t)\,dt}\,dt$$

are linearly independent as an exercise.

Example 4.1.7

Determine a second linearly independent solution to the differential equation $y'' + 6y' + 9y = 0$ given that $y_1 = e^{-3t}$ is a solution.

Solution

First we identify the functions p(t) = 6 and $f(t) = e^{-3t}$. Then we determine the function v(t) so that $y_2(t) = v(t)f(t)$ is a second linearly independent solution of the equation, using the formula

$$v(t) = \int\frac{1}{(f(t))^2}\,e^{-\int p(t)\,dt}\,dt = \int\frac{1}{(e^{-3t})^2}\,e^{-\int 6\,dt}\,dt = \int\frac{1}{e^{-6t}}\,e^{-6t}\,dt = \int dt = t.$$

A second linearly independent solution is $y_2 = v(t)f(t) = te^{-3t}$; a general solution of the differential equation is $y = (c_1 + c_2 t)e^{-3t}$; and a fundamental set of solutions for the equation is $\{e^{-3t}, te^{-3t}\}$.

Example 4.1.8

Determine a second linearly independent solution to the differential equation $4t^2 y'' + 8ty' + y = 0$, $t > 0$, given that $y_1 = t^{-1/2}$ is a solution.

Solution

In this case, we must first write the equation in normal (or standard) form to use formula (4.4), so we divide by $4t^2$ to obtain $y'' + 2t^{-1}y' + \frac{1}{4t^2}y = 0$. Therefore, $p(t) = 2t^{-1}$ and $f(t) = t^{-1/2}$. Using the formula for v, we obtain

$$v(t) = \int\frac{1}{(f(t))^2}\,e^{-\int p(t)\,dt}\,dt = \int\frac{1}{\left(t^{-1/2}\right)^2}\,e^{-\int\frac{2}{t}\,dt}\,dt = \int t\,e^{-2\ln t}\,dt = \int t^{-1}\,dt = \ln t, \quad t > 0.$$

A second linearly independent solution is $y_2 = v(t)f(t) = t^{-1/2}\ln t$; a general solution of the differential equation is $y = t^{-1/2}(c_1 + c_2\ln t)$; and a fundamental set of solutions for the equation is $\{t^{-1/2}, t^{-1/2}\ln t\}$.
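Both computations can be automated. The helper below is our sketch of formula (4.4), not code from the text; its arguments are the normal-form coefficient p and the known solution f.

(* reduction of order, formula (4.4) *)
v[p_, f_, t_] := Integrate[1/f^2 Exp[-Integrate[p, t]], t]
v[6, Exp[-3 t], t] (* t, as in Example 4.1.7 *)
v[2/t, t^(-1/2), t] (* Log[t], as in Example 4.1.8 *)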

URL: https://www.sciencedirect.com/science/article/pii/B9780124172197000041

PARAMETRIC EXCITATION

S.C. Sinha, A. David, in Encyclopedia of Vibration, 2001

Floquet Theory

Consider the linear periodic system given by eqn (4). Let Φ(t) denote the fundamental solution matrix or state transition matrix (STM), which contains n linearly independent solutions of eqn (4) with the initial condition Φ(0) = I, where I is the n×n unit matrix. Then the following statements hold:

1.

$\Phi(t + T) = \Phi(T)\,\Phi(t)$, $0 \leq t \leq T$, and, consequently,

2.

$\Phi(t + jT) = \left[\Phi(T)\right]^{j}\,\Phi(t)$, $0 \leq t \leq T$, $j = 2, 3, \ldots$

3.

$x(t) = \Phi(t)\,x(0)$, $t \geq 0$. These results imply that, if the solution is known for the first period, it can be constructed for all time t. The matrix Φ(T) is called the Floquet transition matrix (FTM). The next statement considers the stability of eqn (4).

4.

Let $\zeta_i$ (i = 1, …, n) denote the eigenvalues of Φ(T). System (4) is asymptotically stable if all $\zeta_i$ lie inside the unit circle of the complex plane. The system is unstable if at least one of the eigenvalues of the FTM has magnitude greater than one. The eigenvalues $\zeta_i$ are called the Floquet multipliers. The above statements briefly summarize the results of Floquet theory.
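As an illustration (ours, not from the article), the FTM of a Mathieu-type system $x'' + (\delta + \varepsilon\cos t)x = 0$ with $T = 2\pi$ can be built numerically in Mathematica by integrating the solutions with initial data (1, 0) and (0, 1) over one period; the helper name ftm and the sample values δ = 1.5, ε = 0.4 are our own.

ftm[δ_, ε_] := Module[{T = 2 Pi, cols},
  (* columns of Φ(T): the two solutions, with their derivatives, evaluated at t = T *)
  cols = NDSolveValue[{x''[t] + (δ + ε Cos[t]) x[t] == 0, x[0] == #[[1]], x'[0] == #[[2]]}, {x[T], x'[T]}, {t, 0, T}] & /@ {{1, 0}, {0, 1}};
  Transpose[cols]]
Eigenvalues[ftm[1.5, 0.4]] (* Floquet multipliers; stable if all lie inside the unit circle *)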

URL: https://www.sciencedirect.com/science/article/pii/B0122270851000382

Particles and Waves in Electron Optics and Microscopy

Giulio Pozzi, in Advances in Imaging and Electron Physics, 2016

5 Comparison Between the Two Approaches

In order to compare the results of the eikonal and multislice methods, it is convenient to start with Eq. (154) and to substitute, in place of the two generic linearly independent solutions of the trajectory equations, the two particular solutions g(z) and h(z), which in the object plane z = z_O satisfy the initial conditions given by Eq. (85) in Chapter "Particle theory of image formation" of this volume.

By taking

(155) $\rho(z) = h(z); \quad \sigma(z) = g(z),$

the spherical wave becomes

(156) $$\psi_S(x, y, z) = \frac{A}{h(z)\sqrt{p(z)}}\,\exp\left[i\int_{z_O}^{z}p(t)\,dt\right]\exp\left\{\frac{i}{2h(z)}\left[\frac{1}{p(z_O)}\left(B^2 + C^2\right)g(z) + 2Bx + 2Cy + p(z)\,h'(z)\left(x^2 + y^2\right)\right]\right\}$$

Using the relation

(157) $$p(z)\,h'(z) = \frac{p(z_O)}{g(z)} + \frac{p(z)\,h(z)\,g'(z)}{g(z)}$$

derived from Eq. (146), Eq. (156) can be rewritten in the more useful form

(158) $$\psi_S(x, y, z) = \frac{A}{h(z)\sqrt{p(z)}}\,\exp\left\{i\left[\int_{z_O}^{z}p(t)\,dt + \frac{p(z)\,g'(z)}{2g(z)}\left(x^2 + y^2\right)\right]\right\}\exp\left\{\frac{i\,g(z)}{2h(z)\,p(z_O)}\left[\left(B + \frac{p(z_O)}{g(z)}x\right)^2 + \left(C + \frac{p(z_O)}{g(z)}y\right)^2\right]\right\}$$

If, for the parameters B and C, the following choice is made:

(159) $B = -x_O\,p(z_O), \quad C = -y_O\,p(z_O),$

then we can easily ascertain that we have recovered Eq. (58).

If, instead of Eq. (155), the opposite choice is made, i.e.,

(160) $\rho(z) = g(z), \quad \sigma(z) = h(z),$

then the spherical wave can be written as

(161) $$\psi_S(x, y, z) = \frac{A}{g(z)\sqrt{p(z)}}\,\exp\left[i\int_{z_O}^{z}p(t)\,dt\right]\exp\left\{\frac{i}{2g(z)}\left[\frac{1}{p(z_O)}\left(B^2 + C^2\right)h(z) + 2Bx + 2Cy + p(z)\,g'(z)\left(x^2 + y^2\right)\right]\right\}$$

At the object plane, z  = z O , this spherical wave becomes a plane wave, given by

(162) $$\psi_S(x_O, y_O, z_O) = \frac{A}{\sqrt{p(z_O)}}\,\exp\left[i\left(Bx_O + Cy_O\right)\right]$$

and with the substitution

(163) $B = \dfrac{p_O\,x_F}{h_F}, \quad C = \dfrac{p_O\,y_F}{h_F},$

we recover the plane wave contribution in Eq. (64).

In conclusion, both approaches lead to the same result.

URL: https://www.sciencedirect.com/science/article/pii/S1076567016300246

Half-Linear Differential Equations

In North-Holland Mathematics Studies, 2005

1.2.5 More on the proof of the separation theorem

We have already seen that Theorem 1.2.3 can be proved by means of the Riccati technique (i.e., the equivalence (i)⇔(iii) of Theorem 1.2.2). Here we offer four further proofs, in order to show the wide variety of approaches which are possible in spite of the fact that the additivity of the solution space of (1.1.1) is lost.

Before we present the proofs, let us give some auxiliary results. First note that in Subsection 1.3.1 below it is proved that the Wronskian identity applies only when p = 2. This means, among other things, that there is no extension of the reduction of order formula (see also Subsection 1.3.1), which is the basis for one of the proofs of the separation theorem in the linear case. On the other hand, the concept of the Wronskian can be utilized in the characterization of linear (in)dependence of solutions (Lemma 1.3.1), which will be helpful here. The following statement clearly follows from Lemma 1.3.1. Nevertheless, we also offer an alternative proof.

Lemma 1.2.2

Two nontrivial solutions x and y of (1.1.1) which are not proportional cannot have a common zero.

Proof. Suppose, by contradiction, that x and y are linearly independent solutions with $x(t_0) = 0 = y(t_0)$. Then $x'(t_0) = A \neq 0$ and $y'(t_0) = B \neq 0$. Consider the solution z of (1.1.1) satisfying $z(t_0) = 0$, $z'(t_0) = 1$. Then Az and Bz are the solutions of (1.1.1) satisfying $Az(t_0) = 0 = Bz(t_0)$, $(Az)'(t_0) = A$, $(Bz)'(t_0) = B$, in view of the homogeneity property. Owing to the uniqueness we have $x = Az$ and $y = Bz$, which implies $x = (A/B)y$. Consequently, x and y are proportional, a contradiction.

Now we are ready to give several alternative proofs of the separation theorem (Theorem 1.2.3).

1

Proof based on the Riccati technique: See Subsection 1.2.4.

2

Variational proof: This proof is based on the combination of the implication (i)⇒(ii) of Theorem 1.2.2 with Lemma 1.2.1. In fact, in view of Lemma 1.2.1 there is nothing to prove, since the Roundabout theorem then says that it is impossible for two solutions of (1.1.1) to coexist when one solution has at least two zeros in a given interval while the other has none. This can also be seen from the Sturmian comparison theorem with c(t) ≡ C(t) and r(t) ≡ R(t). We call this proof variational since an important role is played by Picone's identity involving the p-degree functional F.

3

Proof based on Prüfer's transformation: Without loss of generality we assume x(t) > 0, t ∈ (t_1, t_2). Hence, by (1.1.17), $\varphi(t) \in (0, \pi_p)$ (mod $\pi_p$) for $t \in (t_1, t_2)$. See also Figure 1.2.2. By (1.1.20), $\varphi'(t_i) = r^{1-q}(t_i)$, i = 1, 2; thus φ is increasing in some neighborhood of $t_i$, and so without loss of generality we may suppose that $\varphi(t_1) = 0$, $\varphi(t_2) = \pi_p$. Let us consider any other solution y different from λx, λ ∈ ℝ. Then $y(t_1) \neq 0$ by Lemma 1.2.2. We may suppose $y(t_1) > 0$ and, by (1.1.17), $0 = \varphi(t_1) < \bar{\varphi}(t_1) < \pi_p$, where $\bar{\varphi}$ corresponds to y as φ corresponds to x. Since by (1.1.20) $\varphi'$ depends only on φ and not on ρ, there is also uniqueness for the variable φ alone. This uniqueness implies that from $\bar{\varphi}(t_1) > \varphi(t_1)$ it follows that $\bar{\varphi}(t) > \varphi(t)$ for all t in the interval under consideration. Hence $\bar{\varphi}(t_2) > \varphi(t_2) = \pi_p$, and the continuous function $\bar{\varphi}(t) - \pi_p$ changes its sign in $[t_1, t_2]$. So there is a point $\bar{t} \in (t_1, t_2)$ such that $\bar{\varphi}(\bar{t}) = \pi_p$. By (1.1.17), with y and $\bar{\varphi}$ instead of x and φ, respectively, $y(\bar{t}) = 0$.

Figure 1.2.2. Proof of the separation theorem based on Prüfer's transformation

4

Proof based on the Wronskian: Let y be a solution of (1.1.1) independent of x. Then

$$y(t_i)\,x'(t_i) = (x'y - y'x)(t_i) = W(x, y)(t_i) \neq 0,$$

since x, y are linearly independent. Hence $x'(t_i) \neq 0 \neq y(t_i)$ (actually, this follows also from the uniqueness and Lemma 1.2.2). Clearly, $x'(t_1)\,x'(t_2) < 0$ and $W(x, y)(t) > 0$ [or < 0] for all t, by Lemma 1.3.1 given below. Consequently, $y(t_1)\,y(t_2) < 0$, and therefore y has to vanish somewhere between $t_1$ and $t_2$.
5

Proof based on the uniqueness of the IVP: Without loss of generality we assume x(t) > 0, t ∈ (t_1, t_2), and we prove that another solution, linearly independent of x, has at least one zero in (t_1, t_2). Thus assume, by contradiction, that there is a solution y such that y(t) > 0 for t ∈ [t_1, t_2]. Define the set Ω = {λ > 0 : λx(t) < y(t) for t ∈ (t_1, t_2)}. Clearly, Ω is nonempty and bounded, so there exists $\bar{\lambda} = \sup\Omega$. Now it is not difficult to see that there exists $\bar{t} \in (t_1, t_2)$ such that $\bar{\lambda}x(\bar{t}) = y(\bar{t})$; indeed, if not, we come to a contradiction with the definition of $\bar{\lambda}$. By a contradiction with this definition it can also be shown that $\bar{\lambda}x'(\bar{t}) = y'(\bar{t})$. The uniqueness now yields $\bar{\lambda}x(t) = y(t)$. Consequently, x and y are linearly dependent, a contradiction.

URL: https://www.sciencedirect.com/science/article/pii/S0304020805800027

Special Functions

In Table of Integrals, Series, and Products (Eighth Edition), 2014

9.20 Introduction

9.201 A confluent hypergeometric function is obtained by taking the limit as c → ∞ in the solution of Riemann's differential equation

$$u = P\begin{Bmatrix} 0 & \infty & c & \\ \frac{1}{2}+\mu & -c & c-\lambda & z \\ \frac{1}{2}-\mu & 0 & \lambda & \end{Bmatrix}\qquad\text{WH}$$

9.202 The equation obtained by means of this limiting process is of the form

1.

$$\frac{d^2u}{dz^2} + \frac{du}{dz} + \left(\frac{\lambda}{z} + \frac{\frac{1}{4}-\mu^2}{z^2}\right)u = 0\qquad\text{WH}$$

Equation 9.202.1 has the following two linearly independent solutions:

2.

$$z^{\frac{1}{2}+\mu}\,e^{-z}\,\Phi\left(\tfrac{1}{2}+\mu-\lambda,\; 2\mu+1;\; z\right)$$

3.

$$z^{\frac{1}{2}-\mu}\,e^{-z}\,\Phi\left(\tfrac{1}{2}-\mu-\lambda,\; -2\mu+1;\; z\right)$$

which are defined for all values of $\mu \neq \pm\tfrac{1}{2}, \pm\tfrac{2}{2}, \pm\tfrac{3}{2}, \ldots$

MO 111
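As a quick check (ours, not part of the table), the first of these solutions can be verified with Mathematica's Hypergeometric1F1, which implements Φ:

u = z^(1/2 + μ) E^-z Hypergeometric1F1[1/2 + μ - λ, 2 μ + 1, z];
FullSimplify[D[u, {z, 2}] + D[u, z] + (λ/z + (1/4 - μ^2)/z^2) u] (* 0 *)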

URL: https://www.sciencedirect.com/science/article/pii/B9780123849335000096