Chapter 9 Matrix Formulation of the Linear Variation Method
A more recently discovered, faster procedure is the Givens–Householder–Wilkinson method. Here, H is first tridiagonalized, which means that all elements are made to vanish except those on the main diagonal and on the codiagonals immediately above and below it. This similarity transformation can be done in a few steps, each step zeroing all the necessary elements in an entire row and column. The eigenvalues of the tridiagonal matrix (and hence of the original matrix) may then be found one at a time, as desired. If only the third-lowest eigenvalue is of interest, that one alone can be computed. This is a useful degree of freedom, and it results in substantial savings of time. Once an eigenvalue is found, its corresponding eigenvector may be computed.
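The reduction the paragraph describes can be sketched in code. The following is a minimal illustration of Householder tridiagonalization of a symmetric matrix; the function name, the 3 × 3 test matrix, and the use of NumPy are illustrative choices, not from the text:

```python
import numpy as np

def householder_tridiagonalize(H):
    """Reduce a real symmetric matrix to tridiagonal form by a sequence
    of Householder reflections, each a similarity transformation that
    zeroes a whole row and column beyond the codiagonals."""
    A = np.array(H, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]                 # part of column k below the codiagonal
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        if np.linalg.norm(v) == 0.0:
            continue                     # column is already in the desired form
        v /= np.linalg.norm(v)
        P = np.eye(n)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = P @ A @ P                    # P is orthogonal and symmetric: a similarity transform
    return A

H = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 1.0]])
T = householder_tridiagonalize(H)
```

Because each step is a similarity transformation, T has the same eigenvalues as H, while every element beyond the codiagonals of T is zero.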
9-5 Summary

The steps to be performed in a matrix solution for a linear variation calculation are:
1. From the basis set, calculate the overlap matrix S.
2. From the basis set and hamiltonian operator, calculate the hamiltonian matrix H.
3. If S ≠ 1, find an orthogonalizing transformation A such that A†SA = 1 (the Schmidt procedure is one way to do this). The matrix equation may now be written in the form H′C′ = C′E, where H′ = A†HA.
4. Find C′ such that C′†H′C′ is a diagonal matrix. The diagonal elements are the roots E.
5. If necessary, back-transform: AC′ = C. The columns of C contain the MO coefficients appropriate for the original basis set.
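The five steps above can be sketched end to end. This illustration uses symmetric (Löwdin) orthogonalization, A = S^(−1/2), in place of the Schmidt procedure named in step 3; any A satisfying A†SA = 1 would serve. The function name and the 2 × 2 example values are illustrative assumptions:

```python
import numpy as np

def linear_variation(H, S):
    """Solve HC = SCE by the five summary steps, for real symmetric H and S."""
    H = np.asarray(H, dtype=float)
    S = np.asarray(S, dtype=float)
    # Step 3: A = S^(-1/2) satisfies A† S A = 1 (symmetric orthogonalization).
    s, U = np.linalg.eigh(S)
    A = U @ np.diag(s ** -0.5) @ U.T
    Hp = A.T @ H @ A                 # H' = A† H A in the orthonormalized basis
    # Step 4: diagonalize H'; the diagonal elements are the roots E.
    E, Cp = np.linalg.eigh(Hp)
    # Step 5: back-transform, C = A C', to recover coefficients in the original basis.
    C = A @ Cp
    return E, C

H = np.array([[-1.0, -0.5], [-0.5, -1.0]])
S = np.array([[1.0, 0.4], [0.4, 1.0]])
E, C = linear_variation(H, S)
```

Each column of the returned C then satisfies HC = SCE, and C†SC = 1.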
9-5.A Problems

9-1. Evaluate the following according to the rules of matrix algebra:

a) $\begin{pmatrix} 9 & 6 \\ 7 & 8 \end{pmatrix}\begin{pmatrix} 10 \\ 11 \end{pmatrix}$

b) $\begin{pmatrix} a & b & c \end{pmatrix}\begin{pmatrix} 7 \\ 4 \\ 6 \end{pmatrix}$

c) $\begin{pmatrix} 3 \\ 1 \end{pmatrix} + 7\begin{pmatrix} i & -3 \\ -1 & 3 \end{pmatrix}$

d) $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

e) $\begin{pmatrix} i & 4 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} 1 & 7 & 4 \\ 7 & 0 & -3 \end{pmatrix}$

f) $\begin{pmatrix} i & 4 \\ 3 & 2 \end{pmatrix}\begin{pmatrix} 1 & 7 \\ 4 & 7 \\ 0 & -3 \end{pmatrix}$

g) $\begin{vmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{vmatrix}$
9-2. If $H_{ij} = \int \chi_i^* \hat{H} \chi_j \, d\tau$ and $\hat{H}$ is hermitian, show that H is a hermitian matrix.
9-3. Let
$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$$
Show that, in general, AB ≠ BA.
9-4. Let
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 6 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}$$
Show that AB = BA, but AC ≠ CA. Compare the matrix AC with CA. Do these matrices show any simple relationship? Can you relate this to properties of A and C mathematically?
9-5. The “latent roots” λi of A are solutions to the equation |A − λi1| = 0, i = 1, 2, . . . , n, where n is the dimension of A.
a) Show that, under a similarity transformation B = T−1AT, the latent roots are preserved.
b) Demonstrate that diagonalization of A via a similarity transformation produces the latent roots as the diagonal elements.
9-6. Show that, if a matrix has any latent roots equal to zero, it has no inverse.
9-7. The trace (or spur) of a matrix is the sum of the elements on the principal diagonal.
Thus, $\mathrm{tr}\,A = \sum_{i=1}^{n} a_{ii}$.
a) Show that the trace of a triple product of matrices is invariant under cyclic permutation. That is, tr(ABC) = tr(CAB) = tr(BCA), but these need not equal tr(CBA).
b) Show that the trace of a matrix is invariant under a similarity transformation.
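Both claims in Problem 9-7 are easy to spot-check numerically before constructing a proof; a quick sketch with arbitrary random matrices (the use of NumPy and the fixed seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# (a) The trace of a triple product is invariant under cyclic permutation...
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
# ...while a non-cyclic reordering such as tr(CBA) generally differs.

# (b) The trace is invariant under a similarity transformation T^(-1) A T.
T = rng.standard_normal((3, 3))      # a random matrix is invertible almost surely
assert np.isclose(np.trace(A), np.trace(np.linalg.inv(T) @ A @ T))
```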
9-8. The norm of a matrix is the positive square root of the sum of the absolute squares of all the elements.
For a real matrix A,
$$\mathrm{norm}\,A = \Big(\sum_{i,j=1}^{n} a_{ij}^{2}\Big)^{1/2} = \Big(\sum_{i,j=1}^{n} (\tilde{A})_{ij}(A)_{ji}\Big)^{1/2}$$
Prove that the norm of a real matrix is preserved in an orthogonal transformation (or, you may prefer to prove that the norm of any matrix is preserved in a unitary transformation).
9-9. Use the facts that the trace, the determinant, and the norm of a matrix are invariant under an orthogonal transformation to find the eigenvalues of the following matrices:

a) $\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$

b) $\begin{pmatrix} \frac{1}{2} & \frac{1}{\sqrt{2}} & \frac{1}{2} \\ \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\ \frac{1}{2} & -\frac{1}{\sqrt{2}} & \frac{1}{2} \end{pmatrix}$

c) $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 0 \end{pmatrix}$
9-10. Consider the matrix
$$\begin{pmatrix} \cos\theta & 0 \\ -\sin\theta & 0 \end{pmatrix}$$
What is the effect of this transformation on $\begin{pmatrix} 3 \\ 2 \end{pmatrix}$? On $\begin{pmatrix} 3 \\ 3 \end{pmatrix}$? Can the transformation be uniquely reversed? (That is, for, say, θ = 0, and given a transformed vector $\begin{pmatrix} 3 \\ 0 \end{pmatrix}$, can one uniquely determine the vector this was transformed from?) Does the matrix have an inverse? Evaluate its determinant.
9-11. What are the eigenvectors for the matrix
$$H = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -2 \end{pmatrix}?$$

9-12. Show that, if A and B have "simultaneous eigenvectors" (i.e., both are diagonalized by the same similarity transformation), then A and B commute.
9-13. If HC = CE, and C†C = 1, then C†HC = E, and we seek a unitary transformation that diagonalizes H. If HC = SCE, and C†SC = 1, then C†HC = C†SCE = 1E, and C†HC = E. Since this is the same working equation as the one we found above, why do we not proceed in the same way? Why do we bother orthogonalizing our basis first?
9-14. We have mentioned that a matrix may be used to represent the rotation of coordinates by some angle θ. Such a rotation is a geometric operation, so we have, in effect, represented an operator with a matrix. It is possible to represent other operators in a similar way. Indeed, an alternative approach to quantum mechanics exists in which the whole formalism is based on matrices and their properties (matrix mechanics, as opposed to wave mechanics). A particularly interesting example is provided by the matrices constructed by Pauli to represent spin operators and functions. It was mentioned in Chapter 5 that spin functions α and β satisfy rules similar to those for orbital angular momentum. Two of these are
$$\hat{S}_z\alpha = \tfrac{1}{2}\alpha, \qquad \hat{S}_z\beta = -\tfrac{1}{2}\beta$$
But it was pointed out that α and β could not be expressed in terms of spherical harmonics. Pauli represented this operator and these functions by
$$\alpha = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \beta = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad \hat{S}_z = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
Using these definitions, show that
$$\int \alpha^{\dagger}\beta \, d\omega = \int \beta^{\dagger}\alpha \, d\omega = 0, \qquad \int \alpha^{\dagger}\alpha \, d\omega = \int \beta^{\dagger}\beta \, d\omega = 1, \qquad \hat{S}_z\alpha = \tfrac{1}{2}\alpha, \qquad \hat{S}_z\beta = -\tfrac{1}{2}\beta$$
[Note: since α and β are essentially the Dirac delta functions in the spin coordinate ω, the process of integration reduces here to scalar multiplication of vectors.]
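The relations in Problem 9-14 can be verified directly from the Pauli representation; a short sketch (the use of NumPy is an illustrative choice, and units with ħ = 1 are assumed, matching the ±½ eigenvalues in the problem):

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([0.0, 1.0])
Sz = 0.5 * np.array([[1.0, 0.0],
                     [0.0, -1.0]])

# "Integration" over the spin coordinate reduces to scalar products of vectors.
assert alpha @ beta == 0.0 and beta @ alpha == 0.0   # orthogonality
assert alpha @ alpha == 1.0 and beta @ beta == 1.0   # normalization

# Eigenvalue relations: Sz alpha = +(1/2) alpha, Sz beta = -(1/2) beta.
assert np.allclose(Sz @ alpha, 0.5 * alpha)
assert np.allclose(Sz @ beta, -0.5 * beta)
```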