TABLE 9-1  Some Matrix Rules and Definitions for a Square Matrix A of Dimension n

A = B                    Matrix equality; aij = bij, i, j = 1, n
A + B = C                Matrix addition; cij = aij + bij, i, j = 1, n
cA = B                   Multiplication of A by a scalar; bij = c · aij, i, j = 1, n
AB = C                   Matrix multiplication; cij = Σ(k=1 to n) aik bkj, i, j = 1, n
|A|                      The determinant of the matrix A (see Appendix 2)
A−1                      The inverse of A; A−1A = AA−1 = 1. If A−1 exists, A is nonsingular and |A| ≠ 0.
A∗                       The complex conjugate of A; aij → a∗ij, i, j = 1, n. If A∗ = A, A is real.
˜A                       The transpose of A; (˜A)ij = aji (rows and columns interchanged).
                         If ˜A = A, A is symmetric; if ˜A = −A, antisymmetric; if ˜A = A−1, orthogonal.
A†                       The hermitian adjoint of A; (A†)ij = a∗ji (A† = ˜A∗).
                         If A† = A, A is hermitian; if A† = A−1, unitary.
(ABC)∗ = A∗B∗C∗          Complex conjugate of product
˜(ABC) = ˜C ˜B ˜A        Transpose of product
(ABC)† = C†B†A†          Hermitian adjoint of product
(ABC)−1 = C−1B−1A−1      Inverse of product
|ABC| = |A| · |B| · |C|  Determinant of product (any order)
T−1AT                    A similarity transformation. If T−1 = T†, this is a unitary transformation;
                         if T−1 = ˜T, an orthogonal transformation.
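Several of the product rules in Table 9-1 are easy to spot-check numerically. The short sketch below, using NumPy with arbitrary randomly generated complex matrices (the matrices themselves are illustrative, not from the text), verifies the transpose, adjoint, inverse, and determinant rules for a product of three matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnd():
    # Arbitrary 3x3 complex matrix; random Gaussian matrices are
    # nonsingular with probability 1, so the inverse rule applies.
    return rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A, B, C = rnd(), rnd(), rnd()
P = A @ B @ C

# Transpose of product: (ABC)~ = C~ B~ A~
assert np.allclose(P.T, C.T @ B.T @ A.T)
# Hermitian adjoint of product: (ABC)† = C† B† A†
assert np.allclose(P.conj().T, C.conj().T @ B.conj().T @ A.conj().T)
# Inverse of product: (ABC)−1 = C−1 B−1 A−1
assert np.allclose(np.linalg.inv(P),
                   np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A))
# Determinant of product (any order): |ABC| = |A| · |B| · |C|
assert np.allclose(np.linalg.det(P),
                   np.linalg.det(A) * np.linalg.det(B) * np.linalg.det(C))
print("all product rules verified")
```

Note the order reversal in the transpose, adjoint, and inverse rules; only the determinant rule is insensitive to the order of the factors, since determinants are scalars.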
9-3 Matrix Formulation of the Linear Variation Method
We have seen that the independent-electron approximation leads to a series of MOs for a molecular system. If the MOs are expressed as a linear combination of n basis functions (which are often approximations to AOs, although this is not necessary), the variation method leads to a set of simultaneous equations:

(H11 − ES11)c1 + (H12 − ES12)c2 + · · · + (H1n − ES1n)cn = 0
                              .
                              .
                              .
(Hn1 − ESn1)c1 + (Hn2 − ESn2)c2 + · · · + (Hnn − ESnn)cn = 0      (9-12)
All terms have been defined in Chapter 7. Given a value for E that satisfies the associated determinantal equations, we can solve this set of simultaneous equations for ratios between the ci’s. Requiring MO normality establishes convenient numerical values for the ci’s.
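For a 2 × 2 case, the determinantal condition det(H − ES) = 0 expands to a quadratic in E, so the allowed energies can be found directly. The sketch below (with illustrative numerical values for H and S, not values from the text) finds the two roots with NumPy and then back-substitutes each into the first secular equation to obtain the coefficient ratio c2/c1, just as described above.

```python
import numpy as np

# Illustrative 2x2 Hamiltonian and overlap matrices (hypothetical values)
H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])
S = np.array([[1.0, 0.25],
              [0.25, 1.0]])

# det(H - E*S) = 0 expands to the quadratic a*E**2 + b*E + c = 0 with:
a = S[0, 0] * S[1, 1] - S[0, 1] * S[1, 0]
b = H[0, 1] * S[1, 0] + H[1, 0] * S[0, 1] - H[0, 0] * S[1, 1] - H[1, 1] * S[0, 0]
c = H[0, 0] * H[1, 1] - H[0, 1] * H[1, 0]
energies = np.sort(np.roots([a, b, c]))

for E in energies:
    # Each root makes the secular determinant vanish
    assert abs(np.linalg.det(H - E * S)) < 1e-10
    # Back-substitute into (H11 - E*S11)c1 + (H12 - E*S12)c2 = 0
    ratio = -(H[0, 0] - E * S[0, 0]) / (H[0, 1] - E * S[0, 1])
    print(f"E = {E: .6f},  c2/c1 = {ratio: .6f}")
```

The ratios, not absolute values, of the coefficients emerge here; fixing their absolute size is exactly the normalization step mentioned in the text.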
A matrix equation equivalent to Eq. (9-12) is³

[ H11 − ES11   H12 − ES12   · · ·   H1n − ES1n ] [ c1 ]   [ 0 ]
[      .            .                     .    ] [  . ] = [ . ]
[ Hn1 − ESn1   Hn2 − ESn2   · · ·   Hnn − ESnn ] [ cn ]   [ 0 ]      (9-13)
The matrix in Eq. (9-13) is clearly the difference between two matrices. This enables us to rewrite the equation in the form
[ H11   H12   · · ·   H1n ] [ c1 ]       [ S11   S12   · · ·   S1n ] [ c1 ]
[  .     .             .  ] [  . ]  = E  [  .     .             .  ] [  . ]
[ Hn1   Hn2   · · ·   Hnn ] [ cn ]       [ Sn1   Sn2   · · ·   Snn ] [ cn ]      (9-14)
or

    Hci = EiSci,   i = 1, 2, . . . , n      (9-15)
where we have introduced the subscript i to account for the fact that there are many possible values for E and that each one has its own characteristic set of coefficients. Note that the “eigenvector” ci is a column vector and that each element in ci is (effectively) multiplied by the scalar Ei according to Eq. (9-15).
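Equation (9-15) is what linear algebra calls a generalized eigenvalue problem, and standard libraries solve it directly. The sketch below (again with illustrative H and S, not values from the text, and assuming SciPy is available) uses scipy.linalg.eigh and checks that each eigenpair satisfies Hci = EiSci.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative symmetric H and positive-definite S (hypothetical values)
H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])
S = np.array([[1.0, 0.25],
              [0.25, 1.0]])

# eigh(H, S) solves the generalized problem H c = E S c
energies, C = eigh(H, S)

# Verify Eq. (9-15) column by column: H c_i = E_i S c_i
for i, E_i in enumerate(energies):
    c_i = C[:, i]
    assert np.allclose(H @ c_i, E_i * (S @ c_i))
```

Conveniently, eigh returns eigenvectors normalized so that C† S C = 1, which is precisely the MO normality condition (in the overlap metric) that the text uses to fix the coefficients.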
In general, there are as many MOs as there are basis functions, and so Eq. (9-15)
represents n separate matrix equations. We can continue to use matrix notation to reduce these to a single matrix equation. We do this by stacking the n c vectors together, side by side, to produce an n × n matrix C. The numbers E must also be combined into an appropriate matrix form. We must be careful to do this in such a way that the scalar E1 still multiplies only c1 (now column 1 of C), E2 multiplies only c2, and so forth. This is accomplished in the following equation:
[ H11  · · ·  H1n ] [ c11  · · ·  c1n ]   [ S11  · · ·  S1n ] [ c11  · · ·  c1n ] [ E1   0   · · ·   0  ]
[  .           .  ] [  .           .  ] = [  .           .  ] [  .           .  ] [ 0    E2  · · ·   0  ]
[ Hn1  · · ·  Hnn ] [ cn1  · · ·  cnn ]   [ Sn1  · · ·  Snn ] [ cn1  · · ·  cnn ] [ 0    0   · · ·   En ]      (9-16)

or

    HC = SCE      (9-17)
The matrix E is a diagonal matrix of orbital energies (often referred to as the matrix of eigenvalues). C is the matrix of coefficients (or matrix of eigenvectors), and each

³Quantum-chemical convention is to use upper case letters for individual elements of the matrices H, S, and E. This differs from the usual convention.
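The single matrix statement HC = SCE of Eq. (9-17) can be checked numerically as well. The sketch below (illustrative matrices again, assuming SciPy is available) stacks the eigenvectors into C, places the energies on the diagonal of E, and confirms the product identity, with E multiplying from the right so that column i of C is scaled by Ei.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative matrices (hypothetical values, not from the text)
H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])
S = np.array([[1.0, 0.25],
              [0.25, 1.0]])

energies, C = eigh(H, S)   # columns of C are the eigenvectors c_i
E = np.diag(energies)      # diagonal matrix of orbital energies

# Eq. (9-17): HC = SCE.  Right-multiplication by the diagonal E
# scales column i of SC by E_i, which is why E must sit on the right.
assert np.allclose(H @ C, S @ C @ E)
```

Had E been placed on the left (HC = ESC), each *row* of SC would be scaled instead, and the n separate equations of Eq. (9-15) would not be recovered.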