An operator is defined as \hat{O} throughout. Unless otherwise specified, the number of particles is always N and d is the dimension of the system. In nuclear physics we normally define the total number of particles as A=N+Z , where N is the total number of neutrons and Z the total number of protons. If other baryons are present, such as the isobars \Delta or hyperons like \Lambda and \Sigma , their numbers need to be defined separately. Hereafter, N is reserved for the total number of particles, unless otherwise specified.
The quantum numbers of a single-particle state in coordinate space are defined by the variable x=(\boldsymbol{r},\sigma), where \boldsymbol{r}\in {\mathbb{R}}^{d}, with d=1,2,3, represents the spatial coordinates and \sigma is the eigenspin of the particle. For fermions with eigenspin 1/2 this means that x\in {\mathbb{R}}^{d}\oplus (\frac{1}{2}), and the integral \int dx = \sum_{\sigma}\int d^dr = \sum_{\sigma}\int d\boldsymbol{r} , and \int d^Nx= \int dx_1\int dx_2\dots\int dx_N.
The quantum mechanical wave function of a given state with quantum numbers \lambda (encompassing all quantum numbers needed to specify the system), ignoring time, is \Psi_{\lambda}=\Psi_{\lambda}(x_1,x_2,\dots,x_N), with x_i=(\boldsymbol{r}_i,\sigma_i) and the projection of \sigma_i takes the values \{-1/2,+1/2\} for particles with spin 1/2 . We will hereafter always refer to \Psi_{\lambda} as the exact wave function, and if the ground state is not degenerate we label it as \Psi_0=\Psi_0(x_1,x_2,\dots,x_N).
Since the solution \Psi_{\lambda} can seldom be found in closed form, approximations are sought. Here we define an approximate wave function, or ansatz, to the exact wave function as \Phi_{\lambda}=\Phi_{\lambda}(x_1,x_2,\dots,x_N), with \Phi_0=\Phi_0(x_1,x_2,\dots,x_N), being the ansatz for the ground state.
The wave function \Psi_{\lambda} is sought in the Hilbert space of either symmetric or anti-symmetric N -body functions, namely \Psi_{\lambda}\in {\cal H}_N:= {\cal H}_1\otimes{\cal H}_1\otimes\dots\otimes{\cal H}_1, where the single-particle Hilbert space {\cal H}_1 is the space of square integrable functions over {\mathbb{R}}^{d}\oplus (\sigma) , resulting in {\cal H}_1:= L^2(\mathbb{R}^{d}\oplus (\sigma)).
The Schrödinger equation reads \begin{equation} \hat{H}(x_1, x_2, \dots , x_N) \Psi_{\lambda}(x_1, x_2, \dots , x_N) = E_\lambda \Psi_\lambda(x_1, x_2, \dots , x_N), \label{eq:basicSE1} \end{equation} where the vector x_i represents the coordinates (spatial and spin) of particle i , \lambda stands for all the quantum numbers needed to classify a given N -particle state and \Psi_{\lambda} is the pertaining eigenfunction. Throughout this course, \Psi refers to the exact eigenfunction, unless otherwise stated.
We write the Hamilton operator, or Hamiltonian, in a generic way \hat{H} = \hat{T} + \hat{V} where \hat{T} represents the kinetic energy of the system \hat{T} = \sum_{i=1}^N \frac{\mathbf{p}_i^2}{2m_i} = \sum_{i=1}^N \left( -\frac{\hbar^2}{2m_i} \mathbf{\nabla_i}^2 \right) = \sum_{i=1}^N t(x_i) while the operator \hat{V} for the potential energy is given by \begin{equation} \hat{V} = \sum_{i=1}^N \hat{u}_{\mathrm{ext}}(x_i) + \sum_{j < i=1}^N v(x_i,x_j)+\sum_{i < j < k=1}^Nv(x_i,x_j,x_k)+\dots \label{eq:firstv} \end{equation} Hereafter we use natural units, viz. \hbar=c=e=1 , with e the elementary charge and c the speed of light. This means that momenta and masses have dimension energy.
If one does quantum chemistry, after having introduced the Born-Oppenheimer approximation which effectively freezes out the nuclear degrees of freedom, the Hamiltonian for N=n_e electrons takes the following form \hat{H} = \sum_{i=1}^{n_e} t(x_i) - \sum_{i=1}^{n_e} k\frac{Z}{r_i} + \sum_{i < j}^{n_e} \frac{k}{r_{ij}}, with k=1.44 eVnm.
We can rewrite this as \begin{equation} \hat{H} = \hat{H}_0 + \hat{H}_I = \sum_{i=1}^{n_e}\hat{h}_0(x_i) + \sum_{i < j}^{n_e}\frac{1}{r_{ij}}, \label{H1H2} \end{equation} where we have defined r_{ij}=| \boldsymbol{r}_i-\boldsymbol{r}_j|, and \begin{equation} \hat{h}_0(x_i) = \hat{t}(x_i) - \frac{Z}{r_i}. \label{hi} \end{equation} The first term of Eq. \eqref{H1H2}, H_0 , is the sum of the n_e one-body Hamiltonians \hat{h}_0 . Each individual Hamiltonian \hat{h}_0 contains the kinetic energy operator of an electron and its potential energy due to the attraction of the nucleus. The second term, H_I , is the sum of the n_e(n_e-1)/2 two-body interactions between each pair of electrons. Note that the double sum carries a restriction i < j .
The potential energy term due to the attraction of the nucleus defines the one-body field u_i=u_{\mathrm{ext}}(x_i) of Eq. \eqref{eq:firstv}. We have moved this term into the \hat{H}_0 part of the Hamiltonian, instead of keeping it in \hat{V} as in Eq. \eqref{eq:firstv}. The reason is that we will hereafter treat \hat{H}_0 as our non-interacting Hamiltonian. For a many-body wave function \Phi_{\lambda} defined by an appropriate single-particle basis, we may solve exactly the non-interacting eigenvalue problem \hat{H}_0\Phi_{\lambda}= w_{\lambda}\Phi_{\lambda}, with w_{\lambda} being the non-interacting energy. This energy is given by the sum over single-particle energies to be defined below. For atoms the single-particle energies could be the hydrogen-like single-particle energies corrected for the charge Z . For nuclei and quantum dots, these energies could be given by the harmonic oscillator in three and two dimensions, respectively.
We will assume that the interacting part of the Hamiltonian can be approximated by a two-body interaction. This means that our Hamiltonian is written as \begin{equation} \hat{H} = \hat{H}_0 + \hat{H}_I = \sum_{i=1}^N \hat{h}_0(x_i) + \sum_{i < j}^N V(r_{ij}), \label{Hnuclei} \end{equation} with \begin{equation} H_0=\sum_{i=1}^N \hat{h}_0(x_i) = \sum_{i=1}^N\left(\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i)\right). \label{hinuclei} \end{equation} The one-body part u_{\mathrm{ext}}(x_i) is normally approximated by a harmonic oscillator potential or the Coulomb interaction an electron feels from the nucleus. However, other potentials are fully possible, such as one derived from the self-consistent solution of the Hartree-Fock equations.
Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we deal with fermions, however, the total wave function is antisymmetric. Let \hat{P} be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian, [\hat{H},\hat{P}] = 0, meaning that \Psi_{\lambda}(x_1, x_2, \dots , x_N) is an eigenfunction of \hat{P} as well, that is \hat{P}_{ij}\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_N)= \Psi_{\lambda}(x_1, x_2, \dots,x_j,\dots,x_i,\dots,x_N)= \beta\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_N), where \beta is the eigenvalue of \hat{P}_{ij} . We have introduced the suffix ij in order to indicate that we permute particles i and j . The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue \beta = -1 .
In our case we assume that we can approximate the exact eigenfunction with a Slater determinant \begin{equation} \Phi(x_1, x_2,\dots ,x_N,\alpha,\beta,\dots, \sigma)=\frac{1}{\sqrt{N!}} \left| \begin{array}{ccccc} \psi_{\alpha}(x_1)& \psi_{\alpha}(x_2)& \dots & \dots & \psi_{\alpha}(x_N)\\ \psi_{\beta}(x_1)&\psi_{\beta}(x_2)& \dots & \dots & \psi_{\beta}(x_N)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{\sigma}(x_1)&\psi_{\sigma}(x_2)& \dots & \dots & \psi_{\sigma}(x_N)\end{array} \right|, \label{eq:HartreeFockDet} \end{equation} where x_i stands for the coordinates and spin values of particle i and \alpha,\beta,\dots, \sigma are the quantum numbers needed to specify the occupied single-particle states.
The single-particle functions \psi_{\alpha}(x_i) are eigenfunctions of the one-body Hamiltonian \hat{h}_0(x_i)=\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i), with eigenvalues \hat{h}_0(x_i) \psi_{\alpha}(x_i)=\left(\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i)\right)\psi_{\alpha}(x_i)=\varepsilon_{\alpha}\psi_{\alpha}(x_i). The energies \varepsilon_{\alpha} are the so-called non-interacting single-particle energies, or unperturbed energies. The total energy is in this case the sum over all single-particle energies, if no two-body or more complicated many-body interactions are present.
Let us denote the ground state energy by E_0 . According to the variational principle we have E_0 \le E[\Phi] = \int \Phi^*\hat{H}\Phi d\mathbf{\tau} where \Phi is a trial function which we assume to be normalized \int \Phi^*\Phi d\mathbf{\tau} = 1, where we have used the shorthand d\mathbf{\tau}=d\mathbf{r}_1d\mathbf{r}_2\dots d\mathbf{r}_N .
Before we proceed with a more compact representation of a Slater determinant, we would like to repeat some linear algebra properties which will be useful for our derivations of the energy as function of a Slater determinant, Hartree-Fock theory and later the nuclear shell model.
The inverse of a matrix is defined by \mathbf{A}^{-1} \cdot \mathbf{A} = I. A unitary matrix \mathbf{A} is one whose inverse is its adjoint, \mathbf{A}^{-1}=\mathbf{A}^{\dagger}. A real unitary matrix is called orthogonal and its inverse is equal to its transpose. A hermitian matrix is equal to its own adjoint (it is self-adjoint), that is \mathbf{A}=\mathbf{A}^{\dagger}.
Matrix Properties Reminder
Relations | Name | matrix elements |
---|---|---|
A = A^{T} | symmetric | a_{ij} = a_{ji} |
A = \left (A^{T} \right )^{-1} | real orthogonal | \sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij} |
A = A^{ * } | real matrix | a_{ij} = a_{ij}^{ * } |
A = A^{\dagger} | hermitian | a_{ij} = a_{ji}^{ * } |
A = \left (A^{\dagger} \right )^{-1} | unitary | \sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij} |
If we deal with Fermions (identical and indistinguishable particles) we will form an ansatz for a given state in terms of so-called Slater determinants determined by a chosen basis of single-particle functions.
For a given n\times n matrix \mathbf{A} we can write its determinant det(\mathbf{A})=|\mathbf{A}|= \left| \begin{array}{ccccc} a_{11}& a_{12}& \dots & \dots & a_{1n}\\ a_{21}&a_{22}& \dots & \dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2}& \dots & \dots & a_{nn}\end{array} \right|, in a more compact form as |\mathbf{A}|= \sum_{i=1}^{n!}(-1)^{p_i}\hat{P}_i a_{11}a_{22}\dots a_{nn}, where \hat{P}_i is a permutation operator which permutes the column indices 1,2,3,\dots,n and the sum runs over all n! permutations. The quantity p_i represents the number of transpositions of column indices that are needed in order to bring a given permutation back to its initial ordering, in our case given by a_{11}a_{22}\dots a_{nn} here.
A simple 2\times 2 determinant illustrates this. We have det(\mathbf{A})= \left| \begin{array}{cc} a_{11}& a_{12}\\ a_{21}&a_{22}\end{array} \right|= (-1)^0a_{11}a_{22}+(-1)^1a_{12}a_{21}, where in the last term we have interchanged the column indices 1 and 2 . The natural ordering we have chosen is a_{11}a_{22} .
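As a small numerical illustration (a Python sketch), the permutation expansion above can be evaluated explicitly, with the sign of each term determined by the number of transpositions, and compared with the determinant computed by numpy:

import numpy as np
from itertools import permutations

def det_by_permutations(A):
    # Sum over all n! permutations of the column indices
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Parity: count the transpositions needed to sort the permutation
        sign = 1
        p = list(perm)
        for i in range(n):
            while p[i] != i:
                j = p[i]
                p[i], p[j] = p[j], p[i]
                sign = -sign
        term = 1.0
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += sign*term
    return total

A = np.random.rand(3, 3)
print(det_by_permutations(A), np.linalg.det(A))  # the two numbers should agree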
With the above we can rewrite our Slater determinant in a more compact form. In the Hartree-Fock method the trial function is the Slater determinant of Eq. \eqref{eq:HartreeFockDet} which can be rewritten as \Phi(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \frac{1}{\sqrt{N!}}\sum_{P} (-)^P\hat{P}\psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N)=\sqrt{N!}\hat{A}\Phi_H, where we have introduced the antisymmetrization operator \hat{A} defined by the summation over all possible permutations of the particles.
It is defined as \begin{equation} \hat{A} = \frac{1}{N!}\sum_{p} (-)^p\hat{P}, \label{antiSymmetryOperator} \end{equation} with p standing for the number of transpositions needed to build up the permutation \hat{P} . We have introduced for later use the so-called Hartree-function, defined by the simple product of all possible single-particle functions \Phi_H(x_1,x_2,\dots,x_N,\alpha,\beta,\dots,\nu) = \psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_N).
Both \hat{H}_0 and \hat{H}_I are invariant under all possible permutations of any two particles and hence commute with \hat{A} \begin{equation} [\hat{H}_0,\hat{A}] = [\hat{H}_I,\hat{A}] = 0. \label{commutionAntiSym} \end{equation} Furthermore, \hat{A} satisfies \begin{equation} \hat{A}^2 = \hat{A}, \label{AntiSymSquared} \end{equation} since every permutation of the Slater determinant reproduces it.
The expectation value of \hat{H}_0 \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{A}\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau} is readily reduced to \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau}, where we have used Eqs. \eqref{commutionAntiSym} and \eqref{AntiSymSquared}. The next step is to replace the antisymmetrization operator by its definition and to replace \hat{H}_0 with the sum of one-body operators \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{i=1}^N \sum_{p} (-)^p\int \Phi_H^*\hat{h}_0\hat{P}\Phi_H d\mathbf{\tau}.
The integral vanishes if two or more particles are permuted in only one of the Hartree-functions \Phi_H because the individual single-particle wave functions are orthogonal. We obtain then \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau}= \sum_{i=1}^N \int \Phi_H^*\hat{h}_0\Phi_H d\mathbf{\tau}. Orthogonality of the single-particle functions allows us to further simplify the integral, and we arrive at the following expression for the expectation values of the sum of one-body Hamiltonians \begin{equation} \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{\mu=1}^N \int \psi_{\mu}^*(\mathbf{r})\hat{h}_0\psi_{\mu}(\mathbf{r}) d\mathbf{r}. \label{H1Expectation} \end{equation}
We introduce the following shorthand for the above integral \langle \mu | \hat{h}_0 | \mu \rangle = \int \psi_{\mu}^*(\mathbf{r})\hat{h}_0\psi_{\mu}(\mathbf{r}) d\mathbf{r}, and rewrite Eq. \eqref{H1Expectation} as \begin{equation} \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{\mu=1}^N \langle \mu | \hat{h}_0 | \mu \rangle. \label{H1Expectation1} \end{equation}
The expectation value of the two-body part of the Hamiltonian is obtained in a similar manner. We have \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = N! \int \Phi_H^*\hat{A}\hat{H}_I\hat{A}\Phi_H d\mathbf{\tau}, which reduces to \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j=1}^N \sum_{p} (-)^p\int \Phi_H^*V(r_{ij})\hat{P}\Phi_H d\mathbf{\tau}, by following the same arguments as for the one-body Hamiltonian.
Because of the dependence on the inter-particle distance r_{ij} , the contributions from permutations of two particles no longer vanish, and we get \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j=1}^N \int \Phi_H^*V(r_{ij})(1-P_{ij})\Phi_H d\mathbf{\tau}, where P_{ij} is the permutation operator that interchanges particle i and particle j . Again we use the assumption that the single-particle wave functions are orthogonal.
We obtain \begin{align} \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^N\sum_{\nu=1}^N &\left[ \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)V(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j \right. \label{_auto1}\\ &\left. - \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) V(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j \right]. \label{H2Expectation} \end{align} The first term is the so-called direct term. It is frequently also called the Hartree term, while the second is due to the Pauli principle and is called the exchange term or just the Fock term. The factor 1/2 is introduced because we now run over all pairs twice.
The last equation allows us to introduce some further definitions. The single-particle wave functions \psi_{\mu}(x) , labelled by the quantum numbers \mu and the coordinates x , are defined as the overlap \psi_{\mu}(x) = \langle x | \mu \rangle .
We introduce the following shorthands for the above two integrals \langle \mu\nu|\hat{v}|\mu\nu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)V(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j, and \langle \mu\nu|\hat{v}|\nu\mu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) V(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j.
The direct and exchange matrix elements can be brought together if we define the antisymmetrized matrix element \langle \mu\nu|\hat{v}|\mu\nu\rangle_{\mathrm{AS}}= \langle \mu\nu|\hat{v}|\mu\nu\rangle-\langle \mu\nu|\hat{v}|\nu\mu\rangle, or for a general matrix element \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{\mathrm{AS}}= \langle \mu\nu|\hat{v}|\sigma\tau\rangle-\langle \mu\nu|\hat{v}|\tau\sigma\rangle. It has the symmetry property \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{\mathrm{AS}}= -\langle \mu\nu|\hat{v}|\tau\sigma\rangle_{\mathrm{AS}}=-\langle \nu\mu|\hat{v}|\sigma\tau\rangle_{\mathrm{AS}}.
The antisymmetric matrix element is also hermitian, implying \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{\mathrm{AS}}= \langle \sigma\tau|\hat{v}|\mu\nu\rangle_{\mathrm{AS}}. With these notations we rewrite Eq. \eqref{H2Expectation} as \begin{equation} \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^N\sum_{\nu=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{\mathrm{AS}}. \label{H2Expectation2} \end{equation}
Combining Eqs. \eqref{H1Expectation1} and \eqref{H2Expectation2} we obtain the energy functional \begin{equation} E[\Phi] = \sum_{\mu=1}^N \langle \mu | \hat{h}_0 | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^N\sum_{{\nu}=1}^N \langle \mu\nu|\hat{v}|\mu\nu\rangle_{\mathrm{AS}}. \label{FunctionalEPhi} \end{equation} which we will use as our starting point for the Hartree-Fock calculations.
Hartree-Fock (HF) theory is an algorithm for finding an approximate expression for the ground state of a given Hamiltonian. The basic ingredients are a single-particle basis, a Slater determinant as ansatz for the ground state and the variational principle, with the energy functional of Eq. \eqref{FunctionalEPhi} as the quantity to be minimized.
We will show that the Hartree-Fock Hamiltonian \hat{h}^{\mathrm{HF}} equals our definition of the operator \hat{f} discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element \langle p |\hat{h}^{\mathrm{HF}}| q \rangle =\langle p |\hat{f}| q \rangle=\langle p|\hat{t}+\hat{u}_{\mathrm{ext}}|q \rangle +\sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}, meaning that \langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}. The so-called Hartree-Fock potential \hat{u}^{\mathrm{HF}} brings in an explicit medium dependence due to the summation over all single-particle states below the Fermi level F . It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contribution from the other bystanding fermions, creates an effective mean field in which a given fermion moves, in addition to the external potential \hat{u}_{\mathrm{ext}} which confines the motion of the fermion. For systems like nuclei there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises from the intrinsic nature of the strong force. Thus, for nuclear systems, there is no external one-body potential in the Hartree-Fock Hamiltonian.
We will deal only with systems where all possible single-particle states below a certain level are filled up. Such systems are called closed shell systems, a naming inspired by atomic and nuclear physics. These closed shell systems define what is frequently called magic numbers. Quantum dots also exhibit magic numbers, meaning that the addition or removal of one electron requires more energy than in systems where the lowest-lying shells are not completely filled. Using the harmonic oscillator in two dimensions as basis functions (with degenerate single-particle energies), the magic numbers are N=2 , N=6 , N=12 , N=20 etc, where N is the number of electrons. See the table below for more details.
We write our Hamiltonian as a one-body part \hat{H}_0=\sum_{i=1}^{N_e}\left(-{\frac{1}{2}}\nabla^2_{i}+\frac{ \omega^2}{2}r^2_{i} \right), and an interacting part \hat{V}=\sum_{i < j}^{N_e}\frac{1}{|\boldsymbol{r}_i-\boldsymbol{r}_j|}. The unperturbed part of the Hamiltonian yields the single-particle energies \epsilon_i = \omega\left(2n+|m| + 1\right), where n = 0,1,2,3,.. and m = 0, \pm 1, \pm 2,.. . The index i runs from 0,1,2,\dots .
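As an illustration of the magic numbers quoted above, the following small Python sketch enumerates the single-particle states (n,m) of the two-dimensional oscillator shell by shell, using the degeneracy implied by \epsilon = \omega\left(2n+|m| + 1\right), and accumulates the number of electrons (with a factor of two for spin):

def states_in_shell(shell):
    # All (n, m) with 2n + |m| = shell - 1, that is energy omega*shell
    states = []
    for n in range(shell):
        m_abs = shell - 1 - 2*n
        if m_abs < 0:
            break
        ms = [0] if m_abs == 0 else [-m_abs, m_abs]
        for m in ms:
            states.append((n, m))
    return states

total = 0
for shell in range(1, 5):
    total += 2*len(states_in_shell(shell))   # factor of two from spin
    print("R = %d, magic number = %d" % (shell, total))
# prints the magic numbers 2, 6, 12, 20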
The integral \langle pq \vert \hat{v} \vert rs \rangle = A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp{[-(x_1^2+x_2^2+y_1^2+y_2^2)/2]}f(x_1,y_1,x_2,y_2)dx_1dy_1dx_2dy_2, can be calculated in closed form if we switch to polar coordinates.
Instead of Hermite polynomials it is convenient for the Hartree-Fock calculations to use polar coordinates.
Using polar coordinates \begin{align} x &= r \cos \theta \label{_auto2}\\ y &= r \sin \theta \label{_auto3}\\ r &= \sqrt{x^2 + y^2}, \label{_auto4} \end{align} the normalized solution for the angular part in two dimensions is Y(\theta) = \frac{1}{\sqrt{2\pi}} e^{im\theta} \label{normalized angular part}
The total wave function must satisfy the physical condition that \psi(r,\theta) = \psi(r,\theta+2\pi) This imposes a restriction on the quantum number m , which can only take integer values m = 0, \pm 1, \pm 2, ...
The time-independent part of the wave function is separated in an angular and a radial part \psi(r,\theta) = R(r) \frac{1}{\sqrt{2\pi}} e^{im\theta}, \qquad m=0,\pm 1, \pm 2,...
The solution of the radial equation is R_{nm}(r) = \sqrt{\frac{2n!}{(n+|m|)!}}\beta^{\frac{1}{2}(|m|+1)}r^{|m|}e^{-\frac{1}{2}\beta r^2} L_n^{|m|} (\beta r^2) Here the subscript n denotes the principal quantum number and m is the angular momentum quantum number \begin{align} n &= 0, 1, 2, 3, .... \label{_auto5}\\ m &= 0, \pm 1, \pm 2, \pm 3,... \label{_auto6} \end{align} L_n^{|m|} denotes the associated Laguerre polynomials discussed above, and \beta is defined as \beta = \frac{m\omega}{\hbar}, where m in this last expression is the mass of the particle (not to be confused with the quantum number m ).
The final eigenfunction for an electron moving in a two-dimensional harmonic oscillator is then \psi(r,\theta) = \sqrt{\frac{n!}{\pi(n+|m|)!}}\beta^{\frac{1}{2}(|m|+1)}r^{|m|}e^{-\frac{1}{2}\beta r^2} L_n^{|m|} (\beta r^2) e^{im\theta}, with the eigenvalue E = \hbar \omega(2n+|m|+1) which is the same as the energy with cartesian coordinates but now in terms of n_x and n_y , that is E = \hbar \omega(n_x+n_y+1).
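As a quick sanity check of the radial functions above, the following Python sketch (assuming \beta=1 and using scipy for the associated Laguerre polynomials) verifies numerically that \int_0^{\infty}\vert R_{nm}(r)\vert^2 r dr=1 :

import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def R_nm(r, n, m, beta=1.0):
    # Radial function of the two-dimensional harmonic oscillator as defined above
    norm = np.sqrt(2.0*factorial(n)/factorial(n + abs(m)))
    return (norm*beta**(0.5*(abs(m) + 1))*r**abs(m)
            *np.exp(-0.5*beta*r**2)*eval_genlaguerre(n, abs(m), beta*r**2))

for n, m in [(0, 0), (1, 1), (2, -3)]:
    integral, _ = quad(lambda r: R_nm(r, n, m)**2*r, 0.0, np.inf)
    print(n, m, integral)   # all values should be close to 1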
The program is based on the analytical expression given by Anisimovas and Matulis. It returns the value of the integral (without checking for spin values, you need to add this test) using as input the quantum numbers n_i and m_i , that is \langle pq \vert \hat{v} \vert rs \rangle =\langle (n_pm_p)(n_qm_q) \vert \hat{v} \vert (n_rm_r)(n_sm_s) \rangle, and the main code is
#include "Coulomb_Functions.hpp"
int main(int argc, char * argv[])
{
if(argc != 10){ std::cerr << "Wrong Input: should be ./QD_Coulomb hw n1 ml1 n2 ml2 n3 ml3 n4 ml4" << std::endl; exit(1); }
double hw = std::atof(argv[1]);
int n1 = std::atoi(argv[2]);
int ml1 = std::atoi(argv[3]);
int n2 = std::atoi(argv[4]);
int ml2 = std::atoi(argv[5]);
int n3 = std::atoi(argv[6]);
int ml3 = std::atoi(argv[7]);
int n4 = std::atoi(argv[8]);
int ml4 = std::atoi(argv[9]);
double TBME = Coulomb_HO(hw, n1, ml1, n2, ml2, n3, ml3, n4, ml4);
std::cout << std::setprecision(12);
std::cout << "< " << n1 << "," << ml1 << " ; " << n2 << "," << ml2 << " || V || " << n3 << "," << ml3 << " ; " << n4 << "," << ml4 << " > = " << TBME << std::endl;
return 0;
}
When setting up the matrix elements \langle pq \vert \hat{v} \vert rs \rangle =\langle (n_pm_p)(n_qm_q) \vert \hat{v} \vert (n_rm_r)(n_sm_s) \rangle, you need to take into account that there are conserved two-electron quantum numbers since the Hamiltonian is invariant under rotations, namely m_p+m_q = M = m_r+m_s, and the total spin projection, which is not included in the above code. You need to add this as a test, that is check that \sigma_p+\sigma_q = S_z = \sigma_r+\sigma_s. Finally, you need to ensure that the single-particle spinors satisfy \sigma_p=\sigma_r and \sigma_q=\sigma_s . Pay particular attention to this when you compute the exchange matrix elements.
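A hedged Python sketch of these tests is given below; the routine coulomb_ho is a hypothetical stand-in for the Coulomb_HO function of the C++ code above, and each single-particle state is represented as a tuple (n, m_l, \sigma) with \sigma=\pm 1 (twice the spin projection). The sketch returns the direct element \langle pq\vert\hat{v}\vert rs\rangle ; the exchange element \langle pq\vert\hat{v}\vert sr\rangle follows by interchanging r and s and therefore requires \sigma_p=\sigma_s and \sigma_q=\sigma_r instead.

def tbme_direct(hw, p, q, r, s, coulomb_ho):
    # Each state is a tuple (n, ml, sigma); coulomb_ho is a hypothetical spatial integral routine
    (n_p, m_p, s_p), (n_q, m_q, s_q), (n_r, m_r, s_r), (n_s, m_s, s_s) = p, q, r, s
    # Conservation of the total orbital angular momentum projection M
    if m_p + m_q != m_r + m_s:
        return 0.0
    # Conservation of the total spin projection S_z
    if s_p + s_q != s_r + s_s:
        return 0.0
    # The spatial Coulomb integral is spin independent: the direct element is nonzero
    # only when the spin projections match pairwise between bra and ket
    if s_p != s_r or s_q != s_s:
        return 0.0
    return coulomb_ho(hw, n_p, m_p, n_q, m_q, n_r, m_r, n_s, m_s)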
The calculus of variations involves problems where the quantity to be minimized or maximized is an integral.
In the general case we have an integral of the type E[\Phi]= \int_a^b f(\Phi(x),\frac{\partial \Phi}{\partial x},x)dx, where E is the quantity which is sought minimized or maximized. The problem is that although f is a function of the variables \Phi , \partial \Phi/\partial x and x , the exact dependence of \Phi on x is not known. This means again that even though the integral has fixed limits a and b , the path of integration is not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes the functional E[\Phi] stationary. This means that we want to find minima, maxima or saddle points. In physics we normally search for minima. Our task is therefore to find the minimum of E[\Phi] so that its variation \delta E is zero subject to specific constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. The constraints can be treated via the technique of Lagrangian multipliers.
Let us specialize to the expectation value of the energy for one particle in three-dimensions. This expectation value reads E=\int dxdydz \psi^*(x,y,z) \hat{H} \psi(x,y,z), with the constraint \int dxdydz \psi^*(x,y,z) \psi(x,y,z)=1, and a Hamiltonian \hat{H}=-\frac{1}{2}\nabla^2+V(x,y,z). We will, for the sake of notational convenience, skip the variables x,y,z below, and write for example V(x,y,z)=V .
The integral involving the kinetic energy can be written as, with the function \psi vanishing strongly for large values of x,y,z (given here by the limits a and b ), \int_a^b dxdydz \psi^* \left(-\frac{1}{2}\nabla^2\right) \psi = -\frac{1}{2}\psi^*\nabla\psi\Big|_a^b+\int_a^b dxdydz\frac{1}{2}\nabla\psi^*\nabla\psi. The boundary term vanishes since \psi goes to zero at the limits. We will drop the limits a and b in the remaining discussion. Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain \delta E = \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi\right)\right\} = 0.
The constraint appears in integral form as \int dxdydz \psi^* \psi=\mathrm{constant}, and multiplying with a Lagrangian multiplier \lambda and taking the variational minimum we obtain the final variational equation \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi\right)\right\} = 0. We introduce the function f f = \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi= \frac{1}{2}(\psi^*_x\psi_x+\psi^*_y\psi_y+\psi^*_z\psi_z)+V\psi^*\psi-\lambda\psi^*\psi, where we have skipped the dependence on x,y,z and introduced the shorthand \psi_x , \psi_y and \psi_z for the various derivatives.
For \psi^* the Euler-Lagrange equations yield \frac{\partial f}{\partial \psi^*}- \frac{\partial }{\partial x}\frac{\partial f}{\partial \psi^*_x}-\frac{\partial }{\partial y}\frac{\partial f}{\partial \psi^*_y}-\frac{\partial }{\partial z}\frac{\partial f}{\partial \psi^*_z}=0, which results in -\frac{1}{2}(\psi_{xx}+\psi_{yy}+\psi_{zz})+V\psi=\lambda \psi. We can then identify the Lagrangian multiplier as the energy of the system. The last equation is nothing but the standard Schrödinger equation and the variational approach discussed here provides a powerful method for obtaining approximate solutions of the wave function.
In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc). We define our new Hartree-Fock single-particle basis by performing a unitary transformation on our previous basis (labelled with greek indices) as \begin{equation} \psi_p^{HF} = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}. \label{eq:newbasis} \end{equation} In this case we vary the coefficients C_{p\lambda} . If the basis has infinitely many solutions, we need to truncate the above sum. We assume that the basis \phi_{\lambda} is orthogonal. A unitary transformation keeps the orthogonality, as discussed in exercise 1 below.
In the previous slide we stated that a unitary transformation keeps the orthogonality, as discussed in exercise 1 below. To see this consider first a basis of vectors \mathbf{v}_i , \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix} We assume that the basis is orthogonal, that is \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. An orthogonal or unitary transformation \mathbf{w}_i=\mathbf{U}\mathbf{v}_i, preserves the dot product and orthogonality since \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}.
This means that if the coefficients C_{p\lambda} belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation) \vert p\rangle = \sum_{\lambda} C_{p\lambda}\vert\lambda\rangle, orthogonality is preserved, that is \langle \alpha \vert \beta\rangle = \delta_{\alpha\beta} and \langle p \vert q\rangle = \delta_{pq} .
This property is extremely useful when we build up a basis of many-body states in terms of Slater determinants.
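A small numerical sketch of this property: a random real orthogonal matrix (generated via a QR decomposition) applied to an orthonormal set of vectors leaves their mutual overlaps unchanged.

import numpy as np

n = 5
V = np.eye(n)                              # an orthonormal set of basis vectors, stored column-wise
U, _ = np.linalg.qr(np.random.rand(n, n))  # a random orthogonal matrix
W = U @ V                                  # transformed vectors w_i = U v_i
print(np.allclose(W.T @ W, np.eye(n)))     # True: orthonormality is preserved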
Note also that although a basis \vert \alpha\rangle contains an infinity of states, for practical calculations we have always to make some truncations.
Before we develop the Hartree-Fock equations, there is another very useful property of determinants that we will use both in connection with Hartree-Fock calculations and later shell-model calculations.
Consider the following determinant \left| \begin{array}{cc} \alpha_1b_{11}+\alpha_2b_{12}& a_{12}\\ \alpha_1b_{21}+\alpha_2b_{22}&a_{22}\end{array} \right|=\alpha_1\left|\begin{array}{cc} b_{11}& a_{12}\\ b_{21}&a_{22}\end{array} \right|+\alpha_2\left| \begin{array}{cc} b_{12}& a_{12}\\b_{22}&a_{22}\end{array} \right|
We can generalize this to an n\times n matrix and have \left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & \sum_{k=1}^n c_k b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & \sum_{k=1}^n c_k b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2} & \dots & \sum_{k=1}^n c_k b_{nk} &\dots & a_{nn}\end{array} \right|= \sum_{k=1}^n c_k\left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots\\ \dots & \dots & \dots & \dots & \dots & \dots\\ a_{n1}& a_{n2} & \dots & b_{nk} &\dots & a_{nn}\end{array} \right| . This is a property we will use in our Hartree-Fock discussions.
We can generalize the previous results, now with all elements a_{ij} being given as functions of linear combinations of various coefficients c and elements b_{ij} , \left| \begin{array}{cccccc} \sum_{k=1}^n b_{1k}c_{k1}& \sum_{k=1}^n b_{1k}c_{k2} & \dots & \sum_{k=1}^n b_{1k}c_{kj} &\dots & \sum_{k=1}^n b_{1k}c_{kn}\\ \sum_{k=1}^n b_{2k}c_{k1}& \sum_{k=1}^n b_{2k}c_{k2} & \dots & \sum_{k=1}^n b_{2k}c_{kj} &\dots & \sum_{k=1}^n b_{2k}c_{kn}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots &\dots \\ \sum_{k=1}^n b_{nk}c_{k1}& \sum_{k=1}^n b_{nk}c_{k2} & \dots & \sum_{k=1}^n b_{nk}c_{kj} &\dots & \sum_{k=1}^n b_{nk}c_{kn}\end{array} \right|=det(\mathbf{C})det(\mathbf{B}), where det(\mathbf{C}) and det(\mathbf{B}) are the determinants of n\times n matrices with elements c_{ij} and b_{ij} respectively. This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting n=2 .
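Following the suggestion above, here is a quick numerical check for n=2 (a Python sketch) that the determinant of the matrix with elements \sum_{k} b_{ik}c_{kj} indeed factorizes as det(\mathbf{B})det(\mathbf{C}) :

import numpy as np

B = np.random.rand(2, 2)
C = np.random.rand(2, 2)
# The matrix with elements sum_k b_{ik} c_{kj} is just the product B C
print(np.isclose(np.linalg.det(B @ C), np.linalg.det(B)*np.linalg.det(C)))   # True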
With our definition of the new basis in terms of an orthogonal basis we have \psi_p(x) = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x). If the coefficients C_{p\lambda} belong to an orthogonal or unitary matrix, the new basis is also orthogonal. Our Slater determinant in the new basis \psi_p(x) is written as \frac{1}{\sqrt{A!}} \left| \begin{array}{ccccc} \psi_{p}(x_1)& \psi_{p}(x_2)& \dots & \dots & \psi_{p}(x_A)\\ \psi_{q}(x_1)&\psi_{q}(x_2)& \dots & \dots & \psi_{q}(x_A)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{t}(x_1)&\psi_{t}(x_2)& \dots & \dots & \psi_{t}(x_A)\end{array} \right|=\frac{1}{\sqrt{A!}} \left| \begin{array}{ccccc} \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_1)& \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_A)\\ \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_A)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_A)\end{array} \right|, which is nothing but det(\mathbf{C})det(\Phi) , with det(\Phi) being the determinant given by the basis functions \phi_{\lambda}(x) .
It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have \hat{h}_0\phi_{\lambda}=\epsilon_{\lambda}\phi_{\lambda}. The single-particle wave functions \phi_{\lambda}(\boldsymbol{r}) , defined by the quantum numbers \lambda and \boldsymbol{r} are defined as the overlap \phi_{\lambda}(\boldsymbol{r}) = \langle \boldsymbol{r} | \lambda \rangle .
In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi ( F ) level given by the labels ijkl\dots \le F for so-called single-hole states and abcd\dots > F for so-called particle states. For general single-particle states we employ the labels pqrs\dots .
In Eq. \eqref{FunctionalEPhi}, restated here E[\Phi] = \sum_{\mu=1}^A \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}, we found the expression for the energy functional in terms of the basis function \phi_{\lambda}(\boldsymbol{r}) . We then varied the above energy functional with respect to the basis functions |\mu \rangle . Now we are interested in defining a new basis defined in terms of a chosen basis as defined in Eq. \eqref{eq:newbasis}. We can then rewrite the energy functional as \begin{equation} E[\Phi^{HF}] = \sum_{i=1}^A \langle i | h | i \rangle + \frac{1}{2}\sum_{ij=1}^A\langle ij|\hat{v}|ij\rangle_{AS}, \label{FunctionalEPhi2} \end{equation} where \Phi^{HF} is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis}.
Using Eq. \eqref{eq:newbasis} we can rewrite Eq. \eqref{FunctionalEPhi2} as \begin{equation} E[\Phi^{HF}] = \sum_{i=1}^A \sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | h | \beta \rangle + \frac{1}{2}\sum_{ij=1}^A\sum_{{\alpha\beta\gamma\delta}} C^*_{i\alpha}C^*_{j\beta}C_{i\gamma}C_{j\delta}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}. \label{FunctionalEPhi3} \end{equation}
We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that since \langle i | j \rangle = \delta_{i,j} and \langle \alpha | \beta \rangle = \delta_{\alpha,\beta} , the coefficients C_{i\gamma} obey the relation \langle i | j \rangle=\delta_{i,j}=\sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | \beta \rangle= \sum_{\alpha} C^*_{i\alpha}C_{i\alpha}, which allows us to define a functional to be minimized that reads \begin{equation} F[\Phi^{HF}]=E[\Phi^{HF}] - \sum_{i=1}^A\epsilon_i\sum_{\alpha} C^*_{i\alpha}C_{i\alpha}. \label{_auto7} \end{equation}
Minimizing with respect to C^*_{i\alpha} , remembering that the equations for C^*_{i\alpha} and C_{i\alpha} can be written as two independent equations, we obtain \frac{d}{dC^*_{i\alpha}}\left[ E[\Phi^{HF}] - \sum_{j}\epsilon_j\sum_{\alpha} C^*_{j\alpha}C_{j\alpha}\right]=0, which yields for every single-particle state i and index \alpha (recalling that the coefficients C_{i\alpha} are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) the following Hartree-Fock equations \sum_{\beta} C_{i\beta}\langle \alpha | h | \beta \rangle+ \sum_{j=1}^A\sum_{\beta\gamma\delta} C^*_{j\beta}C_{j\delta}C_{i\gamma}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}=\epsilon_i^{HF}C_{i\alpha}.
We can rewrite this equation as (changing dummy variables) \sum_{\beta} \left\{\langle \alpha | h | \beta \rangle+ \sum_{j}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}\right\}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. Note that the sums over greek indices run over the number of basis set functions (in principle an infinite number).
Defining h_{\alpha\beta}^{HF}=\langle \alpha | h | \beta \rangle+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}, we can rewrite the new equations as \begin{equation} \sum_{\beta}h_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. \label{eq:newhf} \end{equation} The latter is nothing but a standard eigenvalue problem.
It suffices to tabulate the matrix elements \langle \alpha | h | \beta \rangle and \langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS} once and for all. Successive iterations require thus only a look-up in tables over one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.
Our Hartree-Fock matrix is thus \hat{h}_{\alpha\beta}^{HF}=\langle \alpha | \hat{h}_0 | \beta \rangle+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients C_{j\gamma}=\delta_{j,\gamma} and solving the equations by diagonalization till the new single-particle energies \epsilon_i^{\mathrm{HF}} do not change anymore by more than a prefixed quantity.
Normally we assume that the single-particle basis |\beta\rangle forms an eigenbasis for the operator \hat{h}_0 , meaning that the Hartree-Fock matrix becomes \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. The Hartree-Fock eigenvalue problem \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha}, can be written out in a more compact form as \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}.
The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the coefficients C_{i\alpha} . We label the coefficients as C_{i\alpha}^{(n)} , where the superscript n stands for iteration n . To set up the algorithm we can proceed as follows: we start with an initial guess C_{i\alpha}^{(0)}=\delta_{i\alpha} , compute the corresponding density matrix and thereby the Hartree-Fock matrix of Eq. \eqref{eq:newhf}, and diagonalize it.
Solving the Hartree-Fock eigenvalue problem yields then new eigenvectors C_{i\alpha}^{(1)} and eigenvalues \epsilon_i^{HF(1)} . With these we set up a new density matrix and a new Hartree-Fock matrix, and the procedure is repeated until the single-particle energies do not change anymore by more than a prefixed quantity.
We can rewrite the ground state energy by adding and subtracting \hat{u}^{HF}(x_i) E_0^{HF} =\langle \Phi_0 | \hat{H} | \Phi_0\rangle = \sum_{i\le F}^A \langle i | \hat{h}_0 +\hat{u}^{HF}| i\rangle+ \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle, which results in E_0^{HF} = \sum_{i\le F}^A \varepsilon_i^{HF} + \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle. Our single-particle states ijk\dots are now single-particle states obtained from the solution of the Hartree-Fock equations.
Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy E_0^{HF} = \sum_{i\le F}^A \varepsilon_i - \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]. This form will be used in our discussion of Koopmans' theorem.
Calculating the difference E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \frac{1}{2}\sum_{i=1;i\ne k}^N\langle ik|\hat{v}|ik\rangle_{AS} +\frac{1}{2}\sum_{j=1;j\ne k}^N\langle kj|\hat{v}|kj\rangle_{AS}, we obtain E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \sum_{j=1}^N\langle kj|\hat{v}|kj\rangle_{AS}, which is just our definition of the Hartree-Fock single-particle energy E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \epsilon_k^{\mathrm{HF}}
Similarly, we can now compute the difference (we label the single-particle states above the Fermi level as abcd > F ) E[\Phi^{\mathrm{HF}}(N+1)]- E[\Phi^{\mathrm{HF}}(N)]= \epsilon_a^{\mathrm{HF}}. These two equations can thus be used to compute the ionization energy and the electron affinity, respectively. Koopmans' theorem states that, for example, the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from N to N+1 does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopmans' theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.
As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons, with the weakest bound one occupying the 3s single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals. Having performed a Hartree-Fock calculation for neutral sodium would then allow us to compute the ionization energy by using the single-particle energy for the 3s state, namely \epsilon_{3s}^{\mathrm{HF}} .
From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental observables (here ionization and affinity energies) and the underlying interactions between particles. In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.
The Hamiltonian for a system of N electrons confined in a harmonic potential reads \hat{H} = \sum_{i=1}^{N} \frac{\hat{p}_{i}^{2}}{2m}+\sum_{i=1}^{N} \frac{1}{2} m\omega^2 {r}_{i}^{2}+\sum_{i < j} \hat{V}_{ij}, where \hat{V}_{ij} is the two-body potential whose matrix elements are calculated on the fly in your program. See the expression below.
The Hartree-Fock algorithm can be broken down as follows. We recall that our Hartree-Fock matrix is \hat{h}_{\alpha\beta}^{HF}=\langle \alpha \vert\hat{h}_0 \vert \beta \rangle+ \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}. Normally we assume that the single-particle basis \vert\beta\rangle forms an eigenbasis for the operator \hat{h}_0 (this is our case), meaning that the Hartree-Fock matrix becomes \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{j=1}^N\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}. The Hartree-Fock eigenvalue problem \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha}, can be written out in a more compact form as \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}.
The equations are often rewritten in terms of a so-called density matrix, which is defined as \begin{equation} \rho_{\gamma\delta}=\sum_{i=1}^{N}\langle\gamma|i\rangle\langle i|\delta\rangle = \sum_{i=1}^{N}C_{i\gamma}C^*_{i\delta}. \label{_auto8} \end{equation} It means that we can rewrite the Hartree-Fock Hamiltonian as \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{\gamma\delta} \rho_{\gamma\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}. It is convenient to use the density matrix since we can precalculate in every iteration the product of two eigenvector components C .
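As a minimal numpy sketch of this definition (note that numpy.linalg.eigh returns the eigenvectors column-wise, so that C[\gamma,i] is the component of state i on basis state \gamma ), the density matrix over the N lowest occupied states is a single matrix product, replacing the explicit loops used in the code below:

import numpy as np

def density_matrix(C, Nparticles):
    # rho_{gamma delta} = sum over occupied i of C_{gamma i} C^*_{delta i}
    occupied = C[:, :Nparticles]          # the Nparticles lowest eigenvectors (columns)
    return occupied @ occupied.conj().T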
Here we show a simple code in python for a Hartree-Fock calculation using precalculated matrix elements.
import numpy as np

# Expectation value for the one-body part, harmonic oscillator in three dimensions
def onebody(i, n, l):
    homega = 10.0
    return homega*(2*n[i] + l[i] + 1.5)

if __name__ == '__main__':
    Nparticles = 16
    # Read quantum numbers from file
    index = []
    n = []
    l = []
    j = []
    mj = []
    tz = []
    spOrbitals = 0
    with open("nucleispnumbers.dat", "r") as qnumfile:
        for line in qnumfile:
            nums = line.split()
            if len(nums) != 0:
                index.append(int(nums[0]))
                n.append(int(nums[1]))
                l.append(int(nums[2]))
                j.append(int(nums[3]))
                mj.append(int(nums[4]))
                tz.append(int(nums[5]))
                spOrbitals += 1
    # Read two-nucleon interaction elements (integrals) from file, brute force 4-dim array
    nninteraction = np.zeros([spOrbitals, spOrbitals, spOrbitals, spOrbitals])
    with open("nucleitwobody.dat", "r") as infile:
        for line in infile:
            number = line.split()
            a = int(number[0]) - 1
            b = int(number[1]) - 1
            c = int(number[2]) - 1
            d = int(number[3]) - 1
            nninteraction[a][b][c][d] = float(number[4])
    # Set up the single-particle (one-body) integrals
    singleparticleH = np.zeros(spOrbitals)
    for i in range(spOrbitals):
        singleparticleH[i] = onebody(i, n, l)
    # Start HF iterations: the initial guess for the coefficients is the identity matrix,
    # that is ones along the diagonal only
    C = np.eye(spOrbitals)  # HF coefficients
    DensityMatrix = np.zeros([spOrbitals, spOrbitals])
    for gamma in range(spOrbitals):
        for delta in range(spOrbitals):
            rho = 0.0
            for i in range(Nparticles):
                rho += C[gamma][i]*C[delta][i]
            DensityMatrix[gamma][delta] = rho
    maxHFiter = 100
    epsilon = 1.0e-5
    difference = 1.0
    hf_count = 0
    oldenergies = np.zeros(spOrbitals)
    newenergies = np.zeros(spOrbitals)
    while hf_count < maxHFiter and difference > epsilon:
        print("############### Iteration %i ###############" % hf_count)
        HFmatrix = np.zeros([spOrbitals, spOrbitals])
        for alpha in range(spOrbitals):
            for beta in range(spOrbitals):
                # Skip if the one-body quantum numbers (including isospin) are not conserved
                if l[alpha] != l[beta] or j[alpha] != j[beta] or mj[alpha] != mj[beta] or tz[alpha] != tz[beta]:
                    continue
                # Set up the Fock matrix using the density matrix and the
                # antisymmetrized NN interaction in the m-scheme
                sumFockTerm = 0.0
                for gamma in range(spOrbitals):
                    for delta in range(spOrbitals):
                        if (mj[alpha]+mj[gamma]) != (mj[beta]+mj[delta]) or (tz[alpha]+tz[gamma]) != (tz[beta]+tz[delta]):
                            continue
                        sumFockTerm += DensityMatrix[gamma][delta]*nninteraction[alpha][gamma][beta][delta]
                HFmatrix[alpha][beta] = sumFockTerm
                # Add the one-body term, here a plain harmonic oscillator
                if beta == alpha:
                    HFmatrix[alpha][alpha] += singleparticleH[alpha]
        spenergies, C = np.linalg.eigh(HFmatrix)
        # Set up the new density matrix in the m-scheme
        DensityMatrix = np.zeros([spOrbitals, spOrbitals])
        for gamma in range(spOrbitals):
            for delta in range(spOrbitals):
                rho = 0.0
                for i in range(Nparticles):
                    rho += C[gamma][i]*C[delta][i]
                DensityMatrix[gamma][delta] = rho
        newenergies = spenergies
        # Brute force computation of the difference between previous and new single-particle HF energies
        diffsum = 0.0
        for i in range(spOrbitals):
            diffsum += abs(newenergies[i]-oldenergies[i])/spOrbitals
        difference = diffsum
        oldenergies = newenergies
        print("Single-particle energies, ordering may have changed ")
        for i in range(spOrbitals):
            print('{0:4d} {1:.4f}'.format(i, oldenergies[i]))
        hf_count += 1
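The listing above prints the converged single-particle energies only. As a minimal sketch (to be appended at the end of the main block above, with matching indentation, and assuming as above that \hat{h}_0 is diagonal in the oscillator basis), the total Hartree-Fock energy of Eq. \eqref{FunctionalEPhi2} can be evaluated directly from the converged density matrix:

    # Sketch: total Hartree-Fock energy from the converged density matrix
    # One-body part: h_0 is diagonal in the chosen oscillator basis
    E_onebody = np.dot(np.diag(DensityMatrix), singleparticleH)
    # Two-body part: (1/2) sum over alpha,beta,gamma,delta of
    # rho_{alpha gamma} rho_{beta delta} <alpha beta|v|gamma delta>_AS
    E_twobody = 0.5*np.einsum('ag,bd,abgd->', DensityMatrix, DensityMatrix, nninteraction)
    print("HF ground-state energy: %.6f" % (E_onebody + E_twobody))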
If, however, we wish to read the matrix elements from file, we can store them in two ways:
The time-consuming part in the Hartree-Fock calculations involves the calculation of the two-body matrix. Furthermore, the storage of these matrix elements plays also an important role, in particular we wish to access the table of matrix elements as fast as possible.
Furthermore, in setting up a table for the two-body matrix elements we can convert the need of using four indices pqrs of \langle pq | \hat{v}|rs\rangle, which in a brute-force way could be coded as a four-dimensional array, to a two-dimensional array V_{lm} , where l and m run over all possible two-body configurations (p,q) and (r,s) , respectively.
Each number l and m in V_{lm} should then point to a set of single-particle states (p,q) and (r,s) .
In our case, since we have symmetries which allow us to set p\le q , we have, with d single particle states a total of d(d+1)/2 two-body configurations.
How do we store such a matrix? The simplest thing to do is to convert it into a one-dimensional array. How do we achieve that?
We now have a matrix V of dimension n\times n and we want to store the elements V_{lm} as a one-dimensional array A using 0 \le l \le m \le n-1 .
To find the number ( \mathrm{number}(l,m) ) in a one-dimensional array A which corresponds to a matrix element V_{lm} , we note that \mathrm{number}(l,m)=\sum_{\nu =0}^{l-1}\left(n-\nu\right)+m-l=\frac{l(2n-l-1)}{2}+m. The first matrix element V(0,0) is obviously given by the element A(0) .
We have thus reduced a four dimensional array to a one-dimensional array, where the given pairs (p,q) and (r,s) point to the matrix indices l and m , respectively. The latter are used to find the explicit number \mathrm{number}(l,m) which points to the desired matrix element stored in a one-dimensional array.
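A minimal Python sketch of this mapping (the function name pair_index is ours), together with a consistency check that it reproduces the ordering discussed above:

def pair_index(l, m, n):
    # number(l, m) = l(2n - l - 1)/2 + m for 0 <= l <= m <= n-1
    return l*(2*n - l - 1)//2 + m

# Quick check for n = 4: enumerate the pairs in the same order as the storage scheme
n = 4
count = 0
for l in range(n):
    for m in range(l, n):
        assert pair_index(l, m, n) == count
        count += 1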
Another way to store the matrix elements is to organize them according to conserved quantum numbers. This means setting up a block structure and looking up matrix elements using two-body conserved quantum numbers. For quantum dots, the two-body quantum numbers that are conserved are the total orbital angular momentum projection M = m_1 + m_2 and the total spin projection M_s = m_{s_1} + m_{s_2} .
For 2 shells we have
M | M_s | \alpha | State |
---|---|---|---|
-2 | 0 | 1 | \vert 3,2\rangle |
-1 | -1 | 3 | \vert 2,0\rangle |
-1 | 0 | 4 | \vert 3,0\rangle |
-1 | 0 | 4 | \vert 2,1\rangle |
-1 | 1 | 5 | \vert 3,1\rangle |
0 | -1 | 6 | \vert 4,2\rangle |
0 | 0 | 7 | \vert 1,0\rangle |
0 | 0 | 7 | \vert 5,2\rangle |
0 | 0 | 7 | \vert 4,3\rangle |
0 | 1 | 8 | \vert 5,3\rangle |
1 | -1 | 9 | \vert 4,0\rangle |
1 | 0 | 10 | \vert 5,0\rangle |
1 | 0 | 10 | \vert 4,1\rangle |
1 | 1 | 11 | \vert 5,1\rangle |
2 | 0 | 13 | \vert 5,4\rangle |
In this table we have paired orbitals with the same M and M_s . Here \alpha \in \{0,...,N\} , where N is the number of pairs \{M,M_s\} . Notice that for \alpha = 1 we have only one pair, while for \alpha = 4 we have two pairs.
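A sketch of how such a block structure can be set up in Python: the lists m and ms below are hypothetical names for the orbital and spin projections of the single-particle states, and each block, labelled by the conserved pair (M,M_s) , stores the two-body configurations (p,q) with p<q :

from collections import defaultdict

def build_blocks(m, ms):
    blocks = defaultdict(list)
    nsp = len(m)
    for p in range(nsp):
        for q in range(p + 1, nsp):
            key = (m[p] + m[q], ms[p] + ms[q])   # conserved quantum numbers (M, M_s)
            blocks[key].append((p, q))
    return blocks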
We list here some selected Hartree-Fock results for N=6 electrons as function of the number of major shells R included. Here we have \omega =1 a.u.
R | E_0^{HF} |
---|---|
3 | 21.59320 |
4 | 20.76692 |
5 | 20.7484 |
6 | 20.72026 |
7 | 20.72013 |
8 | 20.71925 |
9 | 20.71925 |
10 | 20.71922 |
11 | 20.71922 |
12 | 20.71922 |
13 | 20.71922 |
The results are practically converged for approximately R=10-13 .
We list here some selected Hartree-Fock results for N=6 electrons as function of the number of major shells R included. Here we have \omega =0.1 a.u.
R | E_0^{HF} |
---|---|
4 | 4.01979 |
5 | 3.96315 |
6 | 3.87062 |
7 | 3.86314 |
8 | 3.85288 |
9 | 3.85259 |
10 | 3.85239 |
11 | 3.85239 |
12 | 3.85238 |
13 | 3.85238 |
Again, the results are practically converged for approximately R=10-13 .