Most quantum mechanical problems of interest in, for example, atomic, molecular, nuclear and solid state physics consist of a large number of interacting electrons and ions or nucleons.
The total number of particles \( N \) is usually sufficiently large that an exact solution cannot be found.
Typically, the expectation value of a chosen hamiltonian for a system of \( N \) particles is
$$
\langle H \rangle =
\frac{\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N
\Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)
H(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)
\Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)}
{\int d\boldsymbol{R}_1d\boldsymbol{R}_2\dots d\boldsymbol{R}_N
\Psi^{\ast}(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)
\Psi(\boldsymbol{R_1},\boldsymbol{R}_2,\dots,\boldsymbol{R}_N)},
$$
which is, in general, an intractable problem.
This integral is actually the starting point in a Variational Monte Carlo calculation. Gaussian quadrature: forget it! Given 10 particles and 10 mesh points for each degree of freedom and an ideal 1 Tflops machine (all operations take the same time), how long would it take to compute the above integral? With 10 particles in three dimensions we have 30 degrees of freedom and thus \( 10^{30} \) mesh points; at \( 10^{12} \) operations per second this takes of the order of \( 10^{18} \) s, while the lifetime of the universe is only of the order of \( 10^{17} \) s.
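The counting can be done in a few lines; this is only a back-of-the-envelope sketch which assumes, conservatively, a single floating-point operation per mesh point.

# Back-of-the-envelope estimate: 10 particles in 3D, 10 mesh points per degree
# of freedom, one operation per mesh point, ideal 1 Tflops machine
mesh_points = 10**(3*10)          # 10^30 points in the 30-dimensional grid
flops_per_second = 1.0e12         # 1 Tflops
time_in_seconds = mesh_points / flops_per_second
print(f"{time_in_seconds:.1e} s, compared with about 1e17 s for the age of the universe")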
As an example from the nuclear many-body problem, we have Schroedinger's equation as a differential equation
$$
\hat{H}\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A)=E\Psi(\boldsymbol{r}_1,..,\boldsymbol{r}_A,\alpha_1,..,\alpha_A)
$$
where
$$
\boldsymbol{r}_1,..,\boldsymbol{r}_A,
$$
are the coordinates and
$$
\alpha_1,..,\alpha_A,
$$
are sets of relevant quantum numbers such as spin and isospin for a system of \( A \) nucleons (\( A=N+Z \), \( N \) being the number of neutrons and \( Z \) the number of protons).
There are
$$
2^A\times \left(\begin{array}{c} A\\ Z\end{array}\right)
$$
coupled second-order differential equations in \( 3A \) dimensions.
For a nucleus like beryllium-10 this number is 215040. This is a truly challenging many-body problem.
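The counting above is straightforward to verify; a one-line check with Python's standard library for beryllium-10 (\( A=10 \), \( Z=4 \)) reads

# Number of coupled equations for beryllium-10: 2^A * binomial(A, Z)
from math import comb
print(2**10 * comb(10, 4))   # prints 215040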
A direct solution of the Schroedinger equation as a set of coupled partial differential equations is feasible for at most two to three particles.
The physics of the system hints at which many-body methods to use.
We start with the variational principle. Given a hamiltonian \( H \) and a trial wave function \( \Psi_T \), the variational principle states that the expectation value \( \langle H \rangle \), defined through
$$
E[H]= \langle H \rangle =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})},
$$
is an upper bound to the ground state energy \( E_0 \) of the hamiltonian \( H \), that is
$$
E_0 \le \langle H \rangle .
$$
In general, the integrals involved in the calculation of various expectation values are multi-dimensional. Traditional integration methods such as Gauss-Legendre quadrature will not be adequate for, say, the computation of the energy of a many-body system.
The trial wave function can be expanded in the eigenstates of the hamiltonian since they form a complete set, viz.,
$$
\Psi_T(\boldsymbol{R})=\sum_i a_i\Psi_i(\boldsymbol{R}),
$$
and assuming the set of eigenfunctions to be normalized one obtains
$$
\frac{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})H(\boldsymbol{R})\Psi_n(\boldsymbol{R})}
{\sum_{nm}a^*_ma_n \int d\boldsymbol{R}\Psi^{\ast}_m(\boldsymbol{R})\Psi_n(\boldsymbol{R})} =\frac{\sum_{n}|a_n|^2 E_n}
{\sum_{n}|a_n|^2} \ge E_0,
$$
where we used that \( H(\boldsymbol{R})\Psi_n(\boldsymbol{R})=E_n\Psi_n(\boldsymbol{R}) \) together with the orthonormality of the eigenfunctions. The variational principle thus yields the lowest state of a given symmetry.
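The inequality is easy to illustrate numerically. The following toy example (not part of the original notes) replaces the Hamiltonian by a random symmetric matrix and uses a random trial vector; the expectation value never falls below the lowest eigenvalue.

# Toy illustration of the variational principle with a random symmetric matrix
import numpy as np
rng = np.random.default_rng(2024)
H = rng.normal(size=(4, 4))
H = 0.5*(H + H.T)                       # symmetrize: H plays the role of the hamiltonian
E0 = np.linalg.eigvalsh(H)[0]           # exact lowest eigenvalue (ground state energy)
psi_T = rng.normal(size=4)              # random trial state
expectation = psi_T @ H @ psi_T / (psi_T @ psi_T)
print(E0 <= expectation)                # prints True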
In most cases the wave function has appreciable values only in small parts of configuration space, and a straightforward procedure which uses homogeneously distributed random points in configuration space will most likely lead to poor results. This suggests that some kind of importance sampling, combined with, e.g., the Metropolis algorithm, may be a more efficient way of obtaining the ground state energy. The hope is then that those regions of configuration space where the wave function assumes appreciable values are sampled more efficiently.
The tedious part of a VMC calculation is the search for the variational minimum. A good knowledge of the system is required in order to carry out reasonable VMC calculations. This is not always the case, and often VMC calculations serve rather as the starting point for so-called diffusion Monte Carlo (DMC) calculations. DMC is a way of solving the many-body Schroedinger equation (in principle exactly) by means of a stochastic procedure. A good guess for the binding energy and the wave function is however necessary, and a carefully performed VMC calculation can provide it.

The basic recipe of a VMC calculation is thus to construct a trial wave function \( \psi_T(\boldsymbol{R},\boldsymbol{\alpha}) \), which depends on a set of variational parameters \( \boldsymbol{\alpha} \), and to evaluate the expectation value of the Hamiltonian,
$$
E[H]=\langle H \rangle =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}
{\int d\boldsymbol{R}\Psi^{\ast}_{T}(\boldsymbol{R},\boldsymbol{\alpha})\Psi_{T}(\boldsymbol{R},\boldsymbol{\alpha})}.
$$
Having chosen a trial wave function \( \psi_T(\boldsymbol{R}) \), we define
$$
P(\boldsymbol{R})= \frac{\left|\psi_T(\boldsymbol{R})\right|^2}{\int \left|\psi_T(\boldsymbol{R})\right|^2d\boldsymbol{R}}.
$$
This is our new probability distribution function (PDF). The approximation to the expectation value of the Hamiltonian is now
$$
E[H(\boldsymbol{\alpha})] =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})H(\boldsymbol{R})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R},\boldsymbol{\alpha})\Psi_T(\boldsymbol{R},\boldsymbol{\alpha})}.
$$
Define a new quantity
$$
E_L(\boldsymbol{R},\boldsymbol{\alpha})=\frac{1}{\psi_T(\boldsymbol{R},\boldsymbol{\alpha})}H\psi_T(\boldsymbol{R},\boldsymbol{\alpha}),
\tag{1}
$$
called the local energy, which, together with our trial PDF yields
$$
E[H(\boldsymbol{\alpha})]=\int P(\boldsymbol{R})E_L(\boldsymbol{R},\boldsymbol{\alpha}) d\boldsymbol{R}\approx \frac{1}{N}\sum_{i=1}^N E_L(\boldsymbol{R}_i,\boldsymbol{\alpha}),
\tag{2}
$$
with \( N \) being the number of Monte Carlo samples.
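Schematically, Eq. (2) amounts to nothing more than averaging the local energy over configurations drawn from \( P(\boldsymbol{R}) \). The sketch below uses hypothetical `samples` and `local_energy` arguments for illustration only; the complete Metropolis-based implementation is given further below.

# Schematic estimator for Eq. (2): average the local energy over configurations
# R_i drawn from the trial PDF P(R)
import numpy as np

def vmc_energy(samples, alpha, local_energy):
    # samples: sequence of configurations R_i distributed according to P(R)
    # local_energy: function implementing E_L(R, alpha) of Eq. (1)
    return np.mean([local_energy(R, alpha) for R in samples])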
The algorithm for performing a variational Monte Carlo calculation thus runs as follows. First we fix the number of Monte Carlo cycles and the step size, choose a starting position \( \boldsymbol{R} \) and a value of the variational parameters \( \boldsymbol{\alpha} \), and evaluate \( |\psi_T^{\alpha}(\boldsymbol{R})|^2 \). Then, in each Monte Carlo cycle, we propose a new position \( \boldsymbol{R}_p \) by adding a random displacement proportional to the step size, perform the Metropolis test (the move is accepted if \( |\psi_T^{\alpha}(\boldsymbol{R}_p)|^2/|\psi_T^{\alpha}(\boldsymbol{R})|^2 \) is larger than or equal to a uniform random number in \( [0,1] \)), and update the running sums of the local energy and its square. Finally we compute the mean energy, the variance and the statistical error.
Observe that the jumping in space is governed by the variable step (the step size). This is called brute-force sampling; importance sampling is needed to obtain a more relevant sampling of configuration space, see the lectures below.
The radial Schroedinger equation for the hydrogen atom can be written as
$$
-\frac{\hbar^2}{2m}\frac{\partial^2 u(r)}{\partial r^2}-
\left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r),
$$
or with dimensionless variables
$$
-\frac{1}{2}\frac{\partial^2 u(\rho)}{\partial \rho^2}-
\frac{u(\rho)}{\rho}+\frac{l(l+1)}{2\rho^2}u(\rho)-\lambda u(\rho)=0,
\tag{3}
$$
with the hamiltonian
$$
H=-\frac{1}{2}\frac{\partial^2 }{\partial \rho^2}-
\frac{1}{\rho}+\frac{l(l+1)}{2\rho^2}.
$$
We introduce the variational parameter \( \alpha \) in the trial wave function
$$
u_T^{\alpha}(\rho)=\alpha\rho e^{-\alpha\rho}.
$$
Inserting this wave function into the expression for the local energy \( E_L \) gives
$$
E_L(\rho)=-\frac{1}{\rho}-
\frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right).
$$
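As an illustrative sketch, the two hydrogen-specific ingredients (trial wave function and analytical local energy) that could be plugged into a Metropolis sampler like the one shown later in these notes are, assuming \( \rho>0 \) (the function names mirror the code below and are not part of the original text here):

from math import exp

# Trial wave function u_T(rho) = alpha*rho*exp(-alpha*rho) for the hydrogen atom
def WaveFunction(rho, alpha):
    return alpha*rho*exp(-alpha*rho)

# Analytical local energy E_L(rho) for the trial wave function above
def LocalEnergy(rho, alpha):
    return -1.0/rho - 0.5*alpha*(alpha - 2.0/rho)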
A simple variational Monte Carlo calculation results in
| \( \alpha \) | \( \langle H \rangle \) | \( \sigma^2 \) | \( \sigma/\sqrt{N} \) |
|--------------|-------------------------|----------------|-----------------------|
| 7.00000E-01 | -4.57759E-01 | 4.51201E-02 | 6.71715E-04 |
| 8.00000E-01 | -4.81461E-01 | 3.05736E-02 | 5.52934E-04 |
| 9.00000E-01 | -4.95899E-01 | 8.20497E-03 | 2.86443E-04 |
| 1.00000E-00 | -5.00000E-01 | 0.00000E+00 | 0.00000E+00 |
| 1.10000E+00 | -4.93738E-01 | 1.16989E-02 | 3.42036E-04 |
| 1.20000E+00 | -4.75563E-01 | 8.85899E-02 | 9.41222E-04 |
| 1.30000E+00 | -4.54341E-01 | 1.45171E-01 | 1.20487E-03 |
We note that at \( \alpha=1 \) we obtain the exact result, and the variance is zero, as it should be. The reason is that we then have the exact wave function, and the action of the hamiltonian on the wave function
$$
H\psi = \mathrm{constant}\times \psi,
$$
yields just a constant. The integral which defines various expectation values involving moments of the hamiltonian then becomes
$$
\langle H^n \rangle =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=
\mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}.
$$
This gives us an important piece of information: the exact wave function leads to zero variance!
Variation is then performed by minimizing both the energy and the variance.
For bosons in a harmonic oscillator-like trap we will use a spherical (S) or an elliptical (E) harmonic trap in one, two and finally three dimensions, with the latter given by
$$
\begin{equation}
V_{ext}(\mathbf{r}) = \Bigg\{
\begin{array}{ll}
\frac{1}{2}m\omega_{ho}^2r^2 & (S)\\
\frac{1}{2}m[\omega_{ho}^2(x^2+y^2) + \omega_z^2z^2] & (E)
\end{array}
\tag{4}
\end{equation}
$$
where (S) stands for the spherical (symmetric) trap and (E) for the elliptical one. We take
$$
\begin{equation}
\hat{H} = \sum_i^N \left(
-\frac{\hbar^2}{2m}
\nabla_{i}^2 +
V_{ext}(\mathbf{r}_i)\right) +
\sum_{i < j}^{N} V_{int}(\mathbf{r}_i,\mathbf{r}_j),
\tag{5}
\end{equation}
$$
as the Hamiltonian of the system, containing at most two-body interactions.
We will represent the inter-boson interaction by a pairwise, repulsive potential
$$
\begin{equation}
V_{int}(|\mathbf{r}_i-\mathbf{r}_j|) = \Bigg\{
\begin{array}{ll}
\infty & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\
0 & {|\mathbf{r}_i-\mathbf{r}_j|} > {a}
\end{array}
\tag{6}
\end{equation}
$$
where \( a \) is the so-called hard-core diameter of the bosons. Clearly, \( V_{int}(|\mathbf{r}_i-\mathbf{r}_j|) \) is zero if the bosons are separated by a distance \( |\mathbf{r}_i-\mathbf{r}_j| \) greater than \( a \) but infinite if they attempt to come within a distance \( |\mathbf{r}_i-\mathbf{r}_j| \leq a \).
Our trial wave function for the ground state with \( N \) atoms is given by
$$
\begin{equation}
\Psi_T(\mathbf{R})=\Psi_T(\mathbf{r}_1, \mathbf{r}_2, \dots \mathbf{r}_N,\alpha,\beta)=\prod_i g(\alpha,\beta,\mathbf{r}_i)\prod_{i < j}f(a,|\mathbf{r}_i-\mathbf{r}_j|),
\tag{7}
\end{equation}
$$
where \( \alpha \) and \( \beta \) are variational parameters. The single-particle wave function is proportional to the harmonic oscillator function for the ground state
$$
\begin{equation}
g(\alpha,\beta,\mathbf{r}_i)= \exp{[-\alpha(x_i^2+y_i^2+\beta z_i^2)]}.
\tag{8}
\end{equation}
$$
For spherical traps we have \( \beta = 1 \), and for non-interacting bosons (\( a=0 \)) we have \( \alpha = 1/(2a_{ho}^2) \), with \( a_{ho}=\sqrt{\hbar/m\omega_{ho}} \) the oscillator length. The correlation wave function is
$$
\begin{equation}
f(a,|\mathbf{r}_i-\mathbf{r}_j|)=\Bigg\{
\begin{array}{ll}
0 & {|\mathbf{r}_i-\mathbf{r}_j|} \leq {a}\\
(1-\frac{a}{|\mathbf{r}_i-\mathbf{r}_j|}) & {|\mathbf{r}_i-\mathbf{r}_j|} > {a}.
\end{array}
\tag{9}
\end{equation}
$$
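As an illustration only, the trial wave function of Eqs. (7)-(9) can be coded in a few lines. This is a sketch under the assumption that the \( N \) positions are stored as an \( N\times 3 \) NumPy array; the function name and its arguments are illustrative and not part of the project code.

import numpy as np

def trial_wavefunction(r, alpha, beta, a):
    # one-body part, Eq. (8): product of g(alpha, beta, r_i)
    g = np.exp(-alpha*(r[:, 0]**2 + r[:, 1]**2 + beta*r[:, 2]**2))
    psi = np.prod(g)
    # correlation part, Eq. (9): hard core of diameter a
    N = r.shape[0]
    for i in range(N):
        for j in range(i + 1, N):
            rij = np.linalg.norm(r[i] - r[j])
            if rij <= a:
                return 0.0
            psi *= 1.0 - a/rij
    return psi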
The radial Schroedinger equation for the hydrogen atom can be written as (after eliminating the first-derivative term in the kinetic energy through the substitution \( u(r)=rR(r) \))
$$
-\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}-
\left(\frac{ke^2}{r}-\frac{\hbar^2l(l+1)}{2mr^2}\right)u(r)=Eu(r).
$$
We will specialize to the case with \( l=0 \) and end up with
$$
-\frac{\hbar^2}{2m}\frac{d^2 u(r)}{d r^2}-
\left(\frac{ke^2}{r}\right)u(r)=Eu(r).
$$
Then we introduce a dimensionless variable \( \rho=r/a \), where \( a \) is a constant with dimension length. Multiplying by \( ma^2/\hbar^2 \) we can rewrite our equation as
$$
-\frac{1}{2}\frac{d^2 u(\rho)}{d \rho^2}-
\frac{ke^2ma}{\hbar^2}\frac{u(\rho)}{\rho}-\lambda u(\rho)=0.
$$
Since \( a \) is just a parameter we choose to set
$$
\frac{ke^2ma}{\hbar^2}=1,
$$
which leads to \( a=\hbar^2/mke^2 \), better known as the Bohr radius, with value \( 0.053 \) nm. Scaling the equation this way not only renders our numerical treatment simpler, since we avoid carrying along all the physical parameters, but it also gives us a natural length scale. We will see this again and again. In our discussions below of a harmonic oscillator trap, the natural length scale will be determined by the oscillator frequency, the mass of the particle and \( \hbar \). We have also defined a dimensionless 'energy' \( \lambda = Ema^2/\hbar^2 \). With these rescaled quantities, the ground state energy of the hydrogen atom is \( \lambda=-1/2 \). The equation we want to solve is now defined by the Hamiltonian
$$
H=-\frac{1}{2}\frac{d^2 }{d \rho^2}-\frac{1}{\rho}.
$$
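As a quick aside, the numerical value of the Bohr radius quoted above is easy to verify; here is a minimal sketch, assuming the scipy.constants module is available.

# Numerical check of the Bohr radius a = hbar^2/(m k e^2), with k = 1/(4 pi epsilon_0)
from scipy.constants import hbar, m_e, e, epsilon_0, pi
k = 1.0/(4.0*pi*epsilon_0)
a = hbar**2/(m_e*k*e**2)
print(f"Bohr radius: {a*1e9:.4f} nm")   # approximately 0.0529 nm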
As trial wave function we now take a peek at the analytical solution for the hydrogen atom and use (with \( \alpha \) as a variational parameter)
$$
u_T^{\alpha}(\rho)=\alpha\rho \exp{(-\alpha\rho)}.
$$
Inserting this wave function into the expression for the local energy \( E_L \) gives
$$
E_L(\rho)=-\frac{1}{\rho}-
\frac{\alpha}{2}\left(\alpha-\frac{2}{\rho}\right).
$$
Having analytical local energies saves us from computing the second derivative numerically, a feature which often increases the computational cost by a factor of three or more. Integrating up the local energy (remember to include the PDF in the integration) gives \( \overline{E}[\boldsymbol{\alpha}]=\alpha(\alpha/2-1) \), which has its minimum at \( \alpha=1 \), where \( \overline{E}=-1/2 \), the exact ground state energy.
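If one does not trust the pen-and-paper algebra, the local energy and the integral above can be checked with a short symbolic computation; this is a sketch which assumes sympy is available.

# Symbolic check of the local energy and of Ebar(alpha) = alpha*(alpha/2 - 1)
import sympy as sp

rho, alpha = sp.symbols('rho alpha', positive=True)
u = alpha*rho*sp.exp(-alpha*rho)
# local energy E_L = (1/u) * (-1/2 u'' - u/rho), compare with the expression above
EL = sp.simplify((-sp.Rational(1, 2)*sp.diff(u, rho, 2) - u/rho)/u)
# energy expectation value with the (unnormalized) PDF |u|^2
Ebar = sp.integrate(u**2*EL, (rho, 0, sp.oo))/sp.integrate(u**2, (rho, 0, sp.oo))
print(sp.simplify(Ebar - alpha*(alpha/2 - 1)))   # should print 0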
We present here another well-known example, the harmonic oscillator in one dimension for one particle. This will also serve the aim of introducing our next model, namely that of interacting electrons in a harmonic oscillator trap.
Here as well we have analytical solutions, and the ground state energy, with \( \hbar=1 \), is \( \omega/2 \), with \( \omega \) being the oscillator frequency. We use the following trial wave function
$$
\psi_T(x;\alpha) = \exp{\left(-\frac{1}{2}\alpha^2x^2\right)},
$$
which results in a local energy
$$
\frac{1}{2}\left(\alpha^2+x^2(1-\alpha^4)\right).
$$
We can compare our numerically calculated energies with the exact energy as a function of \( \alpha \),
$$
\overline{E}[\alpha] = \frac{1}{4}\left(\alpha^2+\frac{1}{\alpha^2}\right).
$$
Similarly, with the above ansatz, we can also compute the exact variance which reads
$$
\sigma^2[\alpha]=\frac{1}{4}\left(1+(1-\alpha^4)^2\frac{3}{4\alpha^4}\right)-\overline{E}^2.
$$
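A quick symbolic check (again a sketch assuming sympy) confirms that the exact energy above has its minimum at \( \alpha=1 \) with \( \overline{E}=1/2 \), in agreement with the exact ground state for \( \omega=1 \).

# Minimize the exact energy Ebar(alpha) = (alpha^2 + 1/alpha^2)/4 symbolically
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
Ebar = sp.Rational(1, 4)*(alpha**2 + 1/alpha**2)
alpha_min = sp.solve(sp.diff(Ebar, alpha), alpha)
print(alpha_min, Ebar.subs(alpha, alpha_min[0]))   # expected output: [1] 1/2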
Our code for computing the energy of the ground state of the harmonic oscillator follows here. We start by defining directories where we store various outputs.
# Common imports
import os
import matplotlib.pyplot as plt   # needed by save_fig below

# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "Results/VMCHarmonic"

if not os.path.exists(PROJECT_ROOT_DIR):
    os.mkdir(PROJECT_ROOT_DIR)

if not os.path.exists(FIGURE_ID):
    os.makedirs(FIGURE_ID)

if not os.path.exists(DATA_ID):
    os.makedirs(DATA_ID)

def image_path(fig_id):
    return os.path.join(FIGURE_ID, fig_id)

def data_path(dat_id):
    return os.path.join(DATA_ID, dat_id)

def save_fig(fig_id):
    plt.savefig(image_path(fig_id) + ".png", format='png')

# Output file for the Monte Carlo results
outfile = open(data_path("VMCHarmonic.dat"), 'w')
We proceed with the implementation of the Monte Carlo algorithm but list first the ansatz for the wave function and the expression for the local energy
# VMC for the one-dimensional harmonic oscillator
# Brute force Metropolis, no importance sampling and no energy minimization
from math import exp, sqrt
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt

# Trial wave function for the harmonic oscillator in one dimension
def WaveFunction(r,alpha):
    return exp(-0.5*alpha*alpha*r*r)

# Local energy for the harmonic oscillator in one dimension
def LocalEnergy(r,alpha):
    return 0.5*r*r*(1-alpha**4) + 0.5*alpha*alpha
Note that in the Metropolis algorithm there is no need to compute the trial wave function itself, since we only need the ratio of two exponentials. From a computational point of view it is then more convenient to compute the argument of the ratio and only then take the exponential. Here we have refrained from doing so purely for pedagogical reasons.
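For completeness, a minimal sketch of what that ratio trick would look like for the present trial wave function (a hypothetical helper, not used in the code below):

from math import exp

# Metropolis ratio |psi(rNew)|^2 / |psi(rOld)|^2 computed directly from the exponents:
# with psi = exp(-0.5*alpha^2*r^2) the ratio equals exp(-alpha^2*(rNew^2 - rOld^2))
def MetropolisRatio(rNew, rOld, alpha):
    return exp(-alpha*alpha*(rNew*rNew - rOld*rOld))

With such a simple exponential the gain is marginal, which is why the code below sticks to the explicit ratio of wave function values.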
# The Monte Carlo sampling with the Metropolis algorithm
def MonteCarloSampling():

    NumberMCcycles= 100000
    StepSize = 1.0
    # positions
    PositionOld = 0.0
    PositionNew = 0.0

    # seed for rng generator
    seed()
    # start variational parameter
    alpha = 0.4
    for ia in range(MaxVariations):
        alpha += .05
        AlphaValues[ia] = alpha
        energy = energy2 = 0.0
        #Initial position
        PositionOld = StepSize * (random() - .5)
        wfold = WaveFunction(PositionOld,alpha)
        #Loop over MC cycles
        for MCcycle in range(NumberMCcycles):
            #Trial position
            PositionNew = PositionOld + StepSize*(random() - .5)
            wfnew = WaveFunction(PositionNew,alpha)
            #Metropolis test to see whether we accept the move
            if random() <= wfnew**2 / wfold**2:
                PositionOld = PositionNew
                wfold = wfnew
            DeltaE = LocalEnergy(PositionOld,alpha)
            energy += DeltaE
            energy2 += DeltaE**2
        #We calculate mean, variance and error
        energy /= NumberMCcycles
        energy2 /= NumberMCcycles
        variance = energy2 - energy**2
        error = sqrt(variance/NumberMCcycles)
        Energies[ia] = energy
        Variances[ia] = variance
        outfile.write('%f %f %f %f \n' %(alpha,energy,variance,error))
    return Energies, AlphaValues, Variances
Finally, the results are presented here with the exact energies and variances as well.
#Here starts the main program with variable declarations
MaxVariations = 20
Energies = np.zeros((MaxVariations))
ExactEnergies = np.zeros((MaxVariations))
ExactVariance = np.zeros((MaxVariations))
Variances = np.zeros((MaxVariations))
AlphaValues = np.zeros(MaxVariations)
(Energies, AlphaValues, Variances) = MonteCarloSampling()
outfile.close()
ExactEnergies = 0.25*(AlphaValues*AlphaValues+1.0/(AlphaValues*AlphaValues))
ExactVariance = 0.25*(1.0+((1.0-AlphaValues**4)**2)*3.0/(4*(AlphaValues**4)))-ExactEnergies*ExactEnergies
#simple subplot
plt.subplot(2, 1, 1)
plt.plot(AlphaValues, Energies, 'o-',AlphaValues, ExactEnergies,'r-')
plt.title('Energy and variance')
plt.ylabel('Dimensionless energy')
plt.subplot(2, 1, 2)
plt.plot(AlphaValues, Variances, '.-',AlphaValues, ExactVariance,'r-')
plt.xlabel(r'$\alpha$', fontsize=15)
plt.ylabel('Variance')
save_fig("VMCHarmonic")
plt.show()
#nice printout with Pandas
import pandas as pd
from pandas import DataFrame
data ={'Alpha':AlphaValues, 'Energy':Energies,'Exact Energy':ExactEnergies,'Variance':Variances,'Exact Variance':ExactVariance,}
frame = pd.DataFrame(data)
print(frame)
For \( \alpha=1 \) we have the exact eigenpair, as can be deduced from the table here. With \( \omega=1 \), the exact energy is \( 1/2 \) a.u. with zero variance, as it should be. We also see that the computed variance follows the exact variance rather well. Increasing the number of Monte Carlo cycles will improve our statistics (try it).
The reason the variance is exactly zero when \( \alpha=1 \) is that we then have the exact wave function, and the action of the hamiltonian on the wave function
$$
H\psi = \mathrm{constant}\times \psi,
$$
yields just a constant. The integral which defines various expectation values involving moments of the hamiltonian then becomes
$$
\langle H^n \rangle =
\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})H^n(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=
\mathrm{constant}\times\frac{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}
{\int d\boldsymbol{R}\Psi^{\ast}_T(\boldsymbol{R})\Psi_T(\boldsymbol{R})}=\mathrm{constant}.
$$
This gives us an important piece of information: the exact wave function leads to zero variance!
As we will see later, many practitioners therefore minimize both the energy and the variance.