The potentially most time-consuming part is the evaluation of the gradient and the Laplacian of an \( N \)-particle Slater determinant.
We have to differentiate the determinant with respect to all spatial coordinates of all particles. A brute-force differentiation would involve \( N\cdot d \) evaluations of the entire determinant, worsening the already undesirable time scaling to \( Nd\cdot O(N^3)\sim O(d\cdot N^4) \).
This seriously hinders the overall efficiency of our code.
The efficiency can, however, be improved if we move only one electron at a time. The Slater determinant matrix \( \hat{D} \) is defined by the matrix elements
$$
d_{ij}=\phi_j(\mathbf{r}_i)
$$
where \( \phi_j(\mathbf{r}_i) \) is a single-particle wave function. The rows correspond to the positions of the particles while the columns stand for the various quantum numbers.
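As a minimal sketch in Python (assuming, for illustration only, that the orbitals are available as a list `phis` of callables and the positions as rows of an array `r`; neither interface is fixed by the text), the Slater matrix can be filled directly from this definition:

```python
import numpy as np

def slater_matrix(r, phis):
    """Fill the Slater matrix d_ij = phi_j(r_i).

    r    : (N, dim) array; row i holds the position of particle i
    phis : list of N callables; phis[j](r_i) returns a float
    """
    N = len(phis)
    D = np.empty((N, N))
    for i in range(N):        # rows: particle positions
        for j in range(N):    # columns: quantum numbers (orbitals)
            D[i, j] = phis[j](r[i])
    return D
```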
What we need to realize is that when differentiating a Slater determinant with respect to some given coordinate, only one row of the corresponding Slater matrix is changed.
Therefore, recalculating the whole determinant after every move produces largely redundant information. The solution is an algorithm that keeps track of the inverse of the Slater matrix.
Let the current position in phase space be represented by the \( (N\cdot d) \)-element vector \( \mathbf{r}^{\mathrm{old}} \) and the new suggested position by the vector \( \mathbf{r}^{\mathrm{new}} \).
The inverse of \( \hat{D} \) can be expressed in terms of its cofactors \( C_{ij} \) and its determinant (this is our notation for the determinant) \( \vert\hat{D}\vert \):
$$
\begin{equation}
d_{ij}^{-1} = \frac{C_{ji}}{\vert\hat{D}\vert}
\tag{1}
\end{equation}
$$
Notice that the interchanged indices indicate that the matrix of cofactors is to be transposed.
If \( \hat{D} \) is invertible, then we must obviously have \( \hat{D}^{-1}\hat{D}= \mathbf{1} \), or explicitly in terms of the individual elements of \( \hat{D} \) and \( \hat{D}^{-1} \):
$$
\begin{equation}
\sum_{k=1}^N d_{ik}^{\phantom X}d_{kj}^{-1} = \delta_{ij}^{\phantom X}
\tag{2}
\end{equation}
$$
Consider the ratio, which we shall call \( R \), between \( \vert\hat{D}(\mathbf{r}^{\mathrm{new}})\vert \) and \( \vert\hat{D}(\mathbf{r}^{\mathrm{old}})\vert \). By definition, each of these determinants can individually be expressed in terms of the i-th row of its cofactor matrix
$$
\begin{equation}
R\equiv\frac{\vert\hat{D}(\mathbf{r}^{\mathrm{new}})\vert}
{\vert\hat{D}(\mathbf{r}^{\mathrm{old}})\vert} =
\frac{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{new}})\,
C_{ij}(\mathbf{r}^{\mathrm{new}})}
{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{old}})\,
C_{ij}(\mathbf{r}^{\mathrm{old}})}
\tag{3}
\end{equation}
$$
Suppose now that we move only one particle at a time, meaning that \( \mathbf{r}^{\mathrm{new}} \) differs from \( \mathbf{r}^{\mathrm{old}} \) by the position of only one particle, say the \( i \)-th. This means that \( \hat{D}(\mathbf{r}^{\mathrm{new}}) \) and \( \hat{D}(\mathbf{r}^{\mathrm{old}}) \) differ only by the entries of the \( i \)-th row. Recall also that the \( i \)-th row of a cofactor matrix \( \hat{C} \) is independent of the entries of the \( i \)-th row of its corresponding matrix \( \hat{D} \). In this particular case the \( i \)-th rows of \( \hat{C}(\mathbf{r}^{\mathrm{new}}) \) and \( \hat{C}(\mathbf{r}^{\mathrm{old}}) \) must therefore be equal. Explicitly, we have:
$$
\begin{equation*}
C_{ij}(\mathbf{r}^{\mathrm{new}}) = C_{ij}(\mathbf{r}^{\mathrm{old}})\quad
\forall\ j\in\{1,\dots,N\}
\end{equation*}
$$
Inserting this into the numerator of eq. (3) and using eq. (1) to substitute the cofactors with the elements of the inverse matrix, we get:
$$
\begin{equation*}
R =\frac{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{new}})\,
C_{ij}(\mathbf{r}^{\mathrm{old}})}
{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{old}})\,
C_{ij}(\mathbf{r}^{\mathrm{old}})} =
\frac{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{new}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}})}
{\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{old}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}})}
\end{equation*}
$$
Now by eq. (2) the denominator of the rightmost expression must be unity, so that we finally arrive at:
$$
\begin{equation}
R =
\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{new}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}}) =
\sum_{j=1}^N \phi_j(\mathbf{r}_i^{\mathrm{new}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}})
\tag{4}
\end{equation}
$$
What this means is that in order to get the ratio when only the \( i \)-th particle has been moved, we only need to calculate the dot product of the vector \( \left(\phi_1(\mathbf{r}_i^\mathrm{new}),\,\dots,\,\phi_N(\mathbf{r}_i^\mathrm{new})\right) \) of single-particle wave functions evaluated at this new position with the \( i \)-th column of the inverse matrix \( \hat{D}^{-1} \) evaluated at the original position. Such an operation has a time scaling of \( O(N) \). The only extra thing we need to do is to maintain the inverse matrix \( \hat{D}^{-1}(\mathbf{r}^{\mathrm{old}}) \).
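A minimal sketch of this \( O(N) \) operation, using the same assumed interfaces as above:

```python
import numpy as np

def ratio_R(i, r_i_new, phis, D_inv_old):
    """Eq. (4): ratio of Slater determinants after a trial move of
    particle i, as a dot product with the i-th column of the old inverse."""
    phi_new = np.array([phi(r_i_new) for phi in phis])
    return phi_new @ D_inv_old[:, i]
```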
If the new position \( \mathbf{r}^{\mathrm{new}} \) is accepted, then the inverse matrix can be suitably updated by an algorithm having a time scaling of \( O(N^2) \). This algorithm goes as follows. First we update all but the \( i \)-th column of \( \hat{D}^{-1} \). For each column \( j\neq i \), we first calculate the quantity:
$$
\begin{equation*}
S_j =
(\hat{D}(\mathbf{r}^{\mathrm{new}})\times
\hat{D}^{-1}(\mathbf{r}^{\mathrm{old}}))_{ij} =
\sum_{l=1}^N d_{il}(\mathbf{r}^{\mathrm{new}})\,
d^{-1}_{lj}(\mathbf{r}^{\mathrm{old}})
\tag{5}
\end{equation*}
$$
The new elements of the j-th column of \( \hat{D}^{-1} \) are then given by:
$$
\begin{equation*}
d_{kj}^{-1}(\mathbf{r}^{\mathrm{new}}) =
d_{kj}^{-1}(\mathbf{r}^{\mathrm{old}}) -
\frac{S_j}{R}\,d_{ki}^{-1}(\mathbf{r}^{\mathrm{old}})\quad
\begin{array}{ll}
\forall\ \ k\in\{1,\dots,N\}\\j\neq i
\end{array}
\tag{6}
\end{equation*}
$$
Finally the elements of the i-th column of \( \hat{D}^{-1} \) are updated simply as follows:
$$
\begin{equation*}
d_{ki}^{-1}(\mathbf{r}^{\mathrm{new}}) =
\frac{1}{R}\,d_{ki}^{-1}(\mathbf{r}^{\mathrm{old}})\quad
\forall\ \ k\in\{1,\dots,N\}
\tag{7}
\end{equation*}
$$
We see from these formulas that the time scaling of an update of \( \hat{D}^{-1} \) after changing one row of \( \hat{D} \) is \( O(N^2) \).
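In vectorized form, eqs. (5)-(7) become a rank-one update; a sketch (the names and interfaces are our assumptions):

```python
import numpy as np

def update_inverse(i, phi_new, D_inv, R):
    """O(N^2) update of D^{-1} after row i of D is replaced by phi_new,
    the vector of orbitals evaluated at the accepted position."""
    S = phi_new @ D_inv                           # eq. (5): S_j for all j at once
    D_new = D_inv - np.outer(D_inv[:, i], S) / R  # eq. (6), applied to every column
    D_new[:, i] = D_inv[:, i] / R                 # eq. (7) repairs column i
    return D_new
```

Note that applying eq. (6) to column \( i \) as well would give zero there (since \( S_i = R \)); overwriting that column with eq. (7) afterwards is therefore safe.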
The scheme is also applicable for the calculation of the ratios involving derivatives. It turns out that differentiating the Slater determinant with respect to the coordinates of a single particle \( \mathbf{r}_i \) changes only the i-th row of the corresponding Slater matrix.
The gradient and the Laplacian can therefore be calculated as follows:
$$
\frac{\vec\nabla_i\vert\hat{D}(\mathbf{r})\vert}{\vert\hat{D}(\mathbf{r})\vert} =
\sum_{j=1}^N \vec\nabla_i d_{ij}(\mathbf{r})d_{ji}^{-1}(\mathbf{r}) =
\sum_{j=1}^N \vec\nabla_i \phi_j(\mathbf{r}_i)d_{ji}^{-1}(\mathbf{r})
$$
and
$$
\frac{\nabla^2_i\vert\hat{D}(\mathbf{r})\vert}{\vert\hat{D}(\mathbf{r})\vert} =
\sum_{j=1}^N \nabla^2_i d_{ij}(\mathbf{r})d_{ji}^{-1}(\mathbf{r}) =
\sum_{j=1}^N \nabla^2_i \phi_j(\mathbf{r}_i)\,d_{ji}^{-1}(\mathbf{r})
$$
Thus, to calculate all the derivatives of the Slater determinant, we only need the derivatives of the single-particle wave functions (\( \vec\nabla_i \phi_j(\mathbf{r}_i) \) and \( \nabla^2_i \phi_j(\mathbf{r}_i) \)) and the elements of the corresponding inverse Slater matrix \( \hat{D}^{-1}(\mathbf{r}) \). By the above result, the calculation of a single derivative is an \( O(N) \) operation. Since there are \( d\cdot N \) derivatives, the time scaling of the total evaluation becomes \( O(d\cdot N^2) \). With an \( O(N^2) \) updating algorithm for the inverse matrix, the total scaling is no worse, which is far better than the brute-force approach yielding \( O(d\cdot N^4) \).
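The derivative ratios translate into equally short code; a sketch where `grad_phis[j]` and `lap_phis[j]` are assumed callables returning \( \vec\nabla\phi_j \) and \( \nabla^2\phi_j \):

```python
import numpy as np

def slater_derivative_ratios(i, r_i, grad_phis, lap_phis, D_inv):
    """Gradient and Laplacian of |D| divided by |D| for particle i,
    each an O(N) contraction with the i-th column of the inverse."""
    col = D_inv[:, i]
    grad = sum(grad_phis[j](r_i) * col[j] for j in range(len(grad_phis)))
    lap = sum(lap_phis[j](r_i) * col[j] for j in range(len(lap_phis)))
    return grad, lap
```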
Important note: In most cases you end up with closed-form expressions for the single-particle wave functions. It is then useful to calculate the various derivatives analytically and implement them as separate functions.
The Slater determinant for atomic beryllium could, for example, take the form
$$
\Phi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3,\mathbf{r}_4, \alpha,\beta,\gamma,\delta)=\frac{1}{\sqrt{4!}}
\left| \begin{array}{cccc} \psi_{100\uparrow}(\mathbf{r}_1)& \psi_{100\uparrow}(\mathbf{r}_2)& \psi_{100\uparrow}(\mathbf{r}_3)&\psi_{100\uparrow}(\mathbf{r}_4) \\
\psi_{100\downarrow}(\mathbf{r}_1)& \psi_{100\downarrow}(\mathbf{r}_2)& \psi_{100\downarrow}(\mathbf{r}_3)&\psi_{100\downarrow}(\mathbf{r}_4) \\
\psi_{200\uparrow}(\mathbf{r}_1)& \psi_{200\uparrow}(\mathbf{r}_2)& \psi_{200\uparrow}(\mathbf{r}_3)&\psi_{200\uparrow}(\mathbf{r}_4) \\
\psi_{200\downarrow}(\mathbf{r}_1)& \psi_{200\downarrow}(\mathbf{r}_2)& \psi_{200\downarrow}(\mathbf{r}_3)&\psi_{200\downarrow}(\mathbf{r}_4) \end{array} \right|.
$$
This expression can lead to problems when we omit the spin degrees of freedom, as is common in many atomic physics calculations: if the spin labels are dropped, the spatial parts of \( \psi_{100\uparrow} \) and \( \psi_{100\downarrow} \) become identical, two rows of the matrix coincide, and the determinant vanishes. We can, however, rewrite it in terms of products of two smaller Slater determinants, one for spin up and one for spin down.
We can rewrite it as
$$
\Phi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3,\mathbf{r}_4, \alpha,\beta,\gamma,\delta)=\det\uparrow(1,2)\det\downarrow(3,4)-\det\uparrow(1,3)\det\downarrow(2,4)
$$
$$
-\det\uparrow(1,4)\det\downarrow(3,2)+\det\uparrow(2,3)\det\downarrow(1,4)-\det\uparrow(2,4)\det\downarrow(1,3)
$$
$$
+\det\uparrow(3,4)\det\downarrow(1,2),
$$
where we have defined
$$
\det\uparrow(1,2)=\frac{1}{\sqrt{2}}\left| \begin{array}{cc} \psi_{100\uparrow}(\mathbf{r}_1)& \psi_{100\uparrow}(\mathbf{r}_2)\\
\psi_{200\uparrow}(\mathbf{r}_1)& \psi_{200\uparrow}(\mathbf{r}_2) \end{array} \right|,
$$
and
$$
\det\downarrow(3,4)=\frac{1}{\sqrt{2}}\left| \begin{array}{cc} \psi_{100\downarrow}(\mathbf{r}_3)& \psi_{100\downarrow}(\mathbf{r}_4)\\
\psi_{200\downarrow}(\mathbf{r}_3)& \psi_{200\downarrow}(\mathbf{r}_4) \end{array} \right|.
$$
Note that if we again leave out the spin degrees of freedom, the determinant is still zero!
We want to avoid summing over spin variables, in particular when the interaction does not depend on spin.
It can be shown, see for example Moskowitz and Kalos, Int. J. Quantum Chem. 20 1107 (1981), that for the variational energy we can approximate the Slater determinant as
$$
\Phi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3,\mathbf{r}_4, \alpha,\beta,\gamma,\delta) \propto \det\uparrow(1,2)\det\downarrow(3,4),
$$
or more generally as
$$
\Phi(\mathbf{r}_1,\mathbf{r}_2,\dots \mathbf{r}_N) \propto \det\uparrow \det\downarrow,
$$
where we have the Slater determinant as the product of a spin up part involving the number of electrons with spin up only (2 for beryllium and 5 for neon) and a spin down part involving the electrons with spin down.
This ansatz is not antisymmetric under the exchange of electrons with opposite spins, but it can be shown that it gives the same expectation value for the energy as the full Slater determinant, as long as the Hamiltonian is spin independent. This is rather straightforward to see if you go back to the equations for the energy; we leave it as an exercise for the eager reader.
Can you think of observables where not respecting the symmetry can have consequences?
If we keep the spin degrees of freedom, which obviously leads to a more general code, we would need to flip spins as well and deal with the full Slater determinant. The above recipe is merely a simplification for the case of identical particles, the same spatial single-particle functions and equal numbers of spin-up and spin-down fermions.
This is a situation one encounters in, for example, a closed-shell nucleus like oxygen-16, or a neutral noble gas like helium or neon, with the same number of spin-up and spin-down orbitals and the same spatial single-particle functions. The example discussed above for neutral beryllium, where we fill the hydrogen-like states \( 1s \) and \( 2s \), is yet another case.
For those of you familiar with Hartree-Fock theory, this is often referred to as restricted Hartree-Fock theory. Unrestricted Hartree-Fock theory represents then the more general case.
The systems we will limit ourselves to are all systems which can be described by a restricted basis set. We will thus factorize the full determinant \( \vert\hat{D}\vert \) into two smaller ones, where each can be identified with \( \uparrow \) and \( \downarrow \) respectively:
$$
\vert\hat{D}\vert = \vert\hat{D}\vert_\uparrow\cdot \vert\hat{D}\vert_\downarrow
$$
The combined dimensionality of the two smaller determinants equals the dimensionality of the full determinant. Such a factorization is advantageous in that it makes it possible to perform the calculation of the ratio \( R \) and the updating of the inverse matrix separately for \( \vert\hat{D}\vert_\uparrow \) and \( \vert\hat{D}\vert_\downarrow \):
$$
\frac{\vert\hat{D}\vert^\mathrm{new}}{\vert\hat{D}\vert^\mathrm{old}} =
\frac{\vert\hat{D}\vert^\mathrm{new}_\uparrow}
{\vert\hat{D}\vert^\mathrm{old}_\uparrow}\cdot
\frac{\vert\hat{D}\vert^\mathrm{new}_\downarrow
}{\vert\hat{D}\vert^\mathrm{old}_\downarrow}
$$
This reduces the calculation time by a constant factor. The maximal time reduction happens in a system of equal numbers of \( \uparrow \) and \( \downarrow \) particles, so that the two factorized determinants are half the size of the original one.
Consider the case of moving only one particle at a time which originally had the following time scaling for one transition:
$$
O_R(N)+O_\mathrm{inverse}(N^2)
$$
For the factorized determinants one of the two determinants is obviously unaffected by the change so that it cancels from the ratio \( R \).
Therefore, only one determinant of size \( N/2 \) is involved in each calculation of \( R \) and update of the inverse matrix. The scaling of each transition then becomes:
$$
O_R(N/2)+O_\mathrm{inverse}(N^2/4)
$$
and the time scaling when the transitions for all \( N \) particles are put together:
$$
O_R(N^2/2)+O_\mathrm{inverse}(N^3/4)
$$
which gives the same reduction as in the case of moving all particles at once.
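In code, the factorization amounts to keeping two independent Slater blocks; a sketch, assuming (as a labeling convention only) that the first `N_up` particles carry spin up:

```python
import numpy as np

def ratio_factorized(i, r_i_new, phis_up, phis_dn, Dinv_up, Dinv_dn, N_up):
    """Only the spin block containing particle i enters the ratio R;
    the untouched block cancels between numerator and denominator."""
    if i < N_up:
        phi = np.array([p(r_i_new) for p in phis_up])
        return phi @ Dinv_up[:, i]
    phi = np.array([p(r_i_new) for p in phis_dn])
    return phi @ Dinv_dn[:, i - N_up]
```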
Computing the ratios discussed above requires that we maintain the inverse of the Slater matrix evaluated at the current position. Each time a trial position is accepted, row number \( i \) of the Slater matrix changes and its inverse has to be updated. Getting the inverse of an \( N \times N \) matrix by Gaussian elimination costs \( O(N^3) \) operations, a luxury we cannot afford each time a particle move is accepted. We will use the expression
$$
\begin{equation*}
\tag{8}
d^{-1}_{kj}(\mathbf{x}^{\mathrm{new}}) =
\left\{
\begin{array}{ll}
d^{-1}_{kj}(\mathbf{x}^{\mathrm{old}}) -
\dfrac{d^{-1}_{ki}(\mathbf{x}^{\mathrm{old}})}{R}
\sum_{l=1}^{N} d_{il}(\mathbf{x}^{\mathrm{new}})\, d^{-1}_{lj}(\mathbf{x}^{\mathrm{old}}) & \mbox{if } j \neq i \\[2ex]
\dfrac{d^{-1}_{ki}(\mathbf{x}^{\mathrm{old}})}{R}
\sum_{l=1}^{N} d_{il}(\mathbf{x}^{\mathrm{old}})\, d^{-1}_{lj}(\mathbf{x}^{\mathrm{old}}) & \mbox{if } j = i
\end{array}
\right.
\end{equation*}
$$
This equation scales as \( O(N^2) \). The evaluation of the determinant of an \( N \times N \) matrix by standard Gaussian elimination requires \( O(N^3) \) calculations. As there are \( N\cdot d \) independent coordinates, we would need to evaluate \( N\cdot d \) Slater determinants for the gradient (quantum force) and as many for the Laplacian (kinetic energy). With the updating algorithm we need only invert the Slater matrix once, which can be done by standard LU decomposition methods.
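A sketch of this one-time initialization, here with SciPy's LU routines (any standard linear-algebra library will do):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def initial_inverse(D):
    """One-time O(N^3) inversion of the Slater matrix via LU decomposition;
    all subsequent moves use the O(N^2) update formula instead."""
    lu, piv = lu_factor(D)
    return lu_solve((lu, piv), np.eye(D.shape[0]))
```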
The expectation value of the kinetic energy expressed in atomic units for electron \( i \) is
$$
\langle \hat{K}_i \rangle = -\frac{1}{2}\frac{\langle\Psi|\nabla_{i}^2|\Psi \rangle}{\langle\Psi|\Psi \rangle},
$$
or, in terms of the local kinetic energy,
$$
\begin{equation*}
\tag{9}
K_i = -\frac{1}{2}\frac{\nabla_{i}^{2} \Psi}{\Psi}.
\end{equation*}
$$
Writing the trial wave function as the product \( \Psi = \Psi_{D}\,\Psi_C \) of a Slater determinant \( \Psi_D \) and a correlation (Jastrow) factor \( \Psi_C \), we have
$$
\begin{align}
\frac{\nabla^2 \Psi}{\Psi} & = \frac{\nabla^2 ({\Psi_{D} \, \Psi_C})}{\Psi_{D} \, \Psi_C} = \frac{\nabla \cdot [\nabla {(\Psi_{D} \, \Psi_C)}]}{\Psi_{D} \, \Psi_C} = \frac{\nabla \cdot [ \Psi_C \nabla \Psi_{D} + \Psi_{D} \nabla \Psi_C]}{\Psi_{D} \, \Psi_C}\nonumber\\
& = \frac{\nabla \Psi_C \cdot \nabla \Psi_{D} + \Psi_C \nabla^2 \Psi_{D} + \nabla \Psi_{D} \cdot \nabla \Psi_C + \Psi_{D} \nabla^2 \Psi_C}{\Psi_{D} \, \Psi_C}
\tag{10}
\end{align}
$$
Collecting terms, we obtain
$$
\begin{align}
\frac{\nabla^2 \Psi}{\Psi}
& = \frac{\nabla^2 \Psi_{D}}{\Psi_{D}} + \frac{\nabla^2 \Psi_C}{ \Psi_C} + 2 \frac{\nabla \Psi_{D}}{\Psi_{D}}\cdot\frac{\nabla \Psi_C}{ \Psi_C}
\tag{11}
\end{align}
$$
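Assembling eq. (11) into the local kinetic energy is then a one-liner; a sketch where `grad_D`, `grad_C` are \( (N,d) \) arrays of the per-particle gradient ratios and `lap_D`, `lap_C` the Laplacian ratios summed over particles (the names are ours):

```python
import numpy as np

def local_kinetic_energy(grad_D, lap_D, grad_C, lap_C):
    """Eq. (11): -1/2 nabla^2 Psi / Psi from the determinant and Jastrow
    pieces, including the cross term 2 (grad D/D) . (grad C/C)."""
    cross = 2.0 * np.sum(grad_D * grad_C)
    return -0.5 * (lap_D + lap_C + cross)
```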
The second derivative of the Jastrow factor divided by the Jastrow factor (the way it enters the kinetic energy) is
$$
\left[\frac{\nabla^2 \Psi_C}{\Psi_C}\right]_x =\
2\sum_{k=1}^{N}
\sum_{i=1}^{k-1}\frac{\partial^2 g_{ik}}{\partial x_k^2}\ +\
\sum_{k=1}^N
\left(
\sum_{i=1}^{k-1}\frac{\partial g_{ik}}{\partial x_k} -
\sum_{i=k+1}^{N}\frac{\partial g_{ki}}{\partial x_i}
\right)^2
$$
But we have a simple form for the function, namely
$$
\Psi_{C}=\prod_{i < j}\exp{f(r_{ij})}= \exp{\left\{\sum_{i < j}\frac{ar_{ij}}{1+\beta r_{ij}}\right\}},
$$
and it is easy to see that for particle \( k \) we have
$$
\frac{\nabla^2_k \Psi_C}{\Psi_C }=
\sum_{i\ne k}\sum_{j\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)\cdot(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}f'(r_{ki})f'(r_{kj})+
\sum_{j\ne k}\left( f''(r_{kj})+\frac{2}{r_{kj}}f'(r_{kj})\right)
$$
Using
$$
f(r_{ij})= \frac{ar_{ij}}{1+\beta r_{ij}},
$$
and the derivatives \( f'(r_{kj})=df(r_{kj})/dr_{kj} \) and \( f''(r_{kj})=d^2f(r_{kj})/dr_{kj}^2 \), we find that for particle \( k \) we have
$$
\frac{\nabla^2_k \Psi_C}{\Psi_C }=
\sum_{i\ne k}\sum_{j\ne k}\frac{(\mathbf{r}_k-\mathbf{r}_i)\cdot(\mathbf{r}_k-\mathbf{r}_j)}{r_{ki}r_{kj}}\frac{a}{(1+\beta r_{ki})^2}
\frac{a}{(1+\beta r_{kj})^2}+
\sum_{j\ne k}\left(\frac{2a}{r_{kj}(1+\beta r_{kj})^2}-\frac{2a\beta}{(1+\beta r_{kj})^3}\right)
$$
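Since the double sum is just the square of the single gradient sum, the Laplacian ratio for particle \( k \) can be computed in one loop; a sketch with the derivatives of \( f \) written out:

```python
import numpy as np

def jastrow_laplacian_ratio(k, r, a, beta):
    """nabla_k^2 Psi_C / Psi_C for f(r) = a r / (1 + beta r), using
    f'(r) = a/(1 + beta r)^2 and f''(r) = -2 a beta/(1 + beta r)^3."""
    grad = np.zeros(r.shape[1])
    lap = 0.0
    for j in range(r.shape[0]):
        if j == k:
            continue
        rkj_vec = r[k] - r[j]
        rkj = np.linalg.norm(rkj_vec)
        fp = a / (1.0 + beta * rkj)**2
        grad += fp * rkj_vec / rkj                    # sum of f'(r_kj) unit vectors
        lap += 2.0 * fp / rkj - 2.0 * a * beta / (1.0 + beta * rkj)**3
    return lap + grad @ grad                          # double sum = |single sum|^2
```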
The gradient for the determinant is
$$
\frac{\mathbf{\nabla}_i\vert\hat{D}(\mathbf{r})\vert}
{\vert\hat{D}(\mathbf{r})\vert} =
\sum_{j=1}^N \mathbf{\nabla}_i d_{ij}(\mathbf{r})\,
d_{ji}^{-1}(\mathbf{r}) =
\sum_{j=1}^N \mathbf{\nabla}_i \phi_j(\mathbf{r}_i)\,
d_{ji}^{-1}(\mathbf{r}).
$$
We have
$$
\Psi_C=\prod_{i < j}g(r_{ij})= \exp{\left\{\sum_{i < j}\frac{ar_{ij}}{1+\beta r_{ij}}\right\}},
$$
and the gradient needed for the quantum force and the local energy is easy to compute. We get for particle \( k \)
$$
\frac{ \nabla_k \Psi_C}{ \Psi_C }= \sum_{j\ne k}\frac{\mathbf{r}_{kj}}{r_{kj}}\frac{a}{(1+\beta r_{kj})^2},
$$
which is rather easy to code. Remember to sum over all particles when you compute the local energy.
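A corresponding sketch for the gradient ratio of particle \( k \):

```python
import numpy as np

def jastrow_gradient_ratio(k, r, a, beta):
    """nabla_k Psi_C / Psi_C = sum_{j != k} (r_k - r_j)/r_kj * a/(1 + beta r_kj)^2."""
    grad = np.zeros(r.shape[1])
    for j in range(r.shape[0]):
        if j == k:
            continue
        rkj_vec = r[k] - r[j]
        rkj = np.linalg.norm(rkj_vec)
        grad += rkj_vec / rkj * a / (1.0 + beta * rkj)**2
    return grad
```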
We need to compute the ratio between wave functions, in particular for the Slater determinants. Recall from eq. (4) that
$$
R =\sum_{j=1}^N d_{ij}(\mathbf{r}^{\mathrm{new}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}}) =
\sum_{j=1}^N \phi_j(\mathbf{r}_i^{\mathrm{new}})\,
d_{ji}^{-1}(\mathbf{r}^{\mathrm{old}})
$$
As a starting point we may consider that each time a new position is suggested in the Metropolis algorithm, a row of the current Slater matrix experiences some kind of perturbation. Hence, the Slater matrix with its orbitals evaluated at the new position equals the old Slater matrix plus a perturbation matrix,
$$
\begin{equation*}
\tag{12}
d_{jk}(\mathbf{x^{new}}) = d_{jk}(\mathbf{x^{old}}) + \Delta_{jk},
\end{equation*}
$$
where
$$
\begin{equation*}
\tag{13}
\Delta_{jk} = \delta_{ik}[\phi_j(\mathbf{x_{i}^{new}}) - \phi_j(\mathbf{x_{i}^{old}})] = \delta_{ik}(\Delta\phi)_j .
\end{equation*}
$$
Computing the inverse of the transposed matrix we arrive at
$$
\begin{equation}
\tag{14}
d_{kj}(\mathbf{x^{new}})^{-1} = [d_{kj}(\mathbf{x^{old}}) + \Delta_{kj}]^{-1}.
\end{equation}
$$
The evaluation of the right hand side (rhs) term above is carried out by applying the identity \( (A + B)^{-1} = A^{-1} - (A + B)^{-1} B A^{-1} \). In compact notation it yields
$$
\begin{eqnarray*}
[\mathbf{D}^{T}(\mathbf{x^{new}})]^{-1} & = & [\mathbf{D}^{T}(\mathbf{x^{old}}) + \Delta^T]^{-1}\\
& = & [\mathbf{D}^{T}(\mathbf{x^{old}})]^{-1} - [\mathbf{D}^{T}(\mathbf{x^{old}}) + \Delta^T]^{-1} \Delta^T [\mathbf{D}^{T}(\mathbf{x^{old}})]^{-1}\\
& = & [\mathbf{D}^{T}(\mathbf{x^{old}})]^{-1} - \underbrace{[\mathbf{D}^{T}(\mathbf{x^{new}})]^{-1}}_{\text{by eq. (14)}} \Delta^T [\mathbf{D}^{T}(\mathbf{x^{old}})]^{-1}.
\end{eqnarray*}
$$
Using index notation, the last result may be expanded by
$$
\begin{eqnarray*}
d^{-1}_{kj}(\mathbf{x^{new}}) & = & d^{-1}_{kj}(\mathbf{x^{old}}) - \sum_{l} \sum_{m} d^{-1}_{km}(\mathbf{x^{new}}) \Delta^{T}_{ml} d^{-1}_{lj}(\mathbf{x^{old}})\\
& = & d^{-1}_{kj}(\mathbf{x^{old}}) - \sum_{l} \sum_{m} d^{-1}_{km}(\mathbf{x^{new}}) \Delta_{lm} d^{-1}_{lj}(\mathbf{x^{old}})\\
& = & d^{-1}_{kj}(\mathbf{x^{old}}) - \sum_{l} \sum_{m} d^{-1}_{km}(\mathbf{x^{new}}) \delta_{im} (\Delta \phi)_{l} d^{-1}_{lj}(\mathbf{x^{old}})\\
& = & d^{-1}_{kj}(\mathbf{x^{old}}) - d^{-1}_{ki}(\mathbf{x^{new}}) \sum_{l=1}^{N}(\Delta \phi)_{l} d^{-1}_{lj}(\mathbf{x^{old}})\\
& = & d^{-1}_{kj}(\mathbf{x^{old}}) - d^{-1}_{ki}(\mathbf{x^{new}}) \sum_{l=1}^{N}[\phi_{l}(\mathbf{r_{i}^{new}}) - \phi_{l}(\mathbf{r_{i}^{old}})] d^{-1}_{lj}(\mathbf{x^{old}}).
\end{eqnarray*}
$$
Since the move changes only row \( i \) of \( \mathbf{D} \), the cofactors along that row, and hence the \( i \)-th column of the adjugate \( \mathrm{adj}\,\mathbf{D} \), are unchanged. Using
$$\mathbf{D}^{-1}(\mathbf{x^{old}}) = \frac{\mathrm{adj}\,\mathbf{D}}{\vert\mathbf{D}(\mathbf{x^{old}})\vert} \quad \text{and} \quad \mathbf{D}^{-1}(\mathbf{x^{new}}) = \frac{\mathrm{adj}\,\mathbf{D}}{\vert\mathbf{D}(\mathbf{x^{new}})\vert}$$
for the elements of this column and dividing the two expressions, we get
$$\frac{d^{-1}_{ki}(\mathbf{x^{old}})}{d^{-1}_{ki}(\mathbf{x^{new}})} = \frac{\vert\mathbf{D}(\mathbf{x^{new}})\vert}{\vert\mathbf{D}(\mathbf{x^{old}})\vert} = R \quad\Rightarrow\quad d^{-1}_{ki}(\mathbf{x^{new}}) = \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R}.$$
We have
$$d^{-1}_{kj}(\mathbf{x^{new}}) = d^{-1}_{kj}(\mathbf{x^{old}}) - \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R} \sum_{l=1}^{N}[\phi_{l}(\mathbf{r_{i}^{new}}) - \phi_{l}(\mathbf{r_{i}^{old}})] d^{-1}_{lj}(\mathbf{x^{old}}),$$
or
$$
\begin{align*}
d^{-1}_{kj}(\mathbf{x^{new}}) = d^{-1}_{kj}(\mathbf{x^{old}})
&- \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R} \sum_{l=1}^{N}\phi_{l}(\mathbf{r_{i}^{new}})\, d^{-1}_{lj}(\mathbf{x^{old}}) \\
&+ \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R} \sum_{l=1}^{N}\phi_{l}(\mathbf{r_{i}^{old}})\, d^{-1}_{lj}(\mathbf{x^{old}}) \\
= d^{-1}_{kj}(\mathbf{x^{old}})
&- \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R} \sum_{l=1}^{N} d_{il}(\mathbf{x^{new}})\, d^{-1}_{lj}(\mathbf{x^{old}}) \\
&+ \frac{d^{-1}_{ki}(\mathbf{x^{old}})}{R} \sum_{l=1}^{N} d_{il}(\mathbf{x^{old}})\, d^{-1}_{lj}(\mathbf{x^{old}}).
\end{align*}
$$
In this equation, for \( j=i \) the first correction term cancels the leading term, since \( \sum_l d_{il}(\mathbf{x^{new}})\,d^{-1}_{li}(\mathbf{x^{old}}) = R \), while for \( j\neq i \) the last term vanishes, since \( \sum_l d_{il}(\mathbf{x^{old}})\,d^{-1}_{lj}(\mathbf{x^{old}}) = \delta_{ij} = 0 \). Therefore, the update of the inverse for the new Slater matrix is given by
$$
\begin{equation*}
\boxed{
d^{-1}_{kj}(\mathbf{x^{new}}) =
\left\{
\begin{array}{ll}
d^{-1}_{kj}(\mathbf{x^{old}}) -
\dfrac{d^{-1}_{ki}(\mathbf{x^{old}})}{R}
\sum_{l=1}^{N} d_{il}(\mathbf{x^{new}})\, d^{-1}_{lj}(\mathbf{x^{old}}) & \mbox{if } j \neq i \\[2ex]
\dfrac{d^{-1}_{ki}(\mathbf{x^{old}})}{R}
\sum_{l=1}^{N} d_{il}(\mathbf{x^{old}})\, d^{-1}_{lj}(\mathbf{x^{old}}) & \mbox{if } j = i
\end{array}
\right.}
\end{equation*}
$$
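As a quick numerical sanity check of the boxed formula (a sketch on a random matrix, not part of the derivation), we can compare the update against a direct inversion:

```python
import numpy as np

rng = np.random.default_rng(42)
N, i = 6, 2
D_old = rng.standard_normal((N, N))
D_new = D_old.copy()
D_new[i] = rng.standard_normal(N)        # "move particle i": replace row i

Dinv_old = np.linalg.inv(D_old)
R = D_new[i] @ Dinv_old[:, i]            # eq. (4)

S = D_new[i] @ Dinv_old                  # sum_l d_il(new) d^{-1}_lj(old)
Dinv_new = Dinv_old - np.outer(Dinv_old[:, i], S) / R   # j != i branch
Dinv_new[:, i] = Dinv_old[:, i] / R                     # j == i branch

assert np.allclose(Dinv_new, np.linalg.inv(D_new))
```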