Here we will discuss some of the classical methods for integrating a function. The methods we discuss are the equal-step methods, such as the trapezoidal, midpoint (rectangle) and Simpson's rules, and Gaussian quadrature methods.
The latter are more suitable for the case where the abscissas are not equally spaced. We emphasize methods for evaluating few-dimensional (typically up to four dimensions) integrals. Multi-dimensional integrals will be discussed in connection with Monte Carlo methods.
The integral
$$
\begin{equation}
I=\int_a^bf(x) dx
\tag{1}
\end{equation}
$$
has a very simple geometric meaning. The integral is the
area enclosed by the function \( f(x) \) between \( x=a \) and \( x=b \). It is subdivided into several smaller areas whose evaluation is to be approximated by different techniques. The areas under the curve can for example be approximated by rectangular boxes or trapezoids.
In considering equal step methods, our basic approach is that of approximating a function \( f(x) \) with a polynomial of at most degree \( N-1 \), given \( N \) integration points. If our polynomial is of degree \( 1 \), the function will be approximated with \( f(x)\approx a_0+a_1x \).
The algorithm for these integration methods is rather simple, and the number of possible approximations is essentially unlimited:
$$
\begin{equation*}
\int_a^bf(x) dx= \int_a^{a+h}f(x)dx + \int_{a+h}^{a+2h}f(x)dx+\dots +\int_{b-h}^{b}f(x)dx.
\end{equation*}
$$
One possible strategy then is to find a reliable polynomial expansion for \( f(x) \) in the smaller subintervals. Consider for example evaluating
$$
\begin{equation*}
\int_a^{a+2h}f(x)dx,
\end{equation*}
$$
which we rewrite as
$$
\begin{equation}
\int_a^{a+2h}f(x)dx=
\int_{x_0-h}^{x_0+h}f(x)dx.
\tag{2}
\end{equation}
$$
We have chosen a midpoint \( x_0 \) and have defined \( x_0=a+h \).
Using Lagrange's interpolation formula
$$
\begin{equation*}
P_N(x)=\sum_{i=0}^{N}\prod_{k\ne i} \frac{x-x_k}{x_i-x_k}y_i,
\end{equation*}
$$
we could attempt to approximate the function \( f(x) \) with a first-order polynomial in \( x \) in the two
sub-intervals \( x\in[x_0-h,x_0] \) and \( x\in[x_0,x_0+h] \). A first-order polynomial means simply that
we have, for say the interval \( x\in[x_0,x_0+h] \),
$$
\begin{equation*}
f(x)\approx P_1(x)=\frac{x-x_0}{(x_0+h)-x_0}f(x_0+h)+\frac{x-(x_0+h)}{x_0-(x_0+h)}f(x_0),
\end{equation*}
$$
and for the interval \( x\in[x_0-h,x_0] \)
$$
\begin{equation*}
f(x)\approx P_1(x)=\frac{x-(x_0-h)}{x_0-(x_0-h)}f(x_0)+\frac{x-x_0}{(x_0-h)-x_0}f(x_0-h).
\end{equation*}
$$
Having performed this subdivision and obtained a polynomial approximation in each subinterval, one from \( x_0-h \) to \( x_0 \) and the other from \( x_0 \) to \( x_0+h \),
$$
\begin{equation*}
\int_a^{a+2h}f(x)dx=\int_{x_0-h}^{x_0}f(x)dx+\int_{x_0}^{x_0+h}f(x)dx,
\end{equation*}
$$
we can easily calculate for example the second integral as
$$
\begin{equation*}
\int_{x_0}^{x_0+h}f(x)dx\approx \int_{x_0}^{x_0+h}\left(\frac{x-x_0}{(x_0+h)-x_0}f(x_0+h)+\frac{x-(x_0+h)}{x_0-(x_0+h)}f(x_0)\right)dx.
\end{equation*}
$$
This integral can be simplified to
$$
\begin{equation*}
\int_{x_0}^{x_0+h}f(x)dx\approx \int_{x_0}^{x_0+h}\left(\frac{x-x_0}{h}f(x_0+h)-\frac{x-(x_0+h)}{h}f(x_0)\right)dx,
\end{equation*}
$$
resulting in
$$
\begin{equation*}
\int_{x_0}^{x_0+h}f(x)dx=\frac{h}{2}\left(f(x_0+h) + f(x_0)\right)+O(h^3).
\end{equation*}
$$
Here we added the error made in approximating our integral
with a polynomial of degree \( 1 \).
The other integral gives
$$
\begin{equation*}
\int_{x_0-h}^{x_0}f(x)dx=\frac{h}{2}\left(f(x_0) + f(x_0-h)\right)+O(h^3),
\end{equation*}
$$
and adding up we obtain
$$
\begin{equation}
\int_{x_0-h}^{x_0+h}f(x)dx=\frac{h}{2}\left(f(x_0+h) + 2f(x_0) + f(x_0-h)\right)+O(h^3),
\tag{3}
\end{equation}
$$
which is the well-known trapezoidal rule. Concerning the error in the approximation made,
\( O(h^3)=O((b-a)^3/N^3) \), you should note
that this is the local error. Since we are splitting the integral from
\( a \) to \( b \) in \( N \) pieces, we will have to perform approximately \( N \)
such operations.
This means that the global error goes like \( O(h^2) \). The composite trapezoidal rule then reads
$$
\begin{equation}
I=\int_a^bf(x) dx=h\left(f(a)/2 + f(a+h) +f(a+2h)+
\dots +f(b-h)+ f(b)/2\right),
\tag{4}
\end{equation}
$$
with a global error which goes like \( O(h^2) \).
Hereafter we use the shorthand notations \( f_{-h}=f(x_0-h) \), \( f_{0}=f(x_0) \) and \( f_{h}=f(x_0+h) \).
The correct mathematical expression for the local error for the trapezoidal rule is
$$
\begin{equation*}
\int_a^bf(x)dx -\frac{b-a}{2}\left[f(a)+f(b)\right]=-\frac{h^3}{12}f^{(2)}(\xi),
\end{equation*}
$$
and the global error reads
$$
\begin{equation*}
\int_a^bf(x)dx -T_h(f)=-\frac{b-a}{12}h^2f^{(2)}(\xi),
\end{equation*}
$$
where \( T_h \) is the trapezoidal result and \( \xi \in [a,b] \).
The trapezoidal rule is easy to implement numerically through the following simple algorithm: choose the number of mesh points \( n \) and the step \( h=(b-a)/n \), start with the endpoint contributions \( f(a)/2 \) and \( f(b)/2 \), add the function values at the \( n-1 \) interior points, and finally multiply the total sum by \( h \).
A simple function which implements this algorithm is as follows
double TrapezoidalRule(double a, double b, int n, double (*func)(double))
{
     double TrapezSum;
     double fa, fb, x, step;
     int j;
     step=(b-a)/((double) n);
     fa=(*func)(a)/2. ;
     fb=(*func)(b)/2. ;
     TrapezSum=0.;
     for (j=1; j <= n-1; j++){
        x=j*step+a;
        TrapezSum+=(*func)(x);
     }
     TrapezSum=(TrapezSum+fb+fa)*step;
     return TrapezSum;
}  // end TrapezoidalRule
The function returns the value of the integral through the variable TrapezSum.
There is one new feature to note here, namely the transfer of a user defined function called func in the definition
double TrapezoidalRule(double a, double b, int n, double (*func)(double))
What happens here is that we are transferring a pointer to the name of a user defined function, which has as input a double precision variable and returns a double precision number. The function TrapezoidalRule is called as
TrapezSum = TrapezoidalRule(a, b, n, &MyFunction);
in the calling function. We note that a, b and n are called by value, while the user defined function MyFunction is transferred via a pointer, that is by reference.
Python offers an extremely versatile programming environment, allowing for the inclusion of analytical studies in a numerical program. As an example, we can combine the trapezoidal rule with SymPy, evaluating the integral \( 4\int_0^1 dx/(1+x^2) = \pi \) numerically and computing the absolute error with respect to the exact symbolic result.
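A minimal sketch of such a program (assuming SymPy is available; the function trapezoidal below is only an illustrative implementation) could read:
from sympy import Symbol, integrate

# trapezoidal rule for f on [a,b] with n subintervals
def trapezoidal(a, b, n, f):
    h = (b - a)/float(n)
    s = 0.5*(f(a) + f(b))
    for i in range(1, n):
        s += f(a + i*h)
    return h*s

def f(x):
    return 4.0/(1.0 + x*x)

# exact result via SymPy: 4*atan(1) = pi
x = Symbol('x')
exact = integrate(4/(1 + x**2), (x, 0, 1))
numerical = trapezoidal(0.0, 1.0, 100, f)
print("exact =", float(exact), " trapezoidal =", numerical,
      " absolute error =", abs(numerical - float(exact)))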
An extended version of the trapezoidal rule allows you to plot the relative error as a function of the number of integration points by comparing with the exact result. By increasing the number of points to \( 10^8 \) one arrives at a region where numerical errors start to accumulate.
This example shows the potential of combining numerical algorithms with symbolic calculations, allowing us thereby to check our numerical results against exact ones and to monitor the behaviour of the numerical error.
Another very simple approach is the so-called midpoint or rectangle method. In this case the integration area is split in a given number of rectangles with length \( h \) and height given by the mid-point value of the function. This gives the following simple rule for approximating an integral
$$
\begin{equation}
I=\int_a^bf(x) dx \approx h\sum_{i=1}^N f(x_{i-1/2}),
\tag{5}
\end{equation}
$$
where \( f(x_{i-1/2}) \) is the midpoint value of \( f \) for a given rectangle. We will discuss its truncation
error below. It is easy to implement this algorithm, as shown here
double RectangleRule(double a, double b, int n, double (*func)(double))
{
     double RectangleSum;
     double x, step;
     int j;
     step=(b-a)/((double) n);
     RectangleSum=0.;
     for (j = 0; j < n; j++){
        x = (j+0.5)*step + a;      // midpoint of a given rectangle
        RectangleSum+=(*func)(x);  // add value of function.
     }
     RectangleSum *= step;         // multiply with step length.
     return RectangleSum;
}  // end RectangleRule
The correct mathematical expression for the local error for the rectangular rule \( R_i(h) \) for element \( i \) is
$$
\begin{equation*}
\int_{-h}^hf(x)dx - R_i(h)=-\frac{h^3}{24}f^{(2)}(\xi),
\end{equation*}
$$
and the global error reads
$$
\begin{equation*}
\int_a^bf(x)dx -R_h(f)=-\frac{b-a}{24}h^2f^{(2)}(\xi),
\end{equation*}
$$
where \( R_h \) is the result obtained with rectangular rule and \( \xi \in [a,b] \).
Instead of using the above first-order polynomial approximations for \( f \), we now attempt to use a second-order polynomial. In this case we need three points in order to define a second-order polynomial approximation
$$
\begin{equation*}
f(x) \approx P_2(x)=a_0+a_1x+a_2x^2.
\end{equation*}
$$
Using again Lagrange's interpolation formula we have
$$
\begin{equation*}
P_2(x)=\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}y_2+
\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}y_1+
\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}y_0.
\end{equation*}
$$
Inserting this formula in the integral of Eq. (2) we obtain
$$
\begin{equation*}
\int_{-h}^{+h}f(x)dx=\frac{h}{3}\left(f_h + 4f_0 + f_{-h}\right)+O(h^5),
\end{equation*}
$$
which is Simpson's rule.
Note that the improved polynomial approximation gives a better error estimate, \( O(h^5) \) versus \( O(h^3) \). But this is again the local error. Using Simpson's rule we can easily compute the integral of Eq. (1) to be
$$
\begin{equation}
I=\int_a^bf(x) dx=\frac{h}{3}\left(f(a) + 4f(a+h) +2f(a+2h)+
\dots +4f(b-h)+ f(b)\right),
\tag{6}
\end{equation}
$$
with a global error which goes like \( O(h^4) \).
More formal expressions for the local and global errors are for the local error
$$
\begin{equation*}
\int_a^bf(x)dx -\frac{b-a}{6}\left[f(a)+4f((a+b)/2)+f(b)\right]=-\frac{h^5}{90}f^{(4)}(\xi),
\end{equation*}
$$
and for the global error
$$
\begin{equation*}
\int_a^bf(x)dx -S_h(f)=-\frac{b-a}{180}h^4f^{(4)}(\xi),
\end{equation*}
$$
with \( \xi\in[a,b] \) and \( S_h \) the result obtained with Simpson's method.
The composite Simpson's rule can easily be implemented numerically through the following simple algorithm: choose an even number of subintervals \( n \) and the step \( h=(b-a)/n \), start with the endpoint contributions \( f(a) \) and \( f(b) \), add the interior points with weight \( 4 \) at the odd-numbered points and weight \( 2 \) at the even-numbered ones, and multiply the total sum by \( h/3 \).
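A minimal Python sketch of this algorithm could read:
def simpson(a, b, n, f):
    # composite Simpson's rule; the number of subintervals n must be even
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a)/float(n)
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2)*f(a + i*h)
    return s*h/3.0

# example: integrate 4/(1+x^2) on [0,1]; the exact value is pi
print(simpson(0.0, 1.0, 100, lambda x: 4.0/(1.0 + x*x)))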
In more general terms, what we have done here is to approximate a given function \( f(x) \) with a polynomial of a certain degree. One can show that given \( n+1 \) distinct points \( x_0,\dots, x_n\in[a,b] \) and \( n+1 \) values \( y_0,\dots,y_n \) there exists a unique polynomial \( P_n(x) \) with the property
$$
\begin{equation*}
P_n(x_j) = y_j\hspace{0.5cm} j=0,\dots,n
\end{equation*}
$$
In the Lagrange representation the interpolating polynomial is given by
$$
\begin{equation*}
P_n(x) = \sum_{k=0}^nl_k(x)y_k,
\end{equation*}
$$
with the Lagrange factors
$$
\begin{equation*}
l_k(x) = \prod_{\begin{array}{c}i=0 \\ i\ne k\end{array}}^n\frac{x-x_i}{x_k-x_i}\hspace{0.2cm} k=0,\dots,n.
\end{equation*}
$$
If we for example set \( n=1 \), we obtain
$$
\begin{equation*}
P_1(x) = y_0\frac{x-x_1}{x_0-x_1}+y_1\frac{x-x_0}{x_1-x_0}=\frac{y_1-y_0}{x_1-x_0}x-\frac{y_1x_0-y_0x_1}{x_1-x_0},
\end{equation*}
$$
which we recognize as the equation for a straight line.
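As a quick check (assuming SciPy is available), the Lagrange interpolating polynomial through a set of points can be constructed directly:
import numpy as np
from scipy.interpolate import lagrange

# interpolate cos(x) at three nodes; lagrange returns a numpy poly1d object
xk = np.array([0.0, 0.5, 1.0])
p = lagrange(xk, np.cos(xk))
# compare the interpolating polynomial with cos(x) at an intermediate point
print(p(0.25), np.cos(0.25))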
The polynomial interpolatory quadrature of order \( n \) with equidistant quadrature points \( x_k=a+kh \) and step \( h=(b-a)/n \) is called the Newton-Cotes quadrature formula of order \( n \).
The methods we have presented hitherto are tailored to problems where the mesh points \( x_i \) are equidistantly spaced, \( x_i \) differing from \( x_{i+1} \) by the step \( h \).
The basic idea behind all integration methods is to approximate the integral
$$
\begin{equation*}
I=\int_a^bf(x)dx \approx \sum_{i=1}^N\omega_if(x_i),
\end{equation*}
$$
where \( \omega \) and \( x \) are the weights and the chosen mesh points, respectively.
In our previous discussion, these mesh points were fixed at the beginning, by choosing
a given number of points \( N \). The weights \( \omega \) resulted then from the integration
method we applied. Simpson's rule, see Eq. (6) would give
$$
\begin{equation*}
\omega : \left\{h/3,4h/3,2h/3,4h/3,\dots,4h/3,h/3\right\},
\end{equation*}
$$
for the weights, while the trapezoidal rule resulted in
$$
\begin{equation*}
\omega : \left\{h/2,h,h,\dots,h,h/2\right\}.
\end{equation*}
$$
In general, an integration formula which is based on a Taylor series using \( N \) points will integrate exactly a polynomial \( P \) of degree \( N-1 \). That is, the \( N \) weights \( \omega_n \) can be chosen to satisfy \( N \) linear equations, see chapter 3 of Ref.\ [3]. A greater precision for a given amount of numerical work can be achieved if we are willing to give up the requirement of equally spaced integration points. In Gaussian quadrature (hereafter GQ), both the mesh points and the weights are to be determined. The points will not be equally spaced.
The theory behind GQ is to obtain an arbitrary weight \( \omega \) through the use of so-called orthogonal polynomials. These polynomials are orthogonal in some interval, say for example \( [-1,1] \). Our points \( x_i \) are chosen in some optimal sense subject only to the constraint that they should lie in this interval. Together with the weights we have then \( 2N \) (\( N \) the number of points) parameters at our disposal.
Even though the integrand is not smooth, we could render it smooth by extracting from it the weight function of an orthogonal polynomial, i.e., we are rewriting
$$
\begin{equation}
I= \int_a^b f(x)dx =\int_a^b W(x)g(x)dx \approx \sum_{i=1}^N\omega_ig(x_i),
\tag{7}
\end{equation}
$$
where \( g \) is smooth and \( W \) is the weight function, which is to be associated with a given
orthogonal polynomial. Note that with a given weight function we end up evaluating the integrand
for the function \( g(x_i) \).
The weight function \( W \) is non-negative in the integration interval \( x\in [a,b] \) and such that for any \( n \ge 0 \) the integral \( \int_a^b |x|^n W(x) dx \) is finite. The name weight function arises from the fact that it may be used to give more emphasis to one part of the interval than another. A quadrature formula
$$
\begin{equation}
\int_a^b W(x)f(x)dx \approx \sum_{i=1}^N\omega_if(x_i),
\tag{8}
\end{equation}
$$
with \( N \) distinct quadrature points (mesh points) is called a Gaussian quadrature
formula if it integrates all polynomials \( p\in P_{2N-1} \) exactly, that is
$$
\begin{equation}
\int_a^bW(x)p(x)dx =\sum_{i=1}^N\omega_ip(x_i).
\tag{9}
\end{equation}
$$
It is assumed that \( W(x) \) is continuous and positive and that the integral
$$
\begin{equation*}
\int_a^bW(x)dx
\end{equation*}
$$
exists. Note that the replacement of \( f\rightarrow Wg \) is normally a better approximation
due to the fact that we may isolate possible singularities of \( W \) and its
derivatives at the endpoints of the interval.
The quadrature weights, or just weights (not to be confused with the weight function), are positive, and the sequence of Gaussian quadrature formulae is convergent, that is the sequence \( Q_N \) of quadrature formulae satisfies
$$
\begin{equation*}
Q_N(f)\rightarrow Q(f)=\int_a^bf(x)dx,
\end{equation*}
$$
in the limit \( N\rightarrow \infty \).
In general, a sequence of quadrature formulae
$$
\begin{equation*}
Q_N(f) = \sum_{i=1}^N\omega_i^{(N)}f(x_i^{(N)}),
\end{equation*}
$$
is convergent for all continuous functions \( f \) if it is convergent for all polynomials \( p \), that is
$$
\begin{equation*}
Q_N(p) \rightarrow Q(p),
\end{equation*}
$$
and if there exists a constant \( C \) such that
$$
\begin{equation*}
\sum_{i=1}^N|\omega_i^{(N)}| \le C,
\end{equation*}
$$
for all natural numbers \( N \). For Gaussian quadrature the weights are positive, so the latter condition is automatically fulfilled.
The error for the Gaussian quadrature formulae of order \( N \) is given by
$$
\begin{equation*}
\int_a^bW(x)f(x)dx-\sum_{k=1}^N\omega_kf(x_k)=\frac{f^{(2N)}(\xi)}{(2N)!}\int_a^bW(x)[q_{N}(x)]^2dx
\end{equation*}
$$
where \( q_{N} \) is the chosen orthogonal polynomial and \( \xi \) is a number in the interval \( [a,b] \).
We have assumed that \( f\in C^{2N}[a,b] \), viz. the space of all real or complex \( 2N \) times continuously
differentiable functions.
In science there are several important orthogonal polynomials which arise from the solution of differential equations. Well-known examples are the Legendre, Hermite, Laguerre and Chebyshev polynomials. They have the following weight functions
Weight function | Interval | Polynomial |
---|---|---|
\( W(x)=1 \) | \( x\in [-1,1] \) | Legendre |
\( W(x)=e^{-x^2} \) | \( x\in (-\infty,\infty) \) | Hermite |
\( W(x)=x^{\alpha}e^{-x} \) | \( x\in [0,\infty) \) | Laguerre |
\( W(x)=1/\sqrt{1-x^2} \) | \( x\in [-1,1] \) | Chebyshev |
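For completeness, a small sketch (assuming NumPy is available) listing the ready-made quadrature rules corresponding to these weight functions could read:
import numpy as np

# Gaussian quadrature points and weights for the classical weight functions
rules = [("Legendre", np.polynomial.legendre.leggauss),
         ("Hermite", np.polynomial.hermite.hermgauss),
         ("Laguerre", np.polynomial.laguerre.laggauss),
         ("Chebyshev", np.polynomial.chebyshev.chebgauss)]
for name, rule in rules:
    # note: laggauss corresponds to the Laguerre weight with alpha = 0
    x, w = rule(3)   # three-point rule
    print(name, x, w)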
The importance of the use of orthogonal polynomials in the evaluation of integrals can be summarized as follows.
Methods based on Taylor series using \( N \) points will integrate exactly a polynomial \( P \) of degree \( N-1 \). If a function \( f(x) \) can be approximated with a polynomial of degree \( N-1 \)
$$
\begin{equation*}
f(x)\approx P_{N-1}(x),
\end{equation*}
$$
with \( N \) mesh points we should be able to integrate exactly the polynomial \( P_{N-1} \).
Gaussian quadrature methods promise more than this. We can obtain a polynomial approximation of degree up to \( 2N-1 \) to \( f(x) \) and still get away with only \( N \) mesh points. More precisely, we approximate
$$
\begin{equation*}
f(x) \approx P_{2N-1}(x),
\end{equation*}
$$
and with only \( N \) mesh points these methods promise that
$$
\begin{equation*}
\int f(x)dx \approx \int P_{2N-1}(x)dx=\sum_{i=0}^{N-1} P_{2N-1}(x_i)\omega_i.
\end{equation*}
$$
The reason why we can represent a function \( f(x) \) with a polynomial of degree \( 2N-1 \) is due to the fact that we have \( 2N \) equations, \( N \) for the mesh points and \( N \) for the weights.
The mesh points are the zeros of the chosen orthogonal polynomial of order \( N \), and the weights are determined from the inverse of a matrix. An orthogonal polynomial of degree \( N \) defined in an interval \( [a,b] \) has precisely \( N \) distinct zeros on the open interval \( (a,b) \).
Before we detail how to obtain mesh points and weights with orthogonal polynomials, let us revisit some features of orthogonal polynomials by specializing to Legendre polynomials. In the text below, we reserve hereafter the labelling \( L_N \) for a Legendre polynomial of order \( N \), while \( P_N \) is an arbitrary polynomial of order \( N \). These polynomials form then the basis for the Gauss-Legendre method.
The Legendre polynomials are the solutions of an important differential equation in Science, namely
$$
\begin{equation*}
C(1-x^2)P-m_l^2P+(1-x^2)\frac{d}{dx}\left((1-x^2)\frac{dP}{dx}\right)=0.
\end{equation*}
$$
Here \( C \) is a constant. For \( m_l=0 \) we obtain the Legendre polynomials
as solutions, whereas \( m_l \ne 0 \) yields the so-called associated Legendre
polynomials. This differential equation arises in for example the solution
of the angular dependence of Schroedinger's
equation with spherically symmetric potentials such as
the Coulomb potential. The Legendre polynomials can be constructed via Rodrigues' formula
$$
\begin{equation*}
L_k(x)=\frac{1}{2^kk!}\frac{d^k}{dx^k}(x^2-1)^k \hspace{1cm} k=0,1,2,\dots,
\end{equation*}
$$
which defines the Legendre polynomials \( L_k \), normalized such that \( L_k(1)=1 \).
The latter fulfil the orthogonality relation
$$
\begin{equation}
\int_{-1}^1L_i(x)L_j(x)dx=\frac{2}{2i+1}\delta_{ij},
\tag{10}
\end{equation}
$$
and the recursion relation
$$
\begin{equation}
(j+1)L_{j+1}(x)+jL_{j-1}(x)-(2j+1)xL_j(x)=0.
\tag{11}
\end{equation}
$$
together with the normalization condition
$$
\begin{equation*}
L_N(1)=1.
\end{equation*}
$$
With these equations we can determine a Legendre polynomial of arbitrary order
with input polynomials of order \( N-1 \) and \( N-2 \).
As an example, consider the determination of \( L_0 \), \( L_1 \) and \( L_2 \). We have that
$$
\begin{equation*}
L_0(x) = c,
\end{equation*}
$$
with \( c \) a constant. Using the normalization equation \( L_0(1)=1 \)
we get that
$$
\begin{equation*}
L_0(x) = 1.
\end{equation*}
$$
For \( L_1(x) \) we start with the general form
$$
\begin{equation*}
L_1(x) = a+bx,
\end{equation*}
$$
and using the orthogonality relation
$$
\begin{equation*}
\int_{-1}^1L_0(x)L_1(x)dx=0,
\end{equation*}
$$
we obtain \( a=0 \) and with the condition \( L_1(1)=1 \), we obtain \( b=1 \), yielding
$$
\begin{equation*}
L_1(x) = x.
\end{equation*}
$$
For \( L_2(x) \) we start with the general form
$$
\begin{equation*}
L_2(x) = a+bx+cx^2,
\end{equation*}
$$
using the orthogonality relations
$$
\begin{equation*}
\int_{-1}^1L_0(x)L_2(x)dx=0,
\end{equation*}
$$
and
$$
\begin{equation*}
\int_{-1}^1L_1(x)L_2(x)dx=0,
\end{equation*}
$$
and the condition
\( L_2(1)=1 \), we obtain
$$
\begin{equation}
L_2(x) = \frac{1}{2}\left(3x^2-1\right).
\tag{12}
\end{equation}
$$
Alternatively, we could have employed the recursion relation of Eq. (11), resulting in
$$
\begin{equation*}
2L_2(x)=3xL_1(x)-L_0(x),
\end{equation*}
$$
which leads to Eq. (12).
Consider now an arbitrary polynomial \( Q_{N-1}(x) \) of degree \( N-1 \). It can be expanded in terms of Legendre polynomials as
$$
\begin{equation}
Q_{N-1}(x)=\sum_{k=0}^{N-1}\alpha_kL_{k}(x),
\tag{13}
\end{equation}
$$
where \( \alpha_k \)'s are constants.
Using the orthogonality relation of Eq. (10) we see that
$$
\begin{equation}
\int_{-1}^1L_N(x)Q_{N-1}(x)dx=\sum_{k=0}^{N-1} \int_{-1}^1L_N(x) \alpha_kL_{k}(x)dx=0.
\tag{14}
\end{equation}
$$
We will use this result in our construction of mesh points and weights
in the next subsection.
The first few Legendre polynomials are
$$
\begin{equation*}
L_0(x) =1,
\end{equation*}
$$
$$
\begin{equation*}
L_1(x) = x,
\end{equation*}
$$
$$
\begin{equation*}
L_2(x) = (3x^2-1)/2,
\end{equation*}
$$
$$
\begin{equation*}
L_3(x) = (5x^3-3x)/2,
\end{equation*}
$$
and
$$
\begin{equation*}
L_4(x) = (35x^4-30x^2+3)/8.
\end{equation*}
$$
// This function computes the Legendre polynomial of degree n
double Legendre( int n, double x)
{
     double r, s, t;
     int m;
     r = 0; s = 1.;
     // Use recursion relation to generate p1 and p2
     for (m=0; m < n; m++ )
     {
       t = r; r = s;
       s = (2*m+1)*x*r - m*t;
       s /= (m+1);
     } // end of for loop
     return s;
} // end of function Legendre
The variable \( s \) represents \( L_{m+1}(x) \), while \( r \) holds \( L_m(x) \) and \( t \) the value \( L_{m-1}(x) \).
To understand how the weights and the mesh points are generated, we define first a polynomial of degree \( 2N-1 \) (since we have \( 2N \) variables at hand, the mesh points and weights for \( N \) points). This polynomial can be represented through polynomial division by
$$
\begin{equation*}
P_{2N-1}(x)=L_N(x)P_{N-1}(x)+Q_{N-1}(x),
\end{equation*}
$$
where \( P_{N-1}(x) \) and \( Q_{N-1}(x) \) are some polynomials of degree \( N-1 \) or less.
The function \( L_N(x) \) is a Legendre polynomial of order \( N \).
Recall that we wanted to approximate an arbitrary function \( f(x) \) with a polynomial \( P_{2N-1} \) in order to evaluate
$$
\begin{equation*}
\int_{-1}^1f(x)dx\approx \int_{-1}^1P_{2N-1}(x)dx.
\end{equation*}
$$
We can rewrite this integral as
$$
\begin{equation*}
\int_{-1}^1P_{2N-1}(x)dx=\int_{-1}^1(L_N(x)P_{N-1}(x)+Q_{N-1}(x))dx=\int_{-1}^1Q_{N-1}(x)dx,
\end{equation*}
$$
due to the orthogonality properties of the Legendre polynomials. We see that it suffices
to evaluate \( \int_{-1}^1Q_{N-1}(x)dx \) in order to obtain
\( \int_{-1}^1P_{2N-1}(x)dx \). In addition, at the points \( x_k \) where \( L_N \) is zero, we have
$$
\begin{equation*}
P_{2N-1}(x_k)=Q_{N-1}(x_k)\hspace{1cm} k=0,1,\dots, N-1,
\end{equation*}
$$
and we see that through these \( N \) points we can fully define \( Q_{N-1}(x) \) and thereby the
integral. Note that we have chosen to let the numbering of the points run from \( 0 \) to \( N-1 \).
The reason for this choice is that we wish to have the same numbering as the order of a
polynomial of degree \( N-1 \). This numbering will be useful below when we introduce the matrix
elements which define the integration weights \( w_i \).
We next expand \( Q_{N-1}(x) \) in terms of Legendre polynomials, as in Eq. (13),
$$
\begin{equation}
Q_{N-1}(x)=\sum_{i=0}^{N-1}\alpha_iL_i(x).
\tag{15}
\end{equation}
$$
Using the orthogonality property of the Legendre polynomials we have
$$
\begin{equation*}
\int_{-1}^1Q_{N-1}(x)dx=\sum_{i=0}^{N-1}\alpha_i\int_{-1}^1L_0(x)L_i(x)dx=2\alpha_0,
\end{equation*}
$$
where we have just inserted \( L_0(x)=1 \)!
Evaluating \( Q_{N-1} \) at the zeros \( x_k \) of \( L_N \) and defining the matrix elements \( L_{ik}=L_i(x_k) \), we can write
$$
\begin{equation}
Q_{N-1}(x_k)=\sum_{i=0}^{N-1}\alpha_iL_i(x_k)=\sum_{i=0}^{N-1}\alpha_iL_{ik} \hspace{1cm} k=0,1,\dots, N-1.
\tag{16}
\end{equation}
$$
Since the Legendre polynomials are linearly independent of each other, none
of the columns in the matrix \( L_{ik} \) are linear combinations of the others.
The matrix \( \hat{L} \) formed by the elements \( L_{ik} \) therefore has an inverse with the property
$$
\begin{equation*}
\hat{L}^{-1}\hat{L} = \hat{I}.
\end{equation*}
$$
Multiplying both sides of Eq. (16) with \( \hat{L}^{-1} \) results in
$$
\begin{equation}
\sum_{i=0}^{N-1}(L^{-1})_{ki}Q_{N-1}(x_i)=\alpha_k.
\tag{17}
\end{equation}
$$
We can write this in a more compact form by defining the vectors
$$
\begin{equation*}
\hat{x}_k=\left(\begin{array} {c} x_0\\
x_1\\
\vdots\\
x_{N-1}\end{array}\right) \hspace{0.5cm}
\hat{\alpha}=\left(\begin{array} {c} \alpha_0\\
\alpha_1\\
\vdots\\
\alpha_{N-1}\end{array}\right),
\end{equation*}
$$
and the matrix
$$
\begin{equation*}
\hat{L}=\left(\begin{array} {cccc} L_0(x_0) & L_1(x_0) &\dots &L_{N-1}(x_0)\\
L_0(x_1) & L_1(x_1) &\dots &L_{N-1}(x_1)\\
\dots & \dots &\dots &\dots\\
L_0(x_{N-1}) & L_1(x_{N-1}) &\dots &L_{N-1}(x_{N-1})
\end{array}\right).
\end{equation*}
$$
With these definitions Eq. (16) takes the matrix form
$$
\begin{equation*}
Q_{N-1}(\hat{x}_k) = \hat{L}\hat{\alpha},
\end{equation*}
$$
yielding (if \( \hat{L} \) has an inverse)
$$
\begin{equation*}
\hat{L}^{-1}Q_{N-1}(\hat{x}_k) = \hat{\alpha},
\end{equation*}
$$
which is Eq. (17).
Recalling that
$$
\begin{equation*}
\int_{-1}^1P_{2N-1}(x)dx=\int_{-1}^1Q_{N-1}(x)dx,
\end{equation*}
$$
we get
$$
\begin{equation*}
\int_{-1}^1P_{2N-1}(x)dx=\int_{-1}^1Q_{N-1}(x)dx=2\alpha_0=
2\sum_{i=0}^{N-1}(L^{-1})_{0i}P_{2N-1}(x_i).
\end{equation*}
$$
If we identify the weights as \( \omega_i=2(L^{-1})_{0i} \), this can be written as
$$
\begin{equation*}
\int_{-1}^1P_{2N-1}(x)dx=\sum_{i=0}^{N-1}\omega_iP_{2N-1}(x_i)
\end{equation*}
$$
and if our function \( f(x) \) can be approximated by a polynomial \( P \) of degree
\( 2N-1 \), we have finally that
$$
\begin{equation*}
\int_{-1}^1f(x)dx\approx \int_{-1}^1P_{2N-1}(x)dx=\sum_{i=0}^{N-1}\omega_iP_{2N-1}(x_i) .
\end{equation*}
$$
In summary, the mesh points \( x_i \) are defined by the zeros of an orthogonal polynomial of degree \( N \), that is
\( L_N \), while the weights are
given by \( 2(L^{-1})_{0i} \).
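As a quick numerical check of this construction (a sketch assuming NumPy is available), we can compute the zeros of \( L_N \), build the matrix \( \hat{L} \) and verify that the weights \( 2(L^{-1})_{0i} \) agree with the library values:
import numpy as np
from numpy.polynomial import legendre as npleg

N = 4
# zeros of the Legendre polynomial L_N (the coefficient vector selects L_N)
x = np.sort(npleg.legroots([0]*N + [1]))
# matrix with elements Lhat[k, i] = L_i(x_k)
Lhat = np.array([[npleg.legval(xk, [0]*i + [1]) for i in range(N)] for xk in x])
# weights from the first row of the inverse matrix
w = 2.0*np.linalg.inv(Lhat)[0, :]
# compare with the library routine
x_ref, w_ref = npleg.leggauss(N)
print(np.allclose(x, x_ref), np.allclose(w, w_ref))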
Let us apply the above formal results to the case \( N=2 \). This means that we can approximate a function \( f(x) \) with a polynomial \( P_3(x) \) of order \( 2N-1=3 \).
The mesh points are the zeros of \( L_2(x)=1/2(3x^2-1) \). These points are \( x_0=-1/\sqrt{3} \) and \( x_1=1/\sqrt{3} \).
Specializing Eq. (16)
$$
\begin{equation*}
Q_{N-1}(x_k)=\sum_{i=0}^{N-1}\alpha_iL_i(x_k) \hspace{1cm} k=0,1,\dots, N-1.
\end{equation*}
$$
to \( N=2 \) yields
$$
\begin{equation*}
Q_1(x_0)=\alpha_0-\alpha_1\frac{1}{\sqrt{3}},
\end{equation*}
$$
and
$$
\begin{equation*}
Q_1(x_1)=\alpha_0+\alpha_1\frac{1}{\sqrt{3}},
\end{equation*}
$$
since \( L_0(x=\pm 1/\sqrt{3})=1 \) and \( L_1(x=\pm 1/\sqrt{3})=\pm 1/\sqrt{3} \).
The matrix \( \hat{L} \) defined in Eq. (16) is then
$$
\begin{equation*}
\hat{L}=\left(\begin{array} {cc} 1 & -\frac{1}{\sqrt{3}}\\
1 & \frac{1}{\sqrt{3}}\end{array}\right),
\end{equation*}
$$
with an inverse given by
$$
\begin{equation*}
\hat{L}^{-1}=\frac{\sqrt{3}}{2}\left(\begin{array} {cc} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}}\\
-1 & 1\end{array}\right).
\end{equation*}
$$
The weights are given by the matrix elements \( 2(L^{-1})_{0k} \). We then have
\( \omega_0=1 \) and \( \omega_1=1 \).
Summarizing, for Legendre polynomials with \( N=2 \) we have weights
$$
\begin{equation*}
\omega : \left\{1,1\right\},
\end{equation*}
$$
and mesh points
$$
\begin{equation*}
x : \left\{-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right\}.
\end{equation*}
$$
If we then wish to integrate
$$
\begin{equation*}
\int_{-1}^1f(x)dx,
\end{equation*}
$$
with \( f(x)=x^2 \), we approximate
$$
\begin{equation*}
I=\int_{-1}^1x^2dx \approx \sum_{i=0}^{N-1}\omega_ix_i^2.
\end{equation*}
$$
Inserting the mesh points and weights from above we obtain
$$
\begin{equation*}
I=\int_{-1}^1x^2dx =\sum_{i=0}^{1}\omega_ix_i^2=\frac{1}{3}+\frac{1}{3}=\frac{2}{3},
\end{equation*}
$$
the exact answer!
If we were to employ the trapezoidal rule we would get
$$
\begin{equation*}
I=\int_{-1}^1x^2dx \approx \frac{b-a}{2}\left(f(a)+f(b)\right)=
\frac{1-(-1)}{2}\left((-1)^2+(1)^2\right)=2!
\end{equation*}
$$
With just two points we can calculate exactly the integral of a second-order
polynomial, since the method approximates the exact function with a polynomial
of degree \( 2N-1=3 \).
How many points do you need with the trapezoidal rule in order to achieve a
similar accuracy?
Note that the Gauss-Legendre method is not limited to an interval [-1,1], since we can always through a change of variable
$$
\begin{equation*}
t=\frac{b-a}{2}x+\frac{b+a}{2},
\end{equation*}
$$
rewrite the integral for an interval [a,b]
$$
\begin{equation*}
\int_a^bf(t)dt=\frac{b-a}{2}\int_{-1}^1f\left(\frac{(b-a)x}{2}+\frac{b+a}{2}\right)dx.
\end{equation*}
$$
If we have an integral on the form
$$
\begin{equation*}
\int_0^{\infty}f(t)dt,
\end{equation*}
$$
we can choose new mesh points and weights by using the mapping
$$
\begin{equation*}
\tilde{x}_i=\tan\left\{\frac{\pi}{4}(1+x_i)\right\},
\end{equation*}
$$
and
$$
\begin{equation*}
\tilde{\omega}_i= \frac{\pi}{4}\frac{\omega_i}{\cos^2\left(\frac{\pi}{4}(1+x_i)\right)},
\end{equation*}
$$
where \( x_i \) and \( \omega_i \) are the original mesh points and weights in the
interval \( [-1,1] \), while \( \tilde{x}_i \) and \( \tilde{\omega}_i \) are the new
mesh points and weights for the interval \( [0,\infty) \).
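A small sketch of this mapping (assuming NumPy), applied to \( \int_0^{\infty} x^2e^{-x}dx=2 \), could read:
import numpy as np

N = 20
# Gauss-Legendre points and weights on [-1,1]
x, w = np.polynomial.legendre.leggauss(N)
# map points and weights to [0, infinity) as described above
xt = np.tan(np.pi/4.0*(1.0 + x))
wt = np.pi/4.0*w/np.cos(np.pi/4.0*(1.0 + x))**2
# the exact value of the integral of x^2 exp(-x) is 2
print(np.sum(wt*xt**2*np.exp(-xt)))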
If we are able to rewrite our integral of Eq. (7) with a weight function \( W(x)=x^{\alpha}e^{-x} \) with integration limits \( [0,\infty) \), we could then use the Laguerre polynomials. The polynomials form then the basis for the Gauss-Laguerre method which can be applied to integrals of the form
$$
\begin{equation*}
I=\int_0^{\infty}f(x)dx =\int_0^{\infty}x^{\alpha}e^{-x}g(x)dx.
\end{equation*}
$$
The Laguerre polynomials are the solutions of the differential equation
$$
\begin{equation*}
\left(\frac{d^2 }{dx^2}-\frac{d }{dx}+\frac{\lambda}{x}-\frac{l(l+1)}{x^2}\right){\cal L}(x)=0,
\end{equation*}
$$
where \( l \) is an integer \( l\ge 0 \) and \( \lambda \) a constant. This equation
arises for example from the solution of the radial Schroedinger equation with
a centrally symmetric potential such as the Coulomb potential.
The first few Laguerre polynomials are
$$
\begin{equation*}
{\cal L}_0(x)=1,
\end{equation*}
$$
$$
\begin{equation*}
{\cal L}_1(x)=1-x,
\end{equation*}
$$
$$
\begin{equation*}
{\cal L}_2(x)=\left(x^2-4x+2\right)/2,
\end{equation*}
$$
$$
\begin{equation*}
{\cal L}_3(x)=\left(-x^3+9x^2-18x+6\right)/6,
\end{equation*}
$$
and
$$
\begin{equation*}
{\cal L}_4(x)=\left(x^4-16x^3+72x^2-96x+24\right)/24.
\end{equation*}
$$
They fulfil the orthonormality relation
$$
\begin{equation*}
\int_{0}^{\infty}e^{-x}{\cal L}_n(x)^2dx=1,
\end{equation*}
$$
and the recursion relation
$$
\begin{equation*}
(n+1){\cal L}_{n+1}(x)=(2n+1-x){\cal L}_{n}(x)-n{\cal L}_{n-1}(x).
\end{equation*}
$$
In a similar way, for an integral which goes like
$$
\begin{equation*}
I=\int_{-\infty}^{\infty}f(x)dx =\int_{-\infty}^{\infty}e^{-x^2}g(x)dx,
\end{equation*}
$$
we could use the Hermite polynomials in order to extract weights and mesh points.
The Hermite polynomials are the solutions of the following differential
equation
$$
\begin{equation}
\frac{d^2H(x)}{dx^2}-2x\frac{dH(x)}{dx}+
(\lambda-1)H(x)=0.
\tag{18}
\end{equation}
$$
The first few Hermite polynomials are
$$
\begin{equation*}
H_0(x)=1,
\end{equation*}
$$
$$
\begin{equation*}
H_1(x)=2x,
\end{equation*}
$$
$$
\begin{equation*}
H_2(x)=4x^2-2,
\end{equation*}
$$
$$
\begin{equation*}
H_3(x)=8x^3-12x,
\end{equation*}
$$
and
$$
\begin{equation*}
H_4(x)=16x^4-48x^2+12.
\end{equation*}
$$
They fulfil the orthogonality relation
$$
\begin{equation*}
\int_{-\infty}^{\infty}e^{-x^2}H_n(x)^2dx=2^nn!\sqrt{\pi},
\end{equation*}
$$
and the recursion relation
$$
\begin{equation*}
H_{n+1}(x)=2xH_{n}(x)-2nH_{n-1}(x).
\end{equation*}
$$
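As a small illustration (assuming NumPy), the corresponding Gauss-Hermite rule reproduces for example \( \int_{-\infty}^{\infty}e^{-x^2}x^2dx=\sqrt{\pi}/2 \) already with very few points:
import numpy as np

# Gauss-Hermite points and weights for the weight function exp(-x^2)
x, w = np.polynomial.hermite.hermgauss(5)
# integrate g(x) = x^2 against the weight; the exact value is sqrt(pi)/2
print(np.sum(w*x**2), np.sqrt(np.pi)/2.0)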
As test cases for the methods discussed above, consider the two integrals
$$
\begin{equation*}
\int_1^{100}\frac{\exp{(-x)}}{x}dx,
\end{equation*}
$$
and
$$
\begin{equation*}
\int_{0}^{3}\frac{1}{2+x^2}dx.
\end{equation*}
$$
The following program compares the trapezoidal rule, Simpson's rule and the Gauss-Legendre method for a given integrand, defined in the function int_function.
#include <iostream>
#include "lib.h"
using namespace std;

// Here we define various functions called by the main program
// this function defines the function to integrate
double int_function(double x);

// Main function begins here
int main()
{
     int n;
     double a, b;
     cout << "Read in the number of integration points" << endl;
     cin >> n;
     cout << "Read in integration limits" << endl;
     cin >> a >> b;
     // reserve space in memory for vectors containing the mesh points
     // weights and function values for the use of the gauss-legendre
     // method
     double *x = new double [n];
     double *w = new double [n];
     // set up the mesh points and weights
     gauss_legendre(a, b, x, w, n);
     // evaluate the integral with the Gauss-Legendre method
     // Note that we initialize the sum
     double int_gauss = 0.;
     for ( int i = 0; i < n; i++){
        int_gauss += w[i]*int_function(x[i]);
     }
     // final output
     cout << "Trapez-rule = " << trapezoidal_rule(a, b, n, int_function)
          << endl;
     cout << "Simpson's rule = " << simpson(a, b, n, int_function)
          << endl;
     cout << "Gaussian quad = " << int_gauss << endl;
     delete [] x;
     delete [] w;
     return 0;
}  // end of main program

// this function defines the function to integrate
double int_function(double x)
{
  double value = 4./(1.+x*x);
  return value;
} // end of function to evaluate
The table below lists the results for the first integral, \( \int_1^{100}\exp{(-x)}/x\,dx \), as functions of the number of integration points \( N \).
\( N \) | Trapez | Simpson | Gauss-Legendre |
---|---|---|---|
10 | 1.821020 | 1.214025 | 0.1460448 |
20 | 0.912678 | 0.609897 | 0.2178091 |
40 | 0.478456 | 0.333714 | 0.2193834 |
100 | 0.273724 | 0.231290 | 0.2193839 |
1000 | 0.219984 | 0.219387 | 0.2193839 |
We note here that, since the integration interval is rather large and the integrand goes slowly to zero for large values of \( x \), both the trapezoidal rule and Simpson's method need rather many points in order to approach the Gauss-Legendre result. This integrand demonstrates clearly the strength of the Gauss-Legendre method (and of other GQ methods as well): few points are needed in order to achieve a very high precision.
The corresponding results for the second integral, \( \int_{0}^{3}dx/(2+x^2) \), are listed in the next table.
\( N \) | Trapez | Simpson | Gauss-Legendre |
---|---|---|---|
10 | 0.798861 | 0.799231 | 0.799233 |
20 | 0.799140 | 0.799233 | 0.799233 |
40 | 0.799209 | 0.799233 | 0.799233 |
100 | 0.799229 | 0.799233 | 0.799233 |
1000 | 0.799233 | 0.799233 | 0.799233 |
The following Python code can be run interactively, either in a browser or in an IPython/Jupyter notebook. It compares the trapezoidal rule and Gaussian quadrature with the exact result from the symbolic Python package SymPy, using up to 1000 integration points, for the integral
$$
I = 2 = \int_0^{\infty} x^2 \exp{(-x)} dx.
$$
For the trapezoidal rule the results will vary strongly depending on how the infinity limit is approximated. Try to run the code below for different finite approximations to \( \infty \).
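A minimal sketch of such a comparison (assuming NumPy and SymPy are available; here Gauss-Laguerre quadrature is used for the Gaussian part) could read as follows, where the finite upper limit b used for the trapezoidal rule is the quantity to experiment with:
import numpy as np
from sympy import symbols, integrate, exp, oo

# exact value from SymPy: the integral of x^2 exp(-x) from 0 to infinity is 2
xs = symbols('x')
exact = float(integrate(xs**2*exp(-xs), (xs, 0, oo)))

def f(t):
    return t*t*np.exp(-t)

def trapezoidal(a, b, n):
    t = np.linspace(a, b, n + 1)
    h = (b - a)/float(n)
    return h*(np.sum(f(t)) - 0.5*(f(a) + f(b)))

# Gauss-Laguerre: the weight exp(-x) is factored out, so g(x) = x^2
xl, wl = np.polynomial.laguerre.laggauss(10)
gauss = np.sum(wl*xl**2)

b = 50.0   # finite approximation to infinity; try different values
print("exact:", exact)
print("trapezoidal (b = %g, n = 1000): %g" % (b, trapezoidal(0.0, b, 1000)))
print("Gauss-Laguerre (10 points): %g" % gauss)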
So-called principal value (PV) integrals are often employed in physics, from Green's functions for scattering to dispersion relations. Dispersion relations are often related to measurable quantities and provide important consistency checks in atomic, nuclear and particle physics. A PV integral is defined as
$$
\begin{equation*}
I(x)={\cal P}\int_a^bdt\frac{f(t)}{t-x}=\lim_{\epsilon\rightarrow 0^+}
\left[\int_a^{x-\epsilon}dt\frac{f(t)}{t-x}+\int_{x+\epsilon}^bdt\frac{f(t)}{t-x}\right],
\end{equation*}
$$
and
arises in applications
of Cauchy's residue theorem when the pole \( x \) lies
on the real axis within the interval of integration \( [a,b] \). Here \( {\cal P} \) stands for the principal value. An important assumption is that the function \( f(t) \) is continuous
on the interval of integration.
In case \( f(t) \) is given in closed form or has an analytic continuation in the complex plane, it may be possible to obtain a closed-form expression for the above integral.
However, the situation which we are often confronted with is that \( f(t) \) is only known at some points \( t_i \) with corresponding values \( f(t_i) \). In order to obtain \( I(x) \) we need to resort to a numerical evaluation.
To evaluate such an integral, let us first rewrite it as
$$
\begin{equation*}
{\cal P}\int_a^bdt\frac{f(t)}{t-x}=
\int_a^{x-\Delta}dt\frac{f(t)}{t-x}+\int_{x+\Delta}^bdt\frac{f(t)}{t-x}+
{\cal P}\int_{x-\Delta}^{x+\Delta}dt\frac{f(t)}{t-x},
\end{equation*}
$$
where we have isolated the principal value part in the last integral.
Defining a new variable \( u=t-x \), we can rewrite the principal value integral as
$$
\begin{equation}
I_{\Delta}(x)={\cal P}\int_{-\Delta}^{+\Delta}du\frac{f(u+x)}{u}.
\tag{19}
\end{equation}
$$
One possibility is to Taylor expand \( f(u+x) \) around \( u=0 \), and compute
derivatives to a certain order as we did for the Trapezoidal rule or
Simpson's rule.
Since all terms with even powers of \( u \) in the Taylor expansion disappear,
we have that
$$
\begin{equation*}
I_{\Delta}(x)\approx \sum_{n=0}^{N_{max}}f^{(2n+1)}(x)
\frac{2\Delta^{2n+1}}{(2n+1)(2n+1)!}.
\end{equation*}
$$
To evaluate higher-order derivatives may be both time consuming and delicate from a numerical point of view, since there is always the risk of losing precision when calculating derivatives numerically. Unless we have an analytic expression for \( f(u+x) \) and can evaluate the derivatives in closed form, the above approach is not the preferred one.
Rather, we show here how to use the Gauss-Legendre method to compute Eq. (19). Let us first introduce a new variable \( s=u/\Delta \) and rewrite Eq. (19) as
$$
\begin{equation}
I_{\Delta}(x)={\cal P}\int_{-1}^{+1}ds\frac{f(\Delta s+x)}{s}.
\tag{20}
\end{equation}
$$
The integration limits are now from \( -1 \) to \( 1 \), as for the Legendre polynomials. The principal value in Eq. (20) is however rather tricky to evaluate numerically, mainly since computers have limited precision. We will here use a subtraction trick often used when dealing with singular integrals in numerical calculations. We introduce first the calculus relation
$$
\begin{equation*}
\int_{-1}^{+1} \frac{ds}{s} =0.
\end{equation*}
$$
It means that the curve \( 1/s \) has equal and opposite
areas on both sides of the singular point \( s=0 \).
If we then note that \( f(x) \) is just a constant, we have also
$$
\begin{equation*}
f(x)\int_{-1}^{+1} \frac{ds}{s}=\int_{-1}^{+1}f(x) \frac{ds}{s} =0.
\end{equation*}
$$
Subtracting this equation from Eq. (20) yields
$$
\begin{equation}
I_{\Delta}(x)={\cal P}\int_{-1}^{+1}ds\frac{f(\Delta s+x)}{s}=\int_{-1}^{+1}ds\frac{f(\Delta s+x)-f(x)}{s},
\tag{21}
\end{equation}
$$
and the integrand is no longer singular, since
\( \lim_{s \rightarrow 0} (f(\Delta s+x) -f(x))=0 \); at \( s=0 \) the integrand
takes the finite value \( \Delta f'(x) \).
Eq. (21) is now rewritten using the Gauss-Legendre method resulting in
$$
\begin{equation}
\int_{-1}^{+1}ds\frac{f(\Delta s+x)-f(x)}{s}=\sum_{i=1}^{N}\omega_i\frac{f(\Delta s_i+x)-f(x)}{s_i},
\tag{22}
\end{equation}
$$
where \( s_i \) are the mesh points (\( N \) in total) and \( \omega_i \) are the weights.
In the selection of mesh points for a PV integral, it is important to use an even number of points, since an odd number of mesh points always picks \( s_i=0 \) as one of the mesh points. The sum in Eq. (22) will then diverge.
Let us apply this method to the integral
$$
\begin{equation}
I(x)={\cal P}\int_{-1}^{+1}dt\frac{e^t}{t}.
\tag{23}
\end{equation}
$$
The integrand diverges at \( x=t=0 \). We
rewrite it using Eq. (21) as
$$
\begin{equation}
{\cal P}\int_{-1}^{+1}dt\frac{e^t}{t}=\int_{-1}^{+1}dt\frac{e^t-1}{t},
\tag{24}
\end{equation}
$$
since \( e^x=e^0=1 \). With Eq. (22) we have then
$$
\begin{equation}
\int_{-1}^{+1}dt\frac{e^t-1}{t}\approx \sum_{i=1}^{N}\omega_i\frac{e^{t_i}-1}{t_i}.
\tag{25}
\end{equation}
$$
The exact result is \( 2.11450175075\dots \). With just two mesh points we recall from the previous subsection that \( \omega_1=\omega_2=1 \) and that the mesh points are the zeros of \( L_2(x) \), namely \( x_1=-1/\sqrt{3} \) and \( x_2=1/\sqrt{3} \). Setting \( N=2 \) and inserting these values in the last equation gives
$$
\begin{equation*}
I_2(x=0)=\sqrt{3}\left(e^{1/\sqrt{3}}-e^{-1/\sqrt{3}}\right)=2.1129772845.
\end{equation*}
$$
With six mesh points we reproduce the exact result to all the digits quoted above,
$$
\begin{equation*}
I_6(x=0)=2.11450175075!
\end{equation*}
$$
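A small sketch of this procedure with NumPy's Gauss-Legendre points and weights (remember the even number of mesh points) could read:
import numpy as np

def pv_exp_over_t(N):
    # Gauss-Legendre points and weights on [-1,1]; N should be even
    t, w = np.polynomial.legendre.leggauss(N)
    # subtraction trick: (exp(t)-1)/t is regular at t = 0
    return np.sum(w*(np.exp(t) - 1.0)/t)

for N in (2, 6, 10):
    print(N, pv_exp_over_t(N))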
We can repeat the above subtraction trick for more complicated integrands. First we modify the integration limits to \( \pm \infty \) and use the fact that
$$
\begin{equation*}
\int_{-\infty}^{\infty} \frac{dk}{k-k_0}=
\int_{-\infty}^{0} \frac{dk}{k-k_0}+
\int_{0}^{\infty} \frac{dk}{k-k_0} =0.
\end{equation*}
$$
A change of variable \( u=-k \) in the integral with limits from \( -\infty \) to \( 0 \) gives
$$
\begin{equation*}
\int_{-\infty}^{\infty} \frac{dk}{k-k_0}=
\int_{\infty}^{0} \frac{-du}{-u-k_0}+
\int_{0}^{\infty} \frac{dk}{k-k_0}= \int_{0}^{\infty} \frac{dk}{-k-k_0}+
\int_{0}^{\infty} \frac{dk}{k-k_0}=0.
\end{equation*}
$$
It means that the curve \( 1/(k-k_0) \) has equal and opposite areas on both sides of the singular point \( k_0 \). If we break the integral into one over positive \( k \) and one over negative \( k \), a change of variable \( k\rightarrow -k \) allows us to rewrite the last equation as
$$
\begin{equation*}
\int_{0}^{\infty} \frac{dk}{k^2-k_0^2} =0.
\end{equation*}
$$
We can use this to express a principal value integral as
$$
\begin{equation}
{\cal P}\int_{0}^{\infty} \frac{f(k)dk}{k^2-k_0^2} =
\int_{0}^{\infty} \frac{(f(k)-f(k_0))dk}{k^2-k_0^2},
\tag{26}
\end{equation}
$$
where the right-hand side is no longer singular at
\( k=k_0 \); there it is proportional to the derivative \( df/dk \),
and it can be evaluated numerically as any other integral.
Such a trick is often used when evaluating integral equations.
Here we show an example of a multidimensional integral which appears in quantum mechanical calculations.
The ansatz for the wave function for two electrons is given by the product of two \( 1s \) wave functions as
$$
\Psi({\bf r}_1,{\bf r}_2) = \exp{(-\alpha (r_1+r_2))}.
$$
The integral we need to solve is the quantum mechanical expectation value of the correlation
energy between two electrons, namely
$$
I = \int_{-\infty}^{\infty} d{\bf r}_1d{\bf r}_2 \exp{(-2\alpha (r_1+r_2))}\frac{1}{|{\bf r}_1-{\bf r}_2|}.
$$
The integral has the exact value \( 5\pi^2/16^2 \approx 0.19277 \).
If we use Gaussian quadrature with Legendre polynomials (without rewriting the integral), we have
double *x = new double [N];
double *w = new double [N];
// set up the mesh points and weights
GaussLegendrePoints(a, b, x, w, N);
// evaluate the integral with the Gauss-Legendre method
// Note that we initialize the sum
double int_gauss = 0.;
// six-double loops
for (int i=0;i<N;i++){
  for (int j = 0;j<N;j++){
    for (int k = 0;k<N;k++){
      for (int l = 0;l<N;l++){
        for (int m = 0;m<N;m++){
          for (int n = 0;n<N;n++){
            int_gauss += w[i]*w[j]*w[k]*w[l]*w[m]*w[n]
                        *int_function(x[i],x[j],x[k],x[l],x[m],x[n]);
          }
        }
      }
    }
  }
}
// this function defines the function to integrate
double int_function(double x1, double y1, double z1, double x2, double y2, double z2)
{
  double alpha = 2.;
  // evaluate the different terms of the exponential
  double exp1 = -2*alpha*sqrt(x1*x1+y1*y1+z1*z1);
  double exp2 = -2*alpha*sqrt(x2*x2+y2*y2+z2*z2);
  double deno = sqrt(pow((x1-x2),2)+pow((y1-y2),2)+pow((z1-z2),2));
  // avoid the singular points where the two positions coincide on the grid
  if (deno < 1.0e-6) return 0;
  return exp(exp1+exp2)/deno;
} // end of function to evaluate
Using Legendre polynomials for the Gaussian quadrature is not very efficient here. There are several reasons for this: the infinite integration domain has to be replaced by a finite interval, the integrand is essentially zero outside a small region around the origin due to the exponential factors, and the weight function \( W(x)=1 \) does not reflect the behaviour of the integrand.
It is very useful here to change to spherical coordinates
$$
d{\bf r}_1d{\bf r}_2 = r_1^2dr_1 r_2^2dr_2 d\cos{\theta_1}d\cos{\theta_2}d\phi_1d\phi_2,
$$
and
$$
\frac{1}{r_{12}}= \frac{1}{\sqrt{r_1^2+r_2^2-2r_1r_2\cos{\beta}}},
$$
with
$$
\cos{\beta} = \cos{\theta_1}\cos{\theta_2}+\sin{\theta_1}\sin{\theta_2}\cos{(\phi_1-\phi_2)}.
$$
This means that our integral becomes
$$
I=\int_0^{\infty} r_1^2dr_1 \int_0^{\infty}r_2^2dr_2 \int_0^{\pi}\sin{\theta_1}d\theta_1\int_0^{\pi}\sin{\theta_2}d\theta_2\int_0^{2\pi}d\phi_1\int_0^{2\pi}d\phi_2 \frac{\exp{(-2\alpha (r_1+r_2))}}{r_{12}}.
$$
For the angles we need to perform the integrations over \( \theta_i\in [0,\pi] \) and \( \phi_i \in [0,2\pi] \). For the radial parts we can either use Gauss-Legendre quadrature with the infinite interval replaced by a finite cutoff \( r_{\mathrm{max}} \), or Gauss-Laguerre quadrature, since the exponential factor \( \exp{(-2\alpha(r_1+r_2))} \) plays the role of the Laguerre weight function. The first table below shows Gauss-Legendre results as functions of the cutoff \( r_{\mathrm{max}} \), with a fixed number of mesh points.
\( r_{\mathrm{max}} \) | Integral | Error |
---|---|---|
1.00 | 0.161419805 | 0.0313459063 |
1.50 | 0.180468967 | 0.012296744 |
2.00 | 0.177065182 | 0.0157005292 |
2.50 | 0.167970694 | 0.0247950165 |
3.00 | 0.156139391 | 0.0366263199 |
The next table lists the results as functions of the number of mesh points \( N \), with the cutoff \( r_{\mathrm{max}} \) kept fixed.
\( N \) | Integral | Error |
---|---|---|
10 | 0.129834248 | 0.0629314631 |
16 | 0.167860437 | 0.0249052742 |
20 | 0.177065182 | 0.0157005292 |
26 | 0.183543237 | 0.00922247353 |
30 | 0.185795624 | 0.00697008738 |
Using instead Gauss-Laguerre quadrature for the radial parts gives a clearly better precision for the same number of mesh points \( N \), as the last table shows.
\( N \) | Integral | Error |
---|---|---|
10 | 0.186457345 | 0.00630836601 |
16 | 0.190113364 | 0.00265234708 |
20 | 0.19108178 | 0.00168393093 |
26 | 0.191831828 | 0.000933882594 |
30 | 0.192113712 | 0.000651999339 |
The code that was used to generate these results can be found under the program link.
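As a one-dimensional illustration of the difference between the two radial treatments (a sketch assuming NumPy, not the actual program used for the tables), consider \( \int_0^{\infty}r^2e^{-2\alpha r}dr=2/(2\alpha)^3 \):
import numpy as np

alpha = 2.0
exact = 2.0/(2.0*alpha)**3   # Gamma(3)/(2*alpha)^3

# Gauss-Legendre on [0, r_max]: brute-force cutoff of the infinite interval
N, r_max = 20, 2.0
x, w = np.polynomial.legendre.leggauss(N)
r = 0.5*r_max*(x + 1.0)
wr = 0.5*r_max*w
legendre = np.sum(wr*r**2*np.exp(-2.0*alpha*r))

# Gauss-Laguerre with weight exp(-x): substitute x = 2*alpha*r
xl, wl = np.polynomial.laguerre.laggauss(N)
laguerre = np.sum(wl*(xl/(2.0*alpha))**2)/(2.0*alpha)

print(exact, legendre, laguerre)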