$$
\begin{equation}
\tag{1}
\frac{\partial^2 u}{\partial x^2}=A\frac{\partial^2 u}{\partial t^2},
\end{equation}
$$
where \( A \) is a constant. The solution \( u \) depends on both spatial and temporal variables, viz. \( u=u(x,t) \).
$$
\begin{equation*}
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=A\frac{\partial^2 u}{\partial t^2},
\end{equation*}
$$
$$
\begin{equation}
\tag{2}
\frac{\partial^2 u}{\partial x^2}=A\frac{\partial u}{\partial t},
\end{equation}
$$
and \( A \) is in this case called the diffusion constant. It can be used to model
a wide selection of diffusion processes, from molecules to the diffusion of heat
in a given material.
$$
\begin{equation}
\tag{3}
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0,
\end{equation}
$$
or if we have a finite electric charge represented by a charge density
\( \rho(\mathbf{x}) \) we have the familiar Poisson equation
$$
\begin{equation}
\tag{4}
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=-4\pi \rho(\mathbf{x}).
\end{equation}
$$
$$
\begin{equation}
\tag{5}
-\frac{\partial^2 u}{\partial x^2}-\frac{\partial^2 u}{\partial y^2}=\lambda u,
\end{equation}
$$
as well as the linear transport equation (in \( 2+1 \) dimensions) familiar from Brownian motion,
$$
\begin{equation}
\tag{6}
\frac{\partial u}{\partial t} +\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y }=0,
\end{equation}
$$
$$
\begin{equation*}
-\frac{\partial^2 u}{\partial x^2}-\frac{\partial^2 u}{\partial y^2}+f(x,y)u = \imath\frac{\partial u}{\partial t}.
\end{equation*}
$$
$$
\frac{\partial \mathbf{E}}{\partial t} = \mathrm{curl}\mathbf{B},
$$
and
$$
-\mathrm{curl} \mathbf{E} = \frac{\partial \mathbf{B}}{\partial t}
$$
and
$$
\mathrm{div} \mathbf{E} = \mathrm{div}\mathbf{B} = 0.
$$
$$
\begin{equation*}
\frac{\partial \mathbf{u}}{\partial t} +\mathbf{u}\cdot\nabla\mathbf{u}= -\nabla p; \hspace{1cm} \mathrm{div} \mathbf{u} = 0,
\end{equation*}
$$
with \( p \) being the pressure and
$$
\begin{equation*}
\nabla = \frac{\partial}{\partial x}e_x+\frac{\partial}{\partial y}e_y,
\end{equation*}
$$
in two dimensions. The unit vectors are \( e_x \) and \( e_y \).
$$
\begin{equation*}
\frac{\partial \mathbf{u}}{\partial t} +\mathbf{u}\cdot\nabla\mathbf{u}-\Delta \mathbf{u}= -\nabla p; \hspace{1cm} \mathrm{div} \mathbf{u} = 0.
\end{equation*}
$$
$$
\begin{equation*}
A(x,y)\frac{\partial^2 u}{\partial x^2}+B(x,y)\frac{\partial^2 u}{\partial x\partial y}
+C(x,y)\frac{\partial^2 u}{\partial y^2}=F(x,y,u,\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}),
\end{equation*}
$$
and if we set
$$
\begin{equation*}
B=C=0,
\end{equation*}
$$
we recover the \( 1+1 \)-dimensional diffusion equation which is an example
of a so-called parabolic partial differential equation.
With
$$
\begin{equation*}
B=0, \hspace{1cm} AC < 0
\end{equation*}
$$
we get the \( 2+1 \)-dimensional wave equation, which is an example of a so-called
hyperbolic PDE, where more generally we have
\( B^2 > AC \).
For \( B^2 < AC \)
we obtain a so-called elliptic PDE, with the Laplace equation in
Eq. (3) as one of the
classical examples.
These equations can all be easily extended to non-linear partial differential
equations and \( 3+1 \) dimensional cases.
The basis is the assumption that the flux density \( \mathbf{\rho} \) obeys the Gauss-Green theorem
$$
\begin{equation*}
\int_V \mathrm{div}\mathbf{\rho}\, dx = \int_{\partial V} \mathbf{\rho}\cdot\mathbf{n}\, dS,
\end{equation*}
$$
where \( \mathbf{n} \) is the unit outer normal field and \( V \) is a smooth region within the space where
we seek a solution.
If the flux is stationary, with no sources or sinks, the Gauss-Green theorem leads to
$$
\begin{equation*}
\mathrm{div} \mathbf{\rho} = 0.
\end{equation*}
$$
Assuming that the flux is proportional to the gradient of \( u \), viz.
$$
\begin{equation*}
\mathbf{\rho} = -D\mathbf{\nabla} u,
\end{equation*}
$$
resulting in
$$
\begin{equation*}
D\,\mathrm{div} \mathbf{\nabla} u = D\Delta u = 0,
\end{equation*}
$$
which is Laplace's equation. The constant \( D \) can be coupled with various physical
constants, such as the diffusion constant or the specific heat and thermal conductivity discussed below.
Coupling the rate of change (temporal dependence) of \( u \) with the flux density we have
$$
\begin{equation*}
\frac{\partial u}{\partial t} = -\mathrm{div}\mathbf{\rho},
\end{equation*}
$$
which results in
$$
\begin{equation*}
\frac{\partial u}{\partial t}= D \mathrm{div} \mathbf{\nabla} u = D \Delta u,
\end{equation*}
$$
the diffusion equation, or heat equation.
$$
\begin{equation*}
\frac{\kappa}{C\rho}\nabla^2 T(\mathbf{x},t) =\frac{\partial T(\mathbf{x},t)}{\partial t}
\end{equation*}
$$
where \( C \) is the specific heat and \( \rho \)
the density of the material.
Here we let the density be represented by a
constant, but there is no problem introducing an explicit spatial dependence, viz.,
$$
\begin{equation*}
\frac{\kappa}{C\rho(\mathbf{x},t)}\nabla^2 T(\mathbf{x},t) =
\frac{\partial T(\mathbf{x},t)}{\partial t}.
\end{equation*}
$$
If we now define the constant
$$
\begin{equation*}
D=\frac{C\rho}{\kappa},
\end{equation*}
$$
we arrive at
$$
\begin{equation*}
\nabla^2 T(\mathbf{x},t) =
D\frac{\partial T(\mathbf{x},t)}{\partial t}.
\end{equation*}
$$
Specializing to the \( 1+1 \)-dimensional case we have
$$
\begin{equation*}
\frac{\partial^2 T(x,t)}{\partial x^2}=D\frac{\partial T(x,t)}{\partial t}.
\end{equation*}
$$
If we scale the spatial variable as \( x=\alpha\hat{x} \), with \( \alpha \) a characteristic length, the equation becomes
$$
\begin{equation*}
\frac{\partial^2 T(x,t)}{\alpha^2\partial \hat{x}^2}=
D\frac{\partial T(x,t)}{\partial t},
\end{equation*}
$$
and since \( \alpha \) is just a constant we could define
\( \alpha^2D= 1 \) or use the last expression to define a dimensionless time-variable
\( \hat{t} \). This yields a simplified diffusion equation
$$
\begin{equation*}
\frac{\partial^2 T(\hat{x},\hat{t})}{\partial \hat{x}^2}=
\frac{\partial T(\hat{x},\hat{t})}{\partial \hat{t}}.
\end{equation*}
$$
It is now a partial differential equation in terms of dimensionless
variables. In the discussion below, we will however, for the sake
of notational simplicity, replace \( \hat{x}\rightarrow x \) and
\( \hat{t}\rightarrow t \). Moreover, the solution to the \( 1+1 \)-dimensional
partial differential equation is replaced by \( T(\hat{x},\hat{t})\rightarrow u(x,t) \).
The equation we will solve numerically then reads
$$
\begin{equation*}
\nabla^2 u(x,t) =\frac{\partial u(x,t)}{\partial t},
\end{equation*}
$$
or
$$
\begin{equation*}
u_{xx} = u_t,
\end{equation*}
$$
with initial conditions, i.e., the conditions at \( t=0 \),
$$
\begin{equation*}
u(x,0)= g(x) \hspace{0.5cm} 0 < x < L
\end{equation*}
$$
with \( L=1 \) the length of the \( x \)-region of interest.
The boundary conditions are
$$
\begin{equation*}
u(0,t)= a(t) \hspace{0.5cm} t \ge 0,
\end{equation*}
$$
and
$$
\begin{equation*}
u(L,t)= b(t) \hspace{0.5cm} t \ge 0,
\end{equation*}
$$
where \( a(t) \) and \( b(t) \) are two functions which depend on time only, while
\( g(x) \) depends only on the position \( x \).
Our next step is to find a numerical algorithm for solving this equation. Here we resort
to our familiar equal-step methods
and introduce different step lengths for the space-variable \( x \) and time \( t \) through
the step length for \( x \)
$$
\begin{equation*}
\Delta x=\frac{1}{n+1}
\end{equation*}
$$
and the time step length \( \Delta t \). The position after \( i \) steps and
the time after \( j \) time steps are now given by
$$
\begin{equation*}
\begin{array}{cc} t_j=j\Delta t & j \ge 0 \\
x_i=i\Delta x & 0 \le i \le n+1\end{array}
\end{equation*}
$$
We use the following approximations for the derivatives,
$$
\begin{equation*}
u_t\approx \frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}=\frac{u(x_i,t_j+\Delta t)-u(x_i,t_j)}{\Delta t}
\end{equation*}
$$
with a local approximation error \( O(\Delta t) \)
and
$$
\begin{equation*}
u_{xx}\approx \frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^2},
\end{equation*}
$$
or
$$
\begin{equation*}
u_{xx}\approx \frac{u(x_i+\Delta x,t_j)-2u(x_i,t_j)+u(x_i-\Delta x,t_j)}{\Delta x^2},
\end{equation*}
$$
with a local approximation error \( O(\Delta x^2) \). Our approximation is to higher order
in coordinate space. This can be justified since in most cases it is the spatial
dependence which causes numerical problems.
In shorthand notation, with \( u_{i,j}=u(x_i,t_j) \), we then have
$$
\begin{equation*}
u_t\approx \frac{u_{i,j+1}-u_{i,j}}{\Delta t},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2}.
\end{equation*}
$$
The one-dimensional diffusion equation can then be rewritten in its
discretized version as
$$
\begin{equation*}
\frac{u_{i,j+1}-u_{i,j}}{\Delta t}=\frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2}.
\end{equation*}
$$
Defining \( \alpha = \Delta t/\Delta x^2 \) results in the explicit scheme
$$
\begin{equation}
\tag{7}
u_{i,j+1}= \alpha u_{i-1,j}+(1-2\alpha)u_{i,j}+\alpha u_{i+1,j}.
\end{equation}
$$
Since all the discretized initial values
$$
\begin{equation*}
u_{i,0} = g(x_i),
\end{equation*}
$$
are known, after one time step the only unknown quantity is
\( u_{i,1} \), which is given by
$$
\begin{equation*}
u_{i,1}= \alpha u_{i-1,0}+(1-2\alpha)u_{i,0}+\alpha u_{i+1,0}=
\alpha g(x_{i-1})+(1-2\alpha)g(x_{i})+\alpha g(x_{i+1}).
\end{equation*}
$$
We can then obtain \( u_{i,2} \) using the previously calculated values \( u_{i,1} \)
and the boundary conditions \( a(t) \) and \( b(t) \).
This algorithm results in a so-called explicit scheme, since the values at the next time step,
\( u_{i,j+1} \), are explicitly given by Eq. (7).
We can collect the interior mesh points at time step \( j \) in the vector
$$
\begin{equation*}
V_j=\begin{bmatrix}u_{1,j}\\ u_{2,j} \\ \dots \\ u_{n,j}\end{bmatrix}.
\end{equation*}
$$
The explicit scheme can then be written compactly as
$$
\begin{equation*}
V_{j+1} = \mathbf{A}V_{j}
\end{equation*}
$$
with the matrix \( \mathbf{A} \) given by
$$
\begin{equation*}
\mathbf{A}=\begin{bmatrix}1-2\alpha&\alpha&0& 0\dots\\
\alpha&1-2\alpha&\alpha & 0\dots \\
\dots & \dots & \dots & \dots \\
0\dots & 0\dots &\alpha& 1-2\alpha\end{bmatrix}
\end{equation*}
$$
which means we can rewrite the original partial differential equation as
a set of matrix-vector multiplications
$$
\begin{equation*}
V_{j+1} = \mathbf{A}V_{j}=\dots = \mathbf{A}^{j+1}V_0,
\end{equation*}
$$
where \( V_0 \) is the initial vector at time \( t=0 \) defined by the initial value
\( g(x) \).
In the numerical implementation
one should avoid treating this problem as a plain matrix-vector multiplication,
since the matrix is tridiagonal and at most three elements in each row are different from zero.
// First we initialise the new and old vectors
// Here we have chosen the boundary conditions to be zero.
// n+1 is the number of mesh points in x
// Armadillo notation for vectors
u(0) = unew(0) = u(n) = unew(n) = 0.0;
for (int i = 1; i < n; i++) {
  x = i*step;
  // initial condition
  u(i) = func(x);
  // initialise the new vector
  unew(i) = 0;
}
// Time integration
for (int t = 1; t <= tsteps; t++) {
  for (int i = 1; i < n; i++) {
    // Discretized diff eq
    unew(i) = alpha * u(i-1) + (1 - 2*alpha) * u(i) + alpha * u(i+1);
  }
  // note that the boundaries are not changed;
  // the new solution becomes the old one for the next time step
  u = unew;
} // end time loop
The explicit scheme comes with a severe stability constraint, namely
$$
\begin{equation*}
\Delta t/\Delta x^2 \le 1/2.
\end{equation*}
$$
This means that if \( \Delta x = 0.01 \) (a rather frequent choice), then \( \Delta t= 5\times 10^{-5} \). This obviously has
bad consequences if our time interval is large.
In order to derive this relation we need some results from studies of iterative schemes.
If we require that our solution approaches a definite value after
a certain number of time steps, we need to require that the so-called
spectral radius \( \rho(\mathbf{A}) \) of our matrix \( \mathbf{A} \) satisfies the condition
$$
\begin{equation}
\tag{8}
\rho(\mathbf{A}) < 1.
\end{equation}
$$
The spectral radius is defined as
$$
\begin{equation*}
\rho(\mathbf{A}) = \hspace{0.1cm}\mathrm{max}\left\{|\lambda|:\mathrm{det}(\mathbf{A}-\lambda\hat{I})=0\right\},
\end{equation*}
$$
which is interpreted as the smallest number such that a circle with radius \( \rho(\mathbf{A}) \) centered at zero in the complex plane
contains all eigenvalues of \( \mathbf{A} \). If the matrix is positive definite, all its eigenvalues are real and positive, and the condition in
Eq. (8) then simply requires the largest eigenvalue to be smaller than one.
We can rewrite our matrix \( \mathbf{A} \) in terms of a tridiagonal matrix \( \hat{B} \),
$$
\begin{equation*}
\mathbf{A}=\hat{I}-\alpha\hat{B},
\end{equation*}
$$
with
$$
\begin{equation*}
\hat{B} =\begin{bmatrix}2&-1&0& 0 &\dots\\
-1&2&-1& 0&\dots \\
\dots & \dots & \dots & \dots & -1 \\
0 & 0 &\dots &-1&2\end{bmatrix}.
\end{equation*}
$$
The matrix elements of \( \hat{B} \) are given by
$$
\begin{equation*}
b_{ij} = 2\delta_{ij}-\delta_{i+1j}-\delta_{i-1j},
\end{equation*}
$$
meaning that we
have the following eigenvalue equation for component \( i \),
$$
\begin{equation*}
(\hat{B}\hat{x})_i = \mu_ix_i,
\end{equation*}
$$
resulting in
$$
\begin{equation*}
(\hat{B}\hat{x})_i=\sum_{j=1}^n\left(2\delta_{ij}-\delta_{i+1j}-\delta_{i-1j}\right)x_j =
2x_i-x_{i+1}-x_{i-1}=\mu_ix_i.
\end{equation*}
$$
If we assume that the eigenvector has components \( x_i=\sin{(i\theta)} \), we obtain
$$
\begin{equation*}
2\sin{(i\theta)}-\sin{((i+1)\theta)}-\sin{((i-1)\theta)}=\mu_i\sin{(i\theta)},
\end{equation*}
$$
or
$$
\begin{equation*}
2\left(1-\cos{(\theta)}\right)\sin{(i\theta)}=\mu_i\sin{(i\theta)},
\end{equation*}
$$
which is nothing but
$$
\begin{equation*}
2\left(1-\cos{(\theta)}\right)x_i=\mu_ix_i,
\end{equation*}
$$
with eigenvalues \( \mu_i = 2-2\cos{(\theta)} \).
Our requirement in Eq. (8) results in
$$
\begin{equation*}
-1 < 1-2\alpha\left(1-\cos{(\theta)}\right) < 1,
\end{equation*}
$$
which is satisfied only if \( \alpha < \left(1-\cos{(\theta)}\right)^{-1} \). Since the worst case is \( \cos{(\theta)}\rightarrow -1 \), this results in
\( \alpha \le 1/2 \), or \( \Delta t/\Delta x^2 \le 1/2 \).
More generally, a tridiagonal Toeplitz matrix
$$
\begin{equation*}
\mathbf{A} =\begin{bmatrix}a&b&0& 0 &\dots\\
c&a&b& 0&\dots \\
\dots & \dots & \dots & \dots & b \\
0 & 0 &\dots &c&a\end{bmatrix},
\end{equation*}
$$
has eigenvalues \( \mu_i=a+2\sqrt{bc}\cos{(i\pi/(n+1))} \) with \( i=1,\dots,n \).
In deriving the explicit scheme above we used the forward formula for the time derivative,
$$
\begin{equation*}
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j)}{\Delta t}.
\end{equation*}
$$
However, there is nothing which hinders us from using the backward formula
$$
\begin{equation*}
u_t\approx \frac{u(x_i,t_j)-u(x_i,t_j-\Delta t)}{\Delta t},
\end{equation*}
$$
still with a truncation error which goes like \( O(\Delta t) \).
Alternatively, we could use the centered difference
$$
\begin{equation*}
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j-\Delta t)}{2\Delta t},
\end{equation*}
$$
with a truncation error \( O(\Delta t^2) \).
Here we will stick to the backward formula and come back to the latter below.
For the second derivative we use however
$$
\begin{equation*}
u_{xx}\approx \frac{u(x_i+\Delta x,t_j)-2u(x_i,t_j)+u(x_i-\Delta x,t_j)}{\Delta x^2},
\end{equation*}
$$
and define again \( \alpha = \Delta t/\Delta x^2 \).
Inserting this, together with the approximation for the second derivative, in our partial differential equation yields
$$
\begin{equation*}
u_{i,j-1}= -\alpha u_{i-1,j}+(1+2\alpha)u_{i,j}-\alpha u_{i+1,j}.
\end{equation*}
$$
Here \( u_{i,j-1} \) is the only known quantity.
Defining the matrix
\( \mathbf{A} \)
$$
\begin{equation*}
\mathbf{A}=\begin{bmatrix}1+2\alpha&-\alpha&0& 0 &\dots\\
-\alpha&1+2\alpha&-\alpha & 0 & \dots \\
\dots & \dots & \dots & \dots &\dots \\
\dots & \dots & \dots & \dots & -\alpha \\
0 & 0 &\dots &-\alpha& 1+2\alpha\end{bmatrix},
\end{equation*}
$$
we can reformulate again the problem as a matrix-vector multiplication
$$
\begin{equation*}
\mathbf{A}V_{j} = V_{j-1}
\end{equation*}
$$
and thus
$$
\begin{equation*}
V_{j} = \mathbf{A}^{-1}V_{j-1}=\mathbf{A}^{-1}\left(\mathbf{A}^{-1}V_{j-2}\right)=\dots = \mathbf{A}^{-j}V_0.
\end{equation*}
$$
This is an implicit scheme, since the new values \( u_{i,j} \) are determined implicitly,
through the solution of a linear set of equations, rather than by a direct update formula like Eq. (7).
If \( \alpha \) does not depend on time \( t \), we need
to invert a matrix only once. Alternatively we can solve this system of equations using our methods
from linear algebra.
These are however very cumbersome ways of solving the problem, since they involve \( \sim O(N^3) \) operations
for an \( N\times N \) matrix.
It is much faster to solve these linear equations using methods for tridiagonal matrices,
since these involve only \( \sim O(N) \) operations.
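The tridiagonal solver tridag called in the code below is not listed in this section; a minimal sketch of such a solver, based on the standard Thomas algorithm for a system with constant sub-diagonal a, diagonal b and super-diagonal c (an addition, with the signature chosen to match the calls below), could read
// Sketch of a tridiagonal solver (Thomas algorithm) for constant
// sub-diagonal a, diagonal b and super-diagonal c, solving Au = y
#include <armadillo>
using namespace arma;

void tridag(double a, double b, double c, vec &y, vec &u, int n) {
  vec beta(n), rho(n);
  // Forward elimination of the sub-diagonal
  beta(0) = b;
  rho(0) = y(0);
  for (int i = 1; i < n; i++) {
    double m = a/beta(i-1);
    beta(i) = b - m*c;
    rho(i) = y(i) - m*rho(i-1);
  }
  // Backward substitution
  u(n-1) = rho(n-1)/beta(n-1);
  for (int i = n-2; i >= 0; i--) {
    u(i) = (rho(i) - c*u(i+1))/beta(i);
  }
}
Note that the algorithm uses two sweeps over the \( n \) unknowns only, which is the origin of the \( \sim O(N) \) operation count quoted above.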
// parts of the function for backward Euler
void backward_euler(int n, int tsteps, double delta_x, double alpha)
{
  double a, b, c;
  vec u(n+1); // This is u of Au = y
  vec y(n+1); // Right side of matrix equation Au=y, the solution at a previous step
  // Initial conditions
  for (int i = 1; i < n; i++) {
    y(i) = u(i) = func(delta_x*i);
  }
  // Boundary conditions (zero here)
  y(0) = u(0) = y(n) = u(n) = 0.0;
  // Matrix A, only constants
  a = c = - alpha;
  b = 1 + 2*alpha;
  // Time iteration
  for (int t = 1; t <= tsteps; t++) {
    // here we solve the tridiagonal linear set of equations
    tridag(a, b, c, y, u, n+1);
    // boundary conditions
    u(0) = 0;
    u(n) = 0;
    // replace previous time solution with new
    for (int i = 0; i <= n; i++) {
      y(i) = u(i);
    }
    // You may consider printing the solution at regular time intervals
    .... // print statements
  } // end time iteration
  ...
}
We can generalize the above schemes in terms of a parameter \( \theta \) (the so-called \( \theta \)-rule),
$$
\begin{equation}
\tag{9}
\frac{\theta}{\Delta x^2}\left(u_{i-1,j}-2u_{i,j}+u_{i+1,j}\right)+
\frac{1-\theta}{\Delta x^2}\left(u_{i+1,j-1}-2u_{i,j-1}+u_{i-1,j-1}\right)=
\frac{1}{\Delta t}\left(u_{i,j}-u_{i,j-1}\right),
\end{equation}
$$
which for \( \theta=0 \) yields the forward formula for the first derivative and
the explicit scheme, while \( \theta=1 \) yields the backward formula and the implicit
scheme. These two schemes are called the forward and backward Euler schemes, respectively.
For \( \theta = 1/2 \) we obtain a new scheme named after its inventors, Crank and Nicolson.
This scheme yields a truncation in time which goes like \( O(\Delta t^2) \) and it is stable
for all possible combinations of \( \Delta t \) and \( \Delta x \).
To derive the truncation errors of the forward Euler scheme we use the Taylor expansions
$$
\begin{align}
u(x+\Delta x,t)&=u(x,t)+\frac{\partial u(x,t)}{\partial x} \Delta x+\frac{\partial^2 u(x,t)}{2\partial x^2}\Delta x^2+\mathcal{O}(\Delta x^3),
\tag{10}\\ \nonumber
u(x-\Delta x,t)&=u(x,t)-\frac{\partial u(x,t)}{\partial x}\Delta x+\frac{\partial^2 u(x,t)}{2\partial x^2} \Delta x^2+\mathcal{O}(\Delta x^3),\\ \nonumber
u(x,t+\Delta t)&=u(x,t)+\frac{\partial u(x,t)}{\partial t}\Delta t+ \mathcal{O}(\Delta t^2).
\tag{11}
\end{align}
$$
Inserting these expansions into the discretized derivatives gives
$$
\begin{align}
&\left[\frac{\partial u(x,t)}{\partial t}\right]_{\text{approx}} =\frac{\partial u(x,t)}{\partial t}+\mathcal{O}(\Delta t) ,
\tag{12}\\ \nonumber
&\left[\frac{\partial^2 u(x,t)}{\partial x^2}\right]_{\text{approx}}=\frac{\partial^2 u(x,t)}{\partial x^2}+\mathcal{O}(\Delta x^2).
\end{align}
$$
It is easy to convince oneself that the backward Euler method must have the same truncation errors as the forward Euler scheme.
For the Crank-Nicolson scheme we expand instead around \( t'=t+\Delta t/2 \), with the Taylor expansions
$$
\begin{align}
u(x+\Delta x, t+\Delta t)&=u(x,t')+\frac{\partial u(x,t')}{\partial x}\Delta x+\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} +\notag
\tag{13}\\ \nonumber
&\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)\\ \nonumber
u(x-\Delta x, t+\Delta t)&=u(x,t')-\frac{\partial u(x,t')}{\partial x}\Delta x+\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} - \notag\\ \nonumber
&\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)\\
u(x+\Delta x,t)&=u(x,t')+\frac{\partial u(x,t')}{\partial x}\Delta x-\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} -\notag
\tag{14}\\ \nonumber
&\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)\\ \nonumber
u(x-\Delta x,t)&=u(x,t')-\frac{\partial u(x,t')}{\partial x}\Delta x-\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} +\notag \\ \nonumber
&\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)\\ \nonumber
u(x,t+\Delta t)&=u(x,t')+\frac{\partial u(x,t')}{\partial t}\frac{\Delta t}{2} +\frac{\partial ^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} + \mathcal{O}(\Delta t^3)\\ \nonumber
u(x,t)&=u(x,t')-\frac{\partial u(x,t')}{\partial t}\frac{\Delta t}{2}+\frac{\partial ^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} + \mathcal{O}(\Delta t^3)
\tag{15}
\end{align}
$$
We now insert these expansions in the approximations for the derivatives to find
$$
\begin{align}
&\left[\frac{\partial u(x,t')}{\partial t}\right]_{\text{approx}} =\frac{\partial u(x,t')}{\partial t}+\mathcal{O}(\Delta t^2) ,
\tag{16}\\ \nonumber
&\left[\frac{\partial^2 u(x,t')}{\partial x^2}\right]_{\text{approx}}=\frac{\partial^2 u(x,t')}{\partial x^2}+\mathcal{O}(\Delta x^2).
\end{align}
$$
The following table summarizes the three methods.
Scheme | Truncation error | Stability requirement |
---|---|---|
Crank-Nicolson | \( \mathcal{O}(\Delta x^2) \) and \( \mathcal{O}(\Delta t^2) \) | Stable for all \( \Delta t \) and \( \Delta x \) |
Backward Euler | \( \mathcal{O}(\Delta x^2) \) and \( \mathcal{O}(\Delta t) \) | Stable for all \( \Delta t \) and \( \Delta x \) |
Forward Euler | \( \mathcal{O}(\Delta x^2) \) and \( \mathcal{O}(\Delta t) \) | \( \Delta t\leq \frac{1}{2}\Delta x^2 \) |
Using \( \theta=1/2 \) and \( \alpha=\Delta t/\Delta x^2 \), the Crank-Nicolson scheme can be rewritten as
$$
\begin{equation*}
-\alpha u_{i-1,j}+\left(2+2\alpha\right)u_{i,j}-\alpha u_{i+1,j}=
\alpha u_{i-1,j-1}+\left(2-2\alpha\right)u_{i,j-1}+\alpha u_{i+1,j-1},
\end{equation*}
$$
or in matrix-vector form as
$$
\begin{equation*}
\left(2\hat{I}+\alpha\hat{B}\right)V_{j}=
\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1},
\end{equation*}
$$
where the vector \( V_{j} \) is the same as defined in the implicit case while the matrix
\( \hat{B} \) is
$$
\begin{equation*}
\hat{B}=\begin{bmatrix}2&-1&0&0 & \dots\\
-1& 2& -1 & 0 &\dots \\
\dots & \dots & \dots & \dots & \dots \\
\dots & \dots & \dots & \dots &-1 \\
0& 0 & \dots &-1& 2\end{bmatrix}.
\end{equation*}
$$
We can solve formally for the new vector as
$$
\begin{equation*}
V_{j}=
\left(2\hat{I}+\alpha\hat{B}\right)^{-1}\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1}.
\end{equation*}
$$
We have already obtained the eigenvalues for the two matrices
\( \left(2\hat{I}+\alpha\hat{B}\right) \) and \( \left(2\hat{I}-\alpha\hat{B}\right) \).
This means that the spectral radius has to satisfy
$$
\begin{equation*}
\rho(\left(2\hat{I}+\alpha\hat{B}\right)^{-1}\left(2\hat{I}-\alpha\hat{B}\right)) < 1,
\end{equation*}
$$
meaning that
$$
\begin{equation*}
\left|\left(2+\alpha\mu_i\right)^{-1}\left(2-\alpha\mu_i\right)\right| < 1,
\end{equation*}
$$
and since \( \mu_i = 2-2\cos{(\theta)} \) we have \( 0 < \mu_i < 4 \). A little algebra shows that
the algorithm is stable for all possible values of \( \Delta t \) and \( \Delta x \).
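The little algebra can be spelled out explicitly: for \( \alpha>0 \) and \( 0<\mu_i<4 \) we have
$$
\begin{equation*}
\left|\frac{2-\alpha\mu_i}{2+\alpha\mu_i}\right| < 1 \quad\Longleftrightarrow\quad -(2+\alpha\mu_i) < 2-\alpha\mu_i < 2+\alpha\mu_i,
\end{equation*}
$$
and both inequalities hold trivially since \( \alpha\mu_i>0 \). Hence no restriction on \( \alpha=\Delta t/\Delta x^2 \) arises.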
In an actual implementation we never need the matrix inverse. Instead, we first compute the vector
$$
\begin{equation*}
\tilde{V}_{j-1}=\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1},
\end{equation*}
$$
with our previous vector \( V_{j-1} \) using the matrix-vector multiplication algorithm for a
tridiagonal matrix, as done in the forward-Euler scheme. Thereafter we can solve the equation
$$
\begin{equation*}
\left(2\hat{I}+\alpha\hat{B}\right) V_{j}=
\tilde{V}_{j-1},
\end{equation*}
$$
using our method for systems of linear equations with a tridiagonal matrix, as done for the backward Euler scheme.
void crank_nicolson(int n, int tsteps, double delta_x, double alpha)
{
  double a, b, c;
  vec u(n+1); // This is u in Au = r
  vec r(n+1); // Right side of matrix equation Au=r
  ....
  // setting up the matrix
  a = c = - alpha;
  b = 2 + 2*alpha;
  // Time iteration
  for (int t = 1; t <= tsteps; t++) {
    // Calculate r for use in tridag, right hand side of the Crank-Nicolson method
    for (int i = 1; i < n; i++) {
      r(i) = alpha*u(i-1) + (2 - 2*alpha)*u(i) + alpha*u(i+1);
    }
    r(0) = 0;
    r(n) = 0;
    // Then solve the tridiagonal linear set of equations
    tridag(a, b, c, r, u, n+1);
    u(0) = 0;
    u(n) = 0;
    // Eventual print statements etc
    ....
  }
}
It cannot be repeated enough: it is always useful to find cases where one can compare the numerical results from the developed algorithms and codes with closed-form solutions. The above case is also particularly simple. We have the following partial differential equation
$$
\begin{equation*}
\nabla^2 u(x,t) =\frac{\partial u(x,t)}{\partial t},
\end{equation*}
$$
with initial conditions
$$
\begin{equation*}
u(x,0)= g(x) \hspace{0.5cm} 0 < x < L.
\end{equation*}
$$
and boundary conditions
$$
\begin{equation*}
u(0,t)= 0 \hspace{0.5cm} t \ge 0, \hspace{1cm} u(L,t)= 0 \hspace{0.5cm} t \ge 0.
\end{equation*}
$$
We assume that we have solutions of the form (separation of variables)
$$
\begin{equation*}
u(x,t)=F(x)G(t),
\end{equation*}
$$
which inserted in the partial differential equation results in
$$
\begin{equation*}
\frac{F''}{F}=\frac{G'}{G},
\end{equation*}
$$
where the derivative is with respect to \( x \) on the left hand side and with respect to \( t \) on right hand side.
This equation should hold for all \( x \) and \( t \). We must require the rhs and lhs to be equal to the same constant, which we here call \( -\lambda^2 \). This yields the two ordinary differential equations
$$
\begin{equation*}
F''+\lambda^2F=0; \hspace{1cm} G'=-\lambda^2G,
\end{equation*}
$$
with general solutions
$$
\begin{equation*}
F(x)=A\sin(\lambda x)+B\cos(\lambda x); \hspace{1cm} G(t)=Ce^{-\lambda^2t}.
\end{equation*}
$$
The boundary condition \( u(0,t)=0 \) yields \( B=0 \), while \( u(L,t)=0 \) requires \( \lambda=n\pi/L \), with \( n \) an integer. Absorbing the constant \( C \) into \( A_n \), we are left with solutions of the form
$$
\begin{equation*}
u(x,t)=A_n\sin(n\pi x/L)e^{-n^2\pi^2 t/L^2}.
\end{equation*}
$$
But there are infinitely many possible \( n \) values (infinite number of solutions). Moreover,
the diffusion equation is linear and because of this we know that a superposition of solutions
will also be a solution of the equation. We may therefore write
$$
\begin{equation*}
u(x,t)=\sum_{n=1}^{\infty} A_n \sin(n\pi x/L) e^{-n^2\pi^2 t/L^2}.
\end{equation*}
$$
The coefficients \( A_n \) are then fixed by the initial condition,
$$
\begin{equation*}
u(x,0)=g(x)=\sum_{n=1}^{\infty} A_n \sin(n\pi x/L).
\end{equation*}
$$
The coefficients \( A_n \) are thus the Fourier coefficients of the function \( g(x) \), and are given by (from the theory of Fourier series)
$$
\begin{equation*}
A_n=\frac{2}{L}\int_0^L g(x)\sin(n\pi x/L) \mathrm{d}x.
\end{equation*}
$$
Different \( g(x) \) functions will obviously result in different results for \( A_n \).
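As a simple added example: if \( g(x)=\sin(\pi x/L) \), the orthogonality of the sine functions gives \( A_1=1 \) and \( A_n=0 \) for \( n>1 \), so that
$$
\begin{equation*}
u(x,t)=\sin(\pi x/L)e^{-\pi^2 t/L^2},
\end{equation*}
$$
a convenient closed-form test case for the numerical schemes discussed above.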
Let us move on to the \( 2+1 \)-dimensional diffusion equation (in dimensionless variables),
$$
\begin{equation*}
\frac{\partial u}{\partial t}=\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right),
\end{equation*}
$$
where we have \( u=u(x,y,t) \).
We assume that we have a square lattice of length \( L \) with equally
many mesh points in the \( x \) and \( y \) directions.
We discretize again position and time using now
$$
\begin{equation*}
u_{xx}\approx \frac{u(x+h,y,t)-2u(x,y,t)+u(x-h,y,t)}{h^2},
\end{equation*}
$$
which we rewrite as, in its discretized version,
$$
\begin{equation*}
u_{xx}\approx \frac{u^{l}_{i+1,j}-2u^{l}_{i,j}+u^{l}_{i-1,j}}{h^2},
\end{equation*}
$$
where \( x_i=x_0+ih \), \( y_j=y_0+jh \) and \( t_l=t_0+l\Delta t \), with \( h=L/(n+1) \) and \( \Delta t \) the time step.
Similarly, for the \( y \) derivative we have
$$
\begin{equation*}
u_{yy}\approx \frac{u^{l}_{i,j+1}-2u^{l}_{i,j}+u^{l}_{i,j-1}}{h^2}.
\end{equation*}
$$
We use again the forward Euler formula for the first derivative in time. In its discretized form we have
$$
\begin{equation*}
u_{t}\approx \frac{u^{l+1}_{i,j}-u^{l}_{i,j}}{\Delta t},
\end{equation*}
$$
resulting in
$$
\begin{equation*}
u^{l+1}_{i,j}= u^{l}_{i,j} + \alpha\left[u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1}-4u^{l}_{i,j}\right],
\end{equation*}
$$
where the left hand side, with the solution at the new time step, is the only unknown term, since starting with \( t=t_0 \), the right hand side is entirely
determined by the boundary and initial conditions. We have \( \alpha=\Delta t/h^2 \).
This scheme can be implemented using essentially the same approach as we used in Eq. (7).
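A minimal sketch of the corresponding update loop (an addition, assuming Armadillo matrices u and unew with preset zero boundary values, and the names n, tsteps and alpha in the spirit of the \( 1+1 \)-dimensional code above) could read
// Sketch of the explicit scheme in 2+1 dimensions (assuming Armadillo matrices)
// u holds the solution at time step l, unew at time step l+1
for (int t = 1; t <= tsteps; t++) {
  for (int i = 1; i < n; i++) {
    for (int j = 1; j < n; j++) {
      unew(i,j) = u(i,j) + alpha*(u(i+1,j) + u(i-1,j) +
                                  u(i,j+1) + u(i,j-1) - 4.0*u(i,j));
    }
  }
  // the boundary values of unew were set once, outside the loop, and stay zero
  u = unew;
}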
We move now to the Laplace equation
$$
\begin{equation*}
\nabla^2 u(\mathbf{x})=u_{xx}+u_{yy}=0,
\end{equation*}
$$
with possible boundary conditions
\( u(x,y) = g(x,y) \) on the border \( \partial\Omega \). There is no time-dependence.
We seek a solution in the region \( \Omega \) and we choose a quadratic mesh
with equally many steps in both directions. We could choose the grid to be rectangular or following
polar coordinates \( r,\theta \) as well. Here we choose equal step lengths in the \( x \) and
the \( y \) directions. We set
$$
\begin{equation*} h=\Delta x = \Delta y = \frac{L}{n+1},\end{equation*}
$$
where \( L \) is the length of the sides and we have \( n+1 \) points in both directions.
The second derivatives are discretized as
$$
\begin{equation*}
u_{xx}\approx \frac{u(x+h,y)-2u(x,y)+u(x-h,y)}{h^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{yy}\approx \frac{u(x,y+h)-2u(x,y)+u(x,y-h)}{h^2},
\end{equation*}
$$
which we rewrite as
$$
\begin{equation*}
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{h^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{yy}\approx \frac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{h^2}.
\end{equation*}
$$
Inserting these approximations into the Laplace equation and solving for \( u_{i,j} \) we obtain
$$
\begin{equation}
\tag{17}
u_{i,j}= \frac{1}{4}\left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right].
\end{equation}
$$
This is our final numerical scheme for solving Laplace's equation. Poisson's equation adds only a minor complication to the above equation since in this case we have
$$
\begin{equation*}
u_{xx}+u_{yy}=-\rho(x,y),
\end{equation*}
$$
and we only need to add a discretized version of \( \rho(\mathbf{x}) \),
resulting in
$$
\begin{equation}
\tag{18}
u_{i,j}= \frac{1}{4}\left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right]
+\frac{h^2}{4}\rho_{i,j}.
\end{equation}
$$
The boundary conditions read
$$
\begin{equation*}
u_{i,0} = g_{i,0} \hspace{0.5cm} 0\le i \le n+1,
\end{equation*}
$$
$$
\begin{equation*}
u_{i,L} = g_{i,L} \hspace{0.5cm} 0\le i \le n+1,
\end{equation*}
$$
$$
\begin{equation*}
u_{0,j} = g_{0,j} \hspace{0.5cm} 0\le j \le n+1,
\end{equation*}
$$
and
$$
\begin{equation*}
u_{L,j} = g_{L,j} \hspace{0.5cm} 0\le j \le n+1.
\end{equation*}
$$
With \( n+1 \) mesh points the equations for \( u \) result in a system of \( (n+1)^2 \) linear equations in the \( (n+1)^2 \) unknown \( u_{i,j} \).
We can rewrite Eq. (18) as
$$
\begin{equation}
\tag{19}
4u_{i,j}= \left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right]
+h^2\rho_{i,j}=\Delta_{ij}-\tilde{\rho}_{ij},
\end{equation}
$$
where we have defined
$$
\begin{equation*}
\Delta_{ij}= \left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right],
\end{equation*}
$$
and
$$
\begin{equation*}
\tilde{\rho}_{ij}=-h^2\rho_{i,j}.
\end{equation*}
$$
For a simple lattice with only four interior mesh points, \( u_{11} \), \( u_{12} \), \( u_{21} \) and \( u_{22} \), the inner values of the function \( u \) are then given by
$$
\begin{align}
4u_{11} -u_{21} -u_{01} - u_{12}- u_{10}=&-\tilde{\rho}_{11} \nonumber \\
4u_{12} - u_{02} - u_{22} - u_{13}- u_{11}=&-\tilde{\rho}_{12} \nonumber \\
4u_{21} - u_{11} - u_{31} - u_{22}- u_{20}=&-\tilde{\rho}_{21} \nonumber \\
4u_{22} - u_{12} - u_{32} - u_{23}- u_{21}=&-\tilde{\rho}_{22}. \nonumber
\end{align}
$$
This defines a standard linear system of equations
$$
\begin{equation*}
Ax = b,
\end{equation*}
$$
or in more detail
$$
\begin{equation*}
\begin{bmatrix} 4&-1 &-1 &0 \\
-1& 4 &0 &-1 \\
-1& 0 &4 &-1 \\
0& -1 &-1 &4 \\
\end{bmatrix}\begin{bmatrix}
u_{11}\\
u_{12}\\
u_{21} \\
u_{22} \\
\end{bmatrix}=\begin{bmatrix}
u_{01}+u_{10}-\tilde{\rho}_{11}\\
u_{13}+u_{02}-\tilde{\rho}_{12}\\
u_{31}+u_{20}-\tilde{\rho}_{21} \\
u_{32}+u_{23}-\tilde{\rho}_{22}\\
\end{bmatrix}.
\end{equation*}
$$
Since the matrix is rather sparse but not tridiagonal, elimination methods like the LU decomposition discussed earlier are not very practical. Rather, iterative schemes like Jacobi's method or the Gauss-Seidel method are preferred. The above matrix is also diagonally dominant, a sufficient condition for these iterative solvers to converge.
For these iterative schemes we split the matrix as
$$
\begin{equation*}
\mathbf{A}=\mathbf{D}+\mathbf{U}+\mathbf{L},
\end{equation*}
$$
with \( \mathbf{D} \) being a diagonal matrix with \( 4 \) on the diagonal, \( \mathbf{U} \) an upper triangular matrix and \( \mathbf{L} \)
a lower triangular matrix. In our case we have
$$
\begin{equation*}
\mathbf{D}=\begin{bmatrix}4&0 &0 &0 \\
0& 4 &0 &0 \\
0& 0 &4 &0 \\
0& 0 &0 &4 \\
\end{bmatrix},
\end{equation*}
$$
and
$$
\begin{equation*}
\mathbf{L}=\begin{bmatrix} 0&0 &0 &0 \\
-1& 0 &0 &0 \\
-1& 0 &0 &0 \\
0& -1 &-1 &0 \\
\end{bmatrix} \hspace{1cm} \mathbf{U}= \begin{bmatrix}
0&-1 &-1 &0 \\
0& 0 &0 &-1 \\
0& 0 &0 &-1 \\
0& 0 &0 &0 \\
\end{bmatrix}.
\end{equation*}
$$
We assume now that we have an estimate for the unknown functions \( u_{11} \), \( u_{12} \), \( u_{21} \) and \( u_{22} \). We will call this the zeroth value and label it as \( u^{(0)}_{11} \), \( u^{(0)}_{12} \), \( u^{(0)}_{21} \) and \( u^{(0)}_{22} \). We can then set up an iterative scheme where the next solution is defined in terms of the previous one as
$$
\begin{align}
u^{(1)}_{11} =&\frac{1}{4}(b_1+u^{(0)}_{12} +u^{(0)}_{21}) \nonumber \\
u^{(1)}_{12} =&\frac{1}{4}(b_2+u^{(0)}_{11}+u^{(0)}_{22}) \nonumber \\
u^{(1)}_{21} =&\frac{1}{4}(b_3+u^{(0)}_{11}+u^{(0)}_{22}) \nonumber \\
u^{(1)}_{22}=&\frac{1}{4}(b_4+u^{(0)}_{12}+u^{(0)}_{21}), \nonumber
\end{align}
$$
where we have defined the vector
$$
\begin{equation*}
\mathbf{b}= \begin{bmatrix} u_{01}+u_{10}-\tilde{\rho}_{11}\\
u_{13}+u_{02}-\tilde{\rho}_{12}\\
u_{31}+u_{20}-\tilde{\rho}_{21} \\
u_{32}+u_{23}-\tilde{\rho}_{22}\\
\end{bmatrix}.
\end{equation*}
$$
In a more compact form the Jacobi method can be written as
$$
\begin{equation} \tag{20}
\mathbf{x}^{(r+1)}= \mathbf{D}^{-1}\left(\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}^{(r)}\right),
\end{equation}
$$
where the unknown functions are now defined in terms of
$$
\begin{equation*}
\mathbf{x}= \begin{bmatrix} u_{11}\\
u_{12}\\
u_{21}\\
u_{22}\\
\end{bmatrix}.
\end{equation*}
$$
If we wish to implement Gauss-Seidel's algorithm,
the set of equations to solve are then given by
$$
\begin{equation} \tag{21}
\mathbf{x}^{(r+1)}= (\mathbf{D}+\mathbf{L})^{-1}\left(\mathbf{b} -\mathbf{U}\mathbf{x}^{(r)}\right),
\end{equation}
$$
or alternatively as
$$
\begin{equation*}
\mathbf{x}^{(r+1)}= \mathbf{D}^{-1}\left(\mathbf{b} -\mathbf{L}\mathbf{x}^{(r+1)}-\mathbf{U}\mathbf{x}^{(r)}\right).
\end{equation*}
$$
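In terms of the grid formulation in Eq. (17), one Gauss-Seidel sweep simply updates the mesh points in place, so that freshly computed neighbour values enter the update immediately. A minimal sketch (an addition, assuming an Armadillo matrix u with fixed boundary values and mesh size n) is
// One Gauss-Seidel sweep for Laplace's equation; u is updated in place,
// so the most recent neighbour values are used as soon as they are available
for (int i = 1; i < n; i++) {
  for (int j = 1; j < n; j++) {
    u(i,j) = 0.25*(u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1));
  }
}
In contrast, Jacobi keeps a copy of the previous iterate and uses only old values, as the code below does.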
Implementing Jacobi's method is rather simple. We start with an initial guess \( u_{i,j}^{(0)} \) where all values are known. To obtain a new solution we solve Eq. (17) or Eq. (18) in order to obtain a new solution \( u_{i,j}^{(1)} \). Most likely this solution will not be a solution to Eq. (17). This solution is in turn used to obtain a new and improved \( u_{i,j}^{(2)} \). We continue this process until we obtain a result which satisfies some specific convergence criterion.
....
// We define the step size for a square lattice with n+1 points
double h = (xmax-xmin)/(n+1);
double L = xmax-xmin; // The length of the lattice
// We allocate space for the matrix u and the temporary matrix to
// be upgraded in every iteration
mat u = zeros<mat>(n+1,n+1); // using Armadillo to define matrices; also our initial guess
mat u_temp(n+1,n+1); // This is the temporary value
// We need to set up the boundary conditions. Specify for various cases
.....
// The iteration algorithm starts here
int iterations = 0;
double diff = 1.0e10;
while ( (iterations <= max_iter) && (diff > 0.00001) ){
  u_temp = u; diff = 0.;
  // update all interior mesh points using the old values
  for (int j = 1; j < n; j++){
    for (int l = 1; l < n; l++){
      u(j,l) = 0.25*(u_temp(j+1,l)+u_temp(j-1,l)+
                     u_temp(j,l+1)+u_temp(j,l-1));
      diff += fabs(u_temp(j,l)-u(j,l));
    }
  }
  iterations++;
  diff /= pow(n,2.0);
} // end while loop
Let us now implement the implicit scheme and show how we can extend the previous algorithm for solving Laplace's or Poisson's equations to the diffusion equation as well. As the reader will notice, this simply implies a slight redefinition of the vector \( \mathbf{b} \) defined in Eq. (20).
To see this, let us first set up the diffusion in two spatial dimensions, with boundary and initial conditions. The \( 2+1 \)-dimensional diffusion equation (with dimensionless variables) reads for a function \( u=u(x,y,t) \)
$$
\begin{equation*}
\frac{\partial u}{\partial t}= D\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right).
\end{equation*}
$$
Setting \( D=1 \), the problem with boundary and initial conditions reads
$$
\begin{equation*}
\begin{array}{cc}u_t= u_{xx}+u_{yy}& x, y\in(0,L), t>0 \\
u(x,y,0) = g(x,y)& x, y\in (0,L) \\
u(0,y,t)=u(L,y,t)=u(x,0,t)=u(x,L,t)=0 & t > 0\\
\end{array}
\end{equation*}
$$
We discretize again position and time, with
$$
\begin{equation*}
u_{xx}\approx \frac{u(x+h,y,t)-2u(x,y,t)+u(x-h,y,t)}{h^2},
\end{equation*}
$$
which we rewrite as, in its discretized version,
$$
\begin{equation*}
u_{xx}\approx \frac{u^{l}_{i+1,j}-2u^{l}_{i,j}+u^{l}_{i-1,j}}{h^2},
\end{equation*}
$$
where \( x_i=x_0+ih \), \( y_j=y_0+jh \) and \( t_l=t_0+l\Delta t \), with \( h=L/(n+1) \) and \( \Delta t \) the time step.
$$
\begin{equation*}
u_{yy}\approx \frac{u^{l}_{i,j+1}-2u^{l}_{i,j}+u^{l}_{i,j-1}}{h^2}.
\end{equation*}
$$
We use now the so-called backward going Euler formula for the first derivative in time. In its discretized form we have
$$
\begin{equation*}
u_{t}\approx \frac{u^{l}_{i,j}-u^{l-1}_{i,j}}{\Delta t},
\end{equation*}
$$
resulting in
$$
\begin{equation*}
u^{l}_{i,j}+4\alpha u^{l}_{i,j}- \alpha\left[u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1}\right] = u^{l-1}_{i,j},
\end{equation*}
$$
where the right-hand side is the only known term; starting with \( t=t_0 \), it is entirely
determined by the boundary and initial conditions. We have \( \alpha=\Delta t/h^2 \).
We can rewrite this equation as
$$
\begin{equation*}
u^{l}_{i,j}= \frac{1}{1+4\alpha}\left[\alpha(u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1})+u^{l-1}_{i,j}\right],
\end{equation*}
$$
or in a more compact form as
$$
\begin{equation}
\tag{22}
u^{l}_{i,j}= \frac{1}{1+4\alpha}\left[\alpha\Delta^l_{ij}+u^{l-1}_{i,j}\right],
\end{equation}
$$
with \( \Delta^l_{ij}= \left[u^l_{i,j+1}+u^l_{i,j-1}+u^l_{i+1,j}+u^l_{i-1,j}\right] \).
This equation has essentially the same structure as Eq. (19), except that
the function \( \rho_{ij} \) is replaced by the solution at a previous time step \( l-1 \). Furthermore, the diagonal matrix elements
are now given by \( 1+4\alpha \), while the non-zero non-diagonal matrix elements equal \( \alpha \). This matrix is also positive definite and strictly diagonally dominant, meaning in turn that
iterative schemes like the Jacobi or the Gauss-Seidel methods will converge to the desired solution after a given number of iterations.
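A sketch of how Eq. (22) can be embedded in a time loop, with a fixed number of Jacobi sweeps per time step (an addition; the matrices u, uold and uprev and the loop bounds are assumptions in the style of the codes above), could read
// Backward Euler for the 2+1-dimensional diffusion equation, Eq. (22)
// For every time step the implicit equations are solved with Jacobi sweeps
for (int l = 1; l <= tsteps; l++) {
  uprev = u;   // solution at time step l-1, the right-hand side of Eq. (22)
  for (int k = 0; k < JacobiIterations; k++) {
    uold = u;
    for (int i = 1; i < n; i++) {
      for (int j = 1; j < n; j++) {
        u(i,j) = (alpha*(uold(i+1,j) + uold(i-1,j) +
                         uold(i,j+1) + uold(i,j-1)) + uprev(i,j))/(1.0 + 4.0*alpha);
      }
    }
    // a convergence test on the difference between u and uold would go here
  }
}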
// Solves linear equations for a simple tridiagonal matrix using the iterative Jacobi method
....
// Begin main program
int main(int argc, char *argv[]){
  // missing statements, see code link above
  mat A = zeros<mat>(n,n);
  // Set up arrays for the simple case; A holds only the off-diagonal
  // elements, the diagonal (equal to 2) is absorbed in the factor 0.5 below
  vec b(n); vec x(n);
  A(0,1) = -1; x(0) = h; b(0) = hh*f(x(0));
  x(n-1) = x(0)+(n-1)*h; b(n-1) = hh*f(x(n-1));
  for (int i = 1; i < n-1; i++){
    x(i) = x(i-1)+h;
    b(i) = hh*f(x(i));
    A(i,i-1) = -1.0;
    A(i,i+1) = -1.0;
  }
  A(n-2,n-1) = -1.0; A(n-1,n-2) = -1.0;
  // solve Ax = b by iteration with a random starting vector
  int maxiter = 100; double diff = 1.0;
  double epsilon = 1.0e-10; int iter = 0;
  vec SolutionOld = randu<vec>(n);
  vec SolutionNew = zeros<vec>(n);
  while (iter <= maxiter && diff > epsilon){
    SolutionNew = (b - A*SolutionOld)*0.5;
    iter++; diff = sum(abs(SolutionNew-SolutionOld))/n;
    SolutionOld = SolutionNew;
  }
  vec solution = SolutionOld;
}
/* Simple program for solving the two-dimensional diffusion
equation or Poisson equation using Jacobi's iterative method
Note that this program does not contain a loop over the time
dependence.
*/
#include <iostream>
#include <iomanip>
#include <armadillo>
using namespace std;
using namespace arma;
int JacobiSolver(int, double, double, mat &, mat &, double);
int main(int argc, char * argv[]){
int Npoints = 40;
double ExactSolution;
double dx = 1.0/(Npoints-1);
double dt = 0.25*dx*dx;
double tolerance = 1.0e-14;
mat A = zeros<mat>(Npoints,Npoints);
mat q = zeros<mat>(Npoints,Npoints);
// setting up an additional source term
for(int i = 0; i < Npoints; i++)
for(int j = 0; j < Npoints; j++)
q(i,j) = -2.0*M_PI*M_PI*sin(M_PI*dx*i)*sin(M_PI*dx*j);
int itcount = JacobiSolver(Npoints,dx,dt,A,q,tolerance);
// Testing against exact solution
double sum = 0.0;
for(int i = 0; i < Npoints; i++){
for(int j=0;j < Npoints; j++){
ExactSolution = -sin(M_PI*dx*i)*sin(M_PI*dx*j);
sum += fabs((A(i,j) - ExactSolution));
}
}
cout << setprecision(5) << setiosflags(ios::scientific);
cout << "Jacobi method with error " << sum/Npoints << " in " << itcount << " iterations" << endl;
}
// Function for setting up the iterative Jacobi solver
int JacobiSolver(int N, double dx, double dt, mat &A, mat &q, double abstol)
{
int MaxIterations = 100000;
mat Aold = zeros<mat>(N,N);
double D = dt/(dx*dx);
for(int i=1; i < N-1; i++)
for(int j=1; j < N-1; j++)
Aold(i,j) = 1.0;
// Boundary Conditions -- all zeros
for(int i=0; i < N; i++){
A(0,i) = 0.0;
A(N-1,i) = 0.0;
A(i,0) = 0.0;
A(i,N-1) = 0.0;
}
// Start the iterative solver
for(int k = 0; k < MaxIterations; k++){
for(int i = 1; i < N-1; i++){
for(int j=1; j < N-1; j++){
A(i,j) = dt*q(i,j) + Aold(i,j) +
D*(Aold(i+1,j) + Aold(i,j+1) - 4.0*Aold(i,j) +
Aold(i-1,j) + Aold(i,j-1));
}
}
double sum = 0.0;
for(int i = 0; i < N;i++){
for(int j = 0; j < N;j++){
sum += (Aold(i,j)-A(i,j))*(Aold(i,j)-A(i,j));
Aold(i,j) = A(i,j);
}
}
if(sqrt (sum) <abstol){
return k;
}
}
cerr << "Jacobi: Maximum Number of Interations Reached Without Convergence\n";
return MaxIterations;
}
In order to parallelize the Jacobi method we need to introduce two new MPI functions, namely MPI_Allreduce and MPI_Allgatherv.
Here we present a parallel implementation of the Jacobi method without an explicit link to the diffusion equation. Let us go back to the plain Jacobi method and implement it in parallel.
// Main program first
#include <mpi.h>
// Omitted statements
int main(int argc, char * argv[]){
int i,j, N = 20;
double **A,*x,*q;
int totalnodes,mynode;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
MPI_Comm_rank(MPI_COMM_WORLD, &mynode);
if(mynode==0){
// The master node would allocate and fill A, x and q here (omitted)
}
ParallelJacobi(mynode,totalnodes,N,A,x,q,1.0e-14);
if(mynode==0){
for(int i = 0; i < N; i++)
cout << x[i] << endl;
}
MPI_Finalize();
}
Here follows the parallel implementation of the Jacobi algorithm
int ParallelJacobi(int mynode, int numnodes, int N, double **A, double *x, double *b, double abstol){
int i,j,k,i_global;
int maxit = 100000;
int rows_local,local_offset,last_rows_local,*count,*displacements;
double sum1,sum2,*xold;
double error_sum_local, error_sum_global;
MPI_Status status;
rows_local = (int) floor((double)N/numnodes);
local_offset = mynode*rows_local;
if(mynode == (numnodes-1))
rows_local = N - rows_local*(numnodes-1);
/*Distribute the Matrix and R.H.S. among the processors */
if(mynode == 0){
for(i=1;i<numnodes-1;i++){
for(j=0;j<rows_local;j++)
MPI_Send(A[i*rows_local+j],N,MPI_DOUBLE,i,j,MPI_COMM_WORLD);
MPI_Send(b+i*rows_local,rows_local,MPI_DOUBLE,i,rows_local,
MPI_COMM_WORLD);
}
last_rows_local = N-rows_local*(numnodes-1);
for(j=0;j<last_rows_local;j++)
MPI_Send(A[(numnodes-1)*rows_local+j],N,MPI_DOUBLE,numnodes-1,j,
MPI_COMM_WORLD);
MPI_Send(b+(numnodes-1)*rows_local,last_rows_local,MPI_DOUBLE,numnodes-1,
last_rows_local,MPI_COMM_WORLD);
}
else{
A = CreateMatrix(rows_local,N);
x = new double[rows_local];
b = new double[rows_local];
for(i=0;i<rows_local;i++)
MPI_Recv(A[i],N,MPI_DOUBLE,0,i,MPI_COMM_WORLD,&status);
MPI_Recv(b,rows_local,MPI_DOUBLE,0,rows_local,MPI_COMM_WORLD,&status);
}
xold = new double[N];
count = new int[numnodes];
displacements = new int[numnodes];
//set initial guess to all 1.0
for(i=0; i<N; i++){
xold[i] = 1.0;
}
for(i=0;i<numnodes;i++){
count[i] = (int) floor((double)N/numnodes);
displacements[i] = i*count[i];
}
count[numnodes-1] = N - ((int)floor((double)N/numnodes))*(numnodes-1);
for(k=0; k<maxit; k++){
error_sum_local = 0.0;
for(i = 0; i<rows_local; i++){
i_global = local_offset+i;
sum1 = 0.0; sum2 = 0.0;
for(j=0; j < i_global; j++)
sum1 = sum1 + A[i][j]*xold[j];
for(j=i_global+1; j < N; j++)
sum2 = sum2 + A[i][j]*xold[j];
x[i] = (-sum1 - sum2 + b[i])/A[i][i_global];
error_sum_local += (x[i]-xold[i_global])*(x[i]-xold[i_global]);
}
MPI_Allreduce(&error_sum_local,&error_sum_global,1,MPI_DOUBLE,
MPI_SUM,MPI_COMM_WORLD);
MPI_Allgatherv(x,rows_local,MPI_DOUBLE,xold,count,displacements,
MPI_DOUBLE,MPI_COMM_WORLD);
if(sqrt(error_sum_global)<abstol){
if(mynode == 0){
for(i=0;i<N;i++)
x[i] = xold[i];
}
else{
DestroyMatrix(A,rows_local,N);
delete[] x;
delete[] b;
}
delete[] xold;
delete[] count;
delete[] displacements;
return k;
}
}
cerr << "Jacobi: Maximum Number of Interations Reached Without Convergence\n";
if(mynode == 0){
for(i=0;i<N;i++)
x[i] = xold[i];
}
else{
DestroyMatrix(A,rows_local,N);
delete[] x;
delete[] b;
}
delete[] xold;
delete[] count;
delete[] displacements;
return maxit;
}
Here follows the parallel implementation of the diffusion equation using OpenMP
/* Simple program for solving the two-dimensional diffusion
equation or Poisson equation using Jacobi's iterative method
Note that this program does not contain a loop over the time
dependence. It uses OpenMP to parallelize
*/
#include <iostream>
#include <iomanip>
#include <armadillo>
#include <omp.h>
using namespace std;
using namespace arma;
int JacobiSolver(int, double, double, mat &, mat &, double);
int main(int argc, char * argv[]){
int Npoints = 100;
double ExactSolution;
double dx = 1.0/(Npoints-1);
double dt = 0.25*dx*dx;
double tolerance = 1.0e-8;
mat A = zeros<mat>(Npoints,Npoints);
mat q = zeros<mat>(Npoints,Npoints);
int thread_num;
omp_set_num_threads(4);
thread_num = omp_get_max_threads ();
cout << " The number of processors available = " << omp_get_num_procs () << endl ;
cout << " The number of threads available = " << thread_num << endl;
// setting up an additional source term
for(int i = 0; i < Npoints; i++)
for(int j = 0; j < Npoints; j++)
q(i,j) = -2.0*M_PI*M_PI*sin(M_PI*dx*i)*sin(M_PI*dx*j);
int itcount = JacobiSolver(Npoints,dx,dt,A,q,tolerance);
// Testing against exact solution
double sum = 0.0;
for(int i = 0; i < Npoints; i++){
for(int j=0;j < Npoints; j++){
ExactSolution = -sin(M_PI*dx*i)*sin(M_PI*dx*j);
sum += fabs((A(i,j) - ExactSolution));
}
}
cout << setprecision(5) << setiosflags(ios::scientific);
cout << "Jacobi error is " << sum/Npoints << " in " << itcount << " iterations" << endl;
}
// Function for setting up the iterative Jacobi solver
int JacobiSolver(int N, double dx, double dt, mat &A, mat &q, double abstol)
{
int MaxIterations = 100000;
double D = dt/(dx*dx);
// initial guess
mat Aold = randu<mat>(N,N);
// Boundary conditions, all zeros
for(int i=0; i < N; i++){
A(0,i) = 0.0;
A(N-1,i) = 0.0;
A(i,0) = 0.0;
A(i,N-1) = 0.0;
}
double sum = 1.0;
int k = 0;
// Start the iterative solver
while (k < MaxIterations && sum > abstol){
  sum = 0.0;
  // Parallel update of all interior mesh points
  # pragma omp parallel for default(shared)
  for(int i = 1; i < N-1; i++){
    for(int j = 1; j < N-1; j++){
      A(i,j) = dt*q(i,j) + Aold(i,j) +
        D*(Aold(i+1,j) + Aold(i,j+1) - 4.0*Aold(i,j) +
           Aold(i-1,j) + Aold(i,j-1));
    }
  }
  // Convergence check and copy of the new solution, done serially
  // so that the difference is accumulated exactly once
  for(int i = 0; i < N; i++){
    for(int j = 0; j < N; j++){
      sum += fabs(Aold(i,j)-A(i,j));
      Aold(i,j) = A(i,j);
    }
  }
  sum /= (N*N);
  k++;
} //end while loop
return k;
}
The \( 1+1 \)-dimensional wave equation reads
$$
\begin{equation*}
\frac{\partial^2 u}{\partial x^2}=\frac{\partial^2 u}{\partial t^2},
\end{equation*}
$$
with \( u=u(x,t) \) and we have assumed that we operate with
dimensionless variables. Possible boundary and initial conditions
with \( L=1 \) are
$$
\begin{equation*}
\begin{array}{cc} u_{xx} = u_{tt}& x\in(0,1), t>0 \\
u(x,0) = g(x)& x\in (0,1) \\
u(0,t)=u(1,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}=0 & x\in (0,1)\\
\end{array} .
\end{equation*}
$$
We discretize again the derivatives, using
$$
\begin{equation*}
u_{xx}\approx \frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{tt}\approx \frac{u(x,t+\Delta t)-2u(x,t)+u(x,t-\Delta t)}{\Delta t^2},
\end{equation*}
$$
which we rewrite as
$$
\begin{equation*}
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{tt}\approx \frac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{\Delta t^2},
\end{equation*}
$$
resulting in
$$
\begin{equation}
\tag{23}
u_{i,j+1}=2u_{i,j}-u_{i,j-1}+\frac{\Delta t^2}{\Delta x^2}\left(u_{i+1,j}-2u_{i,j}+u_{i-1,j}\right).
\end{equation}
$$
For the initial condition on the time derivative we use the centered difference
$$
\begin{equation*}
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j-\Delta t)}{2\Delta t},
\end{equation*}
$$
and at \( t=0 \) it reduces to
$$
\begin{equation*}
u_t\approx \frac{u_{i,+1}-u_{i,-1}}{2\Delta t}=0,
\end{equation*}
$$
implying that \( u_{i,+1}=u_{i,-1} \).
Inserting this in Eq. (23) for the first time step \( j=0 \), we obtain
$$
\begin{equation}
\tag{24}
u_{i,1}=u_{i,0}+\frac{\Delta t^2}{2\Delta x^2}\left(u_{i+1,0}-2u_{i,0}+u_{i-1,0}\right).
\end{equation}
$$
We need seemingly two different equations, one for the first time step
given by Eq. (24) and one for all other time-steps
given by Eq. (23). However, it suffices to use
Eq. (23) for all times as long as we
provide \( u_{i,-1} \) using
$$
\begin{equation*}
u_{i,-1}=u_{i,0}+\frac{\Delta t^2}{2\Delta x^2}\left(u_{i+1,0}-2u_{i,0}+u_{i-1,0}\right),
\end{equation*}
$$
in our setup of the initial conditions.
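A minimal sketch of the resulting algorithm (an addition, assuming Armadillo vectors uprev, u and unew with zero boundary values, a function g for the initial condition, and the names dt, dx, n and tsteps) could read
// Explicit scheme for the 1+1-dimensional wave equation, Eq. (23)
// uprev, u and unew hold time steps j-1, j and j+1, respectively
double c2 = dt*dt/(dx*dx); // the ratio (Delta t)^2/(Delta x)^2
// initial conditions u_{i,0} = g(x_i) and the artificial step u_{i,-1}
for (int i = 1; i < n; i++) u(i) = g(i*dx);
for (int i = 1; i < n; i++)
  uprev(i) = u(i) + 0.5*c2*(u(i+1) - 2.0*u(i) + u(i-1));
// time stepping
for (int j = 1; j <= tsteps; j++) {
  for (int i = 1; i < n; i++) {
    unew(i) = 2.0*u(i) - uprev(i) + c2*(u(i+1) - 2.0*u(i) + u(i-1));
  }
  uprev = u;
  u = unew; // the boundary points u(0) = u(n) = 0 never change
}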
We turn now to the \( 2+1 \)-dimensional wave equation, with the mesh
$$
\begin{equation*}
\begin{array}{cc} t_l=l\Delta t& l \ge 0 \\
x_i=i\Delta x& 0 \le i \le n_x\\
y_j=j\Delta y& 0 \le j \le n_y\end{array} ,
\end{equation*}
$$
and we will let \( \Delta x=\Delta y = h \) and \( n_x=n_y \) for the sake of
simplicity.
The equation with initial and boundary conditions now reads
$$
\begin{equation*}
\begin{array}{cc} u_{xx}+u_{yy} = u_{tt}& x,y\in(0,1), t>0 \\
u(x,y,0) = g(x,y)& x,y\in (0,1) \\
u(0,0,t)=u(1,1,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}=0 & x,y\in (0,1)\\
\end{array}.
\end{equation*}
$$
We discretize the derivatives as
$$
\begin{equation*}
u_{xx}\approx \frac{u_{i+1,j}^l-2u_{i,j}^l+u_{i-1,j}^l}{h^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{yy}\approx \frac{u_{i,j+1}^l-2u_{i,j}^l+u_{i,j-1}^l}{h^2},
\end{equation*}
$$
and
$$
\begin{equation*}
u_{tt}\approx \frac{u_{i,j}^{l+1}-2u_{i,j}^{l}+u_{i,j}^{l-1}}{\Delta t^2},
\end{equation*}
$$
which we merge into the discretized \( 2+1 \)-dimensional wave equation
as
$$
\begin{equation}
\tag{25}
u_{i,j}^{l+1}
=2u_{i,j}^{l}-u_{i,j}^{l-1}+\frac{\Delta t^2}{h^2}\left(u_{i+1,j}^l-4u_{i,j}^l+u_{i-1,j}^l+u_{i,j+1}^l+u_{i,j-1}^l\right),
\end{equation}
$$
where again we have an explicit scheme with \( u_{i,j}^{l+1} \) as the only
unknown quantity.
As in the \( 1+1 \)-dimensional case, the initial condition on the time derivative is incorporated by providing
$$
\begin{equation*}
u_{i,j}^{-1}=u_{i,j}^0+\frac{\Delta t^2}{2h^2}\left(u_{i+1,j}^0-4u_{i,j}^0+u_{i-1,j}^0+u_{i,j+1}^0+u_{i,j-1}^0\right),
\end{equation*}
$$
in our setup of the initial conditions.
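The corresponding update loop (an addition, in direct analogy with the \( 1+1 \)-dimensional sketch above, with c2 \( =\Delta t^2/h^2 \) and the matrix names assumed) could read
// Explicit scheme for the 2+1-dimensional wave equation, Eq. (25)
for (int l = 1; l <= tsteps; l++) {
  for (int i = 1; i < n; i++) {
    for (int j = 1; j < n; j++) {
      unew(i,j) = 2.0*u(i,j) - uprev(i,j)
                + c2*(u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1) - 4.0*u(i,j));
    }
  }
  uprev = u;
  u = unew; // boundary values stay fixed
}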
We can also find a closed-form solution of the slightly more general problem
$$
\begin{equation*}
\begin{array}{cc} c^2(u_{xx}+u_{yy}) = u_{tt}& x,y\in(0,L), t>0 \\
u(x,y,0) = f(x,y) & x,y\in (0,L) \\
u(0,0,t)=u(L,L,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}= g(x,y) & x,y\in (0,L)\\
\end{array} .
\end{equation*}
$$
We assume again a separable solution
$$
\begin{equation*}
u(x,y,t) = F(x,y) G(t),
\end{equation*}
$$
resulting in the equation
$$
\begin{equation*}
FG_{tt}= c^2(F_{xx}G+F_{yy}G),
\end{equation*}
$$
or
$$
\begin{equation*}
\frac{G_{tt}}{c^2G} = \frac{1}{F}(F_{xx}+F_{yy}) = -\nu^2.
\end{equation*}
$$
The requirement that both sides equal the same constant \( -\nu^2 \) yields
$$
\begin{equation*}
F_{xx}+F_{yy}+F\nu^2=0,
\end{equation*}
$$
and
$$
\begin{equation*}
G_{tt} + Gc^2\nu^2 = G_{tt} + G\lambda^2 = 0,
\end{equation*}
$$
with \( \lambda = c\nu \).
We can in turn make the following ansatz for the \( x \) and \( y \) dependent part
$$
\begin{equation*}
F(x,y) = H(x)Q(y),
\end{equation*}
$$
which results in
$$
\begin{equation*}
\frac{1}{H}H_{xx} = -\frac{1}{Q}(Q_{yy}+Q\nu^2)= -\kappa^2.
\end{equation*}
$$
We have then
$$
\begin{equation*}
H_{xx} + \kappa^2H = 0,
\end{equation*}
$$
and
$$
\begin{equation*}
Q_{yy} + \rho^2Q = 0,
\end{equation*}
$$
with \( \rho^2= \nu^2-\kappa^2 \).
The solutions are
$$
\begin{equation*}
H(x) = A\cos(\kappa x)+B\sin(\kappa x),
\end{equation*}
$$
and
$$
\begin{equation*}
Q(y) = C\cos(\rho y)+D\sin(\rho y).
\end{equation*}
$$
The boundary conditions \( u=0 \) on the borders yield \( A=C=0 \) and the quantized values \( \kappa=m\pi/L \) and \( \rho=n\pi/L \), that is
$$
\begin{equation*}
H_m(x) = \sin(\frac{m\pi x}{L}) \hspace{1cm} Q_n(y) = \sin(\frac{n\pi y}{L}),
\end{equation*}
$$
or
$$
\begin{equation*}
F_{mn}(x,y) = \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}).
\end{equation*}
$$
With \( \rho^2= \nu^2-\kappa^2 \) and \( \lambda = c\nu \) we have an eigenspectrum \( \lambda=c\sqrt{\kappa^2+\rho^2} \)
or \( \lambda_{mn}= c\pi/L\sqrt{m^2+n^2} \).
The corresponding temporal part reads
$$
\begin{equation*}
G_{mn}(t) = B_{mn}\cos(\lambda_{mn} t)+D_{mn}\sin(\lambda_{mn} t),
\end{equation*}
$$
with the general solution of the form
$$
\begin{equation*}
u(x,y,t) = \sum_{m,n=1}^{\infty} u_{mn}(x,y,t) = \sum_{m,n=1}^{\infty}F_{mn}(x,y)G_{mn}(t).
\end{equation*}
$$
The coefficients are determined by the initial conditions. The condition \( u(x,y,0)=f(x,y) \) gives
$$
\begin{equation*}
B_{mn} = \frac{4}{L^2}\int_0^L\int_0^L dxdy\, f(x,y) \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}),
\end{equation*}
$$
while the condition on the time derivative at \( t=0 \), \( \partial u/\partial t|_{t=0}=g(x,y) \), gives
$$
\begin{equation*}
D_{mn} = \frac{4}{\lambda_{mn}L^2}\int_0^L\int_0^L dxdy\, g(x,y) \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}).
\end{equation*}
$$
Inserting the particular functional forms of \( f(x,y) \) and \( g(x,y) \) one obtains the final closed-form expressions.
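As a simple added example: if \( f(x,y)=\sin(\pi x/L)\sin(\pi y/L) \) and \( g(x,y)=0 \), orthogonality gives \( B_{11}=1 \) with all other coefficients zero, so that
$$
\begin{equation*}
u(x,y,t)=\sin(\frac{\pi x}{L})\sin(\frac{\pi y}{L})\cos(\frac{\sqrt{2}\pi ct}{L}),
\end{equation*}
$$
which again provides a closed-form test case for the numerical schemes above.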