15. Solving Differential Equations with Deep Learning#
The Universal Approximation Theorem states that a neural network with a single hidden layer, together with one input and one output layer, can approximate any continuous function to any given precision.
An ordinary differential equation (ODE) is an equation involving a function of one variable and its derivatives.
In general, an ordinary differential equation can be written as

$$f\left(x, \, g(x), \, g'(x), \, \ldots, \, g^{(n)}(x)\right) = 0,$$

where $g(x)$ is the unknown function, $x$ is a real variable and $f$ is a given expression involving $x$, $g(x)$ and the derivatives of $g(x)$. The highest derivative that appears, here of order $n$, determines the order of the equation. For the solution to be unique, the equation must be accompanied by initial or boundary conditions.

The idea behind the neural network approach is to pick a trial solution

$$g_t(x) = h_1(x) + h_2\left(x, N(x, P)\right),$$

where $h_1(x)$ alone makes $g_t(x)$ satisfy the given conditions, $N(x, P)$ is the output of a neural network with parameters $P$ (its weights and biases), and $h_2(x, N(x,P))$ is an expression involving the network output that vanishes wherever the conditions are imposed. But what about the network itself? How do we train it so that the trial solution satisfies the equation?
As described previously, an optimization method can be used to adjust the parameters of a neural network, that is, its weights and biases, through backward propagation.
For the minimization to be defined, we need to have a cost function at hand to minimize.
It is given that the exact solution satisfies

$$f\left(x, \, g(x), \, g'(x), \, \ldots, \, g^{(n)}(x)\right) = 0.$$

If the trial solution $g_t(x)$ were the exact solution, the residual $f\left(x, g_t(x), g_t'(x), \ldots, g_t^{(n)}(x)\right)$ would be zero for every $x$.
The neural network should therefore find the parameters $P$ that make this residual as close to zero as possible, typically by minimizing its square averaged over a set of chosen points.
To perform the minimization using gradient descent, the gradient of the cost function with respect to $P$ must be computed; doing this by hand quickly becomes cumbersome.
Luckily, there exist libraries that do the job for us through automatic differentiation, a method of evaluating derivatives numerically with very high precision.
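As a minimal illustration of automatic differentiation with the Autograd library used in the programs below (the function f here is just a made-up example):

import autograd.numpy as np
from autograd import grad

# A made-up example function:
def f(x):
    return x**2*np.sin(x)

df = grad(f)     # df is a Python function that evaluates f'(x)

x = 1.0
print(df(x))                             # derivative from automatic differentiation
print(2*x*np.sin(x) + x**2*np.cos(x))    # the analytical derivative, for comparison

The two printed values agree to machine precision, which is exactly what the programs in this chapter rely on when differentiating the trial solution and the cost function.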
15.1. Example: Exponential decay#
An exponential decay of a quantity $g(x)$ is described by the equation

$$g'(x) = -\gamma g(x) \tag{4}$$

with $g(0) = g_0$ for some chosen initial value $g_0$.
The analytical solution of (4) is

$$g(x) = g_0 \exp(-\gamma x).$$

Having an analytical solution at hand, it is possible to use it to check how well a neural network finds a solution of (4).
The program will use a neural network to solve

$$g'(x) = -\gamma g(x) \tag{6}$$

where $g(0) = g_0$, with $\gamma$ and $g_0$ being some chosen values.
In this example, $\gamma = 2$ and $g_0 = 10$.
To begin with, a trial solution $g_t(x)$ must be chosen. A general trial solution for ordinary differential equations is

$$g_t(x, P) = h_1(x) + h_2\left(x, N(x, P)\right)$$

with $h_1(x)$ ensuring that $g_t(x)$ satisfies the given conditions and $h_2(x, N(x,P))$ an expression involving $x$ and the output of the neural network $N(x, P)$, with $P$ being the collection of weights and biases for each layer.
For now, it is assumed that the network consists of one input layer, one hidden layer and one output layer.
In this network, there are no weights and bias at the input layer, so $P = \{P_{\mathrm{hidden}}, P_{\mathrm{output}}\}$. If there are $N_{\mathrm{hidden}}$ neurons in the hidden layer, then $P_{\mathrm{hidden}}$ is an $N_{\mathrm{hidden}} \times (1 + N_{\mathrm{input}})$ matrix, given that there are $N_{\mathrm{input}}$ neurons in the input layer.
The first column in $P_{\mathrm{hidden}}$ represents the bias for each neuron in the hidden layer, and the second column represents the weights from the input layer to each hidden neuron.
If there are $N_{\mathrm{output}}$ neurons in the output layer, then $P_{\mathrm{output}}$ is an $N_{\mathrm{output}} \times (1 + N_{\mathrm{hidden}})$ matrix.
Its first column represents the bias of each neuron and the remaining columns represent the weights to each neuron.
It is given that $g(0) = g_0 = 10$ and $\gamma = 2$ in this example.
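To make the bookkeeping concrete, here is a minimal sketch of how these parameter matrices could be set up for a network with one input, ten hidden neurons and one output; the shapes match those used in the program below, while the variable names are illustrative:

import autograd.numpy.random as npr

N_hidden = 10

# Biases in the first column, weights from the single input in the second:
P_hidden = npr.randn(N_hidden, 1 + 1)

# Bias in the first column, weights from each hidden neuron in the rest:
P_output = npr.randn(1, 1 + N_hidden)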
15.2. Reformulating the problem#
We wish that our neural network manages to minimize a given cost function.
A reformulation of our equation, (6), must therefore be done, such that it describes a problem a neural network can solve for.
The neural network must find the set of weights and biases $P$ such that the trial solution

$$g_t(x, P) = g_0 + x \, N(x, P)$$

satisfies (6).
The trial solution has been chosen such that it already satisfies the condition $g_t(0, P) = g_0$. What remains is to find $P$ such that

$$g_t'(x, P) = -\gamma g_t(x, P) \tag{8}$$

is fulfilled as well as possible.
The left hand side and right hand side of (8) must be computed separately, and the neural network must then choose the weights and biases contained in $P$ such that the two sides are as close to equal as possible. The squared difference between the two sides turns out to be an appropriate measure of how erroneous the trial solution is with respect to $P$.
This gives the following cost function our neural network must solve for:

$$\min_{P} \left\{ \left( g_t'(x, P) + \gamma g_t(x, P) \right)^2 \right\}$$

(the notation $\min_{P}\{\cdot\}$ means finding the $P$ that minimizes the expression inside the braces),
or, in terms of weights and biases for the hidden and output layer in our network:

$$\min_{P_{\mathrm{hidden}}, \, P_{\mathrm{output}}} \left\{ \left( g_t'\left(x, \{P_{\mathrm{hidden}}, P_{\mathrm{output}}\}\right) + \gamma g_t\left(x, \{P_{\mathrm{hidden}}, P_{\mathrm{output}}\}\right) \right)^2 \right\}$$

for an input value $x$.
If the neural network evaluates $g_t(x, P)$ at several values $x_1, \ldots, x_N$, the total error to minimize becomes the mean over all the points:

$$\min_{P} \frac{1}{N} \sum_{i=1}^{N} \left( g_t'(x_i, P) + \gamma g_t(x_i, P) \right)^2.$$

Letting $C(x, P)$ denote this cost function, the minimization problem reads $\min_{P} C(x, P)$.
In terms of the two parameter sets, it can equivalently be written as $\min_{P_{\mathrm{hidden}}, \, P_{\mathrm{output}}} C\left(x, \{P_{\mathrm{hidden}}, P_{\mathrm{output}}\}\right)$.
For simplicity, it is assumed that the input is an array $x = (x_1, \ldots, x_N)$ with $N$ elements.
First, the neural network must feed forward the inputs.
This means that $x$ is passed through the input layer, the hidden layer and the output layer. The input layer does not process the data any further; it simply passes each input value on to the hidden layer.
For the $i$-th neuron in the hidden layer, with bias $b_i^{\mathrm{hidden}}$ and weight $w_i^{\mathrm{hidden}}$, the weighting of an input value $x_j$ reads

$$z_{i,j}^{\mathrm{hidden}} = b_i^{\mathrm{hidden}} + w_i^{\mathrm{hidden}} x_j.$$

The result after weighting the inputs at every hidden neuron can be written compactly as a matrix product,

$$z^{\mathrm{hidden}} = P_{\mathrm{hidden}} \begin{pmatrix} \boldsymbol{1} \\ x \end{pmatrix},$$

where the row of ones takes care of the biases; this is exactly how the code below includes them, by concatenating a row of ones to the input.
The matrix $z^{\mathrm{hidden}}$ is of size $N_{\mathrm{hidden}} \times N$ and contains, for each hidden neuron and each input value, the weighted sum including the bias.
After having found $z^{\mathrm{hidden}}$, this quantity is sent through an activation function $a(z)$.
In this example, the sigmoid function has been chosen as the activation function for each hidden neuron:

$$a(z) = \sigma(z) = \frac{1}{1 + e^{-z}}.$$

It is possible to use other activation functions for the hidden layer as well.
The output $x^{\mathrm{hidden}}$ from the hidden layer is then

$$x^{\mathrm{hidden}} = \sigma\left(z^{\mathrm{hidden}}\right),$$

applied element by element.
The outputs $x^{\mathrm{hidden}}$ are then sent to the output layer.
The output layer consists of one neuron in this case, and combines the
output from each of the neurons in the hidden layer. The output layer
combines the results from the hidden layer using some weights $w^{\mathrm{output}}$ and a bias $b^{\mathrm{output}}$.
The procedure of weighting the outputs from the hidden neurons reads

$$z^{\mathrm{output}} = b^{\mathrm{output}} + \sum_{j=1}^{N_{\mathrm{hidden}}} w_j^{\mathrm{output}} x_j^{\mathrm{hidden}}.$$

Expressing $z^{\mathrm{output}}$ as a matrix product gives

$$z^{\mathrm{output}} = P_{\mathrm{output}} \begin{pmatrix} \boldsymbol{1} \\ x^{\mathrm{hidden}} \end{pmatrix}.$$

In this case we seek a continuous range of values since we are approximating a function. This means that after computing $z^{\mathrm{output}}$, the output layer applies the identity as its activation, so the network output is simply $N(x, P) = z^{\mathrm{output}}$.
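To see the matrix formulation in action, here is a minimal sketch of a single forward pass with two hidden neurons and made-up numbers; the row of ones absorbs the biases, exactly as in the program below:

import numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

x = 0.5                                    # a single input value
P_hidden = np.array([[ 0.1, 0.4],
                     [-0.2, 0.3]])         # two hidden neurons: (bias, weight)
P_output = np.array([[0.05, 0.7, -0.6]])   # output neuron: (bias, weights)

z_hidden = P_hidden @ np.array([1.0, x])   # prepend 1 to include the bias
x_hidden = sigmoid(z_hidden)

z_output = P_output @ np.concatenate(([1.0], x_hidden))
print(z_output)                            # N(x, P), the network output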
The next step is to decide how the parameters should be changed such that they minimize the cost function.
The chosen cost function for this problem is the mean squared residual $C(x, P)$ defined above.
In order to minimize the cost function, an optimization method must be chosen.
Here, gradient descent with a constant step size has been chosen.
15.3. Gradient descent#
The idea of the gradient descent algorithm is to update the parameters in the direction where the cost function decreases, towards a minimum.
In general, the update of some parameters $\omega$, given a cost function $C(x, \omega)$, goes as follows:

$$\omega_{\mathrm{new}} = \omega - \lambda \nabla_{\omega} C(x, \omega)$$

for a number of iterations, or until $\|\omega_{\mathrm{new}} - \omega\|$ becomes smaller than some given tolerance.
The value of $\lambda$, the learning rate, decides how large a step is taken in the direction of the negative gradient at each iteration.
In our case, we have to minimize the cost function $C(x, P)$ with respect to the two sets of weights and biases, those of the hidden layer and those of the output layer.
This means that $P_{\mathrm{hidden}}$ and $P_{\mathrm{output}}$ are updated as

$$P_{\mathrm{hidden,new}} = P_{\mathrm{hidden}} - \lambda \nabla_{P_{\mathrm{hidden}}} C(x, P), \qquad P_{\mathrm{output,new}} = P_{\mathrm{output}} - \lambda \nabla_{P_{\mathrm{output}}} C(x, P).$$
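As a minimal sketch of this update rule, with a made-up one-parameter cost function whose minimum is at $\omega = 3$:

import autograd.numpy as np
from autograd import grad

def C(w):
    return (w - 3.0)**2      # toy cost function, minimum at w = 3

dC = grad(C)

w = 0.0                      # initial guess
lmb = 0.1                    # learning rate
for _ in range(100):
    w = w - lmb*dC(w)

print(w)                     # close to 3

The program below does exactly this, only with the parameter being the list of weight and bias arrays and the gradient supplied by Autograd.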
15.4. The code for solving the ODE#
%matplotlib inline

import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt

def sigmoid(z):
    return 1/(1 + np.exp(-z))

# Assuming one input, hidden, and output layer
def neural_network(params, x):
    # Find the weights (and biases) for the hidden and output layer.
    # Assume that params is a list of parameters for each layer.
    # The biases are the first element for each array in params,
    # and the weights are the remaining elements in each array in params.
    w_hidden = params[0]
    w_output = params[1]

    # Assumes input x being a one-dimensional array
    num_values = np.size(x)
    x = x.reshape(-1, num_values)

    # Assume that the input layer does nothing to the input x
    x_input = x

    ## Hidden layer:
    # Add a row of ones to include bias
    x_input = np.concatenate((np.ones((1,num_values)), x_input ), axis = 0)

    z_hidden = np.matmul(w_hidden, x_input)
    x_hidden = sigmoid(z_hidden)

    ## Output layer:
    # Include bias:
    x_hidden = np.concatenate((np.ones((1,num_values)), x_hidden ), axis = 0)

    z_output = np.matmul(w_output, x_hidden)
    x_output = z_output

    return x_output

# The trial solution using the deep neural network:
def g_trial(x, params, g0 = 10):
    return g0 + x*neural_network(params, x)

# The right side of the ODE:
def g(x, g_trial, gamma = 2):
    return -gamma*g_trial

# The cost function:
def cost_function(P, x):

    # Evaluate the trial function with the current parameters P
    g_t = g_trial(x,P)

    # Find the derivative w.r.t x of the neural network (not used further here)
    d_net_out = elementwise_grad(neural_network,1)(P,x)

    # Find the derivative w.r.t x of the trial function
    d_g_t = elementwise_grad(g_trial,0)(x,P)

    # The right side of the ODE
    func = g(x, g_t)

    err_sqr = (d_g_t - func)**2
    cost_sum = np.sum(err_sqr)

    return cost_sum / np.size(err_sqr)

# Solve the exponential decay ODE using a neural network with one input, hidden, and output layer
def solve_ode_neural_network(x, num_neurons_hidden, num_iter, lmb):
    ## Set up initial weights and biases

    # For the hidden layer
    p0 = npr.randn(num_neurons_hidden, 2 )

    # For the output layer
    p1 = npr.randn(1, num_neurons_hidden + 1 ) # +1 since bias is included

    P = [p0, p1]

    print('Initial cost: %g'%cost_function(P, x))

    ## Start finding the optimal weights using gradient descent

    # Find the Python function that represents the gradient of the cost function
    # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
    cost_function_grad = grad(cost_function,0)

    # Let the update be done num_iter times
    for i in range(num_iter):
        # Evaluate the gradient at the current weights and biases in P.
        # The cost_grad consists now of two arrays:
        # one for the gradient w.r.t P_hidden and
        # one for the gradient w.r.t P_output
        cost_grad = cost_function_grad(P, x)

        P[0] = P[0] - lmb * cost_grad[0]
        P[1] = P[1] - lmb * cost_grad[1]

    print('Final cost: %g'%cost_function(P, x))

    return P

def g_analytic(x, gamma = 2, g0 = 10):
    return g0*np.exp(-gamma*x)

# Solve the given problem
if __name__ == '__main__':
    # Set the seed such that the weights and biases are initialized
    # with the same values for every run.
    npr.seed(15)

    ## Decide the values of arguments to the function to solve
    N = 10
    x = np.linspace(0, 1, N)

    ## Set up the initial parameters
    num_hidden_neurons = 10
    num_iter = 10000
    lmb = 0.001

    # Use the network
    P = solve_ode_neural_network(x, num_hidden_neurons, num_iter, lmb)

    # Print the deviation between the trial solution and the analytical solution
    res = g_trial(x,P)
    res_analytical = g_analytic(x)

    print('Max absolute difference: %g'%np.max(np.abs(res - res_analytical)))

    # Plot the results
    plt.figure(figsize=(10,10))

    plt.title('Performance of neural network solving an ODE compared to the analytical solution')
    plt.plot(x, res_analytical)
    plt.plot(x, res[0,:])
    plt.legend(['analytical','nn'])
    plt.xlabel('x')
    plt.ylabel('g(x)')
    plt.show()
Initial cost: 367.01
Final cost: 0.0666807
Max absolute difference: 0.0437499

15.6. Using forward Euler to solve the ODE#
A straightforward way of solving an ODE numerically is to use Euler's method.
Euler's method uses a first-order Taylor expansion to approximate the value of a function a small step ahead:

$$g(t + \Delta t) \approx g(t) + \Delta t \, g'(t).$$

In our case, the ODE solved by the example program referred to below is the population growth equation $g'(t) = \alpha g(t)\left(A - g(t)\right)$, along with the condition that $g(0) = g_0$, so Euler's method gives

$$g(t + \Delta t) \approx g(t) + \Delta t \, \alpha g(t)\left(A - g(t)\right).$$

Let $t_i = i \, \Delta t$ with $\Delta t = T/(N_t - 1)$ for $i = 0, \ldots, N_t - 1$, and write $g_i = g(t_i)$. Euler's scheme then reads

$$g_i = g_{i-1} + \Delta t \, \alpha g_{i-1}\left(A - g_{i-1}\right) \tag{12}$$

for $i = 1, \ldots, N_t - 1$. Now, if $g_0$ is given as the initial value, the scheme determines $g_1, g_2, \ldots, g_{N_t - 1}$ in turn.
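As a minimal sketch of the scheme, here is forward Euler applied to the exponential decay problem from the first example of this chapter ($\gamma = 2$, $g_0 = 10$), compared against its analytical solution:

import numpy as np

gamma, g0 = 2, 10
Nx = 10
x = np.linspace(0, 1, Nx)
dx = x[1] - x[0]

g_euler = np.zeros(Nx)
g_euler[0] = g0
for i in range(1, Nx):
    # g_i = g_{i-1} + dx * g'(x_{i-1}), with g'(x) = -gamma*g(x)
    g_euler[i] = g_euler[i-1] + dx*(-gamma*g_euler[i-1])

print(np.max(np.abs(g_euler - g0*np.exp(-gamma*x))))   # error vs the analytical solution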
Equation (12) could be implemented in the following way, extending the example program that trains the network with Autograd:
# Assume that all function definitions from the example program using Autograd
# are located here.

if __name__ == '__main__':
    npr.seed(4155)

    ## Decide the values of arguments to the function to solve
    Nt = 10
    T = 1
    t = np.linspace(0,T, Nt)

    ## Set up the initial parameters
    num_hidden_neurons = [100,50,25]
    num_iter = 1000
    lmb = 1e-3

    P = solve_ode_deep_neural_network(t, num_hidden_neurons, num_iter, lmb)

    g_dnn_ag = g_trial_deep(t,P)
    g_analytical = g_analytic(t)

    # Find the maximum absolute difference between the solutions:
    diff_ag = np.max(np.abs(g_dnn_ag - g_analytical))
    print("The max absolute difference between the solutions is: %g"%diff_ag)

    plt.figure(figsize=(10,10))

    plt.title('Performance of neural network solving an ODE compared to the analytical solution')
    plt.plot(t, g_analytical)
    plt.plot(t, g_dnn_ag[0,:])
    plt.legend(['analytical','nn'])
    plt.xlabel('t')
    plt.ylabel('g(t)')

    ## Find an approximation to the function using forward Euler
    alpha, A, g0 = get_parameters()
    dt = T/(Nt - 1)

    # Perform forward Euler to solve the ODE
    g_euler = np.zeros(Nt)
    g_euler[0] = g0

    for i in range(1,Nt):
        g_euler[i] = g_euler[i-1] + dt*(alpha*g_euler[i-1]*(A - g_euler[i-1]))

    # Print the errors made by each method
    diff1 = np.max(np.abs(g_euler - g_analytical))
    diff2 = np.max(np.abs(g_dnn_ag[0,:] - g_analytical))

    print('Max absolute difference between Euler method and analytical: %g'%diff1)
    print('Max absolute difference between deep neural network and analytical: %g'%diff2)

    # Plot results
    plt.figure(figsize=(10,10))

    plt.plot(t,g_euler)
    plt.plot(t,g_analytical)
    plt.plot(t,g_dnn_ag[0,:])

    plt.legend(['euler','analytical','dnn'])
    plt.xlabel('Time t')
    plt.ylabel('g(t)')

    plt.show()
15.7. Solving the one dimensional Poisson equation#
The Poisson equation for $g(x)$ in one dimension is

$$-g''(x) = f(x), \qquad x \in (0, 1),$$

where $f(x)$ is a given function. The conditions that $g(x)$ is chosen to fulfill are

$$g(0) = g(1) = 0.$$

This equation can be solved numerically using programs where e.g. Autograd and TensorFlow are used. The results from the networks can then be compared to the analytical solution. In addition, it could be interesting to see how a typical numerical scheme for second order ODEs compares to the neural networks.

In this example, we consider the case when $f(x) = (3x + x^2)e^x$.
For this case, a possible trial solution satisfying the conditions is

$$g_t(x) = x(1 - x) N(x, P),$$

since then $g_t(0) = g_t(1) = 0$ regardless of the network output.
The analytical solution for this problem is

$$g(x) = x(1 - x)e^x.$$
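It is straightforward to check this: with $g(x) = (x - x^2)e^x$,

$$g'(x) = (1 - x - x^2)e^x, \qquad g''(x) = (-3x - x^2)e^x,$$

so that $-g''(x) = (3x + x^2)e^x = f(x)$, and $g(0) = g(1) = 0$ as required.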
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def deep_neural_network(deep_params, x):
    # N_hidden is the number of hidden layers
    N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assumes input x being a one-dimensional array
    num_values = np.size(x)
    x = x.reshape(-1, num_values)

    # Assume that the input layer does nothing to the input x
    x_input = x

    # Due to multiple hidden layers, define a variable referencing the
    # output of the previous layer:
    x_prev = x_input

    ## Hidden layers:

    for l in range(N_hidden):
        # From the list of parameters P, find the correct weights and bias for this layer
        w_hidden = deep_params[l]

        # Add a row of ones to include bias
        x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)

        z_hidden = np.matmul(w_hidden, x_prev)
        x_hidden = sigmoid(z_hidden)

        # Update x_prev such that the next layer can use the output from this layer
        x_prev = x_hidden

    ## Output layer:

    # Get the weights and bias for this layer
    w_output = deep_params[-1]

    # Include bias:
    x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)

    z_output = np.matmul(w_output, x_prev)
    x_output = z_output

    return x_output

def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
    # num_neurons is now a list of the number of neurons within each hidden layer

    # Find the number of hidden layers:
    N_hidden = np.size(num_neurons)

    ## Set up initial weights and biases

    # Initialize the list of parameters:
    P = [None]*(N_hidden + 1) # + 1 to include the output layer

    P[0] = npr.randn(num_neurons[0], 2 )
    for l in range(1,N_hidden):
        P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias

    # For the output layer
    P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included

    print('Initial cost: %g'%cost_function_deep(P, x))

    ## Start finding the optimal weights using gradient descent

    # Find the Python function that represents the gradient of the cost function
    # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
    cost_function_deep_grad = grad(cost_function_deep,0)

    # Let the update be done num_iter times
    for i in range(num_iter):
        # Evaluate the gradient at the current weights and biases in P.
        # The cost_grad consists now of N_hidden + 1 arrays: the gradient w.r.t the weights and biases
        # in the hidden layers and the output layer, evaluated at x.
        cost_deep_grad = cost_function_deep_grad(P, x)

        for l in range(N_hidden+1):
            P[l] = P[l] - lmb * cost_deep_grad[l]

    print('Final cost: %g'%cost_function_deep(P, x))

    return P

## Set up the cost function specified for this Poisson equation:

# The right side of the ODE
def f(x):
    return (3*x + x**2)*np.exp(x)

def cost_function_deep(P, x):

    # Evaluate the trial function with the current parameters P
    g_t = g_trial_deep(x,P)

    # Find the second derivative w.r.t x of the trial function
    d2_g_t = elementwise_grad(elementwise_grad(g_trial_deep,0))(x,P)

    right_side = f(x)

    err_sqr = (-d2_g_t - right_side)**2
    cost_sum = np.sum(err_sqr)

    return cost_sum/np.size(err_sqr)

# The trial solution:
def g_trial_deep(x,P):
    return x*(1-x)*deep_neural_network(P,x)

# The analytic solution:
def g_analytic(x):
    return x*(1-x)*np.exp(x)

if __name__ == '__main__':
    npr.seed(4155)

    ## Decide the values of arguments to the function to solve
    Nx = 10
    x = np.linspace(0,1, Nx)

    ## Set up the initial parameters
    num_hidden_neurons = [200,100]
    num_iter = 1000
    lmb = 1e-3

    P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)

    g_dnn_ag = g_trial_deep(x,P)
    g_analytical = g_analytic(x)

    # Find the maximum absolute difference between the solutions:
    max_diff = np.max(np.abs(g_dnn_ag - g_analytical))
    print("The max absolute difference between the solutions is: %g"%max_diff)

    plt.figure(figsize=(10,10))

    plt.title('Performance of neural network solving an ODE compared to the analytical solution')
    plt.plot(x, g_analytical)
    plt.plot(x, g_dnn_ag[0,:])
    plt.legend(['analytical','nn'])
    plt.xlabel('x')
    plt.ylabel('g(x)')
    plt.show()
15.7.1. Comparing with a numerical scheme#
The Poisson equation can also be solved by using Taylor series to approximate the second derivative.
Using Taylor series, the second derivative can be expressed as

$$g''(x) = \frac{g(x + \Delta x) - 2g(x) + g(x - \Delta x)}{\Delta x^2} + E_{\Delta x}(x),$$

where $\Delta x$ is a small step size and $E_{\Delta x}(x)$ is the error term, of order $\Delta x^2$.
Ignoring the error term gives an approximation to the second derivative:

$$g''(x) \approx \frac{g(x + \Delta x) - 2g(x) + g(x - \Delta x)}{\Delta x^2}.$$

If $x_i = i \, \Delta x$ with $\Delta x = 1/(N_x - 1)$ for $i = 0, \ldots, N_x - 1$, and we write $g_i = g(x_i)$, the approximation at the interior points reads

$$g''(x_i) \approx \frac{g_{i+1} - 2g_i + g_{i-1}}{\Delta x^2}.$$

Since we know from our problem that $-g''(x) = f(x)$, this gives

$$-\left(g_{i+1} - 2g_i + g_{i-1}\right) = \Delta x^2 f(x_i)$$

for $i = 1, \ldots, N_x - 2$, along with the conditions $g_0 = g_{N_x - 1} = 0$.
These equations can be rewritten as a matrix equation:

$$\begin{pmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -1 & 2 & -1 \\ 0 & \cdots & 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_{N_x - 3} \\ g_{N_x - 2} \end{pmatrix} = \Delta x^2 \begin{pmatrix} f(x_1) \\ f(x_2) \\ \vdots \\ f(x_{N_x - 3}) \\ f(x_{N_x - 2}) \end{pmatrix},$$

which makes it possible to solve for the vector $\vec{g} = (g_1, \ldots, g_{N_x - 2})$.
We can then compare the result from this numerical scheme with the output from our network using Autograd:
import autograd.numpy as np
from autograd import grad, elementwise_grad
import autograd.numpy.random as npr
from matplotlib import pyplot as plt

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def deep_neural_network(deep_params, x):
    # N_hidden is the number of hidden layers
    N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assumes input x being a one-dimensional array
    num_values = np.size(x)
    x = x.reshape(-1, num_values)

    # Assume that the input layer does nothing to the input x
    x_input = x

    # Due to multiple hidden layers, define a variable referencing the
    # output of the previous layer:
    x_prev = x_input

    ## Hidden layers:

    for l in range(N_hidden):
        # From the list of parameters P, find the correct weights and bias for this layer
        w_hidden = deep_params[l]

        # Add a row of ones to include bias
        x_prev = np.concatenate((np.ones((1,num_values)), x_prev ), axis = 0)

        z_hidden = np.matmul(w_hidden, x_prev)
        x_hidden = sigmoid(z_hidden)

        # Update x_prev such that the next layer can use the output from this layer
        x_prev = x_hidden

    ## Output layer:

    # Get the weights and bias for this layer
    w_output = deep_params[-1]

    # Include bias:
    x_prev = np.concatenate((np.ones((1,num_values)), x_prev), axis = 0)

    z_output = np.matmul(w_output, x_prev)
    x_output = z_output

    return x_output

def solve_ode_deep_neural_network(x, num_neurons, num_iter, lmb):
    # num_neurons is now a list of the number of neurons within each hidden layer

    # Find the number of hidden layers:
    N_hidden = np.size(num_neurons)

    ## Set up initial weights and biases

    # Initialize the list of parameters:
    P = [None]*(N_hidden + 1) # + 1 to include the output layer

    P[0] = npr.randn(num_neurons[0], 2 )
    for l in range(1,N_hidden):
        P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias

    # For the output layer
    P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included

    print('Initial cost: %g'%cost_function_deep(P, x))

    ## Start finding the optimal weights using gradient descent

    # Find the Python function that represents the gradient of the cost function
    # w.r.t the 0-th input argument -- that is the weights and biases in the hidden and output layer
    cost_function_deep_grad = grad(cost_function_deep,0)

    # Let the update be done num_iter times
    for i in range(num_iter):
        # Evaluate the gradient at the current weights and biases in P.
        # The cost_grad consists now of N_hidden + 1 arrays: the gradient w.r.t the weights and biases
        # in the hidden layers and the output layer, evaluated at x.
        cost_deep_grad = cost_function_deep_grad(P, x)

        for l in range(N_hidden+1):
            P[l] = P[l] - lmb * cost_deep_grad[l]

    print('Final cost: %g'%cost_function_deep(P, x))

    return P

## Set up the cost function specified for this Poisson equation:

# The right side of the ODE
def f(x):
    return (3*x + x**2)*np.exp(x)

def cost_function_deep(P, x):

    # Evaluate the trial function with the current parameters P
    g_t = g_trial_deep(x,P)

    # Find the second derivative w.r.t x of the trial function
    d2_g_t = elementwise_grad(elementwise_grad(g_trial_deep,0))(x,P)

    right_side = f(x)

    err_sqr = (-d2_g_t - right_side)**2
    cost_sum = np.sum(err_sqr)

    return cost_sum/np.size(err_sqr)

# The trial solution:
def g_trial_deep(x,P):
    return x*(1-x)*deep_neural_network(P,x)

# The analytic solution:
def g_analytic(x):
    return x*(1-x)*np.exp(x)

if __name__ == '__main__':
    npr.seed(4155)

    ## Decide the values of arguments to the function to solve
    Nx = 10
    x = np.linspace(0,1, Nx)

    ## Set up the initial parameters
    num_hidden_neurons = [200,100]
    num_iter = 1000
    lmb = 1e-3

    P = solve_ode_deep_neural_network(x, num_hidden_neurons, num_iter, lmb)

    g_dnn_ag = g_trial_deep(x,P)
    g_analytical = g_analytic(x)

    # Plot the solution from the network along with the analytical solution
    plt.figure(figsize=(10,10))

    plt.title('Performance of neural network solving an ODE compared to the analytical solution')
    plt.plot(x, g_analytical)
    plt.plot(x, g_dnn_ag[0,:])
    plt.legend(['analytical','nn'])
    plt.xlabel('x')
    plt.ylabel('g(x)')

    ## Perform the computation using the numerical scheme
    dx = 1/(Nx - 1)

    # Set up the matrix A
    A = np.zeros((Nx-2,Nx-2))

    A[0,0] = 2
    A[0,1] = -1

    for i in range(1,Nx-3):
        A[i,i-1] = -1
        A[i,i] = 2
        A[i,i+1] = -1

    A[Nx - 3, Nx - 4] = -1
    A[Nx - 3, Nx - 3] = 2

    # Set up the vector f
    f_vec = dx**2 * f(x[1:-1])

    # Solve the equation
    g_res = np.linalg.solve(A,f_vec)

    g_vec = np.zeros(Nx)
    g_vec[1:-1] = g_res

    # Print the differences between each method
    max_diff1 = np.max(np.abs(g_dnn_ag - g_analytical))
    max_diff2 = np.max(np.abs(g_vec - g_analytical))
    print("The max absolute difference between the analytical solution and DNN Autograd: %g"%max_diff1)
    print("The max absolute difference between the analytical solution and numerical scheme: %g"%max_diff2)

    # Plot the results
    plt.figure(figsize=(10,10))

    plt.plot(x,g_vec)
    plt.plot(x,g_analytical)
    plt.plot(x,g_dnn_ag[0,:])

    plt.legend(['numerical scheme','analytical','dnn'])
    plt.show()
15.8. Partial Differential Equations#
A partial differential equation (PDE) is an equation for a function of several variables, and the equation may involve any combination of partial derivatives with respect to those variables.
In general, a partial differential equation for a function $g(x_1, \ldots, x_N)$ can be written as

$$f\left(x_1, \ldots, x_N, \, g, \, \frac{\partial g}{\partial x_1}, \ldots, \frac{\partial g}{\partial x_N}, \, \frac{\partial^2 g}{\partial x_1^2}, \ldots, \frac{\partial^n g}{\partial x_N^n}\right) = 0,$$

where $f$ is a given expression involving the variables $x_1, \ldots, x_N$, the function $g$ and its partial derivatives. As for ODEs, the equation must be accompanied by suitable initial and boundary conditions.
15.8.1. Type of problem#
The problem our network must solve for is similar to the ODE case.
We must have a trial solution $g_t$ at hand.
For instance, the trial solution could be expressed as

$$g_t(x_1, \ldots, x_N) = h_1(x_1, \ldots, x_N) + h_2\left(x_1, \ldots, x_N, N(x_1, \ldots, x_N, P)\right),$$

where $h_1(x_1, \ldots, x_N)$ is a function that alone makes $g_t$ satisfy the given conditions, $N(x_1, \ldots, x_N, P)$ is a neural network with weights and biases $P$, and $h_2$ is an expression using the output of the network in some way.
The role of the function $h_2$ is to ensure that the network output does not affect $g_t$ at the points where the given conditions must be satisfied; the function $h_1$ should alone make $g_t$ fulfill them.
15.8.2. Network requirements#
The network then tries to minimize the cost function following the
same ideas as described for the ODE case, but now with more than one
variable to consider. The concept still remains the same: find a set
of parameters $P$ such that the expression $f$ in the general PDE above is as close to zero as possible.
As for the ODE case, the cost function is the mean squared error that the network must try to minimize. The cost function for the network to minimize is the squared residual

$$C\left(x_1, \ldots, x_N, P\right) = \left( f\left(x_1, \ldots, x_N, \, g_t, \, \frac{\partial g_t}{\partial x_1}, \ldots, \frac{\partial^n g_t}{\partial x_N^n}\right) \right)^2.$$

If we let $\boldsymbol{x} = (x_1, \ldots, x_N)$ denote a single point, the residual above is evaluated at one $\boldsymbol{x}$ only.
If we also have $M$ different sets of values for the variables, that is a grid of points $\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(M)}$, the cost function becomes the mean over all of them.
15.9. Example: The diffusion equation#
In one spatial dimension, the equation reads

$$\frac{\partial g(x,t)}{\partial t} = \frac{\partial^2 g(x,t)}{\partial x^2},$$

where a possible choice of conditions is

$$g(0, t) = 0, \qquad g(1, t) = 0, \qquad t \geq 0,$$

$$g(x, 0) = u(x), \qquad x \in [0, 1],$$

with $u(x)$ being some given function.
For this case, we want to find $g(x,t)$ such that

$$\frac{\partial g(x,t)}{\partial t} = \frac{\partial^2 g(x,t)}{\partial x^2}, \qquad t \geq 0, \; x \in [0, 1]$$

and

$$g(0, t) = 0, \qquad g(1, t) = 0, \qquad g(x, 0) = u(x),$$

with $u(x) = \sin(\pi x)$.
First, let us set up the deep neural network, which follows the same structure as discussed in the examples solving the ODEs. We will first look into how Autograd could be used in a network tailored to solve for bivariate functions.
The only change needed here is to extend our network such that
functions of multiple variables are correctly handled. In this case
we have two variables our function depends on, the time $t$ and the position $x$, so the network takes the point $(x, t)$ as its input.
import autograd.numpy as np

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def deep_neural_network(deep_params, x):
    # x is now a point and a 1D numpy array; make it a column vector
    num_coordinates = np.size(x,0)
    x = x.reshape(num_coordinates,-1)

    num_points = np.size(x,1)

    # N_hidden is the number of hidden layers
    N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assume that the input layer does nothing to the input x
    x_input = x
    x_prev = x_input

    ## Hidden layers:

    for l in range(N_hidden):
        # From the list of parameters P, find the correct weights and bias for this layer
        w_hidden = deep_params[l]

        # Add a row of ones to include bias
        x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)

        z_hidden = np.matmul(w_hidden, x_prev)
        x_hidden = sigmoid(z_hidden)

        # Update x_prev such that the next layer can use the output from this layer
        x_prev = x_hidden

    ## Output layer:

    # Get the weights and bias for this layer
    w_output = deep_params[-1]

    # Include bias:
    x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)

    z_output = np.matmul(w_output, x_prev)
    x_output = z_output

    return x_output[0][0]
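As a hypothetical usage example of the network defined above (the shapes and numbers are illustrative): the first parameter array must now have $2 + 1$ columns, two for the coordinates of the point and one for the bias:

import autograd.numpy.random as npr

npr.seed(0)
P = [npr.randn(10, 2 + 1),     # hidden layer: 10 neurons, inputs (x, t) plus bias
     npr.randn(1, 10 + 1)]     # output layer: 1 neuron, 10 hidden outputs plus bias

point = np.array([0.5, 0.2])   # a single point (x, t)
print(deep_neural_network(P, point))   # scalar network output N(x, t, P)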
The cost function must then iterate through the given arrays
containing values for $x$ and $t$, defining a point $(x, t)$ at which the deep neural network and the trial solution are evaluated.

A possible trial solution for this PDE is

$$g_t(x, t) = (1 - t) u(x) + x(1 - x) t \, N(x, t, P),$$

with $u(x) = \sin(\pi x)$. This form fulfills the conditions, since $g_t(0, t) = g_t(1, t) = 0$ and $g_t(x, 0) = u(x)$ regardless of the network output.

The Jacobian is used because the program must find the derivative of
the trial solution with respect to both $x$ and $t$.
This gives the necessity of computing the Jacobian of the trial solution, as we want
to evaluate the gradient with respect to $x$ and $t$ (for a scalar-valued function of a point, the Jacobian is simply the gradient).
In Autograd, the differentiation is by default done with respect to
the first input argument of your Python function. Since the point is
given as an array representing $(x, t)$, the Jacobian function returns the derivatives with respect to both $x$ and $t$.
To find the second derivatives with respect to $x$ and $t$, the Hessian of the trial solution is used; Autograd has a function for computing it as well.
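As a minimal illustration of these two Autograd functions (the function q here is a made-up bivariate function, not the trial solution):

import autograd.numpy as np
from autograd import jacobian, hessian

# A made-up function of a point (x, t):
def q(point):
    x, t = point
    return np.sin(np.pi*x)*t**2

point = np.array([0.5, 1.0])

q_jacobian = jacobian(q)(point)   # [dq/dx, dq/dt] evaluated at the point
q_hessian = hessian(q)(point)     # 2 x 2 matrix of second derivatives

print(q_jacobian)
print(q_hessian[0][0])            # d^2 q / dx^2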
from autograd import jacobian, hessian

# Set up the trial function:
def u(x):
    return np.sin(np.pi*x)

def g_trial(point,P):
    x,t = point
    return (1-t)*u(x) + x*(1-x)*t*deep_neural_network(P,point)

# The right side of the PDE:
def f(point):
    return 0.

# The cost function:
def cost_function(P, x, t):
    cost_sum = 0

    g_t_jacobian_func = jacobian(g_trial)
    g_t_hessian_func = hessian(g_trial)

    for x_ in x:
        for t_ in t:
            point = np.array([x_,t_])

            g_t = g_trial(point,P)
            g_t_jacobian = g_t_jacobian_func(point,P)
            g_t_hessian = g_t_hessian_func(point,P)

            g_t_dt = g_t_jacobian[1]
            g_t_d2x = g_t_hessian[0][0]

            func = f(point)

            err_sqr = ( (g_t_dt - g_t_d2x) - func)**2
            cost_sum += err_sqr

    return cost_sum
15.9.1. Setting up the network using Autograd; The full program#
Having set up the network, along with the trial solution and cost function, we can now see how the deep neural network performs by comparing the results to the analytical solution.
The analytical solution of our problem is

$$g(x, t) = e^{-\pi^2 t} \sin(\pi x).$$
A possible way to implement a neural network solving the PDE, is given below. Be aware, though, that it is fairly slow for the parameters used. A better result is possible, but requires more iterations, and thus longer time to complete.
Indeed, the program below is not optimal in its implementation, but rather serves as an example of how to implement and use a neural network to solve a PDE. Using TensorFlow results in a much better execution time. Try it!
import autograd.numpy as np
from autograd import jacobian,hessian,grad
import autograd.numpy.random as npr
from matplotlib import cm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d

## Set up the network

def sigmoid(z):
    return 1/(1 + np.exp(-z))

def deep_neural_network(deep_params, x):
    # x is now a point and a 1D numpy array; make it a column vector
    num_coordinates = np.size(x,0)
    x = x.reshape(num_coordinates,-1)

    num_points = np.size(x,1)

    # N_hidden is the number of hidden layers
    N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assume that the input layer does nothing to the input x
    x_input = x
    x_prev = x_input

    ## Hidden layers:

    for l in range(N_hidden):
        # From the list of parameters P, find the correct weights and bias for this layer
        w_hidden = deep_params[l]

        # Add a row of ones to include bias
        x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)

        z_hidden = np.matmul(w_hidden, x_prev)
        x_hidden = sigmoid(z_hidden)

        # Update x_prev such that the next layer can use the output from this layer
        x_prev = x_hidden

    ## Output layer:

    # Get the weights and bias for this layer
    w_output = deep_params[-1]

    # Include bias:
    x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)

    z_output = np.matmul(w_output, x_prev)
    x_output = z_output

    return x_output[0][0]

## Define the trial solution and cost function
def u(x):
    return np.sin(np.pi*x)

def g_trial(point,P):
    x,t = point
    return (1-t)*u(x) + x*(1-x)*t*deep_neural_network(P,point)

# The right side of the PDE:
def f(point):
    return 0.

# The cost function:
def cost_function(P, x, t):
    cost_sum = 0

    g_t_jacobian_func = jacobian(g_trial)
    g_t_hessian_func = hessian(g_trial)

    for x_ in x:
        for t_ in t:
            point = np.array([x_,t_])

            g_t = g_trial(point,P)
            g_t_jacobian = g_t_jacobian_func(point,P)
            g_t_hessian = g_t_hessian_func(point,P)

            g_t_dt = g_t_jacobian[1]
            g_t_d2x = g_t_hessian[0][0]

            func = f(point)

            err_sqr = ( (g_t_dt - g_t_d2x) - func)**2
            cost_sum += err_sqr

    return cost_sum /( np.size(x)*np.size(t) )

## For comparison, define the analytical solution
def g_analytic(point):
    x,t = point
    return np.exp(-np.pi**2*t)*np.sin(np.pi*x)

## Set up a function for training the network to solve for the equation
def solve_pde_deep_neural_network(x,t, num_neurons, num_iter, lmb):
    ## Set up initial weights and biases
    N_hidden = np.size(num_neurons)

    # Initialize the list of parameters:
    P = [None]*(N_hidden + 1) # + 1 to include the output layer

    P[0] = npr.randn(num_neurons[0], 2 + 1 ) # 2 since the input is a point (x, t), +1 to include bias
    for l in range(1,N_hidden):
        P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias

    # For the output layer
    P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included

    print('Initial cost: ',cost_function(P, x, t))

    cost_function_grad = grad(cost_function,0)

    # Let the update be done num_iter times
    for i in range(num_iter):
        cost_grad = cost_function_grad(P, x , t)

        for l in range(N_hidden+1):
            P[l] = P[l] - lmb * cost_grad[l]

    print('Final cost: ',cost_function(P, x, t))

    return P

if __name__ == '__main__':
    ### Use the neural network:
    npr.seed(15)

    ## Decide the values of arguments to the function to solve
    Nx = 10; Nt = 10
    x = np.linspace(0, 1, Nx)
    t = np.linspace(0,1,Nt)

    ## Set up the parameters for the network
    num_hidden_neurons = [100, 25]
    num_iter = 250
    lmb = 0.01

    P = solve_pde_deep_neural_network(x,t, num_hidden_neurons, num_iter, lmb)

    ## Store the results
    g_dnn_ag = np.zeros((Nx, Nt))
    G_analytical = np.zeros((Nx, Nt))
    for i,x_ in enumerate(x):
        for j, t_ in enumerate(t):
            point = np.array([x_, t_])

            g_dnn_ag[i,j] = g_trial(point,P)

            G_analytical[i,j] = g_analytic(point)

    # Find the max difference between the analytical and the computed solution
    diff_ag = np.abs(g_dnn_ag - G_analytical)
    print('Max absolute difference between the analytical solution and the network: %g'%np.max(diff_ag))

    ## Plot the solutions in two dimensions, that being in position and time

    T,X = np.meshgrid(t,x)

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Solution from the deep neural network w/ %d layers'%len(num_hidden_neurons))
    s = ax.plot_surface(T,X,g_dnn_ag,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Analytical solution')
    s = ax.plot_surface(T,X,G_analytical,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Difference')
    s = ax.plot_surface(T,X,diff_ag,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    ## Take some slices of the 3D plots just to see the solutions at particular times
    indx1 = 0
    indx2 = int(Nt/2)
    indx3 = Nt-1

    t1 = t[indx1]
    t2 = t[indx2]
    t3 = t[indx3]

    # Slice the results from the DNN
    res1 = g_dnn_ag[:,indx1]
    res2 = g_dnn_ag[:,indx2]
    res3 = g_dnn_ag[:,indx3]

    # Slice the analytical results
    res_analytical1 = G_analytical[:,indx1]
    res_analytical2 = G_analytical[:,indx2]
    res_analytical3 = G_analytical[:,indx3]

    # Plot the slices
    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t1)
    plt.plot(x, res1)
    plt.plot(x,res_analytical1)
    plt.legend(['dnn','analytical'])

    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t2)
    plt.plot(x, res2)
    plt.plot(x,res_analytical2)
    plt.legend(['dnn','analytical'])

    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t3)
    plt.plot(x, res3)
    plt.plot(x,res_analytical3)
    plt.legend(['dnn','analytical'])

    plt.show()
15.10. Solving the wave equation with Neural Networks#
The wave equation is

$$\frac{\partial^2 g(x,t)}{\partial t^2} = c^2 \frac{\partial^2 g(x,t)}{\partial x^2}$$

with $c$ being the specified wave speed.
Here, the chosen conditions are

$$g(0, t) = 0, \qquad g(1, t) = 0, \qquad g(x, 0) = u(x), \qquad \frac{\partial g(x,t)}{\partial t}\bigg|_{t=0} = v(x), \tag{20}$$

where $\frac{\partial g}{\partial t}\big|_{t=0}$ means the derivative of $g$ with respect to $t$ evaluated at $t = 0$.
The wave equation to solve for is thus

$$\frac{\partial^2 g(x,t)}{\partial t^2} = c^2 \frac{\partial^2 g(x,t)}{\partial x^2},$$

where $c$ is the wave speed, along with the conditions (20).
In this example, we let $c = 1$, $u(x) = \sin(\pi x)$ and $v(x) = -\pi \sin(\pi x)$.
Setting up the network is done in a similar manner as for the example of solving the diffusion equation. The only things we have to change are the trial solution, such that it satisfies the conditions from (20), and the cost function.
The trial solution becomes slightly different since we have other conditions than in the example of solving the diffusion equation. Here, a possible trial solution $g_t(x,t)$ is

$$g_t(x, t) = h_1(x, t) + x(1 - x) t^2 N(x, t, P),$$

where

$$h_1(x, t) = (1 - t^2) u(x) + t \, v(x).$$

Note that this trial solution satisfies the conditions only if $u(0) = v(0) = u(1) = v(1) = 0$, which is the case in this example: then $g_t(0, t) = g_t(1, t) = 0$, while $g_t(x, 0) = u(x)$ and $\partial g_t / \partial t (x, 0) = v(x)$, since the terms proportional to $t^2$ contribute neither to the value nor to the first time derivative at $t = 0$.

The analytical solution for our specific problem is

$$g(x, t) = \sin(\pi x)\cos(\pi t) - \sin(\pi x)\sin(\pi t).$$
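One can verify this directly: writing $g(x,t) = \sin(\pi x)\left(\cos(\pi t) - \sin(\pi t)\right)$, we have

$$\frac{\partial^2 g}{\partial t^2} = -\pi^2 g = \frac{\partial^2 g}{\partial x^2},$$

so the wave equation holds with $c = 1$, and the conditions (20) are satisfied since $g(0,t) = g(1,t) = 0$, $g(x, 0) = \sin(\pi x) = u(x)$ and $\partial g/\partial t(x, 0) = -\pi \sin(\pi x) = v(x)$.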
import autograd.numpy as np
from autograd import hessian,grad
import autograd.numpy.random as npr
from matplotlib import cm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d

## Set up the trial function:
def u(x):
    return np.sin(np.pi*x)

def v(x):
    return -np.pi*np.sin(np.pi*x)

def h1(point):
    x,t = point
    return (1 - t**2)*u(x) + t*v(x)

def g_trial(point,P):
    x,t = point
    return h1(point) + x*(1-x)*t**2*deep_neural_network(P,point)

## Define the cost function
def cost_function(P, x, t):
    cost_sum = 0

    g_t_hessian_func = hessian(g_trial)

    for x_ in x:
        for t_ in t:
            point = np.array([x_,t_])

            g_t_hessian = g_t_hessian_func(point,P)

            g_t_d2x = g_t_hessian[0][0]
            g_t_d2t = g_t_hessian[1][1]

            err_sqr = ( (g_t_d2t - g_t_d2x) )**2
            cost_sum += err_sqr

    return cost_sum / (np.size(t) * np.size(x))

## The neural network
def sigmoid(z):
    return 1/(1 + np.exp(-z))

def deep_neural_network(deep_params, x):
    # x is now a point and a 1D numpy array; make it a column vector
    num_coordinates = np.size(x,0)
    x = x.reshape(num_coordinates,-1)

    num_points = np.size(x,1)

    # N_hidden is the number of hidden layers
    N_hidden = len(deep_params) - 1 # -1 since params consist of parameters to all the hidden layers AND the output layer

    # Assume that the input layer does nothing to the input x
    x_input = x
    x_prev = x_input

    ## Hidden layers:

    for l in range(N_hidden):
        # From the list of parameters P, find the correct weights and bias for this layer
        w_hidden = deep_params[l]

        # Add a row of ones to include bias
        x_prev = np.concatenate((np.ones((1,num_points)), x_prev ), axis = 0)

        z_hidden = np.matmul(w_hidden, x_prev)
        x_hidden = sigmoid(z_hidden)

        # Update x_prev such that the next layer can use the output from this layer
        x_prev = x_hidden

    ## Output layer:

    # Get the weights and bias for this layer
    w_output = deep_params[-1]

    # Include bias:
    x_prev = np.concatenate((np.ones((1,num_points)), x_prev), axis = 0)

    z_output = np.matmul(w_output, x_prev)
    x_output = z_output

    return x_output[0][0]

## The analytical solution
def g_analytic(point):
    x,t = point
    return np.sin(np.pi*x)*np.cos(np.pi*t) - np.sin(np.pi*x)*np.sin(np.pi*t)

def solve_pde_deep_neural_network(x,t, num_neurons, num_iter, lmb):
    ## Set up initial weights and biases
    N_hidden = np.size(num_neurons)

    # Initialize the list of parameters:
    P = [None]*(N_hidden + 1) # + 1 to include the output layer

    P[0] = npr.randn(num_neurons[0], 2 + 1 ) # 2 since the input is a point (x, t), +1 to include bias
    for l in range(1,N_hidden):
        P[l] = npr.randn(num_neurons[l], num_neurons[l-1] + 1) # +1 to include bias

    # For the output layer
    P[-1] = npr.randn(1, num_neurons[-1] + 1 ) # +1 since bias is included

    print('Initial cost: ',cost_function(P, x, t))

    cost_function_grad = grad(cost_function,0)

    # Let the update be done num_iter times
    for i in range(num_iter):
        cost_grad = cost_function_grad(P, x , t)

        for l in range(N_hidden+1):
            P[l] = P[l] - lmb * cost_grad[l]

    print('Final cost: ',cost_function(P, x, t))

    return P

if __name__ == '__main__':
    ### Use the neural network:
    npr.seed(15)

    ## Decide the values of arguments to the function to solve
    Nx = 10; Nt = 10
    x = np.linspace(0, 1, Nx)
    t = np.linspace(0,1,Nt)

    ## Set up the parameters for the network
    num_hidden_neurons = [50,20]
    num_iter = 1000
    lmb = 0.01

    P = solve_pde_deep_neural_network(x,t, num_hidden_neurons, num_iter, lmb)

    ## Store the results
    res = np.zeros((Nx, Nt))
    res_analytical = np.zeros((Nx, Nt))
    for i,x_ in enumerate(x):
        for j, t_ in enumerate(t):
            point = np.array([x_, t_])

            res[i,j] = g_trial(point,P)

            res_analytical[i,j] = g_analytic(point)

    diff = np.abs(res - res_analytical)
    print("Max difference between analytical and solution from nn: %g"%np.max(diff))

    ## Plot the solutions in two dimensions, that being in position and time

    T,X = np.meshgrid(t,x)

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Solution from the deep neural network w/ %d layers'%len(num_hidden_neurons))
    s = ax.plot_surface(T,X,res,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Analytical solution')
    s = ax.plot_surface(T,X,res_analytical,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    fig = plt.figure(figsize=(10,10))
    ax = fig.add_subplot(projection='3d')
    ax.set_title('Difference')
    s = ax.plot_surface(T,X,diff,linewidth=0,antialiased=False,cmap=cm.viridis)
    ax.set_xlabel('Time $t$')
    ax.set_ylabel('Position $x$');

    ## Take some slices of the 3D plots just to see the solutions at particular times
    indx1 = 0
    indx2 = int(Nt/2)
    indx3 = Nt-1

    t1 = t[indx1]
    t2 = t[indx2]
    t3 = t[indx3]

    # Slice the results from the DNN
    res1 = res[:,indx1]
    res2 = res[:,indx2]
    res3 = res[:,indx3]

    # Slice the analytical results
    res_analytical1 = res_analytical[:,indx1]
    res_analytical2 = res_analytical[:,indx2]
    res_analytical3 = res_analytical[:,indx3]

    # Plot the slices
    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t1)
    plt.plot(x, res1)
    plt.plot(x,res_analytical1)
    plt.legend(['dnn','analytical'])

    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t2)
    plt.plot(x, res2)
    plt.plot(x,res_analytical2)
    plt.legend(['dnn','analytical'])

    plt.figure(figsize=(10,10))
    plt.title("Computed solutions at time = %g"%t3)
    plt.plot(x, res3)
    plt.plot(x,res_analytical3)
    plt.legend(['dnn','analytical'])

    plt.show()