Why the Jacobian?

The Jacobian is needed because the program must differentiate the trial solution with respect to both \( x \) and \( t \).

In other words, we want the gradient of the trial solution with respect to \( x \) and \( t \), and computing the Jacobian matrix gives exactly this: the Jacobian of a scalar-valued multivariate function is simply its gradient.

In Autograd, differentiation is by default done with respect to the first input argument of your Python function. Since point is an array holding the values of \( x \) and \( t \), the Jacobian of the trial solution is therefore taken with respect to both \( x \) and \( t \).
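
As a small illustration of this default (the toy function h and the numbers below are made up for this sketch, they are not part of the solver):

import autograd.numpy as np
from autograd import jacobian

# A toy scalar function of a point [x, t]; P plays the role of parameters
def h(point, P):
    x, t = point
    return P[0]*x**2 + P[1]*x*t

point = np.array([1.0, 2.0])
P = np.array([3.0, 4.0])

# jacobian(h) differentiates with respect to the first argument, the point [x, t]
print(jacobian(h)(point, P))     # [dh/dx, dh/dt] = [14., 4.]

# To differentiate with respect to P instead, the argument index is given explicitly
print(jacobian(h, 1)(point, P))  # [dh/dP0, dh/dP1] = [1., 2.]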

To find the second derivatives with respect to \( x \) and \( t \), the Jacobian can simply be computed a second time. The result is the Hessian matrix, which contains all possible second-order derivatives of \( g(x,t) \), including the mixed ones.
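
To see what this returns in practice, here is a minimal sketch with a made-up scalar function of \( (x,t) \) (not the trial solution itself); the indexing of the returned matrix is the one used in the cost function below:

import autograd.numpy as np
from autograd import hessian

def g_example(point):
    x, t = point
    return np.sin(np.pi*x)*np.exp(-t)

point = np.array([0.25, 0.5])

# hessian also differentiates with respect to the first argument, the point [x, t]
H = hessian(g_example)(point)

# H is the 2x2 matrix of second derivatives:
# H[0][0] = d2g/dx2,  H[0][1] = d2g/dxdt,
# H[1][0] = d2g/dtdx, H[1][1] = d2g/dt2
print(H)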

import autograd.numpy as np
from autograd import jacobian, hessian

# Set up the trial function:
def u(x):
    # Initial condition g(x, 0) = u(x) = sin(pi*x)
    return np.sin(np.pi*x)

def g_trial(point,P):
    # Trial solution (1 - t)*u(x) + x*(1 - x)*t*N(x, t; P); by construction it
    # satisfies the initial condition at t = 0 and vanishes at x = 0 and x = 1
    x,t = point
    return (1-t)*u(x) + x*(1-x)*t*deep_neural_network(P,point)

# The right side of the PDE:
def f(point):
    return 0.

# The cost function:
def cost_function(P, x, t):
    cost_sum = 0

    # Functions returning the derivatives of the trial solution
    # with respect to the point (x, t)
    g_t_jacobian_func = jacobian(g_trial)
    g_t_hessian_func = hessian(g_trial)

    for x_ in x:
        for t_ in t:
            point = np.array([x_,t_])

            g_t = g_trial(point,P)
            g_t_jacobian = g_t_jacobian_func(point,P)
            g_t_hessian = g_t_hessian_func(point,P)

            # First derivative w.r.t. t and second derivative w.r.t. x
            g_t_dt = g_t_jacobian[1]
            g_t_d2x = g_t_hessian[0][0]

            func = f(point)

            # Squared residual of the PDE at this grid point
            err_sqr = ( (g_t_dt - g_t_d2x) - func)**2
            cost_sum += err_sqr

    return cost_sum
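
With the cost function in place, Autograd can also provide its gradient with respect to the network parameters \( P \). The following is only a sketch: the grid sizes are arbitrary, and the commented-out call assumes that \( P \) has already been set up for deep_neural_network.

from autograd import grad

# Gradient of the cost with respect to the parameters P
# (argument index 0 of cost_function)
cost_function_grad = grad(cost_function, 0)

# Illustrative grid only; the actual grid and the parameters P are defined elsewhere
x = np.linspace(0, 1, 10)
t = np.linspace(0, 1, 10)

# gradient_P = cost_function_grad(P, x, t)   # would enter a gradient-descent update of P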