Exercises week 41
Overarching aims of the exercises this week¶
The aim of the exercises this week is to get started with implementing gradient methods of relevance for project 2. This exercise will also be continued next week with the addition of automatic differentiation. Everything you develop here will be used in project 2.
In order to get started, we will now replace the matrix inversion algorithm in our standard ordinary least squares (OLS) and Ridge regression codes (from project 1) with our own gradient descent (GD) and SGD codes. You can use the Franke function or the terrain data from project 1. However, we recommend using a simpler function like \(f(x)=a_0+a_1x+a_2x^2\) or higher-order one-dimensional polynomials. You can obviously test your final codes against, for example, the Franke function. Automatic differentiation will be discussed next week.
You should include in your analysis of the GD and SGD codes the following elements
A plain gradient descent with a fixed learning rate (you will need to tune it), using the analytical expression of the gradients (the expressions for OLS and Ridge are sketched right after this list)
Add momentum to the plain GD code and compare convergence with a fixed learning rate (you may need to tune the learning rate), again using the analytical expression of the gradients.
Repeat these steps for stochastic gradient descent with mini-batches and a given number of epochs. Use a tunable learning rate as discussed in the lectures from week 39. Discuss the results as functions of the various parameters (size of the mini-batches, number of epochs, etc.)
Implement the Adagrad method in order to tune the learning rate. Do this with and without momentum for plain gradient descent and SGD.
Add RMSprop and Adam to your library of methods for tuning the learning rate.
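As a reminder, and as a minimal sketch of the analytical expressions referred to above (assuming the conventions of the code examples below, with \(C_{\mathrm{OLS}}(\boldsymbol{\theta})=\frac{1}{n}||\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\theta}||^2\) and \(C_{\mathrm{Ridge}}(\boldsymbol{\theta})=C_{\mathrm{OLS}}(\boldsymbol{\theta})+\lambda||\boldsymbol{\theta}||^2\); adjust the factors if you scale the penalty term differently), the gradients read
\[
\nabla_{\boldsymbol{\theta}} C_{\mathrm{OLS}}(\boldsymbol{\theta})=\frac{2}{n}\boldsymbol{X}^T\left(\boldsymbol{X}\boldsymbol{\theta}-\boldsymbol{y}\right),
\qquad
\nabla_{\boldsymbol{\theta}} C_{\mathrm{Ridge}}(\boldsymbol{\theta})=\frac{2}{n}\boldsymbol{X}^T\left(\boldsymbol{X}\boldsymbol{\theta}-\boldsymbol{y}\right)+2\lambda\boldsymbol{\theta}.
\]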
The lecture notes from weeks 39 and 40 contain more information and code examples. Feel free to use these examples.
In summary, you should perform an analysis of the results for OLS and Ridge regression as functions of the chosen learning rates, the number of mini-batches and epochs, and the algorithm used for scaling the learning rate. You can also compare your own results with those obtained using, for example, Scikit-Learn’s various SGD options. For Ridge regression you now also need to study the results as functions of the hyper-parameter \(\lambda\) and the learning rate \(\eta\). Discuss your results.
You will need your SGD code for the setup of the Neural Network and Logistic Regression codes. You will find the Python Seaborn package useful for plotting the results as functions of the learning rate \(\eta\) and the hyper-parameter \(\lambda\) when you use Ridge regression.
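As a starting point for this kind of analysis, here is a small, self-contained sketch (not a full solution) that scans a grid of learning rates \(\eta\) and Ridge penalties \(\lambda\) with plain gradient descent on a simple second-order polynomial, and displays the resulting mean squared error as a Seaborn heatmap; replace the inner loop with your own GD/SGD variants and parameter choices.
# Sketch: grid scan of learning rate eta and Ridge penalty lambda with plain GD,
# visualized as a Seaborn heatmap of the mean squared error
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

np.random.seed(2021)
n = 100
x = np.random.rand(n,1)
y = 2.0+3*x+4*x*x+0.1*np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x, x*x]

etas = [0.001, 0.01, 0.1]
lmbdas = [1e-5, 1e-3, 1e-1, 1.0]
Niterations = 1000

mse = np.zeros((len(etas), len(lmbdas)))
for i, eta in enumerate(etas):
    for j, lmbda in enumerate(lmbdas):
        theta = np.zeros((X.shape[1], 1))
        for _ in range(Niterations):
            gradient = (2.0/n)*X.T @ (X @ theta - y) + 2*lmbda*theta
            theta -= eta*gradient
        mse[i, j] = np.mean((y - X @ theta)**2)

ax = sns.heatmap(mse, annot=True, fmt=".3f",
                 xticklabels=[f"{l:g}" for l in lmbdas],
                 yticklabels=[f"{e:g}" for e in etas])
ax.set_xlabel(r"$\lambda$")
ax.set_ylabel(r"$\eta$")
ax.set_title("MSE for Ridge regression with plain GD")
plt.show()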
We recommend reading chapter 8 on optimization from the textbook of Goodfellow, Bengio and Courville. This chapter contains many useful insights and discussions on the optimization part of machine learning.
Code examples from weeks 39 and 40¶
Code with a varying number of mini-batches, analytical gradient¶
In the code here we vary the number of mini-batches.
%matplotlib inline
# Importing various packages
from math import exp, sqrt
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
theta_linreg = np.linalg.inv(X.T @ X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
theta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues)
Niterations = 1000
#while (iter <= Ni... or test)
for iter in range(Niterations):
gradients = 2.0/n*X.T @ ((X @ theta)-y)
theta -= eta*gradients
print("theta from own gd")
print(theta)
xnew = np.array([[0],[2]])
Xnew = np.c_[np.ones((2,1)), xnew]
ypredict = Xnew.dot(theta)
ypredict2 = Xnew.dot(theta_linreg)
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
t0, t1 = 5, 50
def learning_schedule(t):
return t0/(t+t1)
theta = np.random.randn(2,1)
for epoch in range(n_epochs):
# Can you figure out a better way of setting up the contributions to each batch?
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (2.0/M)* xi.T @ ((xi @ theta)-yi)
eta = learning_schedule(epoch*m+i)
theta = theta - eta*gradients
print("theta from own sdg")
print(theta)
plt.plot(xnew, ypredict, "r-")
plt.plot(xnew, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Random numbers ')
plt.show()
Own inversion
[[3.7056279 ]
[3.11632514]]
Eigenvalues of Hessian Matrix:[0.27378241 4.9133261 ]
theta from own gd
[[3.7056279 ]
[3.11632514]]
theta from own sgd
[[3.62227539]
[3.09214577]]
In the above code, we have used sampling with replacement when setting up the mini-batches. The discussion here may be useful.
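A common alternative, sketched below (the data, mini-batch size and learning schedule are set up again so that the snippet is self-contained), is to reshuffle the data at the start of every epoch and then sweep once through disjoint mini-batches, that is, sampling without replacement:
# Sketch: SGD with mini-batches drawn WITHOUT replacement, by reshuffling every epoch
import numpy as np

np.random.seed(3155)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]

n_epochs = 50
M = 5                # size of each minibatch
m = n // M           # number of minibatches per epoch
t0, t1 = 5, 50
def learning_schedule(t):
    return t0/(t+t1)

theta = np.random.randn(2,1)
for epoch in range(n_epochs):
    # permute the data once per epoch, then visit every data point exactly once
    indices = np.random.permutation(n)
    for i in range(m):
        batch = indices[i*M:(i+1)*M]
        xi, yi = X[batch], y[batch]
        gradients = (2.0/M)*xi.T @ (xi @ theta - yi)
        eta = learning_schedule(epoch*m+i)
        theta -= eta*gradients

print("theta from own sgd without replacement")
print(theta)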
Momentum based GD¶
The stochastic gradient descent (SGD) algorithm is almost always used with a momentum or inertia term that serves as a memory of the direction we are moving in parameter space. This is typically implemented as follows
\[
\mathbf{v}_{t}=\gamma \mathbf{v}_{t-1}+\eta_{t}\nabla_{\boldsymbol{\theta}} C(\boldsymbol{\theta}_t),
\]
\[
\boldsymbol{\theta}_{t+1}= \boldsymbol{\theta}_t -\mathbf{v}_{t},
\]
where we have introduced a momentum parameter \(\gamma\), with \(0\le\gamma\le 1\), and for brevity we have dropped the explicit notation indicating that the gradient is taken over a different mini-batch at each step. We call this algorithm gradient descent with momentum (GDM). From these equations, it is clear that \(\mathbf{v}_t\) is a running average of recently encountered gradients and \((1-\gamma)^{-1}\) sets the characteristic time scale for the memory used in the averaging procedure. Consistent with this, when \(\gamma=0\) the algorithm reduces to ordinary SGD as discussed earlier. An equivalent way of writing the updates is
\[
\Delta \boldsymbol{\theta}_{t+1} = \gamma \Delta \boldsymbol{\theta}_t -\eta_{t}\nabla_{\boldsymbol{\theta}} C(\boldsymbol{\theta}_t),
\]
where we have defined \(\Delta \boldsymbol{\theta}_{t}= \boldsymbol{\theta}_t-\boldsymbol{\theta}_{t-1}\).
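As a minimal sketch (reusing the simple data-generating setup from the codes above, with illustrative values for \(\eta\) and \(\gamma\); the gradient is computed on the full data set here for simplicity, whereas in SGD it would be a mini-batch gradient), the GDM update can be written directly from these equations:
# Sketch of gradient descent with momentum (GDM) for the simple OLS example,
# following v_t = gamma*v_{t-1} + eta*gradient and theta_{t+1} = theta_t - v_t
import numpy as np

np.random.seed(3155)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]

theta = np.random.randn(2,1)
v = np.zeros_like(theta)
eta = 0.1
gamma = 0.9
Niterations = 300
for iter in range(Niterations):
    gradient = (2.0/n)*X.T @ (X @ theta - y)
    v = gamma*v + eta*gradient      # running average of recently encountered gradients
    theta -= v

print("theta from own gdm")
print(theta)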
Algorithms and codes for Adagrad, RMSprop and Adam¶
The algorithms we have implemented are well described in the text by Goodfellow, Bengio and Courville, chapter 8.
The codes which implement these algorithms are discussed after our presentation of automatic differentiation.
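For reference, and following the conventions used in the codes below (all operations on the mini-batch gradient \(\boldsymbol{g}_t\) are elementwise, \(\odot\) denotes the Hadamard product and \(\delta\) is a small constant that prevents division by zero), the update rules can be summarized as
\[
\mathrm{AdaGrad:}\quad \boldsymbol{r}_t=\boldsymbol{r}_{t-1}+\boldsymbol{g}_t\odot\boldsymbol{g}_t,\qquad
\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t-\frac{\eta}{\delta+\sqrt{\boldsymbol{r}_t}}\odot\boldsymbol{g}_t,
\]
\[
\mathrm{RMSprop:}\quad \boldsymbol{r}_t=\rho\,\boldsymbol{r}_{t-1}+(1-\rho)\,\boldsymbol{g}_t\odot\boldsymbol{g}_t,\qquad
\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t-\frac{\eta}{\delta+\sqrt{\boldsymbol{r}_t}}\odot\boldsymbol{g}_t,
\]
\[
\mathrm{ADAM:}\quad \boldsymbol{s}_t=\beta_1\boldsymbol{s}_{t-1}+(1-\beta_1)\,\boldsymbol{g}_t,\quad
\boldsymbol{r}_t=\beta_2\boldsymbol{r}_{t-1}+(1-\beta_2)\,\boldsymbol{g}_t\odot\boldsymbol{g}_t,\quad
\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t-\eta\,\frac{\boldsymbol{s}_t/(1-\beta_1^t)}{\sqrt{\boldsymbol{r}_t/(1-\beta_2^t)}+\delta}.
\]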
Practical tips¶
Randomize the data when making mini-batches. It is always important to randomly shuffle the data when forming mini-batches. Otherwise, the gradient descent method can fit spurious correlations resulting from the order in which data is presented.
Transform your inputs. Learning becomes difficult when our landscape has a mixture of steep and flat directions. One simple trick for minimizing these situations is to standardize the data by subtracting the mean and normalizing the variance of input variables. Whenever possible, also decorrelate the inputs. To understand why this is helpful, consider the case of linear regression. It is easy to show that for the squared error cost function, the Hessian of the cost function is just the correlation matrix between the inputs. Thus, by standardizing the inputs, we are ensuring that the landscape looks homogeneous in all directions in parameter space. Since most deep networks can be viewed as linear transformations followed by a non-linearity at each layer, we expect this intuition to hold beyond the linear case.
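As a small, self-contained illustration of this point (a sketch, not part of the exercises), we can compare the spread of the eigenvalues of the Hessian \(H=\frac{2}{n}\boldsymbol{X}^T\boldsymbol{X}\) before and after standardizing the input columns; a smaller spread means that a single learning rate works reasonably well in all directions of parameter space.
# Sketch: standardizing the inputs reduces the spread of the Hessian eigenvalues
import numpy as np

np.random.seed(3155)
n = 100
x = 2*np.random.rand(n,1)
X = np.c_[np.ones((n,1)), x, x*x]

def eigenvalue_spread(X):
    H = (2.0/n)*X.T @ X
    eig = np.linalg.eigvalsh(H)          # eigenvalues of the symmetric Hessian
    return eig.max()/eig.min()

# standardize the non-intercept columns: zero mean and unit variance
Xs = X.copy()
Xs[:,1:] = (X[:,1:] - X[:,1:].mean(axis=0))/X[:,1:].std(axis=0)

print("Eigenvalue spread of H, raw inputs:         ", eigenvalue_spread(X))
print("Eigenvalue spread of H, standardized inputs:", eigenvalue_spread(Xs))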
Monitor the out-of-sample performance. Always monitor the performance of your model on a validation set (a small portion of the training data that is held out of the training process to serve as a proxy for the test set). If the validation error starts increasing, the model is beginning to overfit; terminate the learning process. This early stopping significantly improves performance in many settings.
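A minimal sketch of this idea, reusing the simple one-dimensional data from the code examples in this note (the patience parameter is an illustrative choice): hold out a validation set, track the validation error during gradient descent, and stop when it no longer improves.
# Sketch of early stopping: terminate plain GD when the validation MSE stops improving
import numpy as np

np.random.seed(3155)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]

# simple split: 80% training, 20% validation
split = int(0.8*n)
X_train, y_train = X[:split], y[:split]
X_val, y_val = X[split:], y[split:]

theta = np.random.randn(2,1)
eta = 0.1
best_val = np.inf
patience = 20        # number of non-improving steps we tolerate
counter = 0
for iteration in range(10000):
    gradient = (2.0/len(y_train))*X_train.T @ (X_train @ theta - y_train)
    theta -= eta*gradient
    val_mse = np.mean((y_val - X_val @ theta)**2)
    if val_mse < best_val - 1e-8:
        best_val = val_mse
        counter = 0
    else:
        counter += 1
    if counter >= patience:
        print(f"Stopping early at iteration {iteration}, validation MSE {val_mse:.4f}")
        break
print(theta)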
Adaptive optimization methods don’t always have good generalization. Recent studies have shown that adaptive methods such as ADAM, RMSprop, and AdaGrad tend to have poor generalization compared to SGD or SGD with momentum, particularly in the high-dimensional limit (i.e. when the number of parameters exceeds the number of data points). Although it is not clear at this stage why these methods perform so well in training deep neural networks, simpler procedures like properly tuned SGD may work as well or better in these applications.
Géron’s text (see chapter 11) has several interesting discussions.
Using Automatic differentiation with OLS¶
We conclude the part on optimization by showing how we can write codes for linear regression and logistic regression using autograd. The first example shows results with ordinary least squares.
# Using Autograd to calculate gradients for OLS
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
def CostOLS(beta):
return (1.0/n)*np.sum((y-X @ beta)**2)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
theta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues)
Niterations = 1000
# define the gradient
training_gradient = grad(CostOLS)
for iter in range(Niterations):
gradients = training_gradient(theta)
theta -= eta*gradients
print("theta from own gd")
print(theta)
xnew = np.array([[0],[2]])
Xnew = np.c_[np.ones((2,1)), xnew]
ypredict = Xnew.dot(theta)
ypredict2 = Xnew.dot(theta_linreg)
plt.plot(xnew, ypredict, "r-")
plt.plot(xnew, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Random numbers ')
plt.show()
Own inversion
[[4.39586211]
[2.54142449]]
Eigenvalues of Hessian Matrix:[0.29253829 4.61759155]
theta from own gd
[[4.39586211]
[2.54142449]]
Same code but now with momentum gradient descent¶
# Using Autograd to calculate gradients for OLS
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
def CostOLS(beta):
return (1.0/n)*np.sum((y-X @ beta)**2)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x#+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
theta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues)
Niterations = 30
# define the gradient
training_gradient = grad(CostOLS)
for iter in range(Niterations):
gradients = training_gradient(theta)
theta -= eta*gradients
print(iter,gradients[0],gradients[1])
print("theta from own gd")
print(theta)
# Now improve with momentum gradient descent
change = 0.0
delta_momentum = 0.3
for iter in range(Niterations):
# calculate gradient
gradients = training_gradient(theta)
# calculate update
new_change = eta*gradients+delta_momentum*change
# take a step
theta -= new_change
# save the change
change = new_change
print(iter,gradients[0],gradients[1])
print("theta from own gd wth momentum")
print(theta)
Own inversion
[[4.]
[3.]]
Eigenvalues of Hessian Matrix:[0.28360073 4.6466172 ]
0 [-12.76941488] [-15.71728253]
1 [-0.06385864] [0.05142606]
2 [-0.0599611] [0.04828733]
3 [-0.05630145] [0.04534017]
4 [-0.05286516] [0.04257289]
5 [-0.0496386] [0.0399745]
6 [-0.04660896] [0.03753471]
7 [-0.04376424] [0.03524382]
8 [-0.04109314] [0.03309276]
9 [-0.03858507] [0.03107298]
10 [-0.03623008] [0.02917648]
11 [-0.03401882] [0.02739573]
12 [-0.03194252] [0.02572366]
13 [-0.02999295] [0.02415365]
14 [-0.02816236] [0.02267946]
15 [-0.02644351] [0.02129525]
16 [-0.02482956] [0.01999552]
17 [-0.02331412] [0.01877511]
18 [-0.02189117] [0.0176292]
19 [-0.02055507] [0.01655322]
20 [-0.01930051] [0.01554291]
21 [-0.01812253] [0.01459427]
22 [-0.01701644] [0.01370353]
23 [-0.01597786] [0.01286715]
24 [-0.01500267] [0.01208182]
25 [-0.014087] [0.01134442]
26 [-0.01322722] [0.01065202]
27 [-0.01241991] [0.01000189]
28 [-0.01166188] [0.00939144]
29 [-0.01095011] [0.00881824]
theta from own gd
[[3.96374557]
[3.02919609]]
0 [-0.01028178] [0.00828003]
1 [-0.00965425] [0.00777467]
2 [-0.00887675] [0.00714854]
3 [-0.00810172] [0.0065244]
4 [-0.00737473] [0.00593895]
5 [-0.00670653] [0.00540084]
6 [-0.00609674] [0.00490977]
7 [-0.0055417] [0.00446279]
8 [-0.00503695] [0.00405631]
9 [-0.00457811] [0.0036868]
10 [-0.00416103] [0.00335092]
11 [-0.00378195] [0.00304564]
12 [-0.00343739] [0.00276817]
13 [-0.00312423] [0.00251598]
14 [-0.0028396] [0.00228676]
15 [-0.0025809] [0.00207843]
16 [-0.00234577] [0.00188907]
17 [-0.00213205] [0.00171697]
18 [-0.00193781] [0.00156054]
19 [-0.00176127] [0.00141837]
20 [-0.00160081] [0.00128915]
21 [-0.00145497] [0.0011717]
22 [-0.00132241] [0.00106495]
23 [-0.00120193] [0.00096793]
24 [-0.00109243] [0.00087975]
25 [-0.00099291] [0.0007996]
26 [-0.00090245] [0.00072675]
27 [-0.00082023] [0.00066054]
28 [-0.0007455] [0.00060036]
29 [-0.00067758] [0.00054567]
theta from own gd with momentum
[[3.99782845]
[3.00174877]]
But none of these can compete with Newton’s method¶
# Using Newton's method
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
def CostOLS(beta):
return (1.0/n)*np.sum((y-X @ beta)**2)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
beta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(beta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
# Note that here the Hessian does not depend on the parameters beta
invH = np.linalg.pinv(H)
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
beta = np.random.randn(2,1)
Niterations = 5
# define the gradient
training_gradient = grad(CostOLS)
for iter in range(Niterations):
gradients = training_gradient(beta)
beta -= invH @ gradients
print(iter,gradients[0],gradients[1])
print("beta from own Newton code")
print(beta)
Own inversion
[[3.60911869]
[3.33987045]]
Eigenvalues of Hessian Matrix:[0.31385073 4.08833652]
0 [-17.89749468] [-19.42504764]
1 [-6.56419363e-15] [-6.5758826e-15]
2 [-4.99600361e-16] [-5.64393388e-16]
3 [-4.99600361e-16] [-5.64393388e-16]
4 [-4.99600361e-16] [-5.64393388e-16]
beta from own Newton code
[[3.60911869]
[3.33987045]]
Including Stochastic Gradient Descent with Autograd¶
In this code we include the stochastic gradient descent approach discussed above. Note here that we specify which argument we are taking the derivative with respect to when using autograd.
# Using Autograd to calculate gradients using SGD
# OLS example
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
# Note change from previous example
def CostOLS(y,X,theta):
return np.sum((y-X @ theta)**2)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
theta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues)
Niterations = 1000
# Note that we request the derivative wrt third argument (theta, 2 here)
training_gradient = grad(CostOLS,2)
for iter in range(Niterations):
gradients = (1.0/n)*training_gradient(y, X, theta)
theta -= eta*gradients
print("theta from own gd")
print(theta)
xnew = np.array([[0],[2]])
Xnew = np.c_[np.ones((2,1)), xnew]
ypredict = Xnew.dot(theta)
ypredict2 = Xnew.dot(theta_linreg)
plt.plot(xnew, ypredict, "r-")
plt.plot(xnew, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Random numbers ')
plt.show()
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
t0, t1 = 5, 50
def learning_schedule(t):
return t0/(t+t1)
theta = np.random.randn(2,1)
for epoch in range(n_epochs):
# Can you figure out a better way of setting up the contributions to each batch?
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (1.0/M)*training_gradient(yi, xi, theta)
eta = learning_schedule(epoch*m+i)
theta = theta - eta*gradients
print("theta from own sdg")
print(theta)
Own inversion
[[4.41451936]
[2.55289042]]
Eigenvalues of Hessian Matrix:[0.33812981 4.19713213]
theta from own gd
[[4.41451936]
[2.55289042]]
theta from own sgd
[[4.38642255]
[2.56985133]]
Same code but now with momentum gradient descent¶
# Using Autograd to calculate gradients using SGD
# OLS example
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
# Note change from previous example
def CostOLS(y,X,theta):
return np.sum((y-X @ theta)**2)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Hessian matrix
H = (2.0/n)* XT_X
EigValues, EigVectors = np.linalg.eig(H)
print(f"Eigenvalues of Hessian Matrix:{EigValues}")
theta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues)
Niterations = 100
# Note that we request the derivative wrt third argument (theta, 2 here)
training_gradient = grad(CostOLS,2)
for iter in range(Niterations):
gradients = (1.0/n)*training_gradient(y, X, theta)
theta -= eta*gradients
print("theta from own gd")
print(theta)
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
t0, t1 = 5, 50
def learning_schedule(t):
return t0/(t+t1)
theta = np.random.randn(2,1)
change = 0.0
delta_momentum = 0.3
for epoch in range(n_epochs):
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (1.0/M)*training_gradient(yi, xi, theta)
eta = learning_schedule(epoch*m+i)
# calculate update
new_change = eta*gradients+delta_momentum*change
# take a step
theta -= new_change
# save the change
change = new_change
print("theta from own sdg with momentum")
print(theta)
Own inversion
[[3.89425071]
[3.03737777]]
Eigenvalues of Hessian Matrix:[0.32691458 3.94394646]
theta from own gd
[[3.89408559]
[3.03753095]]
theta from own sgd with momentum
[[3.92801972]
[3.02259232]]
AdaGrad algorithm, taken from Goodfellow et al¶
Similar problem (now a second-order polynomial), but with AdaGrad¶
# Using Autograd to calculate gradients using AdaGrad and Stochastic Gradient descent
# OLS example
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
# Note change from previous example
def CostOLS(y,X,theta):
return np.sum((y-X @ theta)**2)
n = 1000
x = np.random.rand(n,1)
y = 2.0+3*x +4*x*x
X = np.c_[np.ones((n,1)), x, x*x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Note that we request the derivative wrt third argument (theta, 2 here)
training_gradient = grad(CostOLS,2)
# Define parameters for Stochastic Gradient Descent
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
# Guess for unknown parameters theta
theta = np.random.randn(3,1)
# Value for learning rate
eta = 0.01
# Including AdaGrad parameter to avoid possible division by zero
delta = 1e-8
for epoch in range(n_epochs):
Giter = 0.0
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (1.0/M)*training_gradient(yi, xi, theta)
Giter += gradients*gradients
update = gradients*eta/(delta+np.sqrt(Giter))
theta -= update
print("theta from own AdaGrad")
print(theta)
Own inversion
[[2.]
[3.]
[4.]]
theta from own AdaGrad
[[1.99987339]
[3.00070766]
[3.99931264]]
Running this code we note an almost perfect agreement with the results from matrix inversion.
RMSProp algorithm, taken from Goodfellow et al¶
RMSprop for adaptive learning rate with Stochastic Gradient Descent¶
# Using Autograd to calculate gradients using RMSprop and Stochastic Gradient descent
# OLS example
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
# Note change from previous example
def CostOLS(y,X,theta):
return np.sum((y-X @ theta)**2)
n = 1000
x = np.random.rand(n,1)
y = 2.0+3*x +4*x*x# +np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x, x*x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Note that we request the derivative wrt third argument (theta, 2 here)
training_gradient = grad(CostOLS,2)
# Define parameters for Stochastic Gradient Descent
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
# Guess for unknown parameters theta
theta = np.random.randn(3,1)
# Value for learning rate
eta = 0.01
# Value for parameter rho
rho = 0.99
# Including AdaGrad parameter to avoid possible division by zero
delta = 1e-8
for epoch in range(n_epochs):
Giter = 0.0
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (1.0/M)*training_gradient(yi, xi, theta)
# Accumulated gradient
# Scaling with rho the new and the previous results
Giter = (rho*Giter+(1-rho)*gradients*gradients)
# Elementwise division by the square root of the running average (plus delta)
update = gradients*eta/(delta+np.sqrt(Giter))
# Hadamard product
theta -= update
print("theta from own RMSprop")
print(theta)
Own inversion
[[2.]
[3.]
[4.]]
theta from own RMSprop
[[1.99430797]
[3.0204309 ]
[3.97790579]]
ADAM algorithm, taken from Goodfellow et al¶
And finally ADAM¶
# Using Autograd to calculate gradients using ADAM and Stochastic Gradient descent
# OLS example
from random import random, seed
import numpy as np
import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad
# Note change from previous example
def CostOLS(y,X,theta):
return np.sum((y-X @ theta)**2)
n = 1000
x = np.random.rand(n,1)
y = 2.0+3*x +4*x*x# +np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x, x*x]
XT_X = X.T @ X
theta_linreg = np.linalg.pinv(XT_X) @ (X.T @ y)
print("Own inversion")
print(theta_linreg)
# Note that we request the derivative wrt third argument (theta, 2 here)
training_gradient = grad(CostOLS,2)
# Define parameters for Stochastic Gradient Descent
n_epochs = 50
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
# Guess for unknown parameters theta
theta = np.random.randn(3,1)
# Value for learning rate
eta = 0.01
# Value for parameters beta1 and beta2, see https://arxiv.org/abs/1412.6980
beta1 = 0.9
beta2 = 0.999
# Including AdaGrad parameter to avoid possible division by zero
delta = 1e-7
iter = 0
for epoch in range(n_epochs):
first_moment = 0.0
second_moment = 0.0
iter += 1
for i in range(m):
random_index = M*np.random.randint(m)
xi = X[random_index:random_index+M]
yi = y[random_index:random_index+M]
gradients = (1.0/M)*training_gradient(yi, xi, theta)
# Computing moments first
first_moment = beta1*first_moment + (1-beta1)*gradients
second_moment = beta2*second_moment+(1-beta2)*gradients*gradients
first_term = first_moment/(1.0-beta1**iter)
second_term = second_moment/(1.0-beta2**iter)
# Update with the bias-corrected first and second moments
update = eta*first_term/(np.sqrt(second_term)+delta)
theta -= update
print("theta from own ADAM")
print(theta)
Own inversion
[[2.]
[3.]
[4.]]
theta from own ADAM
[[1.99988575]
[3.00066222]
[3.99946201]]
Introducing JAX¶
Presently, instead of using autograd, we recommend using JAX.
JAX is Autograd and XLA (Accelerated Linear Algebra) brought together for high-performance numerical computing and machine learning research. It provides composable transformations of Python+NumPy programs: differentiate, vectorize, parallelize, just-in-time compile to GPU/TPU, and more.
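To connect with the OLS examples above, here is a small sketch (assuming JAX is installed) of how grad and jit compose on the same cost function; enabling 64-bit floats via the jax_enable_x64 option also avoids the float32 truncation warning shown in the output further below.
# Sketch: JAX gradient of the OLS cost, jit-compiled, on the same simple data as above
import jax
import jax.numpy as jnp
import numpy as np

jax.config.update("jax_enable_x64", True)   # use double precision

np.random.seed(3155)
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = jnp.asarray(np.c_[np.ones((n,1)), x])
y = jnp.asarray(y)

def CostOLS(theta):
    return jnp.sum((y - X @ theta)**2)/n

grad_fn = jax.jit(jax.grad(CostOLS))        # compose differentiation and JIT compilation

theta = jnp.zeros((2,1))
eta = 0.1
Niterations = 1000
for iter in range(Niterations):
    theta = theta - eta*grad_fn(theta)      # JAX arrays are immutable, so we rebind theta

print("theta from JAX gradient descent")
print(theta)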
Getting started with JAX, note the way we import numpy¶
import jax
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt
from jax import grad as jax_grad
A warm-up example¶
def function(x):
return x**2
def analytical_gradient(x):
return 2*x
def gradient_descent(starting_point, learning_rate, num_iterations, solver="analytical"):
x = starting_point
trajectory_x = [x]
trajectory_y = [function(x)]
if solver == "analytical":
grad = analytical_gradient
elif solver == "jax":
grad = jax_grad(function)
x = jnp.float64(x)
learning_rate = jnp.float64(learning_rate)
for _ in range(num_iterations):
x = x - learning_rate * grad(x)
trajectory_x.append(x)
trajectory_y.append(function(x))
return trajectory_x, trajectory_y
x = np.linspace(-5, 5, 100)
plt.plot(x, function(x), label="f(x)")
descent_x, descent_y = gradient_descent(5, 0.1, 10, solver="analytical")
jax_descend_x, jax_descend_y = gradient_descent(5, 0.1, 10, solver="jax")
plt.plot(descent_x, descent_y, label="Gradient descent", marker="o")
plt.plot(jax_descend_x, jax_descend_y, label="JAX", marker="x")
/Users/mhjensen/miniforge3/envs/myenv/lib/python3.9/site-packages/jax/_src/numpy/lax_numpy.py:173: UserWarning: Explicitly requested dtype float64 requested in asarray is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
return asarray(x, dtype=self.dtype)
[<matplotlib.lines.Line2D at 0x12288f460>]
A more advanced example¶
backend = np
def function(x):
return x*backend.sin(x**2 + 1)
def analytical_gradient(x):
return backend.sin(x**2 + 1) + 2*x**2*backend.cos(x**2 + 1)
x = np.linspace(-5, 5, 100)
plt.plot(x, function(x), label="f(x)")
descent_x, descent_y = gradient_descent(1, 0.01, 300, solver="analytical")
# Change the backend to JAX
backend = jnp
jax_descend_x, jax_descend_y = gradient_descent(1, 0.01, 300, solver="jax")
plt.scatter(descent_x, descent_y, label="Gradient descent", marker="v", s=10, color="red")
plt.scatter(jax_descend_x, jax_descend_y, label="JAX", marker="x", s=5, color="black")
<matplotlib.collections.PathCollection at 0x110657310>