Week 8, February 19-23: Gradient Methods
Contents
Overview
Brief reminder of the Newton-Raphson method
The equations
Simple geometric interpretation
Extending to more than one variable
Steepest descent
More on Steepest descent
The ideal
The sensitivity of gradient descent
Convex functions
Convex function
Conditions on convex functions
More on convex functions
Some simple problems
Standard steepest descent
Gradient method
Steepest descent method
Steepest descent method
Final expressions
Our simple \( 2\times 2 \) example
Derivatives and more
First a simple gradient descent solution
Implementing the steepest descent
Simple codes for steepest descent and conjugate gradient using a \( 2\times 2 \) matrix, in C++
The routine for the steepest descent method
Conjugate gradient method
Conjugate gradient method
Conjugate gradient method
Conjugate gradient method
Conjugate gradient method and iterations
Conjugate gradient method
Conjugate gradient method
Conjugate gradient method
Simple implementation of the Conjugate gradient algorithm
Broyden–Fletcher–Goldfarb–Shanno algorithm
Using gradient descent methods, limitations
Codes from numerical recipes
Finding the minimum of the harmonic oscillator model in one dimension
Functions to observe
Bringing back the full code from last week
General expression for the derivative of the energy
Python program for 2-electrons in 2 dimensions
Using Broyden's algorithm in SciPy
Conjugate gradient method
An example of conjugate (that is, \( \hat{A} \)-orthogonal) vectors is given by the eigenvectors of a symmetric matrix \( \hat{A} \). Since \( \hat{A}\hat{v}_j = \lambda_j\hat{v}_j \), we have
$$ \begin{equation*} \hat{v}_i^T\hat{A}\hat{v}_j= \lambda_j\hat{v}_i^T\hat{v}_j, \end{equation*} $$
which is zero unless \( i=j \), because the eigenvectors of a symmetric matrix are mutually orthogonal.
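This orthogonality statement is easy to verify numerically. The short Python sketch below (not part of the original notes; the matrix and its size are arbitrary choices for illustration) builds a random symmetric matrix, computes its eigenvectors with `numpy.linalg.eigh`, and checks that \( \hat{v}_i^T\hat{A}\hat{v}_j \) vanishes for \( i\neq j \) while the diagonal entries reproduce the eigenvalues.

```python
import numpy as np

# Build an arbitrary symmetric 4x4 matrix (example choice, fixed seed).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T

# eigh returns eigenvalues and an orthonormal set of eigenvectors (columns of V).
eigvals, V = np.linalg.eigh(A)

# Entry (i, j) of M is v_i^T A v_j = lambda_j v_i^T v_j.
M = V.T @ A @ V

# Off-diagonal entries vanish (conjugacy); the diagonal holds the eigenvalues.
assert np.allclose(M, np.diag(eigvals))
print(np.round(M, 10))
```

The same check works for the \( 2\times 2 \) matrix used elsewhere in these notes; `eigh` is preferred over `eig` here because it exploits the symmetry of \( \hat{A} \) and guarantees real eigenvalues and orthonormal eigenvectors.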