BFGS optimization problem

The optimization problem is to minimize $f(x)$, where $x$ is a vector in $\mathbb{R}^n$ and $f$ is a differentiable scalar function. There are no constraints on the values that $x$ can take.
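In symbols, the unconstrained problem is

$$\min_{x \in \mathbb{R}^n} f(x).$$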

The algorithm begins at an initial estimate $x_0$ for the optimal value and proceeds iteratively to get a better estimate at each stage.

The search direction $p_k$ at stage $k$ is given by the solution of the analogue of the Newton equation

$$B_k p_k = -\nabla f(x_k),$$

where $B_k$ is an approximation to the Hessian matrix, which is updated iteratively at each stage, and $\nabla f(x_k)$ is the gradient of the function evaluated at $x_k$. A line search in the direction $p_k$ is then used to find the next point $x_{k+1}$ by minimising

$$f(x_k + \alpha p_k)$$

over the scalar $\alpha > 0$.
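As a rough sketch of one such iteration, the Python code below computes the search direction by solving $B_k p_k = -\nabla f(x_k)$ and then picks $\alpha$ with a simple backtracking line search. The example objective, its gradient, and the helper names are illustrative assumptions, and the BFGS update of $B_k$ itself is not shown here.

```python
import numpy as np

def f(x):
    # Illustrative objective (hypothetical): a simple coupled quadratic.
    return x[0]**2 + 2.0 * x[1]**2 + x[0] * x[1]

def grad_f(x):
    # Gradient of the illustrative objective.
    return np.array([2.0 * x[0] + x[1], 4.0 * x[1] + x[0]])

def bfgs_step(x_k, B_k, f, grad_f):
    """One iteration of the scheme above (sketch only; B_k is not updated here)."""
    g_k = grad_f(x_k)
    p_k = np.linalg.solve(B_k, -g_k)   # search direction from B_k p_k = -grad f(x_k)

    # Backtracking line search over alpha > 0 (Armijo sufficient-decrease condition).
    alpha, rho, c = 1.0, 0.5, 1e-4
    while f(x_k + alpha * p_k) > f(x_k) + c * alpha * (g_k @ p_k):
        alpha *= rho

    return x_k + alpha * p_k

x = np.array([1.0, 1.0])
B = np.eye(2)                          # initial Hessian approximation
print(bfgs_step(x, B, f, grad_f))
```

In a full BFGS implementation the line search is usually required to satisfy the Wolfe conditions rather than just sufficient decrease, since that keeps the subsequent update of $B_k$ positive definite.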