Mathematical optimization of convex functions

A mathematical optimization problem in quadratic form (a so-called quadratic program) reads $$ \begin{align*} &\mathrm{min}_{\boldsymbol{\lambda}}\hspace{0.2cm} \frac{1}{2}\boldsymbol{\lambda}^T\boldsymbol{P}\boldsymbol{\lambda}+\boldsymbol{q}^T\boldsymbol{\lambda},\\ \nonumber &\mathrm{subject\hspace{0.1cm}to} \hspace{0.2cm} \boldsymbol{G}\boldsymbol{\lambda} \preceq \boldsymbol{h} \wedge \boldsymbol{A}\boldsymbol{\lambda}=\boldsymbol{f}. \end{align*} $$ Here the inequality constraint \( \boldsymbol{G}\boldsymbol{\lambda} \preceq \boldsymbol{h} \) is to be read componentwise, that is for each \( i=1,2,\dots, n \), and \( \boldsymbol{A}\boldsymbol{\lambda}=\boldsymbol{f} \) is an equality constraint. In our case we optimize with respect to the Lagrange multipliers \( \lambda_i \), and the vector \( \boldsymbol{\lambda}=[\lambda_1, \lambda_2,\dots, \lambda_n] \) is the optimization variable.
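To make this concrete, the sketch below sets up and solves a small quadratic program of exactly this form with the CVXPY library. The matrices and vectors are made-up illustrative numbers (not data from our problem), chosen only so that \( \boldsymbol{P} \) is positive definite and the problem is feasible.

```python
import numpy as np
import cvxpy as cp

# Illustrative data for n = 2 (assumed numbers, chosen so P is positive definite)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric, positive definite
q = np.array([-1.0, -1.0])
G = -np.eye(2)               # encodes lambda_i >= 0 as -lambda <= 0
h = np.zeros(2)
A = np.ones((1, 2))          # single equality constraint: lambda_1 + lambda_2 = 1
f = np.array([1.0])

lam = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(lam, P) + q @ lam)
constraints = [G @ lam <= h, A @ lam == f]
problem = cp.Problem(objective, constraints)
problem.solve()

print("optimal value  :", problem.value)
print("optimal lambda :", lam.value)
```

Since \( \boldsymbol{P} \) is positive semi-definite, the objective is convex and the solver returns the global minimum over the feasible set.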

We are particularly interested in the class of optimization problems called convex optimization problems. In our discussion of gradient descent methods we covered the definition of a convex function at length.
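For ease of reference, recall that a function \( f \) is convex if for all \( x_1, x_2 \) in its domain and all \( t \in [0,1] \), $$ f(t x_1 + (1-t) x_2) \le t f(x_1) + (1-t) f(x_2). $$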

Convex optimization problems play a central role in applied mathematics, and we strongly recommend the text by Boyd and Vandenberghe, Convex Optimization (Cambridge University Press, 2004), on the topic.