On Recent Developments in BFGS Methods for Unconstrained Optimization
by Philip E. Gill, Jeb H. Runnoe
Abstract:
Quasi-Newton methods form the basis of many effective algorithms for
unconstrained and constrained optimization. As the iterations proceed, a
quasi-Newton method incorporates new curvature information by applying a
low-rank update to a matrix that serves as an approximation to the Hessian
matrix of second derivatives. In the years following the publication of the
Davidon-Fletcher-Powell (DFP) method in 1963, the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) update emerged as the best update
formula for use in unconstrained minimization. More recently, a number of
quasi-Newton methods have been proposed that are intended to improve on the
efficiency and reliability of the BFGS method. Unfortunately, there is no
known analytical means of determining the relative performance of these
methods on a general nonlinear function, and there is a real need for
extensive numerical testing to determine whether the theoretical advantages
of each approach are realized in practice. The goal of this report is to
implement and test these methods in
a uniform, systematic, and consistent way. In the first part of the report,
we review several quasi-Newton methods, discuss their relative benefits,
and describe their implementation. In the second part, we examine more
recent variants, explain their motivation and theory, and evaluate their
performance.
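
For reference, the BFGS update mentioned above replaces the current Hessian
approximation by a rank-two modification built from the most recent step and
the corresponding change in gradient. A standard statement of the formula,
written here in illustrative notation that need not match the body of the
report, uses $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$:
\[
  B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}
  \;+\; \frac{y_k y_k^{T}}{y_k^{T} s_k},
\]
where $B_k$ denotes the approximation to the Hessian at iteration $k$.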