Limited-memory reduced-Hessian methods for unconstrained optimization
Quasi-Newton methods are reliable and efficient on a wide range of
problems, but they can require many iterations if no good estimate of
the Hessian is available or the problem is ill-conditioned. Methods
that are less susceptible to ill-conditioning can be formulated by
exploiting the fact that quasi-Newton methods accumulate
second-derivative information in a sequence of expanding manifolds.
These modified methods store the approximate second derivatives in a
smaller reduced approximate Hessian. The availability of a reduced
Hessian allows conventional quasi-Newton methods to be improved in two
ways. First, it is possible to reduce the objective while the iterates
are forced to linger on a manifold whose dimension can be
significantly smaller than that of the manifold on which curvature is
being accumulated. Second, approximate curvature in directions off the
manifold can be reinitialized as the iterations proceed, thereby
reducing the influence of a poor initial estimate of the Hessian.
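The central idea above can be illustrated on a convex quadratic. In this minimal sketch the basis Z spans the manifold generated by the gradients, and each step minimizes the objective over that manifold using a small reduced Hessian Z^T A Z. For simplicity the curvature is taken from the exact Hessian A; the methods described here instead accumulate quasi-Newton curvature estimates, so this is an idealized illustration, not the paper's algorithm.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x by stepping only within the
# expanding manifold spanned by the gradients.  The reduced Hessian
# Hr = Z^T A Z is k-by-k, with k typically much smaller than n.
# (Exact curvature A is used here purely for illustration.)
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
g = A @ x - b                               # gradient of f at x
Z = (g / np.linalg.norm(g)).reshape(n, 1)   # orthonormal basis of the manifold

for _ in range(n):
    Hr = Z.T @ A @ Z                        # small reduced Hessian
    pr = np.linalg.solve(Hr, -Z.T @ g)      # reduced Newton step
    x = x + Z @ pr                          # step confined to span(Z)
    g = A @ x - b
    w = g - Z @ (Z.T @ g)                   # gradient component off the manifold
    if np.linalg.norm(w) > 1e-12:
        Z = np.hstack([Z, (w / np.linalg.norm(w)).reshape(n, 1)])
    else:
        break                               # gradient lies in span(Z): optimal

print(np.linalg.norm(A @ x - b))            # residual near zero at convergence
```

Because each iteration exactly minimizes over the expanding gradient manifold, the sketch terminates in at most n steps on a quadratic; the limited-memory variants discussed in the paper bound the dimension of Z instead of letting it grow.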
We conclude by presenting extensive numerical results from problems in
the CUTE test set. Our experiments provide strong evidence that
reduced-Hessian quasi-Newton methods are more robust and more efficient
than conventional quasi-Newton methods on small- to medium-sized problems.