Recent Developments in Quasi-Newton Methods for Numerical Optimization
Quasi-Newton methods form the basis of many effective algorithms for unconstrained and constrained optimization. They require only the first derivatives of the problem and update an estimate of the Hessian matrix of second derivatives to reflect the approximate curvature information gathered at each iteration. In the years following the publication of the Davidon-Fletcher-Powell (DFP) method in 1963, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update emerged as the formula of choice for unconstrained minimization. More recently, a number of quasi-Newton methods have been proposed that aim to improve on the efficiency and reliability of the BFGS method. Unfortunately, there is no known analytical means of determining the relative performance of these methods on a general nonlinear function, and there is no accepted standard set of test problems that can be used to verify that results reported in the literature are comparable. In this talk we will discuss ongoing work to provide a thorough derivation, implementation, and numerical comparison of these methods in a systematic and consistent way. We will look in detail at several modifications, discuss their relative benefits, and review relevant numerical results.
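For readers unfamiliar with the update step described above, the standard BFGS formula for the Hessian approximation can be sketched as follows. This is a minimal illustration of the textbook update, not the speaker's implementation; the curvature safeguard shown is one common convention.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the Hessian approximation B.

    s = x_new - x_old (the step taken)
    y = grad_new - grad_old (the change in gradient)

    The update is skipped if the curvature condition y^T s > 0
    (nearly) fails -- a common safeguard, assumed here for illustration.
    """
    sy = y @ s
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # curvature condition violated; keep the old estimate
    Bs = B @ s
    # B_new = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s)
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```

By construction the updated matrix satisfies the secant condition `B_new @ s == y`, so the new estimate reproduces the curvature observed along the most recent step.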
Tuesday, May 17, 2022
11:00 AM, Zoom ID 954 6624 3503
Center for Computational Mathematics
9500 Gilman Dr. #0112
La Jolla, CA 92093-0112
Tel: (858) 534-9056