
A Class of Projected-Search Methods for Bound-Constrained Optimization

by Michael W. Ferry, Philip E. Gill, Elizabeth Wong, Minxin Zhang

Abstract:

Projected-search methods for bound-constrained optimization are based on performing a search along a piecewise-linear continuous path obtained by projecting a search direction onto the feasible region. A potential benefit of a projected-search method is that many changes to the active set can be made at the cost of computing a single search direction. As the objective function is not differentiable along the search path, it is not possible to use a projected-search method with a step that satisfies the Wolfe conditions, which require the directional derivative of the objective function at a point on the path. For this reason, methods based wholly or in part on a simple backtracking procedure must be used to give a step that satisfies an "Armijo-like" sufficient decrease condition. As a consequence, conventional projected-search methods are unable to exploit sophisticated safeguarded polynomial interpolation techniques that have been shown to be effective for the unconstrained case. This paper describes a new framework for the development of a general class of projected-search methods for bound-constrained optimization. At each iteration, a descent direction is computed with respect to a certain extended active set. This direction is used to specify a search direction that is used in conjunction with a step length computed by a quasi-Wolfe search. The quasi-Wolfe search is designed specifically for use with a piecewise-linear search path and is similar to a conventional Wolfe line search, except that a step is accepted under a wider range of conditions. These conditions take into consideration steps at which the restriction of the objective function to the search path is not differentiable. Standard existence and convergence results associated with a conventional Wolfe line search are extended to the quasi-Wolfe case.
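To illustrate the ingredients described above, the following is a minimal sketch of a conventional projected search for box constraints: the projected path x(alpha) = P(x + alpha*d) and a simple backtracking procedure enforcing an "Armijo-like" sufficient decrease condition. This is not the paper's quasi-Wolfe search; the specific decrease test (measured against the actual projected step) and all parameter values are illustrative assumptions.

```python
import numpy as np

def project(x, lower, upper):
    # Projection onto the box {x : lower <= x <= upper}.
    return np.clip(x, lower, upper)

def armijo_projected_search(f, grad, x, d, lower, upper,
                            c1=1e-4, shrink=0.5, max_iter=50):
    """Backtracking search along the piecewise-linear projected path
    x(alpha) = P(x + alpha * d).

    A sketch only: one common Armijo-like condition for projected
    searches, testing decrease against the projected step x_new - x
    rather than alpha * d (the two differ once bounds become active).
    """
    fx = f(x)
    g = grad(x)
    alpha = 1.0
    x_new = project(x + alpha * d, lower, upper)
    for _ in range(max_iter):
        x_new = project(x + alpha * d, lower, upper)
        # Sufficient decrease relative to the projected step actually taken.
        if f(x_new) <= fx + c1 * g.dot(x_new - x):
            return alpha, x_new
        alpha *= shrink  # simple backtracking; no interpolation safeguards
    return alpha, x_new
```

Note that because the path is piecewise linear, f(x(alpha)) is generally nondifferentiable at the "kinks" where components of x + alpha*d hit a bound, which is precisely why a standard Wolfe curvature test cannot be applied directly and a quasi-Wolfe acceptance rule is needed.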
In addition, it is shown that under a standard nondegeneracy assumption, any method within the framework will identify the optimal active set in a finite number of iterations. Computational results are given for a specific projected-search method that uses a limited-memory quasi-Newton approximation of the Hessian. The results show that, in this context, a quasi-Wolfe search is substantially more efficient and reliable than an Armijo-like search based on simple backtracking. Comparisons with a state-of-the-art bound-constrained optimization package are also presented.

UCSD-CCoM-20-07.pdf   August 2020