A Stabilized SQP Method: Superlinear Convergence


Regularized and stabilized sequential quadratic programming (SQP) methods are two classes of methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a stabilized SQP method has been proposed that allows convergence to points satisfying certain second-order KKT conditions (Report CCoM 13-04, Center for Computational Mathematics, University of California, San Diego, 2013). The method is formulated as a regularized SQP method with an implicit safeguarding strategy based on minimizing a bound-constrained primal-dual augmented Lagrangian. The method involves a flexible line search along a direction formed from the approximate solution of a regularized quadratic programming subproblem and, when one exists, a direction of negative curvature for the primal-dual augmented Lagrangian. With an appropriate choice of termination condition, the method terminates in a finite number of iterations under weak assumptions on the problem functions. Safeguarding becomes relevant only when the iterates are converging to an infeasible stationary point of the norm of the constraint violations. Otherwise, the method terminates with a point that either satisfies the second-order necessary conditions for optimality, or fails to satisfy a weak second-order constraint qualification. The purpose of this paper is to establish the conditions under which this second-order stabilized SQP algorithm is equivalent to the conventional stabilized SQP method. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The required convergence properties are obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. Numerical results on both degenerate and nondegenerate problems are reported.
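For readers unfamiliar with the "conventional stabilized SQP method" referred to above, the sketch below illustrates one common equality-constrained form of the stabilized step found in the literature, in which the step is computed from a regularized KKT system whose (2,2) block remains nonsingular even when the constraint Jacobian is rank deficient. The function names, sign convention (Lagrangian taken as f + y'c), fixed regularization parameter, and the small worked example are all illustrative assumptions; this is not the subproblem solved by the paper's method, which treats inequality constraints, uses a primal-dual augmented Lagrangian merit function, and may include a direction of negative curvature.

```python
import numpy as np

def stabilized_sqp_step(g, H, c, J, y, mu):
    """Illustrative sketch of one conventional stabilized SQP step for
        minimize f(x)  subject to  c(x) = 0,
    using the convention L(x, y) = f(x) + y'c(x).

    Solves the regularized KKT system
        [ H   J' ] [dx]   [ -(g + J'y) ]
        [ J  -mu*I] [dy] = [ -c        ],
    where the -mu*I block keeps the system nonsingular even when J is
    rank deficient (the degenerate case stabilized SQP is designed for).
    """
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T],
                  [J, -mu * np.eye(m)]])
    rhs = np.concatenate([-(g + J.T @ y), -c])
    step = np.linalg.solve(K, rhs)
    return step[:n], step[n:]

# Hypothetical example: one step on  minimize x1^2 + x2^2  s.t.  x1 + x2 = 1,
# starting from x = (0, 0), y = 0, with mu = 1e-4.
x = np.zeros(2); y = np.zeros(1); mu = 1e-4
g = 2.0 * x                        # objective gradient at x
H = 2.0 * np.eye(2)                # Hessian of the Lagrangian at (x, y)
c = np.array([x[0] + x[1] - 1.0])  # constraint residual
J = np.array([[1.0, 1.0]])         # constraint Jacobian
dx, dy = stabilized_sqp_step(g, H, c, J, y, mu)
print(x + dx)                      # essentially the solution (0.5, 0.5)
```

Because the regularization term perturbs the step by O(mu), a small (or suitably updated) value of mu is what makes the fast local convergence analyzed in the paper possible; the fixed value above is only for illustration.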

