Iterative Methods for Finding a Trust-Region Step


We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region inequality constraint. The Steihaug--Toint method uses the conjugate-gradient algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. The benefit of this approach is that an approximate solution may be obtained with minimal work and storage. However, the method does not allow the accuracy of a constrained solution to be specified. We propose an extension of the Steihaug--Toint method that allows a solution to be calculated to any prescribed accuracy. If the Steihaug--Toint point lies on the boundary, the constrained problem is solved on a sequence of evolving low-dimensional subspaces. Each subspace includes an accelerator direction obtained from a regularized Newton method applied to the constrained problem. A crucial property of this direction is that it can be computed by applying the conjugate-gradient method to a positive-definite system in both the primal and dual variables of the constrained problem. The method includes a parameter that allows the user to take advantage of the tradeoff between the overall number of function evaluations and matrix-vector products associated with the underlying trust-region method. At one extreme, a low-accuracy solution is obtained that is comparable to the Steihaug--Toint point. At the other extreme, a high-accuracy solution can be specified that minimizes the overall number of function evaluations at the expense of more matrix-vector products.
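To make the starting point of the abstract concrete, the sketch below illustrates the basic Steihaug--Toint iteration: truncated conjugate gradients applied to the subproblem of minimizing q(s) = g's + (1/2)s'Hs subject to ||s|| <= delta, stopping at the boundary when negative curvature is detected or a step would cross it. This is a minimal illustration under those assumptions, not the authors' implementation, and the function names are hypothetical.

    import numpy as np

    def steihaug_toint(H, g, delta, tol=1e-6, max_iter=None):
        """Approximately minimize q(s) = g's + 0.5 s'Hs subject to ||s|| <= delta
        using truncated conjugate gradients (a sketch of the Steihaug-Toint idea)."""
        n = g.size
        if max_iter is None:
            max_iter = 2 * n
        s = np.zeros(n)       # iterate, starting at the trust-region center
        r = -g                # residual of the Newton system H s = -g
        p = r.copy()          # CG search direction
        if np.linalg.norm(r) < tol:
            return s
        for _ in range(max_iter):
            Hp = H @ p
            pHp = p @ Hp
            if pHp <= 0:
                # Negative curvature: follow p to the trust-region boundary
                return _to_boundary(s, p, delta)
            alpha = (r @ r) / pHp
            s_next = s + alpha * p
            if np.linalg.norm(s_next) >= delta:
                # Step would cross the boundary: stop on the boundary instead
                return _to_boundary(s, p, delta)
            r_next = r - alpha * Hp
            if np.linalg.norm(r_next) < tol * np.linalg.norm(g):
                return s_next  # interior solution to the requested accuracy
            beta = (r_next @ r_next) / (r @ r)
            p = r_next + beta * p
            s, r = s_next, r_next
        return s

    def _to_boundary(s, p, delta):
        # Choose tau >= 0 so that ||s + tau*p|| = delta (positive root of a quadratic)
        a = p @ p
        b = 2.0 * (s @ p)
        c = s @ s - delta**2
        tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return s + tau * p

When the returned point lies on the boundary, the method proposed in the paper refines it further on low-dimensional subspaces; that refinement is not shown here.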