Abstract
We present a new algorithm for nonlinear minimax optimization which is well suited for large and sparse problems. The method is based on trust regions and sequential linear programming. On each iteration, a linear minimax problem is solved for a basic step. If necessary, this is followed by the determination of a minimum-norm corrective step based on a first-order Taylor approximation. No Hessian information needs to be stored. Global convergence is proved. The new method has been extensively tested and compared with other methods, including two well-known codes for nonlinear programming. The numerical tests indicate that in many cases the new method can find the solution in just as few iterations as methods based on approximate second-order information. The tests also show that for some problems the corrective steps give much faster convergence than similar methods that do not employ such steps.
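The basic step described above — linearizing each component function and solving the resulting linear minimax problem within a trust region — can be sketched as a small LP-based iteration. The sketch below is an illustration of the general SLP trust-region idea only, not the paper's algorithm: it omits the minimum-norm corrective step and uses a simplified acceptance rule, and the three test functions are a standard minimax example chosen here for demonstration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative three-function minimax test problem (not from the paper):
# minimize over x the quantity max(f1(x), f2(x), f3(x)).
def funcs(x):
    x1, x2 = x
    return np.array([x1**2 + x2**4,
                     (2 - x1)**2 + (2 - x2)**2,
                     2 * np.exp(x2 - x1)])

def jac(x):
    x1, x2 = x
    return np.array([[2 * x1, 4 * x2**3],
                     [-2 * (2 - x1), -2 * (2 - x2)],
                     [-2 * np.exp(x2 - x1), 2 * np.exp(x2 - x1)]])

def slp_minimax(x, delta=0.5, iters=30, tol=1e-8):
    """Simplified SLP trust-region loop: each iteration solves the
    linearized minimax problem  min_d max_i [f_i(x) + g_i^T d]
    subject to ||d||_inf <= delta, posed as an LP in (d, t)."""
    for _ in range(iters):
        f, J = funcs(x), jac(x)
        m, n = J.shape
        # Variables z = (d_1, ..., d_n, t); minimize t.
        c = np.zeros(n + 1)
        c[-1] = 1.0
        # f_i + g_i^T d <= t  rewritten as  g_i^T d - t <= -f_i.
        A_ub = np.hstack([J, -np.ones((m, 1))])
        b_ub = -f
        bounds = [(-delta, delta)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                      method="highs")
        d = res.x[:n]
        # Crude acceptance test (the paper uses a more refined rule
        # plus a corrective step): keep the step only if the true
        # objective decreases; otherwise shrink the trust region.
        if np.max(funcs(x + d)) < np.max(f):
            x = x + d
        else:
            delta *= 0.5
        if np.linalg.norm(d) < tol:
            break
    return x

x = slp_minimax(np.array([1.0, -0.1]))
print(x, np.max(funcs(x)))
```

Only first-order (Jacobian) information is used, which is what allows such methods to avoid storing Hessians and to exploit sparsity in large problems.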
| Original language | English |
| ---|--- |
| Pages (from-to) | 372-387 |
| Number of pages | 16 |
| Journal | BIT |
| Volume | 34 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Sept 1994 |
Other keywords
- AMS subject classification: 65K10
- Nonlinear minimax optimization
- Sequential linear programming
- Sparse nonlinear programming