Cost approximation algorithms with nonmonotone line searches for a general class of nonlinear programs

Michael Patriksson
Department of Mathematics, Linköping Institute of Technology, S-581 83 Linköping, Sweden

ABSTRACT: When solving ill-conditioned nonlinear programs by descent algorithms, the descent requirement may force the step lengths to become very small, resulting in very poor performance. Several remedies for this problem have recently been suggested, among them a class of approaches in which the objective value is allowed to increase temporarily. Grippo {\em et al.}~\cite{GLL91} introduce nonmonotone line searches in the class of deflected gradient methods for unconstrained differentiable optimization; this technique allows longer steps (typically of unit length) to be taken, and has been applied successfully to ill-conditioned problems. This paper extends their nonmonotone approach and convergence results to the large class of cost approximation algorithms of Patriksson (1993), and to optimization problems with both convex constraints and nondifferentiable objective functions.