Ergodic Convergence in Subgradient Optimization

Torbjörn Larsson, Michael Patriksson, and Ann-Brith Strömberg

Division of Optimization, Department of Mathematics
Linköping Institute of Technology
S-581 83 Linköping, Sweden

ABSTRACT: When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and show that the elements of the ergodic sequence of subgradients in the limit fulfil the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. First, it enables the finite identification of active constraints at the solution obtained in the limit. Second, it is used to establish the ergodic convergence of sequences of Lagrange multipliers; this result enables us to carry out sensitivity analyses for solutions obtained by subgradient methods. The third implication is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure provides a proper termination criterion for subgradient methods.

KEY WORDS: Nonsmooth Minimization, Conditional Subgradient Optimization, Ergodic Convergence, Finite Identification Results, Lagrange Multipliers, Bounding Procedure
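To make the averaging referred to in the abstract concrete, the following is a minimal sketch; it assumes that the method generates iterates with subgradients $g^s$ and divergent series step lengths $\alpha_s$ (the symbols $g^s$, $\alpha_s$, and $\bar{g}^{\,t}$ are introduced here for illustration and need not match the notation used in the body of the paper). The ergodic (averaged) subgradient after $t$ iterations would then be the convexity-weighted mean
\[
  \bar{g}^{\,t} \;=\; \frac{\sum_{s=0}^{t-1} \alpha_s\, g^s}{\sum_{s=0}^{t-1} \alpha_s},
  \qquad
  \alpha_s > 0, \quad \lim_{s\to\infty} \alpha_s = 0, \quad \sum_{s=0}^{\infty} \alpha_s = \infty .
\]
Under step length conditions of this divergent series type the sequence of iterates converges to an optimal solution, and the abstract's claim is that the averaged subgradients $\bar{g}^{\,t}$ in the limit fulfil the optimality conditions at that solution, which is what makes them usable for monitoring progress and for termination tests.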