Ergodic Results in Subgradient Optimization

Torbjörn Larsson, Michael Patriksson, and Ann-Brith Strömberg
Division of Optimization, Department of Mathematics
Linköping Institute of Technology
S-581 83 Linköping, Sweden

ABSTRACT: Subgradient methods for nonsmooth, convex minimization have gained considerable popularity, owing to their simplicity and proven practical usefulness, particularly in the context of Lagrangean relaxation. In subgradient methods the optimality conditions will, in general, not become fulfilled, not even in the limit. In contrast to descent methods, it is therefore not straightforward in a subgradient method to monitor the progress of the method in terms of the approximate fulfilment of the optimality conditions. A further consequence of this difficulty in verifying optimality is that certain supplementary information, such as convergent estimates of Lagrangean multipliers, is not easily available in subgradient schemes. As a means of obtaining this supplementary information in the subgradient optimization method, we extend it with the computation of an ergodic sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme (of which the traditional subgradient optimization method is a special case) with divergent series step lengths, which generates a sequence of iterates that converges to an optimal solution. We show that the elements of the ergodic sequence of subgradients in the limit fulfil the optimality conditions at this optimal solution. Further, we use the convergence properties of the ergodic sequence of subgradients to establish the convergence of an ergodic sequence of Lagrangean multipliers. Finally, some potential applications of these ergodic results are briefly discussed.

KEY WORDS: Nonsmooth Minimization, Conditional Subgradient Optimization, Ergodic Sequences, Lagrangean Multipliers
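The phenomenon described in the abstract can be seen already in a toy problem. The sketch below (an illustration only, not the authors' conditional scheme; the unconstrained problem, the function f(x) = |x|, and the particular step lengths are assumptions) runs a plain subgradient method with divergent series step lengths t_k = 1/sqrt(k+1). The individual subgradients oscillate between -1 and +1 and never fulfil the optimality condition 0 in the subdifferential of f at 0, while their ergodic (step-length weighted) average tends toward 0:

```python
# Plain subgradient optimization of f(x) = |x|, whose unique minimizer
# is x* = 0.  Step lengths form a divergent series: t_k -> 0 and
# sum_k t_k = infinity.  Alongside the iterates we accumulate the
# ergodic (step-length weighted) average of the subgradients used.

def subgradient(x):
    # A subgradient of f(x) = |x|: sign(x) away from the kink; at
    # x = 0 any value in [-1, 1] works, here we pick +1.
    return 1.0 if x >= 0 else -1.0

x = 5.0                        # starting point
weighted_g = 0.0               # running sum of t_k * g_k
weight_sum = 0.0               # running sum of t_k
for k in range(200_000):
    t = 1.0 / (k + 1) ** 0.5   # divergent series step lengths
    g = subgradient(x)
    weighted_g += t * g
    weight_sum += t
    x -= t * g                 # subgradient step

ergodic_g = weighted_g / weight_sum
print(x, ergodic_g)            # both near 0; single g_k are always +-1
```

Note that the individual subgradients never get small here, so they carry no optimality information on their own; it is only the step-length weighted average, the analogue of the ergodic sequence studied in the paper, that approaches an element of the subdifferential fulfilling the optimality conditions.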