### Arizona State University Network Science Seminar Series

## Upcoming Seminar: On addressing uncertainty and high-dimensionality in optimization and variational inequality problems: self-tuned stepsizes, and randomized block coordinate schemes

| | |
| --- | --- |
| **Speaker** | Farzad Yousefian (Oklahoma State University) |
| **Date** | 1:30 p.m., Mar 17, 2017 |
| **Location** | GWC 487 |

#### Short Bio

Farzad Yousefian is an assistant professor in the School of Industrial Engineering and Management at Oklahoma State University. Before joining OSU, he was a postdoctoral researcher in the Department of Industrial and Manufacturing Engineering at Penn State. He received his Ph.D. in industrial engineering from the University of Illinois at Urbana-Champaign in 2013. His thesis focused on the design, analysis, and implementation of stochastic approximation methods for solving optimization and variational problems in nonsmooth and uncertain regimes. His current research interests lie in developing efficient algorithms for ill-posed stochastic optimization and equilibrium problems arising in machine learning and multi-agent systems. He received the Best Theoretical Paper Award at the 2013 Winter Simulation Conference.

#### Abstract

A wide range of emerging applications in machine learning, signal processing, and multi-agent systems give rise to optimization problems and, more generally, variational inequality problems. Such models are often complicated by uncertainty and/or high dimensionality. In the first part of this talk, we consider stochastic mirror descent methods for solving stochastic convex optimization problems. The performance of this class of methods is known to be very sensitive to the choice of the stepsize sequence. Motivated by this gap, we present a unifying self-tuned update rule for the stepsize sequence such that: (i) it is characterized in terms of problem parameters and the algorithm's settings; and (ii) under this update rule, a suitably defined error metric is minimized. We demonstrate the performance of this update rule on the soft-margin linear SVM problem over several large datasets.

In the second part of the talk, motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequalities in which the number of component sets is very large. We develop a randomized block stochastic mirror-prox (B-SMP) algorithm, in which at each iteration only a randomly selected block coordinate of the solution is updated via two consecutive projection steps. We present a convergence analysis of the B-SMP scheme, including rate statements.
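To make the first part of the abstract concrete, the following is a minimal illustrative sketch of stochastic mirror descent on the soft-margin linear SVM problem. With the Euclidean Bregman divergence, mirror descent reduces to plain stochastic subgradient descent; the diminishing stepsize `gamma0 / sqrt(k+1)` used here is a generic textbook choice standing in for the talk's self-tuned rule, and all names and parameter values below are assumptions for illustration, not the speaker's implementation.

```python
import numpy as np

def sgd_svm(X, y, lam=0.1, gamma0=1.0, iters=2000, seed=0):
    """Stochastic (sub)gradient mirror descent -- with the Euclidean
    Bregman divergence this is plain SGD -- on the soft-margin
    linear SVM objective
        f(w) = (lam/2)||w||^2 + mean_i max(0, 1 - y_i <x_i, w>).
    NOTE: gamma0/sqrt(k+1) is a generic diminishing stepsize, not
    the self-tuned rule presented in the talk."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for k in range(iters):
        i = rng.integers(n)                 # sample one data point
        margin = y[i] * (X[i] @ w)
        g = lam * w                         # gradient of the regularizer
        if margin < 1.0:                    # hinge-loss subgradient
            g -= y[i] * X[i]
        gamma = gamma0 / np.sqrt(k + 1)     # diminishing stepsize
        w -= gamma * g
        w_avg += (w - w_avg) / (k + 1)      # running iterate average
    return w_avg
```

Returning the averaged iterate `w_avg` is standard for subgradient-type methods, whose last iterate need not converge; a self-tuned rule of the kind the abstract describes would replace the `gamma0 / sqrt(k+1)` line with a stepsize driven by problem parameters.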