Arizona State University Network Science Seminar Series

Upcoming Seminar: Delay-Optimal Scheduling for Data Center Networks and Input-Queued Switches in Heavy Traffic

Speaker Siva Theja Maguluri (IBM)
Date 1:30 p.m., Oct 28th, 2016
Location GWC 487
Short Bio
Siva Theja Maguluri is a Research Staff Member in the Mathematical Sciences Department at IBM T. J. Watson Research Center. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in Electrical and Computer Engineering where he worked on resource allocation algorithms for cloud computing and wireless networks. Earlier, he received an MS in ECE and an MS in Applied Math from UIUC and a B.Tech in Electrical Engineering from IIT Madras. His research interests include Stochastic Processes, Optimization, Cloud Computing, Data Centers, Resource Allocation and Scheduling Algorithms, Networks, and Game Theory. He will start next spring as an Assistant Professor in the School of Industrial and Systems Engineering at Georgia Tech.
Abstract
Today's era of cloud computing is powered by massive data centers. A data center network enables the exchange of data in the form of packets among the servers within these data centers. Given the size of today's data centers, it is desirable to design low-complexity scheduling algorithms that result in a fixed average packet delay, independent of the size of the data center. We consider the scheduling problem in an input-queued switch, which is a good abstraction for a data center network. In particular, we study the queue length (equivalently, delay) behavior under the so-called MaxWeight scheduling algorithm, which has low computational complexity. Under various traffic patterns, we show that the algorithm achieves optimal scaling of the heavy-traffic scaled queue length with respect to the size of the switch. This settles one version of an open conjecture that has been a central question in the area of stochastic networks. We obtain this result by using a Lyapunov-type drift technique to characterize the heavy-traffic behavior of the expected total queue length in the network in steady-state.
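The MaxWeight rule itself is simple to state: in each time slot, serve the matching between input and output ports whose total queue length is largest. The sketch below is a minimal illustration that brute-forces over all matchings of a small switch; it is not the implementation discussed in the talk, where practical schedulers use maximum-weight-matching algorithms rather than enumeration.

```python
from itertools import permutations

def maxweight_schedule(q):
    """Return the matching that maximizes total served queue length.

    q[i][j] is the length of the virtual output queue at input i
    holding packets destined for output j. A matching assigns each
    input to a distinct output, i.e., a permutation. Brute force
    over all n! matchings; for illustration on small switches only.
    """
    n = len(q)
    best_perm, best_weight = None, -1
    for perm in permutations(range(n)):
        weight = sum(q[i][perm[i]] for i in range(n))
        if weight > best_weight:
            best_perm, best_weight = perm, weight
    return best_perm, best_weight

# Toy 3x3 switch: queue lengths per (input, output) pair.
q = [[4, 0, 1],
     [2, 5, 0],
     [0, 1, 3]]
perm, w = maxweight_schedule(q)
# perm matches input i to output perm[i]
```

Here the diagonal matching (input i to output i) wins with total weight 4 + 5 + 3 = 12, so each input serves its longest queue simultaneously.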

The High-Dimensional Limit of Stochastic Iterative Methods for Convex and Nonconvex Optimization: Dynamics and Phase Transitions

Speaker Yue M. Lu (Harvard University)
Date 10:30 a.m., Sept 27th, 2016
Location GWC 487
Short Bio
Yue M. Lu was born in Shanghai. After finishing undergraduate studies at Shanghai Jiao Tong University, he attended the University of Illinois at Urbana-Champaign, where he received the M.Sc. degree in mathematics and the Ph.D. degree in electrical engineering, both in 2007. He was a Research Assistant at the University of Illinois at Urbana-Champaign, and has worked for Microsoft Research Asia, Beijing, and Siemens Corporate Research, Princeton, NJ. Following his work as a postdoctoral researcher at the Audiovisual Communications Laboratory at Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, he joined Harvard University in 2010, where he is currently an Associate Professor of Electrical Engineering at the John A. Paulson School of Engineering and Applied Sciences.

He received the Most Innovative Paper Award (with Minh N. Do) of IEEE International Conference on Image Processing (ICIP) in 2006, the Best Student Paper Award of IEEE ICIP in 2007, and the Best Student Presentation Award at the 31st SIAM SEAS Conference in 2007. Student papers supervised and coauthored by him won the Best Student Paper Award (with Ivan Dokmanic and Martin Vetterli) of IEEE International Conference on Acoustics, Speech and Signal Processing in 2011 and the Best Student Paper Award (with Ameya Agaskar and Chuang Wang) of IEEE Global Conference on Signal and Information Processing (GlobalSIP) in 2014.

He has been an Associate Editor of the IEEE Transactions on Image Processing since 2014, an Elected Member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee since 2015, and an Elected Member of the IEEE Signal Processing Theory and Methods Technical Committee since 2016. He received the ECE Illinois Young Alumni Achievement Award in 2015.
Abstract
We consider efficient iterative methods (e.g., stochastic gradient descent, randomized Kaczmarz algorithms, iterative coordinate descent) for solving large-scale optimization problems, both convex and nonconvex. A flurry of recent work has focused on establishing theoretical performance guarantees for these methods. This intense interest is spurred by the remarkable empirical performance achieved by these low-complexity, memory-efficient methods.
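As a concrete instance of the methods named above, the randomized Kaczmarz algorithm solves a consistent linear system by repeatedly projecting the current iterate onto one randomly chosen equation. A minimal pure-Python sketch follows, sampling rows proportionally to their squared norms (the Strohmer-Vershynin variant); the toy system is an illustrative choice, not an example from the talk.

```python
import random

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz for a consistent linear system Ax = b.

    Each step picks a row i with probability proportional to
    ||a_i||^2 and projects x onto the hyperplane {x : <a_i, x> = b_i}:
        x <- x + (b_i - <a_i, x>) / ||a_i||^2 * a_i
    """
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    sq_norms = [sum(a * a for a in row) for row in A]
    x = [0.0] * n
    for _ in range(iters):
        i = rng.choices(range(m), weights=sq_norms)[0]
        residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
        step = residual / sq_norms[i]
        for j in range(n):
            x[j] += step * A[i][j]
    return x

# Consistent 3x2 system whose solution is (1, -2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -2.0, -1.0]
x = randomized_kaczmarz(A, b)
```

Each projection touches only one row of A, which is what makes the method attractive for very large systems; its expected convergence is geometric for consistent systems.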

In this talk, we will present a framework for analyzing the exact dynamics of these methods in the high-dimensional limit. For concreteness, we consider two classical problems: regularized linear regression (e.g., LASSO) and sparse principal component analysis. For each case, we show that the time-varying estimates given by the algorithms converge weakly to a deterministic “limiting process” in the high-dimensional (scaling and mean-field) limit. Moreover, this limiting process can be characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithms. For example, performance metrics such as the MSE, the cosine similarity, and the misclassification rate in sparse support recovery can all be obtained by examining the deterministic limiting process. A steady-state analysis of the nonlinear PDE also reveals interesting phase transition phenomena related to the performance of the algorithms. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
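For intuition about the regularized linear regression (LASSO) setting mentioned above, here is a hedged pure-Python sketch of iterative soft-thresholding (ISTA), one simple iterative method for the LASSO objective. The toy problem and parameter values are illustrative assumptions, not examples from the talk.

```python
def soft_threshold(z, t):
    """Proximal map of t*|.|: shrink z toward zero by t."""
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

def ista(A, b, lam, eta, iters=200):
    """ISTA for the LASSO: min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Alternates a gradient step on the smooth least-squares term with
    the soft-thresholding proximal step for the l1 penalty. The step
    size eta should not exceed 1 / ||A^T A||.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Gradient of the smooth part: A^T (A x - b).
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - eta * g[j], eta * lam) for j in range(n)]
    return x

# Toy problem with A = I, where the LASSO solution is simply the
# soft-threshold of b: here (2.0, 0.0).
A = [[1.0, 0.0], [0.0, 1.0]]
b = [3.0, 0.5]
x = ista(A, b, lam=1.0, eta=1.0)
# x ≈ [2.0, 0.0]
```

Note how the l1 penalty zeros out the small coefficient exactly; tracking the MSE of such iterates against a ground-truth sparse signal is the kind of performance metric the limiting-process analysis characterizes.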

What makes our analysis tractable is the notion of exchangeability, a fundamental property of symmetry that is inherent in many of the optimization problems encountered in signal processing and machine learning.