Arizona State University Network Science Seminar Series

Upcoming Seminar: An Information-theoretic approach towards Communication-efficient Distributed Machine Learning

Speaker Ravi Tandon (University of Arizona)
Date 1:15 p.m., Sept 22nd, 2017
Location GWC 487
Short Bio
Dr. Ravi Tandon received the Ph.D. degree in ECE from the University of Maryland, College Park in 2010 and the B.Tech degree in Electrical Engineering from IIT Kanpur, India in 2004. He was a postdoctoral research associate at Princeton University from 2010 to 2012, and then worked as a research assistant professor at Virginia Tech until 2015. Since 2015, he has been an assistant professor in the ECE department at the University of Arizona. He received the Best Paper Award at IEEE Globecom 2011, and is a recipient of the NSF CAREER award in 2017. He is a senior member of the IEEE. His current research interests include information and coding theory with applications in large-scale distributed learning, wireless communications, signal processing, security, and privacy.
Distributed computing systems for large-scale datasets have gained significant recent interest, as they enable the processing of data-intensive tasks for machine learning, model training, and data analysis over a large number of commodity machines and servers (e.g., Apache Spark and MapReduce). Generally speaking, a master node, which has the entire dataset, sends data blocks to be processed at distributed worker nodes. The workers subsequently send their locally computed functions back to the master node for the desired data analysis. This enables the processing of many terabytes of data over thousands of distributed servers to provide speedup. However, intermediate communication across distributed machines emerges as one of the key bottlenecks in achieving ideal speedups. In this talk, I will present recent approaches that have shown how a novel application of codes can be used to reduce the communication footprint of distributed learning algorithms. I will talk about fundamental information-theoretic tradeoffs arising in such problems, connections to, and differences from, classical index coding problems, and directions for future work.
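As a toy illustration of the coding idea described above (my own minimal sketch, not the specific constructions from the talk), the master can send a redundantly coded data block so that the final result is recoverable from any sufficiently large subset of worker responses:

```python
# Toy illustration: (3, 2) MDS-coded distributed matrix-vector multiplication.
# The master splits A into two row blocks and sends a third, coded block
# (A1 + A2) to a redundant worker; the full product A @ x can be recovered
# from ANY two of the three worker responses, so one straggler costs nothing.

def mat_vec(rows, x):
    """Plain matrix-vector product over Python lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

def add_blocks(p, q):
    """Entry-wise sum of two equally sized matrices."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(p, q)]

A1 = [[1, 2], [3, 4]]          # first block of rows of A
A2 = [[5, 6], [7, 8]]          # second block of rows of A
x = [1, 1]

# Worker computations (in a real system these run on separate machines).
y1 = mat_vec(A1, x)                    # worker 1: A1 @ x
y2 = mat_vec(A2, x)                    # worker 2: A2 @ x
y3 = mat_vec(add_blocks(A1, A2), x)    # worker 3: (A1 + A2) @ x  (coded)

# Suppose worker 2 straggles: recover A2 @ x as y3 - y1.
recovered_y2 = [c - a for c, a in zip(y3, y1)]
assert recovered_y2 == y2
print(y1 + recovered_y2)   # the full product A @ x
```

The redundancy trades a small amount of extra computation for robustness to stragglers and a smaller communication footprint at decoding time, which is the flavor of tradeoff the talk analyzes information-theoretically.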


Title | Speaker | Time | Location
Distributed Algorithms for Cyberphysical Systems | Nikolaos Freris (New York University Abu Dhabi (NYUAD)) | 1:30 p.m., Aug 18th, 2017 | ERC 490
Enabling Technologies for Autonomous Driving | Xinzhou Wu (Qualcomm) | 10:00 a.m., Sept 6th, 2017 | GWC 487
Learning-aided Stochastic Network Optimization with Imperfect State Prediction | Longbo Huang (Tsinghua University) | 1:00 p.m., Sept 13th, 2017 | GWC 487
An Information-theoretic approach towards Communication-efficient Distributed Machine Learning | Ravi Tandon (University of Arizona) | 1:15 p.m., Sept 22nd, 2017 | GWC 487

Distributed Algorithms for Cyberphysical Systems

Speaker Nikolaos Freris (New York University Abu Dhabi (NYUAD))
Date 1:30 p.m., Aug 18th, 2017
Location ERC 490
Short Bio
Nikolaos Freris is an assistant professor of Electrical and Computer Engineering at New York University Abu Dhabi (NYUAD), and a Global Network Assistant Professor at the New York University Tandon School of Engineering. He is the director of the Cyberphysical Systems Laboratory (CPSLab) at NYUAD, and a member of the Center for Cyber Security (CCS). He received the Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece, in 2005, and the M.S. degree in Electrical and Computer Engineering, the M.S. degree in Mathematics, and the Ph.D. degree in Electrical and Computer Engineering, all from the University of Illinois at Urbana-Champaign, in 2007, 2008, and 2010, respectively. Dr. Freris's research interests lie in the area of cyberphysical systems: distributed estimation, optimization and control, data mining/machine learning, cyber security, and applications in transportation, sensor networks, and robotics. His research was recognized with the 2014 IBM High Value Patent award, two IBM invention achievement awards, and the Gerondelis Foundation award. Previously, Dr. Freris was a senior researcher in the School of Computer and Communication Sciences at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, from 2012 to 2014, and a postdoctoral researcher at IBM Research – Zurich, Switzerland, from 2010 to 2012. Dr. Freris is a senior member of the IEEE, and a member of SIAM and ACM.
Cyberphysical Systems (CPS) are very large networks in which collaborating agents possessing sensing, communication, and computation capabilities are interconnected for controlling physical entities. Applications are ubiquitous in sensor networks, robotics, transportation, and smart grids. In this talk, I will present distributed, asynchronous, and real-time algorithms for CPS, and illustrate applications in transportation, robotics, and the cyber security of CPS. Specifically:

a) Distributed optimization: We propose a new block-coordinate operator splitting method that can handle a wide range of problems in multi-agent systems, signal processing, and machine learning. We establish exponential convergence under a certain metric subregularity condition (which is weaker than strong convexity). We proceed to develop randomized distributed methods for multi-agent optimization, and demonstrate our methods in the context of Network Utility Maximization and Distributed Model Predictive Control. On another front, we propose a novel exponentially converging gossip algorithm for GPS-free multi-agent localization.
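As background for the gossip setting in (a), here is the classic randomized pairwise-averaging algorithm, a textbook baseline rather than the exponentially converging localization scheme proposed in the talk; the ring topology and initial values are illustrative assumptions:

```python
import random

# Classic randomized gossip for distributed averaging: at each step a random
# pair of neighboring agents replaces both of their values with the pair's
# average. Pairwise averaging preserves the global sum, so on a connected
# graph all values converge to the mean of the initial data.

def gossip_average(values, edges, steps=2000, seed=0):
    rng = random.Random(seed)
    v = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)       # a random communicating pair
        avg = (v[i] + v[j]) / 2.0
        v[i] = v[j] = avg
    return v

# A ring of 5 agents holding local measurements 0..4 (global mean = 2.0).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
final = gossip_average([0.0, 1.0, 2.0, 3.0, 4.0], edges)
print(final)  # every entry is close to 2.0
```

The convergence rate of such schemes is governed by the spectral gap of the underlying graph, which is the kind of quantity the talk's exponential-convergence analysis sharpens.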

b) Travel time estimation: We propose and analyze a method for performing compressed sensing on an infinite data stream. Our protocol involves (i) encoding, via compressive sampling of sliding windows of the data stream, and (ii) decoding, by solving the LASSO using a newly developed quasi-Newton proximal method with accelerated convergence rates. We apply our framework to the problem of sparse kernel density estimation, and delineate its advantages for adaptively learning travel time distributions in transportation networks in real time.
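To make the decoding step concrete, here is a minimal LASSO solver using plain iterative soft-thresholding (ISTA), a much simpler stand-in for the accelerated quasi-Newton proximal method developed in the talk; the tiny measurement matrix and sparse signal are illustrative assumptions:

```python
# Recover a sparse signal x from compressed measurements y = Phi @ x by
# solving the LASSO with ISTA: repeat a gradient step on 0.5*||y - Phi x||^2
# followed by soft-thresholding (the proximal operator of lam * ||x||_1).

def soft_threshold(z, t):
    """Entry-wise shrinkage toward zero by t."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in z]

def ista(Phi, y, lam=0.05, step=0.1, iters=1000):
    n = len(Phi[0])
    x = [0.0] * n
    for _ in range(iters):
        residual = [sum(Phi[i][j] * x[j] for j in range(n)) - y[i]
                    for i in range(len(y))]
        grad = [sum(Phi[i][j] * residual[i] for i in range(len(y)))
                for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x

# Tiny example: 3 measurements of a 5-dimensional 1-sparse signal.
Phi = [[1, 0, 1, 0, 1],
       [0, 1, 0, 1, 1],
       [1, 1, 0, 0, 0]]
x_true = [0, 0, 0, 0, 2.0]                                    # single nonzero
y = [sum(p * v for p, v in zip(row, x_true)) for row in Phi]  # y = Phi @ x
x_hat = ista(Phi, y)
print(x_hat)  # the last coordinate dominates, close to 2 (slightly shrunk)
```

ISTA converges at an O(1/k) rate in general, which is exactly why accelerated proximal methods like the one in the talk matter for real-time streaming applications.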

c) Cyber Security: We establish fundamental asymptotic bounds on the resilience of distributed protocols to collusion attacks. Our analysis yields an encouraging result, in that the number of attackers that can be tolerated in large-scale CPS is 'almost linear' in the number of benign agents. Furthermore, we propose a scheme for performing computations directly on encrypted data in a distributed fashion, and discuss its implications in the realm of secure cloud computing.

Upcoming Seminar: Enabling Technologies for Autonomous Driving

Speaker Xinzhou Wu (Qualcomm)
Date 10:00 a.m., Sept 6th, 2017
Location GWC 487
Short Bio
Dr. Xinzhou Wu is a Sr. Director of Engineering at Qualcomm. He is currently working on enabling autonomous driving with Qualcomm technologies in compute platforms, computer vision, deep learning, connectivity and positioning.

He has many years of industry experience in wireless communications and vehicular networking, and currently holds 115 issued US patents, with 100+ patent applications pending. He is a key inventor, researcher, and developer of vehicular networks and next-generation wireless mobile networks. Dr. Wu has published extensively in the areas of vehicular networking, communication theory, distributed networking algorithms, and information theory.
Autonomous driving is becoming a reality at an increasing pace, with all the tech giants now actively developing the technology. In this talk, we will focus on the key challenges of autonomous driving and how they can be addressed from a chip maker's perspective. In particular, we will try to bring clarity to two important questions: (1) how do we enable the massive compute required for the big-data and machine learning tasks of autonomous driving in an SoC, while staying within a reasonable thermal envelope; and (2) what are the key enabling technologies for removing the dependence on cost-prohibitive sensors, so that the technology can be adopted at scale.

Upcoming Seminar: Learning-aided Stochastic Network Optimization with Imperfect State Prediction

Speaker Longbo Huang (Tsinghua University)
Date 1:00 p.m., Sept 13th, 2017
Location GWC 487
Short Bio
Longbo Huang is an assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University, Beijing, China. He received his Ph.D. in EE from the University of Southern California in August 2011, and then worked as a postdoctoral researcher in the EECS department at the University of California, Berkeley from July 2011 to August 2012. Dr. Huang has been a visiting scholar at the LIDS lab at MIT and at the EECS department at UC Berkeley, and a visiting professor at the Chinese University of Hong Kong, Bell Labs France, and Microsoft Research Asia (MSRA). He was a visiting scientist at the Simons Institute for the Theory of Computing at UC Berkeley in Fall 2016. Dr. Huang was selected into China's Youth 1000-talent program in 2013, and received the outstanding teaching award from Tsinghua University in 2014. Dr. Huang has served as the lead guest editor for the JSAC special issue on "Human-In-The-Loop Mobile Networks" in 2016, and as an associate editor for ACM Transactions on Modeling and Performance Evaluation of Computing Systems (ToMPECS) for 2017-2019. Dr. Huang's current research interests are in the areas of online learning, network optimization, online algorithm design, and sharing economy.
We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historical and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori statistical information about the system, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control.

Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. Moreover, PLC detects distribution changes O(w) slots faster with high probability (w is the prediction window size) and achieves an O(min(ε^{−1+c/2}, e_w/ε) + log^2(1/ε)) convergence time (e_w is the prediction error measure), which is faster than Backpressure and other algorithms. Our results demonstrate that state prediction (even when imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs. They also quantify the benefits of prediction in four important performance metrics, i.e., utility (efficiency), delay (quality of service), detection (robustness), and convergence (adaptability), and provide new insights for joint prediction, learning, and optimization in stochastic networks.
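The queue-based control component of PLC belongs to the Lyapunov optimization family. As background (a standard drift-plus-penalty controller, not PLC itself), a minimal single-queue example looks like this; the action set and arrival process are chosen purely for illustration:

```python
import random

# Minimal drift-plus-penalty sketch: each slot, pick the action minimizing
#   V * cost(action) - Q(t) * service_rate(action),
# which trades average cost against queue backlog. Larger V lowers the
# average cost but grows the queue, the classic O(1/V) cost, O(V) delay
# tradeoff that prediction-aided schemes like PLC improve upon.

def drift_plus_penalty(actions, arrivals, V=10.0):
    """actions: list of (cost, service_rate); arrivals: per-slot counts."""
    Q = 0.0            # queue backlog
    total_cost = 0.0
    for a in arrivals:
        cost, rate = min(actions, key=lambda ar: V * ar[0] - Q * ar[1])
        Q = max(Q - rate, 0.0) + a     # serve, then admit new arrivals
        total_cost += cost
    return total_cost / len(arrivals), Q

rng = random.Random(1)
arrivals = [1 if rng.random() < 0.4 else 0 for _ in range(10000)]  # rate 0.4
actions = [(0.0, 0), (1.0, 1), (2.5, 2)]   # (power cost, packets served)
avg_cost, backlog = drift_plus_penalty(actions, arrivals, V=10.0)
print(avg_cost, backlog)  # cost near the arrival rate, bounded backlog
```

The controller idles until the backlog exceeds roughly V, then serves just enough to balance arrivals; PLC augments this kind of queue-based rule with learned and predicted state distributions to converge faster after distribution changes.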