Arizona State University Network Science Seminar Series

Upcoming Seminar: Algorithms for Power Grid State Estimation after Cyber-Physical Attacks

Speaker Gil Zussman (Columbia University)
Date 10:30 a.m., Dec. 28, 2017
Location GWC 487
Short Bio
Gil Zussman received the Ph.D. degree in Electrical Engineering from the Technion in 2004 and was a postdoctoral associate at MIT in 2004–2007. He is currently an Associate Professor of Electrical Engineering at Columbia University. He is a co-recipient of seven paper awards, including the ACM SIGMETRICS'06 Best Paper Award, the 2011 IEEE Communications Society Award for Advances in Communication, and the ACM CoNEXT'16 Best Paper Award. He received the Fulbright Fellowship, the DTRA Young Investigator Award, and the NSF CAREER Award, and was the PI of a team that won first place in the 2009 Vodafone Foundation Wireless Innovation Project competition.
We present methods for estimating the state of the power grid following a cyber-physical attack. We assume that an adversary attacks an area by: (i) disconnecting some lines within that area (failed lines), and (ii) blocking the information from within the area from reaching the control center. Given the phase angles of the nodes outside the attacked area under either the DC or AC power flow model (before and after the attack), the provided methods can estimate the phase angles of the nodes and detect the failed lines inside the attacked area. The novelty of our approach is the transformation of the line failure detection problem, which is combinatorial in nature, into a convex optimization problem. As a result, our methods can detect any number of line failures in a running time that is independent of the number of failures and depends solely on the size of the attacked area.
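The combinatorial-to-convex idea can be illustrated with a generic sparse-recovery toy (not the talk's actual power-flow formulation): failed lines are modeled as a sparse indicator vector, the combinatorial support search is relaxed to an l1-penalized least-squares problem, and that convex problem is solved by iterative soft thresholding (ISTA). The matrix A, the problem sizes, the planted failures, and the step size below are all illustrative assumptions.

```python
import math
import random

random.seed(1)
m, n = 15, 20                 # observed equations, candidate lines (toy sizes)
true_support = {3, 11}        # lines assumed failed in this synthetic instance
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
s_true = [1.0 if j in true_support else 0.0 for j in range(n)]
y = [sum(A[i][j] * s_true[j] for j in range(n)) for i in range(m)]

def soft(v, t):
    # soft-thresholding operator: proximal map of the l1 norm
    return math.copysign(max(abs(v) - t, 0.0), v)

def ista(A, y, lam=0.01, step=0.01, iters=3000):
    # minimize 0.5 * ||A s - y||^2 + lam * ||s||_1 by proximal gradient descent
    cols = len(A[0])
    s = [0.0] * cols
    for _ in range(iters):
        r = [sum(A[i][j] * s[j] for j in range(cols)) - y[i] for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(cols)]
        s = [soft(s[j] - step * g[j], step * lam) for j in range(cols)]
    return s

s_hat = ista(A, y)
detected = set(sorted(range(n), key=lambda j: -abs(s_hat[j]))[:2])
```

In this instance the two largest recovered coefficients sit on the planted failures, and the iteration cost depends only on the problem size, not on how many lines failed, mirroring the running-time claim in the abstract.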

Upcoming Seminar: Collimated light propagation: The next frontier in underwater wireless communication

Speaker Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST))
Date 1:00 p.m., Jan. 12th, 2018
Location GWC 487
Short Bio
Mohamed-Slim Alouini was born in Tunis, Tunisia. He received the Ph.D. degree in Electrical Engineering from the California Institute of Technology (Caltech), Pasadena, CA, USA, in 1998. He served as a faculty member in the University of Minnesota, Minneapolis, MN, USA, then in the Texas A&M University at Qatar, Education City, Doha, Qatar before joining King Abdullah University of Science and Technology (KAUST), Thuwal, Makkah Province, Saudi Arabia as a Professor of Electrical Engineering in 2009.

Professor Alouini has won several awards in his career. For instance, he recently received the 2016 Recognition Award of the IEEE Communication Society Wireless Technical Committee, the 2016 Abdul Hameed Shoman Award for Arab Researchers in Engineering Sciences, and the Inaugural Organization of Islamic Cooperation (OIC) Science & Technology Achievement Award in Engineering Sciences in 2017.

Other recognitions include his selection as (i) Fellow of the Institute of Electrical and Electronics Engineers (IEEE), (ii) IEEE Distinguished Lecturer for the IEEE Communication Society, (iii) member for several times in the annual Thomson ISI Web of Knowledge list of Highly Cited Researchers as well as the Shanghai Ranking/Elsevier list of Most Cited Researchers, and (iv) a co-recipient of best paper awards in eleven IEEE conferences (including ICC, GLOBECOM, VTC, PIMRC, ISWCS, and DySPAN).
Traditional underwater communication systems rely on acoustic modems due to their reliability and long range. However, their limited data rates have led to the exploration of alternative techniques. In this talk, we briefly go over the potential offered by underwater wireless optical communication systems. We then summarize some of the underwater channel challenges, ranging from severe absorption to scattering, that need to be overcome before such systems can be deployed in practice. We finally present some of the ongoing research directions in the area of underwater wireless optical communication systems, which aim to (i) better characterize and model the underwater optical channel and (ii) design, develop, and experimentally test new modulation and coding techniques suitable for this environment.


Title | Speaker | Time | Location
Distributed Algorithms for Cyberphysical Systems | Nikolaos Freris (New York University Abu Dhabi (NYUAD)) | 1:30 p.m., Aug 18th, 2017 | ERC 490
Enabling Technologies for Autonomous Driving | Xinzhou Wu (Qualcomm) | 10:00 a.m., Sept 6th, 2017 | GWC 487
Learning-aided Stochastic Network Optimization with Imperfect State Prediction | Longbo Huang (Tsinghua University) | 1:00 p.m., Sept 13th, 2017 | GWC 487
An Information-theoretic approach towards Communication-efficient Distributed Machine Learning | Ravi Tandon (University of Arizona) | 1:15 p.m., Sept 22nd, 2017 | GWC 487
Information Disclosure under Privacy Constraints: Probability of Correct Guessing | Mario Diaz (Arizona State University) | 1:15 p.m., Oct. 9th, 2017 | GWC 487
Sparse Sampling for Active Learning of Multi-source & Multi-modal Environments | Urbashi Mitra (University of Southern California) | 3:00 p.m., Nov. 16th, 2017 | GWC 487
Delay Asymptotics in Cloud Computing | Weina Wang (University of Illinois at Urbana-Champaign) | 10:00 a.m., Dec. 22nd, 2017 | GWC 487
Algorithms for Power Grid State Estimation after Cyber-Physical Attacks | Gil Zussman (Columbia University) | 10:30 a.m., Dec. 28, 2017 | GWC 487
Collimated light propagation: The next frontier in underwater wireless communication | Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST)) | 1:00 p.m., Jan. 12th, 2018 | GWC 487

Distributed Algorithms for Cyberphysical Systems

Speaker Nikolaos Freris (New York University Abu Dhabi (NYUAD))
Date 1:30 p.m., Aug 18th, 2017
Location ERC 490
Short Bio
Nikolaos Freris is an assistant professor of Electrical and Computer Engineering at New York University Abu Dhabi (NYUAD), and a Global Network Assistant Professor at the New York University Tandon School of Engineering. He is the director of the Cyberphysical Systems Laboratory (CPSLab) at NYUAD, and a member of the Center for Cyber Security (CCS). He received the Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece, in 2005, and the M.S. degree in Electrical and Computer Engineering, the M.S. degree in Mathematics, and the Ph.D. degree in Electrical and Computer Engineering, all from the University of Illinois at Urbana-Champaign, in 2007, 2008, and 2010, respectively. Dr. Freris’s research interests lie in the area of cyberphysical systems: distributed estimation, optimization and control, data mining/machine learning, cyber security, and applications in transportation, sensor networks and robotics. His research was recognized with the 2014 IBM High Value Patent award, two IBM invention achievement awards, and the Gerondelis Foundation Award. Previously, Dr. Freris was a senior researcher in the School of Computer and Communication Sciences at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, from 2012 to 2014, and a postdoctoral researcher at IBM Research – Zurich, Switzerland, from 2010 to 2012. Dr. Freris is a senior member of the IEEE, and a member of SIAM and ACM.
Cyberphysical Systems (CPS) are very large networks in which collaborating agents possessing sensing, communication and computation capabilities are interconnected to control physical entities. Applications are ubiquitous in sensor networks, robotics, transportation, and smart grids. In this talk, I will present distributed, asynchronous and real-time algorithms for CPS, and illustrate applications in transportation, robotics and the cyber security of CPS. Specifically:

a) Distributed optimization: We propose a new block-coordinate operator splitting method that can handle a wide range of problems in multi-agent systems, signal processing and machine learning. We establish exponential convergence under a certain metric subregularity condition (which is weaker than strong convexity). We proceed to develop randomized distributed methods for multi-agent optimization, and exhibit our methods in the context of Network Utility Maximization and Distributed Model Predictive Control. On another front, we propose a novel exponentially converging gossip algorithm for GPS-free multi-agent localization.
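As a generic illustration of the gossip primitive underlying such multi-agent methods (not the exponentially converging localization algorithm from the talk), repeated pairwise averaging drives every agent's local value to the network-wide mean; the agent count and iteration budget below are arbitrary assumptions.

```python
import random

random.seed(0)
n = 20
x = [random.uniform(0.0, 100.0) for _ in range(n)]   # initial local values
target = sum(x) / n                                   # network-wide average

for _ in range(2000):
    # a random pair of agents wakes up and averages their current values
    i, j = random.sample(range(n), 2)
    x[i] = x[j] = (x[i] + x[j]) / 2.0

spread = max(x) - min(x)   # disagreement remaining across the network
```

Each pairwise exchange preserves the sum, so the common limit is exactly the initial average; the disagreement shrinks geometrically with the number of exchanges.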

b) Travel time estimation: We propose and analyze a method for performing compressed sensing on an infinite data stream. Our protocol involves a) encoding, via compressively sampling sliding windows of the data stream, and b) decoding, by means of solving LASSO using a newly developed quasi-Newton proximal method with accelerated convergence rates. We apply our framework to the problem of sparse kernel density estimation, and delineate its advantages for adaptively learning travel time distributions in transportation networks in real-time.
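To make the density-estimation end of that pipeline concrete, here is a plain Gaussian kernel density estimate of a synthetic bimodal travel-time distribution (free-flow vs. congested traffic). The mixture parameters and bandwidth are invented for illustration; the talk's compressed-sensing encoder and sparse-KDE machinery are not reproduced.

```python
import math
import random

random.seed(0)
# synthetic link travel times in minutes: 70% free-flow, 30% congested
samples = [random.gauss(12, 2) if random.random() < 0.7 else random.gauss(25, 3)
           for _ in range(500)]

def kde(x, data, h):
    # Gaussian kernel density estimate with bandwidth h
    z = math.sqrt(2 * math.pi) * h * len(data)
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / z

# the estimate should place far more mass at the free-flow mode (~12 min)
# than in the gap between the two modes
peak = kde(12.0, samples, 1.5)
valley = kde(18.5, samples, 1.5)
```

A streaming variant would update `samples` as new probe-vehicle measurements arrive, which is where the sliding-window encoding in the abstract comes in.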

c) Cyber Security: We establish fundamental asymptotic bounds on the security of distributed protocols against collusion attacks. Our analysis yields an encouraging result, in that the number of attackers that can be tolerated in large-scale CPS is ‘almost linear’ in the number of benign agents. Furthermore, we propose a scheme for performing computations directly on encrypted data in a distributed fashion, and discuss its implications in the realm of secure cloud computing.
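One standard building block for computing on data without revealing it is additive secret sharing. The sketch below is a textbook construction, not necessarily the scheme proposed in the talk: each agent splits its private reading into random shares so that the network can compute the sum while no single agent sees another's value.

```python
import random

random.seed(7)
MOD = 2 ** 61 - 1   # working modulus (illustrative choice)

def share(value, n):
    # split value into n additive shares that sum to value mod MOD;
    # any n-1 shares are uniformly random and reveal nothing
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

readings = [17, 42, 99]                        # each agent's private input
all_shares = [share(v, 3) for v in readings]   # agent i distributes its shares
# agent j sums the j-th share of every input; each partial sum is still random
partials = [sum(all_shares[i][j] for i in range(3)) % MOD for j in range(3)]
total = sum(partials) % MOD                    # combining partials reveals only the sum
```

Correctness holds because addition commutes with the sharing: the shares of all inputs sum to the sum of the inputs modulo `MOD`.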

Upcoming Seminar: Enabling Technologies for Autonomous Driving

Speaker Xinzhou Wu (Qualcomm)
Date 10:00 a.m., Sept 6th, 2017
Location GWC 487
Short Bio
Dr. Xinzhou Wu is a Sr. Director of Engineering at Qualcomm. He is currently working on enabling autonomous driving with Qualcomm technologies in compute platforms, computer vision, deep learning, connectivity and positioning.

He has many years of industry experience in wireless communications and vehicular networking, and currently holds 115 issued US patents and more than 100 pending patent applications. He is a key inventor, researcher and developer of vehicular networks and next-generation wireless mobile networks. Dr. Wu has published extensively in the areas of vehicular networking, communication theory, distributed networking algorithms and information theory.
Autonomous driving is becoming reality at an increasing pace, with all the tech giants now actively developing the technology. In this talk, we will focus on the key challenges of autonomous driving and how they can be addressed from a chip maker's perspective. In particular, we will try to bring clarity to two important questions: (1) how do we enable the massive compute needed for the big data and machine learning tasks of autonomous driving in an SoC, while staying within a reasonable thermal envelope; and (2) what are the key enabling technologies to remove the dependency on cost-prohibitive sensors for mass adoption of the technology.

Upcoming Seminar: Learning-aided Stochastic Network Optimization with Imperfect State Prediction

Speaker Longbo Huang (Tsinghua University)
Date 1:00 p.m., Sept 13th, 2017
Location GWC 487
Short Bio
Longbo Huang is an assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University, Beijing, China. He received his Ph.D. in EE from the University of Southern California in August 2011, and then worked as a postdoctoral researcher in the EECS department at the University of California, Berkeley from July 2011 to August 2012. Dr. Huang has been a visiting scholar at the LIDS lab at MIT and at the EECS department at UC Berkeley, and a visiting professor at the Chinese University of Hong Kong, Bell Labs France and Microsoft Research Asia (MSRA). He was a visiting scientist at the Simons Institute for the Theory of Computing at UC Berkeley in Fall 2016. Dr. Huang was selected into China’s Youth 1000-talent program in 2013, and received the outstanding teaching award from Tsinghua University in 2014. Dr. Huang has served as the lead guest editor for the JSAC special issue on “Human-In-The-Loop Mobile Networks” in 2016, and as an associate editor for ACM Transactions on Modeling and Performance Evaluation of Computing Systems (ToMPECS) in 2017-2019. Dr. Huang’s current research interests are in the areas of online learning, network optimization, online algorithm design, and sharing economy.
We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires no a priori statistical information about the system, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control.

Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. Moreover, PLC detects distribution changes O(w) slots faster with high probability (w is the prediction size) and achieves an O(min(ε^{c/2-1}, e_w/ε) + log^2(1/ε)) convergence time (e_w is the prediction error measure), which is faster than Backpressure and other existing algorithms. Our results demonstrate that state prediction (even when imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs. They also quantify the benefits of prediction in four important performance metrics, i.e., utility (efficiency), delay (quality-of-service), detection (robustness), and convergence (adaptability), and provide new insight for joint prediction, learning and optimization in stochastic networks.

An Information-theoretic approach towards Communication-efficient Distributed Machine Learning

Speaker Ravi Tandon (University of Arizona)
Date 1:15 p.m., Sept 22nd, 2017
Location GWC 487
Short Bio
Dr. Ravi Tandon received the PhD degree in ECE from the University of Maryland, College Park in 2010 and the B.Tech degree in Electrical Engineering from IIT, Kanpur, India in 2004. He was a post-doctoral research associate at Princeton University from 2010-12, and then worked as a research assistant professor at Virginia Tech till 2015. Since 2015, he has been an assistant professor in the ECE department at the University of Arizona. He received the Best Paper Award at IEEE Globecom 2011, and is a recipient of NSF CAREER award in 2017. He is a senior member of the IEEE. His current research interests include information and coding theory with applications in large-scale distributed learning, wireless communications, signal processing, security and privacy. ​
Distributed computing systems for large-scale data-sets have gained significant recent interest as they enable the processing of data-intensive tasks for machine learning, model training, and data analysis over a large number of commodity machines and servers (e.g., Apache Spark and MapReduce). Generally speaking, a master node, which has the entire dataset, sends data blocks to be processed at distributed worker nodes. The workers subsequently respond with locally computed functions to the master node for the desired data analysis. This enables the processing of many terabytes of data over thousands of distributed servers to provide speedup. However, intermediate communication across distributed machines emerges as one of the key bottlenecks in achieving ideal speedups. In this talk, I will present recent approaches that have shown how a novel application of codes can be used to reduce the communication footprint of distributed learning algorithms. I will talk about fundamental information-theoretic tradeoffs arising in such problems, connections to, and differences from, classical index coding problems, and directions for future work.
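A small instance of the coded-computation idea (here, the classic gradient-coding construction from the coded distributed learning literature, used as an illustrative example rather than the talk's specific scheme) shows how redundant linear combinations let the master recover the full gradient from any two of three workers, tolerating one straggler. The gradient vectors are arbitrary toy data.

```python
def lincomb(coeffs, vecs):
    # linear combination of equal-length vectors
    return [sum(c * v[k] for c, v in zip(coeffs, vecs)) for k in range(len(vecs[0]))]

# three partial gradients computed on three data partitions (toy values)
g = [[1.0, 2.0], [3.0, -1.0], [0.5, 4.0]]
full = lincomb([1, 1, 1], g)   # the quantity the master actually wants

# each worker processes two partitions but transmits only ONE coded vector
sends = [lincomb([0.5, 1, 0], g),    # worker 0 sends g1/2 + g2
         lincomb([0, 1, -1], g),     # worker 1 sends g2 - g3
         lincomb([0.5, 0, 1], g)]    # worker 2 sends g1/2 + g3

# decoding coefficients for every pair of surviving (non-straggler) workers
decoders = {(0, 1): (2, -1), (0, 2): (1, 1), (1, 2): (1, 2)}
recovered = {}
for (a, b), (ca, cb) in decoders.items():
    recovered[(a, b)] = [ca * sends[a][k] + cb * sends[b][k] for k in range(2)]
```

Whichever worker straggles, the master combines the other two responses and obtains g1 + g2 + g3 exactly, trading extra local computation for reduced sensitivity to slow machines.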

Information Disclosure under Privacy Constraints: Probability of Correct Guessing

Speaker Mario Diaz (Arizona State University)
Date 1:15 p.m., Oct. 9th, 2017
Location GWC 487
Short Bio
Mario Diaz is currently a postdoctoral scholar at the School of Electrical, Computer, and Energy Engineering, Arizona State University, and at the School of Engineering and Applied Sciences, Harvard. He received the Ph.D. degree in Mathematics and Statistics from Queen’s University, Canada, in 2017, the M.Sc. degree in Probability and Statistics in 2013 from the Center for Research in Mathematics (CIMAT), Mexico, and the B.Eng. degree in Electrical Engineering in 2011 from the University of Guadalajara, Mexico. His research interests include the mathematical and statistical theories of information privacy, free probability theory, random matrix theory, and multiantenna communications.
Awareness of the potential uses and misuses of personal information has made it necessary to develop new statistical techniques that allow the legitimate use of information while preventing illegitimate uses. In this context, it is natural to try to understand the fundamental limits of such statistical techniques in different scenarios. In this talk we present some recent results in this direction in the following setup: Assume that some private information X is correlated with some non-private information Y; thus it may be unsafe to release Y publicly. If some utility is obtained by revealing as much about Y as possible (e.g., personalized services), how much can we reveal about Y without compromising the private information X? This talk is based on joint work with S. Asoodeh, F. Alajaji, and T. Linder.
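On a toy joint distribution (numbers invented purely for illustration), the probability-of-correct-guessing measure is easy to compute: the adversary's chance of guessing the private X improves from the largest prior probability to the sum of the column-wise maxima of the joint distribution once Y is observed.

```python
# joint pmf p[x][y] over private X (rows) and observable Y (columns); toy values
p = [[0.30, 0.10],
     [0.10, 0.50]]

# without seeing Y, the adversary guesses the a priori most likely x
p_x = [sum(row) for row in p]          # marginal of X: [0.4, 0.6]
pc_prior = max(p_x)                    # P_c(X) = 0.6

# after observing Y = y, the adversary guesses argmax_x p(x | y);
# averaging over y gives sum_y max_x p(x, y)
pc_posterior = sum(max(p[x][y] for x in range(2)) for y in range(2))  # 0.8
```

The gap between the two values quantifies how much releasing Y helps the adversary, which is exactly the quantity a privacy mechanism would try to control.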

Sparse Sampling for Active Learning of Multi-source & Multi-modal Environments

Speaker Urbashi Mitra (University of Southern California)
Date 3:00 p.m., Nov. 16th, 2017
Location GWC 487
Short Bio
Urbashi Mitra received the B.S. and the M.S. degrees from the University of California at Berkeley and her Ph.D. from Princeton University. She began her academic career at The Ohio State University. Dr. Mitra is currently the Gordon S. Marshall Professor in Engineering at the University of Southern California with appointments in Electrical Engineering and Computer Science. She is the inaugural Editor-in-Chief for the IEEE Transactions on Molecular, Biological and Multi-scale Communications. She is a member of the IEEE Information Theory Society's Board of Governors (2002-2007, 2012-2017), the IEEE Signal Processing Society’s Technical Committee on Signal Processing for Communications and Networks (2012-2016), the IEEE Signal Processing Society’s Awards Board (2017), and the Vice Chair of the IEEE Communications Society, Communication Theory Working Group (2017). Dr. Mitra is a Fellow of the IEEE. She is the recipient of: a 2015 UK Royal Academy of Engineering Distinguished Visiting Professorship, a 2015 US Fulbright Scholar Award, a 2015-2016 UK Leverhulme Trust Visiting Professorship, an IEEE Communications Society Distinguished Lectureship, the 2012 Globecom Signal Processing for Communications Symposium Best Paper Award, the 2012 US National Academy of Engineering Lillian Gilbreth Lectureship, the 2009 DCOSS Applications & Systems Best Paper Award, a Texas Instruments Visiting Professorship (Fall 2002, Rice University), the 2001 Okawa Foundation Award, the 2000 OSU College of Engineering Lumley Award for Research, the 1997 OSU College of Engineering MacQuigg Award for Teaching, and a 1996 National Science Foundation CAREER Award. She has been an Associate Editor for the following IEEE publications: Transactions on Signal Processing, Transactions on Information Theory, Journal of Oceanic Engineering, and Transactions on Communications.
Dr. Mitra has held visiting appointments at: King’s College London, Imperial College, the Delft University of Technology, Stanford University, Rice University, and the Eurecom Institute. Her research interests are in: wireless communications, communication and sensor networks, biological communication systems, detection and estimation, and the interface of communication, sensing and control.
Consider a field of interest from which you can only collect a few observations. From these observations, we wish to find a target's location. Such a problem arises in military surveillance, environmental monitoring, cyber-security, medical diagnosis, and epidemic detection. In this talk, we consider a novel approach to target detection (or environmental sensing) from sparse samples. In particular, we model the target of interest as one emitting a signature that has spatial extent across the field (versus being a single pixel); furthermore, this signature is spatially separable and decays as a function of the distance of the observation point from the target (unimodal). The target detection and localization algorithm employs highly incomplete and noisy samples. By exploiting modern signal processing techniques such as matrix completion and active search methods, we develop a high-performance, moderate-complexity algorithm for target detection. This method is extended to the case of multiple targets via novel matrix factorization and isotonic projection methods. We further extend the approach to handle multimodal sensor data by exploiting tensor completion methods. Theoretical performance bounds are derived. The methods are tested against the state of the art on both synthetic and real data sets.
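As a toy stand-in for the matrix-completion step (far simpler than the algorithms in the talk), a noiseless rank-one field sampled at random locations can be filled in by alternating least squares on its two factors; the field values, sampling rate, and iteration count below are illustrative assumptions.

```python
import random

random.seed(3)
n = 8
u_true = [random.uniform(1.0, 2.0) for _ in range(n)]
v_true = [random.uniform(1.0, 2.0) for _ in range(n)]
M = [[u_true[i] * v_true[j] for j in range(n)] for i in range(n)]       # rank-1 field
seen = [[random.random() < 0.75 for j in range(n)] for i in range(n)]   # sparse samples

u = [1.0] * n
v = [1.0] * n
for _ in range(100):
    # least-squares update of each factor, using only the observed entries
    for i in range(n):
        den = sum(v[j] ** 2 for j in range(n) if seen[i][j])
        if den:
            u[i] = sum(M[i][j] * v[j] for j in range(n) if seen[i][j]) / den
    for j in range(n):
        den = sum(u[i] ** 2 for i in range(n) if seen[i][j])
        if den:
            v[j] = sum(M[i][j] * u[i] for i in range(n) if seen[i][j]) / den

# worst-case error over ALL entries, including the never-observed ones
max_err = max(abs(u[i] * v[j] - M[i][j]) for i in range(n) for j in range(n))
```

Because the field is exactly rank one, a modest fraction of samples determines the whole matrix; the unobserved entries are reconstructed from the recovered factors.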

Delay Asymptotics in Cloud Computing

Speaker Weina Wang (University of Illinois at Urbana-Champaign)
Date 10:00 a.m., Dec. 22nd, 2017
Location GWC 487
Short Bio
Weina Wang is a joint postdoctoral research associate in the Coordinated Science Lab at the University of Illinois at Urbana-Champaign, and in the School of ECEE at Arizona State University, working with Prof. R. Srikant and Prof. Lei Ying. She received her B.E. from Tsinghua University and her Ph.D. from Arizona State University, both in Electrical Engineering. Her research lies in the broad area of applied probability and stochastic systems, with applications in cloud computing, data centers, and privacy-preserving data analytics. Her dissertation received the Dean’s Dissertation Award in the Ira A. Fulton Schools of Engineering at Arizona State University in 2016. She received the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS 2016.
Cloud computing systems and data center networks are the engines that drive modern big-data technologies. Resource allocation and provisioning algorithms in such networks and systems are designed to meet very stringent delay requirements. In this talk, I will focus on the delay of jobs that consist of many parallel tasks, where a job is completed when all the tasks in the job are completed. While the delay of tasks has been extensively studied in various asymptotic regimes, job delay has not been well understood, even though job delay is the only metric of interest to end users. We first show that assuming task delays to be independent actually gives an upper bound on the job delay. Then in the large-system regime where the number of servers goes to infinity, we establish the asymptotic independence of the amount of work in each queue under suitable assumptions on the job size (i.e., the number of tasks in a job). Here, the job size is allowed to increase with the number of servers to capture the growing volume of data. This implies that the upper bound given by the independence assumption is asymptotically tight, and allows one to compute the tail probability and average job delay easily. Finally, I will briefly discuss some other problems in cloud computing systems, including data locality issues in task scheduling, connection-level resource allocation for data transfer, and data collection with privacy concerns.
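The "independence makes job delay easy to compute" message can be sanity-checked numerically: when task delays really are i.i.d., the job-delay tail factorizes as P(T_job > t) = 1 − (1 − P(T_task > t))^k, since the job finishes only when the slowest of its k tasks finishes. Exponential task delays and the parameters below are illustrative choices, not the queueing model from the talk.

```python
import math
import random

random.seed(42)
k = 5                 # tasks per job
t = 2.0               # delay threshold of interest
trials = 200_000      # Monte Carlo sample size

p_task = math.exp(-t)                 # P(task delay > t) for Exp(1) task delays
p_job = 1.0 - (1.0 - p_task) ** k     # tail of the job delay (max of k i.i.d. tasks)

# simulate jobs directly: a job exceeds t iff its slowest task does
hits = sum(max(random.expovariate(1.0) for _ in range(k)) > t
           for _ in range(trials))
estimate = hits / trials
```

With correlated task delays the product formula is no longer exact, which is why the asymptotic-independence result in the talk is what justifies using it in large systems.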