Arizona State University Network Science Seminar Series

Upcoming Seminar: Modeling and Optimizing Complex Dynamic Transportation Systems: A State-space-time Network-based Framework

Speaker Xuesong Zhou (Arizona State University)
Date 1:30 p.m., April 15th, 2016
Location GWC 487
Short Bio
Dr. Xuesong Zhou is an Associate Professor in the School of Sustainable Engineering and the Built Environment at Arizona State University. Dr. Zhou’s research interests include large-scale dynamic transportation routing, assignment, simulation, and optimization. Dr. Zhou is currently an Associate Editor of Transportation Research Part C, an Associate Executive Editor-in-Chief of Urban Rail Transit, an Associate Editor of Networks and Spatial Economics, and an Editorial Board Member of Transportation Research Part B. He is Chair of the INFORMS Rail Application Section, Co-Chair of the IEEE ITS Society Technical Committee on Traffic and Travel Management, and a subcommittee chair of the TRB Committee on Transportation Network Modeling (ADB30). He is the principal architect and developer of DTALite, a lightweight open-source traffic assignment/simulation engine, and he has been assisting the FHWA and many state DOTs and metropolitan planning agencies in learning and deploying advanced transportation network modeling tools.
Abstract
Transportation state estimation and optimization techniques aim to use accurate state representations and optimized decisions to guide planning and operational management. A wide range of time-discretized network flow models have been proposed to represent transportation systems through space-time (time-expanded) networks. By adding further state dimensions (e.g., energy, speed, and vehicle-carrying states), we can construct a systematic representation that prebuilds many complex state transition constraints into a well-structured hyper-network, so that the resulting optimization model can be reformulated as a multi-commodity network flow model with a very limited number of side constraints. In this talk, we will walk through examples of recasting several classic transportation systems optimization problems in this state-space-time (SST) framework, namely large-scale vehicle ridesharing optimization, electric vehicle routing, and signal phase optimization.
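As a toy illustration of the time-expanded network idea described in the abstract, the sketch below runs a shortest-path search over nodes that are (location, time) states, so that rules such as waiting are pre-built into the arcs rather than added as side constraints. The function name, the arc encoding, and the tiny two-location network are illustrative assumptions, not the DTALite implementation.

```python
from heapq import heappush, heappop

def st_shortest_path(arcs, origin, destination, horizon):
    """Shortest path on a time-expanded network.

    Nodes are (location, time) states; `arcs` maps a state to a list of
    ((next_location, next_time), cost) pairs, so complex transition rules
    (waiting, travel times) live inside the network itself.
    """
    start = (origin, 0)
    dist, parent = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (loc, t) = heappop(heap)
        if loc == destination:
            # Recover the state trajectory back to the origin.
            path, s = [(loc, t)], (loc, t)
            while s in parent:
                s = parent[s]
                path.append(s)
            return d, path[::-1]
        if d > dist[(loc, t)]:
            continue  # stale heap entry
        for (nloc, nt), cost in arcs.get((loc, t), []):
            if nt <= horizon and d + cost < dist.get((nloc, nt), float("inf")):
                dist[(nloc, nt)] = d + cost
                parent[(nloc, nt)] = (loc, t)
                heappush(heap, (d + cost, (nloc, nt)))
    return float("inf"), []

# Two locations A and B; traveling A->B takes one step at cost 2,
# waiting at A for one step costs 0.5.
arcs = {
    ("A", 0): [(("A", 1), 0.5), (("B", 1), 2.0)],
    ("A", 1): [(("B", 2), 2.0)],
}
cost, path = st_shortest_path(arcs, "A", "B", horizon=3)
```

Because every constraint is encoded in which arcs exist, the same search routine handles richer state spaces (energy, speed, carrying states) simply by enlarging the node tuples.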

Seminars

Title | Speaker | Time | Location
A Unified Framework for Large-Scale Block-Structured Optimization | Mingyi Hong (Iowa State University) | 1:30 p.m., February 26th, 2016 | GWC 487
Stochastic and Information-theoretic Approaches to Analysis and Storage of Biological Data | Farzad Farnoud (California Institute of Technology) | 1:30 p.m., March 11th, 2016 | GWC 487
Minimizing Latency in Cloud Based Systems: Replication Over Parallel Servers | Yin Sun (Ohio State University) | 3:30 p.m., April 7th, 2016 | GWC 487
SGD and Randomized Projections Methods for Linear Systems | Deanna Needell (Claremont McKenna College) | 1:30 p.m., April 8th, 2016 | GWC 487
Modeling and Optimizing Complex Dynamic Transportation Systems: A State-space-time Network-based Framework | Xuesong Zhou (Arizona State University) | 1:30 p.m., April 15th, 2016 | GWC 487

A Unified Framework for Large-Scale Block-Structured Optimization

Speaker Mingyi Hong (Iowa State University)
Date 1:30 p.m., February 26th, 2016
Location GWC 487
Short Bio
Mingyi Hong received the B.E. degree from Zhejiang University, China, the M.S. degree from Stony Brook University, and the Ph.D. degree from the University of Virginia in 2005, 2007, and 2011, respectively. From 2011 to 2014 he held research positions in the Department of Electrical and Computer Engineering, University of Minnesota. He is currently a Black & Veatch Faculty Fellow and an Assistant Professor with the Department of Industrial and Manufacturing Systems Engineering and the Department of Electrical and Computer Engineering (by courtesy), Iowa State University. His research interests are primarily in the fields of large-scale optimization theory, statistical signal processing, next-generation wireless communications, and their applications in big data problems.
Abstract
In this talk we present a powerful algorithmic framework for large-scale optimization, called Block Successive Upper-bound Minimization (BSUM). BSUM includes as special cases many well-known methods in signal processing, communications, and massive data analysis, such as Block Coordinate Descent (BCD), the Convex-Concave Procedure (CCCP), the Block Coordinate Proximal Gradient (BCPG) method, Nonnegative Matrix Factorization (NMF), the Expectation Maximization (EM) method, and so on. Various features and properties of BSUM are discussed from the viewpoints of design flexibility, computational efficiency, and parallel/distributed implementation. Illustrative examples from networking, signal processing, and machine learning demonstrate the practical performance of the BSUM framework.
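To make the special-case relationship concrete, here is a minimal Block Coordinate Descent sketch, the BSUM instance in which each block's surrogate is the exact partial function, applied to a toy two-block quadratic. The objective and all names are illustrative assumptions, not taken from the talk.

```python
def block_coordinate_descent(iters=100):
    """BCD on f(x, y) = (x - 1)^2 + (y - 2)^2 + x*y.

    Each step minimizes f exactly over one block while holding the other
    fixed, i.e. the BSUM upper bound for each block is f itself.
    """
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 1.0 - y / 2.0   # argmin_x f(x, y): set 2(x - 1) + y = 0
        y = 2.0 - x / 2.0   # argmin_y f(x, y): set 2(y - 2) + x = 0
    return x, y

x, y = block_coordinate_descent()
```

For this strictly convex objective the iterates contract toward the unique minimizer (0, 2); BSUM generalizes the same pattern by allowing each block update to minimize a cheap upper-bounding surrogate instead of the exact partial function.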

Stochastic and Information-theoretic Approaches to Analysis and Storage of Biological Data

Speaker Farzad Farnoud (California Institute of Technology)
Date 1:30 p.m., March 11th, 2016
Location GWC 487
Short Bio
Farzad Farnoud is a postdoctoral scholar at the California Institute of Technology. He received his MS degree in Electrical and Computer Engineering from the University of Toronto in 2008. From the University of Illinois at Urbana-Champaign, he received his MS degree in mathematics and his PhD in Electrical and Computer Engineering in 2012 and 2013, respectively. His research interests include the information-theoretic and probabilistic analysis of genomic evolutionary processes; rank aggregation and gene prioritization; and coding for flash memory and DNA storage. He is a recipient of the Robert T. Chien Memorial Award for demonstrating excellence in research in electrical engineering from the University of Illinois at Urbana-Champaign.
Abstract
By 2025, we may be generating as much as one zettabase of DNA sequencing data per year, with this growth potentially outpacing computational power and storage capacity. It is thus imperative to develop efficient analysis and storage algorithms in order to benefit from the full potential of available biological data. In this talk, I will present our work on aspects of both the analysis and the storage of such data. First, I will describe a method for estimating the rates of tandem duplication and substitution mutations in DNA tandem repeat regions, which form about 3% of the human genome and are known to cause several diseases. The proposed method, obtained through a stochastic approximation framework, presents an efficient alternative to solving the possibly NP-hard problem of combinatorially reconstructing duplication histories. We show that, compared to previous algorithms, this method achieves better accuracy while being more scalable. The mutation rate estimates can be used to approximate distances between genomic sequences for phylogenetic reconstruction, as well as for capacity computation and the design of error-correcting codes for DNA data embedding. In the context of storing biological data, I will present MetaCRAM, our compression platform for metagenomic sequence reads, which aims to address challenges arising from the growing size of metagenomic datasets by integrating taxonomy identification, alignment, and source coding. By testing MetaCRAM on a variety of metagenomic datasets, we show that it reduces file sizes by more than 87%, outperforming standard tools.

Minimizing Latency in Cloud Based Systems: Replication Over Parallel Servers

Speaker Yin Sun (Ohio State University)
Date 3:30 p.m., April 7th, 2016
Location GWC 487
Short Bio
Yin Sun received his B.S. and Ph.D. degrees from Tsinghua University, Beijing, China, in 2006 and 2011, respectively, both in Electrical Engineering. He received the Excellent Doctoral Dissertation Award and the Excellent Bachelor's Thesis Award of Tsinghua University, as well as many scholarships. Since 2014, Yin Sun has been a research associate in the Department of Electrical and Computer Engineering at the Ohio State University, where he was a Postdoctoral Fellow from 2011 to 2014. His research interests include the fundamental limits in the design, control, and performance of information and computer systems, with applications to large-scale web services, cyber-physical systems, and communication networks. A paper he co-authored received the Best Student Paper Award at IEEE WiOpt 2013.
Abstract
We are in the midst of a major data revolution. The total data generated by humans from the dawn of civilization until the turn of the new millennium is now being generated every two days. Driven by a wide range of data-intensive devices and applications, this growth is expected to continue its astonishing march and fuel the development of new and larger data centers. In order to exploit the low-cost services offered by these resource-rich data centers, application developers are pushing computing and storage away from the end devices and deeper into the data centers. Hence, the end users' experience now depends on the performance of the algorithms used in data centers. In particular, providing low-latency services is critically important to the end-user experience for a wide variety of applications. Our goal has been to develop the analytical foundations and methodologies that enable cloud computing and storage solutions with low-latency services. A variety of cloud-based systems can be modeled as multi-server, multi-queue queueing systems with data locality constraints. In these systems, replication (or more sophisticated coding schemes) can be used not only to improve reliability but also to reduce latency. However, delay optimality for multi-server queueing systems has been a long-standing open problem, with limited results usually confined to asymptotic regimes. The key question is: can we design resource allocation schemes that are near-optimal in distribution for minimizing several different classes of delay metrics that are important in web and cloud-based services? In this talk, I will overview some of our recent research efforts at solving this problem, present some key design principles, and outline a set of what I believe are important open problems.

SGD and Randomized Projections Methods for Linear Systems

Speaker Deanna Needell (Claremont McKenna College)
Date 1:30 p.m., April 8th, 2016
Location GWC 487
Short Bio
Deanna Needell earned her PhD from UC Davis under the advisement of Roman Vershynin, with a dissertation entitled "Topics in Compressed Sensing." She then completed a two-year postdoc at Stanford with Emmanuel Candès and is currently an associate professor at Claremont McKenna College in Southern California. She has received awards including an IEEE Best Paper Award, an Alfred P. Sloan fellowship, and an NSF CAREER award.
Abstract
In this talk we will give a brief overview of stochastic gradient descent (SGD) and the closely related Kaczmarz method for solving linear systems, or more generally convex optimization problems. We will present new results that tie these methods together and establish the best known convergence rates for these methods under mild Lipschitz conditions. Both empirically and theoretically, the methods rely on probability distributions that dictate the order of sampling in the algorithms. It turns out that the choice of distribution can drastically change the performance of the algorithm, and the theory has only begun to explain this phenomenon.
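As a concrete illustration of how a sampling distribution enters the Kaczmarz method, the sketch below implements the randomized variant that samples each row with probability proportional to its squared norm (the scheme analyzed by Strohmer and Vershynin). The function name and the tiny test system are hypothetical, not drawn from the talk.

```python
import random

def randomized_kaczmarz(A, b, x0, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.

    Each step projects the iterate onto the solution hyperplane of one
    row, with row i sampled with probability ||a_i||^2 / ||A||_F^2.
    """
    rng = random.Random(seed)
    norms = [sum(a * a for a in row) for row in A]
    total = sum(norms)
    x = list(x0)
    for _ in range(iters):
        # Sample a row index proportionally to its squared norm.
        r, i = rng.random() * total, 0
        while r > norms[i]:
            r -= norms[i]
            i += 1
        # Project x onto the hyperplane {z : a_i . z = b_i}.
        residual = b[i] - sum(a * xi for a, xi in zip(A[i], x))
        step = residual / norms[i]
        x = [xi + step * a for xi, a in zip(x, A[i])]
    return x

A = [[2.0, 0.0], [1.0, 3.0]]
b = [2.0, 7.0]                      # exact solution is x = [1, 2]
x = randomized_kaczmarz(A, b, [0.0, 0.0])
```

Swapping in a different sampling rule, say uniform over rows, changes only the few lines that draw the index, which is exactly the design choice whose effect on convergence the abstract highlights.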