ACM SIGMETRICS 2023

Orlando, Florida, USA

June 19-22, 2023

Quantum computing has the potential to speed up many scientific and optimization applications, including scientific simulations and machine learning tasks. Quantum computing exhibits properties, including superposition, entanglement, interference, and reversibility, that set it apart from classical computing and enable it to tackle problems that are intractable for classical computers. However, there are many challenges to the adoption of this technology in the current noisy intermediate-scale quantum (NISQ) era of quantum computing, which is characterized by high levels of quantum hardware noise and relatively small quantum computers. Nonetheless, the current technology can still be used in innovative ways, including for machine-learning problems. The main goal of this tutorial is to introduce the emerging and rapidly advancing field of quantum machine learning (QML) in the NISQ era. The tutorial will specifically focus on how today’s quantum computers can be used to execute machine-learning tasks.

We will first motivate the use of quantum computing for optimization and machine learning and discuss the underlying challenges of the technology. We will also introduce the mathematical fundamentals relevant to understanding quantum computing. Attendees will then be introduced to quantum programs and parameterized variational quantum circuits. The tutorial will also cover the fundamentals of machine learning and then proceed to introduce the concepts of quantum machine learning and the hybrid quantum-classical approach to solving problems. We will focus on the problem of image classification and demonstrate how it can be implemented using small-scale quantum programs to run inference tasks on current quantum computers. The tutorial will be self-contained, introducing the relevant background on mathematical fundamentals, quantum programs, variational quantum circuits, machine learning, quantum machine learning, and hybrid quantum-classical design. At the completion of the tutorial, attendees will have a strong grasp of the fundamental concepts of quantum computing and quantum machine learning. Attendees will also be able to program an image classifier that can run on real quantum computers.
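As a toy illustration of the hybrid quantum-classical loop described above, the sketch below simulates a single-qubit parameterized circuit in plain NumPy rather than running a real image classifier on hardware: the data, the circuit shape, and the single trainable angle are all made-up assumptions for illustration. A classical optimizer updates the circuit parameter using the parameter-shift rule, which evaluates the circuit at two shifted angles to obtain an exact gradient of the measured probability.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Encode feature x as a rotation, apply the trainable rotation,
    and return the probability of measuring |1> (the class score)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # circuit starts in |0>
    return state[1] ** 2

def grad_theta(xs, ys, theta):
    """MSE gradient: parameter-shift rule for the circuit output,
    chain rule for the squared error."""
    g = 0.0
    for x, y in zip(xs, ys):
        dp = (predict(x, theta + np.pi / 2) - predict(x, theta - np.pi / 2)) / 2
        g += 2 * (predict(x, theta) - y) * dp
    return g / len(xs)

# Toy binary data: small feature angles are class 0, large ones class 1.
xs = np.array([0.1, 0.2, 2.9, 3.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])

theta = -2.5  # deliberately bad starting angle
for _ in range(300):
    theta -= 0.5 * grad_theta(xs, ys, theta)  # classical update step

preds = [round(predict(x, theta)) for x in xs]  # threshold at 0.5
```

After training, the circuit separates the two classes, mirroring in miniature how variational circuits on real devices delegate parameter updates to a classical optimizer while the quantum circuit only evaluates predictions.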

**Tirthak Patel** is an Assistant Professor in the Department of Computer Science at Rice University, conducting systems-level research at the intersection of quantum computing and high-performance computing (HPC). His research explores the trade-offs among factors affecting reliability, performance, and efficiency, in recognition of which he has received the ACM-IEEE CS George Michael Memorial HPC Fellowship, the NSERC Alexander Graham Bell Canada Graduate Scholarship (CGS D-3), and the Northeastern University Outstanding Graduate Student in Research award. He received his Ph.D. in Computer Engineering from Northeastern University.

**Daniel Silver** is a Ph.D. Candidate at Northeastern University focusing on integrating the principles of machine learning with the emerging field of quantum computing. Before starting his Ph.D., Daniel received his B.Sc. in Computer Engineering and M.Sc. in Machine Learning from Northeastern University. He has published his research at top-tier conference venues such as AAAI, SC, ISCA, and DATE.

**Devesh Tiwari** is an educator and researcher at Northeastern University. His group, the Goodwill Computing Lab, focuses on innovating system software solutions for making HPC and quantum computing systems more efficient. Before joining Northeastern, Professor Tiwari was a staff scientist at the United States Department of Energy (DOE) Oak Ridge National Laboratory. For his teaching and mentoring efforts, he was named Professor of the Year by the Northeastern University IEEE student chapter. Professor Tiwari was recognized with the TPDS Editorial Excellence Award for his exceptional contributions to the TPDS journal as an editor. He was invited to serve as the technical program co-chair of ACM HPDC and IEEE IPDPS, the flagship conferences in the HPC area, and currently serves as the steering committee co-chair of the ACM HPDC conference.

For the last 50 years, product-forms and time-reversibility have been important instruments for the stationary analysis of stochastic models whose underlying processes are Markov chains. Research on product-form models remains vibrant in the SIGMETRICS and Performance communities.

Since the seminal studies in these fields, the methodology for product-form analysis has changed, especially thanks to the introduction of the Reversed Compound Agent Theorem. In particular, this result and its extensions enabled a constructive methodology for deriving product-forms, different from the traditional approach based on an educated guess of the stationary distribution followed by a proof that verifies the local or global balance equations.

Product-forms are closely related to time-reversibility theory, where new results have also recently been derived. These results consider a permutation of states in the statistical equivalence between the forward and reversed processes.

In this tutorial, we review these results, take a new look at well-known product-forms, and develop a methodology for deriving new ones in a simple way, avoiding heavy algebraic computations. In particular, in studying product-forms, we will extend the analysis beyond the traditional pairwise synchronization, taking into account models whose components' interactions depend on, and may affect, the state of more than one component.
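As a minimal illustration of the traditional guess-and-verify approach mentioned above, the NumPy sketch below (the arrival and service rates are illustrative choices, not from the tutorial) solves the global balance equations of a truncated M/M/1 queue, confirms the educated geometric guess for the stationary distribution, and then checks detailed balance, i.e., the time-reversibility of the chain.

```python
import numpy as np

# Truncated M/M/1 queue: a birth-death chain with arrival rate lam,
# service rate mu, and at most N jobs (rates are illustrative).
lam, mu, N = 1.0, 2.0, 10
Q = np.zeros((N + 1, N + 1))
for n in range(N):
    Q[n, n + 1] = lam   # arrival
    Q[n + 1, n] = mu    # service completion
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve the global balance equations pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# The educated guess: a truncated geometric distribution pi_n ~ rho^n.
rho = lam / mu
guess = rho ** np.arange(N + 1)
guess /= guess.sum()
assert np.allclose(pi, guess)

# Detailed balance pi_i q_ij = pi_j q_ji holds: the chain is reversible.
for n in range(N):
    assert np.isclose(pi[n] * Q[n, n + 1], pi[n + 1] * Q[n + 1, n])
```

For multi-way synchronizations of the kind covered in the tutorial, this brute-force verification quickly becomes heavy, which is exactly the motivation for the constructive methodology.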

**Andrea Marin** is an Associate Professor of Computer Science at the University Ca' Foscari of Venice, Italy. He works in the field of performance modeling with queueing networks, stochastic Petri nets, and Markovian process algebras. With P. Harrison, he has proposed an extension of the Reversed Compound Agent Theorem to study multi-way synchronizations and, with S. Rossi, he has extended the theory of dynamic reversibility. He has worked on the theory of product-form from both the methodological and application points of view; in particular, he has used these results to study problems in telecommunication and distributed systems. He has given tutorials on product-form solutions for stochastic models with pairwise synchronization at IFIP Performance 2010, ACM/ICPE 2011, and QEST 2010. For the last decade, he has been teaching the course *Software Performance and Scalability* in the Master's Programme in Computer Science at the University of Venice.

As a paradigm for sequential decision making in unknown environments, reinforcement learning (RL) has received a flurry of attention in recent years. However, the explosion of model complexity in emerging applications and the presence of nonconvexity exacerbate the challenge of achieving efficient RL in sample-starved situations, where data collection is expensive, time-consuming, or even high-stakes (e.g., in clinical trials, autonomous systems, and online advertising). How to understand and enhance the sample and computational efficiencies of RL algorithms is thus of great interest and urgently needed. In this tutorial, we aim to present a coherent framework that covers important algorithmic and theoretical developments in RL, highlighting the connections between new ideas and classical topics. Employing Markov Decision Processes as the central mathematical model, we introduce several distinctive RL scenarios (i.e., online RL, offline RL, and multi-agent RL), and present three mainstream RL paradigms (i.e., the model-based approach, the model-free approach, and policy optimization). Our discussions gravitate around the issues of sample complexity, computational efficiency, function approximation, as well as algorithm-dependent and information-theoretic lower bounds in the non-asymptotic regime. We will systematically introduce several effective algorithmic ideas (e.g., stochastic approximation, variance reduction, optimism/pessimism in the face of uncertainty) that permeate the design of efficient RL algorithms.

**Dr. Yuejie Chi** is a Professor in the Department of Electrical and Computer Engineering, and a faculty affiliate with the Machine Learning Department and CyLab at Carnegie Mellon University, where she held the Robert E. Doherty Early Career Development Professorship from 2018 to 2020. She received her Ph.D. and M.A. from Princeton University, and B. Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning, and inverse problems, with applications in sensing systems, broadly defined. Among others, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE), the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing, and early career awards from NSF, AFOSR, and ONR. She was named an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures. She was named a Goldsmith Lecturer by the IEEE Information Theory Society in 2021, and a Distinguished Lecturer by the IEEE Signal Processing Society for 2022-2023. She served as a Program Co-Chair for the 2022 Conference on Machine Learning Systems (MLSys) and is currently a board member of MLSys (2022-2025). She serves (or served) as an Associate Editor for IEEE Trans. on Information Theory, IEEE Trans. on Signal Processing, IEEE Trans. on Pattern Analysis and Machine Intelligence, Information and Inference: A Journal of the IMA, and SIAM Journal on Mathematics of Data Science, as well as a guest editor for Proceedings of the IEEE.

**Dr. Yuxin Chen** is currently an Associate Professor of Statistics and Data Science and of Electrical and Systems Engineering at the University of Pennsylvania. Before joining UPenn, he was an Assistant Professor of Electrical and Computer Engineering at Princeton University. He completed his Ph.D. in Electrical Engineering at Stanford University under the supervision of Andrea Goldsmith, and was also a postdoctoral scholar in the Stanford Statistics Department under the supervision of Emmanuel Candes. His current research interests include high-dimensional statistics, nonconvex optimization, reinforcement learning, information theory, and their applications to power electronics and computational biology. He has received the Alfred P. Sloan Research Fellowship, the ICCM best paper award (gold medal), the Google Research Scholar Award, the AFOSR Young Investigator Award, the ARO Young Investigator Award, the Princeton Graduate Mentoring Award, and the Princeton SEAS junior faculty award, and was selected as a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization.

**Dr. Yuting Wei** is currently an Assistant Professor in the Department of Statistics and Data Science at the Wharton School, University of Pennsylvania. Prior to that, Yuting spent two years at Carnegie Mellon University as an Assistant Professor of Statistics and Data Science, and one year in the Department of Statistics at Stanford University as a Stein Fellow. She received her Ph.D. in Statistics from the University of California, Berkeley, under the supervision of Martin Wainwright and Aditya Guntuboyina, and her Bachelor of Science from Peking University with honors. She was the recipient of the 2022 NSF CAREER Award, an honorable mention for the 2023 Bernoulli Society's New Researcher Award, and the 2018 Erich L. Lehmann Citation from the Berkeley statistics department for her Ph.D. dissertation in theoretical statistics. Her research interests include high-dimensional statistics, non-parametric statistics, statistical machine learning, and reinforcement learning.