University of Michigan Central Campus - venue for ACM SIGMETRICS 2026

ACM SIGMETRICS 2026

Ann Arbor, Michigan, USA
June 8-12, 2026

Keynotes

Vishal Misra

Columbia University

Small Models of Large Models: What Transformers Compute, and Why Modeling Found It First

Abstract: Three questions are loose in our community right now. Are the methods we built our careers on still useful in the age of scale? Is there anything left to say about a system whose behavior is studied by benchmark? Does modeling, in the SIGMETRICS sense, still have purchase on the most important computational artifact of our time? This talk argues that the answer to all three is the same.

Transformers are not approximating Bayesian inference. They are implementing it, and the implementation has a geometry we can characterize, predict, and stress-test. I will take you through three results that together establish this. In small wind tunnels where the Bayes posterior is analytically computable, transformers recover it to machine precision while capacity-matched MLPs fail by orders of magnitude. The mechanism is not exotic. Cross-entropy gradients decompose in a way that forces the structure, through routing and value specialization. And the same geometric signatures survive the jump to production models, modulated in interpretable ways by architecture, data, and depth.

What this picture says is that the boundary between what scale can and cannot do is not mysterious. It is analyzable. Transformers compile position-local inference circuits when statistics are stationary, and they fail to construct reusable programs beyond the training horizon. The community that built queueing theory, identifiability, and controlled experimentation is the community whose tools this science is missing. I will close by making that case.
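For readers who want the textbook fact this reading rests on, here is a minimal sketch (an illustration added for this page, not material from the talk): if sequences are generated by first drawing a latent parameter $\theta$ from a prior $\pi$ and then sampling tokens from $p_\theta$, the expected next-token cross-entropy

$$ \mathcal{L}(q) \;=\; \mathbb{E}\left[-\log q(x_{t+1}\mid x_{1:t})\right] $$

is minimized over all predictors $q$ by the Bayesian posterior predictive

$$ q^{\star}(x_{t+1}\mid x_{1:t}) \;=\; \int p_{\theta}(x_{t+1}\mid x_{1:t})\,\pi(\theta\mid x_{1:t})\,d\theta. $$

In the "wind tunnel" settings the abstract describes, this posterior has a closed form, so a trained transformer's output distribution can be checked against $q^{\star}$ token by token; the claim is that transformers match it while capacity-matched non-attention models do not.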

Bio: Vishal Misra is the RKS Family Professor of Computer Science and the Vice Dean for Computing and AI in the School of Engineering at Columbia University. He is an ACM and IEEE Fellow, and his research emphasis is on mathematical modeling of systems, bridging the gap between practice and analysis. As a graduate student, he co-founded CricInfo, which was acquired by ESPN in 2007. In 2021 he developed one of the world's first commercial applications built on top of GPT-3 for ESPNCricinfo, and has since been modeling the behavior of LLMs. He also played an active part in the Net Neutrality regulation process in India, where his definition of Net Neutrality was adopted by both the citizens' movement and the regulators. He has been awarded a Distinguished Alumnus Award by IIT Bombay (2019) and a Distinguished Young Alumnus Award by the UMass Amherst College of Engineering (2014).


Steve Teig

Amazon

Keynote title and abstract forthcoming

Bio: Steve Teig is an American technology executive, entrepreneur, and computer engineer. He earned a B.S.E. in electrical engineering and computer science from Princeton University in 1982, co-founded Simplex in 1998, later served as chief scientist at Cadence after its acquisition, and went on to co-found Tabula as CTO. He subsequently served as CTO of Tessera Technologies, which became Xperi, and then became CEO of Perceive, a semiconductor company focused on machine learning hardware for mobile devices. After Amazon acquired Perceive in 2024, he became a Vice President and Distinguished Engineer at Amazon. He holds more than 390 patents.


Adam Tauman Kalai

OpenAI

Evaluating large language models for accuracy incentivizes hallucinations

Abstract: Large language models sometimes produce confident, plausible falsehoods (“hallucinations”), limiting their reliability. Prior work has offered numerous explanations and effective mitigations, such as retrieval and tool use, consistency-based self-verification, and reinforcement learning from human feedback. Nonetheless, the problem persists even in state-of-the-art language models. Here we show how next-word prediction and accuracy-based evaluations inadvertently reward unwarranted guessing. Initially, next-word pretraining creates statistical pressure toward hallucination even with idealized error-free data: using learning theory, we show that facts lacking repeated support in training data, such as one-off details, yield unavoidable errors, while recurring regularities, such as grammar, do not. Subsequent training stages aim to correct such errors. However, dominant headline metrics like accuracy systematically reward guessing over admitting uncertainty. To align incentives, we suggest two refinements of the classic approach of adding error penalties to evaluations to control abstention. First, we propose “open-rubric” evaluations that explicitly state how errors are penalized, if at all, which test whether a model modulates its abstentions to match the stated stakes while optimizing accuracy. Second, since hallucination-specific benchmarks rarely make leaderboards, we suggest using open-rubric variants of existing evaluations to reverse their guessing incentives. Reframing hallucination as an incentive problem opens a practical path toward more reliable language models.

Joint work with: Santosh Vempala, Ofir Nachum, and Edwin Zhang.
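To make the incentive point concrete, here is a small worked example under an illustrative rubric (the penalty scheme here is an assumption for illustration, not necessarily the one used in the talk): under plain accuracy scoring, answering with probability $p$ of being correct has expected score $p > 0$, while abstaining scores $0$, so guessing is weakly better no matter how small $p$ is. If the rubric instead states that a correct answer earns $1$, an abstention earns $0$, and an error costs $\tfrac{t}{1-t}$, then answering beats abstaining exactly when

$$ p \cdot 1 \;-\; (1-p)\,\frac{t}{1-t} \;>\; 0 \quad\Longleftrightarrow\quad p > t, $$

so a model is rewarded for declining to answer whenever its confidence falls below the stated threshold $t$, which is the behavior an open-rubric evaluation is meant to elicit.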

Bio: Adam Tauman Kalai is a Research Scientist at OpenAI whose work spans AI safety and ethics, algorithms, fairness, AI theory, game theory, and crowdsourcing. He earned his BA from Harvard University and his PhD from Carnegie Mellon University, and has held research positions across academia and industry, including at MIT, TTIC, Georgia Tech, and Microsoft Research New England. His work has received numerous honors, including the Majulook Prize.


Additional keynote speakers may be announced as the program is finalized.