About this series

The Quarterly Theory Workshop brings together three theoretical computer science experts to present their perspectives and research on a common theme.  Chicago-area researchers with an interest in theoretical computer science are invited to attend.  The technical program runs in the morning and includes coffee and lunch.  The afternoon of the workshop allows for continued discussion between attendees and the speakers.


This Quarterly Theory Workshop is on the theme of Algorithmic Game Theory and Data Science.  The theme revisits algorithmic game theory in the context of a data-rich world: mechanisms are designed from samples, and games are evaluated with respect to data on the actions players took.  The theory describes how developers of computer systems in strategic environments can understand their systems from the data those systems generate.  The speakers are Avrim Blum, Tim Roughgarden, and Eva Tardos.  The talks will be video recorded.


  • Date: Monday, March 7, 2016.
  • Location: ITW Lecture Hall (1st floor), Ford Motor Company Engineering Design Center (map), Northwestern U, 2133 Sheridan Rd, Evanston, IL 60208.
  • Transit: Noyes St. Purple Line (map).
  • Parking: Validation for North Campus Parking Garage (map) available at workshop.
  • Registration: none necessary; bring your own name badge from a past conference.


  • 9:00 – 9:25: Coffee
  • 9:25 – 9:30: Introduction
  • 9:30 – 10:15: Tim Roughgarden: The Sample Complexity of Learning Near-Optimal Auctions and Heuristics (video)
  • 10:15 – 11:00: Avrim Blum: Reconstructing Preferences from Opaque Transactions (video)
  • 11:00 – 11:30: Coffee
  • 11:30 – 12:15: Eva Tardos: Econometrics for Learning Agents (video)
  • 12:15 – 1:20: Lunch


Title: The Sample Complexity of Learning Near-Optimal Auctions and Heuristics
Speaker: Tim Roughgarden

Video link

Abstract: We first explain how to use concepts from learning theory to make optimal auction theory more operational, replacing the common prior assumption with a data-driven approach. We then expand on this idea, taking a PAC learning approach to the selection of application-specific algorithms. Our work shows that dimension notions from statistical learning theory, historically used to measure the complexity of classes of binary- and real-valued functions, are relevant in a much broader algorithmic context.

Includes joint work with Rishi Gupta and Jamie Morgenstern.
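To make the data-driven idea concrete, here is a minimal sketch (not code from the talk) of choosing a reserve price from samples rather than from a known prior. The single-bidder setting, the uniform value distribution, and all names are illustrative assumptions.

```python
import random

def empirical_reserve(samples):
    """Choose the reserve price maximizing average revenue on the samples.

    For a single bidder with value v, posting reserve r earns r if v >= r,
    else 0.  An optimal empirical reserve is always one of the samples.
    """
    def avg_revenue(r):
        return sum(r for v in samples if v >= r) / len(samples)
    return max(samples, key=avg_revenue)

random.seed(0)
# Hypothetical value distribution: uniform on [0, 1], whose true monopoly
# reserve is 0.5; with enough samples the empirical reserve lands nearby.
samples = [random.random() for _ in range(2000)]
r = empirical_reserve(samples)
print(round(r, 2))
```

The sample-complexity question the talk addresses is how many samples are needed before the empirical maximizer above is guaranteed to be near-optimal.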

Title: Reconstructing Preferences from Opaque Transactions
Speaker: Avrim Blum

Abstract: There has been significant work on learning about utility functions of single agents from observing their behavior in isolation.  In this talk, I will discuss the problem of learning about utilities of a collection of agents, when the only information available is some kind of overall outcome of a joint interaction.

For example, consider an auction of a single item where n agents each draw values from their own personal probability distributions D_i, and the only information that can be observed is the identity of the winner (highest bidder).  From repeated observations, plus the ability to enter the auction (and win or lose) yourself, can you reconstruct the relevant parts of the individual distributions?  Or consider a setting with multiple items where agents have combinatorial preferences, and where a seller is running a mechanism that you do not know.  From observing the results of a sequence of these interactions, can you learn both the preferences of the buyers and the mechanism of the seller?  In this talk I will discuss algorithms in the context of both of these problems.  In the process we will see connections to decision-list learning in learning theory and Kaplan-Meier estimators in medical statistics.

This work is joint with Yishay Mansour and Jamie Morgenstern.
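As a toy illustration of the observation model in the first example (not code from the talk): only the winner's identity is visible each round, but by entering the auction yourself at a fixed bid you can estimate the distribution of the highest competing value. The two-opponent uniform setup and all names here are hypothetical.

```python
import random

def win_rate(bid, opponent_dists, trials, rng):
    """Enter the auction `trials` times at a fixed bid and record how often
    we win; only the winner's identity is observed each round."""
    wins = 0
    for _ in range(trials):
        others = [draw(rng) for draw in opponent_dists]
        if bid > max(others):
            wins += 1
    return wins / trials

rng = random.Random(1)
# Hypothetical opponents: two bidders with values uniform on [0, 1].
opponents = [lambda r: r.random(), lambda r: r.random()]
# The empirical win rate at bid b estimates P(max competing value < b),
# which for two independent U[0,1] opponents is b^2.
for b in (0.3, 0.6, 0.9):
    print(b, round(win_rate(b, opponents, 5000, rng), 2))
```

Sweeping the bid reconstructs the CDF of the maximum competing value; recovering the individual distributions D_i from such aggregate observations is the harder problem the talk addresses.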

Title: Econometrics for Learning Agents
Speaker: Eva Tardos

Abstract: Classical work in economics on inferring agent values from data relies on the assumption that all participant strategies are best responses to the observed play of other players, i.e. they constitute a Nash equilibrium. In this paper, we show how to perform inference relying on a weaker assumption instead: assuming that players are using some form of no-regret learning. Learning outcomes emerged in recent years as an attractive alternative to Nash equilibrium in analyzing game outcomes, modeling players who haven’t reached a stable equilibrium, but rather use algorithmic learning, aiming to learn the best way to play from previous observations. In this paper we show how to infer values of players who use algorithmic learning strategies. Such inference is an important first step before we move to testing any learning theoretic behavioral model on auction data. We apply our techniques to a dataset from Microsoft’s sponsored search ad auction system.

Joint work with Denis Nekipelov and Vasilis Syrgkanis.
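A rough sketch of the inference idea, under a strong simplification (a repeated first-price auction rather than a sponsored-search auction; all names and data are hypothetical): a candidate value is consistent with no-regret learning if the observed bid sequence has small regret when utilities are evaluated at that value.

```python
def regret(value, bids, prices):
    """Regret of an observed bid sequence under a candidate value, in a
    repeated first-price auction against observed competing prices."""
    def util(b, p):
        return (value - b) if b > p else 0.0
    actual = sum(util(b, p) for b, p in zip(bids, prices)) / len(bids)
    # Best fixed bid in hindsight: it suffices to check bidding 0 and
    # bidding just above each observed competing price.
    candidates = [0.0] + [p + 1e-9 for p in prices]
    best = max(sum(util(c, p) for p in prices) / len(prices)
               for c in candidates)
    return best - actual

# Hypothetical observed data: competing prices and the bidder's bids.
prices = [0.2, 0.5, 0.3, 0.6, 0.4]
bids = [0.55, 0.55, 0.55, 0.55, 0.55]
# Candidate values whose regret is small are consistent with the
# hypothesis that the bidder is running a no-regret learning algorithm.
for v in (0.55, 0.8, 1.0):
    print(v, round(regret(v, bids, prices), 3))
```

The set of values with regret at most epsilon plays the role that the best-response condition plays in classical equilibrium-based inference.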