About this series
The Quarterly Theory Workshop brings in three theoretical computer science experts to present their perspectives and research on a common theme. Chicago-area researchers with an interest in theoretical computer science are invited to attend. The technical program takes place in the morning and includes coffee and lunch; the afternoon allows for continued discussion between attendees and the speakers.
Synopsis
This Quarterly Theory Workshop is on the theme of Data Science for Online Markets. This theme broadly considers questions in the design and analysis of online marketplaces in a data-rich world. The theme is interdisciplinary, drawing on connections between mechanism design, machine learning theory, and econometrics. The speakers are Jacob Abernethy, Constantinos Daskalakis, and Denis Nekipelov. The talks will be video recorded, and the videos will be posted and linked here.
Logistics
- Date: Thursday, January 25, 2018.
- Location: Kellogg Global Hub 4101 (map), Northwestern University, 2211 Campus Dr, Evanston, IL 60208.
- Transit: Noyes St. Purple Line (map).
- Parking: Validation for North Campus Parking Garage (map) available at workshop.
- Registration: none necessary; bring your own name badge from a past conference.
Schedule
- 9:00 – 9:25: Coffee
- 9:25 – 9:30: Introduction
- 9:30 – 10:15: Constantinos Daskalakis: Learning Multi-Item Auctions with (or without) Samples
- 10:15 – 11:00: Denis Nekipelov: Measuring the Effect of Online Advertising from Observational Data
- 11:00 – 11:30: Coffee
- 11:30 – 12:15: Jacob Abernethy: Building Algorithms by Playing Games
- 12:15 – 1:20: Lunch Provided
- 2:00 – 3:00: Ph.D. students meeting with Prof. Daskalakis (Ford 3-340)
- 3:00 – 4:00: Ph.D. students meeting with Prof. Nekipelov (Ford 3-340)
- 4:00 – 5:00: Ph.D. students meeting with Prof. Abernethy (Ford 3-340)
Abstracts
Title: Learning Multi-Item Auctions with (or without) Samples
Speaker: Constantinos Daskalakis
Abstract: A common assumption in auction theory is common knowledge of a prior distribution over bidder valuations. This is an unnatural assumption in modern applications of auction theory in online markets. As such, a line of recent work has studied whether knowledge of a prior can be lifted, studying in particular whether revenue-optimal auctions can be “learned” from sample-access to the valuation distributions. This work has encountered analytical and information-theoretic barriers in the multi-item domain. We overcome these barriers through a combination of auction theory and computational learning theory techniques.
Joint work with Yang Cai.
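The multi-item results in this talk go well beyond a short snippet, but the single-item version of the underlying question, learning a good reserve price from samples alone, is easy to illustrate. The sketch below is a simplified illustration of empirical revenue maximization, not the method from the talk; the lognormal valuation distribution and sample size are hypothetical stand-ins for the unknown prior.

```python
import numpy as np

def empirical_best_reserve(samples):
    """Pick the posted price maximizing revenue on a sample of valuations.

    For a single bidder and a posted price r, empirical revenue on samples
    v_1..v_n is r * (#{v_i >= r} / n); the maximum over r is attained at
    one of the sample points, so it suffices to check those.
    """
    samples = np.sort(np.asarray(samples))
    n = len(samples)
    # Revenue of posting price samples[i]: price times fraction of values >= it.
    revenues = samples * (n - np.arange(n)) / n
    best = np.argmax(revenues)
    return samples[best], revenues[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical valuation distribution; in the talk's setting the
    # distribution is unknown and only samples like these are available.
    values = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
    reserve, revenue = empirical_best_reserve(values)
    print(f"empirical optimal reserve: {reserve:.3f}, revenue: {revenue:.3f}")
```

With enough samples, the empirically optimal price concentrates around the optimal price for the true distribution; the talk addresses how far this kind of sample-based guarantee can be pushed in the multi-item setting.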
Title: Measuring the Effect of Online Advertising from Observational Data
Speaker: Denis Nekipelov
Abstract: In this paper we analyze a dataset from online advertising on Bing.com, a search engine by Microsoft. The dataset contains a large number of instances of ad displays for “branded” search phrases (i.e. searches for specific brands) and consumer clicks on the ads and on the “organic” search results. Our main focus is the measurement of the lift effect generated by the ad display: we want to analyze whether the presence of the brand ad among the advertisements on the page increases the overall probability of a consumer click on that page. A distinctive feature of online advertising is that ad displays are highly targeted. First, advertisers prefer to advertise to specific consumers. Second, the advertising platform evaluates the (unconditional) probability of each consumer clicking on a given ad, which leads to a higher probability of displaying the ads that have a higher a priori estimated probability of click. As a result, measuring the causal effect of the ad display on the page clicks by a given consumer from typical observational data is difficult. We utilize the large scale of our dataset to design an estimator that uses extreme quantiles of the consumer distribution to estimate the true causal effect of an ad display. Using simulations, we demonstrate the properties of our estimator as well as the preferred choices of the tuning parameters. To validate our estimates, we use a set of large scale randomized controlled experiments that Microsoft has run on its advertising platform. Our non-experimental estimates turn out to be very close to the results of the randomized controlled trials across a large set of advertisers on Bing.com.
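The core measurement difficulty in this abstract, that targeted display confounds naive observational comparisons, is easy to see in a toy simulation. The sketch below is not the extreme-quantile estimator from the talk; it only shows how a naive comparison of click rates between consumers who did and did not see the ad overstates the true lift when the platform targets high-propensity consumers. All numbers (true lift, targeting rule, baseline click model) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Unobserved per-consumer click propensity (the confounder): the platform
# targets high-propensity consumers, and they also click more with or
# without the ad.
u = rng.uniform(0.0, 1.0, size=n)

true_lift = 0.05  # additive causal effect of showing the ad on click probability

# Targeted display: the ad is shown more often to high-propensity consumers.
shown = rng.uniform(size=n) < 0.2 + 0.6 * u

# Click probability: baseline depends on u, plus the causal lift if shown.
click_prob = 0.1 + 0.3 * u + true_lift * shown
click = rng.uniform(size=n) < click_prob

naive_lift = click[shown].mean() - click[~shown].mean()
print(f"true lift:  {true_lift:.3f}")
print(f"naive lift: {naive_lift:.3f}  (overstated because display is targeted)")
```

The naive estimate mixes the causal lift with the selection effect of targeting; the estimator described in the talk uses extreme quantiles of the consumer distribution, validated against randomized experiments, to disentangle the two.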
Title: Building Algorithms by Playing Games
Speaker: Jacob Abernethy
Abstract: A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, watch how the opponents compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template for producing new and interesting algorithms. I will describe this framework, show how it applies in several scenarios, and discuss recent work that draws a connection to the Frank-Wolfe method of optimization.
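A minimal sketch of the template described in the abstract, under standard assumptions: the row player of a zero-sum matrix game runs multiplicative weights (a no-regret learning algorithm), the column player best-responds each round, and the time-averaged row strategy approximates a minimax strategy. The payoff matrix (rock-paper-scissors) and the step size are just sanity-check choices, not anything from the talk.

```python
import numpy as np

def solve_zero_sum(A, rounds=2000, eta=0.1):
    """Approximate a minimax strategy for the row player of loss matrix A
    (row player pays A[i, j]) via multiplicative weights vs. best response."""
    m, _ = A.shape
    weights = np.ones(m)
    avg_row = np.zeros(m)
    for _ in range(rounds):
        x = weights / weights.sum()        # row player's current mixed strategy
        losses = x @ A                     # expected loss against each column
        j = np.argmax(losses)              # column player best-responds
        weights *= np.exp(-eta * A[:, j])  # multiplicative weights update
        avg_row += x
    avg_row /= rounds
    worst_case_loss = np.max(avg_row @ A)  # loss guaranteed by the average strategy
    return avg_row, worst_case_loss

if __name__ == "__main__":
    # Rock-paper-scissors as a sanity check: the minimax strategy is uniform
    # and the game value is 0.
    A = np.array([[0, 1, -1],
                  [-1, 0, 1],
                  [1, -1, 0]], dtype=float)
    x, value = solve_zero_sum(A)
    print("approximate minimax strategy:", np.round(x, 3))
    print("approximate game value:", round(value, 3))
```

The same pattern, with the players and payoffs swapped in from a given optimization problem, is the generic template the talk builds on.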