About the Series

The Special Quarter on Data Science and Online Markets will host several workshops on topics related to the special quarter. These workshops are jointly organized with local and visiting faculty in economics and are targeted to a general audience that is familiar with current research in game theory and algorithms but may not be familiar with the specific research areas of the workshops.

Synopsis

Identification is a fundamental concept in econometrics that ensures that the available data are sufficient to recover an economic model. In many cases, multiple values of the model parameters fit a given distribution of the data equally well; in such cases the model is referred to as partially identified. Many important models, including models of discrete games and auctions, have been found to be partially identified. Partial identification is closely tied to the idea of approximation in computer science, in that it provides universal bounds on components of an economic model, such as auction welfare or revenue.
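
To make the notion concrete, here is a minimal numerical sketch (a standard textbook example, not material from the workshop): when a binary outcome is missing for some units and no assumptions are placed on the missing values, the data only pin down an interval of possible means. All names and numbers below are illustrative.

    import numpy as np

    # Worst-case (Manski-style) bounds for E[Y] when a binary outcome Y is
    # missing for some units: every value inside the bounds is consistent
    # with the observed data, so E[Y] is partially identified.
    rng = np.random.default_rng(0)
    n = 1000
    y = rng.binomial(1, 0.6, size=n)            # outcome (only partly observed)
    observed = rng.binomial(1, 0.7, size=n)     # 1 if Y is observed for this unit

    p_obs = observed.mean()                     # P(Y observed)
    mean_obs = y[observed == 1].mean()          # E[Y | observed]

    # Lower bound: all missing outcomes equal 0; upper bound: all equal 1.
    lower = mean_obs * p_obs
    upper = mean_obs * p_obs + (1 - p_obs)
    print(f"identified set for E[Y]: [{lower:.3f}, {upper:.3f}]")
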
Econometric theory has made significant progress in the analysis of partially identified models and has proposed approaches for inference in them. This workshop will introduce basic concepts from econometrics for partially identified models and present recent advances in that literature. The speakers are Ivan Canay (Northwestern), Denis Chetverikov (UCLA), Tatiana Komarova (London School of Economics), and Andres Santos (UCLA).
The technical program of this workshop is organized by Ivan Canay and Denis Nekipelov.

Logistics

  • Date: Thursday-Friday, April 19-20, 2018.
  • Location: Kellogg Global Hub 4101 (map), Northwestern University, 2211 Campus Dr, Evanston, IL 60208.
  • Transit: Noyes St. Purple Line (map).
  • Parking: Validation for North Campus Parking Garage (map) available at workshop.
  • Registration: none necessary; bring your own name badge from a past conference.

Schedule

Day 1: April 19: Tutorial (Kellogg Global Hub 2410)

  • 11:00-12:30: Ivan Canay:
    Part I: Introduction to Partial Identification and Inference
  • 12:30-2:00: Lunch
  • 2:00-3:30: Ivan Canay:
    Part II: Introduction to Partial Identification and Inference

Day 2: April 20: Research Talks (Kellogg Global Hub 4101)

  • 9:00-9:25: Continental Breakfast
  • 9:25-9:30: Opening Remarks
  • 9:30-10:10: Andres Santos:
    Inference on Directionally Differentiable Functions
  • 10:10-10:20: Andres Santos Q/A
  • 10:20-10:40: Coffee Break
  • 10:40-11:20: Tatiana Komarova:
    Binary Choice Models with Discrete Regressors: Identification and Misspecification
  • 11:20-11:30: Tatiana Komarova Q/A
  • 11:30-12:10: Denis Chetverikov:
    Testing Many Moment Inequalities
  • 12:10-12:20: Denis Chetverikov Q/A
  • 12:20-1:30: Lunch

Titles and Abstracts

Speaker: Andres Santos (UCLA)
Title: Inference on Directionally Differentiable Functions

Abstract: This paper studies an asymptotic framework for conducting inference on parameters of the form $\phi(\theta_0)$ where $\phi$ is a known directionally differentiable function and $\theta_0$ is estimated by $\hat{\theta}_n$. In these settings, the asymptotic distribution of the plug-in estimator $\phi(\hat{\theta}_n)$ can be readily derived employing existing extensions to the Delta method. We show, however, that the “standard” bootstrap is only consistent under overly stringent conditions — in particular we establish that differentiability of $\phi$ is a necessary and sufficient condition for bootstrap consistency whenever the limiting distribution of $\hat{\theta}_n$ is Gaussian. An alternative resampling scheme is proposed which remains consistent when the bootstrap fails, and is shown to provide local size control under restrictions on the directional derivative of $\phi$. We illustrate the utility of our results by developing a test of whether a Hilbert space valued parameter belongs to a convex set — a setting that includes moment inequality problems and certain tests of shape restrictions as special cases.
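
To see why directional (but not full) differentiability matters for the bootstrap, the following simulation sketch (my own illustration under simplifying assumptions, not code from the paper) uses the kink map phi(t) = max(t, 0) at theta_0 = 0, a textbook case in which the standard nonparametric bootstrap is known to be inconsistent for the plug-in estimator.

    import numpy as np

    # phi(t) = max(t, 0) is directionally but not fully differentiable at 0.
    # With theta_0 = 0, the standard nonparametric bootstrap does not
    # reproduce the sampling distribution of sqrt(n)*(phi(theta_hat) - phi(theta_0)).
    rng = np.random.default_rng(1)
    phi = lambda t: np.maximum(t, 0.0)
    n, n_boot, n_sim = 200, 500, 500

    true_stats, boot_means = [], []
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, size=n)        # true theta_0 = E[X] = 0
        theta_hat = x.mean()
        true_stats.append(np.sqrt(n) * (phi(theta_hat) - phi(0.0)))

        # Standard nonparametric bootstrap of the same statistic.
        xb = rng.choice(x, size=(n_boot, n), replace=True)
        boot_stat = np.sqrt(n) * (phi(xb.mean(axis=1)) - phi(theta_hat))
        boot_means.append(boot_stat.mean())

    # The limiting law of the statistic is max(Z, 0) with Z standard normal
    # (mean about 0.40); the average bootstrap mean is noticeably different.
    print("mean of the statistic across simulations:", np.mean(true_stats))
    print("average bootstrap mean                  :", np.mean(boot_means))
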

Speaker: Tatiana Komarova (London School of Economics)
Title: Binary Choice Models with Discrete Regressors: Identification and Misspecification
Abstract: In semiparametric binary response models, support conditions on the regressors are required to guarantee point identification of the parameter of interest. For example, one regressor is usually assumed to have continuous support conditional on the other regressors. In some instances, such conditions have precluded the use of these models; in others, practitioners have failed to consider whether the conditions are satisfied in their data. This paper explores the inferential question in these semiparametric models when the continuous support condition is not satisfied and all regressors have discrete support. I suggest a recursive procedure that finds sharp bounds on the components of the parameter of interest and outline several applications, focusing mainly on the models under the conditional median restriction, as in Manski (1985). After deriving closed-form bounds on the components of the parameter, I show how these formulas can help analyze cases where one regressor’s support becomes increasingly dense. Furthermore, I investigate asymptotic properties of estimators of the identification set. I describe a relation between the maximum score estimation and support vector machines and also propose several approaches to address the problem of empty identification sets when a model is misspecified.
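
As a rough illustration of what an identified set looks like when all regressors are discrete (a simplified sketch of my own, not the paper's recursive procedure), the snippet below grid-searches for all parameter values consistent with the sign restrictions implied by the conditional median restriction in a tiny design with one discrete regressor.

    import numpy as np

    # Binary choice Y = 1{b0 + b1*x + u >= 0} with median(u | x) = 0, so that
    # (up to ties) P(Y = 1 | x) >= 1/2  <=>  b0 + b1*x >= 0.  With x on a
    # small discrete support, the data pin down only a set of (b0, b1) values.
    support = np.array([-1.0, 0.0, 1.0, 2.0])          # discrete support of x
    p_y1 = np.array([0.3, 0.7, 0.7, 0.7])              # P(Y = 1 | x) at each point

    # Grid search under the scale normalization |b1| = 1.
    identified = []
    for b1 in (-1.0, 1.0):
        for b0 in np.linspace(-3, 3, 601):
            index = b0 + b1 * support
            if np.all((p_y1 >= 0.5) == (index >= 0)):
                identified.append((b0, b1))

    b0s = [b0 for b0, b1 in identified if b1 == 1.0]
    # For this design the sharp set is b0 in [0, 1) with b1 = 1.
    print(f"identified b0 values (b1 = 1): [{min(b0s):.2f}, {max(b0s):.2f}]")
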

Speaker: Denis Chetverikov (UCLA)
Title: Testing Many Moment Inequalities
Abstract: This paper considers the problem of testing many moment inequalities where the number of moment inequalities, denoted by $p$, is possibly much larger than the sample size $n$. There is a variety of economic applications where the problem of testing many moment inequalities appears; a notable example is the market structure model of Ciliberto and Tamer (2009), where $p = 2^{m+1}$ with $m$ being the number of firms. We consider the test statistic given by the maximum of $p$ Studentized (or $t$-type) statistics, and analyze various ways to compute critical values for the test statistic. Specifically, we consider critical values based upon (i) the union bound combined with a moderate deviation inequality for self-normalized sums, (ii) the multiplier and empirical bootstraps, and (iii) two-step and three-step variants of (i) and (ii) that incorporate selection of uninformative inequalities that are far from being binding and a novel selection of weakly informative inequalities that are potentially binding but do not provide first-order information. We prove validity of these methods, showing that under mild conditions they lead to tests with error in size decreasing polynomially in $n$ while allowing $p$ to be much larger than $n$; indeed $p$ can be of order $\exp(n^c)$ for some $c > 0$. Moreover, when $p$ grows with $n$, we show that all of our tests are (minimax) optimal in the sense that they are uniformly consistent against alternatives whose “distance” from the null is larger than the threshold $(2(\log p)/n)^{1/2}$, while {\em any} test can only have trivial power in the worst case when the distance is smaller than the threshold. Finally, we show validity of a test based on the block multiplier bootstrap in the case of dependent data under some general mixing conditions.
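
For readers who want a feel for the basic construction, here is a small, simplified sketch (my own code, not the authors' implementation) of the one-step version of the test: the statistic is the maximum of $p$ Studentized sample means, and its critical value comes from a Gaussian multiplier bootstrap.

    import numpy as np

    def many_moment_test(x, alpha=0.05, n_boot=1000, rng=None):
        """Test H0: E[x_j] <= 0 for all j, using the max of Studentized statistics
        with a Gaussian multiplier-bootstrap critical value.  x is an (n, p) array."""
        if rng is None:
            rng = np.random.default_rng()
        n, p = x.shape
        mean = x.mean(axis=0)
        sd = x.std(axis=0, ddof=1)
        t_stat = np.sqrt(n) * np.max(mean / sd)        # max of t-type statistics

        # Multiplier bootstrap: perturb the centered, Studentized data with N(0,1) weights.
        centered = (x - mean) / sd
        e = rng.normal(size=(n_boot, n))
        boot = np.max(e @ centered, axis=1) / np.sqrt(n)
        crit = np.quantile(boot, 1 - alpha)
        return t_stat, crit, t_stat > crit

    # Toy example: p = 200 inequalities, all satisfied, so we expect no rejection.
    rng = np.random.default_rng(2)
    x = rng.normal(loc=-0.1, scale=1.0, size=(500, 200))
    t_stat, crit, reject = many_moment_test(x, rng=rng)
    print(f"statistic {t_stat:.2f}, critical value {crit:.2f}, reject H0: {reject}")
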