About the Series

The Quarterly Theory Workshop brings together three or four theoretical computer science experts to present their perspectives and research on a common theme. Researchers in the Chicago area and the Midwest with an interest in theoretical computer science are invited to attend. The technical program is in the morning and includes coffee and lunch. The afternoon of the workshop allows for continued discussion between attendees and the speakers.

Synopsis

The focus of this workshop is junior researchers in all areas of theoretical computer science. The talks will be on the afternoon of the first day and the morning of the second day. There will be time for open discussion after lunch on the second day. The speakers for this workshop are:

  • Eric Balkanski (Harvard)
  • Arturs Backurs (TTI-C)
  • Kira Goldner (UW-Seattle)
  • Rasmus Kyng (Harvard)
  • Sepideh Mahabadi (TTI-C) 
  • Rad Niazadeh (Stanford)
  • Noah Stephens-Davidowitz (Princeton)
  • Ellen Vitercik (CMU)
  • Erik Waingarten (Columbia)

Logistics

  • Date: Thursday-Friday, Nov 15-16, 2018.
  • Location: Seeley Mudd 3514 (map), Northwestern University, 2211 Campus Dr, Evanston, IL 60208.
  • Transit: Noyes St. Purple Line (map).
  • Parking: Validation for North Campus Parking Garage (map) available at workshop.
  • Recommended Hotel: Hilton Orrington.
  • Registration: Please register (registration is not compulsory). If you have a name badge from a past conference, please bring it.

Tentative Schedule

Thursday:

  • 12:30-12:35: Opening Remarks
  • 12:35-1:15: Noah Stephens-Davidowitz
    Towards a better theoretical understanding of the practical (fine-grained) security of lattice-based cryptography
  • 1:20-2:00: Erik Waingarten
    Approximate Nearest Neighbors via Non-Linear Spectral Gaps
  • 2:05-2:35: Coffee Break
  • 2:35-3:15: Sepideh Mahabadi
    Composable Core-sets for Determinant Maximization Problems via Spectral Spanners
  • 3:20-4:00: Rasmus Kyng
    Matrix Martingales in Randomized Numerical Linear Algebra
  • 4:05-4:45: Kira Goldner
    Interdimensional Mechanism Design
  • 5:00-6:30: Cocktail Reception

Friday:

  • 8:30-9:00: Continental Breakfast
  • 9:00-9:05: Opening Remarks
  • 9:05-9:45: Rad Niazadeh
    Optimal Algorithms for Continuous Non-monotone Submodular Maximization
  • 9:50-10:30: Eric Balkanski
    The Adaptive Complexity of Maximizing a Submodular Function
  • 10:35-11:05: Coffee Break
  • 11:05-11:45: Arturs Backurs
    Efficient Density Evaluation for Smooth Kernels
  • 11:50-12:30: Ellen Vitercik
    Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization
  • 12:35-1:30: Lunch

Titles and Abstracts

Speaker: Arturs Backurs
Title: Efficient Density Evaluation for Smooth Kernels

Given a kernel function $k$ and a dataset $P$ of points from $R^d$, the kernel density function of $P$ at a point $x \in R^d$ is equal to $KDF_P(x) := \frac{1}{|P|} \sum_{y \in P} k(x,y)$. Kernel density evaluation has numerous applications in scientific computing, statistics, computer vision, machine learning, and other fields. In all of them it is necessary to evaluate $KDF_P(x)$ quickly, often for many inputs $x$ and large point sets $P$.
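For reference, here is a minimal brute-force sketch of the definition in Python (an illustration, not code from the paper): it evaluates $KDF_P(x)$ exactly in $O(dn)$ time per query, the baseline that the data structures below improve upon. The (generalized) t-student kernel used here is one of the smooth kernels the abstract mentions.

    import numpy as np

    def t_student_kernel(x, y, p=2):
        # (Generalized) t-student kernel: k(x, y) = 1 / (1 + ||x - y||^p).
        return 1.0 / (1.0 + np.linalg.norm(x - y) ** p)

    def kdf(P, x, kernel=t_student_kernel):
        # KDF_P(x) = (1/|P|) * sum_{y in P} k(x, y), computed naively.
        return sum(kernel(x, y) for y in P) / len(P)

    # Example: 1000 random points in R^10, queried at the origin.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(1000, 10))
    print(kdf(P, np.zeros(10)))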

In this paper we present a collection of algorithms for efficient KDF evaluation under the assumption that the kernel $k$ is “smooth”, i.e., the value changes at most polynomially with the distance. This assumption is satisfied by several well-studied kernels, including the (generalized) t-student kernel and rational quadratic kernel. For smooth kernels, we give a data structure that, after $O(dn \log(\Phi n)/\epsilon^2)$ preprocessing, estimates $KDF_P(x)$ up to a factor of $1+\epsilon$ in $O(d \log(\Phi n)/\epsilon^2)$ time, where $\Phi$ is the aspect ratio. The $\log(\Phi n)$ term can be further replaced by $\log n$ under an additional decay condition on $k$, which is satisfied by the aforementioned examples.

We further extend the results in two ways. First, we use low-distortion embeddings to extend the results to kernels defined for spaces other than $\ell_2$. The key feature of this reduction is that the distortion of the embedding affects only the running time of the algorithm, not the accuracy of the estimation. As a result, we obtain $(1+\epsilon)$-approximate estimation algorithms for kernels over other $\ell_p$ norms, Earth-Mover Distance, and other metric spaces. Second, for smooth kernels that are decreasing with distance, we present a general reduction from density estimation to approximate near neighbor in the underlying space. This allows us to construct algorithms for general doubling metrics, as well as alternative algorithms for $\ell_p$ norms and other spaces.

Joint work with Moses Charikar, Piotr Indyk and Paris Siminelakis.

Speaker: Eric Balkanski
Title: The Adaptive Complexity of Maximizing a Submodular Function

Adaptivity quantifies the number of sequential rounds that an algorithm makes when function evaluations can be executed in parallel. Adaptivity is a fundamental concept that is heavily studied across a variety of areas in computer science, largely due to the need for parallelizing computation. Somewhat surprisingly, despite extensive efforts on submodular optimization for large-scale datasets, very little was known about adaptivity in submodular optimization until very recently. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, all that was known was that the adaptivity needed to obtain a constant approximation is between 1 and Ω(n). Our main result is a tight characterization showing that the adaptivity needed to maximize a monotone submodular function under a cardinality constraint is, up to lower-order terms, Θ(log n):

  • We describe an algorithm which requires O(log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3;
  • We show that no algorithm can achieve an approximation better than O(1 / log n) with fewer than O(log n / log log n) rounds.

Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any previous algorithm for submodular maximization.
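To make the notion of adaptivity concrete, here is a toy Python sketch (an illustration, not the paper's algorithm): classical greedy needs k adaptive rounds because each selection depends on the previous one, but all marginal-gain queries within a round are independent and can be issued in parallel. The result above shows that O(log n) rounds already suffice for a constant-factor approximation.

    from concurrent.futures import ThreadPoolExecutor

    def greedy(f, ground_set, k):
        # Classical greedy: k adaptive rounds; within each round, the
        # marginal-gain queries are independent and run in parallel.
        S = set()
        for _ in range(k):  # each iteration is one adaptive round
            with ThreadPoolExecutor() as pool:
                gains = list(pool.map(lambda e: f(S | {e}) - f(S), ground_set))
            S.add(max(zip(gains, ground_set))[1])
        return S

    # Toy monotone submodular function: coverage.
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
    f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
    print(greedy(f, list(sets), 2))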

Joint work with Yaron Singer.

Speaker: Kira Goldner
Title: Interdimensional Mechanism Design

What is the best mechanism a seller can use to maximize their revenue?  The answer depends on what types of items the seller has available to sell.  Previous work in revenue maximization depicts a rigid dichotomy between two types of settings. On one hand, we have single-dimensional settings (e.g. with one item or many identical items), which are completely characterized by a simple closed-form theory; on the other, we have “multi-dimensional” settings (e.g. with many heterogeneous items, where buyers have unit-demand or additive valuations), where optimal mechanisms are chaotic and often intractable.  I will survey work that identifies a space that is fundamentally different from both of these extremes, lying on the spectrum in between single- and multi-dimensional by various metrics: our understanding of the optimal mechanism, the menu-complexity, and more.  Further, this “interdimensional” setting contains many natural problems, such as private budgets, single-minded bidders, and additive-up-to-capacity for a private capacity.

Joint work with (1) Amos Fiat, Anna Karlin, and Elias Koutsoupias and (2) Nikhil Devanur, Raghuvansh Saxena, Ariel Schvartzman, and Matt Weinberg.

Speaker: Rasmus Kyng
Title: Matrix Martingales in Randomized Numerical Linear Algebra

A matrix martingale is a sequence of random matrices where the expectation of each matrix in the sequence equals the preceding matrix, conditional on the earlier parts of the sequence. Concentration results for matrix martingales have become an important tool in the analysis of algorithms in randomized numerical linear algebra. Simple and fast algorithms for many well-studied problems can be analyzed using matrix martingales. We survey recent results on using matrix martingales for randomized constructions of sparse approximations of graphs and discuss in detail the use of matrix martingales for analyzing approximate Gaussian elimination on Laplacian matrices associated with undirected and directed graphs.
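As a toy numerical example of where such martingales arise (a sketch, not material from the talk), consider sparsifying a graph Laplacian by sampling edges independently: each step's difference between the kept (reweighted) edge Laplacian and its expectation has conditional mean zero, so the running error against the processed prefix of the true sum is a matrix martingale, and matrix concentration bounds then control the spectral error of the final approximation.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

    def edge_laplacian(u, v):
        # Rank-one edge Laplacian b_e b_e^T with b_e = chi_u - chi_v.
        b = np.zeros(n)
        b[u], b[v] = 1.0, -1.0
        return np.outer(b, b)

    L = sum(edge_laplacian(u, v) for u, v in edges)

    # Keep each edge with probability p at weight 1/p, so each kept
    # term has expectation L_e and the error differences are mean zero.
    p, M = 0.5, np.zeros((n, n))
    for u, v in edges:
        if rng.random() < p:
            M += edge_laplacian(u, v) / p
    print(np.linalg.norm(M - L, 2))  # spectral error of the approximation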

Speaker: Sepideh Mahabadi
Title: Composable Core-sets for Determinant Maximization Problems via Spectral Spanners

Given a set of vectors $P\subset R^d$ and an integer $k \leq d$, the goal of the determinant maximization problem is to find a subset $S \subset P$ of size $k$, such that the determinant of the Gram matrix of the vectors in $S$ is maximized. The problem and many different variants of it have been studied extensively over the last decade. To date, the best approximation factor is $e^k$, and it is known that the exponential dependence on $k$ is unavoidable. In this work we study this problem from the perspective of massive datasets.
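For intuition, here is a minimal Python sketch (an illustration, not code from the paper) of the classical sequential greedy baseline for this problem: it repeatedly adds the vector that most increases the Gram determinant, i.e., the squared volume of the parallelepiped spanned by the selected vectors. The talk's contribution is composable core-sets for the massive-data regime, not this loop.

    import numpy as np

    def gram_det(P, idx):
        V = P[idx]                       # rows are the selected vectors
        return np.linalg.det(V @ V.T)    # determinant of their Gram matrix

    def greedy_detmax(P, k):
        # Greedy baseline: add the vector maximizing the Gram determinant
        # (equivalently, the squared volume spanned by the selection).
        S = []
        for _ in range(k):
            rest = [i for i in range(len(P)) if i not in S]
            S.append(max(rest, key=lambda i: gram_det(P, S + [i])))
        return S

    rng = np.random.default_rng(2)
    P = rng.normal(size=(50, 6))         # 50 vectors in R^6
    print(greedy_detmax(P, 3))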

We show an $\tilde{O}(k)^k$-composable core-set for the determinant maximization problem and further prove that it is impossible to design a better than $O(k)^{k-o(k)}$-composable core-set in the worst case. Using composable core-sets, one can obtain efficient solutions to a wide variety of massive data processing applications, including streaming algorithms and distributed algorithms such as map-reduce.

To obtain the results, we introduce and study a spectral generalization of graph spanners. We then prove that a spectral analogue of the classical greedy algorithm provides an almost optimal approximation algorithm for finding spectral spanners. Finally we show applications of spectral spanners for designing composable core-sets for several other spectral optimization problems.

This is a joint work with Piotr Indyk, Shayan Oveis Gharan, and Alireza Rezaei.

Speaker: Rad Niazadeh
Title: Optimal Algorithms for Continuous Non-monotone Submodular Maximization

In this talk, I will explain our recent result on designing optimal approximation algorithms for maximizing a continuous non-monotone submodular function over the hypercube, both with and without coordinate-wise concavity. This family of optimization problems has several applications in machine learning, economics, social network analysis and revenue management under network effects. Our main result is the first 1/2-approximation algorithm for continuous submodular function maximization; this approximation factor of 1/2 is the best possible for algorithms that only query the objective function at polynomially many points. For the special case of DR-submodular maximization, i.e. when the submodular function is also coordinate-wise concave along all coordinates, we provide a different 1/2-approximation algorithm that runs in quasilinear time. Both of these results improve upon prior work [Soma and Yoshida, 2017; Bian et al., 2017; Buchbinder et al., 2012].
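For a feel of the objective class, here is a small Python check (an illustration, not the paper's algorithm): a quadratic f(x) = a^T x - 0.5 x^T A x with an entrywise nonnegative A has all second partial derivatives equal to -A_ij <= 0, so it is DR-submodular on the hypercube, and its coordinate-wise marginal gains shrink as x grows.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4
    a = rng.uniform(0, 1, n)
    A = rng.uniform(0, 1, (n, n))
    A = (A + A.T) / 2                    # symmetric with nonnegative entries

    # f(x) = a^T x - 0.5 x^T A x: every second partial is -A_ij <= 0,
    # so f is DR-submodular (and non-monotone once A x exceeds a).
    f = lambda x: a @ x - 0.5 * x @ A @ x

    # Diminishing returns: the gain from a step h*e_i is smaller at a
    # coordinate-wise larger point.
    x, h = rng.uniform(0, 0.5, n), 0.1
    e = np.eye(n)
    for i in range(n):
        gain_lo = f(x + h * e[i]) - f(x)
        gain_hi = f(x + 0.4 + h * e[i]) - f(x + 0.4)
        assert gain_hi <= gain_lo + 1e-12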

Our first algorithm uses novel ideas such as reducing the guaranteed approximation problem to analyzing a zero-sum game for each coordinate, and incorporates the geometry of this zero-sum game to fix the value at this coordinate. Our second algorithm exploits coordinate-wise concavity to identify a monotone equilibrium condition sufficient for getting the required approximation guarantee, and hunts for the equilibrium point using binary search. We further run experiments to verify the performance of our proposed algorithms in related applications.

The talk is based on joint work with Tim Roughgarden and Joshua Wang, to appear in NIPS’18.

Speaker: Noah Stephens-Davidowitz
Title: Towards a better theoretical understanding of the practical (fine-grained) security of lattice-based cryptography

Celebrated works of Ajtai and Regev showed that lattice-based cryptography is asymptotically secure (under mild assumptions). But, practical security requires specific, *quantitative* lower bounds (“this specific scheme cannot be broken in time less than 2^{128}”), and theorists have therefore been working to clarify the fine-grained complexity of breaking lattice-based cryptography and related problems for the past decade. This task recently gained practical urgency, as the National Institute of Standards and Technology (NIST) is in the process of standardizing post-quantum cryptography. NIST will almost certainly select a lattice-based standard, and this scheme will likely be used on the scale of the internet in the not-too-distant future.

For security, essentially all of the proposals being considered by NIST rely on the strong quantitative conjecture that our current best algorithms for certain lattice problems are essentially optimal up to relatively small multiplicative constants in the running time. These algorithms are rather complicated; they rely on a Rube-Goldberg-style series of reductions, passing through many different computational lattice problems before eventually using an algorithm to solve the most natural one: the Shortest Vector Problem (SVP). Even the last step is not particularly well understood, as our best SVP algorithms rely on certain heuristic assumptions for correctness.

We will give a (very high-level and self-contained) overview of this “Rube-Goldberg-style” sequence of reductions, and outline ways to provide theoretical evidence for or against the optimality of our current algorithms. We’ll discuss some of our recent progress in both directions (i.e., both upper bounds and lower bounds), and if time allows, provide some detail about our simplest such result—an embarrassingly simple provably correct algorithm for SVP whose running time is not much worse than that of the best heuristic algorithms.

Based on joint works with Divesh Aggarwal, Huck Bennett, Sasha Golovnev, Chris Peikert, and Oded Regev.

Speaker: Ellen Vitercik
Title: Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

We provide online and differentially private algorithms for maximizing the sum of piecewise Lipschitz functions, an important family of non-convex optimization problems. The discontinuities of these functions make them challenging to optimize online and privately, since even a small perturbation can lead to severe suboptimality. We introduce a sufficient and general condition on collections of piecewise Lipschitz functions, called dispersion, that enables strong online and private utility guarantees. When the functions are dispersed, we give matching upper and lower bounds on regret under full information, and regret bounds under bandit feedback. Similarly, we give matching upper and lower bounds on suboptimality when privately maximizing dispersed functions in the batch setting.

Our analysis applies to a variety of problems in algorithm configuration (parameter tuning) and computational economics. Under mild (or even no) assumptions, these problems satisfy dispersion. Specifically, we consider tuning the parameters of greedy algorithms for subset selection problems, such as the knapsack problem, and of SDP-rounding schemes for problems formulated as integer quadratic programs, such as max-cut and correlation clustering. We also analyze dispersion for multi-item second price auctions with reserves and multi-item posted price mechanisms.
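As a toy illustration of why such configuration problems are piecewise Lipschitz (a sketch with made-up numbers, not the paper's experiments), consider a standard parameterized greedy family from the algorithm-configuration literature: knapsack greedy that scores item i by v_i / s_i**rho. As rho varies, the utility changes only where two items swap rank in the greedy ordering, so utility(rho) is piecewise constant with discontinuities at those crossing points; dispersion asks that such discontinuities not concentrate.

    import numpy as np

    # Hypothetical small instance: values, sizes, capacity.
    vals = np.array([10.0, 7.0, 4.0])
    sizes = np.array([5.0, 3.0, 2.0])
    cap = 6.0

    def utility(rho):
        # Greedy by the score v_i / s_i**rho; the utility jumps only
        # when two items swap rank, so it is piecewise constant in rho.
        order = np.argsort(-(vals / sizes ** rho))
        total, used = 0.0, 0.0
        for i in order:
            if used + sizes[i] <= cap:
                total += vals[i]
                used += sizes[i]
        return total

    for rho in np.linspace(0.0, 2.0, 9):
        print(f"rho={rho:.2f}  utility={utility(rho)}")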

This is joint work with Nina Balcan and Travis Dick, to appear in FOCS 2018.

Speaker: Erik Waingarten
Title: Approximate Nearest Neighbors via Non-Linear Spectral Gaps

I will present recent advances in approximate nearest neighbor search data structures for general normed spaces. I will explain what non-linear spectral gaps are, and how to use estimates on non-linear spectral gaps to partition large graphs whose vertices lie in a normed space. As a result, I will present the first sub-linear time data structure for approximate nearest neighbor search in high-dimensional normed spaces with approximation which is sub-polynomial in the dimension.

Based on joint work with Alex Andoni, Assaf Naor, Sasho Nikolov, and Ilya Razenshteyn.