Synopsis:
The Chicago Junior Theorists Workshop 2025 is being held jointly by Northwestern University and the Toyota Technological Institute at Chicago (TTIC) on Monday, December 8th and Tuesday, December 9th, 2025. The workshop will run from 9am on Monday to late afternoon on Tuesday.
The focus of this workshop is on junior researchers in all areas of theoretical computer science. The workshop is supported by ACM SIGACT and the IDEAL, NITMB, and SkAI institutes, in addition to TTIC and Northwestern. You can register for the workshop here: https://forms.gle/i2XZxSMBRZMATkes7 by December 4th (Thursday), 2025.
Logistics
- Date: Monday, December 8th (TTIC day) & Tuesday, December 9th (Northwestern day at SkAI/NITMB, downtown Chicago)
- Organizers: Pooja Kulkarni, Aniket Murhekar, Jeff Xu
- Contact Email: chicagojtwteam@gmail.com, or contact one of the organizers above.
- Location for Day 1 (Monday): The TTIC day will be hosted at TTIC, on the 5th floor of 6045 S Kenwood Ave, Chicago, IL 60637.
- Location for Day 2 (Tuesday): The Northwestern day will be hosted at the SkAI Institute and NITMB, which are both located on the 35th floor of the Hancock Center, 875 N. Michigan Avenue. Please enter on the Chestnut Street side of the building, 172 E. Chestnut Street.
- Registration link: https://forms.gle/i2XZxSMBRZMATkes7. Please register by December 4th (Thursday), 2025, for access to the Hancock building on Tuesday (Dec 9th).
Speaker list:
- John Bostanci (Columbia)
- Giannis Fikioris (Cornell)
- Peyman Jabbarzade (U. Maryland)
- Rhea Jain (UIUC)
- Esty Kelman (BU/MIT)
- Yael Kirkpatrick (MIT)
- Yaowei Long (U Michigan)
- Cassandra Marcussen (Harvard)
- Shivam Nadimpalli (MIT)
- Pranav Nuti (U Chicago)
- Kalen Patton (Georgia Tech)
- Prasanna Ramakrishnan (Stanford)
- Janani Sundaresan (U Waterloo)
- Zihan Zhang (Ohio State University)
- Kai Zhe Zheng (MIT)
- Idan Attias (UIC/TTIC)
- He Jia (Northwestern)
- Liren Shan (TTIC)
- Dravyansh Sharma (TTIC)
- Vaidehi Srinivas (Northwestern)
- Chenhao Zhang (Northwestern)
Tentative Schedule (Central Time)
Monday, Dec 8 at TTIC
9:00-9:20 – Breakfast
9:20-9:30 – Welcome Remarks
9:30-10:00 – Giannis Fikioris (Cornell) on Learning in Budgeted Auctions with Spacing Objectives
10:00-10:30 – Prasanna Ramakrishnan (Stanford) on How to Appease a Voter Majority
10:30-11:00 – Coffee Break
11:00-11:30 – Kalen Patton (Georgia Tech) on Online Allocation with Concave, Diminishing-Returns Objectives
11:30-12:00 – Pranav Nuti (U Chicago) on Prophet Inequalities with Cancellation Costs
12:00-1:00 – Lunch
1:00-2:30 – Local Session (TTIC)
- Liren Shan on Dynamic Algorithm for Explainable k-medians Clustering under Lp Norm
- Dravyansh Sharma on On Learning Verifiers for Chain-of-Thought Reasoning
- Idan Attias on Positive Distribution Shift as a Framework for Understanding Tractable Learning
- Mahdi Haghifam on TBA
2:30-3:00 – Coffee Break
3:00-3:30 – Cassandra Marcussen (Harvard) on Quality Control on Random Graphs in Sublinear Time
3:30-4:00 – Shivam Nadimpalli (MIT) on Super-resolution without assumptions
4:00-4:30 – Coffee Break
4:30-5:00 – Yael Kirkpatrick (MIT) on Harnessing Matrix Multiplication for Additive APSP Approximation
5:00-5:30 – Janani Sundaresan (U Waterloo) on Distributed Triangle Detection is Hard in Few Rounds
Tuesday, Dec 9 at Northwestern (NITMB/SkAI)
9:00-9:20 – Breakfast
9:20-9:30 – Welcome Remarks
9:30-10:00 – John Bostanci (Columbia) on Separating QMA from QCMA with a classical oracle
10:00-10:30 – Esty Kelman (BU/MIT) on Algebraic property testing with online adversaries
10:30-11:00 – Coffee Break
11:00-11:30 – Peyman Jabbarzade (U Maryland) on Better-Than-2 Approximation for Steiner Forest
11:30-12:00 – Rhea Jain (UIUC) on A Polylogarithmic Approximation for Directed Steiner Forest in Planar Digraphs
12:00-1:00 – Lunch
1:00-2:00 – Local Session (Northwestern)
- Vaidehi Srinivas on Online Conformal Prediction with Efficiency Guarantees
- Chenhao Zhang on Regulation of Algorithmic Collusion from Data
- He Jia on Moment Methods and Low-Degree Polynomials Fail to Predict Robust Subspace Recovery
2:00-2:30 – Coffee Break
2:30-3:00 – Kai Zheng (MIT) on Probabilistically Checkable Proofs and their Applications: From Theory to Practice
3:00-3:30 – Zihan Zhang (Ohio State) on Explicit Folded Reed–Solomon and Multiplicity Codes Achieve Relaxed Generalized Singleton Bounds
3:30-4:00 – Yaowei Long (U Michigan) on Fault-Tolerant Distance Oracles and Labeling Schemes with Constant Approximation
Titles and Abstracts
Speaker: Giannis Fikioris
Title: Learning in Budgeted Auctions with Spacing Objectives
Abstract: In many repeated auction settings, participants care not only about how frequently they win but also about how their winnings are distributed over time.
This problem arises in various practical domains where avoiding congested demand is crucial and spacing is important, such as advertising campaigns.
We initiate the study of repeated auctions with preferences for spacing of wins over time.
We introduce a simple model of this phenomenon, where the value of a win is given by any concave function of the time since the last win.
We study the optimal policies for this setting in repeated second-price auctions and offer learning algorithms for the bidders that achieve $\tilde O(\sqrt{T})$ regret.
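For readers unfamiliar with the model, here is a minimal Python sketch of the spacing objective: the value of each win is a concave function of the time since the previous win. The square-root function and the toy win sequences are stand-ins for illustration, not anything from the talk.

```python
import math

def total_spacing_value(win_times, f=math.sqrt):
    """Sum of f(gap) over consecutive wins, where f is a concave function
    of the time elapsed since the previous win (sqrt is just a stand-in)."""
    total, last_win = 0.0, 0
    for t in sorted(win_times):
        total += f(t - last_win)
        last_win = t
    return total

# With a concave f, evenly spaced wins are worth more than bursty ones.
print(total_spacing_value([4, 8, 12, 16]))    # sqrt(4)*4 = 8.0
print(total_spacing_value([13, 14, 15, 16]))  # sqrt(13)+3 ≈ 6.61
```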
Speaker: Cassandra Marcussen
Title: Quality Control on Random Graphs in Sublinear Time
Abstract: Many algorithms are designed to perform well on random instances. However, when running such an algorithm on a specific input, can we trust its output? We define a new class of problems whose formulation allows algorithms to benefit from the randomness of instances while remaining reliable to use on any given input. We call these “quality control” problems, specified by a distribution D and a real-valued quality function Q. We say that an input x is “high quality” if Q(x) is approximately 1, and assume that samples from D are high quality with high probability. The task is to accept x if it is drawn from D and reject x if Q(x) is far from 1.
We study this problem in the sublinear setting, where inputs are graphs and the algorithm has access to adjacency matrix queries. Focusing on the case where D is a (sparse) Erdős–Rényi random graph model and Q is proportional to the number of k-cliques in a graph, we show that quality control can be solved superpolynomially faster than the related task of approximating Q in worst-case graphs in the sublinear access model.
Joint work with Ronitt Rubinfeld (MIT) and Madhu Sudan (Harvard).
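As a toy illustration of the problem definition (not the sublinear-time algorithm from the talk), the sketch below samples a sparse Erdős–Rényi graph and computes a quality score Q given by the triangle count (k = 3) normalized by its expectation; the parameter choices are arbitrary.

```python
import math, random
from itertools import combinations

def erdos_renyi(n, p, rng):
    """Adjacency sets of a G(n, p) random graph."""
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v); adj[v].add(u)
    return adj

def triangle_quality(adj, n, p):
    """Q(G): triangle count normalized by its G(n, p) expectation (k = 3)."""
    tri = sum(1 for u, v, w in combinations(range(n), 3)
              if v in adj[u] and w in adj[u] and w in adj[v])
    return tri / (math.comb(n, 3) * p**3)

rng = random.Random(0)
n, p = 200, 0.05
G = erdos_renyi(n, p, rng)
print(triangle_quality(G, n, p))  # close to 1 for a typical G(n, p) sample
# A quality-control tester must accept such G with high probability and
# reject any graph whose Q is far from 1; it may answer either way on
# atypical inputs whose Q happens to be close to 1.
```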
Speaker: Peyman Jabbarzade
Title: Better-Than-2 Approximation for Steiner Forest
Abstract: The Steiner Forest problem, also known as the Generalized Steiner Tree problem, is a fundamental optimization problem on edge-weighted graphs where, given a set of vertex pairs, the goal is to select a minimum-cost subgraph such that each pair is connected.
This problem generalizes the Steiner Tree problem, first introduced in 1811, for which the best known approximation factor is 1.39 by [Byrka, Grandoni, Rothvoß, and Sanità, 2010].
The celebrated work of [Agrawal, Klein, and Ravi, 1989], along with refinements by [Goemans and Williamson, 1992], established a 2-approximation for Steiner Forest over 35 years ago. Despite the long-standing importance of this problem, breaking the approximation factor of 2 has remained a major challenge.
In this talk, I present a novel deterministic algorithm that breaks the approximation barrier of 2 for this fundamental problem. As a key component of our approach, we also introduce a novel dual-based local search algorithm for the Steiner Tree problem with an approximation guarantee of 1.943, which is of independent interest.
Speaker: Yael Kirkpatrick
Title: Harnessing Matrix Multiplication for Additive APSP Approximation
Abstract: All-Pairs Shortest Paths (APSP) is a foundational problem in theoretical computer science. Approximating APSP in undirected unweighted graphs has been studied for many years, beginning with the work of Dor, Halperin and Zwick [SICOMP’01]. The combinatorial additive approximation algorithms shown in that paper remained the state of the art for two decades, and it was unclear whether algebraic tools from the world of matrix multiplication had the potential to speed up this problem.
In recent years, however, we have been able to harness these algebraic tools and give increasingly faster algorithms for computing additive approximation to APSP. In this talk I will present two techniques for ordering or partitioning the graph that allow for the use of algebraic tools and produce faster algorithms for approximating APSP.
Speaker: Rhea Jain
Title: A Polylogarithmic Approximation for Directed Steiner Forest in Planar Digraphs
Abstract: Steiner Forest is a fundamental problem in network design, in which, given an edge-weighted graph and source-sink vertex pairs, the goal is to find a subgraph of minimum cost that contains a path from each source vertex to its corresponding sink. Steiner Forest and its rooted counterpart, Steiner Tree, admit small constant-factor approximation ratios in undirected graphs. On the other hand, the corresponding problems on directed graphs have strong lower bounds on their approximability. In fact, no polylogarithmic approximation algorithm is known for Directed Steiner Tree, and unless P = NP, a polylogarithmic approximation is not possible for Directed Steiner Forest.
Recent results demonstrate that these problems are significantly more tractable when the input directed graph is planar! Friggstad and Mousavi [ICALP ’23] recently obtained a simple and elegant polynomial-time O(log k)-approximation for Directed Steiner Tree in planar digraphs, where k is the number of demand pairs. Inspired by this work, we show that Directed Steiner Forest admits a polylogarithmic approximation in planar digraphs. In this talk, I will give an overview of this algorithm and the key structural properties of planarity that are leveraged in these works. I will also highlight some related work on other network design problems in planar digraphs, including some recent extensions to more complex settings.
Speaker: Prasanna Ramakrishnan
Title: How to Appease a Voter Majority
Abstract: In 1785, Condorcet established a frustrating property of elections and majority rule: it is possible that, no matter which candidate you pick as the winner, a majority of voters will prefer someone else. You might have the brilliant idea of picking a small set of winners instead of just one, but how do you avoid the nightmare scenario where a majority of the voters prefer some other candidate over all the ones you picked? How many candidates suffice to appease a majority of the voters? In this talk, we will explore this question. Along the way, we will roll some dice — both because the analysis involves randomness and because of a connection to the curious phenomenon of intransitive dice, which has delighted recreational and professional mathematicians alike ever since Martin Gardner popularized it in 1970.
Based on joint work with Moses Charikar, Alexandra Lassota, Adrian Vetta, and Kangning Wang.
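The intransitive-dice phenomenon mentioned above is easy to check directly. The short sketch below verifies it for Efron's classic four dice; this is only an illustration of the phenomenon, not the construction from the paper.

```python
from itertools import product
from fractions import Fraction

# Efron's classic intransitive dice: each die beats the next with probability 2/3.
DICE = {
    "A": (4, 4, 4, 4, 0, 0),
    "B": (3, 3, 3, 3, 3, 3),
    "C": (6, 6, 2, 2, 2, 2),
    "D": (5, 5, 5, 1, 1, 1),
}

def beat_probability(x, y):
    """Exact probability that a roll of die x exceeds a roll of die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

for first, second in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    p = beat_probability(DICE[first], DICE[second])
    print(f"P({first} beats {second}) = {p}")  # each prints 2/3
```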
Speaker: Yaowei Long
Title: Fault-Tolerant Distance Oracles and Labeling Schemes with Constant Approximation
Abstract: A fault-tolerant distance labeling scheme assigns a label to each vertex and edge of an undirected weighted graph $G$ with $n$ vertices so that, for any edge set $F$ of size $|F| \leq f$ and any two vertices $p$ and $q$, one can approximate the distance between $p$ and $q$ in $G \setminus F$ by reading only the labels of $F \cup \{p,q\}$. For any $k$, we present a deterministic polynomial-time scheme with $O(k^{4})$ approximation and $\tilde{O}(f^{4}n^{1/k})$ label size. This is the first scheme to achieve a constant approximation while handling any number of edge faults $f$, resolving the open problem posed by Dory and Parter [DP21]. All previous schemes provided only a linear-in-$f$ approximation [DP21, LPS25].
Our labeling scheme directly improves the state of the art in the simpler setting of distance sensitivity oracles. Even for just $f = \Theta(\log n)$ faults, all previous oracles either have super-linear query time, a linear-in-$f$ approximation [CLPR12], or an exponentially worse $2^{\mathrm{poly}(k)}$ dependence of the approximation on $k$ [HLS24].
Speaker: John Bostanci
Title: Separating QMA from QCMA with a classical oracle
Abstract: We construct a classical oracle proving that, in a relativized setting, the set of languages decidable by an efficient quantum verifier with a quantum witness (QMA) is strictly bigger than those decidable with access only to a classical witness (QCMA). The separating classical oracle we construct is for a decision problem we coin spectral Forrelation — the oracle describes two subsets of the boolean hypercube, and the computational task is to decide if there exists a quantum state whose standard basis measurement distribution is well supported on one subset while its Fourier basis measurement distribution is well supported on the other subset. This is equivalent to estimating the spectral norm of a “Forrelation” matrix between two sets that are accessible through membership queries.
Our lower bound derives from a simple observation that a query algorithm with a classical witness can be run multiple times to generate many samples from a distribution, while a quantum witness is a “use once” object. This observation allows us to reduce proving a QCMA lower bound to proving a sampling hardness result which does not simultaneously prove a QMA lower bound. To prove said sampling hardness result for QCMA, we observe that quantum access to the oracle can be compressed by expressing the problem in terms of bosons — a novel “second quantization” perspective on compressed oracle techniques, which may be of independent interest. Using this compressed perspective on the sampling problem, we prove the sampling hardness result, completing the proof.
Joint work with Jonas Haferkamp, Chinmay Nirkhe, and Mark Zhandry.
Speaker: Kalen Patton
Title: Online Allocation with Concave, Diminishing-Returns Objectives
Abstract: Online resource allocation problems are central challenges in economics and computer science, modeling situations in which n items arriving one at a time must each be immediately allocated among m agents. In such problems, our objective is to maximize a monotone reward function f(x) over the allocation vector x, which describes the amount of each item given to each agent. In settings where f is concave and DR-submodular (i.e. has monotone decreasing gradient), several lines of work over the past two decades have had great success designing constant-competitive algorithms, including the foundational work of Mehta et al. (2005) on the Adwords problem and many follow-ups. Notably, while a greedy algorithm is 1/2-competitive in such settings, these works have shown that one can often obtain a competitive ratio of (1-1/e) in a variety of settings when items are divisible (i.e. allowing fractional allocations). However, prior works have thus far used a variety of problem-specific techniques, leaving open the general question: Does a (1-1/e)-competitive fractional algorithm always exist for online resource allocation problems with concave, DR-submodular objectives?
In this work, we answer this question affirmatively, thereby unifying and generalizing prior results for special cases. Our algorithm is one which makes continuous greedy allocations with respect to an auxiliary objective U(x). Using the online primal-dual method, we show that if U satisfies a “balanced” property with respect to f, then one can bound the competitiveness of such an algorithm. Our crucial observation is that there is a simple expression for U which has this balanced property for any f, yielding our general (1-1/e)-competitive algorithm.
Speaker: Kai Zheng
Title: Probabilistically Checkable Proofs and their Applications: From Theory to Practice
Abstract: Probabilistically Checkable Proofs (PCPs) are a remarkable kind of proof that can be verified by inspecting only a constant number of randomly chosen locations—sometimes as few as two! Since their discovery in the 1990s, PCPs have become central to theoretical computer science, with far-reaching applications in both theory and practice. In this talk, I will give an overview of some of my previous results on PCPs and highlight their relevance to these applications, namely in the areas of hardness of approximation and cryptographic SNARKs.
Speaker: Zihan Zhang
Title: Explicit Folded Reed–Solomon and Multiplicity Codes Achieve Relaxed Generalized Singleton Bounds
Abstract: List-decodable codes play a crucial role in both theoretical and practical applications. By loosening the constraint of unique decoding — requiring only that the original codeword be among a small set of candidates — they can withstand nearly twice as many errors. This enhanced error resilience is fundamental to applications such as group testing and compressed sensing. Beyond error correction, their well-structured mathematical properties enable broader applications in fields like pseudorandomness, computational complexity, and cryptography.
A central and fundamental question in coding theory is what kinds of codes are optimally list-decodable. Previously, all known constructions either relied on randomness or required significantly larger list sizes. In this talk, I will present recent progress demonstrating the first explicit codes that are list-decodable to capacity with the optimal list size. More concretely, I will show that any “appropriate” folded Reed-Solomon codes and multiplicity codes are almost optimally list-decodable, fully resolving an open problem of Guruswami–Rudra (STOC 2006). I will discuss the high-level ideas behind these constructions and also mention many directions for future research. These results are based on joint work with Yeyuan Chen.
Speaker: Shivam Nadimpalli
Title: Super-resolution without assumptions
Abstract: The problem of super-resolution, roughly speaking, is to reconstruct an unknown signal to high accuracy, given (potentially noisy) information about its low-degree Fourier coefficients. Classical results give positive answers, but only under strong modeling assumptions—typically requiring that the signal is a sum of well-separated point sources.
In this work we analyze a very general version of the super-resolution problem by considering *arbitrary* high-dimensional signals over the torus. We impose no separation conditions between point sources, and do not assume that the signal is even a finite combination of point sources. This raises a natural question: what can be said about super-resolution in such a general setting?
We give two sets of results for two complementary notions of reconstruction: (i) reconstruction w.r.t. Wasserstein distance, and (ii) reconstruction with respect to a new notion of “heavy hitter” distance. In both settings, we obtain essentially matching upper and lower bounds on the number of Fourier coefficients required for accurate reconstruction. A key technical ingredient in our analysis is the use of extremal polynomial constructions from theoretical computer science.
Speaker: Janani Sundaresan
Title: Distributed Triangle Detection is Hard in Few Rounds
Abstract: In the CONGEST model, $n$ vertices with information only about their neighbourhoods are allowed to communicate to each other over the edges of the input graph. Communication happens in synchronous rounds with a bandwidth of $O(\log n)$. We prove that detecting a triangle in this model requires $\Omega(\log \log n)$ rounds. Prior to our work, the only lower bound was that at least two rounds are necessary.
It is known that standard communication complexity arguments that have been used to get lower bounds in the CONGEST model in the past are incapable of giving any meaningful multi-round lower bounds for this problem. Our main contribution is a new information-theoretic technique that combines classical round elimination arguments of communication complexity with the point-to-point communication aspects of distributed networks and can be of independent interest.
Joint work with Sepehr Assadi.
Speaker: Pranav Nuti
Title: Prophet Inequalities with Cancellation Costs
Abstract: Most of the literature on online algorithms focuses on settings with irrevocable decisions, where once a decision is made upon the arrival of an input, it cannot be changed. A canonical example is the classic prophet inequality problem, where a sequence of independent random variables with known distributions is revealed one by one, and a decision-maker must decide when to stop and accept the current variable in order to maximize the expected value of their irrevocable pick. We study “prophet inequalities with cancellations” under a linear cancellation cost model. In this model, after accepting a random variable, one may later discard it to accept another variable, paying a cancellation cost of f times the discarded variable, where f > 0 is a given parameter. The goal is to maximize the expected net reward, which is the value of the final accepted variable minus the total cancellation cost. In this talk, I will explain some of the ideas behind our main result, which is a simple and polynomial-time policy that achieves the optimal competitive ratio for this problem. Importantly, our policy is order-agnostic, meaning that it only needs to know the set of distributions and not the order in which the random variables arrive.
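To make the cost model concrete, here is a small illustrative simulation of the net-reward objective under the linear cancellation-cost model. The naive switching rule shown is a placeholder for illustration only, not the optimal order-agnostic policy from the talk.

```python
import random

F = 0.5  # cancellation cost parameter: discarding value x costs F * x

def net_reward(values, policy):
    """Value of the final accepted variable minus total cancellation costs,
    for a policy that may cancel a held variable to accept a new one."""
    held, cost = None, 0.0
    for v in values:
        if policy(held, v):
            if held is not None:
                cost += F * held  # pay F times the discarded value
            held = v
    return (held if held is not None else 0.0) - cost

# Naive illustrative policy: switch only when the improvement exceeds the fee.
naive_switch = lambda held, v: held is None or (v - held) > F * held

random.seed(0)
arrivals = [random.random() for _ in range(10)]  # i.i.d. uniform stand-in
print(net_reward(arrivals, naive_switch))
```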
Speaker: Esty Kelman
Title: Algebraic property testing with online adversaries
Abstract: Motivated by the right to be forgotten in regulations like GDPR, Kalemaj, Raskhodnikova, and Varma (ITCS 2022 and Theory of Computing 2023) initiated the study of sublinear algorithms that work with continuously and adversarially degrading access to the input. To capture such situations, KRV proposed the online manipulation-resilient model, where after answering each query of the algorithm, the adversary erases t input values for some parameter t. They studied fundamental property testing problems in this model, including linearity and quadraticity of Boolean functions and sortedness of sequences. In particular, they showed that if t is constant, then both linearity and quadraticity can be tested optimally (with the same asymptotic query complexity as in the standard model without an adversary), while sortedness is not testable in their online model, even when t = 1.
In this talk, we will start by defining the framework of property testing and the problem of linearity testing of multivariate functions over \mathbb{F}_2. We will then present the online manipulation-resilient testing model and discuss how to overcome the new challenges that come with it. Then, we will explain how to extend the classical linearity test by Blum, Luby, and Rubinfeld (Journal of Computer and System Sciences 1993) in a way that allows us to design a simple tester that is optimal for the online model even when t is large. Our tester for the online model also serves as a new simple tester that is optimal in the standard model without adversaries. We will discuss what is currently known and what the open questions are for other algebraic properties: low-degree testing and homomorphism testing for general groups, as well as locally testable codes.
This talk is based on multiple joint works with Vipul Arora, Omri Ben-Eliezer, Ephraim Linder, Uri Meir, and Sofya Raskhodnikova.
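As background for the talk, here is a minimal sketch of the classical (non-adversarial) Blum-Luby-Rubinfeld linearity test it builds on; the example functions and trial count are arbitrary choices for illustration.

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Classical Blum-Luby-Rubinfeld test for f: {0,1}^n -> {0,1}:
    pick x, y uniformly at random and check f(x) XOR f(y) == f(x XOR y).
    Accepts linear functions with probability 1; rejects functions far
    from linear with constant probability per trial."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # witnessed a violated linearity constraint
    return True

n = 8
parity = lambda x: bin(x).count("1") % 2            # a linear function over F_2
majority = lambda x: int(bin(x).count("1") > n / 2)  # far from linear
print(blr_linearity_test(parity, n))    # True
print(blr_linearity_test(majority, n))  # almost certainly False
```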
Speaker: He Jia
Title: Moment Methods and Low-Degree Polynomials Fail to Predict Robust Subspace Recovery
Abstract: The low-degree polynomial framework has been highly successful in predicting computational versus statistical gaps for high-dimensional problems in average-case analysis and machine learning. This success has led to the low-degree conjecture, which posits that this method captures the power and limitations of efficient algorithms for a wide class of high-dimensional statistical problems. We identify a natural and basic statistical estimation problem in $\mathbb{R}^n$ where all the low-degree moments match up to degree $k=\widetilde{O}(\sqrt{\log n})$, and the low-degree polynomial method up to the same degree fails to predict its computational tractability.
Our problem is a special case of the well-studied robust subspace recovery problem. In contrast to the lower bounds, we give a simple, robust, and efficient algorithm that solves the problem (and highly noisy variants of it), leveraging anti-concentration properties of the construction. Our results suggest that moment-based methods and the low-degree polynomial method fail to capture algorithmic techniques based on anti-concentration, challenging their universality as predictors of computational barriers.
Speaker: Vaidehi Srinivas
Title: Online Conformal Prediction with Efficiency Guarantees
Abstract: Conformal prediction is the problem of generating prediction sets (confidence intervals). Conformal methods are widely used in practice to address unreliability of black box models, and for uncertainty quantification. We study this problem in a novel online formulation, which explicitly optimizes for efficiency (size of the confidence sets). We show that this problem admits very different guarantees than online learning, which can be thought of as its unconstrained counterpart.
Speaker: Idan Attias
Title: Positive Distribution Shift as a Framework for Understanding Tractable Learning
Abstract: We study a setting where the goal is to learn a target function f(x) with respect to a target distribution D(x), but training is done on i.i.d. samples from a different training distribution D′(x), labeled by the true target f(x). Such a distribution shift (here in the form of covariate shift) is usually viewed negatively, as hurting or making learning harder, and the traditional distribution shift literature is mostly concerned with limiting or avoiding this negative effect. In contrast, we argue that with a well-chosen D′(x), the shift can be positive and make learning easier – a perspective we call Positive Distribution Shift (PDS). Such a perspective is central to contemporary machine learning, where much of the innovation is in finding good training distributions D′(x), rather than changing the training algorithm. We further argue that the benefit is often computational rather than statistical, and that PDS allows computationally hard problems to become tractable even using standard gradient-based training. We formalize different variants of PDS, show how certain hard classes are easily learnable under PDS, and make connections with membership query learning.
Speaker: Liren Shan
Title: Dynamic Algorithm for Explainable k-medians Clustering under Lp Norm
Abstract: We study the problem of explainable k-medians clustering introduced by Dasgupta, Frost, Moshkovitz, and Rashtchian (2020). In this problem, the goal is to construct a threshold decision tree that partitions data into k clusters while minimizing the k-medians objective. These trees are interpretable because each internal node makes a simple decision by thresholding a single feature, allowing users to trace and understand how each point is assigned to a cluster.
We present the first algorithm for explainable k-medians under the Lp norm for every finite p ≥ 1. We show how to implement our algorithm in a dynamic setting. The dynamic algorithm maintains an explainable clustering under a sequence of insertions and deletions, with small amortized update time and recourse, making it suitable for large-scale and evolving datasets.
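To illustrate why threshold decision trees are considered interpretable, here is a small sketch of how such a tree assigns points to clusters, with each internal node thresholding a single feature. The toy tree and thresholds are made up for illustration and are unrelated to the algorithm in the talk.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A threshold-tree node: route on a single feature, or name a cluster."""
    feature: Optional[int] = None    # index of the feature to threshold
    threshold: Optional[float] = None
    left: Optional["Node"] = None    # points with x[feature] <= threshold
    right: Optional["Node"] = None   # points with x[feature] > threshold
    cluster: Optional[int] = None    # leaf: cluster label

def assign(node, x):
    """Trace a point down the tree; the path explains its cluster assignment."""
    while node.cluster is None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.cluster

# A toy 3-cluster tree over 2-dimensional points (thresholds chosen by hand).
tree = Node(feature=0, threshold=0.5,
            left=Node(cluster=0),
            right=Node(feature=1, threshold=2.0,
                       left=Node(cluster=1),
                       right=Node(cluster=2)))
print(assign(tree, (0.2, 3.0)))  # 0
print(assign(tree, (0.9, 1.0)))  # 1
print(assign(tree, (0.9, 4.0)))  # 2
```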
Speaker: Dravyansh Sharma
Title: On Learning Verifiers for Chain-of-Thought Reasoning
Abstract: Chain-of-Thought reasoning has emerged as a powerful approach for solving complex mathematical and logical problems. However, it can often veer off track through incorrect or unsubstantiated inferences. Formal mathematical reasoning, which can be checked with a formal verifier, is one approach to addressing this issue. However, current LLMs are simply not good enough to solve complex problems in a formal way, and even just formalizing an informal problem statement can be challenging. If you have tried to use an LLM to generate a mathematical proof, you have likely observed that they can make unexpected critical mistakes, and have possibly manually checked their “proofs” carefully line by line. Can we automate this tedious process of verifying proofs written in natural language, and thereby improve the reliability of Chain-of-Thought reasoners? Formally, given a problem statement and a step-by-step solution in natural language, the aim of the verifier is to output [Yes] if the reasoning steps in the solution are all valid, and [No] otherwise, along with the first incorrect step. In this work we give a formal PAC-learning framework, and propose and analyze several natural verification goals at different levels of strength. We provide sample complexity upper and lower bounds for learning verifiers satisfying these goals.
Speaker: Chenhao Zhang
Title: Regulation of Algorithmic Collusion from Data
Abstract: Algorithms are increasingly being used to price goods and services in competitive markets. Many recent papers have shown that in various settings, some configurations of certain pricing algorithms can find and maintain supra-competitive prices from repeated market interactions. We give a definition of plausible algorithmic non-collusion for pricing algorithms. On one hand, the definition allows sellers to utilize useful side information that may be correlated with supply and demand. On the other hand, it allows a regulator to empirically audit algorithms by applying a statistical test to the data they collect. We provide an analysis of the statistical complexity of such an audit, i.e., how much data is sufficient for the test of non-collusion to be accurate. We further discuss the unrelaxability of the definition even when there is no side information, as well as the role of the regulator’s knowledge of the seller’s cost in the audit.
