09:30 – 10:00 | Registration |
10:00 – 11:00 |
Invited Talk 1
Mathematicians invented fair division in the 1940s; game theorists, economists, and political philosophers developed it from the 1950s; and computer scientists joined the crowd with new algorithmic and approximation methods at the beginning of this century. I will discuss how the two main concepts, Fair Share and No Envy, evolved with the emergence of the digital economy and the new transaction opportunities it offers, and I will speculate about some related directions of research.
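To make the two concepts concrete, here is a minimal sketch, assuming additive valuations over indivisible items (one common model among many); all names and numbers are illustrative, not from the talk.

```python
# Minimal illustration of the two fairness concepts, assuming additive
# valuations over indivisible items. All data below is illustrative.

def fair_share_ok(alloc, values):
    """Each agent gets at least a 1/n fraction of her value for all items."""
    n = len(values)
    total = {i: sum(values[i].values()) for i in values}
    return all(sum(values[i][g] for g in alloc[i]) >= total[i] / n for i in values)

def envy_free(alloc, values):
    """No agent values another agent's bundle strictly more than her own."""
    return all(
        sum(values[i][g] for g in alloc[i]) >= sum(values[i][g] for g in alloc[j])
        for i in values for j in values if i != j
    )

values = {"a": {"x": 3, "y": 1}, "b": {"x": 1, "y": 3}}
alloc = {"a": ["x"], "b": ["y"]}
print(fair_share_ok(alloc, values), envy_free(alloc, values))  # True True
```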
|
11:00 – 11:30 | Coffee break |
11:30 – 12:50 |
Session 1
We study a temporal voting model where voters have dynamic preferences over a set of public chores, i.e., projects that benefit society but impose individual costs on those affected by their implementation. We investigate the computational complexity of optimizing utilitarian and egalitarian welfare. Our results show that while optimizing the former is computationally straightforward, minimizing the latter is computationally intractable, even in very restricted cases. Nevertheless, we identify several settings where this problem can be solved efficiently, either exactly or by an approximation algorithm. We also examine the impact of enforcing temporal fairness on social welfare, and analyze the competitive ratio of online algorithms. We then explore the strategic behavior of agents, providing insights into potential malfeasance in such decision-making environments. Finally, we discuss a range of fairness measures and their suitability for our setting.
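As a rough illustration of the two objectives (not the paper's exact formalism), the sketch below encodes per-round costs and shows why the utilitarian objective separates across rounds while the egalitarian one does not:

```python
# Rough illustration (illustrative data, not the paper's formalism):
# costs[t][p][v] = cost to voter v if project p runs in round t.
from itertools import product

def utilitarian(schedule, costs):
    # total accumulated cost; separates across rounds, so a per-round
    # greedy choice already optimizes it
    return sum(costs[t][p][v] for t, p in enumerate(schedule) for v in costs[t][p])

def egalitarian(schedule, costs):
    # worst-off voter's accumulated cost; does not separate across rounds
    voters = costs[0][schedule[0]]
    return max(sum(costs[t][schedule[t]][v] for t in range(len(schedule))) for v in voters)

costs = [
    {"p": {"v1": 2, "v2": 0}, "q": {"v1": 0, "v2": 2}},
    {"p": {"v1": 2, "v2": 0}, "q": {"v1": 0, "v2": 2}},
]
schedules = list(product(costs[0], costs[1]))
print(min(schedules, key=lambda s: utilitarian(s, costs)))  # every schedule costs 4
print(min(schedules, key=lambda s: egalitarian(s, costs)))  # ('p', 'q') balances the load
```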
The Social Ranking problem consists in determining a ranking of items by their power or influence within their population, by measuring their interactions with the other items, based on ordinal preferences over the groups they may form. Several methods have been introduced in the literature to determine such a ranking from a complete preorder over the coalitions. In practice, however, this input can prove tedious to gather, as the number of coalitions to consider grows exponentially with the size of the population. For this reason, we consider a relaxed version of the Social Ranking problem, in which only a partial preorder is required as input, so that information about some coalitions is missing.
We investigate different contexts of missing information. First, we study a context in which the missing information is available through queries, and propose elicitation methods targeting key coalitions, so as to gather sufficient information to infer the ranking over items. Then, we study a context in which the missing coalitions are not subject to queries, but the preferences over coalitions are assumed to follow a given structure, and we observe the robustness of "classical" social ranking methods under varying numbers of missing coalitions. Finally, we consider a context in which the missing information is not subject to queries because the missing coalitions are unrealisable, and investigate the possibility of a second, complementary measure of interactions, by considering that missing coalitions express incompatibilities between their components.
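For intuition about what a social ranking method computes, here is a sketch in the spirit of lexicographic excellence (lex-cel), one method from this literature: items are compared by how often they appear in the best equivalence classes of the coalition preorder. The tiny preorder below is illustrative; the elicitation and robustness questions above concern exactly what happens when some of these classes are unknown.

```python
# Sketch of a lex-cel-style social ranking method on a tiny,
# illustrative preorder over coalitions.

def lex_cel(items, ranked_classes):
    """ranked_classes: list of lists of coalitions (frozensets),
    ordered from most to least preferred equivalence class."""
    def signature(item):
        return tuple(sum(item in c for c in cls) for cls in ranked_classes)
    # higher counts in earlier (better) classes win lexicographically
    return sorted(items, key=signature, reverse=True)

classes = [
    [frozenset("a"), frozenset("ab")],  # best coalitions
    [frozenset("bc")],
    [frozenset("c")],                   # worst coalitions
]
print(lex_cel(["a", "b", "c"], classes))  # ['a', 'b', 'c']
```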
We study the computational complexity of problems that have at most one solution, namely unambiguous problems. We focus mainly on problems syntactically in $\Sigma_2^P$, typically of the form: "Does there {\em exist} an item that beats all others?" Motivated by questions in social choice and game theory, we study the existence of (1) a dominating strategy in a game, (2) a Condorcet winner, (3) a strongly popular partition in hedonic games, and (4) a winner (sink) in a tournament. We classify these problems, showing the first is $P^{NP}$-complete, the second and third are complete for a class we term $PCW$ (Polynomial Condorcet Winner), and the fourth for a class we term $PTW$ (Polynomial Tournament Winner). We define another unambiguous class $PMA$ (Polynomial Majority Argument), seemingly incomparable to $PTW$. We show that with randomization, $PCW$ and $PTW$ collapse to $P^{NP}$, and $PMA$ is contained in $P^{NP}$. Specifically, we prove: $P^{NP} \subseteq PCW \subseteq PTW \subseteq S_2^P$, and $coNP \subseteq PMA \subseteq S_2^P$ (and it is known that $S_2^P \subseteq ZPP^{NP} \subseteq \Sigma_2^P \cap \Pi_2^P$).
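The "exists an item that beats all others" pattern is easy to state concretely for Condorcet winners. Note that for an explicitly listed profile, as below, the check is trivially polynomial; the complexity questions above concern succinctly represented settings.

```python
# A Condorcet winner is a candidate beating every other candidate in a
# strict pairwise majority. Explicit profiles make this check easy.

def condorcet_winner(profile, candidates):
    """profile: list of rankings, best candidate first."""
    def beats(a, b):
        wins = sum(r.index(a) < r.index(b) for r in profile)
        return wins > len(profile) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None  # no Condorcet winner exists

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(profile, ["a", "b", "c"]))  # a
```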
Imperfect-recall games, in which players may forget previously acquired information, have found many practical applications, ranging from game abstractions to team games and testing AI agents. In this paper, we quantify the utility gain by endowing a player with perfect recall, which we call the value of recall (VoR). While VoR can be unbounded in general, we parameterize it in terms of various game properties, namely the structure of chance nodes and the degree of absentmindedness (the number of successive times a player enters the same information set). Further, we identify several pathologies that arise with VoR, and show how to circumvent them. We also study the complexity of computing VoR, and how to optimally apportion partial recall. Finally, we connect VoR to other previously studied concepts in game theory, including the price of anarchy. We use that connection in conjunction with the celebrated smoothness framework to characterize VoR in a broad class of games.
Paper: https://arxiv.org/pdf/2412.19659 (AAAI 2025)
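A worked instance of the gap, using the classic absentminded driver (a single information set entered twice in succession, i.e., absentmindedness degree 2); whether VoR is taken as a difference or a ratio follows the paper's definition, here we just exhibit both values:

```python
# Classic absentminded driver: two exits on a highway; exiting at the
# first pays 0, at the second pays 4, never exiting pays 1. The driver
# cannot tell the two intersections apart (one information set).
import numpy as np

p = np.linspace(0, 1, 10001)          # prob. of driving on at the infoset
value = p * (1 - p) * 4 + p * p * 1   # exit 2nd w.p. p(1-p), never w.p. p^2
imperfect = value.max()               # maximized at p = 2/3, value 4/3
perfect = 4.0                         # with recall: drive on, then exit
print(imperfect, perfect - imperfect, perfect / imperfect)  # 1.333, 2.667, 3.0
```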
|
12:50 – 14:10 | Lunch break |
14:10 – 15:30 |
Session 2
This talk will explore the conceptual and computational landscape of non-bipartite stable matching problems. I will begin with the classical one-to-one setting, where stable and unstable instances can be characterised via stable partitions. I will briefly present new results for this framework and review the tractability frontier of efficiently computable solutions, such as maximum and fractional stable matchings, as well as hard optimisation problems like optimal stable matchings, almost-stable matchings and restricted-agent deletions.
I will then turn to a capacitated generalisation of the problem model and introduce a new extension of stable partitions to this setting. This efficiently computable structure provides a tight characterisation of problem instances and has inherent connections and applications to the computation of alternative solution concepts, including near-feasible and almost-stable matchings, both of which are of high theoretical and practical interest. I will conclude by pointing to a conceptual gap between these two notions and outlining possible directions for resolving it.
Papers:
https://arxiv.org/abs/2406.00437 (SAGT'24),
https://arxiv.org/abs/2505.11456
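As a minimal sketch of the underlying stability condition in the one-to-one non-bipartite setting, the code below detects blocking pairs on an illustrative instance; the odd-cycle example is a classic instance with no stable matching, which is exactly the kind of situation stable partitions characterise.

```python
# Blocking-pair check for a non-bipartite (stable roommates) instance:
# a matching is stable iff no two agents each prefer the other to
# their current partner. prefs[a] lists agents in decreasing preference.

def blocking_pairs(matching, prefs):
    def prefers(a, x, y):
        # agent a prefers x to y; being unmatched is worst
        rank = {b: i for i, b in enumerate(prefs[a])}
        return rank.get(x, len(prefs[a])) < rank.get(y, len(prefs[a]))
    agents = list(prefs)
    return [
        (a, b)
        for i, a in enumerate(agents) for b in agents[i + 1:]
        if prefers(a, b, matching.get(a)) and prefers(b, a, matching.get(b))
    ]

prefs = {1: [2, 3], 2: [3, 1], 3: [1, 2]}  # classic odd cycle: no stable matching
matching = {1: 2, 2: 1}                    # agent 3 left unmatched
print(blocking_pairs(matching, prefs))     # [(2, 3)]
```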
We study a continuous-time, infinite-horizon dynamic bipartite matching problem. Suppliers arrive according to a Poisson process; while waiting, they may abandon the queue at a uniform rate. Customers on the other hand must be matched upon arrival. The objective is to minimize the expected long-term average cost subject to a throughput constraint on the total match rate.
Previous literature on dynamic matching focuses on "static" policies, where the matching decisions do not depend explicitly on the state of the supplier queues, achieving constant-factor approximations. By contrast, we design "adaptive" policies, which leverage queue length information, and obtain near-optimal polynomial-time algorithms for several classes of instances.
First, we develop a bi-criteria fully polynomial-time approximation scheme for dynamic matching on networks with a constant number of queues, which computes a $(1-\varepsilon)$-approximation of the optimal policy in time polynomial in both the input size and $1/\varepsilon$. A key new technique is a hybrid LP relaxation, which combines static and state-dependent LP approximations of the queue dynamics, after a decomposition of the network. Networks with a constant number of queues are motivated by deceased organ donation schemes, where the supply types can be divided according to blood and tissue types.
The above algorithm, combined with a careful cell decomposition, gives a polynomial-time approximation scheme for dynamic matching on Euclidean networks of fixed dimension. The Euclidean case is of interest in ride-hailing and spatial service platforms, where the goal is to fulfill as many trips as possible while minimizing driving distances.
Paper:
https://arxiv.org/abs/2501.08775 (STOC'25)
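For intuition, here is a heavily simplified "static" fluid relaxation of the problem: choose long-run match rates ignoring abandonment and queue-length information (which the adaptive policies above exploit). All rates and costs are illustrative.

```python
# Static fluid LP sketch: x[i][j] = long-run rate of matching supplier
# type i with customer type j, subject to arrival-rate capacities and
# a throughput requirement. Illustrative data only.
from scipy.optimize import linprog

lam = [1.0, 0.5]          # supplier arrival rates per type
mu = [0.8, 0.6]           # customer arrival rates per type
cost = [[1, 4], [3, 2]]   # per-match cost, e.g. a distance
T = 1.2                   # required total match rate (throughput)

c = [cost[i][j] for i in range(2) for j in range(2)]
A_ub, b_ub = [], []
for i in range(2):        # cannot match suppliers faster than they arrive
    A_ub.append([1 if k // 2 == i else 0 for k in range(4)])
    b_ub.append(lam[i])
for j in range(2):        # nor customers faster than they arrive
    A_ub.append([1 if k % 2 == j else 0 for k in range(4)])
    b_ub.append(mu[j])
A_ub.append([-1] * 4)     # total throughput at least T
b_ub.append(-T)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.fun, res.x)     # minimal average cost and the match rates
```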
We study the utilitarian distortion of social choice mechanisms under the recently proposed learning-augmented framework where some (possibly unreliable) predicted information about the preferences of the agents is given as input. In particular, we consider two fundamental social choice problems: single-winner voting and one-sided matching. In these settings, the ordinal preferences of the agents over the alternatives (either candidates or items) are known, and some prediction about their underlying cardinal values is also provided. The goal is to leverage the prediction to achieve improved distortion guarantees when it is accurate, while simultaneously still achieving reasonable worst-case bounds when it is not. This leads to the notions of consistency and robustness, and the quest to achieve the best possible tradeoffs between the two. We show tight tradeoffs between the consistency and robustness of ordinal mechanisms for single-winner voting and one-sided matching, for different levels of information provided as a prediction.
Paper: https://arxiv.org/abs/2502.06489 (EC 2025)
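A toy instantiation of the template for single-winner voting: trust the predicted cardinal utilities when they agree with the reported ordinal rankings, otherwise fall back to a purely ordinal rule. The paper's tight consistency/robustness mechanisms are considerably more refined than this sketch.

```python
# Learning-augmented voting sketch: use the prediction when it is
# consistent with the ordinal profile, else fall back to plurality.

def consistent(rankings, predicted):
    # the prediction must order candidates as each voter ranked them
    return all(
        all(predicted[v][r[i]] >= predicted[v][r[i + 1]] for i in range(len(r) - 1))
        for v, r in enumerate(rankings)
    )

def choose(rankings, predicted):
    cands = rankings[0]
    if consistent(rankings, predicted):
        return max(cands, key=lambda c: sum(p[c] for p in predicted))
    return max(cands, key=lambda c: sum(r[0] == c for r in rankings))  # plurality

rankings = [["a", "b"], ["b", "a"], ["b", "a"]]
predicted = [{"a": 0.9, "b": 0.1}, {"a": 0.4, "b": 0.6}, {"a": 0.45, "b": 0.55}]
print(choose(rankings, predicted))  # 'a': predicted welfare 1.75 vs 1.25
```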
|
15:30 – 16:00 | Coffee break |
16:00 – 17:00 |
Session 3
Stablecoins are cryptocurrencies whose value is pegged to some non-volatile currency like the US Dollar, and backed by some underlying reserves, which, depending on the design, can be either fiat currencies or cryptocurrencies themselves. In this work, we investigate the relationship between the stability of stablecoins and the price volatility of the reserves. To this end, we propose an appropriate theoretical model for studying the livelihood of stablecoin systems, and obtain both analytical and experimental results on how the reserves deplete as a function of their price volatility. At a more fine-grained level, our results relate the rate of depletion with the variance in the price distribution, as well as the potential for arbitrage opportunities, expressed via the patience and attitude towards risk of the agents involved in the system. Both our theoretical findings and our simulations suggest that, in the absence of other contingency measures, a stablecoin system operating on unstable reserves is not long-term sustainable.
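A toy Monte Carlo sketch of the qualitative claim, with a stand-in redemption rule rather than the paper's exact mechanics: the more volatile the reserve asset's price, the faster the reserves should deplete.

```python
# Toy simulation: reserves backing a peg bleed faster as the reserve
# asset's price volatility grows. The redemption rule is illustrative.
import random

def depletion_time(sigma, reserves=100.0, peg=1.0, steps=10_000):
    price = 1.0
    for t in range(steps):
        price *= 1.0 + random.gauss(0.0, sigma)  # volatile reserve asset
        if price < peg:                          # arbitrage: redeem at the peg
            reserves -= (peg - price) * 10       # outflow grows with the gap
        if reserves <= 0:
            return t
    return steps

random.seed(0)
for sigma in (0.01, 0.05, 0.1):
    mean = sum(depletion_time(sigma) for _ in range(50)) / 50
    print(f"sigma={sigma}: mean depletion time ~ {mean:.0f} steps")
```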
Financial networks model a set of banks interconnected by monetary obligations. Recent work has introduced a class of obligations called Credit Default Swaps ("CDS"), a well-known type of financial derivative.
The main computational challenge in financial systems is the clearing problem: determining which firms are insolvent and quantifying their exposure to systemic risk. This exposure is measured by the clearing recovery rate, so in essence the clearing problem amounts to computing the clearing recovery rates of all financial institutions in a given network.
It is known that the clearing problem is computationally tractable for simple debt contracts, while existing techniques show PPAD-hardness of finding a weak approximate $\varepsilon$-solution when CDSes are involved, for some unspecified small range of $\varepsilon$.
This presentation addresses the clearing problem in financial networks containing simple debt contracts and credit default swaps and presents results on the following aspects of the problem:
- Exact Computation
- Approximation Strength
- Algorithms and Heuristics
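For the tractable case of simple debt contracts (no CDSes), the greatest clearing vector can be computed by a monotone fixed-point iteration in the spirit of the Eisenberg-Noe model; the small network below is illustrative.

```python
# Clearing with simple debt contracts only: iterate "pay what you can"
# from full payment until a fixed point; recovery rate = p_i / bar_p_i.
import numpy as np

L = np.array([[0.0, 2.0], [1.0, 0.0]])  # L[i][j]: bank i owes bank j
e = np.array([0.5, 1.0])                # external assets
bar_p = L.sum(axis=1)                   # total liabilities per bank
Pi = np.divide(L, bar_p[:, None], out=np.zeros_like(L), where=bar_p[:, None] > 0)

p = bar_p.copy()                        # start from full payment
for _ in range(100):
    p_new = np.minimum(bar_p, e + Pi.T @ p)  # pay min(owed, assets + inflows)
    if np.allclose(p, p_new):
        break
    p = p_new

print(p / bar_p)                        # clearing recovery rates: [0.75, 1.0]
```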
We study a model of subscription-based platforms where users pay a fixed fee for unlimited access to content, and creators receive a share of the revenue. Existing approaches to detecting fraud predominantly rely on machine learning methods, engaging in an ongoing arms race with bad actors. We explore revenue division mechanisms that inherently disincentivize manipulation. We formalize three types of manipulation-resistance axioms and examine which existing rules satisfy them. We show that a mechanism widely used by streaming platforms not only fails to prevent fraud, but also makes detecting manipulation computationally intractable. We also introduce a novel rule, ScaledUserProp, that satisfies all three manipulation-resistance axioms. Finally, experiments with both real-world and synthetic streaming data support ScaledUserProp as a fairer alternative to existing rules.
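For comparison, here are two baseline rules from this literature sketched on illustrative data: pro-rata (platform-wide play shares, the design widely used by streaming platforms) and user-centric division (each subscriber's fee is split over their own plays). ScaledUserProp itself is defined in the paper.

```python
# Two baseline revenue-division rules, on illustrative play data.

def pro_rata(plays, fee):
    # plays[user][artist] = play counts; every user pays `fee`
    total = sum(sum(p.values()) for p in plays.values())
    pool = fee * len(plays)
    out = {}
    for p in plays.values():
        for artist, k in p.items():
            out[artist] = out.get(artist, 0) + pool * k / total
    return out

def user_centric(plays, fee):
    # each user's fee is divided over that user's own plays only
    out = {}
    for p in plays.values():
        user_total = sum(p.values())
        for artist, k in p.items():
            out[artist] = out.get(artist, 0) + fee * k / user_total
    return out

# A bot streaming one artist nonstop shifts pro-rata payouts a lot,
# but moves only its own fee under user-centric division.
plays = {"u1": {"A": 10}, "u2": {"B": 10}, "bot": {"A": 1000}}
print(pro_rata(plays, 10))      # A captures almost the whole pool
print(user_centric(plays, 10))  # {'A': 20.0, 'B': 10.0}
```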
|
17:30 – | Open problems + pizza |