## Keynote talks

### Monday 9:30-10:30am, Speaker: Adam Polak

**Algorithms with predictions: a subjective survey**

**Abstract:** Algorithms with predictions is a new concept at the intersection of TCS and ML. The core idea is to go beyond worst-case bounds of classic algorithms by using possibly imperfect predictions in a robust way, i.e., to retain the best classic worst-case guarantees while achieving optimal performance for accurate predictions. In this talk I will give a brief overview of the area, sketch some of my favorite results, explain the common challenges and the general tools we have to address them, and mention a couple of open problems.

### Tuesday 9:30-10:30am, Speaker: Malin Rau

**On peak demand minimization and scheduling of parallel tasks**

**Abstract:** Consider the classical problem of scheduling jobs on identical machines while minimizing the makespan. Each job has a processing time and requires one machine to be processed, blocking it for other jobs during this time. In this problem, the jobs differ only in one dimension, the processing time.

There is a wide field of scheduling problems where jobs can vary in more than one dimension. Examples include jobs that need more than one machine to be processed or jobs that require a given amount of energy. These resource requirements (machines, energy) can vary from job to job. Consequently, these jobs are similar to rectangular objects, such that these problems have a strong connection to the field of two-dimensional packing problems.

This talk will cover recent work on problems related to scheduling jobs of this kind, give some insight into their connection to each other, present an overview of the techniques used in the field, and discuss some challenges for future research.

### Thursday 10:00-11:00am, Speaker: Tomasz Kociumaka

**Dynamic string algorithms**

**Abstract:** The study of algorithms on strings (finite sequences of characters from a given alphabet) is a lively research area with strong connections to applications, such as in data compression and bioinformatics. Central problems in string algorithms include pattern matching, testing string similarity, and detection of repetitions. While these problems are well understood in the classic static setting, often with decades-old optimal solutions, designing dynamic algorithms, which support strings that change over time, is significantly more challenging, even for the seemingly trivial task of equality testing. In my talk, I will discuss how to model dynamic strings, survey the recent results for dynamic versions of fundamental string processing problems, and list multiple open problems. Moreover, I will cover some techniques behind recent dynamic string algorithms, including the central tool utilized in most of them: locally consistent parsing.

### Friday 9:30-10:30am, Speaker: Daniel Neuen

**Parameterized Algorithms for Graph Isomorphism - Decompositions via Regularity**

**Abstract:** This talk gives an overview of recent advances on the graph isomorphism problem with a focus on quasi-polynomial parameterized algorithms with a running time of the form n^{polylog(k)}, where k is some graph parameter. The first such algorithm was obtained for k being the maximum degree of the input graphs [FOCS 2018] by extending the group-theoretic tools of Babai's quasi-polynomial time isomorphism test [STOC 2016]. Subsequently, this result was generalized in a series of works to all graphs that exclude the complete graph on k vertices as a topological subgraph [SODA 2022]. In particular, this includes all graphs of Euler genus at most k-7 as well as all graphs of tree-width at most k-2.

The key concept underlying this latest algorithm is the notion of t-CR-bounded graphs that serves as a gateway to combine the group-theoretic tools underlying the isomorphism test for bounded-degree graphs with decomposition-based approaches for graphs that exclude the complete graph on k vertices as a topological subgraph. In this talk, we explore the graph-theoretic side of the algorithm and discuss how regularity of the input graphs can be leveraged to obtain decompositions into parts with few symmetries that can then be handled by the group-theoretic graph isomorphism machinery. Surprisingly, it turns out that this toolbox can also be used to obtain a simple fpt isomorphism test for graphs of bounded Euler genus [ESA 2021] that has a significantly better parameter-dependence than existing algorithms [Kawarabayashi 2015].

## Contributed talks

**Learning algorithms, social choice and differential privacy (Monday 14:00-15:30):**

**Christoph Hertrich**

**Understanding Neural Networks via Polytopes**

Neural networks with rectified linear unit (ReLU) activations are one of the standard models in modern machine learning. They are at the heart of many impressive success stories in applications. At the same time, the theoretical understanding of these networks lags far behind and surprisingly simple questions remain open. For example, it is not known how many layers one needs to (exactly) represent simple functions like computing the maximum of five numbers. In this talk I will explain why polyhedral geometry might be the right tool to solve these questions and what progress has already been achieved.
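
As a concrete illustration of the kind of question above (an illustrative sketch, not code from the talk): the maximum of two numbers is exactly representable with a single ReLU via max(a, b) = a + relu(b - a), and nesting this identity computes the maximum of five numbers with three "layers" of pairwise maxima; whether fewer layers suffice for an exact ReLU network is precisely the type of open question mentioned.

```python
def relu(x):
    # rectified linear unit activation
    return max(x, 0.0)

def relu_max2(a, b):
    """max(a, b) computed exactly with one ReLU: max(a, b) = a + relu(b - a)."""
    return a + relu(b - a)

def relu_max5(v):
    """Maximum of five numbers via a tree of pairwise maxima (three layers)."""
    m1, m2 = relu_max2(v[0], v[1]), relu_max2(v[2], v[3])
    return relu_max2(relu_max2(m1, m2), v[4])

print(relu_max5([3.0, -1.0, 7.5, 2.0, 7.0]))  # 7.5
```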

**Giorgio Vinciguerra**

**Learning-based approaches to compressed data structures design**

An emerging trend in data structures design consists in introducing learned components so as to uncover new regularities in the input data, thus enabling faster queries and more succinct space usage (sometimes by orders of magnitude) compared to traditional approaches.

While the resulting learned data structures are typically difficult to analyse under the classical models of computation, some initial results bound their worst-case performance and build a theoretical foundation for their excellent practical performance.

We provide a walkthrough of new theoretical and experimental achievements in this area, and conclude by discussing the plethora of research opportunities that these new learning-based approaches to data structures design open up.
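
As an illustrative sketch of the idea (the single-segment model and all names here are assumptions for this example, not the talk's actual data structures): a "learned" index replaces part of a search structure with a model that predicts a key's position, followed by a bounded local search to correct the prediction.

```python
import bisect

def build_linear_model(keys):
    """Fit a least-squares line mapping key -> rank: a one-segment 'learned
    index'. (Real structures use many segments with a guaranteed error bound;
    a single segment is a simplification for illustration.)"""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_r = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys)
    cov = sum((k - mean_k) * (r - mean_r) for r, k in enumerate(keys))
    slope = cov / var
    return slope, mean_r - slope * mean_k

def lookup(keys, model, q, err=8):
    """Predict the rank of q, then correct it with a bounded local search."""
    slope, intercept = model
    guess = round(slope * q + intercept)
    lo, hi = max(0, guess - err), min(len(keys), guess + err + 1)
    i = bisect.bisect_left(keys, q, lo, hi)
    return i if i < len(keys) and keys[i] == q else None

keys = list(range(0, 200, 2))  # near-linear data is where learned models shine
model = build_linear_model(keys)
print(lookup(keys, model, 42))  # rank of key 42
```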

**Krzysztof Sornat**

**Proportional Approval Voting, Harmonic k-median, and Negative Association**

Based on one of my papers, I will present an overview of my research interests, which lie at the intersection of theoretical computer science and social choice theory (also called 'computational social choice'). The original abstract of the paper is as follows.

We study a generic framework that provides a unified view on two important classes of problems: (i) extensions of the k-median problem where clients are interested in having multiple facilities in their vicinity (e.g., because, with some small probability, the closest facility might be malfunctioning and hence unavailable), and (ii) finding winners according to some appealing multiwinner election rules, i.e., election systems aimed at choosing representative bodies, such as parliaments, based on the preferences of a population of voters over individual candidates. Each problem in our framework is associated with a vector of weights: we show that the approximability of the problem depends on structural properties of these vectors. We specifically focus on the harmonic sequence of weights, for which the objective function, interpreted in a multiwinner election setup, corresponds to the well-known Proportional Approval Voting (PAV) rule.

Our main result is that, due to the specific (harmonic) structure of weights, the problem admits a constant-factor approximation. This is surprising since the problem can be interpreted as a variant of the k-median problem in which we do not assume that the connection costs satisfy the triangle inequality. The algorithm we propose is based on dependent rounding [Srinivasan, FOCS'01] applied to the solution of a natural LP relaxation of the problem. The rounding process is well known to produce distributions over integral solutions satisfying Negative Correlation (NC), which is usually sufficient for analyzing the approximation guarantees offered by rounding procedures. In our analysis, however, we need the fact that a carefully implemented rounding process satisfies a stronger property, called Negative Association (NA), which allows us to apply standard concentration bounds for conditional random variables.

**Teresa Steiner**

**Introduction to Differential Privacy**

In this talk, I will give a brief introduction to the research field of differential privacy from an algorithms researcher’s perspective. Differential privacy is a concept which allows mathematically proving privacy guarantees for data analysis algorithms: namely, that the (randomized) output of an algorithm does not depend too much on any individual’s data, hence, those data cannot be reconstructed. Since differential privacy emerged as a new research field around 15 years ago, it has become the de facto privacy standard for data analysis in both research and industry. The main challenge is to design algorithms which preserve differential privacy while still yielding provably accurate results.

After a short introduction to the general field, I will highlight some current challenges which I find particularly exciting. An example is designing differentially private dynamic data structures, which allow answering queries about a data set that changes over time, while satisfying differential privacy guarantees.
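
As a minimal illustration of the kind of algorithm meant here (a standard textbook example, not necessarily from the talk): the Laplace mechanism releases a counting query, which has sensitivity 1, with epsilon-differential privacy by adding Laplace-distributed noise of scale 1/epsilon.

```python
import math
import random

def laplace_mechanism(true_count, epsilon, rng):
    """Release a counting query (sensitivity 1) with epsilon-differential
    privacy by adding Laplace(1/epsilon) noise: the classic Laplace mechanism."""
    u = rng.random() - 0.5                 # uniform in [-0.5, 0.5)
    scale = 1.0 / epsilon                  # sensitivity / epsilon
    # inverse-CDF sampling of a Laplace variable with the given scale
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise

rng = random.Random(0)
# Noisy answers concentrate around the true count 100; the noise scale grows
# as the privacy budget epsilon shrinks.
answers = [laplace_mechanism(100, 0.5, rng) for _ in range(1000)]
print(round(sum(answers) / len(answers)))
```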

**Big data algorithms (Tuesday 11:00am-12:30pm):**

**Runtian Ren**

**Online matching with delays & Online busy-time scheduling**

I want to introduce my major works on two problems, online matching with delays and online busy-time scheduling.

In the online matching with delays problem, the input is a set of requests arriving at different times and at arbitrary points of a given metric space. The online algorithm needs to produce a perfect matching of all the requests in an online manner. Unlike in the classic online matching problem, a request does not have to be matched immediately upon its arrival - instead, the online algorithm can delay its matching decision by any duration t at a delay cost of f(t). On the other hand, if two requests r and r’ are matched into a pair, a connection cost of d(x(r), x(r’)) is incurred for producing this pair (here x(r) denotes the location of request r). The goal is to minimize the total delay cost plus the total connection cost for matching all the requests.

In the online busy-time scheduling problem, the input is a set of interval jobs. Each job has a size, an arrival time, and a departure time, and has to be assigned to a machine of uniform capacity 1. At any time, the total size of the jobs assigned to the same machine cannot exceed the capacity 1. A machine is called active at time t if at least one job is running on it at that moment. The goal is to assign all the jobs to machines in an online manner such that the total busy-time of the machines used is minimized.
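
The busy-time objective can be sketched as follows (an illustrative computation for a given assignment; names are assumptions for this sketch):

```python
def busy_time(assignment):
    """assignment: machine -> list of (arrival, departure) intervals of the
    jobs placed on it. A machine's busy-time is the total length of the
    union of its job intervals; the objective sums this over all machines."""
    total = 0.0
    for jobs in assignment.values():
        end = float('-inf')
        for a, d in sorted(jobs):   # sweep the intervals left to right
            if a > end:
                total += d - a      # a new active period starts
            elif d > end:
                total += d - end    # the current active period is extended
            end = max(end, d)
    return total

# Two jobs on one machine with overlapping lifespans [0,2) and [1,3):
# the machine is active during [0,3), so its busy-time is 3, not 4.
print(busy_time({0: [(0, 2), (1, 3)]}))  # 3.0
```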

**Franziska Eberle**

**Online Weighted Throughput Maximization**

We consider a fundamental online scheduling problem in which jobs with processing times, weights, and deadlines arrive online over time at their release dates. The task is to determine a feasible preemptive schedule on a single or multiple (possibly unrelated) machines that maximizes the total weight of jobs that complete before their deadline. Due to strong impossibility results for competitive analysis on a single machine, we require that jobs contain some *slack* $\varepsilon>0$, which means that the feasible time window for scheduling a job is at least $1+\varepsilon$ times its processing time on each eligible machine. We present an algorithm for unrelated machines that is $\Theta\big(\frac{1}{\varepsilon}\big)$-competitive, giving the first non-trivial online algorithm for weighted throughput maximization on unrelated machines and improving the previous upper bound on identical machines. Surprisingly, this is the same optimal performance bound (up to constants) as for unweighted throughput maximization on a single machine.

**Anish Mukherjee**

**Subquadratic Dynamic Path Reporting against an Adaptive Adversary**

Algebraic dynamic data structures supporting edge updates and reachability/distance queries have been known for quite a long time, but they do not, in general, allow reporting the underlying paths within the same time bounds, especially against an adaptive adversary. We develop the first known fully dynamic reachability data structures supporting both point-to-point path reporting and single-source reachability tree reporting. Our data structures are randomized but work against an adaptive adversary. En route to accomplishing these goals, one of the interesting sub-results we obtain is a subquadratic fully dynamic algorithm for maintaining a topological order, which I will also discuss in this talk.

We also design several deterministic incremental algorithms for reachability and shortest paths that report the respective paths within subquadratic worst-case time bounds. I’ll briefly go over these results and discuss their significance. This is joint work with Adam Karczmarz and Piotr Sankowski.

**Approximation algorithms (Tuesday 14:00-15:30):**

**Felix Hommelsheim**

**Structural Robustness in Combinatorial Optimization**

In this talk I give an overview of structural robustness (also called bulk-robustness) in combinatorial optimization, which was the topic of my PhD thesis:

Most robust combinatorial optimization problems assume that the costs of the resources, e.g., edges in a graph, are uncertain. In this thesis, however, we study the so-called bulk-robust approach, which models uncertainty in the underlying combinatorial structure. To be more precise, in the bulk-robust model we are given an explicit list of scenarios, each comprising a set of resources that fail if the corresponding scenario materializes. Additionally, we are given a property that we have to attain, for example 'containing a perfect matching', 's-t-connectivity', or 'being connected', which may arise from a fundamental combinatorial optimization problem. The goal of the bulk-robust optimization problem is to find a minimum weight set of resources such that the desired property is satisfied no matter which scenario materializes, i.e., no matter which set of resources from the list is removed.

In this talk I will give an introduction to this topic and summarize results and techniques for (approximately) solving different bulk-robust combinatorial optimization problems.
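
For intuition, here is a tiny brute-force sketch of the bulk-robust model for the 's-t-connectivity' property (illustrative only, with unit weights; the talk concerns approximation algorithms, not brute force, and all names are assumptions):

```python
from itertools import combinations

def connects(edges, s, t):
    """Is t reachable from s using the given undirected edges?"""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return t in seen

def bulk_robust_st(edges, scenarios, s, t):
    """Smallest edge set X keeping s and t connected no matter which scenario
    (a set of failing edges) materializes -- brute force over subsets."""
    for k in range(len(edges) + 1):
        for X in combinations(edges, k):
            if all(connects([e for e in X if e not in S], s, t) for S in scenarios):
                return list(X)

edges = [(0, 1), (1, 2), (0, 2)]
# Scenario 1 destroys edge (0,1), scenario 2 destroys edge (1,2); the direct
# edge (0,2) survives both scenarios, so buying it alone suffices.
print(bulk_robust_st(edges, [{(0, 1)}, {(1, 2)}], 0, 2))  # [(0, 2)]
```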

**Kamyar Khodamoradi**

**Approximation Algorithms for Demand Strip Packing**

In the Demand Strip Packing problem (DSP) we are given a time interval and a collection of tasks, each characterized by a processing time and a demand for a given resource (such as electricity, computational power, etc.). A feasible solution consists of a schedule of the tasks within the mentioned time interval. Our goal is to minimize the peak resource consumption, i.e., the maximum total demand of tasks executed at any point in time. It is known that DSP is NP-hard to approximate below a factor 3/2, and standard techniques for related problems imply a (polynomial-time) 2-approximation. Our main result is a (5/3 + ε)-approximation for any constant ε > 0. We also achieve the best possible approximation factors for some relevant special cases. Appeared in APPROX'21.
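
The objective above can be sketched as follows (an illustrative evaluation of a fixed schedule; names and the discrete-event simplification are assumptions for this sketch):

```python
def peak_demand(schedule):
    """schedule: list of (start, processing_time, demand) triples. The peak is
    the maximum total demand at any time; since total demand only increases at
    task start times, checking those instants suffices (tasks occupy [s, s + p))."""
    starts = sorted({s for s, p, d in schedule})
    return max(sum(d for s, p, d in schedule if s <= t < s + p) for t in starts)

# Tasks (start, length, demand): both run during [1, 2), so the peak is 2 + 3.
print(peak_demand([(0, 2, 2), (1, 3, 3)]))  # 5
```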

**Ariel Kulik**

**Faster Exponential-Time Approximation Algorithms Using Approximate Monotone Local Search**

In this paper we generalize the monotone local search approach of Fomin, Gaspers, Lokshtanov and Saurabh [J.ACM 2019] by establishing a connection between parameterized approximation and exponential-time approximation algorithms for monotone subset minimization problems. In a monotone subset minimization problem the input implicitly describes a non-empty set family over a universe of size n which is closed under taking supersets, and the task is to find a minimum cardinality set in this family. Broadly speaking, we use approximate monotone local search to show that a parameterized \alpha-approximation algorithm that runs in c^k n^{O(1)} time, where k is the solution size, can be used to derive an \alpha-approximation randomized algorithm that runs in d^n n^{O(1)} time, where d is the unique value in (1, 1 + (c-1)/\alpha) such that D(1/\alpha || (d-1)/(c-1)) = (\ln c)/\alpha, where D(a||b) is the Kullback-Leibler divergence. This running time matches that of Fomin et al. for \alpha = 1, and is strictly better when \alpha > 1, for any c > 1. Furthermore, we also show that this result can be derandomized at the expense of a sub-exponential multiplicative factor in the running time.

We use an approximate variant of the exhaustive search as a benchmark for our algorithm. We show that the classic 2^n n^{O(1)} exhaustive search can be adapted to an \alpha-approximate exhaustive search that runs in time (1+ exp(-\alpha H (1/\alpha)))^n n^{O(1)}, where H is the entropy function. Furthermore, we provide a lower bound stating that the running time of this \alpha-approximate exhaustive search is the best achievable running time in an oracle model. When compared to approximate exhaustive search, and to other techniques, the running times obtained by approximate monotone local search are strictly better for any \alpha \geq 1, c >1.

We demonstrate the potential of approximate monotone local search by deriving new and faster exponential approximation algorithms for Vertex Cover, 3-Hitting Set, Directed Feedback Vertex Set, Directed Subset Feedback Vertex Set, Directed Odd Cycle Transversal and Undirected Multicut. For instance, we get a 1.1-approximation algorithm for Vertex Cover with running time 1.114^n n^{O(1)}, improving upon the previously best known 1.1-approximation running in time 1.127^n n^{O(1)} by Bourgeois et al. [DAM 2011].

**Nikhil Kumar**

**Approximate Max-Flow Min-Cut Theorems and their Applications to Approximation Algorithms**

The max-flow min-cut theorem states that the maximum amount of flow between two vertices (source and sink) equals the minimum number of edges that need to be removed to disconnect them. When there are multiple source-sink pairs, the problem of computing the maximum flow is called the multicommodity flow problem, and the problem of finding the minimum number of edges that need to be removed to disconnect all the source-sink pairs is called the minimum multicut problem. In this talk, I will discuss generalizations of the max-flow min-cut theorem to the multicommodity setting and their use in designing approximation algorithms. I will also state some interesting open problems in this line of research.
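
The single-commodity theorem can be checked on a small example (a minimal sketch with unit capacities; the brute-force min cut is used only for illustration, and all names are assumptions for this sketch):

```python
from itertools import combinations

def max_flow(adj, s, t):
    """Ford-Fulkerson on a unit-capacity directed graph {u: set of successors}."""
    residual = {u: set(vs) for u, vs in adj.items()}
    flow = 0
    while True:
        parent, queue = {s: None}, [s]      # BFS for an augmenting path
        while queue and t not in parent:
            u = queue.pop(0)
            for v in residual.get(u, set()):
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:        # flip the path's residual edges
            u = parent[v]
            residual[u].discard(v)
            residual.setdefault(v, set()).add(u)
            v = u
        flow += 1

def min_cut(edges, s, t):
    """Smallest number of edges whose removal disconnects t from s (brute force)."""
    def reaches(kept):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in kept:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return t in seen
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            if not reaches([e for e in edges if e not in removed]):
                return k

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2), (3, 4), (2, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
print(max_flow(adj, 0, 4), min_cut(edges, 0, 4))  # both equal 2
```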

**Random Processes (Wednesday 11:30am -12:20pm):**

**Stefan Walzer**

**Insertion Time of Random Walk Cuckoo Hashing below the Peeling Threshold**

Most hash tables have an expected insertion time of O(1). While insertions into cuckoo hash tables indeed seem to take O(1) expected time in practice, only polylogarithmic guarantees are proven in all but the simplest of practically relevant cases.

In this paper, we show that random walk insertions into cuckoo hash tables take O(1) expected amortised time when any number k ≥ 3 of hash functions is used and the load factor is below the corresponding peeling threshold. Because the peeling threshold is less than the threshold up to which a placement of all keys exists (e.g. roughly 0.81 instead of 0.91 for k = 3), this is not quite the result one would hope for. It is still the first meaningful guarantee for constant time insertion for cuckoo hashing that works for k ∈ {3,…,9}.

Relevant Paper: https://arxiv.org/abs/2202.05546
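
For intuition, a minimal sketch of random walk insertion (an illustrative simplification of the scheme analyzed in the paper; the table layout and hash functions are assumptions for this sketch):

```python
import random

def insert(table, key, hashes, max_steps=10_000):
    """Random-walk insertion: while all k candidate slots of the current key
    are occupied, evict the occupant of a uniformly random candidate slot and
    continue with the evicted key."""
    for _ in range(max_steps):
        slots = [h(key) for h in hashes]
        for s in slots:
            if table.get(s) is None:
                table[s] = key
                return True
        s = random.choice(slots)            # evict from a random candidate slot
        table[s], key = key, table[s]
    return False                            # in practice: rebuild the table

random.seed(0)
m = 16                                      # number of slots
hashes = [lambda x, i=i: hash((i, x)) % m for i in range(3)]  # k = 3
table = {}
ok = all(insert(table, key, hashes) for key in range(10))
print(ok, len(table))   # load factor 10/16 ≈ 0.63, below the k = 3 peeling threshold
```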

**Emilio Cruciani**

**Biased Opinion Dynamics**

We study opinion dynamics in multi-agent networks when there exists a bias toward one of two possible opinions, for example reflecting a status quo versus a superior alternative. Our aim is to investigate the combined effect of bias, network structure, and opinion dynamics on the convergence of the system of agents as a whole. Models of such evolving processes can easily become analytically intractable. In this paper, we consider a simple yet mathematically rich setting, in which all agents initially share the same opinion, representing the status quo. The system evolves in steps. In each step, one agent, selected uniformly at random, follows an underlying update rule to revise its opinion on the basis of those held by its neighbors, but with a probabilistic bias towards the superior alternative. We analyze convergence of the resulting process under well-known update rules. The framework we propose is simple and modular, but at the same time complex enough to highlight a nonobvious interplay between topology and underlying update rule.

Based on: https://doi.org/10.1016/j.ins.2022.01.072
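
The process described above can be sketched as a simulation (a minimal illustration using the classic voter update rule with bias probability p; parameter names and the cycle topology are assumptions for this sketch):

```python
import random

def biased_voter(neighbors, p, rng, max_steps=100_000):
    """All n agents start with the status-quo opinion 0. In each step a
    uniformly random agent adopts the superior opinion 1 with bias
    probability p, and otherwise copies a uniformly random neighbor (the
    voter update rule). Returns the step at which consensus on opinion 1
    is reached, or None if max_steps is exceeded."""
    n = len(neighbors)
    opinion = [0] * n
    ones = 0
    for step in range(1, max_steps + 1):
        u = rng.randrange(n)
        new = 1 if rng.random() < p else opinion[rng.choice(neighbors[u])]
        ones += new - opinion[u]
        opinion[u] = new
        if ones == n:
            return step
    return None

# Cycle on 20 agents with bias p = 0.3: the system converges to consensus
# on the superior opinion.
cycle = [[(i - 1) % 20, (i + 1) % 20] for i in range(20)]
print(biased_voter(cycle, 0.3, random.Random(0)) is not None)  # True
```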

**Geometry and graphs (Thursday 11:30-12:35pm):**

**André Nusser**

**Polygon Placement Revisited: (Degree of Freedom + 1)-SUM Hardness**

We revisit the classical problem of determining the largest copy of a simple polygon $P$ that can be placed into a simple polygon $Q$. Despite significant effort, known algorithms require high polynomial running times. (Barequet and Har-Peled, 2001) give a lower bound of $n^{2-o(1)}$ under the 3SUM conjecture when $P$ and $Q$ are (convex) polygons with $\Theta(n)$ vertices each. This leaves open whether we can establish (1) hardness beyond quadratic time and (2) any superlinear bound for constant-sized $P$ or $Q$.

In this talk, we affirmatively answer these questions under the $k$SUM conjecture, proving natural hardness results that increase with each degree of freedom (scaling, $x$-translation, $y$-translation, rotation):

(1) Finding the largest copy of $P$ that can be $x$-translated into $Q$ requires time $n^{2-o(1)}$ under the 3SUM conjecture.

(2) Finding the largest copy of $P$ that can be arbitrarily translated into $Q$ requires time $n^{2-o(1)}$ under the 4SUM conjecture.

(3) Finding the largest copy of $P$ that can be arbitrarily rotated and translated into $Q$ requires time $n^{3-o(1)}$ under the 5SUM conjecture.

We are not aware of any other such natural $($degree of freedom $+1)$-SUM hardness for a geometric optimization problem.

**Abhiruk Lahiri**

**Algorithms on Sparse Graphs**

In this talk, we will see an overview of a few recent developments in algorithms on sparse graphs. The talk is based on two joint works with Zdenek Dvorak, Daniel Goncalves, Jane Tan, and Torsten Ueckerdt that appeared in ESA 2021 and SoCG 2022.

**Jacob Focke**

**From statistical physics to counting homomorphisms**

A short demonstration outlining how problems from statistical physics motivate the study of counting graph homomorphisms.