

Each time slot below lists its entries for the five workshop days, from left to right.

9:00-9:30    Coffee break
9:30-10:30   Parinya Chalermsook, Graph Products, Hardness of Approximation, and Beyond; Vahab Mirrokni, Ad Allocation: Online Problems and Mechanism Design; Amit Kumar, Greedy Algorithms for Steiner Forest; Evdokia Nikolova, Approximation Algorithms for Offline Risk-averse Combinatorial Optimization; Leen Stougie, Scheduling over Scenarios on Two Machines
             Vincenzo Bonifaci, Open Problems in Natural Network Design; Piotr Krysta, Size versus Truthfulness in the House Allocation Problem; Seeun William Umboh, Online Network Design Algorithms via Hierarchical Decompositions; Tim Roughgarden, Decompositions of Triangle-Dense Graphs; Guido Schäfer, Optimal Tolls with Arc Restrictions and Heterogeneous Players
10:30-11:00  Coffee break
11:00-12:30  Marek Cygan, 5/4 Approximation for Upper Degree-Constrained Partial Orientations; Hyung-Chan An, LP-Based Algorithms for Capacitated Facility Location; Debmalya Panigrahi, Online Algorithms for Node-Weighted Network Design; Virginia Vassilevska Williams, Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter; Karthekeyan Chandrasekaran, Finding Small Stabilizers for Unstable Graphs
             Vineet Goyal, Sparse Process Flexibility Networks: Is Long-Chain Really Optimal?; Tomasz Kociumaka, Constant-Factor Approximation for Capacitated k-Center with Outliers; MohammadTaghi Hajiaghayi, Near-optimal Online Algorithms for Prize-Collecting Steiner Problems; Andreas Wiese, Approximation Schemes for Maximum Weight Independent Set of Rectangles; Umang Bhaskar, Network Improvement for Equilibrium Routing
             Guy Even, Best of Two Local Models: Local Centralized and Local Distributed Algorithms; Jarek Byrka, Recent Progress for the k-Median Problems; Alina R. Ene, Improved Approximation Algorithms for Degree-bounded Network Design Problems with Node Connectivity Requirements; Michael Dinitz, Approximating Graph Spanners; Jannik Matuschke, Fare Evasion in Transit Networks
12:30-14:30  Lunch
15:30-16:00  Coffee break
16:00-17:00  László Végh, A Strongly Polynomial Algorithm for Generalized Flow Maximization; Thomas Rothvoss, The Matching Polytope has Exponential Extension Complexity; Andreas Feldmann, On the Equivalence of Bidirected and Hypergraphic Steiner Tree Relaxations; Nisheeth Vishnoi, On the Feasibility of Max-Entropy-based Approaches for Combinatorial Problems
             Rico Zenklusen, A Simple Order-Oblivious O(log log(rank))-Competitive Algorithm for the Matroid Secretary Problem; Jakub Łącki, Dynamic Steiner Tree and Subgraph TSP; Chaitanya Swamy, Approximation Algorithms for Multi-Vehicle Minimum Latency Problems via LP-based Techniques
19:00        Welcome aperitif; Social dinner


Where to go

Below is a map of the USI campus. Click here for directions to the campus.
Map of USI campus
  1. Main USI building.
  2. Central block: The university canteen ("mensa") and cafeteria are on the first floor.
  3. Lecture and seminar rooms ("red building"): The workshop will be held in room A12.
  4. Informatics building.
Lunches will be served in the "mensa"; coffee breaks and the welcome aperitif will be held in front of the north entrance of the main building (beneath the central block).
The social dinner will be held at La Perla restaurant, Casino of Lugano, via Stauffacher 1 (lake front). You can download the menu. For directions kindly refer to the local info map.



Hyung-Chan An: LP-Based Algorithms for Capacitated Facility Location
Linear programming has played a key role in the study of algorithms for combinatorial optimization problems. In the field of approximation algorithms, this is well illustrated by the uncapacitated facility location problem. A variety of algorithmic methodologies, such as LP-rounding and the primal-dual method, have been applied to and evolved from algorithms for this problem. Unfortunately, this collection of powerful algorithmic techniques had not been successfully applied to the more general capacitated facility location problem: all known algorithms with good performance guarantees were based on a single technique, local search, and no linear programming relaxation was known to efficiently approximate the problem.

In this paper, we present a linear programming relaxation with constant integrality gap for capacitated facility location. We demonstrate that fundamental concepts from matching theory, including alternating paths and residual networks, provide key insights leading to this strong relaxation. Our algorithmic proof of the integrality gap finally opens up the rich toolbox of LP-based methodologies for this problem: we present a constant-factor approximation algorithm based on LP-rounding. Our results resolve one of the ten open problems selected in the approximation algorithms textbook of Williamson and Shmoys.

Joint work with Mohit Singh and Ola Svensson.
Umang Bhaskar: Network Improvement for Equilibrium Routing
In many diverse applications, users strategically choose routes in a network to minimize the delay they face. Examples of such applications are road traffic, data networks, and cloud computing marketplaces. Routing games model this strategic behaviour of the users, and are used to understand and predict the impact of this behaviour on the traffic and delays in the network.

We consider a fundamental problem facing the manager of such a network: to make improvements to the network under a fixed budget so that the average delay of the strategic users is minimized. The problem is modeled using routing games and has been widely studied in transportation research. However, despite its practical relevance, no polynomial-time algorithms with proven guarantees are known. In this talk, I will present both algorithms and hardness results for network improvement in a number of different network topologies, and describe a number of related problems as directions for future research.
Vincenzo Bonifaci: Open Problems in Natural Network Design
We propose to explore network optimization processes arising in nature. We identify a few natural networks for further study: the tubular network formed by the Physarum polycephalum slime mold when foraging food, and the neuronal structure in the worm Caenorhabditis elegans and in the fruit fly Drosophila melanogaster.

P. polycephalum (Physarum) has been experimentally demonstrated to be able to construct reliable and efficient networks while foraging for food. The Physarum cell grows as long as nutrition is abundant. When nutrition is limited, Physarum forms a network of interconnected veins; the veins are gel-like tubes in which the cytoplasm flows. The analysis of mathematical models of Physarum network formation has so far been mostly limited to the two-terminal case. We discuss striking similarities between existing “Physarum dynamics” for the shortest path problem [Bonifaci et al., SODA 2012, ICALP 2013] and the algorithm by Christiano et al. for (approximate) maximum flow in an undirected network [Christiano et al., STOC 2011]. In the multi-terminal case, the main open problem is to analyze the economy-of-scale effect and to characterize the set of stable solutions of the dynamics.

The layout of neurons in C. elegans and D. melanogaster follows rules dictated by an efficient use of the neuronal resources. The wiring economy principle postulates that, for a given wiring diagram, neurons are arranged so as to minimize the wiring cost, which is related to wire volume and signal propagation. There are some proposals for the mathematical modeling of an Optimal Neuronal Layout problem [Chklovskii, Neural Comput., 2004], but more detailed models include computationally hard objective functions or constraints, such as avoiding configurations in which multiple neurons occupy the same position in space. We formulate some of these models and propose to study them from the point of view of approximation algorithms.
Jarek Byrka: Recent Progress for the $k$-Median Problems
I will briefly discuss two sets of results.
First, we improve upon the Li-Svensson approximation ratio for $k$-median from 2.732 to 2.611 by developing an algorithm that improves upon various aspects of their work. In particular, we give an improved approximation for generating bi-point solutions: instead of the factor-2 approximation given by the JMS algorithm, we use an interesting variant of this primal-dual algorithm and obtain a 1.9524-approximation.

Joint work with Thomas Pensyl, Bartosz Rybicki, Aravind Srinivasan, and Khoa Trinh.

Second, we consider the hard capacitated $k$-median problem, and present LP-rounding algorithms that provide a constant-factor approximation of the connection cost but slightly violate the capacities. We show that violating the capacities by a factor of $2+\epsilon$ suffices in the case of uniform capacities (the setting may include nonuniform facility opening costs), and that a capacity violation by a factor of $3+\epsilon$ suffices for nonuniform capacities (with no facility opening costs).

This is a joint work with Krzysztof Fleszar, Bartosz Rybicki, and Joachim Spoerhase.
Parinya Chalermsook: Graph Products, Hardness of Approximation, and Beyond
The study of graph products is a major research topic and typically concerns the term $f(G∗H)$, e.g., showing that $f(G∗H) = f(G)f(H)$. In this paper, we study graph products in the non-standard form $f(R[G∗H])$, where $R$ is a “reduction”, a transformation of any graph into an instance of an intended optimization problem. As applications, we use this technique to resolve several open problems:
* Deterministic Finite Automata (DFA) are not PAC-learnable unless NP=RP. This resolves an open question raised 25 years ago by Pitt and Warmuth.
* Edge-Disjoint Paths on DAGs is hard to approximate to within $n^{1/2-\epsilon}$. This matches the upper bound of [Chekuri, Khanna and Shepherd 2005].
* Tight hardness of $k$-Cycle Packing for large $k$.
* Tight hardnesses of Induced Matching and Poset Dimension.
* An improved hardness of Strong Edge Coloring and colorings of power graphs.
* Alternate (and arguably simpler) proofs of many other results, such as hardness of learning DNF, CNF and intersection of halfspaces; hardness of some pricing problems.
Our technique reduces the task of proving hardnesses to merely analyzing graph product inequalities, which are often as simple as textbook exercises.

This talk is based on joint papers with B. Laekhanukit and D. Nanongkai (SODA'13, LATIN'14, and some other unpublished results).
Karthik Chandrasekaran: Finding Small Stabilizers for Unstable Graphs
An undirected graph $G=(V,E)$ is stable if its inessential vertices (those that are exposed by at least one maximum matching) form a stable set. We call a set of edges $F \subseteq E$ a stabilizer if its removal from $G$ yields a stable graph. In this talk, I will address the following natural edge-deletion question: given a graph $G=(V,E)$, can we find a minimum-cardinality stabilizer?

Stable graphs play an important role in cooperative game theory. In the classic matching game introduced by Shapley and Shubik we are given an undirected graph $G=(V,E)$ where vertices represent players, and we define the value of each subset $S \subseteq V$ as the cardinality of a maximum matching in the subgraph induced by $S$. The core of such a game contains all fair allocations of the value of $V$ among the players, and is well-known to be non-empty iff graph $G$ is stable. The stabilizer problem addresses the question of how to modify the graph to ensure that the core is non-empty.

We show that this problem is vertex-cover hard. We then prove that there is a minimum-cardinality stabilizer that avoids some maximum matching of $G$. We use this insight to give efficient approximation algorithms for sparse graphs and for regular graphs.

Joint work with Adrian Bock, Jochen Koenemann, Britta Peis and Laura Sanità.
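To make the definitions above concrete, here is a small brute-force sketch (exponential time, intended only for tiny graphs; the function names are illustrative, not from the paper) that identifies the inessential vertices of a graph and checks whether they form a stable set, i.e., whether the graph itself is stable.

```python
def max_matching_size(edges):
    """Size of a maximum matching, by brute-force branching (tiny graphs only)."""
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    # Option 1: leave edge (u, v) out of the matching.
    best = max_matching_size(rest)
    # Option 2: put (u, v) into the matching and drop all edges touching u or v.
    remaining = [(a, b) for (a, b) in rest if a not in (u, v) and b not in (u, v)]
    return max(best, 1 + max_matching_size(remaining))

def inessential_vertices(vertices, edges):
    """A vertex is inessential iff some maximum matching exposes it, i.e. iff
    deleting it does not decrease the maximum matching size."""
    nu = max_matching_size(edges)
    return {v for v in vertices
            if max_matching_size([e for e in edges if v not in e]) == nu}

def is_stable(vertices, edges):
    """A graph is stable iff its inessential vertices form a stable (independent) set."""
    ines = inessential_vertices(vertices, edges)
    return not any(u in ines and v in ines for (u, v) in edges)

# A triangle is unstable: all three vertices are inessential yet pairwise adjacent;
# removing one edge (a stabilizer of size 1) makes the remaining path stable.
print(is_stable({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))   # False
print(is_stable({1, 2, 3}, [(1, 2), (2, 3)]))           # True
```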
Marek Cygan: 5/4 Approximation for Upper Degree-Constrained Partial Orientations
In the Upper Degree-Constrained Partial Orientation problem we are given an undirected graph $G = (V, E)$, together with two degree constraint functions $d^-, d^+ : V \to \mathbb{N}$. The goal is to orient as many edges as possible, in such a way that for each vertex $v \in V$ the number of arcs entering $v$ is at most $d^-(v)$, whereas the number of arcs leaving $v$ is at most $d^+(v)$. This problem was introduced by Gabow [SODA’06] and proved to be MAXSNP-hard. In the same paper Gabow presented an LP-based iterative rounding 4/3-approximation algorithm.

Since the problem in question is a special case of the classic 3-Dimensional Matching, which in turn is a special case of the $k$-Set Packing problem, it is reasonable to ask whether recent improvements in approximation algorithms for the latter two problems [Cygan, FOCS’13; Sviridenko & Ward, ICALP’13] allow for an improved approximation for Upper Degree-Constrained Partial Orientation. We follow this line of reasoning and present a polynomial-time local search algorithm with approximation ratio $5/4+\epsilon$. Our algorithm uses a combination of two types of rules, namely improving sets of bounded pathwidth from the recent $(4/3+\epsilon)$-approximation algorithm for 3-Set Packing [Cygan, FOCS’13], together with a simple rule tailor-made for the setting of partial orientations. In particular, we exploit the fact that one can check in polynomial time whether it is possible to orient all the edges of a given graph [Gyárfás & Frank, Combinatorics’76].

This is a joint work with Tomasz Kociumaka.
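For orientation, one natural LP relaxation of the problem (stated here for illustration; not necessarily the exact relaxation behind Gabow's iterative rounding) has a variable for each edge and each of its two possible directions:

$$
\begin{aligned}
\max\quad & \textstyle\sum_{\{u,v\}\in E} \bigl(x_{u\to v}+x_{v\to u}\bigr) \\
\text{s.t.}\quad & x_{u\to v}+x_{v\to u}\le 1 \quad \text{for every edge } \{u,v\}\in E, \\
& \textstyle\sum_{u:\,\{u,v\}\in E} x_{u\to v}\le d^-(v) \quad\text{and}\quad \textstyle\sum_{u:\,\{u,v\}\in E} x_{v\to u}\le d^+(v) \quad \text{for every vertex } v\in V, \\
& x\ge 0 .
\end{aligned}
$$

Integral solutions of this program correspond exactly to partial orientations respecting the degree bounds.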
Michael Dinitz: Approximating Graph Spanners
Graph spanners (subgraphs which approximately preserve distances) have been studied extensively since the 1980s. Many of the known results concern the optimal tradeoffs between various parameters, particularly the stretch and the size of the spanner. But there has been some recent progress on a different and less developed line of research: fixing the allowable stretch, and optimizing the size. This turns spanners into more of a computational problem, and allows us to use many of the standard techniques from approximation algorithms (convex relaxations in particular). In this talk we will give an overview of some of the progress in this area, its limitations, and some possible future directions.
Alina R. Ene: Improved Approximation Algorithms for Degree-bounded Network Design Problems with Node Connectivity Requirements
We consider degree-bounded network design problems with element and vertex connectivity requirements. In the degree-bounded Survivable Network Design (SNDP) problem, the input is an undirected graph $G = (V, E)$ with weights $w(e)$ on the edges, degree bounds $b(v)$ on the vertices, and connectivity requirements $r(uv)$ for each pair $uv$ of vertices. The goal is to select a minimum-weight subgraph $H$ of $G$ that meets the connectivity requirements and satisfies the degree bounds on the vertices: for each pair $uv$ of vertices, $H$ has $r(uv)$ disjoint paths between $u$ and $v$; additionally, each vertex $v$ is incident to at most $b(v)$ edges in $H$. We give the first $(O(1), O(1) b(v))$ bicriteria approximation algorithms for the degree-bounded SNDP problem with element connectivity requirements and for several degree-bounded SNDP problems with vertex connectivity requirements.

Our algorithms construct a subgraph $H$ whose weight is at most $O(1)$ times the optimal such that each vertex $v$ is incident to at most $O(1) b(v)$ edges in $H$. Our approach extends to network design problems in directed graphs with both in-degree and out-degree constraints.

This talk is based on joint work with Ali Vakilian (MIT).
Guy Even: Best of Two Local Models: Local Centralized and Local Distributed Algorithms
We consider two models of computation: local centralized algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms.

Distributed vertex coloring is employed to design improved local centralized algorithms for maximal independent set, maximal matching, and an approximation scheme for maximum matching over bounded-degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes is ${\rm poly}(\log^* n)$. Our algorithms for maximal independent set and maximal matching improve over previous randomized algorithms by Alon et al. (SODA 2012) and Mansour et al. (ICALP 2012). In these previous algorithms, the number of probes and the space required for storing the state between queries are ${\rm poly}(\log n)$.

The recursive local centralized improvement technique by Nguyen and Onak [2008] is employed to obtain an improved distributed approximation scheme for maximum matching. The number of rounds of the distributed algorithm is $O(\log^*n)$ for bounded degree graphs.

Joint work with Moti Medina and Dana Ron.
Andreas Feldmann: On the Equivalence of Bidirected and Hypergraphic Steiner Tree Relaxations
The bottleneck of the currently best $(\ln(4) + \epsilon)$-approximation algorithm for the NP-hard Steiner tree problem is the solution of its large, so called hypergraphic, linear programming relaxation (HYP). Hypergraphic LPs are NP-hard to solve exactly, and it is a formidable computational task to even approximate them sufficiently well.

We focus on another well-studied but poorly understood LP relaxation of the problem: the bidirected cut relaxation (BCR). This LP is compact, and can therefore be solved efficiently. Its integrality gap is known to be greater than 1.16, and while this is widely conjectured to be close to the real answer, only a (trivial) upper bound of 2 is known.

We give an efficient constructive proof that BCR and HYP are polyhedrally equivalent in instances that do not have an (edge-induced) claw on Steiner vertices, i.e., instances that do not contain a Steiner vertex with three Steiner neighbors. This implies faster $\ln(4)$-approximations for these graphs, and is a significant step forward from the previously known equivalence for (so-called quasi-bipartite) instances in which the Steiner vertices form an independent set. We complement our results by showing that, even when restricting to instances where the Steiner vertices induce a single star, determining whether the two relaxations are equivalent is NP-hard.
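As a reminder, the bidirected cut relaxation is usually stated as follows (standard formulation, with the usual choice of an arbitrary root terminal): replace each undirected edge by two antiparallel arcs of the same cost and require every cut separating a terminal from the root to be entered,

$$
\begin{aligned}
\min\quad & \textstyle\sum_{a\in A} c_a\, x_a \\
\text{s.t.}\quad & \textstyle\sum_{a\in\delta^-(S)} x_a \ \ge\ 1 \quad \text{for every } S\subseteq V\setminus\{r\} \text{ with } S\cap R\neq\emptyset, \\
& x\ge 0,
\end{aligned}
$$

where $R$ is the terminal set, $r \in R$ is a fixed root, $A$ contains arcs $(u,v)$ and $(v,u)$ of cost $c_{\{u,v\}}$ for every edge $\{u,v\}$, and $\delta^-(S)$ denotes the arcs entering $S$.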
Vineet Goyal: Sparse Process Flexibility Networks: Is Long-Chain Really Optimal?
The long chain is a directed bipartite network, with directed edges from supply nodes (with fixed unit supply) to demand nodes (with random demand), whose edges form a Hamiltonian cycle. This design was introduced in a seminal paper of Jordan and Graves (1995) and has become an important design in process flexibility. Jordan and Graves (1995) observed empirically that its expected performance (satisfied demand) is quite close to that of the complete bipartite graph for certain demand distributions. Recently, Simchi-Levi and Wei (2012) showed that the long chain maximizes the expected performance among all 2-regular networks for exchangeable demand distributions.

We study the performance of the long chain in comparison to all networks with $2n$ edges when the assumption of 2-regularity is relaxed. We show that, surprisingly, the long chain is not optimal in this class of networks, even for i.i.d. demand distributions. We present a family of instances where a disconnected network with $2n$ edges has a strictly better performance than the long chain.

If we restrict to connected networks with $2n$ arcs, we show that long chain is optimal for exchangeable demand distributions. Our proof is based on a combinatorial analysis of the structure of maximum flow in directed graphs and a coupling argument that reduces the comparison of expected performance to a sample pathwise comparison of satisfied demand. Our analysis provides useful insights towards not just understanding the optimality of long chain but also towards designing more general sparse flexibility networks.

This is joint work with Antoine Desir (Columbia), Yehua Wei (Duke) and Jiawei Zhang (NYU).
MohammadTaghi Hajiaghayi: Near-optimal Online Algorithms for Prize-Collecting Steiner Problems
In this paper, we give the first online algorithms with a polylogarithmic competitive ratio for the node-weighted prize-collecting Steiner tree and Steiner forest problems. The competitive ratios are optimal up to logarithmic factors. In fact, we give a generic technique for reducing online prize-collecting Steiner problems to the fractional version of their non-prize-collecting counterparts, losing only a logarithmic factor in the competitive ratio. This reduction is agnostic to the cost model (edge-weighted or node-weighted) of the input graph and applies to a wide class of network design problems including Steiner tree, Steiner forest, group Steiner tree, and group Steiner forest.
Consequently, we also give the first online algorithms for the edge-weighted prize-collecting group Steiner tree and group Steiner forest problems with a polylogarithmic competitive ratio, since corresponding fractional guarantees for the non-prize-collecting variants of these problems were previously known. For the most fundamental problem in this class, namely the prize-collecting Steiner tree problem, we further improve our results. For the node-weighted prize-collecting Steiner tree problem, we use the generic reduction but improve the best known online Steiner tree result of Naor et al. on two counts. We improve the competitive ratio by a logarithmic factor to make it optimal (up to constants), and we also give a new dual-fitting analysis showing that the competitive ratio holds against the fractional optimum. This result employs a new technique that we call dual averaging, which we hope will be useful for other dual-fitting analyses as well. For the edge-weighted prize-collecting Steiner tree problem, we match the optimal (up to constants) competitive ratio of $O(\log n)$ that was previously achieved by Qian and Williamson, but provide a substantially simpler analysis.

This is a joint work with Vahid Liaghat and Debmalya Panigrahi.
Tomasz Kociumaka: Constant-Factor Approximation for Capacitated $k$-Center with Outliers
The $k$-center problem is a classic facility location problem: given an edge-weighted graph $G=(V,E)$, one is to find a subset $S$ of $k$ vertices such that each vertex in $V$ is "close" to some vertex in $S$. The approximation status of this basic problem is well understood, as a simple 2-approximation algorithm is known to be tight. Consequently, different extensions have been studied. In the capacitated version of the problem each vertex is assigned a capacity, which is a strict upper bound on the number of clients a facility can serve when located at this vertex. A constant-factor approximation for the capacitated $k$-center was obtained last year in [Cygan, Hajiaghayi and Khuller, FOCS'12], and was recently improved to a 9-approximation in [An et al., IPCO'14]. In a different generalization of the problem some clients (denoted as outliers) may be disregarded: here we are additionally given an integer $p$, and the goal is to serve exactly $p$ clients, which the algorithm is free to choose. In [Charikar et al., SODA'01] the authors presented a 3-approximation for the $k$-center problem with outliers. In this paper we consider a common generalization of the two extensions previously studied separately, i.e. we work with the capacitated $k$-center with outliers. We present the first constant-factor approximation algorithm for this problem, with an approximation ratio of 25, even for the case of non-uniform hard capacities.

Joint work with Marek Cygan, originally presented at STACS 2014.
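As background, the tight 2-approximation for the plain (uncapacitated, no-outlier) $k$-center problem mentioned above can be realized, for example, by Gonzalez's farthest-first traversal; a short sketch is given below (assuming a symmetric distance matrix satisfying the triangle inequality). The capacitated outlier version treated in the talk requires substantially different, LP-based ideas.

```python
def k_center_farthest_first(dist, k):
    """Gonzalez's farthest-first traversal: a classical 2-approximation for
    metric k-center. `dist` is an n x n symmetric matrix obeying the triangle
    inequality; returns (chosen centers, radius of the induced solution)."""
    n = len(dist)
    centers = [0]                      # start from an arbitrary vertex
    d_to_centers = list(dist[0])       # distance of each vertex to its closest center
    while len(centers) < k:
        farthest = max(range(n), key=lambda v: d_to_centers[v])
        centers.append(farthest)
        for v in range(n):
            d_to_centers[v] = min(d_to_centers[v], dist[farthest][v])
    return centers, max(d_to_centers)

# Four points on a line at coordinates 0, 1, 10, 11; with k = 2 the optimal
# radius is 1, and farthest-first traversal achieves radius 1 here as well.
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
print(k_center_farthest_first(dist, 2))
```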
Piotr Krysta: Size versus Truthfulness in the House Allocation Problem
We study the House Allocation problem (also known as the Assignment problem), i.e., the problem of allocating a set of objects among a set of agents, where each agent has ordinal preferences (possibly involving ties) over a subset of the objects. We focus on truthful mechanisms without monetary transfers for finding large Pareto optimal matchings. It is straightforward to show that no deterministic truthful mechanism can approximate a maximum cardinality Pareto optimal matching with ratio better than 2. We thus consider randomized mechanisms. We give a natural and explicit extension of the classical Random Serial Dictatorship Mechanism (RSDM) specifically for the House Allocation problem where preference lists can include ties. We thus obtain a universally truthful randomized mechanism for finding a Pareto optimal matching and show that it achieves an approximation ratio of $e/(e-1)$. The same bound holds even when agents have priorities (weights) and our goal is to find a maximum weight (as opposed to maximum cardinality) Pareto optimal matching. On the other hand, we give a lower bound of 18/13 on the approximation ratio of any universally truthful Pareto optimal mechanism in settings with strict preferences. In the case that the mechanism must additionally be non-bossy, an improved lower bound of $e/(e-1)$ holds. This lower bound is tight given that RSDM for strict preference lists is non-bossy. We moreover interpret our problem in terms of the classical secretary problem and prove that our mechanism provides the best randomized strategy for the administrator who interviews the applicants.

Joint work with David Manlove, Baharak Rastegari and Jinshan Zhang.
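For orientation, here is a minimal sketch of the classical Random Serial Dictatorship mechanism for strict preference lists, i.e. the mechanism whose extension to ties is the subject of the talk. The input format and names are illustrative only.

```python
import random

def random_serial_dictatorship(preferences, rng=random):
    """Classical RSD for strict preferences: draw a uniformly random order of
    the agents; each agent in turn takes their most-preferred object that is
    still available (or nothing, if all acceptable objects are gone).
    `preferences[agent]` is a list of acceptable objects, most preferred first."""
    agents = list(preferences)
    rng.shuffle(agents)
    taken = set()
    matching = {}
    for agent in agents:
        for obj in preferences[agent]:
            if obj not in taken:
                matching[agent] = obj
                taken.add(obj)
                break
    return matching

prefs = {"a1": ["x", "y"], "a2": ["x"], "a3": ["y", "z"]}
print(random_serial_dictatorship(prefs))
```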
Amit Kumar: Greedy Algorithms for Steiner Forest
We consider the following simple algorithm for the Steiner forest problem: find the closest pair of terminals which have not been connected to their respective mates yet, and connect them by a shortest path. Contract the edges in this path and repeat this process until we get a feasible solution. We show that this is a constant-factor approximation algorithm. We apply this result to obtain simple cost-sharing schemes for the Steiner forest problem.
This is joint work with Anupam Gupta.
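A direct implementation of the greedy rule just described might look as follows (a sketch only: contraction is simulated by making purchased edges free, and a connected graph with positive edge weights and one pair per terminal is assumed). The constant-factor analysis is, of course, the substance of the talk.

```python
import heapq
from itertools import combinations

def dijkstra(adj, free_edges, source):
    """Shortest-path distances and parents, where purchased edges cost 0."""
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u].items():
            cost = 0.0 if frozenset((u, v)) in free_edges else w
            if v not in dist or d + cost < dist[v]:
                dist[v] = d + cost
                parent[v] = u
                heapq.heappush(heap, (d + cost, v))
    return dist, parent

def greedy_steiner_forest(adj, pairs):
    """Repeatedly take the closest pair of terminals not yet connected to their
    respective mates, connect them by a shortest path, and 'contract' by making
    bought edges free. Assumes positive weights and a connected graph."""
    mate = {}
    for s, t in pairs:
        mate[s], mate[t] = t, s
    bought = set()
    while True:
        dists = {x: dijkstra(adj, bought, x) for x in mate}
        active = [x for x in mate if dists[x][0][mate[x]] > 0]
        if not active:
            return bought
        # closest pair of active terminals that are not yet in the same component
        x, y = min((p for p in combinations(active, 2) if dists[p[0]][0][p[1]] > 0),
                   key=lambda p: dists[p[0]][0][p[1]])
        parent = dists[x][1]
        v = y
        while parent[v] is not None:          # buy the edges of the shortest x-y path
            bought.add(frozenset((v, parent[v])))
            v = parent[v]

# Terminal pairs (a, c) and (b, d) on a 4-cycle with one heavier edge.
adj = {"a": {"b": 1, "d": 5}, "b": {"a": 1, "c": 1},
       "c": {"b": 1, "d": 1}, "d": {"c": 1, "a": 5}}
print(greedy_steiner_forest(adj, [("a", "c"), ("b", "d")]))
```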
Jakub Łącki: Dynamic Steiner Tree and Subgraph TSP
We study the Steiner tree problem over a dynamic set of terminals. Consider the model where we are given an $n$-vertex graph $G = (V, E, w)$ with positive real edge weights, and our goal is to maintain a tree in $G$ which is a good approximation of the minimum Steiner tree spanning a terminal set $S$ that changes over time. The changes applied to the terminal set are either terminal additions or terminal removals. Our task is twofold: we want to support updates in sublinear $o(n)$ time, and to keep the approximation factor of the algorithm as small as possible. We obtain algorithms that maintain a $(6 + \epsilon)$-approximate tree in general graphs and a $(2 + \epsilon)$-approximate tree in planar graphs. Our results also extend to the similar Subset TSP problem, with the same approximation guarantees.
Jannik Matuschke: Fare Evasion in Transit Networks
We study several variants of a new model for optimizing fare inspections in public transit networks. The model is based on a bilevel program, where in the first level, the leader (the network operator) determines probabilities for inspecting passengers at different locations, while in the second level, the followers (the fare-evading passengers) respond by optimizing their routes given the inspection probabilities and travel times.
To model the followers' behavior we study both a non-adaptive variant, in which passengers select a path a priori and continue along it throughout their journey, and an adaptive variant, in which they gain information along the way and use it to update their route. For these problems, which are interesting in their own right, we design exact and approximate algorithms. We also prove a tight bound of 3/4 on the ratio of the optimal cost between adaptive and non-adaptive strategies.
For the leader's optimization problem, we study a fixed-fare and a flexible-fare variant, where ticket prices may or may not be set at the operator's will. For the latter variant, we design an LP-based approximation algorithm. For all variants, we devise a local search procedure that shifts inspection probabilities within an initially determined support set. An extensive computational study on instances of the Dutch railway and the Amsterdam subway network reveals that our solutions are within 95% of theoretical upper bounds drawn from the LP relaxation.
This is joint work with José R. Correa, Tobias Harks, and Vincent J.C. Kreuzen.
Vahab Mirrokni: Ad Allocation: Online Problems and Mechanism Design
Handling advertiser budget and capacity constraints is a central issue in online advertising, resulting in many interesting optimization and game theoretic problems. In this talk, we discuss two aspects of dealing with budget constraints: the online stochastic optimization problem and ad allocation/pricing mechanisms with strategic advertisers.

In the first part, we survey recent results focusing on simultaneous adversarial and stochastic approximations, improved approximations for stochastic variants of the problem, e.g., improved algorithms for online submodular maximization, and finally the multi-objective online optimization. In this regard, we also pose several remaining open problems.
In the second part, we discuss the mechanism design problems in the online setting, present the framework of clinching auctions to deal with these constraints, and conclude with open problems in this area.
Evdokia Nikolova: Approximation Algorithms for Offline Risk-averse Combinatorial Optimization
We consider generic optimization problems that can be formulated as minimizing the cost $w^\top x$ of a feasible solution $x$ over a combinatorial feasible set $F \subset \{0, 1\}^n$. For these problems we describe a framework of risk-averse stochastic problems where the cost vector $W$ has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail, or a small convex combination of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs, for which no efficient algorithms are known.
We provide efficient general-purpose approximation algorithms. They use as a black box an (exact or approximate) algorithm for the underlying deterministic problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available approximation algorithm for the deterministic problem, we construct an approximation algorithm with almost the same approximation factor for the stochastic problem. The algorithms are based on a geometric analysis of the nonlinear level sets of the objective functions.
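For concreteness, with $x \in F \subseteq \{0,1\}^n$ and independent component costs $W_i$ with mean $\mu_i$ and variance $\tau_i$, the mean-standard-deviation objective mentioned above can be written as (using $x_i^2 = x_i$):

$$
\min_{x\in F}\ \mathbb{E}\bigl[W^\top x\bigr] + c\,\sqrt{\mathrm{Var}\bigl(W^\top x\bigr)}
\;=\;
\min_{x\in F}\ \mu^\top x + c\,\sqrt{\tau^\top x},
$$

so the objective depends only on the two linear statistics $\mu^\top x$ and $\tau^\top x$; the square-root term is what makes the resulting program nonconvex, and the deterministic black box used by the algorithms is a solver for linear objectives $\lambda^\top x$ over $F$.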
Debmalya Panigrahi: Online Algorithms for Node-Weighted Network Design
In recent years, an online adaptation of the classical primal-dual paradigm has been successfully used to obtain new online algorithms for node-weighted network design problems, and simplify existing ones for their edge-weighted counterparts. In this talk, I will give an outline of this emerging toolbox using, for illustration, the Steiner tree (Naor-P.-Singh, 2011) and Steiner forest (Hajiaghayi-Liaghat-P., 2013) problems, and if time permits, their prize-collecting variants (Hajiaghayi-Liaghat-P., 2014).
Thomas Rothvoss: The Matching Polytope has Exponential Extension Complexity
A popular method in combinatorial optimization is to express polytopes $P$, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so-called extension complexity.
However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints?

We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete $n$-node graph is $2^{\Omega(n)}$.
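For readers new to the area, the relevant definitions are standard: an extension (or extended formulation) of a polytope $P \subseteq \mathbb{R}^n$ is a polyhedron $Q = \{(x,y) \in \mathbb{R}^{n+m} : Ax + By \le b\}$ whose projection onto the $x$-variables is $P$, and the extension complexity $\mathrm{xc}(P)$ is the minimum number of inequalities over all extensions of $P$. In this language the result reads

$$
\mathrm{xc}\bigl(P_{\mathrm{matching}}(K_n)\bigr) = 2^{\Omega(n)},
$$

where $P_{\mathrm{matching}}(K_n)$ denotes the perfect matching polytope of the complete graph on $n$ nodes.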
Tim Roughgarden: Decompositions of Triangle-Dense Graphs
High triangle density — the graph property stating that a constant fraction of two-hop paths belong to a triangle — is a common signature of social networks. This paper studies triangle-dense graphs from a structural perspective. We prove constructively that significant portions of a triangle-dense graph are contained in a disjoint union of dense, radius $2$ subgraphs. This result quantifies the extent to which triangle-dense graphs resemble unions of cliques. We also show that our algorithm recovers planted clusterings in approximation-stable $k$-median instances.

Joint work with Rishi Gupta and C. Seshadhri.
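The notion of triangle density used here is the fraction of two-hop paths (wedges) whose endpoints are adjacent, as described above. A short sketch for computing this quantity is below (illustrative only; it is not part of the decomposition algorithm, and the edge list is assumed to contain each unordered pair once).

```python
def triangle_density(edges):
    """Fraction of two-hop paths (wedges) that close into a triangle,
    i.e. 3 * (#triangles) / (#wedges). Input: undirected edges, each listed once."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    wedges = sum(len(n) * (len(n) - 1) // 2 for n in nbrs.values())
    # summing common neighbors over edges counts each triangle three times
    triangles = sum(len(nbrs[u] & nbrs[v]) for u, v in edges) // 3
    return 3 * triangles / wedges if wedges else 0.0

# A clique on {1, 2, 3, 4} has density 1; attaching the pendant vertex 5 lowers it.
clique = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(triangle_density(clique))             # 1.0
print(triangle_density(clique + [(4, 5)]))  # 0.8
```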
Guido Schäfer: Optimal Tolls with Arc Restrictions and Heterogeneous Players
The problem of computing optimal network tolls that induce a Nash equilibrium of minimum total cost has been studied intensively in the literature, but mostly under the assumption that these tolls are unrestricted. Here we consider this problem under the more realistic assumption that the tolls have to respect some given upper bound restrictions on the arcs. Our main focus is on the heterogeneous player case, i.e., players may have different sensitivities to tolls. This case gives rise to new challenges in devising algorithms to compute optimal restricted tolls. We will review some recent results for the restricted network toll problem and discuss some open problems.
Leen Stougie: Scheduling over Scenarios on Two Machines
We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines which performs well over all possible scenarios. Each scenario is a subset of jobs that must be executed in that scenario, and all scenarios are given explicitly. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. This work is one of the first to deal explicitly with optimization problems over scenarios, a new and rich field of interesting problems.

Joint work with Esteban Feuerstein, Alberto Marchetti-Spaccamela, Frans Schalekamp, René Sitters, Suzanne van der Ster and Anke van Zuylen.
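In symbols (a formalization consistent with the description above, with $p_j$ denoting the processing time of job $j$, scenarios $S_1,\dots,S_k \subseteq J$, and a single assignment $\sigma: J \to \{1,2\}$ of jobs to the two machines), the two objectives are

$$
\min_{\sigma}\ \max_{1\le i\le k}\ \max_{m\in\{1,2\}} \sum_{j\in S_i:\,\sigma(j)=m} p_j
\qquad\text{and}\qquad
\min_{\sigma}\ \sum_{i=1}^{k}\ \max_{m\in\{1,2\}} \sum_{j\in S_i:\,\sigma(j)=m} p_j .
$$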
Chaitanya Swamy: Approximation Algorithms for Multi-Vehicle Minimum Latency Problems via LP-based Techniques
We consider various multi-vehicle versions of the minimum latency problem. There is a fleet of $k$ vehicles located at one or more depot nodes, and we seek a collection of routes for these vehicles that visit all nodes so as to minimize the total latency incurred, which is the sum of the client waiting times. We obtain a 7.18-approximation for the version where all vehicles are located at the same depot and an 8.49-approximation for the version where vehicles may be located at multiple depots, both of which are the first improvements on this problem in a decade. Perhaps more significantly, our algorithms exploit various LP-relaxations including a bidirected relaxation as well as a configuration-style LP relaxation. This is noteworthy for two reasons. First, it gives the first concrete evidence of the effectiveness of LP-relaxations for minimum latency problems. Second, we show how to effectively leverage two classes of LPs — bidirected LPs and configuration LPs — that are often believed to be quite powerful but have only sporadically been effectively leveraged for network-design and vehicle-routing problems. The 7.18-approximation uses a bidirected LP to obtain the following key result that is of independent interest: if there are $k$ strolls rooted at $r$ of total cost $C$ that together cover $l$ nodes, then one can find a single tree of cost essentially $C$ that covers at least $l$ nodes. The 8.49-approximation for the multiple-depot version relies more crucially on an underlying configuration-LP for the problem.

This is joint work with Ian Post.
Seeun William Umboh: Online Network Design Algorithms via Hierarchical Decompositions
We develop an approach to analyzing online network design algorithms via a novel application of the hierarchically separated tree (HST) embeddings of Bartal, and Fakcharoenphol, Rao and Talwar. The typical application of tree embeddings in the online setting requires one to embed the entire graph in advance, resulting in a randomized algorithm whose competitive ratio depends on the size of the graph. In contrast, our approach yields simple greedy-like deterministic algorithms whose competitive ratios depend only on the number of requests.

The key technical contribution is a charging scheme which shows that the cost of the algorithm is at most the cost of the optimal solution on any HST embedding of the terminals. The result of Fakcharoenphol, Rao and Talwar then implies that the competitive ratio is $O(\log k)$ where $k$ is the number of requests.

We use our approach to obtain deterministic $O(\log k)$-competitive online algorithms for the Steiner network (with edge duplication), rent-or-buy and connected facility location problems. This improves on the randomized $O(\log k)$ and deterministic $O(\log^2 k)$ competitive ratios by Awerbuch, Azar and Bartal for the rent-or-buy problem, and the randomized $O(\log^2 k)$ competitive ratio by San Felice, Williamson and Lee for the connected facility location problem.
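As an illustration of the kind of greedy online algorithm this framework is designed to analyze (and not necessarily the exact algorithms from the talk), here is the classical greedy online Steiner tree rule on a metric, which is classically known to be $O(\log k)$-competitive: connect each arriving terminal to the closest terminal that has already arrived.

```python
def greedy_online_steiner_tree(metric, requests):
    """Classical greedy online Steiner tree on a metric: each arriving terminal
    is connected to the nearest previously arrived terminal. `metric[u][v]` is
    a symmetric distance; returns (bought connections, total cost)."""
    arrived, edges, cost = [], [], 0.0
    for t in requests:
        if arrived:
            nearest = min(arrived, key=lambda u: metric[u][t])
            edges.append((nearest, t))
            cost += metric[nearest][t]
        arrived.append(t)
    return edges, cost

# Four points on a line (positions 0, 10, 1, 11), requested in that order.
pos = {"a": 0, "b": 10, "c": 1, "d": 11}
metric = {u: {v: abs(pu - pv) for v, pv in pos.items()} for u, pu in pos.items()}
print(greedy_online_steiner_tree(metric, ["a", "b", "c", "d"]))
```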
László Végh: A Strongly Polynomial Algorithm for Generalized Flow Maximization
A strongly polynomial algorithm is given for the generalized flow maximization problem. It uses a new variant of the scaling technique, called continuous scaling. The main measure of progress is that within a strongly polynomial number of steps, an arc can be identified that must be tight in every dual optimal solution, and thus can be contracted.
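In one standard formulation (variants differ in how node supplies are handled), generalized flow maximization is the following: given a directed graph with arc capacities $u_e$ and gain factors $\gamma_e > 0$, where a flow of $f_e$ entering arc $e$ delivers $\gamma_e f_e$ at its head, maximize the net flow reaching a designated sink $t$,

$$
\max\ \sum_{e\in\delta^-(t)} \gamma_e f_e - \sum_{e\in\delta^+(t)} f_e
\quad\text{s.t.}\quad
\sum_{e\in\delta^-(v)} \gamma_e f_e - \sum_{e\in\delta^+(v)} f_e \ \ge\ 0 \ \ \text{for all } v\neq t,
\qquad 0\le f_e\le u_e \ \ \text{for all } e .
$$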
Nisheeth Vishnoi: On the Feasibility of Max-Entropy-based Approaches for Combinatorial Problems
In this talk I will discuss the problem of computing max-entropy distributions over a discrete set of objects subject to observed marginals. For instance, given a weighted bipartite graph, can we compute a maximum-entropy probability distribution over the set of perfect matchings in the graph whose marginals are the given edge weights?

While there has been a tremendous amount of interest in such distributions due to their applicability in areas such as statistical physics, economics, biology, information theory, machine learning, combinatorics and algorithms, a rigorous and systematic study of how to compute such distributions has been lacking.

In this talk, I will discuss the challenges in the computability of max-entropy distributions and show that the problem, in a fairly general setting, is equivalent to a counting problem on the underlying set. The connection is via convex programming. As one consequence, for several combinatorial settings, there is an efficient algorithm to compute max-entropy distributions. Concretely, our result renders certain algorithmic strategies for TSP/ATSP feasible. The next step is to understand properties of such distributions which could allow us to make progress on these and other fundamental problems.
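For the perfect-matching example above, the max-entropy program and the structure of its optimizer are standard (here $\theta_e$ are the prescribed marginals and $\mathcal{M}$ is the set of perfect matchings):

$$
\max\ -\sum_{M\in\mathcal{M}} p_M \log p_M
\quad\text{s.t.}\quad
\sum_{M\ni e} p_M = \theta_e \ \ \forall e,\qquad
\sum_{M\in\mathcal{M}} p_M = 1,\qquad p\ge 0 .
$$

When $\theta$ lies in the relative interior of the perfect matching polytope, Lagrangian duality shows that the optimal distribution has a product form $p_M \propto \prod_{e \in M} \lambda_e$ for suitable $\lambda_e > 0$; evaluating such a distribution involves a weighted count of perfect matchings, which is one way to see the connection to counting mentioned above.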
Andreas Wiese: Approximation Schemes for Maximum Weight Independent Set of Rectangles
In the Maximum Weight Independent Set of Rectangles (MWISR) problem we are given a set of $n$ axis-parallel rectangles in the 2D plane, and the goal is to select a maximum weight subset of pairwise non-overlapping rectangles. Due to many applications, e.g., in data mining, map labeling and admission control, the problem has received a lot of attention from various research communities. We present the first $(1+\epsilon)$-approximation algorithm for the MWISR problem with quasi-polynomial running time $2^{{\rm poly}(\log n/\epsilon)}$. In contrast, the best known polynomial-time approximation algorithms for the problem achieve superconstant approximation ratios of $O(\log \log n)$ (unweighted case) and $O(\log n / \log \log n)$ (weighted case).
Key to our results is a new geometric dynamic program which recursively subdivides the plane into polygons of bounded complexity. We provide the technical tools that are needed to analyze its performance.

Finally, I will give an overview of recent follow-up work, in particular a generalization of the above result to arbitrary polygons.

Joint work with Anna Adamaszek.
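To fix the problem definition, here is a tiny brute-force solver for MWISR (exponential time, useful only for checking small examples; the quasi-polynomial approximation scheme of the talk is an entirely different, DP-based algorithm).

```python
from itertools import combinations

def open_overlap(r1, r2):
    """True if two axis-parallel rectangles (x1, y1, x2, y2) overlap in their interiors."""
    return r1[0] < r2[2] and r2[0] < r1[2] and r1[1] < r2[3] and r2[1] < r1[3]

def mwisr_brute_force(rects, weights):
    """Maximum weight subset of pairwise non-overlapping rectangles, by enumeration."""
    n = len(rects)
    best_w, best_set = 0, ()
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if all(not open_overlap(rects[i], rects[j]) for i, j in combinations(subset, 2)):
                w = sum(weights[i] for i in subset)
                if w > best_w:
                    best_w, best_set = w, subset
    return best_w, best_set

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2)]   # the middle rectangle overlaps both others
weights = [2, 3, 2]
print(mwisr_brute_force(rects, weights))             # (4, (0, 2)) beats taking the middle one
```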
Virginia Vassilevska Williams: Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter
Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node $w$ measures the fraction of shortest paths that have $w$ as an intermediate node. Finally, the reach centrality of a node $w$ is the smallest distance $r$ such that any $s$-$t$ shortest path passing through $w$ has either $s$ or $t$ in the ball of radius $r$ around $w$.

The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node take roughly cubic time in the number $n$ of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time $\tilde{O}(n^{3-\delta})$ for some constant $\delta>0$.

We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. It is easy to see that Diameter can be solved using an algorithm for APSP with a small overhead. However, no reduction is known in the other direction, and it is entirely possible that Diameter is a truly easier problem than APSP.
We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP required essentially cubic time. Indeed, our reductions already imply an algorithm for Reach Centrality in graphs with small integer weights that is faster than the best known algorithm for APSP in the same family of graphs.

Joint work with Amir Abboud and Fabrizio Grandoni.
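The cubic-time baseline referred to above is simply to run APSP and read off the centrality values; a sketch (illustrative, for small dense graphs) is given below for the center/Radius and the Median.

```python
INF = float("inf")

def apsp_floyd_warshall(w):
    """All-pairs shortest paths on an n x n weight matrix (INF = no edge)."""
    n = len(w)
    d = [row[:] for row in w]
    for i in range(n):
        d[i][i] = min(d[i][i], 0)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def center_and_median(w):
    """Center = node minimizing its eccentricity (that minimum is the radius);
    median = node minimizing the sum of distances to all other nodes."""
    d = apsp_floyd_warshall(w)
    ecc = [max(row) for row in d]
    tot = [sum(row) for row in d]
    center = min(range(len(d)), key=lambda v: ecc[v])
    median = min(range(len(d)), key=lambda v: tot[v])
    return center, ecc[center], median, tot[median]

# A path 0 - 1 - 2 - 3 with unit weights: node 1 (or 2) is both a center and a median.
w = [[INF, 1, INF, INF], [1, INF, 1, INF], [INF, 1, INF, 1], [INF, INF, 1, INF]]
print(center_and_median(w))   # (1, 2, 1, 4)
```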
Rico Zenklusen: A Simple Order-Oblivious $O(\log \log(\mathrm{rank}))$-Competitive Algorithm for the Matroid Secretary Problem
The last decade has seen an increased interest in generalizations of the secretary problem, a classical online selection problem. These generalizations have numerous applications in mechanism design for settings involving the selling of a good (e.g. ads) to agents (e.g. page views) arriving online. The matroid secretary problem is one of the most well-studied variants. It is general enough to deal with complex settings and, at the same time, it is sufficiently restricted to admit strong algorithms. A famous conjecture states that there is in fact an $O(1)$-competitive algorithm for the matroid secretary problem. This is an online algorithm that, in expectation and up to a constant factor, performs as well as any offline algorithm with perfect information.

In this talk, we present a new method that improves on the previously best algorithm, both in terms of its competitive ratio and its simplicity. The main idea of our algorithm is to decompose the problem into a distribution over a simple type of matroid secretary problems which are easy to solve. We show that this leads to an $O(\log \log(\mathrm{rank}))$-competitive procedure.

This is joint work with Moran Feldman (EPFL) and Ola Svensson (EPFL).
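As background, the classical single-item secretary problem that the matroid version generalizes admits the well-known observe-then-commit rule: reject roughly the first $n/e$ values, then accept the first value exceeding everything seen so far; this picks the overall best value with probability about $1/e$. A minimal sketch (illustrative only):

```python
import math
import random

def classical_secretary(values):
    """Observe the first ~n/e values without accepting anything, then accept the
    first later value that beats the best of the observation phase."""
    n = len(values)
    cutoff = max(1, round(n / math.e))
    threshold = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > threshold:
            return v
    return None   # never accepted: counts as a failure

# Empirically estimate the probability of picking the overall best value.
random.seed(0)
n, trials, hits = 50, 20000, 0
for _ in range(trials):
    vals = list(range(n))
    random.shuffle(vals)
    hits += classical_secretary(vals) == n - 1
print(hits / trials)   # roughly 1/e, i.e. about 0.37
```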