**Authors:** Naor Alaluf, Moran Feldman. **Abstract:** In this paper we consider the problem of maximizing a non-negative submodular
function subject to a cardinality constraint in the data stream model.
Previously, the best known algorithm for this problem was a
$5.828$-approximation semi-streaming algorithm based on a local search
technique (Feldman et al., 2018). For the special case of this problem in which
the objective function is also monotone, the state-of-the-art semi-streaming
algorithm is an algorithm known as Sieve-Streaming, which is based on a
different technique (Badanidiyuru, 2014). Adapting the technique of
Sieve-Streaming to non-monotone objective functions has turned out to be a
challenging task, which has so far prevented an improvement over the local
search based $5.828$-approximation. In this work, we overcome the above
challenge, and manage to adapt Sieve-Streaming to non-monotone objective
functions by introducing a "just right" amount of randomness into it.
Consequently, we get a semi-streaming polynomial time $4.282$-approximation
algorithm for non-monotone objectives. Moreover, if one allows our algorithm to
run in super-polynomial time, then its approximation ratio can be further
improved to $3 + \varepsilon$.

**Authors:** Claudson F. Bornstein, Martin Charles Golumbic, Tanilson D. Santos, Uéverton S. Souza, Jayme L. Szwarcfiter. **Abstract:** Golumbic, Lipshteyn and Stern defined in 2009 the class of EPG graphs as the
intersection graph of edge paths on a grid. An EPG graph $G$ is a graph that
admits a representation where its vertices correspond to paths in a grid $Q$,
such that two vertices of $G$ are adjacent if and only if their corresponding
paths in $Q$ have a common edge. If the paths in the representation have at
most $k$ changes of direction (bends), we say that it is a $B_k$-EPG
representation. A collection $C$ of sets satisfies the Helly property when
every sub-collection of $C$ that is pairwise intersecting has at least one
common element. In this paper we show that the problem of recognizing $B_k$-EPG
graphs $G=(V,E)$ whose edge-intersections of paths in a grid satisfy the Helly
property, so-called Helly-$B_k$ EPG graphs, is in $\mathcal{NP}$, for every $k$
bounded by a polynomial of $|V(G)|$. In addition, we show that recognizing
Helly-$B_1$ EPG graphs is $NP$-complete, and it remains $NP$-complete even when
restricted to 2-apex and 3-degenerate graphs.
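
As an illustration of the Helly property defined above (not the paper's recognition algorithms), a brute-force check over all sub-collections might look as follows:

```python
# Brute-force check of the Helly property: every pairwise-intersecting
# sub-collection must have at least one common element. Exponential time;
# for illustration of the definition only.
from itertools import combinations

def has_helly_property(sets):
    sets = [frozenset(s) for s in sets]
    for r in range(2, len(sets) + 1):
        for sub in combinations(sets, r):
            pairwise = all(a & b for a, b in combinations(sub, 2))
            if pairwise and not frozenset.intersection(*sub):
                return False
    return True
```

For example, intervals on a line satisfy the property, while the three sets {1,2}, {2,3}, {1,3} are pairwise intersecting yet have no common element.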

**Authors:** Thomas Heinis. **Abstract:** Key to DNA storage is encoding the information into a sequence of nucleotides
before it can be synthesised for storage. Definition of such an encoding or
mapping must adhere to multiple design restrictions. First, not all possible
sequences of nucleotides can be synthesised. Homopolymers, e.g., sequences of
the same nucleotide, of a length of more than two, for example, cannot be
synthesised without potential errors. Similarly, the G-C content of the
resulting sequences should be higher than 50\%. Second, given that synthesis is
expensive, the encoding must map as many bits as possible to one nucleotide.
Third, the synthesis (as well as the sequencing) is error prone, leading to
substitutions, deletions and insertions. An encoding must therefore be designed
to be resilient to errors through error correction codes or replication.
Fourth, for the purpose of computation and selective retrieval, encodings
should result in substantially different sequences across all data, even for
very similar data. In the following we discuss the history and evolution of
encodings.
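
A hedged sketch of the first design restriction as a validity check (the helper below is an illustrative assumption drawn from the text, not a standard codec):

```python
# Check a candidate nucleotide sequence against the restrictions above:
# no homopolymer (run of the same nucleotide) longer than two, and a
# G-C content above 50% as stated in the text.
def is_synthesisable(seq, max_homopolymer=2):
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if prev == cur else 1
        if run > max_homopolymer:
            return False  # homopolymer too long to synthesise reliably
    gc = sum(1 for c in seq if c in "GC") / len(seq)
    return gc > 0.5
```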

**Authors:** Giulia Bernardini, Huiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Nadia Pisanti, Solon P. Pissis, Giovanna Rosone. **Abstract:** String data are often disseminated to support applications such as
location-based service provision or DNA sequence analysis. This dissemination,
however, may expose sensitive patterns that model confidential knowledge (e.g.,
trips to mental health clinics from a string representing a user's location
history). In this paper, we consider the problem of sanitizing a string by
concealing the occurrences of sensitive patterns, while maintaining data
utility. First, we propose a time-optimal algorithm, TFS-ALGO, to construct the
shortest string preserving the order of appearance and the frequency of all
non-sensitive patterns. Such a string allows accurately performing tasks based
on the sequential nature and pattern frequencies of the string. Second, we
propose a time-optimal algorithm, PFS-ALGO, which preserves a partial order of
appearance of non-sensitive patterns but produces a much shorter string that
can be analyzed more efficiently. The strings produced by either of these
algorithms may reveal the location of sensitive patterns. In response, we
propose a heuristic, MCSR-ALGO, which replaces letters in these strings with
carefully selected letters, so that sensitive patterns are not reinstated and
occurrences of spurious patterns are prevented. We implemented our sanitization
approach that applies TFS-ALGO, PFS-ALGO and then MCSR-ALGO and experimentally
show that it is effective and efficient.
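
The two requirements, concealing sensitive patterns and preserving non-sensitive pattern frequencies, can be stated as a simple post-condition check (a toy verifier, not the TFS-ALGO/PFS-ALGO/MCSR-ALGO pipeline itself):

```python
# Toy post-condition for string sanitization: the sanitized string must
# contain no sensitive pattern, while keeping the frequency of every
# non-sensitive pattern unchanged.
def pattern_freq(s, p):
    # number of (possibly overlapping) occurrences of p in s
    return sum(1 for i in range(len(s) - len(p) + 1) if s[i:i + len(p)] == p)

def is_valid_sanitization(original, sanitized, sensitive, non_sensitive):
    conceals = all(pattern_freq(sanitized, p) == 0 for p in sensitive)
    preserves = all(pattern_freq(sanitized, p) == pattern_freq(original, p)
                    for p in non_sensitive)
    return conceals and preserves
```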

**Authors:** Fabrizio Grandoni, Stefan Kratsch, Andreas Wiese. **Abstract:** The area of parameterized approximation seeks to combine approximation and
parameterized algorithms to obtain, e.g., (1+eps)-approximations in
f(k,eps)n^{O(1)} time where k is some parameter of the input. We obtain the
following results on parameterized approximability: 1) In the maximum
independent set of rectangles problem (MISR) we are given a collection of n
axis parallel rectangles in the plane. Our goal is to select a
maximum-cardinality subset of pairwise non-overlapping rectangles. This problem
is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time
approximation factor is O(loglog n) [Chalermsook and Chuzhoy, SODA'09] and it
admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here
we present a parameterized approximation scheme (PAS) for MISR, i.e. an
algorithm that, for any given constant eps>0 and integer k>0, in time
f(k,eps)n^{g(eps)}, either outputs a solution of size at least k/(1+eps), or
declares that the optimum solution has size less than k. 2) In the
(2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned
square knapsack and a collection of axis-aligned rectangles in the plane
(items). Our goal is to translate a maximum cardinality subset of items into
the knapsack so that the selected items do not overlap. In the version of TDK
with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both
variants are NP-hard, and the best-known polynomial-time approximation factors
are 558/325+eps and 4/3+eps, resp. [Galvez et al., FOCS'17]. These problems
admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese,
SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a
PAS for TDKR. For all considered problems, getting time f(k,eps)n^{O(1)},
rather than f(k,eps)n^{g(eps)}, would give FPT time f'(k)n^{O(1)} exact
algorithms using eps=1/(k+1), contradicting W[1]-hardness.

**Authors:** Adam Lev-Libfeld. **Abstract:** As demand for Real-Time applications rises among the general public, the
importance of enabling large-scale, unbound algorithms to solve conventional
problems with low to no latency is critical for product viability. Timer
algorithms are prevalent in the core mechanisms behind operating systems,
network protocol implementation, stream processing, and several database
capabilities. This paper presents a field-tested algorithm for a low-latency,
unbound-range timer structure, based upon the well-accepted Timing Wheel
algorithm. Using a set of queues hashed by TTL, the algorithm allows for a
simpler implementation, minimal overhead, no overflow, and no performance
degradation in comparison to current state-of-the-art algorithms under typical
use cases.
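
A minimal sketch of the hashed-TTL idea (assumptions mine; the paper's field-tested implementation is more involved): with one FIFO queue per TTL value, items in each queue are automatically ordered by expiration, so expiry checks only inspect queue heads.

```python
from collections import deque

class TTLTimerStore:
    """One FIFO queue per TTL; each queue stays sorted by expiration."""

    def __init__(self):
        self.queues = {}  # ttl -> deque of (expiration, item)

    def add(self, item, ttl, now):
        # Equal-TTL items expire in insertion order, keeping the queue sorted.
        self.queues.setdefault(ttl, deque()).append((now + ttl, item))

    def pop_expired(self, now):
        expired = []
        for q in self.queues.values():
            while q and q[0][0] <= now:  # only queue heads can expire first
                expired.append(q.popleft()[1])
        return expired
```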

**Authors:** Rafael Pass, Muthuramakrishnan Venkitasubramaniam. **Abstract:** Consider the following two fundamental open problems in complexity theory: 1)
Does a hard-on-average language in NP imply the existence of one-way functions?
2) Does a hard-on-average language in NP imply a hard problem in TFNP (i.e.,
the class of total NP search problems)? We show that the answer to (at least)
one of these questions is yes. In other words, in Impagliazzo's Pessiland
(where NP is hard-on-average, but one-way functions do not exist), TFNP is
unconditionally hard (on average). This result follows from a more general
theory of interactive average-case complexity, and in particular, a novel
round-collapse theorem for computationally-sound protocols, analogous to
Babai-Moran's celebrated round-collapse theorem for information-theoretically
sound protocols. As another consequence of this treatment, we show that the
existence of O(1)-round public-coin non-trivial arguments (i.e., argument
systems that are not proofs) implies the existence of a hard-on-average problem
in NP/poly.

**Authors:** Mingyu Xiao. **Abstract:** A mixed dominating set of a graph $G = (V, E)$ is a mixed set $D$ of vertices
and edges, such that for every edge or vertex, if it is not in $D$, then it is
adjacent or incident to at least one vertex or edge in $D$. The mixed
domination problem is to find a mixed dominating set with a minimum
cardinality. It has applications in system control and some other scenarios and
it is $NP$-hard to compute an optimal solution. This paper studies
approximation algorithms and hardness of the weighted mixed dominating set
problem. The weighted version is a generalization of the unweighted version,
where all vertices are assigned the same nonnegative weight $w_v$ and all edges
are assigned the same nonnegative weight $w_e$, and the question is to find a
mixed dominating set with a minimum total weight. Although the mixed dominating
set problem has a simple 2-approximation algorithm, few approximation results
for the weighted version are known. The main contributions of this paper
include: [1.] for $w_e\geq w_v$, a 2-approximation algorithm; [2.] for $w_e\geq
2w_v$, inapproximability within ratio 1.3606 unless $P=NP$ and within ratio 2
under UGC; [3.] for $2w_v > w_e\geq w_v$, inapproximability within ratio 1.1803
unless $P=NP$ and within ratio 1.5 under UGC; [4.] for $w_e< w_v$,
inapproximability within ratio $(1-\epsilon)\ln |V|$ unless $P=NP$ for any
$\epsilon >0$.
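
A brute-force verifier for the definition above (for illustration only; it is unrelated to the approximation algorithms and hardness results of the paper):

```python
# Check the mixed domination condition: every vertex or edge not in D
# must be adjacent or incident to some vertex or edge of D.
def is_mixed_dominating(vertices, edges, d_vertices, d_edges):
    def vertex_covered(v):
        adjacent = any((v, u) in edges or (u, v) in edges for u in d_vertices)
        incident = any(v in e for e in d_edges)
        return adjacent or incident

    def edge_covered(e):
        u, v = e
        touches_vertex = u in d_vertices or v in d_vertices
        shares_endpoint = any(u in f or v in f for f in d_edges)
        return touches_vertex or shares_endpoint

    return (all(v in d_vertices or vertex_covered(v) for v in vertices)
            and all(e in d_edges or edge_covered(e) for e in edges))
```

For the path on vertices 0-1-2, the single vertex {1} is a mixed dominating set, while the single edge {(0,1)} is not (vertex 2 is neither adjacent nor incident to it).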

**Authors:** H. Philathong, V. Akshay, I. Zacharov, J. Biamonte. **Abstract:** Gibbs sampling is fundamental to a wide range of computer algorithms. Such
algorithms are set to be replaced by physics-based processors (be it quantum
or stochastic annealing devices) which embed problem instances and evolve a
physical system into an ensemble to recover a probability distribution. At a
critical constraint-to-variable ratio, decision problems, such as
propositional satisfiability, appear to statistically exhibit an abrupt
transition in required computational resources. This so-called algorithmic or
computational phase transition signature has yet to be observed in
contemporary physics based processors. We found that the computational phase
transition admits a signature in Gibbs' distributions and hence we predict and
prescribe the physical observation of this effect. We simulate such an
experiment, which, when realized experimentally, we believe would represent a
milestone in the physical theory of computation.

This event was a great success! Thank you to all of the participants for contributing your time. Please keep up the momentum and continue to edit the pages you made a start on. Please continue to record your progress on the list of topics. Special thanks to Aviad Rubinstein and Yuval Filmus for offering expert advice at the event.

We plan to organize this event again at future STOCs, and hope many more people can participate. Even an hour of your time can have a huge impact on the community!

**Authors:** Erva Ulu, James McCann, Levent Burak Kara. **Abstract:** We introduce a method to design lightweight shell objects that are
structurally robust under the external forces they may experience during use.
Given an input 3D model and a general description of the external forces, our
algorithm generates a structurally-sound minimum weight shell object. Our
approach works by altering the local shell thickness repeatedly based on the
stresses that develop inside the object. A key issue in shell design is that
large thickness values might result in self-intersections on the inner boundary
creating a significant computational challenge during optimization. To address
this, we propose a shape parametrization based on the solution to Laplace's
equation that guarantees smooth and intersection-free shell boundaries.
Combined with our gradient-free optimization algorithm, our method provides a
practical solution to the structural design of hollow objects with a single
inner cavity. We demonstrate our method on a variety of problems with arbitrary
3D models under complex force configurations and validate its performance with
physical experiments.

**Authors:** Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford. **Abstract:** A landmark result of non-smooth convex optimization is that gradient descent
is an optimal algorithm whenever the number of computed gradients is smaller
than the dimension $d$. In this paper we study the extension of this result to
the parallel optimization setting. Namely we consider optimization algorithms
interacting with a highly parallel gradient oracle, that is one that can answer
$\mathrm{poly}(d)$ gradient queries in parallel. We show that in this case
gradient descent is optimal only up to $\tilde{O}(\sqrt{d})$ rounds of
interactions with the oracle. The lower bound improves upon a decades-old
construction by Nemirovski which proves optimality only up to $d^{1/3}$ rounds
(as recently observed by Balkanski and Singer), and the suboptimality of
gradient descent after $\sqrt{d}$ rounds was already observed by Duchi,
Bartlett and Wainwright. In the latter regime we propose a new method with
improved complexity, which we conjecture to be optimal. The analysis of this
new method is based upon a generalized version of the recent results on optimal
acceleration for highly smooth convex optimization.

**Authors:** Ronen Eldan, Assaf Naor. **Abstract:** Answering a question of Abbasi-Zadeh, Bansal, Guruganesh, Nikolov, Schwartz
and Singh (2018), we prove the existence of a slowed-down sticky Brownian
motion whose induced rounding for MAXCUT attains the Goemans--Williamson
approximation ratio. This is an especially simple particular case of the
general rounding framework of Krivine diffusions that we investigate elsewhere.

**Authors:** David Durfee, Yu Gao, Gramoz Goranci, Richard Peng. **Abstract:** We study \emph{dynamic} algorithms for maintaining spectral vertex
sparsifiers of graphs with respect to a set of terminals $T$ of our choice.
Such objects preserve pairwise resistances, solutions to systems of linear
equations, and energy of electrical flows between the terminals in $T$. We give
a data structure that supports insertions and deletions of edges, and terminal
additions, all in sublinear time. Our result is then applied to the following
problems.

(1) A data structure for maintaining solutions to Laplacian systems $\mathbf{L} \mathbf{x} = \mathbf{b}$, where $\mathbf{L}$ is the Laplacian matrix and $\mathbf{b}$ is a demand vector. For a bounded degree, unweighted graph, we support modifications to both $\mathbf{L}$ and $\mathbf{b}$ while providing access to $\epsilon$-approximations to the energy of routing an electrical flow with demand $\mathbf{b}$, as well as query access to entries of a vector $\tilde{\mathbf{x}}$ such that $\left\lVert \tilde{\mathbf{x}}-\mathbf{L}^{\dagger} \mathbf{b} \right\rVert_{\mathbf{L}} \leq \epsilon \left\lVert \mathbf{L}^{\dagger} \mathbf{b} \right\rVert_{\mathbf{L}}$ in $\tilde{O}(n^{11/12}\epsilon^{-5})$ expected amortized update and query time.

(2) A data structure for maintaining All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns $(1 \pm \epsilon)$-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are $\tilde{O}(\min(m^{3/4},n^{5/6} \epsilon^{-2}) \epsilon^{-4})$ on an unweighted graph, and $\tilde{O}(n^{5/6}\epsilon^{-6})$ on weighted graphs.

These results represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time without assumptions on the underlying graph topologies.
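
For intuition about the quantity being maintained, a static computation of effective resistance via the Laplacian (pure Python; the dynamic, sublinear-time machinery of the paper is not reproduced here):

```python
# Static effective resistance between s and t: ground t, solve the
# reduced Laplacian system L'x = e_s by Gaussian elimination; the
# potential at s is then R_eff(s, t).
def effective_resistance(n, edges, s, t):
    # Laplacian L = D - A of the unweighted graph.
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    idx = [i for i in range(n) if i != t]  # drop grounded vertex t
    A = [[L[i][j] for j in idx] for i in idx]
    b = [1.0 if i == s else 0.0 for i in idx]  # unit current s -> t
    m = len(idx)
    for col in range(m):  # forward elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            factor = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):  # back substitution
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[idx.index(s)]
```

On a path 0-1-2 the resistance between the endpoints is 2; on a triangle, the resistance across any edge is 2/3.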

**Authors:** Dekel Tsur. **Abstract:** In the $l$-path vertex cover problem the input is an undirected graph $G$ and
an integer $k$. The goal is to decide whether there is a set of vertices $S$ of
size at most $k$ such that $G-S$ does not contain a path with $l$ vertices. In
this paper we give parameterized algorithms for $l$-path vertex cover for $l =
5,6,7$, whose time complexities are $O^*(3.945^k)$, $O^*(4.947^k)$, and
$O^*(5.951^k)$, respectively.
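
The object being sought can be checked by brute force (an exponential-time illustration of the definition; the point of the paper is the far faster $O^*(c^k)$ algorithms):

```python
# Check whether S is an l-path vertex cover: after deleting S, the graph
# must contain no simple path on l vertices (checked by exhaustive DFS).
def has_path_on_l_vertices(n, edges, removed, l):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].add(v)
            adj[v].add(u)

    def extend(path):
        if len(path) == l:
            return True
        return any(extend(path + [w]) for w in adj[path[-1]] if w not in path)

    return any(extend([v]) for v in range(n) if v not in removed)

def is_l_path_vertex_cover(n, edges, S, l):
    return not has_path_on_l_vertices(n, edges, S, l)
```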

**Authors:** Joshua Alan Cook. **Abstract:** In this paper, I take a step toward answering the following question: for m
different small circuits that compute m orthogonal n qubit states, is there a
small circuit that will map m computational basis states to these m states
without leaving any auxiliary bits changed on any input? While this may seem
simple, the constraint that auxiliary bits always be returned to 0 on any input
(even ones besides the m we care about) led me to use sophisticated techniques.
I give an approximation of such a unitary in the m = 2 case whose size is
polynomial in the approximation error and the number of qubits n.

**Authors:** Ashley Montanaro. **Abstract:** Branch-and-bound is a widely used technique for solving combinatorial
optimisation problems where one has access to two procedures: a branching
procedure that splits a set of potential solutions into subsets, and a cost
procedure that determines a lower bound on the cost of any solution in a given
subset. Here we describe a quantum algorithm that can accelerate classical
branch-and-bound algorithms near-quadratically in a very general setting. We
show that the quantum algorithm can find exact ground states for most instances
of the Sherrington-Kirkpatrick model in time $O(2^{0.226n})$, which is
substantially more efficient than Grover's algorithm.
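
The classical setting described above (a branching procedure plus a lower-bound procedure) can be sketched as a generic best-first skeleton; the near-quadratic quantum speedup is over the exploration of this tree and is not modeled here:

```python
import heapq

def branch_and_bound(root, branch, lower_bound, is_leaf, cost):
    """Best-first branch-and-bound: always explore the subset of potential
    solutions with the smallest lower bound; prune subsets whose lower
    bound cannot beat the best solution found so far."""
    best_cost, best_leaf = float("inf"), None
    counter = 0  # tie-breaker so the heap never compares nodes directly
    frontier = [(lower_bound(root), counter, root)]
    while frontier:
        bound, _, node = heapq.heappop(frontier)
        if bound >= best_cost:
            continue  # pruned: cannot contain a better solution
        if is_leaf(node):
            if cost(node) < best_cost:
                best_cost, best_leaf = cost(node), node
            continue
        for child in branch(node):
            counter += 1
            heapq.heappush(frontier, (lower_bound(child), counter, child))
    return best_cost, best_leaf
```

Here a "node" is any representation of a subset of potential solutions, the branching procedure splits it, and the lower-bound procedure bounds the cost of every solution it contains.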

**Authors:** Rasmus Kyng, Richard Peng, Sushant Sachdeva, Di Wang. **Abstract:** We present algorithms for solving a large class of flow and regression
problems on unit weighted graphs to $(1 + 1 / poly(n))$ accuracy in
almost-linear time. These problems include $\ell_p$-norm minimizing flow for
$p$ large ($p \in [\omega(1), o(\log^{2/3} n) ]$), and their duals,
$\ell_p$-norm semi-supervised learning for $p$ close to $1$.

As $p$ tends to infinity, $\ell_p$-norm flow and its dual tend to max-flow and min-cut respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives.

This algorithm demonstrates that many tools previously viewed as limited to linear systems are in fact applicable to a much wider range of convex objectives. It is based on the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC '04, SIMAX '14), but requires several new tools: adaptive non-linear preconditioning, tree-routing based ultra-sparsification for mixed $\ell_2$ and $\ell_p$ norm objectives, and decomposing graphs into uniform expanders.

**Authors:** Abusayeed Saifullah. **Abstract:** Self-stabilization for non-masking fault-tolerant distributed systems has
received considerable research interest over the last decade. In this paper, we
propose a self-stabilizing algorithm for 2-edge-connectivity and
2-vertex-connectivity of an asynchronous distributed computer network. It is
based on a self-stabilizing depth-first search, and is not a composite
algorithm in the sense that it is not composed of a number of self-stabilizing
algorithms that run concurrently. The time and space complexities of the
algorithm are the same as those of the underlying self-stabilizing depth-first
search algorithm, namely $O(dn\Delta)$ rounds and $O(n\log \Delta)$ bits per
processor, respectively, where $\Delta$ ($\leq n$) is an upper bound on the
degree of a node, $d$ ($\leq n$) is the diameter of the graph, and $n$ is the
number of nodes in the network.

**Authors:** Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, Csaba D. Tóth. **Abstract:** Let $P=(p_1, p_2, \dots, p_n)$ be a polygonal chain. The stretch factor of
$P$ is the ratio between the total length of $P$ and the distance of its
endpoints, $\sum_{i = 1}^{n-1} |p_i p_{i+1}|/|p_1 p_n|$. For a parameter $c
\geq 1$, we call $P$ a $c$-chain if $|p_ip_j|+|p_jp_k| \leq c|p_ip_k|$, for
every triple $(i,j,k)$, $1 \leq i<j<k \leq n$. The stretch factor is a global
property: it measures how close $P$ is to a straight line, and it involves all
the vertices of $P$; being a $c$-chain, on the other hand, is a
fingerprint-property: it only depends on subsets of $O(1)$ vertices of the
chain.

We investigate how the $c$-chain property influences the stretch factor in the plane: (i) we show that for every $\varepsilon > 0$, there is a noncrossing $c$-chain that has stretch factor $\Omega(n^{1/2-\varepsilon})$, for sufficiently large constant $c=c(\varepsilon)$; (ii) on the other hand, the stretch factor of a $c$-chain $P$ is $O\left(n^{1/2}\right)$, for every constant $c\geq 1$, regardless of whether $P$ is crossing or noncrossing; and (iii) we give a randomized algorithm that can determine, for a polygonal chain $P$ in $\mathbb{R}^2$ with $n$ vertices, the minimum $c\geq 1$ for which $P$ is a $c$-chain in $O\left(n^{2.5}\ {\rm polylog}\ n\right)$ expected time and $O(n\log n)$ space.
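
Both quantities above have direct naive implementations ($O(n^3)$ time for the minimum chain constant, versus the paper's randomized $O\left(n^{2.5}\ {\rm polylog}\ n\right)$ expected time):

```python
# Naive computations of the two quantities above: the stretch factor of a
# polygonal chain, and the minimum c >= 1 for which it is a c-chain
# (checking all O(n^3) triples; vertices are assumed distinct).
from itertools import combinations
from math import dist

def stretch_factor(points):
    total = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    return total / dist(points[0], points[-1])

def min_chain_constant(points):
    c = 1.0
    for i, j, k in combinations(range(len(points)), 3):
        detour = dist(points[i], points[j]) + dist(points[j], points[k])
        c = max(c, detour / dist(points[i], points[k]))
    return c
```

A collinear, equally spaced chain has chain constant 1, while a right-angle turn already forces $c = \sqrt{2}$.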

For more than 20 years we’ve had $n^{1+c^{-d}}$ lower bounds for threshold circuits of depth $d$ [IPS97], for a fixed constant $c$. There have been several “explanations” for the lack of progress [AK10]. Recently Chen and Tell have given a better explanation, showing that you can’t even improve the result to a better constant $c$ without proving “the whole thing.”

Say you have a finite group $G$ and you want to compute the *iterated product* $x_1 \cdot x_2 \cdots x_n$ of $n$ elements.

**Warm-up [AK10].**

Suppose you can compute this with circuits of size $n^{1+\varepsilon}$ and depth $d$. Now we show how you can trade size for depth. Put a complete tree with fan-in $f$ on top of the group product, where each node computes the product of its children (this is correct by associativity; in general this works for a monoid). This tree needs depth $\log_f n$. If you stick your circuit of size $f^{1+\varepsilon}$ and depth $d$ at each node, the depth of the overall circuit would obviously be $d \cdot \log_f n$, and the overall size would be dominated by the input layer, which is $(n/f) \cdot f^{1+\varepsilon} = n \cdot f^{\varepsilon}$. If you are aiming for overall depth $d'$, you need $f = n^{d/d'}$. This gives size $n^{1+\varepsilon d/d'}$.
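
The tree construction can be mirrored by a toy evaluation (this only models the shape of the circuit, not threshold gates; `op` is any associative operation):

```python
# Evaluate an iterated product with a complete fan-in-f tree: each pass
# multiplies consecutive blocks of f elements, so the number of passes
# (the tree depth) is ceil(log_f n).
from functools import reduce

def tree_product(xs, f, op, identity):
    depth = 0
    while len(xs) > 1:
        xs = [reduce(op, xs[i:i + f], identity) for i in range(0, len(xs), f)]
        depth += 1
    return (xs[0] if xs else identity), depth
```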

Hence we have shown that proving $n^{1+\varepsilon d/d'}$ size lower bounds for some depth $d'$ suffices to prove $n^{1+\varepsilon}$ size lower bounds for depth $d$.

**Chen and Tell.**

The above is not the most efficient way to build a tree! I am writing this post following their paper to understand what they do. As they say, the idea is quite simple. Whereas above the size was dominated by the input layer, here we want to balance things so that every layer has roughly the same contribution.

Let’s say we are aiming for size $n^{1+\delta}$ and let’s see what depth we can get. Let’s say now the size of the assumed circuits is $n^{1+\varepsilon}$. Let us denote by $s_i$ the number of nodes at level $i$, with $s_0 = 1$ being the root. The fan-in at level $i$ is $f_i = (n^{1+\delta}/s_i)^{1/(1+\varepsilon)}$, so that the cost $s_i \cdot f_i^{1+\varepsilon}$ of level $i$ is $n^{1+\delta}$, as desired. We have the recursion $s_{i+1} = s_i \cdot f_i$.

The solution to this recursion is $s_i = n^{(1+\delta)\left(1 - \left(\varepsilon/(1+\varepsilon)\right)^i\right)}$; see below.

So that’s it. We need $s_i \geq n$ to get to $n$ nodes. So if you set $i = O(\log(1/\delta)/\log(1/\varepsilon))$ you get, say, $s_i \geq n$. Going back to depth, we have exhibited circuits of size $n^{1+\delta}$ and depth just $d \cdot O(\log(1/\delta)/\log(1/\varepsilon))$. So proving stronger bounds than this would rule out circuits of size $n^{1+\varepsilon}$ and depth $d$.

**Added later: About the recurrence.**

Letting $s_i = n^{a_i}$ we have the following recurrence for the exponents of $n$:

$$a_{i+1} = \frac{\varepsilon}{1+\varepsilon} a_i + \frac{1+\delta}{1+\varepsilon}, \qquad a_0 = 0.$$

This gives

$$a_i = (1+\delta)\left(1 - \left(\frac{\varepsilon}{1+\varepsilon}\right)^i\right).$$

If it was $\delta = \varepsilon$, $a_1$ would obviously already be $1$. Instead for $\delta < \varepsilon$ we need $i \approx \log(1/\delta)/\log(1/\varepsilon)$ to get to $a_i \geq 1$.
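
The closed form can be sanity-checked numerically (assuming, as in the balancing argument, exponents $a_i$ with $a_0 = 0$ and $a_{i+1} = (\varepsilon a_i + 1 + \delta)/(1+\varepsilon)$):

```python
# Numeric check that a_i = (1 + delta) * (1 - (eps / (1 + eps)) ** i)
# solves the recurrence a_{i+1} = (eps * a_i + 1 + delta) / (1 + eps)
# with a_0 = 0 (parameters below are arbitrary sample values).
def exponent(i, eps, delta):
    a = 0.0
    for _ in range(i):
        a = (eps * a + 1 + delta) / (1 + eps)
    return a

def closed_form(i, eps, delta):
    return (1 + delta) * (1 - (eps / (1 + eps)) ** i)
```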

**My two cents.**

I am not sure I need more evidence that making progress on long-standing lower bounds in complexity theory is hard, but I do find it interesting to prove these links; we have quite a few by now! The fact that we have been stuck forever just short of proving “the whole thing” makes me think that these long-sought bounds may in fact be false. Would love to be proved wrong, but it’s 2019, this connection is proved by balancing a tree better, and you feel confident that P $\neq$ NP?

On the Importance of Disciplinary Pride for Multidisciplinary Collaboration

I am a big fan of collaborations, even if they come with their own challenges. I always got further and enjoyed research much more because of my collaborators. I’m forever indebted to so many colleagues and dear, dear friends. Each and every one of them was better than me in some ways. To contribute, I had to remember my own strengths and bring them to the table. The premise of this post is that the same holds for collaboration between fields. It should be read as a call for theoreticians to bring the tools and the powerful way of thinking of TOC into collaborations. We shouldn’t be blind to the limitations of our field, but obsessing over those limitations is misguided and would only limit our impact. Instead, we should bring our best and trust the other disciplines we collaborate with to do the same (allowing each to complement and compensate for the other).

The context in which these thoughts came to my mind is Algorithmic Fairness. In this and other areas on the interface between society and computing, true collaboration is vital. Not surprisingly, attending multidisciplinary programs on Algorithmic Fairness is a major part of my professional activities these days. And I love it – I get to learn so much from people and disciplines that have been thinking about fairness for many decades and centuries. In addition, the Humanities are simply splendid. Multidisciplinary collaborations come with even more challenges than other collaborations: the language, tools and perspectives are different. But for exactly the same reasons they can be even more rewarding. Nevertheless, my fear and the reason for this post is that my less experienced TOC colleagues might come out of those interdisciplinary meetings frustrated and might lose confidence in what TOC can contribute. It feels to me that old lessons about the value of TOC need to be learned again. There is a lot to be proud of, and holding on to this pride would in fact make us better collaborators, not worse.

In the context of Algorithmic Fairness, we should definitely acknowledge (as we often do) that science exists within political structures, that algorithms are not objective and that mathematical definitions cannot replace social norms as expressed by policy makers. But let’s not take these as excuses for inaction and let’s not withdraw to the role of spectators. In this era of algorithms, other disciplines need us just as much as we need them.