Theory of Computing Report

Friday, April 26

Inconvenient Facts

from Ben Recht

Meehl's Philosophical Psychology, Lecture 2, Part 2

One of the more obvious objections to Popper’s falsification program is that the history of science is littered with scientists refusing to abandon their theories in light of experimental refutation. Meehl gives a few examples that suggest it might even be “rational” (his word) not to abandon a theory just because the facts disagree with it.

The Periodic Table

In the 1860s, Dmitri Mendeleev noted surprising repeated patterns in the properties of elements. He arranged them into what would become the periodic table. When the elements were ordered by atomic weight, their properties fell into regular patterns, showing up as vertical columns or local similarities. The table was a great fit to the elements known at the time, but it wasn’t a perfect fit. In particular, tellurium wouldn’t fit in the table. Rather than abandoning the table, Mendeleev assumed its atomic weight had been incorrectly measured.

It would turn out that he was sort of right: the correct ordering in the table was by atomic number, not atomic weight. And he was right about the position of tellurium, just not about the reason why it should be there. The fact that tellurium didn’t fit led to further fruitful investigations into the fundamentals of chemistry. And Meehl argues that it was perfectly rational for Mendeleev to stick with his table, exclaiming, “When you’ve got that much going for you and that much order, perfectly sensible to say something was wrong with a couple of numbers in there.”

Ether Drift

Meehl’s second example details the refutation of the aether wind. Before Einstein, physicists thought that light must travel through a medium called the luminiferous aether. After all, every other wave travels through some medium. If light traveled through the aether, then, since the Earth moves around the sun, the speed of light measured along perpendicular directions on the Earth’s surface should differ.

Albert Michelson and Edward Morley devised a clever experiment to detect this speed difference using an interferometer. They sent light down two perpendicular paths, reflected it off mirrors, and recombined the beams; any difference in speed would show up as a shift in the interference pattern where the beams meet. After careful and valiant engineering of their device, they found no evidence for the aether drift in the 1880s.

The apocryphal story goes that this inspired Einstein to invent special relativity, where light moves at the same speed in all reference frames. However, there was a significant problem. Michelson and Morley’s results were consistent not only with the refutation of the aether drift but also with the apparatus being broken. A broken interferometer would likewise show no fringe shift and hence would also report zero aether drift. There was plenty of evidence that Michelson and Morley were trying to estimate an effect far below the capabilities of their device.

In 1933, nearly 50 years after the first Michelson-Morley experiment and well after the acceptance of special relativity, Dayton C. Miller announced evidence of an aether drift. Miller was a highly respected experimental physicist, and physicists took his results quite seriously. He even presented his aether drift findings as part of his presidential address to the American Physical Society.

Miller’s results were compelling. Surely, they couldn’t be dismissed out of hand. But, in fact, they were. Einstein blew off Miller’s results as a “thermal artifact,” and physicists just continued on as if Miller’s experiment hadn’t happened. In 1945, a more spectacular confirmation of special relativity lit up the New Mexico desert. It wasn’t until 1955 that physicists bothered to explain away Miller’s results, using computational statistics to argue that the result was likely due to thermal effects. But this was just a reanalysis of 30-year-old findings, not a new experiment! It was a post hoc statistical rationalization for why Miller was wrong. While plans for more interferometry experiments continued into the 1960s, most physicists moved on to more trendy topics.

The Eddington Experiment

Meehl doesn’t discuss this one in depth, but he notes a couple of times that Popper’s main scientific inspiration was Eddington’s experiment, in which Newton was “refuted” and Einstein “confirmed.”

In 1919, Eddington led a scientific collaboration to measure the gravitational deflection of light near the sun to test the predictions of Einstein’s theory of relativity. Two teams, one in Principe and one in Brazil, measured the positions of stars that would appear near the sun in the sky during a solar eclipse. Einstein’s theory predicted that the sun’s gravity would bend the passing starlight, shifting the stars’ measured positions.

Did the measured deflections confirm Einstein? Most analyses of the data now say not at all. There were only 28 salvageable photographic measurements from the experiment, 26 from Brazil and 2 from Principe. Eddington discarded 18 of the Brazilian ones, declaring that telescope defective. The remaining 8 revealed a deflection confirming Einstein. The 2 from Principe, after a theory-infected heuristic adjustment, also confirmed Einstein.

But before discarding them, the team had measured the deflection on those 18 plates. These measurements were far more favorable to Newtonian gravity than to relativity. Eddington argued that the device that had taken these measurements suffered from “systematic errors.” The headlines would read “Einstein Confirmed. Newton Refuted,” but this was just Eddington’s opinion, man.

But Eddington wasn’t wrong! In 1969, a team of scientists at Caltech used radio interferometry to measure the deflection of light in the solar gravitational field. Twenty years of refining this new technique yielded progressively tighter agreement with relativity.

If you’re interested in learning more about the weirdness of these relativity experiments and the social pressures of science, I highly recommend checking out Chapter 2 of The Golem by Harry Collins and Trevor Pinch.

What should we do?

From these varied examples, Meehl concludes, “The history of science doesn't act the way Popper says the good scientist ought to act.”

“It's quite clear in the history of science that the way the facts control the theories is collectively over the long haul, not individually in the short run. Whatever you think about it philosophically, empirically, or historically, it's crystal clear that in every science, the theories, when they are reasonably well corroborated, are allowed to control individual alleged facts. The alleged fact that would be a falsifier is simply not admitted into the corpus of belief.”

Now, Meehl is also not particularly sanguine about formalizing how a long run of facts can support a particular theory. When should we accuse a scientist clinging to their pet theory of being a bad scientist?

“At what point that kind of theoretical tenacity becomes sinful is not known. No philosopher or historian of science has been able to give any kind of a rule or even rule of thumb, which enables you to say ‘Now at time T3 from this point on theoretical tenacity is a scientific sin’ and probably nobody ever will be able to.”

So we’re left with a few questions: Why might a scientist like one theory more than another if both were unfalsified? When should evidence be reasonably sufficient to abandon a theory? How can you argue against theoretical tenacity? Popper tried to engage with these questions, and I’ll describe his approach of “corroboration” in the final post about lecture 2.

By Ben Recht

Clique Is Hard on Average for Sherali-Adams with Bounded Coefficients

from arXiv: Computational Complexity

Authors: Susanna F. de Rezende, Aaron Potechin, Kilian Risse

We prove that Sherali-Adams with polynomially bounded coefficients requires proofs of size $n^{\Omega(d)}$ to rule out the existence of an $n^{\Theta(1)}$-clique in Erd\H{o}s-R\'{e}nyi random graphs whose maximum clique is of size $d\leq 2\log n$. This lower bound is tight up to the multiplicative constant in the exponent. We obtain this result by introducing a technique inspired by pseudo-calibration which may be of independent interest. The technique involves defining a measure on monomials that precisely captures the contribution of a monomial to a refutation. This measure intuitively captures progress and should have further applications in proof complexity.

Unconditional correctness of recent quantum algorithms for factoring and computing discrete logarithms

from arXiv: Computational Complexity

Authors: Cédric Pilatte

In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.

Tightening I/O Lower Bounds through the Hourglass Dependency Pattern

from arXiv: Computational Complexity

Authors: Lionel Eyraud-Dubois, Guillaume Iooss, Julien Langou, Fabrice Rastello

When designing an algorithm, one cares about arithmetic/computational complexity, but data movement (I/O) complexity plays an increasingly important role that highly impacts performance and energy consumption. For a given algorithm and a given I/O model, scheduling strategies such as loop tiling can reduce the required I/O down to a limit, called the I/O complexity, inherent to the algorithm itself. The objective of I/O complexity analysis is to compute, for a given program, its minimal I/O requirement among all valid schedules. We consider a sequential execution model with two memories, an infinite one, and a small one of size S on which the computations retrieve and produce data. The I/O is the number of reads and writes between the two memories. We identify a common "hourglass pattern" in the dependency graphs of several common linear algebra kernels. Using the properties of this pattern, we mathematically prove tighter lower bounds on their I/O complexity, which improves the previous state-of-the-art bound by a parametric ratio. This proof was integrated inside the IOLB automatic lower bound derivation tool.
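
The paper's hourglass-pattern bounds are beyond a short snippet, but the kind of schedule they constrain is easy to illustrate. Below is a minimal sketch, not taken from the paper, of loop tiling for matrix multiplication: keeping one tile of each operand resident in fast memory reduces the data movement from roughly n^3 words for the untiled triple loop to the classical Theta(n^3 / sqrt(S)) regime of the Hong-Kung bound. The function and parameter names (tiled_matmul, tile) are illustrative.

import numpy as np

def tiled_matmul(A, B, tile=64):
    """Blocked (tiled) matrix multiplication, C = A @ B.

    Working tile by tile keeps one tile of A, B and C "resident" at a time,
    so with a fast memory of size S and tile ~ sqrt(S/3), roughly
    O(n^3 / sqrt(S)) words cross the memory boundary instead of the
    ~n^3 words moved by the untiled triple loop (the Hong-Kung bound)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # one tile of each operand is touched in this step
                C[i0:i0 + tile, j0:j0 + tile] += (
                    A[i0:i0 + tile, k0:k0 + tile] @ B[k0:k0 + tile, j0:j0 + tile])
    return C

rng = np.random.default_rng(0)
A, B = rng.random((200, 150)), rng.random((150, 120))
assert np.allclose(tiled_matmul(A, B), A @ B)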

A Multivariate to Bivariate Reduction for Noncommutative Rank and Related Results

from arXiv: Computational Complexity

Authors: Vikraman Arvind, Pushkar S Joglekar

We study the noncommutative rank problem, ncRANK, of computing the rank of matrices with linear entries in $n$ noncommuting variables and the problem of noncommutative Rational Identity Testing, RIT, which is to decide if a given rational formula in $n$ noncommuting variables is zero on its domain of definition. Motivated by the question whether these problems have deterministic NC algorithms, we revisit their interrelationship from a parallel complexity point of view. We show the following results: 1. Based on Cohn's embedding theorem we show deterministic NC reductions from multivariate ncRANK to bivariate ncRANK and from multivariate RIT to bivariate RIT. 2. We obtain a deterministic NC-Turing reduction from bivariate RIT to bivariate ncRANK, thereby proving that a deterministic NC algorithm for bivariate ncRANK would imply that both multivariate RIT and multivariate ncRANK are in deterministic NC.

Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy

from arXiv: Data Structures and Algorithms

Authors: Krishnamurthy Dvijotham, H. Brendan McMahan, Krishna Pillutla, Thomas Steinke, Abhradeep Thakurta

In the task of differentially private (DP) continual counting, we receive a stream of increments and our goal is to output an approximate running total of these increments, without revealing too much about any specific increment. Despite its simplicity, differentially private continual counting has attracted significant attention both in theory and in practice. Existing algorithms for differentially private continual counting are either inefficient in terms of their space usage or add an excessive amount of noise, inducing suboptimal utility. The most practical DP continual counting algorithms add carefully correlated Gaussian noise to the values. The task of choosing the covariance for this noise can be expressed in terms of factoring the lower-triangular matrix of ones (which computes prefix sums). We present two approaches from this class (for different parameter regimes) that achieve near-optimal utility for DP continual counting and only require logarithmic or polylogarithmic space (and time). Our first approach is based on a space-efficient streaming matrix multiplication algorithm for a class of Toeplitz matrices. We show that to instantiate this algorithm for DP continual counting, it is sufficient to find a low-degree rational function that approximates the square root on a circle in the complex plane. We then apply and extend tools from approximation theory to achieve this. We also derive efficient closed-forms for the objective function for arbitrarily many steps, and show direct numerical optimization yields a highly practical solution to the problem. Our second approach combines our first approach with a recursive construction similar to the binary tree mechanism.
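
The Toeplitz-factorization constructions of the paper are not reproduced here, but the binary tree mechanism it recursively combines with is classical and easy to sketch. The following offline Python sketch is illustrative only: the function name tree_private_prefix_sums is made up, and the noise scale sigma is left to the caller, since the exact calibration depends on the privacy accounting. Each prefix sum is assembled from O(log T) noisy dyadic blocks, so the per-query error grows only polylogarithmically in T.

import numpy as np

def tree_private_prefix_sums(x, sigma):
    """Binary tree mechanism for continual counting (offline sketch).

    Every node of a binary tree over the T time steps stores its block sum
    plus independent Gaussian noise of scale sigma; a prefix sum is the sum
    of at most about log2(T) noisy blocks, so the per-query error is
    O(sigma * sqrt(log T)), while each increment touches only O(log T)
    nodes (which is what the privacy accounting charges)."""
    T = len(x)
    size = 1
    while size < T:
        size *= 2
    padded = np.zeros(size)
    padded[:T] = x
    tree = []                  # tree[lvl][j]: noisy sum of the j-th block of length 2^lvl
    block = padded
    while True:
        tree.append(block + np.random.normal(0.0, sigma, size=len(block)))
        if len(block) == 1:
            break
        block = block.reshape(-1, 2).sum(axis=1)
    out = np.zeros(T)
    for t in range(1, T + 1):  # noisy estimate of x[0] + ... + x[t-1]
        total, pos, remaining = 0.0, 0, t
        while remaining > 0:   # decompose [0, t) into tree-aligned dyadic blocks
            lvl = int(np.log2(remaining))
            total += tree[lvl][pos >> lvl]
            pos += 1 << lvl
            remaining -= 1 << lvl
        out[t - 1] = total
    return out

stream = np.random.binomial(1, 0.3, size=100)
print(np.abs(tree_private_prefix_sums(stream, sigma=1.0) - np.cumsum(stream)).max())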

Kernelization Dichotomies for Hitting Subgraphs under Structural Parameterizations

from arXiv: Data Structures and Algorithms

Authors: Marin Bougeret, Bart M. P. Jansen, Ignasi Sau

For a fixed graph $H$, the $H$-SUBGRAPH HITTING problem consists in deleting the minimum number of vertices from an input graph to obtain a graph without any occurrence of $H$ as a subgraph. This problem can be seen as a generalization of VERTEX COVER, which corresponds to the case $H = K_2$. We initiate a study of $H$-SUBGRAPH HITTING from the point of view of characterizing structural parameterizations that allow for polynomial kernels, within the recently active framework of taking as the parameter the number of vertex deletions to obtain a graph in a "simple" class $C$. Our main contribution is to identify graph parameters that, when $H$-SUBGRAPH HITTING is parameterized by the vertex-deletion distance to a class $C$ where any of these parameters is bounded, and assuming standard complexity assumptions and that $H$ is biconnected, allow us to prove the following sharp dichotomy: the problem admits a polynomial kernel if and only if $H$ is a clique. These new graph parameters are inspired by the notion of $C$-elimination distance introduced by Bulian and Dawar [Algorithmica 2016], and generalize it in two directions. Our results also apply to the version of the problem where one wants to hit $H$ as an induced subgraph, and imply in particular, that the problems of hitting minors and hitting (induced) subgraphs have a substantially different behavior with respect to the existence of polynomial kernels under structural parameterizations.

Computing Hamiltonian Paths with Partial Order Restrictions

from arXiv: Data Structures and Algorithms

Authors: Jesse Beisegel, Fabienne Ratajczak, Robert Scheffler

When solving the Hamiltonian path problem it seems natural to be given additional precedence constraints for the order in which the vertices are visited. For example one could decide whether a Hamiltonian path exists for a fixed starting point, or that some vertices are visited before another vertex. We consider the problem of finding a Hamiltonian path that observes all precedence constraints given in a partial order on the vertex set. We show that this problem is $\mathsf{NP}$-complete even if restricted to complete bipartite graphs and posets of height 2. In contrast, for posets of width $k$ there is an $\mathcal{O}(k^2 n^k)$ algorithm for arbitrary graphs with $n$ vertices. We show that it is unlikely that the running time of this algorithm can be improved significantly, i.e., there is no $f(k) n^{o(k)}$ time algorithm under the assumption of the Exponential Time Hypothesis. Furthermore, for the class of outerplanar graphs, we give an $\mathcal{O}(n^2)$ algorithm for arbitrary posets.
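
The $\mathcal{O}(k^2 n^k)$ algorithm for width-$k$ posets is not reproduced here; the sketch below only makes the problem statement concrete. The helper hamiltonian_path_with_precedence is hypothetical and brute-forces all permutations, so it is suitable for tiny instances only.

from itertools import permutations

def hamiltonian_path_with_precedence(n, edges, precedence):
    """Brute-force search for a Hamiltonian path respecting a partial order.

    edges: undirected edges (u, v) on vertices 0..n-1.
    precedence: pairs (a, b) meaning a must appear before b on the path.
    Exponential in n -- this only illustrates the problem; the paper's
    algorithms instead exploit the width of the partial order."""
    adj = {frozenset(e) for e in edges}
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        if any(pos[a] > pos[b] for a, b in precedence):
            continue
        if all(frozenset((order[i], order[i + 1])) in adj for i in range(n - 1)):
            return list(order)
    return None

# A 4-cycle with the constraint "visit 0 before 3".
print(hamiltonian_path_with_precedence(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [(0, 3)]))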

More Asymmetry Yields Faster Matrix Multiplication

from arXiv: Data Structures and Algorithms

Authors: Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, Renfei Zhou

We present a new improvement on the laser method for designing fast matrix multiplication algorithms. The new method further develops the recent advances by [Duan, Wu, Zhou FOCS 2023] and [Vassilevska Williams, Xu, Xu, Zhou SODA 2024]. Surprisingly the new improvement is achieved by incorporating more asymmetry in the analysis, circumventing a fundamental tool of prior work that requires two of the three dimensions to be treated identically. The method yields a new bound on the square matrix multiplication exponent $$\omega<2.371339,$$ improved from the previous bound of $\omega<2.371552$. We also improve the bounds of the exponents for multiplying rectangular matrices of various shapes.

On Approximating the Dynamic and Discrete Network Flow Problem

from arXiv: Data Structures and Algorithms

Authors: Bubai Manna, Bodhayan Roy, Vorapong Suppakitpaisarn

We examine the dynamic network flow problem under the assumption that the flow consists of discrete units. The dynamic network flow problem is commonly addressed in the context of developing evacuation plans, where the flow is typically treated as a continuous quantity. However, real-world scenarios often involve moving groups, such as families, as single units. We demonstrate that solving the dynamic flow problem with this consideration is APX-hard. Conversely, we present a PTAS for instances where the base graph is a path with a constant number of nodes. We introduce a `ready time' constraint to the minsum bin packing problem, meaning certain items cannot be placed in specific bins, develop a PTAS for this modified problem, and apply our algorithms to the discrete and dynamic flow problem.

Constrained Level Planarity is FPT with Respect to the Vertex Cover Number

from arXiv: Data Structures and Algorithms

Authors: Boris Klemz, Marie Diana Sieper

The problem Level Planarity asks for a crossing-free drawing of a graph in the plane such that vertices are placed at prescribed y-coordinates (called levels) and such that every edge is realized as a y-monotone curve. In the variant Constrained Level Planarity, each level y is equipped with a partial order <_y on its vertices and in the desired drawing the left-to-right order of vertices on level y has to be a linear extension of <_y. Constrained Level Planarity is known to be a remarkably difficult problem: previous results by Klemz and Rote [ACM Trans. Alg. 2019] and by Br\"uckner and Rutter [SODA 2017] imply that it remains NP-hard even when restricted to graphs whose tree-depth and feedback vertex set number are bounded by a constant and even when the instances are additionally required to be either proper, meaning that each edge spans two consecutive levels, or ordered, meaning that all given partial orders are total orders. In particular, these results rule out the existence of FPT-time (even XP-time) algorithms with respect to these and related graph parameters (unless P=NP). However, the parameterized complexity of Constrained Level Planarity with respect to the vertex cover number of the input graph remained open. In this paper, we show that Constrained Level Planarity can be solved in FPT-time when parameterized by the vertex cover number. In view of the previous intractability statements, our result is best-possible in several regards: a speed-up to polynomial time or a generalization to the aforementioned smaller graph parameters is not possible, even if restricting to proper or ordered instances.

Parameterized Complexity of Efficient Sortation

from arXiv: Data Structures and Algorithms

Authors: Robert Ganian, Hung P. Hoang, Simon Wietheger

A crucial challenge arising in the design of large-scale logistical networks is to optimize parcel sortation for routing. We study this problem under the recent graph-theoretic formalization of Van Dyk, Klause, Koenemann and Megow (IPCO 2024). The problem asks - given an input digraph D (the fulfillment network) together with a set of commodities represented as source-sink tuples - for a minimum-outdegree subgraph H of the transitive closure of D that contains a source-sink route for each of the commodities. Given the underlying motivation, we study two variants of the problem which differ in whether the routes for the commodities are assumed to be given, or can be chosen arbitrarily. We perform a thorough parameterized analysis of the complexity of both problems. Our results concentrate on three fundamental parameterizations of the problem: (1) When attempting to parameterize by the target outdegree of H, we show that the problems are paraNP-hard even in highly restricted cases; (2) When parameterizing by the number of commodities, we utilize Ramsey-type arguments, kernelization and treewidth reduction techniques to obtain parameterized algorithms for both problems; (3) When parameterizing by the structure of D, we establish fixed-parameter tractability for both problems w.r.t. treewidth, maximum degree and the maximum routing length. We combine this with lower bounds which show that omitting any of the three parameters results in paraNP-hardness.

Approximation Algorithms for Hop Constrained and Buy-at-Bulk Network Design via Hop Constrained Oblivious Routing

from arXiv: Data Structures and Algorithms

Authors: Chandra Chekuri, Rhea Jain

We consider two-cost network design models in which edges of the input graph have an associated cost and length. We build upon recent advances in hop-constrained oblivious routing to obtain two sets of results. We address multicommodity buy-at-bulk network design in the nonuniform setting. Existing poly-logarithmic approximations are based on the junction tree approach [CHKS09,KN11]. We obtain a new polylogarithmic approximation via a natural LP relaxation. This establishes an upper bound on its integrality gap and affirmatively answers an open question raised in [CHKS09]. The rounding is based on recent results in hop-constrained oblivious routing [GHZ21], and this technique yields a polylogarithmic approximation in more general settings such as set connectivity. Our algorithm for buy-at-bulk network design is based on an LP-based reduction to hop constrained network design for which we obtain LP-based bicriteria approximation algorithms. We also consider a fault-tolerant version of hop constrained network design where one wants to design a low-cost network to guarantee short paths between a given set of source-sink pairs even when k-1 edges can fail. This model has been considered in network design [GL17,GML18,AJL20] but no approximation algorithms were known. We obtain polylogarithmic bicriteria approximation algorithms for the single-source setting for any fixed k. We build upon the single-source algorithm and the junction-tree approach to obtain an approximation algorithm for the multicommodity setting when at most one edge can fail.

On the Streaming Complexity of Expander Decomposition

from arXiv: Data Structures and Algorithms

Authors: Yu Chen, Michael Kapralov, Mikhail Makarov, Davide Mazzali

In this paper we study the problem of finding $(\epsilon, \phi)$-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a $\phi$-expander, while the number of inter-cluster edges is only an $\epsilon$ fraction of the total volume. It was recently shown that there exists a simple algorithm to construct a $(O(\phi \log n), \phi)$-expander decomposition of an $n$-vertex graph using $\widetilde{O}(n/\phi^2)$ bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result calls for understanding the extent to which a dependence in space on the sparsity parameter $\phi$ is inherent. We move towards answering this question on two fronts. We prove that a $(O(\phi \log n), \phi)$-expander decomposition can be found using $\widetilde{O}(n)$ space, for every $\phi$. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], with size independent of $\phi$, preserves the cuts inside the clusters of a boundary-linked expander decomposition within a multiplicative error. Notable algorithmic applications use sequences of expander decompositions, in particular one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal of Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of $(O(\phi \log n), \phi)$-expander decompositions requires ${\widetilde{\Omega}}(n/\phi)$ bits of space, even in insertion only streams.
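
As a reference point for the definition used above, the sketch below checks whether a given partition is an $(\epsilon, \phi)$-expander decomposition: inter-cluster edges may account for at most an epsilon fraction of the total volume, and every cluster must induce a phi-expander. The helper names are made up and the conductance check is brute force over all cuts, so this is for toy graphs only, not an algorithm from the paper.

from itertools import combinations

def volume(adj, S):
    return sum(len(adj[v]) for v in S)

def induces_phi_expander(adj, cluster, phi):
    """Brute-force check that the subgraph induced on `cluster` has
    conductance at least phi: every cut S satisfies
    cut-edges >= phi * min(vol(S), vol(cluster - S)) in the induced graph.
    Exponential in |cluster| -- for illustrating the definition only."""
    cset = set(cluster)
    induced = {v: [u for u in adj[v] if u in cset] for v in cset}
    nodes = sorted(cset)
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            S = set(S)
            cut = sum(1 for v in S for u in induced[v] if u not in S)
            denom = min(volume(induced, S), volume(induced, cset - S))
            if denom > 0 and cut < phi * denom:
                return False
    return True

def is_expander_decomposition(adj, clusters, eps, phi):
    """adj: dict vertex -> list of neighbours (undirected); clusters: disjoint
    vertex sets covering all vertices.  Checks both (eps, phi) conditions."""
    cid = {v: i for i, c in enumerate(clusters) for v in c}
    total_volume = sum(len(nbrs) for nbrs in adj.values())          # = 2|E|
    inter = sum(1 for v in adj for u in adj[v] if cid[v] != cid[u]) // 2
    return (inter <= eps * total_volume
            and all(induces_phi_expander(adj, c, phi) for c in clusters))

# Two triangles joined by one edge, split into the two triangles.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(is_expander_decomposition(adj, [{0, 1, 2}, {3, 4, 5}], eps=0.1, phi=0.4))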

Multilayer Correlation Clustering

from arXiv: Data Structures and Algorithms

Authors: Atsushi Miyauchi, Florian Adriaens, Francesco Bonchi, Nikolaj Tatti

In this paper, we establish Multilayer Correlation Clustering, a novel generalization of Correlation Clustering (Bansal et al., FOCS '02) to the multilayer setting. In this model, we are given a series of inputs of Correlation Clustering (called layers) over the common set $V$. The goal is then to find a clustering of $V$ that minimizes the $\ell_p$-norm ($p\geq 1$) of the disagreements vector, which is defined as the vector (with dimension equal to the number of layers), each element of which represents the disagreements of the clustering on the corresponding layer. For this generalization, we first design an $O(L\log n)$-approximation algorithm, where $L$ is the number of layers, based on the well-known region growing technique. We then study an important special case of our problem, namely the problem with the probability constraint. For this case, we first give an $(\alpha+2)$-approximation algorithm, where $\alpha$ is any possible approximation ratio for the single-layer counterpart. For instance, we can take $\alpha=2.5$ in general (Ailon et al., JACM '08) and $\alpha=1.73+\epsilon$ for the unweighted case (Cohen-Addad et al., FOCS '23). Furthermore, we design a $4$-approximation algorithm, which improves the above approximation ratio of $\alpha+2=4.5$ for the general probability-constraint case. Computational experiments using real-world datasets demonstrate the effectiveness of our proposed algorithms.
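
The single-layer subroutine with approximation factor alpha referred to above is typically instantiated with the randomized pivot algorithm of Ailon et al.; the sketch below shows its basic unweighted, complete-graph form, which is an expected 3-approximation (the 2.5 factor quoted above comes from an LP-guided pivot rule). Function names are illustrative, not from the paper.

import random

def kwik_cluster(n, positive):
    """Randomized pivot algorithm for unweighted correlation clustering on
    the complete graph on vertices 0..n-1.  `positive` is a set of frozenset
    pairs labelled '+'; all other pairs are '-'.  Pick a random pivot, put it
    in a cluster with all its unclustered '+' neighbours, and recurse."""
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        pivot = random.choice(sorted(unclustered))
        cluster = {pivot} | {v for v in unclustered
                             if v != pivot and frozenset((pivot, v)) in positive}
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

def disagreements(n, positive, clusters):
    """'+' pairs split across clusters plus '-' pairs placed together."""
    label = {v: i for i, c in enumerate(clusters) for v in c}
    return sum(1 for u in range(n) for v in range(u + 1, n)
               if (frozenset((u, v)) in positive) != (label[u] == label[v]))

positive = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (3, 4)]}
C = kwik_cluster(5, positive)
print(C, disagreements(5, positive, C))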

Layered List Labeling

from arXiv: Data Structures and Algorithms

Authors: Michael A. Bender, Alex Conway, Martin Farach-Colton, Hanna Komlos, William Kuszmaul

The list-labeling problem is one of the most basic and well-studied algorithmic primitives in data structures, with an extensive literature spanning upper bounds, lower bounds, and data management applications. The classical algorithm for this problem, dating back to 1981, has amortized cost $O(\log^2 n)$. Subsequent work has led to improvements in three directions: \emph{low-latency} (worst-case) bounds; \emph{high-throughput} (expected) bounds; and (adaptive) bounds for \emph{important workloads}. Perhaps surprisingly, these three directions of research have remained almost entirely disjoint -- this is because, so far, the techniques that allow for progress in one direction have forced worsening bounds in the others. Thus there would appear to be a tension between worst-case, adaptive, and expected bounds. List labeling has been proposed for use in databases at least as early as PODS'99, but a database needs good throughput and response time and needs to adapt to common workloads (e.g., bulk loads), and no current list-labeling algorithm achieves good bounds for all three. We show that this tension is not fundamental. In fact, with the help of new data-structural techniques, one can actually \emph{combine} any three list-labeling solutions in order to cherry-pick the best worst-case, adaptive, and expected bounds from each of them.
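
To make the problem interface concrete, here is a deliberately naive baseline, not one of the algorithms discussed in the paper: labels live in a fixed integer universe, an insertion takes the midpoint of the surrounding gap, and a full relabeling is triggered when a gap runs out. This can cost O(n) amortized moves on adversarial sequences, far from the classical O(log^2 n) bound, but it shows what a list-labeling structure must support. The class and method names are made up.

class NaiveListLabeling:
    """Toy list-labeling structure: maintain strictly increasing integer
    labels in [0, U) for a dynamic ordered list.  Insertion takes the
    midpoint of the surrounding gap and falls back to a full relabeling
    when the gap is exhausted -- far worse than the classical 1981
    O(log^2 n) amortized scheme, but it illustrates the interface."""

    def __init__(self, universe_bits=32):
        self.U = 1 << universe_bits
        self.items = []            # list of (label, value), sorted by label

    def _relabel(self):
        gap = self.U // (len(self.items) + 1)
        self.items = [((i + 1) * gap, v) for i, (_, v) in enumerate(self.items)]

    def insert_after(self, index, value):
        """Insert `value` right after position `index` (-1 means the front)."""
        lo = self.items[index][0] if index >= 0 else 0
        hi = self.items[index + 1][0] if index + 1 < len(self.items) else self.U
        if hi - lo <= 1:           # no free label in the gap: relabel everything
            self._relabel()
            return self.insert_after(index, value)
        self.items.insert(index + 1, ((lo + hi) // 2, value))

lst = NaiveListLabeling()
for i in range(10):
    lst.insert_after(len(lst.items) - 1, i)   # append at the end
print([label for label, _ in lst.items])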

Approximation Algorithm of Minimum All-Ones Problem for Arbitrary Graphs

from arXiv: Data Structures and Algorithms

Authors: Chen Wang, Chao Wang, Gregory Z. Gutin, Xiaoyan Zhang

Let $G=(V, E)$ be a graph and let each vertex of $G$ have a lamp and a button. Each button can be of $\sigma^+$-type or $\sigma$-type. Assume that initially some lamps are on and others are off. The button on vertex $x$ is of $\sigma^+$-type ($\sigma$-type, respectively) if pressing the button changes the lamp states on $x$ and on its neighbors in $G$ (the lamp states on the neighbors of $x$ only, respectively). Assume that there is a set $X\subseteq V$ such that pressing buttons on vertices of $X$ lights all lamps on vertices of $G$. In particular, it is known to hold when initially all lamps are off and all buttons are of $\sigma^+$-type. Finding such a set $X$ of the smallest size is NP-hard even if initially all lamps are off and all buttons are of $\sigma^+$-type. Using a linear algebraic approach we design a polynomial-time approximation algorithm for the problem such that for the set $X$ constructed by the algorithm, we have $|X|\le \min\{r,(|V|+{\rm opt})/2\},$ where $r$ is the rank of a (modified) adjacency matrix of $G$ and ${\rm opt}$ is the size of an optimal solution to the problem. To the best of our knowledge, this is the first polynomial-time approximation algorithm for the problem with a nontrivial approximation guarantee.
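
The approximation algorithm itself is not reproduced here, but its linear-algebraic starting point can be made concrete: with $\sigma^+$ buttons and all lamps initially off, a feasible button set is any solution of $(A + I)x = \mathbf{1}$ over GF(2), which always exists in this case (as noted above). The sketch below, with a made-up function name, finds such a solution by plain Gaussian elimination mod 2 and makes no attempt to minimize $|X|$.

import numpy as np

def solve_all_ones(adj_matrix):
    """Find a 0/1 vector x with (A + I) x = 1 over GF(2): pressing the
    buttons in the support of x turns every lamp on, starting from all-off
    with sigma+ buttons.  Plain Gaussian elimination mod 2; this finds *a*
    solution, not a minimum one (that problem is NP-hard)."""
    n = len(adj_matrix)
    A = (np.array(adj_matrix, dtype=np.int64) + np.eye(n, dtype=np.int64)) % 2
    M = np.concatenate([A, np.ones((n, 1), dtype=np.int64)], axis=1)
    pivots, row = [], 0
    for col in range(n):
        pivot = next((r for r in range(row, n) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]          # swap pivot row into place
        for r in range(n):
            if r != row and M[r, col]:
                M[r] ^= M[row]                     # XOR = addition mod 2
        pivots.append(col)
        row += 1
    if any(M[r, n] for r in range(row, n)):
        return None                                # inconsistent system
    x = np.zeros(n, dtype=np.int64)
    for r, col in enumerate(pivots):
        x[col] = M[r, n]                           # free variables set to 0
    return x

# 4-cycle: (A + I) has three ones per row, so pressing every vertex works.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
x = solve_all_ones(C4)
print(x, ((np.array(C4) + np.eye(4, dtype=int)) @ x) % 2)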

Scalable Distributed String Sorting

from arXiv: Data Structures and Algorithms

Authors: Florian Kurpicz, Pascal Mehnert, Peter Sanders, Matthias Schimek

String sorting is an important part of tasks such as building index data structures. Unfortunately, current string sorting algorithms do not scale to massively parallel distributed-memory machines since they either have latency (at least) proportional to the number of processors $p$ or communicate the data a large number of times (at least logarithmic). We present practical and efficient algorithms for distributed-memory string sorting that scale to large $p$. Similar to state-of-the-art sorters for atomic objects, the algorithms have latency of about $p^{1/k}$ when allowing the data to be communicated $k$ times. Experiments indicate good scaling behavior on a wide range of inputs on up to 49152 cores. Overall, we achieve speedups of up to 5 over the current state-of-the-art distributed string sorting algorithms.

Cost-Driven Data Replication with Predictions

from arXiv: Data Structures and Algorithms

Authors: Tianyu Zuo, Xueyan Tang, Bu Sung Lee

This paper studies an online replication problem for distributed data access. The goal is to dynamically create and delete data copies in a multi-server system as time passes to minimize the total storage and network cost of serving access requests. We study the problem in the emergent learning-augmented setting, assuming simple binary predictions about inter-request times at individual servers. We develop an online algorithm and prove that it is ($\frac{5+\alpha}{3}$)-consistent (competitiveness under perfect predictions) and ($1 + \frac{1}{\alpha}$)-robust (competitiveness under terrible predictions), where $\alpha \in (0, 1]$ is a hyper-parameter representing the level of distrust in the predictions. We also study the impact of mispredictions on the competitive ratio of the proposed algorithm and adapt it to achieve a bounded robustness while retaining its consistency. We further establish a lower bound of $\frac{3}{2}$ on the consistency of any deterministic learning-augmented algorithm. Experimental evaluations are carried out to evaluate our algorithms using real data access traces.
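
Reading off the stated guarantees numerically makes the tradeoff visible: increasing the distrust parameter alpha worsens consistency, $(5+\alpha)/3$, while improving robustness, $1 + 1/\alpha$. The snippet below is just arithmetic on those two expressions.

# Consistency (5 + a) / 3 versus robustness 1 + 1/a for a few distrust levels a.
for a in (0.25, 0.5, 0.75, 1.0):
    print(f"alpha = {a:4.2f}: consistency {(5 + a) / 3:.2f}, robustness {1 + 1 / a:.2f}")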

Parallel and (Nearly) Work-Efficient Dynamic Programming

from arXiv: Data Structures and Algorithms

Authors: Xiangyun Ding, Yan Gu, Yihan Sun

The idea of dynamic programming (DP), proposed by Bellman in the 1950s, is one of the most important algorithmic techniques. However, in parallel, many fundamental and sequentially simple problems become more challenging, and open to a (nearly) work-efficient solution (i.e., the work is off by at most a polylogarithmic factor over the best sequential solution). In fact, sequential DP algorithms employ many advanced optimizations such as decision monotonicity or special data structures, and achieve better work than straightforward solutions. Many such optimizations are inherently sequential, which creates extra challenges for a parallel algorithm to achieve the same work bound. The goal of this paper is to achieve (nearly) work-efficient parallel DP algorithms by parallelizing classic, highly-optimized and practical sequential algorithms. We show a general framework called the Cordon Algorithm for parallel DP algorithms, and use it to solve several classic problems. Our selection of problems includes Longest Increasing Subsequence (LIS), sparse Longest Common Subsequence (LCS), convex/concave generalized Least Weight Subsequence (LWS), Optimal Alphabetic Tree (OAT), and more. We show how the Cordon Algorithm can be used to achieve the same level of optimization as the sequential algorithms, and achieve good parallelism. Many of our algorithms are conceptually simple, and we show some experimental results as proofs-of-concept.
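
As an example of the kind of highly optimized sequential baseline the paper parallelizes, here is the classical O(n log n) longest increasing subsequence routine (binary search over the smallest possible tails). This illustrates the sequential starting point only; it is not the Cordon Algorithm.

import bisect

def lis_length(seq):
    """Classical O(n log n) longest (strictly) increasing subsequence:
    tails[k] is the smallest possible tail of an increasing subsequence of
    length k + 1, so each element either extends the longest subsequence
    found so far or tightens one of the tails via binary search."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # -> 4  (e.g. 1, 4, 5, 9)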

Dynamic PageRank: Algorithms and Lower Bounds

from arXiv: Data Structures and Algorithms

Authors: Rajesh Jayaram, Jakub Łącki, Slobodan Mitrović, Krzysztof Onak, Piotr Sankowski

We consider the PageRank problem in the dynamic setting, where the goal is to explicitly maintain an approximate PageRank vector $\pi \in \mathbb{R}^n$ for a graph under a sequence of edge insertions and deletions. Our main result is a complete characterization of the complexity of dynamic PageRank maintenance for both multiplicative and additive ($L_1$) approximations. First, we establish matching lower and upper bounds for maintaining additive approximate PageRank in both incremental and decremental settings. In particular, we demonstrate that in the worst-case $(1/\alpha)^{\Theta(\log \log n)}$ update time is necessary and sufficient for this problem, where $\alpha$ is the desired additive approximation. On the other hand, we demonstrate that the commonly employed ForwardPush approach performs substantially worse than this optimal runtime. Specifically, we show that ForwardPush requires $\Omega(n^{1-\delta})$ time per update on average, for any $\delta > 0$, even in the incremental setting. For multiplicative approximations, however, we demonstrate that the situation is significantly more challenging. Specifically, we prove that any algorithm that explicitly maintains a constant factor multiplicative approximation of the PageRank vector of a directed graph must have amortized update time $\Omega(n^{1-\delta})$, for any $\delta > 0$, even in the incremental setting, thereby resolving a 13-year old open question of Bahmani et al.~(VLDB 2010). This sharply contrasts with the undirected setting, where we show that $\rm{poly}\ \log n$ update time is feasible, even in the fully dynamic setting under oblivious adversary.
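
For readers unfamiliar with ForwardPush, the sketch below is a minimal static version of the local-push routine for approximate personalized PageRank (one common non-lazy variant), assuming an unweighted adjacency-list graph; the paper's dynamic maintenance and its analysis are not reflected here.

```python
# Minimal static ForwardPush sketch for approximate personalized PageRank.
# graph: node -> list of out-neighbors; alpha: teleport probability;
# eps: per-node push threshold. Dangling nodes simply absorb their residual.
from collections import deque

def forward_push(graph, source, alpha=0.15, eps=1e-6):
    p = {u: 0.0 for u in graph}      # PageRank estimate
    r = {u: 0.0 for u in graph}      # residual probability mass
    r[source] = 1.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg = len(graph[u]) or 1
        if r[u] < eps * deg:         # nothing worth pushing (duplicates land here)
            continue
        mass, r[u] = r[u], 0.0
        p[u] += alpha * mass         # keep an alpha fraction at u
        share = (1.0 - alpha) * mass / deg
        for v in graph[u]:           # spread the rest over out-neighbors
            r[v] += share
            if r[v] >= eps * (len(graph[v]) or 1):
                queue.append(v)
    return p

# Example: a small directed triangle, personalized to node 0.
print(forward_push({0: [1], 1: [2], 2: [0]}, source=0))
```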

Fault-Tolerant Bounded Flow Preservers

from arXiv: Data Structures and Algorithms

Authors: Shivam Bansal, Keerti Choudhary, Harkirat Dhanoa, Harsh Wardhan

Given a directed graph $G = (V, E)$ with $n$ vertices, $m$ edges and a designated source vertex $s\in V$, we consider the question of finding a sparse subgraph $H$ of $G$ that preserves the flow from $s$ up to a given threshold $\lambda$ even after failure of $k$ edges. We refer to such subgraphs as $(\lambda,k)$-fault-tolerant bounded-flow-preserver ($(\lambda,k)$-FT-BFP). Formally, for any $F \subseteq E$ of at most $k$ edges and any $v\in V$, the $(s, v)$-max-flow in $H \setminus F$ is equal to $(s, v)$-max-flow in $G \setminus F$, if the latter is bounded by $\lambda$, and at least $\lambda$ otherwise. Our contributions are summarized as follows: 1. We provide a polynomial time algorithm that given any graph $G$ constructs a $(\lambda,k)$-FT-BFP of $G$ with at most $\lambda 2^kn$ edges. 2. We also prove a matching lower bound of $\Omega(\lambda 2^kn)$ on the size of $(\lambda,k)$-FT-BFP. In particular, we show that for every $\lambda,k,n\geq 1$, there exists an $n$-vertex directed graph whose optimal $(\lambda,k)$-FT-BFP contains $\Omega(\min\{2^k\lambda n,n^2\})$ edges. 3. Furthermore, we show that the problem of computing approximate $(\lambda,k)$-FT-BFP is NP-hard for any approximation ratio that is better than $O(\log(\lambda^{-1} n))$.

Unweighted Layered Graph Traversal

from arXiv: Data Structures and Algorithms

Authors: Xingjian Bai, Christian Coester, Romain Cosson

Introduced by Papadimitriou and Yannakakis in 1989, layered graph traversal is an important problem in online algorithms and mobile computing that has been studied for several decades, and which now is essentially resolved in its original formulation. In this paper, we demonstrate that what appears to be an innocuous modification of the problem actually leads to a drastic (exponential) reduction of the competitive ratio. Specifically, we present an algorithm that is $O(\log^2 w)$-competitive for traversing unweighted layered graphs of width $w$. Our technique is based on a simple entropic regularizer, which evolves as the agent progresses in the layered graph. Our algorithm is randomized and simply maintains that at all layers, the probability distribution of the position of the mobile agent maximizes the entropic regularizer.

Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better

from arXiv: Data Structures and Algorithms

Authors: Vicente Balmaseda, Ying Xu, Yixin Cao, Nate Veldt

Cluster deletion is an NP-hard graph clustering objective with applications in computational biology and social network analysis, where the goal is to delete a minimum number of edges to partition a graph into cliques. We first provide a tighter analysis of two previous approximation algorithms, improving their approximation guarantees from 4 to 3. Moreover, we show that both algorithms can be derandomized in a surprisingly simple way, by greedily taking a vertex of maximum degree in an auxiliary graph and forming a cluster around it. One of these algorithms relies on solving a linear program. Our final contribution is to design a new and purely combinatorial approach for doing so that is far more scalable in theory and practice.
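
To make the objective concrete, here is a naive greedy baseline that repeatedly picks a remaining vertex of maximum degree and grows a clique around it. It is not the paper's auxiliary-graph algorithm and carries no approximation guarantee; it only shows what a cluster-deletion solution and its cost look like.

```python
# Naive greedy baseline for cluster deletion (illustration only): repeatedly
# pick a remaining vertex of maximum degree and grow a clique around it.
# Returns the clusters and the number of edges that must be deleted.
def greedy_cluster_deletion(adj):
    adj = {u: set(vs) for u, vs in adj.items()}          # defensive copy
    total_edges = sum(len(vs) for vs in adj.values()) // 2
    clusters, kept, remaining = [], 0, set(adj)
    while remaining:
        pivot = max(remaining, key=lambda u: len(adj[u] & remaining))
        cluster = {pivot}
        for v in sorted(adj[pivot] & remaining,
                        key=lambda u: len(adj[u] & remaining), reverse=True):
            if cluster <= adj[v]:                        # v adjacent to all members
                cluster.add(v)
        kept += sum(len(adj[u] & cluster) for u in cluster) // 2
        clusters.append(cluster)
        remaining -= cluster
    return clusters, total_edges - kept

# Two triangles joined by one edge; the exact clustering depends on tie-breaking.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(greedy_cluster_deletion(g))
```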

Thursday, April 25

Applied Algorithms for Machine Learning: A Workshop on Future of Computation

from CS Theory Events

June 10-12, 2024 Paris, France aaforml.com In this workshop, we present a series of talks on the intersection between applied algorithms and machine learning, two indispensable areas of future computation. We will cover a range of specific topics, including randomized and approximation algorithms; large-scale machine learning; distributed and federated learning; learning-augmented algorithms; algorithms for fairness … Continue reading Applied Algorithms for Machine Learning: A Workshop on Future of Computation

By shacharlovett

Workshop on Local Algorithms

from CS Theory Events

August 5-8, 2024 Simons Institute (Berkeley, USA) ptreview.sublinear.info/2024/04/wola24-dates-registration-and-call-for-contributed-talks-and-posters/ Submission deadline: May 3, 2024 The 8th edition of WoLA, the Workshop on Local Algorithms, will be taking place on August 5-7, 2024 at the Simons Institute, as part of the Simons Institute's summer program on Sublinear Algorithms.

By shacharlovett

A nearly-$4\log n$ depth lower bound for formulas with restriction on top

from arXiv: Computational Complexity

Authors: Hao Wu

One of the major open problems in complexity theory is to demonstrate an explicit function which requires super-logarithmic depth, a.k.a. the $\mathbf{P}$ versus $\mathbf{NC^1}$ problem. The current best depth lower bound is $(3-o(1))\cdot \log n$, and it is widely open how to prove a super-$3\log n$ depth lower bound. Recently, Mihajlin and Sofronova (CCC'22) showed that if we consider formulas with a restriction on top, we can break the $3\log n$ barrier. Formally, they prove there exist two functions $f:\{0,1\}^n \rightarrow \{0,1\}$, $g:\{0,1\}^n \rightarrow \{0,1\}^n$, such that for any constant $0<\alpha<0.4$ and constant $0<\epsilon<\alpha/2$, their XOR composition $f(g(x)\oplus y)$ is not computable by an AND of $2^{(\alpha-\epsilon)n}$ formulas of size at most $2^{(1-\alpha/2-\epsilon)n}$. This implies that a modified version of the Andreev function is not computable by any circuit of depth $(3.2-\epsilon)\log n$ with the restriction that the top $0.4-\epsilon$ layers consist only of AND gates, for any small constant $\epsilon>0$. They ask whether the parameter $\alpha$ can be pushed up to nearly $1$, thus implying a nearly-$3.5\log n$ depth lower bound. In this paper, we provide a stronger answer to their question. We show there exist two functions $f:\{0,1\}^n \rightarrow \{0,1\}$, $g:\{0,1\}^n \rightarrow \{0,1\}^n$, such that for any constant $0<\alpha<2-o(1)$, their XOR composition $f(g(x)\oplus y)$ is not computable by an AND of $2^{\alpha n}$ formulas of size at most $2^{(1-\alpha/2-o(1))n}$. This implies a $(4-o(1))\log n$ depth lower bound with the restriction that the top $2-o(1)$ layers consist only of AND gates. We prove this by observing that one crucial component in Mihajlin and Sofronova's work, called the well-mixed set of functions, can be significantly simplified and thus improved. With this observation and a more careful analysis, we obtain these nearly tight results.

A Review on Message Complexity of the Algorithms for Clock Synchronization in Distributed Systems

from arXiv: Computational Complexity

Authors: Chandeepa Dissanayake, Chanuka Algama

In this work, we present an extensive analysis of clock synchronization algorithms, with a specific focus on message complexity. We begin by introducing fundamental concepts in clock synchronization, such as the Byzantine generals problem and specific notions like clock accuracy, precision, skew, offset, timestamping, and clock drift estimation. We then describe the concept of logical clocks and discuss their implementation in distributed systems, highlighting their significance and various approaches. The paper then examines four prominent clock synchronization algorithms: Lamport's Algorithm, the Ricart-Agrawala Algorithm, the Vector Clocks Algorithm, and Cristian's Algorithm. Special attention is given to the analysis of message complexity, providing insights into the efficiency of each algorithm. Finally, we compare the message complexities of the discussed algorithms.
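
Of the surveyed algorithms, Lamport's logical clock is the simplest to make concrete; the sketch below shows the standard local-event, send, and receive update rules (a generic illustration, not code taken from the survey).

```python
# Standard Lamport logical clock: increment on local events and sends;
# on receive, take the max of the local and received timestamps, plus one.
class LamportClock:
    def __init__(self) -> None:
        self.time = 0

    def local_event(self) -> int:
        self.time += 1
        return self.time

    def send(self) -> int:
        self.time += 1
        return self.time            # timestamp attached to the outgoing message

    def receive(self, msg_time: int) -> int:
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes exchanging one message: the receive is ordered after the send.
p, q = LamportClock(), LamportClock()
t_send = p.send()                   # p's clock: 1
t_recv = q.receive(t_send)          # q's clock: max(0, 1) + 1 = 2
assert t_recv > t_send
```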

On the Emergence of Ergodic Dynamics in Unique Games

from arXiv: Computational Complexity

Authors: Tuhin Sahai, Abeynaya Gnanasekaran

The Unique Games Conjecture (UGC) constitutes a highly dynamic subarea within computational complexity theory, intricately linked to the outstanding P versus NP problem. Despite multiple insightful results in the past few years, a proof for the conjecture remains elusive. In this work, we construct a novel dynamical systems-based approach for studying unique games and, more generally, the field of computational complexity. We propose a family of dynamical systems whose equilibria correspond to solutions of unique games and prove that unsatisfiable instances lead to ergodic dynamics. Moreover, as the instance hardness increases, the weight of the invariant measure in the vicinity of the optimal assignments scales polynomially, sub-exponentially, or exponentially depending on the value gap. We numerically reproduce a previously hypothesized hardness plot associated with the UGC. Our results indicate that the UGC is likely true, subject to our proposed conjectures that link dynamical systems theory with computational complexity.

L is different from NP

from arXiv: Computational Complexity

Authors: J. Andres Montoya

We prove that the class LOGSPACE (L, for short) is different from the class NP.

Filling holes in LoD2 building models

from arXiv: Computational Geometry

Authors: Weixiao Gao, Ravi Peters, Hugo Ledoux, Jantien Stoter

This paper presents a new algorithm for filling holes in Level of Detail 2 (LoD2) building mesh models, addressing the challenges posed by geometric inaccuracies and topological errors. Unlike traditional methods that often alter the original geometric structure or impose stringent input requirements, our approach preserves the integrity of the original model while effectively managing a range of topological errors. The algorithm operates in three distinct phases: (1) pre-processing, which addresses topological errors and identifies pseudo-holes; (2) detecting and extracting complete border rings of holes; and (3) remeshing, aimed at reconstructing the complete geometric surface. Our method demonstrates superior performance compared to related work in filling holes in building mesh models, achieving both uniform local geometry around the holes and structural completeness. Comparative experiments with established methods demonstrate our algorithm's effectiveness in delivering more complete and geometrically consistent hole-filling results, albeit with a slight trade-off in efficiency. The paper also identifies challenges in handling certain complex scenarios and outlines future directions for research, including the pursuit of a comprehensive repair goal for LoD2 models to achieve watertight 2-manifold models with correctly oriented normals. Our source code is available at github.com/tudelft3d/Automatic-Repair-of-LoD2-Building-Models.git

CWF: Consolidating Weak Features in High-quality Mesh Simplification

from arXiv: Computational Geometry

Authors: Rui Xu, Longdu Liu, Ningna Wang, Shuangmin Chen, Shiqing Xin, Xiaohu Guo, Zichun Zhong, Taku Komura, Wenping Wang, Changhe Tu

In mesh simplification, common requirements like accuracy, triangle quality, and feature alignment are often considered as a trade-off. Existing algorithms concentrate on just one or a few specific aspects of these requirements. For example, the well-known Quadric Error Metrics (QEM) approach prioritizes accuracy and can preserve strong feature lines/points as well but falls short in ensuring high triangle quality and may degrade weak features that are not as distinctive as strong ones. In this paper, we propose a smooth functional that simultaneously considers all of these requirements. The functional comprises a normal anisotropy term and a Centroidal Voronoi Tessellation (CVT) energy term, with the variables being a set of movable points lying on the surface. The former inherits the spirit of QEM but operates in a continuous setting, while the latter encourages even point distribution, allowing various surface metrics. We further introduce a decaying weight to automatically balance the two terms. We selected 100 CAD models from the ABC dataset, along with 21 organic models, to compare the existing mesh simplification algorithms with ours. Experimental results reveal an important observation: the introduction of a decaying weight effectively reduces the conflict between the two terms and enables the alignment of weak features. This distinctive feature sets our approach apart from most existing mesh simplification methods and demonstrates significant potential in shape understanding.

Minimum Consistent Subset in Trees and Interval Graphs

from arXiv: Data Structures and Algorithms

Authors: Aritra Banik, Sayani Das, Anil Maheshwari, Bubai Manna, Subhas C Nandy, Krishna Priya K M, Bodhayan Roy, Sasanka Roy, Abhishek Sahu

In the Minimum Consistent Subset (MCS) problem, we are presented with a connected simple undirected graph $G=(V,E)$, consisting of a vertex set $V$ of size $n$ and an edge set $E$. Each vertex in $V$ is assigned a color from the set $\{1,2,\ldots, c\}$. The objective is to determine a subset $V' \subseteq V$ with minimum possible cardinality, such that for every vertex $v \in V$, at least one of its nearest neighbors in $V'$ (measured in terms of the hop distance) shares the same color as $v$. The decision problem, indicating whether there exists a subset $V'$ of cardinality at most $l$ for some positive integer $l$, is known to be NP-complete even for planar graphs. In this paper, we establish that the MCS problem for trees, when the number of colors $c$ is considered an input parameter, is NP-complete. We propose a fixed-parameter tractable (FPT) algorithm for MCS on trees running in $O(2^{6c}n^6)$ time, significantly improving the currently best-known algorithm whose running time is $O(2^{4c}n^{2c+3})$. In an effort to comprehensively understand the computational complexity of the MCS problem across different graph classes, we extend our investigation to interval graphs. We show that it remains NP-complete for interval graphs, thus enriching graph classes where MCS remains intractable.
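
To make the consistency condition concrete, the sketch below checks whether a candidate subset is consistent for a given vertex-colored graph: a multi-source BFS computes hop distances to the subset, and the colors realizable by nearest subset members are then propagated in order of increasing distance. This is only a verifier for the definition, not the paper's FPT algorithm; all identifiers are ours.

```python
# Verify the Minimum Consistent Subset condition: every vertex must have some
# nearest member of `subset` (in hop distance) that shares its color.
from collections import deque

def is_consistent(adj, color, subset) -> bool:
    INF = float("inf")
    dist = {u: INF for u in adj}
    for s in subset:
        dist[s] = 0
    dq = deque(subset)
    while dq:                                   # multi-source BFS from the subset
        u = dq.popleft()
        for v in adj[u]:
            if dist[v] == INF:
                dist[v] = dist[u] + 1
                dq.append(v)
    if any(d == INF for d in dist.values()):    # some vertex cannot reach the subset
        return False
    near_colors = {u: set() for u in adj}       # colors of nearest subset members
    for s in subset:
        near_colors[s].add(color[s])
    for u in sorted(adj, key=lambda x: dist[x]):
        for v in adj[u]:
            if dist[v] == dist[u] - 1:          # v lies on a shortest path to the subset
                near_colors[u] |= near_colors[v]
    return all(color[u] in near_colors[u] for u in adj)

# Path a - b - c colored red, blue, red: {a, c} is not consistent because
# b's nearest subset members (a and c) are both red.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
col = {"a": "red", "b": "blue", "c": "red"}
print(is_consistent(adj, col, {"a", "c"}))       # False
print(is_consistent(adj, col, {"a", "b", "c"}))  # True
```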

A Note on Approximating Weighted Nash Social Welfare with Additive Valuations

from arXiv: Data Structures and Algorithms

Authors: Yuda Feng, Shi Li

We give the first $O(1)$-approximation for the weighted Nash Social Welfare problem with additive valuations. The approximation ratio we obtain is $e^{1/e} + \epsilon \approx 1.445 + \epsilon$, which matches the best known approximation ratio for the unweighted case \cite{BKV18}. Both our algorithm and analysis are simple. We solve a natural configuration LP for the problem, and obtain the allocation of items to agents using a randomized version of the Shmoys-Tardos rounding algorithm developed for unrelated machine scheduling problems. In the analysis, we show that the approximation ratio of the algorithm is at most the worst gap between the Nash social welfare of the optimum allocation and that of an EF1 allocation, for an unweighted Nash Social Welfare instance with identical additive valuations. This was shown to be at most $e^{1/e} \approx 1.445$ by Barman et al., leading to our approximation ratio.
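
For reference, the weighted Nash Social Welfare objective being approximated is usually written as a weighted geometric mean of the agents' valuations; a standard formulation (our notation, not necessarily the paper's) is

\[ \mathrm{NSW}_w(A) \;=\; \Bigl(\prod_{i=1}^{n} v_i(A_i)^{\,w_i}\Bigr)^{1/\sum_i w_i}, \qquad v_i(A_i) \;=\; \sum_{j \in A_i} v_{ij}, \]

where $A=(A_1,\dots,A_n)$ is an allocation of the items into disjoint bundles and $w_i>0$ are the agent weights; the unweighted case sets all $w_i$ equal.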

Online Disjoint Set Covers: Randomization is not Necessary

from arXiv: Data Structures and Algorithms

Authors: Marcin Bienkowski, Jarosław Byrka, Łukasz Jeż

In the online disjoint set covers problem, the edges of a hypergraph are revealed online, and the goal is to partition them into a maximum number of disjoint set covers. That is, n nodes of a hypergraph are given at the beginning, and then a sequence of hyperedges (subsets of [n]) is presented to an algorithm. For each hyperedge, an online algorithm must assign a color (an integer). Once an input terminates, the gain of the algorithm is the number of colors that correspond to valid set covers (i.e., the union of hyperedges that have that color contains all n nodes). We present a deterministic online algorithm that is O(log^2 n)-competitive, exponentially improving on the previous bound of O(n) and matching the performance of the best randomized algorithm by Emek et al. [ESA 2019]. For color selection, our algorithm uses a novel potential function, which can be seen as an online counterpart of the derandomization method of conditional probabilities and pessimistic estimators. There are only a few cases where derandomization has been successfully used in the field of online algorithms. In contrast to previous approaches, our result extends this tool to tackle the following new challenges: (i) the potential function derandomizes not only the Chernoff bound, but also the coupon collector's problem, (ii) the value of OPT of the maximization problem is not bounded a priori, and (iii) we do not produce a fractional solution first, but work directly on the input.

Renting Servers for Multi-Parameter Jobs in the Cloud

from arXiv: Data Structures and Algorithms

Authors: Yaqiao Li, Mahtab Masoori, Lata Narayanan, Denis Pankratov

We study the Renting Servers in the Cloud problem (RSiC) in multiple dimensions. In this problem, a sequence of multi-parameter jobs must be scheduled on servers that can be rented on-demand. Each job has an arrival time, a finishing time, and a multi-dimensional size vector that specifies its resource demands. Each server has a multi-dimensional capacity, and jobs can be scheduled on a server as long as, in each dimension, the sum of sizes of jobs does not exceed the capacity of the server in that dimension. The goal is to minimize the total rental time of servers needed to process the job sequence. AF algorithms do not rent new servers to accommodate a job unless they have to. We introduce a sub-family of AF algorithms called monotone AF algorithms. We show this family has a tight competitive ratio of $\Theta(d \mu)$, where $d$ is the dimension of the problem and $\mu$ is the ratio between the maximum and minimum duration of jobs in the input sequence. We also show that upper bounds for the RSiC problem obey the direct-sum property with respect to dimension $d$; that is, we show how to transform $1$-dimensional algorithms for RSiC to work in the $d$-dimensional setting with the competitive ratio scaling by a factor of $d$. As a corollary, we obtain an $O(d\sqrt{\log \mu})$ upper bound for $d$-dimensional clairvoyant RSiC. We also establish a lower bound of $\widetilde{\Omega}(d \mu)$ for both deterministic and randomized algorithms for $d$-dimensional non-clairvoyant RSiC, under the assumption that $\mu \le \log d - 2$. Lastly, we propose a natural greedy algorithm called Greedy. Greedy is a clairvoyant algorithm that belongs to the monotone AF family and achieves a competitive ratio of $\Theta(d \mu)$. Our experimental results indicate that Greedy performs better than or matches all other existing algorithms, for almost all the settings of arrival rates and values of $\mu$ and $d$ that we implemented.

Seed Selection in the Heterogeneous Moran Process

from arXiv: Data Structures and Algorithms

Authors: Petros Petsinis, Andreas Pavlogiannis, Josef Tkadlec, Panagiotis Karras

The Moran process is a classic stochastic process that models the rise and takeover of novel traits in network-structured populations. In biological terms, a set of mutants, each with fitness $m\in(0,\infty)$, invades a population of residents with fitness $1$. Each agent reproduces at a rate proportional to its fitness, and each offspring replaces a random network neighbor. The process ends when the mutants either fixate (take over the whole population) or go extinct. The fixation probability measures the success of the invasion. To account for environmental heterogeneity, we study a generalization of the standard process, called the Heterogeneous Moran process. Here, the fitness of each agent is determined both by its type (resident/mutant) and the node it occupies. We study the natural optimization problem of seed selection: given a budget $k$, which $k$ agents should initiate the mutant invasion to maximize the fixation probability? We show that the problem is strongly inapproximable: it is $\mathbf{NP}$-hard to distinguish between maximum fixation probability 0 and 1. We then focus on mutant-biased networks, where each node exhibits at least as large a mutant fitness as resident fitness. We show that the problem remains $\mathbf{NP}$-hard, but the fixation probability becomes submodular, and thus the optimization problem admits a greedy $(1-1/e)$-approximation. An experimental evaluation of the greedy algorithm along with various heuristics on real-world data sets corroborates our results.
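
For intuition, the sketch below estimates the fixation probability of the standard (homogeneous) Moran process on a network by direct simulation from a given seed set; in the heterogeneous variant studied in the paper, the single mutant fitness m would be replaced by node-dependent fitness values. Illustrative code only, not the paper's.

```python
# Monte Carlo estimate of fixation probability in the standard Moran process:
# each step, an agent reproduces with probability proportional to its fitness
# and its offspring replaces a uniformly random network neighbor.
import random

def fixation_probability(adj, seeds, m=2.0, trials=1000, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    fixed_runs = 0
    for _ in range(trials):
        mutants = set(seeds)
        while 0 < len(mutants) < len(nodes):
            weights = [m if u in mutants else 1.0 for u in nodes]
            u = rng.choices(nodes, weights=weights, k=1)[0]   # reproducing agent
            v = rng.choice(adj[u])                            # replaced neighbor
            if u in mutants:
                mutants.add(v)
            else:
                mutants.discard(v)
        fixed_runs += len(mutants) == len(nodes)
    return fixed_runs / trials

# Example: seed the invasion from one node of a 4-cycle with fitness m = 2.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(fixation_probability(cycle, seeds={0}, m=2.0, trials=2000))
```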

Hardness and Tight Approximations of Demand Strip Packing

from arXiv: Data Structures and Algorithms

Authors: Klaus Jansen, Malin Rau, Malte Tutas

We settle the pseudo-polynomial complexity of the Demand Strip Packing (DSP) problem: Given a strip of fixed width and a set of items with widths and heights, the items must be placed inside the strip with the objective of minimizing the peak height. This problem has gained significant scientific interest due to its relevance in smart grids [Deppert et al. APPROX'21, Gálvez et al. APPROX'21]. Smart grids are a modern form of electrical grid that provide opportunities for optimization. They are forecast to impact the future of energy provision significantly. Algorithms running in pseudo-polynomial time lend themselves to these applications as the considered time intervals, such as days, are small. Moreover, such algorithms can provide superior approximation guarantees over those running in polynomial time. Consequently, they evoke scientific interest in related problems. We prove that Demand Strip Packing is strongly NP-hard for approximation ratios below $5/4$. Through this proof, we provide novel insights into the relation of packing and scheduling problems. Using these insights, we show a series of frameworks that solve both Demand Strip Packing and Parallel Task Scheduling optimally when increasing the strip's width or the number of machines. Such alterations to problems are known as resource augmentation. Applications are found when penalty costs are prohibitively large. Finally, we provide a pseudo-polynomial time approximation algorithm for DSP with an approximation ratio of $(5/4+\varepsilon)$, which is nearly optimal assuming $P\neq NP$. The construction of this algorithm provides several insights into the structure of DSP solutions and uses novel techniques to restructure optimal solutions.

Detecting Disjoint Shortest Paths in Linear Time and More

from arXiv: Data Structures and Algorithms

Authors: Shyan Akmal, Virginia Vassilevska Williams, Nicole Wein

In the $k$-Disjoint Shortest Paths ($k$-DSP) problem, we are given a weighted graph $G$ on $n$ nodes and $m$ edges with specified source vertices $s_1, \dots, s_k$, and target vertices $t_1, \dots, t_k$, and are tasked with determining if $G$ contains vertex-disjoint $(s_i,t_i)$-shortest paths. For any constant $k$, it is known that $k$-DSP can be solved in polynomial time over undirected graphs and directed acyclic graphs (DAGs). However, the exact time complexity of $k$-DSP remains mysterious, with large gaps between the fastest known algorithms and best conditional lower bounds. In this paper, we obtain faster algorithms for important cases of $k$-DSP, and present better conditional lower bounds for $k$-DSP and its variants. Previous work solved 2-DSP over weighted undirected graphs in $O(n^7)$ time, and weighted DAGs in $O(mn)$ time. For the main result of this paper, we present linear time algorithms for solving 2-DSP on weighted undirected graphs and DAGs. Our algorithms are algebraic however, and so only solve the detection rather than search version of 2-DSP. For lower bounds, prior work implied that $k$-Clique can be reduced to $2k$-DSP in DAGs and undirected graphs with $O((kn)^2)$ nodes. We improve this reduction, by showing how to reduce from $k$-Clique to $k$-DSP in DAGs and undirected graphs with $O((kn)^2)$ nodes. A variant of $k$-DSP is the $k$-Disjoint Paths ($k$-DP) problem, where the solution paths no longer need to be shortest paths. Previous work reduced from $k$-Clique to $p$-DP in DAGs with $O(kn)$ nodes, for $p= k + k(k-1)/2$. We improve this by showing a reduction from $k$-Clique to $p$-DP, for $p=k + \lfloor k^2/4\rfloor$. Under the $k$-Clique Hypothesis from fine-grained complexity, our results establish better conditional lower bounds for $k$-DSP for all $k\ge 4$, and better conditional lower bounds for $p$-DP for all $p\le 4031$.

Wednesday, April 24

Is Persistence an Anachronism?

from Computational Complexity

Guest post by Martin Bullinger

Very recently, Vijay Vazirani's paper A Theory of Alternating Paths and Blossoms, from the Perspective of Minimum Length got accepted to Mathematics of Operations Research. For the first time, it gives a complete and correct proof that the Micali-Vazirani algorithm finds a maximum cardinality matching in time \(\mathcal O\left(m\sqrt{n}\right)\). I would like to give an account of the extraordinary story of this proof and how Vazirani's contribution inspires persistence.

My fascination with matching already started during my undergrad, when I gave a talk on Edmonds' blossom algorithm. It was at this time that I first heard about the Micali-Vazirani (MV) algorithm. Naturally, I was quite excited when I got to know Vazirani personally years later. When I talked to him about the MV algorithm I was, however, shocked: Vazirani admitted that even to that day, there did not exist a complete proof of its correctness. How can a theoretical result be accepted to FOCS without a proof?

Now, 44 years after publication of the algorithm, a proof exists and has been peer-reviewed in great depth. But why did it take so long? Apparently, some results just need time. Sometimes a lot of time. Think of Fermat's Last Theorem, whose proof took 358 years! So what is the story behind the MV algorithm? It can without a doubt be seen as a lifework. Together with his fellow PhD student Silvio Micali, Vazirani discovered it in the first year of his PhD in 1979-80. Without even attempting a proof, it was published in the proceedings of FOCS 1980. The first proof attempt by Vazirani was published in 1994 in Combinatorica. Unfortunately, this proof turned out to be flawed. It took another 30 years until his current paper.

What kept Vazirani going for so long? In the acknowledgements of his paper, he thanks matching theory for its gloriously elegant structure. Vazirani was driven by his passion for the subject matter---but passion by itself can only go so far. Even more important was his belief in the correctness of the algorithm and the theory, which he had broadly outlined in his 1994 paper. Similar to Andrew Wiles' story, his perseverance led him to the idea which clinched the proof. In Vazirani's case, this was to use the new algorithmic idea of double depth-first search, which forms the core of the MV algorithm, and now, its proof as well. But Vazirani's result is also the story of an excellent research environment. Finding deep results requires colleagues or friends to discuss ideas with. Vazirani had these in the form of strong postdocs and PhD students. About ten years ago, he had been discussing ideas towards his proof with his former postdoc Ruta Mehta, and in the last three years, he discussed the final touches of his proof with his current PhD student Rohith Gangam. Needless to say, both of them gained a lot from these discussions.

So why should we care about the MV algorithm? I have several reasons. First, without doubt, it is a historic result within combinatorial optimization. Matching is one of the most fundamental objects in discrete mathematics and we keep finding new applications for it, for example, in health, labor markets, and modern-day matching markets on the Internet, basically in every part of our lives. But there is more. Once again, one can look at Vazirani's paper, where he describes the impact of matching on the development of the theory of algorithms: Matching theory has led to foundational concepts like the definition of the complexity classes \(\mathcal P\) (Edmonds, 1965a) and \(\# \mathcal P\) (Valiant, 1979), the primal-dual paradigm (Kuhn, 1955), and polyhedral combinatorics (Edmonds, 1965b). The impact of matching on complexity theory was an earlier topic of this blog.

Despite being around for decades, the MV algorithm is still the fastest known algorithm for computing a maximum cardinality matching. This is surprising, to put it mildly. Similar to many other fundamental problems in combinatorial optimization, I would have expected the discovery of better algorithms in the last four decades. Why has this not happened? Vazirani appears to have gotten to the essence of the problem: a profound theory that interleaves algorithmic invariants and graph-theoretic concepts. It seems to be the kind of theory which would play an active role in the field of combinatorial optimization.

However, Vazirani's result proves something else, possibly even more important: the massive gains to be made by single-minded persistence. In a world in which departments and promotion procedures focus on publishing large numbers of papers, it seems impossible to work on one result for more than a year, let alone for decades. Vazirani managed to achieve both: pursue his passion and get the unfinished job done, but not let it come in the way of the rest of his otherwise-active research career. As a young researcher, this inspires me! In the end, it is through such persistence that science will take big steps forward.

This blog post evolved from many enjoyable discussions, which I had with Vijay Vazirani during a research stay at UC Irvine in spring 2024. I am grateful to Ruta Mehta for feedback on the initial version of this post. Vazirani recently presented his paper in a mini series of two talks available online.

By Lance Fortnow

assistant/associate professor at University of Sheffield (apply by May 20, 2024)

from CCI: jobs

This is an exciting opportunity for a Lecturer or Senior Lecturer in Algorithms at the University of Sheffield. Working in the Department of Computer Science, you will join our Foundations of Computation (FOX) Group. Its research topics range from the theoretical mathematical foundations that underpin computer science to their applications in real world contexts.

Website: https://www.jobs.ac.uk/job/DHG612/lecturer-senior-lecturer-in-algorithms
Email: feldmann.a.e@gmail.com

By shacharlovett

Popperian Falsification

from Ben Recht

Meehl's Philosophical Psychology, Lecture 2, Part 1

Meehl’s second lecture is almost entirely about Karl Popper and his program of refutation. Popper is the scientist’s favorite philosopher, as he conceives the scientist as a heroic truth-seeker, carving out understanding with the sword of falsification. I’ve been guilty of falling for Popper’s flirting! If you don’t think too deeply about it, Popper’s view of science is very romantic. Great thinkers put theories up for scrutiny and do ingenious experiments rendering them false, rapidly revealing an essential theoretical core. But, as we’ll see, not only does no science work this way, but if you probe a scientist for more than a few minutes, they’ll agree they aren’t in the falsification business. Let me return to these social issues after first describing the logical idea behind falsification.

As is always the case with logic, we have to start with some stodgy formal notation. I’m not going to use Meehl’s notation as I’d like to avoid the Emoji & Symbols Viewer when possible. But I think what I’ve chosen should be fine. If P and Q are statements, then I’ll write ~P to denote the negation of P, and P-->Q will denote material implication. Material implication is a logical rule that we colloquially read as “if P, then Q.” You could say that Q is necessary for P, or P is sufficient for Q. If you really like logic, the implication is equivalent to “~P or Q.” Or, more instructively, “~(P and ~Q).” Bah, I don’t like logic! But fortunately we won’t need much more than this to discuss Popper.

The final piece we need is the hypothetical syllogism.
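
In plain text, such a chart looks roughly like this:

    A: P-->Q
    B: P
    C: Q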

The way to read a chart like this is “A is true. B is true. Therefore C is true.” A is some rule, B is a truth statement, and C is what follows from them.

There are four combinations that arise from the “if P, then Q” relationship.
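
Laid out in plain text, the chart of these four combinations looks roughly like this:

                    Antecedent              Consequent
    Affirm:         A: P-->Q                A: P-->Q
                    B: P                    B: Q
                    C: Q                    C: P

    Deny:           A: P-->Q                A: P-->Q
                    B: ~P                   B: ~Q
                    C: ~Q                   C: ~P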

In the second line of each of these syllogisms, one of the two propositions is either affirmed or denied. Only two of the four correspond to valid logical deduction. The top left corner is called modus ponens or affirming the antecedent. If P implies Q and P is true, then Q must be true. This is all well and good.

The lower left corner is called denying the antecedent. It doesn’t get a fancy Latin name as it’s not valid. Certainly, just because P implies Q doesn’t mean that Q can’t happen when P doesn’t happen. When the Patriots win a lot of games, it makes me happy. I’m happy. (Checks the Patriots’ record in 2023.)

Now, the really interesting cell in this table is the upper right. It is not valid. Just because P implies Q does not mean that Q implies P. “If I listen to Taylor Swift, I get a headache. I have a headache. Therefore I listened to Taylor Swift.” Or whatever. This inference is called the “converse fallacy” or “affirming the consequent.” We learn that it’s invalid in high school geometry at the latest.
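
If you’d rather not take your high school geometry teacher’s word for it, a few lines of Python can brute-force the two-variable truth table and check which of the four forms are valid. (A quick sketch, not part of Meehl’s lecture; “implies” below is just material implication.)

    from itertools import product

    def implies(p, q):
        # material implication: "if P, then Q" is the same as "(not P) or Q"
        return (not p) or q

    def valid(premises, conclusion):
        # an argument is valid iff the conclusion is true under every assignment
        # of truth values that makes all of the premises true
        return all(conclusion(p, q)
                   for p, q in product([True, False], repeat=2)
                   if all(prem(p, q) for prem in premises))

    forms = {
        "affirming the antecedent (modus ponens)": ([implies, lambda p, q: p], lambda p, q: q),
        "affirming the consequent":                ([implies, lambda p, q: q], lambda p, q: p),
        "denying the antecedent":                  ([implies, lambda p, q: not p], lambda p, q: not q),
        "denying the consequent":                  ([implies, lambda p, q: not q], lambda p, q: not p),
    }

    for name, (premises, conclusion) in forms.items():
        print(name, "->", "valid" if valid(premises, conclusion) else "invalid")

Only affirming the antecedent and denying the consequent come back valid, which is the whole point of the chart.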

But when you think about it, science and engineering are entirely built upon affirming the consequent. Typical reasoning in science goes something like this: “If my theory is true, then I’ll observe this outcome of my experiment. I observe exactly this outcome. Therefore, my theory is true.” We do this all the time. “If Newton’s Laws are true, bowling balls and feathers drop at the same rate in a vacuum. I see that bowling balls and feathers drop at the same rate in a vacuum. Therefore, I conclude Newton’s Laws are awesome.”

Huh, this can’t be the way things work, can it? Science can’t be built upon irrationality! Popper certainly didn’t think so. But let me get back to Popper in a second.

Even if it’s the first logical fallacy we learn, our entire society is built upon affirming the consequent. We all agree to believe the future will resemble the past, at least somewhat. This belief will always just be your opinion, man. Postmodern Machine Learning Dude Ben learned to stop worrying and embrace Hume’s Problem of Induction. It’s unavoidable. The sun will come up tomorrow. We build technology around prediction, assuming the future is like the past. Our society affirms the consequent. We’re delightfully arational. The Dude abides.

If you don’t want to embrace inductive anarchy like me, Meehl offers a probabilistic fix, one I incessantly write about on this blog:

“All empirical inference is only probable. That's why it differs from inference in mathematics, set theory, pure logic. That's why no matter how much evidence you have about facts, the theory can never be said to be proved in the strong sense of Euclid. It's only proved in the sense of rendered more likely, rendered more credible, supported, whatever you want to say.”

For this reason, probability will necessarily play a central role in Meehl’s course. 

OK, but let’s get back to Popper. Popper hated inductive reasoning. He knew it was logically invalid. And he thought that we were just confused by Hume’s ramblings and could do science with purely logical deduction. Popper’s scientific logic is based on the fourth hypothetical syllogism: “If P, then Q. Not Q. Therefore Not P.” This is denying the consequent, also known as modus tollens. It gets a fancy Latin name as it’s logically valid. It forms the logical basis of our proofs by contradiction. And Popper tried to make it the basis of scientific inference.

Popper figured that he could solve the problem of induction by denying that induction exists. Bold! Or, at least, that you could do science without induction. “You don’t support theories with facts.” For Popper, what is essential about science is its falsifiability. A scientist honorably tells their colleagues what sorts of observations would undermine their theory. And then the other scientists do these experiments. The theories that survive these attempts at refutation are left standing.

I get why this is appealing, of course. We like to teach the scientific method as being about generating alternative hypotheses and finding clever experiments to show these are false. After all, didn’t Galileo actually do that Leaning Tower of Pisa experiment to prove Aristotle wrong? Though Popper and the Logical Positivists were allergic to history, they were clearly inspired by certain historical anecdotes. But tomorrow, I’ll dive into some alternative anecdotes showing how science has never been about falsification. How scientists cling to theories despite significant evidence against them. And how Popper and others tried to patch this up.

By Ben Recht

Positive Moments Forever: Undecidable and Decidable Cases

from arXiv: Computational Complexity

Authors: Gemma De les Coves, Joshua Graf, Andreas Klingler, Tim Netzer

Is there an algorithm to determine attributes such as positivity or non-zeroness of linear recurrence sequences? This long-standing question is known as Skolem's problem. In this paper, we study the complexity of an equivalent problem, namely the (generalized) moment membership problem for matrices. We show that this problem is decidable for orthogonal, unitary and real eigenvalue matrices, and undecidable for matrices over certain commutative and non-commutative polynomial rings. Our results imply that the positivity problem for simple unitary linear recurrence sequences is decidable, and is undecidable for linear recurrence sequences over the ring of commutative polynomials. As a byproduct, we prove a free version of Polya's theorem.
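
For readers who have not met these objects: a linear recurrence sequence is determined by a list of recurrence coefficients and initial values, positivity asks whether every term is positive, and Skolem's problem asks whether some term is zero. A minimal Python sketch of the objects involved (purely illustrative, not code or notation from the paper):

    def lrs(coeffs, init, n):
        # first n terms of u_k = coeffs[0]*u_{k-1} + ... + coeffs[d-1]*u_{k-d}
        u = list(init)
        while len(u) < n:
            u.append(sum(c * u[-1 - i] for i, c in enumerate(coeffs)))
        return u[:n]

    # example: u_k = 2*u_{k-1} - 3*u_{k-2} with u_0 = u_1 = 1
    terms = lrs([2, -3], [1, 1], 12)
    print(terms)

    # checking a finite prefix decides neither positivity nor zeroness in general,
    # which is what makes these decision problems subtle
    print(all(t > 0 for t in terms), any(t == 0 for t in terms))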

Transformers Can Represent $n$-gram Language Models

from arXiv: Computational Complexity

Authors: Anej Svete, Ryan Cotterell

Plenty of existing work has analyzed the abilities of the transformer architecture by describing its representational capacity with formal models of computation. However, the focus so far has been on analyzing the architecture in terms of language \emph{acceptance}. We contend that this is an ill-suited problem in the study of \emph{language models} (LMs), which are definitionally \emph{probability distributions} over strings. In this paper, we focus on the relationship between transformer LMs and $n$-gram LMs, a simple and historically relevant class of language models. We show that transformer LMs using the hard or sparse attention mechanisms can exactly represent any $n$-gram LM, giving us a concrete lower bound on their probabilistic representational capacity. This provides a first step towards understanding the mechanisms that transformer LMs can use to represent probability distributions over strings.
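
As a reminder of the classical side of this comparison: an $n$-gram LM assigns each next symbol a probability that depends only on the previous $n-1$ symbols, and the probability of a string is the product of these conditionals. A toy bigram ($n = 2$) estimator in Python, just to fix ideas (illustrative only, unrelated to the paper's constructions):

    from collections import Counter, defaultdict

    def train_bigram(corpus):
        # maximum-likelihood bigram model: p(next | previous) from raw counts
        counts = defaultdict(Counter)
        for sentence in corpus:
            tokens = ["<bos>"] + sentence.split() + ["<eos>"]
            for prev, nxt in zip(tokens, tokens[1:]):
                counts[prev][nxt] += 1
        return {prev: {w: c / sum(ctr.values()) for w, c in ctr.items()}
                for prev, ctr in counts.items()}

    def string_prob(model, sentence):
        # probability of a whole string: product of the conditional probabilities
        tokens = ["<bos>"] + sentence.split() + ["<eos>"]
        p = 1.0
        for prev, nxt in zip(tokens, tokens[1:]):
            p *= model.get(prev, {}).get(nxt, 0.0)
        return p

    model = train_bigram(["a b", "a b b", "b a"])
    print(string_prob(model, "a b"))  # 2/3 * 2/3 * 1/2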

Pseudorandom Permutations from Random Reversible Circuits

from arXiv: Computational Complexity

Authors: William He, Ryan O'Donnell

We study pseudorandomness properties of permutations on $\{0,1\}^n$ computed by random circuits made from reversible $3$-bit gates (permutations on $\{0,1\}^3$). Our main result is that a random circuit of depth $n \cdot \tilde{O}(k^2)$, with each layer consisting of $\approx n/3$ random gates in a fixed nearest-neighbor architecture, yields almost $k$-wise independent permutations. The main technical component is showing that the Markov chain on $k$-tuples of $n$-bit strings induced by a single random $3$-bit nearest-neighbor gate has spectral gap at least $1/n \cdot \tilde{O}(k)$. This improves on the original work of Gowers [Gowers96], who showed a gap of $1/\mathrm{poly}(n,k)$ for one random gate (with non-neighboring inputs); and, on subsequent work [HMMR05,BH08] improving the gap to $\Omega(1/n^2k)$ in the same setting. From the perspective of cryptography, our result can be seen as a particularly simple/practical block cipher construction that gives provable statistical security against attackers with access to $k$~input-output pairs within few rounds. We also show that the Luby--Rackoff construction of pseudorandom permutations from pseudorandom functions can be implemented with reversible circuits. From this, we make progress on the complexity of the Minimum Reversible Circuit Size Problem (MRCSP), showing that block ciphers of fixed polynomial size are computationally secure against arbitrary polynomial-time adversaries, assuming the existence of one-way functions (OWFs).
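
To make the circuit model concrete, here is a rough Python sketch in the spirit of the construction described above: each layer applies independent, uniformly random permutations of $\{0,1\}^3$ to disjoint triples of adjacent wires, with layers staggered so information can spread across the register. (The paper's exact nearest-neighbor architecture and parameters may differ; this is only an illustration.)

    import random

    def random_3bit_gate(rng):
        # a uniformly random permutation of the 8 values a 3-bit block can take
        table = list(range(8))
        rng.shuffle(table)
        return table

    def random_circuit(n, depth, rng):
        # brickwork-style layers: layer t offsets its gates by t mod 3 wires
        circuit = []
        for t in range(depth):
            offset = t % 3
            circuit.append([(i, random_3bit_gate(rng)) for i in range(offset, n - 2, 3)])
        return circuit

    def apply_circuit(circuit, bits):
        # each gate permutes the 3-bit value carried on wires i, i+1, i+2
        bits = list(bits)
        for layer in circuit:
            for i, table in layer:
                v = (bits[i] << 2) | (bits[i + 1] << 1) | bits[i + 2]
                w = table[v]
                bits[i], bits[i + 1], bits[i + 2] = (w >> 2) & 1, (w >> 1) & 1, w & 1
        return bits

    rng = random.Random(0)
    circuit = random_circuit(n=12, depth=24, rng=rng)
    print(apply_circuit(circuit, [0] * 12))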

Complexity of Planar Graph Orientation Consistency, Promise-Inference, and Uniqueness, with Applications to Minesweeper Variants

from arXiv: Computational Complexity

Authors: MIT Hardness Group, Della Hendrickson, Andy Tockman

We study three problems related to the computational complexity of the popular game Minesweeper. The first is consistency: given a set of clues, is there any arrangement of mines that satisfies it? This problem has been known to be NP-complete since 2000, but our framework proves it as a side effect. The second is inference: given a set of clues, is there any cell that the player can prove is safe? The coNP-completeness of this problem has been in the literature since 2011, but we discovered a flaw that we believe is present in all published results, and we provide a fixed proof. Finally, the third is solvability: given the full state of a Minesweeper game, can the player win the game by safely clicking all non-mine cells? This problem has not yet been studied, and we prove that it is coNP-complete.
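
To make the first of these problems concrete, here is a tiny brute-force consistency checker (an illustration only; its exhaustive loop over mine placements is exponential, in line with the problem's NP-completeness):

    from itertools import product

    def neighbors(cell):
        r, c = cell
        return {(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)}

    def consistent(clues, unknown):
        # clues: maps a clue cell to its number; unknown: cells that may hold mines
        # (clue cells themselves are taken to be mine-free)
        unknown = list(unknown)
        for bits in product((0, 1), repeat=len(unknown)):
            mines = {cell for cell, b in zip(unknown, bits) if b}
            if all(len(neighbors(cell) & mines) == k for cell, k in clues.items()):
                return True
        return False

    # a "2" clue with three free neighbors is satisfiable; a "4" clue is not
    print(consistent({(0, 0): 2}, {(0, 1), (1, 0), (1, 1)}))  # True
    print(consistent({(0, 0): 4}, {(0, 1), (1, 0), (1, 1)}))  # False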

PHLP: Sole Persistent Homology for Link Prediction -- Interpretable Feature Extraction

from arXiv: Computational Geometry

Authors: Junwon You, Eunwoo Heo, Jae-Hun Jung

Link prediction (LP), inferring the connectivity between nodes, is a significant research area in graph data, where a link represents essential information on relationships between nodes. Although graph neural network (GNN)-based models have achieved high performance in LP, understanding why they perform well is challenging because most comprise complex neural networks. We employ persistent homology (PH), a topological data analysis method that helps analyze the topological information of graphs, to explain the reasons for the high performance. We propose a novel method that employs PH for LP (PHLP) focusing on how the presence or absence of target links influences the overall topology. The PHLP utilizes the angle hop subgraph and new node labeling called degree double radius node labeling (Degree DRNL), distinguishing the information of graphs better than DRNL. Using only a classifier, PHLP performs similarly to state-of-the-art (SOTA) models on most benchmark datasets. Incorporating the outputs calculated using PHLP into the existing GNN-based SOTA models improves performance across all benchmark datasets. To the best of our knowledge, PHLP is the first method of applying PH to LP without GNNs. The proposed approach, employing PH while not relying on neural networks, enables the identification of crucial factors for improving performance.

Neural Slicer for Multi-Axis 3D Printing

from arXiv: Computational Geometry

Authors: Tao Liu, Tianyu Zhang, Yongxue Chen, Yuming Huang, Charlie C. L. Wang

We introduce a novel neural network-based computational pipeline as a representation-agnostic slicer for multi-axis 3D printing. This advanced slicer can work on models with diverse representations and intricate topology. The approach involves employing neural networks to establish a deformation mapping, defining a scalar field in the space surrounding an input model. Isosurfaces are subsequently extracted from this field to generate curved layers for 3D printing. Creating a differentiable pipeline enables us to optimize the mapping through loss functions directly defined on the field gradients as the local printing directions. New loss functions have been introduced to meet the manufacturing objectives of support-free and strength reinforcement. Our new computation pipeline relies less on the initial values of the field and can generate slicing results with significantly improved performance.

The Geometry of the Set of Equivalent Linear Neural Networks

from arXiv: Computational Geometry

Authors: Jonathan Richard Shewchuk, Sagnik Bhattacharya

We characterize the geometry and topology of the set of all weight vectors for which a linear neural network computes the same linear transformation $W$. This set of weight vectors is called the fiber of $W$ (under the matrix multiplication map), and it is embedded in the Euclidean weight space of all possible weight vectors. The fiber is an algebraic variety that is not necessarily a manifold. We describe a natural way to stratify the fiber--that is, to partition the algebraic variety into a finite set of manifolds of varying dimensions called strata. We call this set of strata the rank stratification. We derive the dimensions of these strata and the relationships by which they adjoin each other. Although the strata are disjoint, their closures are not. Our strata satisfy the frontier condition: if a stratum intersects the closure of another stratum, then the former stratum is a subset of the closure of the latter stratum. Each stratum is a manifold of class $C^\infty$ embedded in weight space, so it has a well-defined tangent space and normal space at every point (weight vector). We show how to determine the subspaces tangent to and normal to a specified stratum at a specified point on the stratum, and we construct elegant bases for those subspaces. To help achieve these goals, we first derive what we call a Fundamental Theorem of Linear Neural Networks, analogous to what Strang calls the Fundamental Theorem of Linear Algebra. We show how to decompose each layer of a linear neural network into a set of subspaces that show how information flows through the neural network. Each stratum of the fiber represents a different pattern by which information flows (or fails to flow) through the neural network. The topology of a stratum depends solely on this decomposition. So does its geometry, up to a linear transformation in weight space.
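
A quick numerical illustration of the central object (a sketch under the abstract's definitions, not code from the paper): for a two-layer linear network, acting on the hidden layer with an invertible matrix moves you to a different weight vector on the same fiber.

    import numpy as np

    rng = np.random.default_rng(0)

    # a two-layer linear network computing the linear map W = W2 @ W1
    W1 = rng.standard_normal((4, 3))
    W2 = rng.standard_normal((2, 4))
    W = W2 @ W1

    # reparametrize the hidden layer by an invertible A: the composed map is
    # unchanged, so (W2 @ inv(A), A @ W1) is another point on the fiber of W
    A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # generically invertible
    W1_alt, W2_alt = A @ W1, W2 @ np.linalg.inv(A)

    print(np.allclose(W2_alt @ W1_alt, W))  # True: same W, different weights

When the ranks of the factors drop, the fiber can contain more than orbits of this kind, which is what the rank stratification in the abstract is describing.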

Parameterized Maximum Node-Disjoint Paths

from arXiv: Data Structures and Algorithms

Authors: Michael Lampis, Manolis Vasilakis

We revisit the Maximum Node-Disjoint Paths problem, the natural optimization version of Node-Disjoint Paths, where we are given a graph $G$, $k$ pairs of vertices $(s_i, t_i)$ and an integer $\ell$, and are asked whether there exist at least $\ell$ vertex-disjoint paths in $G$ whose endpoints are given pairs. We present several results, with an emphasis towards FPT approximation. Our main positive contribution is to show that the problem's intractability can be overcome using approximation and that for several of the structural parameters for which the problem is hard, most notably tree-depth, it admits an efficient FPT approximation scheme, returning a $(1-\varepsilon)$-approximate solution in time $f(td,\varepsilon)n^{O(1)}$. We manage to obtain these results by comprehensively mapping out the structural parameters for which the problem is FPT if $\ell$ is also a parameter, hence showing that understanding $\ell$ as a parameter is key to the problem's approximability. This, in turn, is a problem we are able to solve via a surprisingly simple color-coding algorithm, which relies on identifying an insightful problem-specific variant of the natural parameter, namely the number of vertices used in the solution. A natural question is whether the FPT approximation algorithm we devised for tree-depth can be extended to pathwidth. We resolve this negatively, showing that under the Parameterized Inapproximability Hypothesis no FPT approximation scheme for this parameter is possible, even in time $f(pw,\varepsilon)n^{g(\varepsilon)}$, thus precisely determining the parameter border where the problem transitions from ``hard but approximable'' to ``inapproximable''. Lastly, we strengthen existing lower bounds by replacing W[1]-hardness by XNLP-completeness for parameter pathwidth, and improving the $n^{o(\sqrt{td})}$ ETH-based lower bound for tree-depth to $n^{o(td)}$.
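
As a concrete handle on the problem statement (a sketch, not from the paper): a candidate solution for Maximum Node-Disjoint Paths is just a collection of at least $\ell$ paths, and checking it is easy even though finding it is hard.

    def is_feasible(adj, pairs, paths, ell):
        # adj: vertex -> set of neighbors; pairs: list of (s_i, t_i);
        # paths: candidate solution, each path given as a list of vertices
        if len(paths) < ell:
            return False
        used = set()
        terminal_pairs = {frozenset(p) for p in pairs}
        for path in paths:
            # each path must be simple, with consecutive vertices adjacent in the graph
            if len(set(path)) != len(path):
                return False
            if any(v not in adj[u] for u, v in zip(path, path[1:])):
                return False
            # the path must connect one of the requested terminal pairs
            if frozenset((path[0], path[-1])) not in terminal_pairs:
                return False
            # paths must be pairwise vertex-disjoint
            if used & set(path):
                return False
            used.update(path)
        return True

    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}, 5: {6}, 6: {5}}
    print(is_feasible(adj, [(1, 4), (5, 6)], [[1, 2, 3, 4], [5, 6]], ell=2))  # True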
