Theory of Computing Blog Aggregator

Last night, two papers appeared on the quantum physics arXiv that my coauthors and I have been working on for more than a year, and that I’m pretty happy about.

The first paper, with Guy Rothblum, is Gentle Measurement of Quantum States and Differential Privacy (85 pages, to appear in STOC’2019). This is Guy’s first paper that has anything to do with quantum, and also my first paper that has anything to do with privacy. (What do I care about privacy? I just share everything on this blog…) The paper had its origin in a talk I gave at the Weizmann Institute about “shadow tomography” (a task where you have to measure quantum states very carefully to avoid destroying them). Guy was in the audience, and he got all excited that the techniques sounded just like what they use to ensure privacy in data-mining. I figured it was just some wacky coincidence and brushed him off, but he persisted, and it turned out that he was 100% right: our two fields were often studying the same problems from different angles, and we could prove it. Anyway, here’s the abstract:

In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle.

Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E1,…,Em, shadow tomography asks us to estimate Pr[Ei accepts ρ], for every i∈[m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using O((log m)^2 (log d)^2) copies of ρ, compared to Aaronson’s previous bound of Õ((log m)^4 (log d)). Our protocol has the advantages of being online (that is, the Ei’s are processed one at a time), gentle, and conceptually simple.

Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.

The second paper, with Robin Kothari, UT Austin PhD student William Kretschmer, and Justin Thaler, is Quantum Lower Bounds for Approximate Counting via Laurent Polynomials. Here’s the abstract:

Given only a membership oracle for S, it is well-known that approximate counting takes Θ(√(N/|S|)) quantum queries. But what if a quantum algorithm is also given “QSamples”—i.e., copies of the state |S〉 = Σ_{i∈S} |i〉—or even the ability to apply reflections about |S〉? Our first main result is that, even then, the algorithm needs either Θ(√(N/|S|)) queries or else Θ(min{|S|^{1/3}, √(N/|S|)}) reflections or samples. We also give matching upper bounds.

We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new “explosion argument” that pits the positive- and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method.

Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S⊆[N], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have Tm=Ω(min{|S|,√(N/|S|)}). This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA.

Note that QMA is “stronger” than the queries+QSamples model in that Merlin’s witness can be anything, rather than just the specific state |S〉, but also “weaker” in that Merlin’s witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the “Laurent polynomial method” might be broadly useful in complexity theory.

I need to get ready for our family’s Seder now, but after that, I’m happy to answer any questions about either of these papers in the comments.

Meantime, the biggest breakthrough in quantum complexity theory of the past month isn’t either of the above: it’s the paper by Anand Natarajan and John Wright showing that MIP*, or multi-prover interactive proof systems with entangled provers, contains NEEXP, or nondeterministic doubly-exponential time (!!). I’ll try to blog about this later, but if you can’t wait, check out this excellent post by Thomas Vidick.

by Scott at April 19, 2019 09:16 PM UTC

Authors: Scott Aaronson, Robin Kothari, William Kretschmer, Justin Thaler
Download: PDF
Abstract: This paper proves new limitations on the power of quantum computers to solve approximate counting -- that is, multiplicatively estimating the size of a nonempty set $S\subseteq [N]$. Given only a membership oracle for $S$, it is well known that approximate counting takes $\Theta(\sqrt{N/|S|})$ quantum queries. But what if a quantum algorithm is also given "QSamples"---i.e., copies of the state $|S\rangle = \sum_{i\in S}|i\rangle$---or even the ability to apply reflections about $|S\rangle$? Our first main result is that, even then, the algorithm needs either $\Theta(\sqrt{N/|S|})$ queries or else $\Theta(\min\{|S|^{1/3},\sqrt{N/|S|}\})$ reflections or samples. We also give matching upper bounds. We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new "explosion argument" and a new formulation of the dual polynomials method. Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. We show that, even if Arthur can make $T$ quantum queries to the set $S$, and also receives an $m$-qubit quantum witness from Merlin in support of $S$ being large, we have $Tm=\Omega(\min\{|S|,\sqrt{N/|S|}\})$. This resolves the open problem of giving an oracle separation between SBP and QMA. Note that QMA is "stronger" than the queries+QSamples model in that Merlin's witness can be anything, rather than just the specific state $|S\rangle$, but also "weaker" in that Merlin's witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the "Laurent polynomial method" might be broadly useful in complexity theory.

at April 19, 2019 11:20 PM UTC

Authors: Penghui Yao
Download: PDF
Abstract: This paper initiates the study of a class of entangled games, mono-state games, denoted by $(G,\psi)$, where $G$ is a two-player one-round game and $\psi$ is a bipartite state independent of the game $G$. In the mono-state game $(G,\psi)$, the players are only allowed to share arbitrary copies of $\psi$. This paper provides a doubly exponential upper bound on the number of copies of $\psi$ needed for the players to approximate the value of the game to an arbitrarily small constant precision, for any mono-state binary game $(G,\psi)$ in which $\psi$ is a noisy EPR state: a two-qubit state with completely mixed states as marginals and maximal correlation less than $1$. In particular, this includes $(1-\epsilon)|\Psi\rangle\langle\Psi|+\epsilon\frac{I_2}{2}\otimes\frac{I_2}{2}$, an EPR state with an arbitrary depolarizing noise $\epsilon>0$. The paper develops a series of new techniques for Fourier analysis on matrix spaces, and proves a quantum invariance principle and a hypercontractive inequality for random operators. The structure of the proofs builds on the recent framework for the decidability of the non-interactive simulation of joint distributions, which is completely different from all previous optimization-based or "Tsirelson's problem"-based approaches. This novel approach provides a new angle on the decidability of the complexity class MIP$^*$, a longstanding open problem in quantum complexity theory.

at April 19, 2019 12:00 AM UTC

Authors: Mosab Hassaan, Karam Gouda
Download: PDF
Abstract: Graphs are widely used to model complicated data semantics in many application domains. In this paper, two novel and efficient algorithms, Fast-ON and Fast-P, are proposed for solving the subgraph isomorphism problem. The two algorithms are based on Ullmann's algorithm [Ullmann 1976], apply a vertex-at-a-time and a path-at-a-time matching manner, respectively, and use effective heuristics to cut the search space. Compared to well-known algorithms, Fast-ON and Fast-P achieve up to 1-4 orders of magnitude speed-up on both dense and sparse graph data.

at April 19, 2019 11:24 PM UTC

Authors: Vincent Froese, Malte Renken
Download: PDF
Abstract: We study terrain visibility graphs, a well-known graph class closely related to polygon visibility graphs in computational geometry, for which a precise graph-theoretical characterization is still unknown. Over the last decade, terrain visibility graphs attracted quite some attention in the context of time series analysis with various practical applications in areas such as physics, geography and medical sciences. We make progress in understanding terrain visibility graphs by providing several graph-theoretic results. For example, we show that they can contain arbitrarily large holes but not large antiholes. Moreover, we obtain two algorithmic results which are interesting from a practical perspective. We devise a fast shortest path algorithm on arbitrary induced subgraphs of terrain visibility graphs and a polynomial-time algorithm for Dominating Set on special terrain visibility graphs (called funnel visibility graphs).

at April 19, 2019 11:24 PM UTC

Authors: Onur Çağırıcı, Subir Kumar Ghosh, Petr Hliněný, Bodhayan Roy
Download: PDF
Abstract: We study the problem of colouring the vertices of a polygon, such that every viewer can see a unique colour. The goal is to minimize the number of colours used. This is also known as the conflict-free chromatic guarding problem with vertex guards (which is quite different from point guards considered in other papers). We study the problem in two scenarios of a set of viewers. In the first scenario, we assume that the viewers are all points of the polygon. We solve the related problem of minimizing the number of guards and approximate (up to only an additive error) the number of colours in the special case of funnels. We also give an upper bound of O(log n) colours on weak-visibility polygons which generalizes to all simple polygons. In the second scenario, we assume that the viewers are only the vertices of the polygon. We show a lower bound of 3 colours in the general case of simple polygons and conjecture that this is tight. We also prove that even deciding whether 1 or 2 colours are enough is NP-complete.

at April 19, 2019 11:24 PM UTC

Authors: Benjamin Doerr
Download: PDF
Abstract: In the first runtime analysis of an estimation-of-distribution algorithm (EDA) on the multi-modal jump function class, Hasenöhrl and Sutton (GECCO 2018) proved that the runtime of the compact genetic algorithm with suitable parameter choice on jump functions with high probability is at most polynomial (in the dimension) if the jump size is at most logarithmic (in the dimension), and is at most exponential in the jump size if the jump size is super-logarithmic. The exponential runtime guarantee was achieved with a hypothetical population size that is also exponential in the jump size. Consequently, this setting cannot lead to a better runtime.

In this work, we show that any choice of the hypothetical population size leads to a runtime that, with high probability, is at least exponential in the jump size. This result might be the first non-trivial exponential lower bound for EDAs that holds for arbitrary parameter settings.

at April 19, 2019 11:22 PM UTC

And a possible approach to avoid this obstacle

Valentine Kabanets is a famous complexity theorist from Simon Fraser University. He has been at the forefront of lower bounds for over two decades.

Today we draw attention to this work and raise an idea about trying to unravel what makes circuit lower bounds hard.

He is a common author of two new papers on the Minimum Circuit Size Problem (MCSP), which belongs to {\mathsf{NP}} but is not known to be complete or in {\mathsf{P}}. We posted on MCSP four years ago and mentioned his 1999 paper with Jin-Yi Cai, which gives evidence for MCSP truly being neither complete nor in {\mathsf{P}}. This “intermediate” status and the problem’s simplicity have raised hopes that direct attacks might succeed. The new papers prove direct lower bounds against some restricted circuit/formula models, including constant-depth circuits with mod-{p} gates for {p} prime. But they stop short of mod-{m} for {m} composite and other barrier cases.

He has a nifty research statement on his home page. It shows how derandomization, pseudorandomness, circuit complexity, and crypto combine into his two current projects. In a clickable tab for the third heading, he puts the meta-issue in pithy terms:

Why is proving circuit lower bounds so difficult?

His first answer tab speaks to a connection we have also often emphasized here:

Traditionally, designing efficient algorithms is the subject of the theory of algorithms, while lower bounds are sought in complexity theory. It turns out, however, that there is a deep connection between the two directions: better algorithms (for a certain class of problems) also yield strong lower bounds (for related problems), and vice versa: strong lower bounds translate into more efficient algorithms.

Of course we agree, and we love connections shown in the new papers to problems such as distinguishing a very slightly biased coin from a true one. But we will try to supplement the algorithmic view of circuit lower bounds with a direct look at the underlying logic.

Logical Structure of Lower Bounds

Okay, we all know that circuit lower bounds are hard. For all Kabanets’ success and beautiful work, he, like the rest of the complexity field, is unable to prove what we believe is true. No one can, in the full circuit model, prove anything close to what has been believed for at least half a century: there are explicit Boolean functions that cannot be computed by any linear-size circuit.

We feel that the logical structure of lower bounds statements gives insight into their difficulty. Perhaps this is almost a tautology. Of course the logical structure of any mathematical statement helps us understand its inherent difficulty. But we believe more: That this structure can reveal quite a bit about lower bounds. Let’s take a look at lower bounds and see if this belief holds up.

In particular let’s compare the two main approaches to proving lower bounds: non-uniform and uniform. Our claim is that they have different logical structure, and that this difference explains why there is such a gap between the two. While lower bounds—non-uniform or uniform—are hard, uniform ones are at least possible now. Non-uniform lower bounds are really very difficult.

Here is one example. To prove an explicit size lower bound for Boolean circuits—we’ll be content with just a linear one—we must give a particular family of Boolean functions {f_{n}(x)} (each of {n} inputs) so that:

  1. Given {n} and {x} we can evaluate {f_{n}(x)} in polynomial time;

  2. There is no Boolean circuit of size {\alpha n} that correctly computes {f_{n}(x)} on all {x} of length {n}.

Here {\alpha} is a constant and {n} is assumed to be large enough. The terrific paper of Kazuo Iwama, Oded Lachish, Hiroki Morizumi, and Ran Raz gives explicit Boolean functions whose circuit size, with the usual NOT and binary AND and OR operators, exceeds {5n-o(n)}.

An Approach

Let’s look at the above example more carefully. Suppose that in place of a single Boolean function on {n} inputs we have a list of them:

\displaystyle  f_{n,1}(x),\dots,f_{n,m}(x).

Can we prove the following?

\displaystyle  \exists n_{0}\ \forall n > n_{0} \ \exists f_{n,k} \ \forall C \in \mathsf{SIZE}(\alpha n) \ \neg\mathsf{compute}(C,f_{n,k}).

The first thing to note is the effect of letting the number {\bf m} of functions vary:

{\bullet } If {\bf m = 1}, this just becomes our original explicit circuit lower bound problem.

{\bullet } If {\bf m} is a huge value, however, this becomes the exponential lower bound shown by Claude Shannon—a known quantity.

In our terms, the latter takes {\bf m} equal to {2^{2^{n}}}, so that given {n} our function list is just the list of all Boolean functions. If all we care about is an {\alpha n} lower bound, then the high end of the range can be something like {m = 2^{2\alpha n\log(n)} = n^{2\alpha n}}. So at the high end we have a simple counting argument for the proof but have traded away explicitness. The question will be about the tradeoffs for {\bf m} in-between the extremes.
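For completeness, here is the standard counting sketch behind the high end (our gloss, not a quote from the papers): a circuit with {s} gates over {n} inputs is specified by choosing, for each gate, its type and its two inputs, each drawn from the inputs and the other gates, so the number of such circuits is at most

\displaystyle  \left(c\,(s+n)^{2}\right)^{s} = 2^{O(s\log(s+n))}.

Taking {s = \alpha n} gives at most {n^{O(n)}} computable functions, vanishingly few of the {2^{2^{n}}} Boolean functions on {n} inputs, so some function must require size greater than {\alpha n}.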

An Analogy

The above idea that we can model the lower bound methods by controlling the length of the list of the functions is the key to our approach. Perhaps it may help to note an analogy to other famous hard problems of constructing explicit objects. In particular, let’s look at constructing transcendental numbers. Recall these are real numbers that are not algebraic: they are not roots of polynomials with integer coefficients. They include {\pi = 3.14159\dots} and {e = 2.71828\dots}

{\bullet } The Liouville numbers of Joseph Liouville.

\displaystyle  x = \sum_{k=1}^\infty \frac{a_k}{b^{k!}}.

These are explicit numbers that were proved by him in 1844 to be transcendental. In terms of our model {\bf m=1}.

{\bullet } The great {\pi} and {e} puzzle. This is the observation that of {\pi + e} or {\pi - e}, at least one is a transcendental number. In our terms this gives {\bf m=2}.

{\bullet } The famous theorem of Georg Cantor—read as proving the existence of transcendental numbers since algebraic ones are countable.

Here the high end of the range is as extreme as can be. Cantor’s `list’ of numbers is uncountable—in our model, {\bf m} is the cardinality of the real numbers. Note, the fact that his {\bf m} is huge, really huge, may explain why some at the time were unimpressed by this result. They wanted the ‘list’ to be small, actually they wanted {\bf m=1}. See this for a discussion of the history of these ideas.

{\bullet} The theorem by Wadim Zudilin, in a 2001 paper, that at least one of the numbers {\zeta(5), \zeta(7), \zeta(9), \zeta(11)} must be irrational. It is for “irrational” not “transcendental,” but it exemplifies {\bf m = 4} in a highly nontrivial manner. The technical point that makes this work is interactions among these numbers that cannot be captured by considering any one of them separately.

Joining Functions

The issue is this: Suppose that we have a list of several boolean functions {f_{1}(x),\dots,f_{m}(x)}. Then we can join them together to form one function {g(x,y)} so that

\displaystyle  g(x,1) = f_{1}(x), \cdots, g(x,m) = f_{m}(x).

Clearly, if the function {g(x,y)} is easy, then all of the {f_{y}(x)} are easy. This join trick shows that we can encode several Boolean functions into one function. Note that we can even make {y} have only order {\log(m)} bits where {x} has {n} bits.

Thus we can join any collection of functions to make a “universal” one that is at least as hard as the worst of the single functions. More precisely,

\displaystyle  \mathsf{complexity}(g) \ge \mathsf{complexity}(f_{y}) \text{ for } y=1,\dots,m.

Here {\mathsf{complexity}(h)} is the circuit complexity of the boolean function {h}.
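As a concrete toy illustration (our own hypothetical code, not from any of the papers), the join can be written out directly, with the trailing selector bits choosing which function to evaluate:

```python
import math

def join(fs, n):
    """Join m Boolean functions on n bits into one function g on
    n + ceil(log2(m)) bits; the trailing bits select which f_y runs.
    This is a sketch of the join construction described above."""
    sel_bits = max(1, math.ceil(math.log2(len(fs))))
    def g(bits):
        assert len(bits) == n + sel_bits
        x, y_bits = bits[:n], bits[n:]
        y = int("".join(map(str, y_bits)), 2)
        # Out-of-range selector values default to 0; any convention works.
        return fs[y](x) if y < len(fs) else 0
    return g

# Toy list of three functions on 3 bits: parity, AND, OR.
fs = [lambda x: sum(x) % 2,
      lambda x: int(all(x)),
      lambda x: int(any(x))]
g = join(fs, 3)
```

Hard-wiring the selector bits of any circuit for {g} recovers a circuit of no larger size for each {f_{y}}, which is exactly why the joined function is at least as hard as the hardest function on the list.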

If {m} is bigger than {2^{O(n)}}, that is if {m = 2^{\omega(n)}}, then the joined function has more than linearly many variables. Can we possibly establish nontrivial interactions among so many functions, say {{\bf m} = n^{2n}}?

One can also try to get this effect with fewer or no additional variables by taking the XOR of some subset of functions in the list. If this is done randomly for each input length {n} then one can expect hard functions to show up for many {n}. If this process can then be de-randomized, then this may yield an explicit hard function. We wonder how this idea might meld with Andy Yao’s famous XOR Lemma and conditions to de-randomize it.

Joining Numbers

Ken and I thought about the above simple fact about joins, which seems special to functions. Joining by interleaving the decimal expansions is not an arithmetic operation. However, it appears that there may be a similar result possible for transcendental numbers.

Lemma 1 Suppose that {\alpha} and {\beta} are real numbers. Then

\displaystyle  \alpha + i\beta

is a transcendental complex number if at least one of {\alpha} or {\beta} is transcendental.

Proof: Suppose instead that {\gamma = \alpha + i\beta} is algebraic. Then there must be a polynomial {q(x)} with integer coefficients so that

\displaystyle  q(\gamma) = q(\alpha + i\beta) = 0.

Since {q} has real coefficients, it follows by complex conjugation that

\displaystyle  q(\alpha - i\beta) = 0.

Therefore {\alpha + i\beta} and {\alpha - i\beta} are both algebraic; thus, so are their sum {2\alpha} and their difference {2i\beta}. Hence {\alpha} is algebraic, and since {i} is algebraic, so is {\beta}. This contradicts the assumption that at least one of {\alpha} and {\beta} is transcendental, so {\gamma} must be transcendental. \Box

A question: Can we show that we can do a “join” operation for three or more numbers? That is, given numbers {x_{1},\dots,x_{m}}, can we construct a number {y} that is transcendental if and only if at least one of the {x_{i}} is transcendental?

Open Problems

Is the {\bf m} model useful? Is it possible for it to succeed where a direct explicit argument ({{\bf m} = 1}) does not? Does it need {{\bf m} \gg 2^n} to rise above technical dependence on the {\bf m = 1} case via the join construction?

Maybe the barriers just need 3-Dan martial-arts treatment.

by RJLipton+KWRegan at April 18, 2019 10:16 PM UTC

One of the classic results in concurrency theory is the Hennessy-Milner Theorem. This result states that
  1. two bisimilar states in a labelled transition system satisfy exactly the same formulae in a multi-modal logic now called Hennessy-Milner logic, and 
  2. two states in a labelled transition system that satisfy a mild finiteness constraint (called image finiteness)  and enjoy the same properties expressible in Hennessy-Milner logic are bisimilar.
See, for instance, Section 1.2 in these notes by Colin Stirling for an exposition of that result. A consequence of the Hennessy-Milner Theorem is that whenever two states p and q in a labelled transition system are not bisimilar, one can come up with a formula in Hennessy-Milner logic that p satisfies, but q does not. Moreover, for each state p in a finite, loop-free labelled transition system, it is possible to construct a formula F(p) in Hennessy-Milner logic that completely characterizes p up to bisimilarity. This means that, for each state q, p is bisimilar to q if, and only if, q satisfies F(p). The formula F(p) is called a characteristic formula for p up to bisimilarity. One can obtain a similar result for states in finite labelled transition systems by extending Hennessy-Milner logic with greatest fixed points.
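As a toy illustration (my own sketch, not taken from the paper; the LTS is encoded as a dict mapping (state, action) pairs to successor lists), the classical recipe for F(p) on a finite, loop-free labelled transition system can be coded directly:

```python
def char_formula(lts, p, actions):
    """Characteristic formula F(p) up to bisimilarity for a loop-free LTS:
    a conjunction of <a>F(q) for each transition p -a-> q, together with
    [a](F(q1) | ... | F(qk)) for each action a, where q1..qk are the
    a-successors of p. 'tt'/'ff' denote true/false; loop-freeness
    guarantees the recursion terminates."""
    parts = []
    for a in actions:
        succs = lts.get((p, a), [])
        for q in succs:
            parts.append(f"<{a}>{char_formula(lts, q, actions)}")
        disj = " | ".join(char_formula(lts, q, actions) for q in succs) or "ff"
        parts.append(f"[{a}]({disj})")
    return "(" + " & ".join(parts) + ")" if parts else "tt"

# Example: p has a single a-transition to a deadlocked state q.
lts = {("p", "a"): ["q"]}
print(char_formula(lts, "p", ["a"]))  # (<a>([a](ff)) & [a](([a](ff))))
```

The [a]-conjuncts are what force any state satisfying F(p) to have no extra behaviour, which is the direction of the theorem that needs the image-finiteness style bookkeeping.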

Characteristic formulae have a long history in concurrency theory. However, to the best of my knowledge, the complexity of determining whether a formula is characteristic had not been studied before Antonis Achilleos first addressed the problem in this conference paper. In that paper, Antonis focused on the complexity of the problem of determining whether a formula F is complete, in the sense that, for each formula G, it can derive either G or its negation.

Our recent preprint  The Complexity of Identifying Characteristic Formulae extends the results originally obtained by Antonis to a variety of modal logics, possibly including least and greatest fixed-point operators. In the paper, we show that completeness, characterization, and validity have the same complexity — with some exceptions for which there are, in general, no complete formulae. So, for most modal logics of interest, the problem is coNP-complete or PSPACE-complete, and becomes EXPTIME-complete for modal logics with fixed points. To prove our upper bounds, we present a nondeterministic procedure with an oracle for validity that combines tableaux and a test for bisimilarity, and determines whether a formula is complete.

I think that there is still a lot of work that can be done in studying this problem, with respect to a variety of other notions of equivalence considered in concurrency theory, so stay tuned for further updates.

by Luca Aceto at April 18, 2019 05:19 PM UTC

In differential privacy (DP), we want to query a database about $n$ users, in a way that "leaks at most $\varepsilon$ about any individual user," even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure $n$ quantum states, in a way that "damages the states by at most $\alpha$," even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of $n$ quantum states, any measurement that is $\alpha$-gentle for small $\alpha$ is also $O( \alpha)$-DP, and any product measurement that is $\varepsilon$-DP is also $O(\varepsilon\sqrt{n})$-gentle. Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown $d$-dimensional quantum state $\rho$, as well as known two-outcome measurements $E_{1},\ldots,E_{m}$, shadow tomography asks us to estimate $\Pr\left[ E_{i}\text{ accepts }\rho\right] $, for every $i\in\left[ m\right] $, by measuring few copies of $\rho$. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using $O\left( \left( \log m\right) ^{2}\left( \log d\right) ^{2}\right)$ copies of $\rho$, compared to Aaronson's previous bound of $\widetilde{O} \left(\left( \log m\right) ^{4}\left( \log d\right)\right) $. Our protocol has the advantages of being online (that is, the $E_{i}$'s are processed one at a time), gentle, and conceptually simple. Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.

at April 18, 2019 12:48 PM UTC

Based on Scott's review, I read through Stephen Pinker's Enlightenment Now. I can't top Scott's exposition of the book, but it is pretty incredible how far humanity has gone when you step back to look at the big picture.

One line intrigued me, one that Pinker credits to a book called The Big Picture by Sean Carroll
The laws of physics underlying everyday life (that is excluding extreme values of energy and gravitation like black holes, dark matter and the Big Bang) are completely known.
Hasn't this statement almost always been true, in the sense that the leading minds would have made this claim at many points in history? The ancient Greeks probably believed they understood the physics that underlies everyday life. So did physicists after Newton. But that was everyday life back then, not today: my everyday life involves using a GPS device that requires understanding relativistic effects, and computer chips that needed other scientific advances.

Is it possible we could do more in everyday life if we knew more physics? I'd certainly use a teleporter in everyday life.

And is the statement even true today? We all use public key cryptography, even to read this blog. It's not completely clear if we understand the physics enough to know how or if large-scale quantum computers capable of breaking those systems can be built.

Everyday life is relative.

by Lance Fortnow at April 18, 2019 11:40 AM UTC

Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions $f:\{0,1\}^m \to \mathbb{R}$ such that $f(U_m)$ has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best-known constructions of averaging samplers for $[0,1]$-bounded functions in the regime of parameters where the approximation error $\varepsilon$ and failure probability $\delta$ are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.

at April 18, 2019 12:27 AM UTC

Recently a Twitter account called justsaysinmice started. The only thing this account does is repost breathless news articles about medical research breakthroughs that fail to mention that the effect in question was only observed in mice, and then add the words “IN MICE” to them. Simple concept, but it already seems to be changing the conversation about science reporting.

It occurred to me that we could do something analogous for quantum computing. While my own deep-seated aversion to Twitter prevents me from doing it myself, which of my readers is up for starting an account that just reposts one overhyped QC article after another, while appending the words “A CLASSICAL COMPUTER COULD ALSO DO THIS” to each one?

by Scott at April 17, 2019 07:14 PM UTC

Lancaster – Watching the outcomes of the Israeli elections (photo: Andrey Kupavskii)


I just came back from a trip to Sweden and the U.K. I was invited to Gothenburg to be the opponent for a Ph.D. candidate, Malin Palö Forsström (by now Dr. Malin Palö Forsström), who wrote her excellent Ph.D. thesis under the supervision of Jeff Steif at Chalmers University. We also used the opportunity for a lovely mini-mini-workshop.

From Gothenburg I took the train to Stockholm to spend the weekend with Anders Björner, and we talked about some old projects regarding algebraic shifting. We had dinner with several colleagues including Svante Linusson, who is a candidate for the European Parliament!

Stockholm: With Anders and Cristins in the late 80s (left, I think this was also when I was an opponent), Svante Linusson ten days ago (right)

The United Kingdom

The British Mathematical Colloquium at Lancaster was a lovely 4-day general meeting, an opportunity to meet some old and new friends (and Internet MO friend Yemon Choi in real life), and to learn about various new developments. I am aware of the fact that my list of unfulfilled promises is longer than those of most politicians, but I do hope to come back to some mathematics from this trip to Sweden and to Lancaster.

Election Day

Tuesday of last week was election day in Israel, and as much as I like to participate (and to devote a post to election day here on the blog – in 2009, 2012, and 2015), I had to miss the election for the first time since 1985. (I still tried to follow the outcomes in real time.)

The Negev, Israel

And we are now spending a three-day vacation doing some mild hiking in Mitzpe Ramon, in the Negev, the Israeli desert. The view around here is spectacular. I first fell in love with the sights of the Negev when I spent six months here at age 19 (in the army). Since then we have been coming here many times over the years, and in 2002 the annual meeting of the Israeli Mathematical Union took place here, in the same hotel.


Ein Ovdat (left). The 2002 Annual meeting of the IMU (right). A large number of Israeli mathematicians come to a substantial fraction of these annual events.

The stance of the main Israeli parties on quantum computing

One anecdote about the Israeli election is that both major political parties of Israel stand behind quantum computing! 🙂 The Likud, led by Benjamin (Bibi) Netanyahu, won 35 seats in the parliament and will probably lead the coalition; the newly formed “Blue and White” party, led by Benny (Benjamin) Gantz, also won 35 seats and will probably lead the opposition.

Left – A paragraph from “Blue and White’s” charter with a pledge to quantum computing (I thank Noam Lifshitz for telling me about it). Right –  a news item (click for the article) about the quantum computing vision of Netanyahu and the Likud party.


by Gil Kalai at April 17, 2019 07:10 PM UTC

Last year I took some time off to study online convex optimization in some detail. The reason for doing that was similar to the reason why at some point I took time off to study spectral graph theory: it was coming up in several papers that I wanted to understand, and I felt that I was missing out by not mastering an important tool. In particular, I wanted to understand:

  1. The Barak-Hardt-Kale proof of the Impagliazzo hard-core lemma.
  2. The online convex optimization viewpoint on the Frieze-Kannan weak regularity lemma, on the dense model theorem of (RTTV), and on the abstract weak regularity lemma of (TTV) that were described to me by Madhur Tulsiani a few years ago. Furthermore, I wanted to see if Russell Impagliazzo’s subsequent improvements to the dense model theorem and to the abstract weak regularity lemma could be recovered from this point of view.
  3. The Arora-Kale algorithms for semidefinite programming, including their nearly linear-time algorithm for approximating the Goemans-Williamson relaxation of Max Cut.
  4. The meaning of the sentence “multiplicative weights and gradient descent are both special cases of follow-the-regularized-leader, using negative entropy and {\ell_2^2} as regularizer, respectively.”
  5. The Allen-Zhu-Liao-Orecchia online optimization proof of the Batson-Spielman-Srivastava sparsification result.

I am happy to say that, except for the “furthermore” part of (2), I achieved my goals. To digest this material a bit better, I came up with the rather ambitious plan of writing a series of posts, in which I would alternate between (i) explaining a notion or theorem from online convex optimization (at a level that someone learning about optimization or machine learning might find useful) and (ii) explaining a complexity-theoretic application. Now that a very intense Spring semester is almost over, I plan to get started on this plan, although it is not clear that I will see it through to the end. So stay tuned for the forthcoming first episode, which will be about the good old multiplicative weights algorithm.
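
In the meantime, here is a minimal sketch of that good old algorithm in the experts setting (the horizon, number of experts, learning rate, and random losses below are illustrative choices, not taken from any particular paper):

```python
import math, random

def hedge(losses, eta):
    """Multiplicative weights (Hedge): keep one weight per expert and
    multiply it by exp(-eta * loss) after each round. This is exactly
    follow-the-regularized-leader with the negative-entropy regularizer;
    swapping in (1/2)||x||_2^2 would give gradient-descent-style updates."""
    n = len(losses[0])
    w = [1.0] * n
    total = 0.0
    for round_losses in losses:
        norm = sum(w)
        p = [wi / norm for wi in w]          # play the normalized weights
        total += sum(pi * li for pi, li in zip(p, round_losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total

rng = random.Random(0)
T, n = 1000, 10
losses = [[rng.random() for _ in range(n)] for _ in range(T)]
alg = hedge(losses, eta=math.sqrt(8.0 * math.log(n) / T))
best = min(sum(l[i] for l in losses) for i in range(n))
regret = alg - best   # at most sqrt((T/2) ln n) for any loss sequence in [0,1]
```

The regret guarantee holds for arbitrary (even adversarial) loss sequences, which is what makes the algorithm so useful in the complexity-theoretic applications listed above.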

by luca at April 17, 2019 03:27 PM UTC

Here are in theory‘s first ever book reviews! The books are

Giorgio Garuzzo
Quando in Italia si facevano i computer
Available for free online.

Giorgio Ausiello
The Making of a New Science
Available from Springer, as a DRM-free PDF through your academic library.

Both books talk about the early years of computing in Italy, on the industrial and academic side, respectively. They briefly intersect with the story of Olivetti’s Elea computer.

Olivetti was a company founded in 1908 to make typewriters, which then branched out into other office and business machines and avionics. In the 1930s, Adriano Olivetti, a son of the founder Camillo Olivetti, took over the company. Adriano Olivetti was an unusual entrepreneur, deeply interested in the arts, humanities and social sciences, with a utopian vision of a company reinvesting its profits in its community. In the 1950s, he led the company to develop the Elea, the first Italian computer. The Elea was made with transistors, and it came out before IBM had built its own first transistor-based computer.

The development of Elea was led by Mario Tchou. Mario Tchou was a Chinese-Italian born and raised in Rome, who studied electrical engineering at the Sapienza University of Rome and then at Brooklyn Polytechnic, eventually becoming an assistant professor at Columbia University. Olivetti persuaded Tchou to move back to Italy and lead the development of Elea, whose first prototype came out in 1957.

As production was ramping up, tragedy struck: Adriano Olivetti died in 1960, and Mario Tchou died in 1961. To shore up the finances of the company, the new CEO Roberto Olivetti brought in a series of new investors, who pushed to spin off the computer business.

At that point, Olivetti was working on another revolutionary machine, the P101, a programmable desktop calculator billed as the “first desktop computer,” which came out in 1964, attracting huge interest. Nonetheless the company spun off its “computer” division into a joint venture with GE, eventually divesting itself of it completely. Fortunately, they kept control of the P101 project, because those working on it had been careful to brand it internally as a “calculator” (not part of the deal with GE) rather than a “computer.”

These events are narrated, with a fascinating insider view, in Garuzzo’s book.

Giorgio Ausiello is one of the founding fathers of academic computer science in Italy. His book is a professional memoir that starts in the 1960s, at the time in which he started working on his undergraduate thesis at the Istituto Nazionale per le Applicazioni del Calcolo (INAC, later renamed IAC) at the National Research Council in Rome. At that point INAC had one of Italy’s few computers, a machine bought in 1954 from the Ferranti company in Manchester (when it was installed, it was Italy’s second computer).

As narrated in a previous post, Mauro Picone, the mathematician who was leading INAC, brought Corrado Böhm to Rome to work on this computer, and Ausiello started to work with Böhm at the time when Böhm was just beginning to think about models of computation and lambda-calculus.

Later, Ausiello visited Berkeley in the 1968-69 academic year, when Manuel Blum and Dick Karp had just joined the faculty. Ausiello took part in the first STOC, which was held in Marina del Rey in May 1969, and, later that month, he witnessed the occupation of People’s Park in Berkeley.

The Fall of 1969 marks the start of the first Italian undergraduate programs in Computer Science, in just four universities: Bari, Milan, Pisa and Torino. Back in Italy from Berkeley, Ausiello continued to work at the National Research Council in Rome.

The book continues with a behind-the-scenes narration of the events that led to the founding of the EATCS professional society, the ICALP conference and the TCS journal. There is also another trip to Berkeley in the 1980s, featuring Silvio Micali and Vijay Vazirani working on their matching algorithm, and Shafi Goldwasser just arriving in Berkeley.

Methodically documented and very detail-oriented, the book is a fascinating read, although it leaves you sometimes wanting to hear more about the personalities and the stories of the people involved and less about the attendance lists of certain meetings.

Even when it comes to the drier details, however, I am happy that the book documents them and makes them available to future generations that will not have any living memory of the 1960s and 1970s.

I should also mention that Alon Rosen has recently interviewed Christos Papadimitriou and Avi Wigderson, and those (long) interviews are full of good stories. Finally, the Simons Foundation site has an interview with Laszlo Lovasz, in conversation with Avi Wigderson, which I highly recommend watching.

by luca at April 17, 2019 12:52 AM UTC

A random variable $X$ is an $(n,k)$-zero-fixing source if for some subset $V\subseteq[n]$, $X$ is the uniform distribution on the strings $\{0,1\}^n$ that are zero on every coordinate outside of $V$. An $\epsilon$-extractor for $(n,k)$-zero-fixing sources is a mapping $F:\{0,1\}^n\to\{0,1\}^m$, for some $m$, such that $F(X)$ is $\epsilon$-close in statistical distance to the uniform distribution on $\{0,1\}^m$ for every $(n,k)$-zero-fixing source $X$. Zero-fixing sources were introduced by Cohen and Shinkar in~2015 in connection with the previously studied extractors for bit-fixing sources. They constructed, for every $\mu>0$, an efficiently computable extractor that extracts a positive fraction of entropy, i.e., $\Omega(k)$ bits, from $(n,k)$-zero-fixing sources where $k\geq(\log\log n)^{2+\mu}$. In this paper we present two different constructions of extractors for zero-fixing sources that are able to extract a positive fraction of entropy for $k$ essentially smaller than $\log\log n$. The first extractor works for $k\geq C\log\log\log n$, for some constant $C$. The second extractor extracts a positive fraction of entropy for $k\geq \log^{(i)}n$ for any fixed $i\in \N$, where $\log^{(i)}$ denotes $i$-times iterated logarithm. The fraction of extracted entropy decreases with $i$. The first extractor is a function computable in polynomial time in~$n$ (for $\epsilon=o(1)$, but not too small); the second one is computable in polynomial time when $k\leq\alpha\log\log n/\log\log\log n$, where $\alpha$ is a positive constant.

at April 16, 2019 03:10 PM UTC

June 3-14, 2019 U. Colorado Boulder and Colorado State U The Department of Mathematics at Colorado State University and the Department of Computer Science at the University of Colorado, Boulder invite interested participants to attend a workshop and conference on Tensors: Algebra, Computation, and Applications (TACA). The central theme of tensors is meant to … Continue reading Tensors: Algebra-Computation-Applications

by shacharlovett at April 16, 2019 03:01 PM UTC

by David Eppstein at April 15, 2019 05:43 PM UTC

Disclaimer: This blog post is not intended to offend University of Pennsylvania or IIT Kharagpur or Yahoo or any individuals or parties involved. The goal of this blog post is to point out some of the inefficiencies and acknowledge them as one of the motivations behind developing TrueCerts platform.

The year was 2005. I was applying to PhD programs in Computer Science at top US universities; I applied to 14 of them. On Dec 2nd 2005, I received the following email (see the screenshot below) from the University of Pennsylvania (UPenn), Penn Engineering, Office of Academic Programs, Graduate Admissions. I have redacted the name, email address and phone numbers in the emails to preserve their privacy.


Here are photos of the original transcript (which I received from IIT Kharagpur when I graduated) and the additional transcripts (which I received from IIT Kharagpur when I requested them for my PhD applications). They were all laminated by IIT Kharagpur and sent to me. Feel free to laugh at my appearance on the transcript. I do too 😉


IIT Kharagpur stated the following in their transcripts policy.

“Institute does not take responsibility of sending the duplicate copy of grade cards (transcripts) directly to other institutions/organizations, in connection with the applicants’ admission/employment etc.”

My reply to the above email and the response from UPenn are shown in the following screenshot.


In summary, UPenn was concerned that my transcript was laminated and had been opened. Surprised at their response (“Your application will not go any further with the opened transcript”), I spent a couple of hours searching online and discovered that there is a lot of scam involving fake degrees and fake transcripts. There are “professionals” in India and China creating the “highest quality fakes,” which are often laminated. So UPenn decided not to process any applications with opened, laminated credentials. All my credentials were valid and correct, but my application was not processed because IIT Kharagpur had no way to “securely send” valid transcripts and UPenn had no way to “efficiently validate applicants’ transcripts.”

I applied to 14 universities, and some of them rejected me because ‘they found a better candidate’ or ‘the research group is not looking for new PhD students,’ etc. But my UPenn application not going further because of an “inefficient and broken system” was frustrating, to say the least.

After a week, I recovered from this frustration and went back to my daily routine of reading research papers on theoretical computer science. I said to myself, “I will get into one of the remaining 13 universities, and I have to focus on research and resolve the P vs NP problem” 😉

On a lighter note: there is a simple way to verify that my transcript is valid — ‘simply open it and look at my grades’. I received several B’s, some C’s and even a couple of D’s. My CGPA is just average. Nobody in their right mind would create a fake transcript with those grades 🙂 During the last decade, whenever I met an IITian, I would first set them up by telling them my credentials (B-Tech from IIT Kharagpur, Master’s from USC, PhD from Georgia Tech, taught at Princeton), and after they said “wow,” I would bet them that their CGPA at IIT was higher than mine. I have never lost to date. I take pride in this fact 😉 In my defense, I was an all-rounder at IIT Kharagpur, balancing studies, serving as the ‘secretary of fine arts, modeling and dramatics’ of my hostel, painting, learning guitar and many more things.

A second incident: during summer 2010 (at the end of my fourth year as a PhD student at Georgia Tech), I was offered an internship at Yahoo Labs, and the Yahoo HR team said they wanted to verify all my credentials (my IIT B-Tech degree, USC Master’s degree, work experience). Yahoo uses a third-party service to rigorously verify all credentials of potential employees and interns; Yahoo once fired its own CEO after finding that he had lied on his resume. The process is very rigorous, but very time-consuming: the verification of all my credentials took more than a couple of months. Meanwhile, I was waiting with my fingers crossed (figuratively speaking), hoping the verifications would finish soon so that I could start my internship, earn some serious summer money for two months, and see a bank balance of more than $1,000 for the first time during grad school 😉

Later that year, when I discovered Bitcoin white paper and the underlying Blockchain, my first thought was to use the technology to build a ‘document integrity platform’ to issue tamper-proof credentials (degrees, transcripts, employment certificates, Visas, Identity documents, driver’s license etc), which can be verified instantaneously and securely on the blockchain.
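
As a minimal sketch of the underlying idea (an illustration of hash commitments in general, not TrueCerts’ actual design): an issuer publishes only a cryptographic digest of a credential, and anyone holding the document can recompute the digest and compare it against the published record.

```python
import hashlib

def commit(document: bytes) -> str:
    """Publishable commitment: the SHA-256 digest of the document."""
    return hashlib.sha256(document).hexdigest()

def verify(document: bytes, published: str) -> bool:
    """Anyone can recompute the digest and compare to the published one."""
    return commit(document) == published

transcript = b"B-Tech, IIT Kharagpur; CGPA: average"
record = commit(transcript)       # in this sketch, the digest (not the
                                  # document itself) goes on the blockchain

print(verify(transcript, record))              # True: genuine copy
print(verify(b"CGPA: 10.0 (forged)", record))  # False: tampering detected
```

Because the digest is stored on an append-only ledger, neither the applicant nor anyone else can later swap in a doctored transcript without the mismatch being instantly detectable.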

In early 2011, I got busy writing my thesis, and in September 2011 I joined the CS department at Princeton University. I spent the first couple of years teaching at Princeton, mining Bitcoin, keeping track of blockchain news, and hoping that somebody would develop a ‘document integrity platform.’ To my relief, some entrepreneurs tried to develop credential verification solutions. To my frustration, none of those solutions were perfect. So I started preparing myself to become a full-time entrepreneur and left Princeton in summer 2015.

Today, I am very glad we have a complete data integrity solution (for universities, employers and enterprises) and I am very excited that we are preventing fraud and corruption in several areas using our TrueCerts platform.

Every week I read several stories about fraud and corruption online (e.g., the recent college admissions scandal). I approach the parties involved and explain how such instances can be rigorously prevented using technology.

I feel very fortunate to have discovered an exciting vision towards a fraud-free and efficient future. This discovery happened through the above mentioned unfortunate events. Sometimes the lowest points in your life have the greatest potential to show you the right path to your highest points.

by kintali at April 15, 2019 04:35 AM UTC

About a month ago (after my P vs NP poll appeared) I got an email from Jacob Aron asking me some questions about it. One thing he was excited about was that the fraction of people who thought P vs NP would be solved in the next decade had increased from 11% to 22%. I told him that this surprised me too, and that there had been no major advances to warrant that increase.

Then the article came out. Here is the pointer to the headline and the first few lines; the rest is behind a paywall.


You may notice that the headline is

We could solve the biggest problem in math in the next decade

I emailed Jacob to send me the article, which he did. The article was fine, even quoting me as saying that the increase of people who thought it would be solved soon was unwarranted.

1) So, article fine, headline terrible.

2)  A more honest headline would be

The Complexity Theory Community slightly more optimistic about when P vs NP will be resolved for no apparent reason.

3) More bad science: here

Headline is

Turing's Oracle: The computer that goes beyond logic

I think the article was about how a Turing Machine with an oracle for HALT can solve HALT. (If I am wrong about this let me know in the comments and I'll correct it.)

4) More bad science: here


Finally, a problem that only Quantum Computers will ever be able to solve.

This was about the oracle such that BQP is not in PH. Really!

5) I invite you to add your own.

6) OKAY, so why is the press SO BAD at reporting on our field? And is it just our field? I have one explanation, though I am sure there are many.

Our field is about showing the LIMITS of computation. It's hard to make that sexy, so they ... lie? Exaggerate? Maybe they themselves don't really understand our field. Note:

To explain to someone who does not really know CS why it's important to have an oracle where BQP is not in PH is hard.

To explain this to someone IN CS but not in Theory is still hard!

To explain this to someone IN CS and even in CS Theory, but not complexity (e.g., algorithms) might be hard, though it may depend on the person.

7) The old saying is `I don't care if you get the story wrong so long as you spell my name right.' And indeed, they did spell my name right. So there is that! But more seriously, and less about me or even the article that refers to my poll: is it bad that science reporting is often wrong?

by GASARCH ( at April 15, 2019 04:08 AM UTC

by Abby van Soest and Elad Hazan, based on this paper 

When humans interact with the environment, we receive a series of signals that indicate the value of our actions. We know that chocolate tastes good, sunburns feel bad, and certain actions generate praise or disapproval from others. Generally speaking, we learn from these signals and adapt our behavior in order to get more positive “rewards” and fewer negative ones.

Reinforcement learning (RL) is a sub-field of machine learning that formally models this setting of learning through interaction in a reactive environment. In RL, we have an agent and an environment. The agent observes its position (or “state”) in the environment and takes actions that transition it to a new state. The environment looks at an agent’s state and hands out rewards based on a hidden set of criteria.

Source: Reinforcement Learning: An Introduction. Sutton & Barto, 2017.

Typically, the goal of RL is for the agent to learn behavior that maximizes the total reward it receives from the environment. This methodology has led to some notable successes: machines have learned how to play Atari games, how to beat human masters of Go, and how to write long-form responses to an essay prompt.
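
The interaction loop described above can be sketched in a few lines (the three-state chain environment and its hidden reward rule below are made up for illustration):

```python
import random

class ChainEnv:
    """A made-up three-state chain: states 0, 1, 2; reward only in state 2
    (the environment's 'hidden criterion' is simply being in the rightmost
    state, which the agent does not know in advance)."""
    def __init__(self):
        self.state = 0
    def step(self, action):            # action is -1 (left) or +1 (right)
        self.state = min(2, max(0, self.state + action))
        reward = 1.0 if self.state == 2 else 0.0
        return self.state, reward

env = ChainEnv()
rng = random.Random(0)
total = 0.0
for _ in range(10):
    action = rng.choice((-1, 1))       # observe the state, pick an action
    state, reward = env.step(action)   # environment transitions and pays out
    total += reward                    # RL's usual goal: maximize this sum
```

A learning agent would replace the random action choice with a policy that it improves based on the observed rewards.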

But what can we learn without an external reward signal?

It seems like a paradoxical question to ask, given that RL is all about rewards. But even though the reward paradigm is fundamentally flexible in many ways, it is also brittle and limits the agent’s ability to learn about its environment. There are several reasons for this. First, a reward signal directs the agent toward a single specific goal that may not generalize. Second, the reward signal may be sparse and uninformative, as we illustrate below.

Imagine that you want a robot to learn to navigate through the following maze.

Case 1: Sparse Rewards. The agent gets a reward of +1 when it exits the maze, and a reward of 0 everywhere else. The agent doesn’t learn anything until it stumbles upon the exit.

⇒ Clear reward signals are not always available.

Case 2: Misleading Rewards. The agent gets a reward of +1 at the entrance and a reward of +10 at the exit. The agent incorrectly learns to sit at the entrance because it hasn’t explored its environment sufficiently.

⇒ Rewards can prevent discovery of the full environment.

These issues are easy to overcome in the small maze on the left. But what about the maze on the right? As the size of the environment grows, it’ll get harder and harder to find the correct solution — the difficulty of the problem scales exponentially.
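
A quick simulation illustrates the sparse-reward problem of Case 1 (the corridor lengths and step budget below are arbitrary): a randomly exploring agent stumbles on the exit of a short corridor fairly often, but essentially never finds the exit of a longer one, so it never observes the +1 signal at all.

```python
import random

def reaches_exit(length, budget, rng):
    """Random +-1 walk along a corridor of `length` cells, starting at 0
    with a wall behind. Returns True iff the exit at `length` is reached
    within `budget` steps -- the only event that ever yields the sparse +1."""
    pos = 0
    for _ in range(budget):
        pos = max(0, pos + rng.choice((-1, 1)))
        if pos >= length:
            return True
    return False

rng = random.Random(0)
trials = 500
small = sum(reaches_exit(5, 50, rng) for _ in range(trials)) / trials
large = sum(reaches_exit(25, 50, rng) for _ in range(trials)) / trials
# small corridor: the exit is found often; large corridor: almost never
```

With no reward ever observed in the large corridor, a reward-driven learner has nothing to learn from, which is exactly the failure mode described above.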


So what we find is that there is power in being able to learn effectively in the absence of rewards. This intuition is supported by a body of research showing that learning fails when rewards aren’t dense or are poorly shaped, and that fixing these problems can require substantial engineering effort.

By enabling agents to discover the environment without the requirement of a reward signal, we create a more flexible and generalizable form of reinforcement learning. This framework can be considered a form of “unsupervised” RL. Rather than relying on explicit and inherently limited signals (or “labels”), we can deal with a broad, unlabelled pool of data. Learning from this pool facilitates a more general extraction of knowledge from the environment.

Our new approach: Maximum Entropy

In recent work, we propose finding a policy that maximizes entropy (which we refer to as a MaxEnt policy), or another related concave function of the state distribution. This objective is reward-independent and favors exploration.

In the video below, a two-dimensional cheetah robot learns to run backwards and forwards, move its legs fast and in all different directions, and even do flips. The cheetah doesn’t have access to any external rewards; it only uses signals from the MaxEnt policy.


Entropy is a function of the distribution over states. A high entropy distribution visits all states with near-equal frequency — it’s a uniform distribution. On the other hand, a low entropy distribution is biased toward visiting some states more frequently than others. (In the maze example, a low entropy distribution would result from the agent sitting at the entrance of the maze forever.)


So, given that a policy \pi induces a distribution d_\pi over states, the problem we hope to solve is:

\pi^* = \arg \max_{\pi} \text{entropy}(d_\pi)

When we know all the states, actions, and dynamics of a given environment, finding the policy with maximum entropy is a concave optimization problem. This type of problem can be easily and exactly solved by convex programming.
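
To make the objective concrete, here is a small numeric sketch of the entropy of two state-visitation distributions (the five-state example is made up): the uniform distribution achieves the maximum, while the “sit at the entrance” behavior from the maze example scores very low.

```python
import numpy as np

def state_entropy(d):
    """Entropy of a state-visitation distribution (0 * log 0 taken as 0)."""
    d = np.asarray(d, dtype=float)
    nz = d[d > 0]
    return -np.sum(nz * np.log(nz))

n_states = 5
uniform = np.full(n_states, 1.0 / n_states)     # visits every state equally
sit_at_entrance = np.array([0.96, 0.01, 0.01, 0.01, 0.01])  # maze failure mode

print(state_entropy(uniform))           # log(5), the maximum possible
print(state_entropy(sit_at_entrance))   # much lower
```

Maximizing this quantity over policies is what pushes the agent toward full coverage of the state space.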

But we very rarely have all that knowledge available to us. In practice, one of several complications usually arises:

  1. The states are non-linearly approximated by a neural network or some other function approximator.
  2. The transition dynamics are unknown.
  3. The state space is intractably large. (As an interesting example, the game of Go has more than one googol or 10^{100} possible states. That’s more than the number of atoms in the universe, according to this blog from 2016, and definitely more than can fit on a computer.)

In such cases, the problem of finding a max-entropy policy becomes non-convex and computationally hard.

So what to do? If we look at many practical RL problems (Atari, OpenAI Gym), we see that there are many known, efficient solvers that can construct an optimal (or nearly-optimal) policy when they are given a reward signal.

We thus consider an oracle model: let’s assume that we have access to one of these solvers, so that we can pass it an explicit reward vector and receive an optimal policy in return. Can we now maximize entropy in a provably efficient way? In other words, is it possible to reduce this high complexity optimization problem to that of “standard” RL?

Our approach does exactly that! It is based on the Frank-Wolfe method, a gradient-based optimization algorithm that is particularly suited for oracle-based optimization. Instead of taking a step along the gradient itself, the Frank-Wolfe method iteratively moves toward the point of the feasible set that is optimal in the direction of the gradient. This is depicted below (and deserves a separate post…). The Frank-Wolfe method is a projection-free algorithm; see this exposition on its theoretical properties.
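
Here is a minimal sketch of the Frank-Wolfe idea on a toy version of the problem: maximizing entropy directly over the probability simplex, where the linear-maximization step returns a vertex, playing the role of the oracle that hands back an optimal (deterministic) policy for an explicit reward vector. This toy problem is for illustration only; it is not the algorithm of the paper.

```python
import numpy as np

def entropy(x):
    """Shannon entropy; the clamp avoids log(0)."""
    return -np.sum(x * np.log(np.maximum(x, 1e-12)))

def frank_wolfe_maxent(x0, iters=5000):
    """Maximize entropy over the probability simplex with Frank-Wolfe.
    Each step calls a linear-maximization 'oracle' that returns a vertex
    of the simplex, then moves a little toward it, so no projection onto
    the simplex is ever needed."""
    x = x0.copy()
    for k in range(iters):
        grad = -(np.log(np.maximum(x, 1e-12)) + 1.0)  # gradient of entropy
        s = np.zeros_like(x)
        s[np.argmax(grad)] = 1.0           # best vertex for the linearization
        gamma = 2.0 / (k + 3.0)            # diminishing step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

x0 = np.array([0.7, 0.1, 0.1, 0.1])        # a biased initial distribution
x = frank_wolfe_maxent(x0)                 # approaches uniform, entropy -> log 4
```

The convex combination in the last step is what keeps the iterates feasible without any projection, which is exactly the property that makes the method compatible with an RL-solver oracle.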

For the exact specification of the algorithm and its performance guarantee, see our paper.


To complement the theory, we also created some experiments to test the MaxEnt algorithm on simulated robotic locomotion tasks (open source code available here). We used test environments from OpenAI Gym and Mujoco and trained MaxEnt experts for various environments.

These are some results from the Humanoid experiment, where the agent is a human-like bipedal robot. The behavior of the MaxEnt agent (blue) is baselined against a random agent (orange), who explores by sampling randomly from the environment. This random approach is often used in practice for epsilon-greedy RL exploration.

In this figure, we see that over the course of 25 epochs, the MaxEnt agent progressively increases the total entropy over the state space.

Here, we see a visualization of the Humanoid’s coverage of the $xy$-plane, where the shown plane is of size 40-by-40. After one epoch, there is minimal coverage of the area. But by the 5th, 10th, and 15th epoch, we see that the agent has learned to visit all the different states in the plane, obtaining full and nearly uniform coverage of the grid!


by Elad Hazan at April 15, 2019 01:03 AM UTC

Color the points of an grid with two colors. How big a monochromatic grid-like subset can you find? By “grid-like” I mean that it should be possible to place equally many horizontal and vertical lines, partitioning the plane into cells each of which contains a single point.

So for the coloring of the grid below, there are several monochromatic grid-like subsets. The image below shows one, and the completely red and blue southwest and northeast quadrants provide two others. The blue quadrant prevents any red grid-like subset from being larger than , and vice versa, so these are the largest grid-like subsets in this grid.

Monochromatic grid-like subset in a colored grid

It’s not hard to prove that there always exists a monochromatic grid-like subset of size at least . Just use vertical and horizontal lines to partition the big grid into blocks of that size. If one of those blocks is monochromatic, then it’s the grid-like subset you’re looking for. And if not, you can choose a red point from each block to get a grid-like subset of the same size.
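
The block argument above can be sketched in code (the grid size n = 16 and block side s = 4 below are illustrative choices): either some block is monochromatic and is itself grid-like, or every block contains both colors and one point of a fixed color per block forms a monochromatic grid-like set.

```python
import random

def mono_gridlike(grid, s):
    """Block argument on an n x n two-colored (0/1) grid with s x s blocks:
    if some block is monochromatic, return all its points; otherwise every
    block contains both colors, so picking one point of color 1 per block
    gives a monochromatic set with one point per cell of the block grid."""
    n = len(grid)
    starts = range(0, n, s)
    for bi in starts:
        for bj in starts:
            block = [(i, j) for i in range(bi, bi + s)
                     for j in range(bj, bj + s)]
            if len({grid[i][j] for i, j in block}) == 1:
                return block           # a monochromatic block is grid-like
    # no monochromatic block: every block contains a point of color 1
    return [next((i, j) for i in range(bi, bi + s)
                 for j in range(bj, bj + s) if grid[i][j] == 1)
            for bi in starts for bj in starts]

random.seed(1)
n = 16
grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
picks = mono_gridlike(grid, s=4)       # 16 points, all of one color
```

In the second branch the separating lines of the block partition witness that the picked points are grid-like, since each cell of that partition contains exactly one picked point.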

In the other direction, there exist colorings of an grid for which the largest monochromatic grid-like subset has size only a little bigger, . To find such a coloring, partition the big grid into square blocks of size , and make each block monochromatic with a randomly chosen color.

Coloring a grid by dividing into square blocks and coloring each block randomly

Now, consider any partition by axis-parallel lines into (irregular) rectangles, each containing one of the points of a grid-like subset. Only one row or column of the rectangles can cross each line of the partition into square blocks, so the number of rectangles that include parts of two or more square blocks is . Any remaining rectangles of the partition must come from a grid-like subset of square blocks that are all colored the same as each other. But with a random coloring, the expected size of this largest monochromatic subset of square blocks is . Therefore, the number of rectangles that stay within a single square block is limited to the total number of points in this grid-like subset of square blocks, which is again .

I’m not sure how to eliminate the remaining gap between these two bounds, or which way it should go.

One application of these ideas involves the theory of superpatterns, permutations that contain as a pattern every smaller permutation up to some size . If is a superpattern for the permutations of size , then we can obtain a point set by interpreting the position and value of each element of as Cartesian coordinates. This point set includes a grid-like subset of size , coming from a permutation of size that translates to a grid-like set of points. If the elements of the superpattern are colored with two colors, there still exists a monochromatic grid-like subset of size . And this monochromatic grid-like subset corresponds to a superpattern, for permutations of size . So, whenever the elements of a superpattern are colored with two (or finitely many) colors, there remains a monochromatic subset of elements that is still a superpattern for permutations of some smaller but non-constant size.

(Discuss on Mastodon)

by David Eppstein at April 14, 2019 05:02 PM UTC

For quantified Boolean formulas (QBF), a resolution system with a symmetry rule was recently introduced by Kauers and Seidl (Inf. Process. Lett. 2018). In this system, many formulas hard for QBF resolution admit short proofs. Kauers and Seidl apply the symmetry rule on symmetries of the original formula. Here we propose a new formalism where symmetries are dynamically recomputed during the proof on restrictions of the original QBF. This presents a theoretical model for the potential use of symmetry learning as an inprocessing rule in QCDCL solving. We demonstrate the power of symmetry learning by proving an exponential separation between Q-resolution with the symmetry rule and Q-resolution with our new symmetry learning rule. In fact, we show that bounding the number of symmetry recomputations gives rise to a hierarchy of QBF resolution systems of strictly increasing strength.

at April 14, 2019 10:26 AM UTC

A locally decodable code (LDC) C:{0,1}^k -> {0,1}^n is an error correcting code wherein individual bits of the message can be recovered by only querying a few bits of a noisy codeword. LDCs found a myriad of applications both in theory and in practice, ranging from probabilistically checkable proofs to distributed storage. However, despite nearly two decades of extensive study, the best known constructions of O(1)-query LDCs have super-polynomial blocklength. The notion of relaxed LDCs is a natural relaxation of LDCs, which aims to bypass the foregoing barrier by requiring local decoding of nearly all individual message bits, yet allowing decoding failure (but not error) on the rest. State of the art constructions of O(1)-query relaxed LDCs achieve blocklength n = O(k^{1+ \gamma}) for an arbitrarily small constant \gamma. We prove a lower bound which shows that O(1)-query relaxed LDCs cannot achieve blocklength n = k^{1+ o(1)}. This resolves an open problem raised by Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (STOC 2004).

at April 14, 2019 08:31 AM UTC

The study of entanglement through the lens of interactive proof systems has been one of the most productive applications of complexity theory to the physical sciences that I know of. Last week Anand Natarajan and John Wright, postdoctoral scholars at Caltech and MIT respectively, added a major stone to this line of work. Anand & John (hereafter “NW”) establish the following wild claim: it is possible for a classical polynomial-time verifier to decide membership in any language in non-deterministic doubly exponential time by asking questions to two infinitely powerful, but untrusted, provers sharing entanglement. In symbols, NEEXP {\subseteq} MIP{^\star}! (The last symbol is for emphasis — no, we don’t have an MIP{^\star}! class — yet.)

What is amazing about this result is the formidable gap between the complexity of the verifier and the complexity of the language being verified. We have known since the 1990s that the use of interaction and randomness can greatly expand the power of polynomial-time verifiers, from NP to PSPACE (with a single prover) and NEXP (with two provers). As a result of the work of Natarajan and Wright, we now know that yet another ingredient, the use of entanglement between the provers, can be leveraged by the verifier — the same verifier as in the previous results, a classical randomized polynomial-time machine — to obtain an exponential increase in its verification power. Randomness and interaction brought us one exponential; entanglement gives us another.

To gain intuition for the result consider first the structure of a classical two-prover one-round interactive proof system for non-deterministic doubly exponential time, with exponential-time verifier. Cutting some corners, such a protocol can be obtained by “scaling up” a standard two-prover protocol for non-deterministic singly exponential time. In the protocol, the verifier would sample a pair of exponential-length questions {(X,Y)}, send {X} to one prover and {Y} to the other, receive answers {A} and {B}, and perform an exponential-time computation that verifies some predicate about {(X,Y,A,B)}.

How can entanglement help design an exponentially more efficient protocol? At first it may seem like a polynomial-time verifier has no way to even get started: if it can only communicate polynomial-length messages with the provers, how can it leverage their power? And indeed, if the provers are classical, it can’t: it is known that even with a polynomial number of provers, and polynomially many rounds of interaction, a polynomial-time verifier cannot decide any language beyond NEXP.

But the provers in the NW protocol are not classical. They can share entanglement. How can the verifier exploit this to its advantage? The key property that is needed is known as the rigidity of entanglement. In words, rigidity is the idea that by verifying the presence of certain statistical correlations between the provers’ questions and answers the verifier can determine precisely (up to a local change of basis) the quantum state and measurements that the provers must have been using to generate their answers. The most famous example of rigidity is the CHSH game: as already shown by Summers and Werner in 1987, the CHSH game can only be optimally, or even near-optimally, won by measuring a maximally entangled state using two mutually unbiased bases for each player. No other state or measurements will do, unless they trivially contain an EPR pair and mutually unbiased bases (such as a state that is the tensor product of an EPR pair with an additional entangled state).
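
To make the rigidity claim concrete, here is a small numerical check I have added (my own illustration, using the standard textbook measurement angles): the optimal quantum strategy for the CHSH game wins with probability cos²(π/8) ≈ 0.854, strictly above the best classical winning probability of 0.75.

```python
import numpy as np

def projectors(angle):
    # Rank-1 projectors for the basis {(cos t, sin t), (-sin t, cos t)}.
    v0 = np.array([np.cos(angle), np.sin(angle)])
    v1 = np.array([-np.sin(angle), np.cos(angle)])
    return [np.outer(v0, v0), np.outer(v1, v1)]

def chsh_win_probability(alice_angles, bob_angles):
    """Winning probability of the CHSH game (win iff a XOR b = x AND y)
    when the players measure a shared EPR pair at the given angles."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)      # (|00> + |11>) / sqrt(2)
    rho = np.outer(phi, phi)
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A = projectors(alice_angles[x])
            B = projectors(bob_angles[y])
            for a in (0, 1):
                for b in (0, 1):
                    if a ^ b == x & y:    # winning outcomes
                        total += np.trace(np.kron(A[a], B[b]) @ rho).real / 4
    return total

# Textbook optimal angles: Alice {0, pi/4}, Bob {pi/8, -pi/8}.
p = chsh_win_probability([0, np.pi / 4], [np.pi / 8, -np.pi / 8])
# p equals cos^2(pi/8), approximately 0.854, above the classical optimum 0.75.
```

Rigidity says much more than this calculation: not only is cos²(π/8) the optimal value, but any strategy achieving (close to) it must look like this one up to local changes of basis.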

Rigidity gives the verifier control over the provers’ use of their entanglement. The simplest use of this is for the verifier to force the provers to share a certain number {N} of EPR pairs and measure them to obtain identical uniformly distributed {N}-bit strings. Such a test for {N} EPR pairs can be constructed from {N} CHSH games. In a paper with Natarajan we give a more efficient test that only requires questions and answers of length that is poly-logarithmic in {N}. Interestingly, the test is built on classical machinery — the low-degree test — that plays a central role in the analysis of some classical multi-prover proof systems for NEXP.

At this point we have made an inch of progress: it is possible for a polynomial-time (in {n=\log N}) verifier to “command” two quantum provers sharing entanglement to share {N=2^n} EPR pairs, and measure them in identical bases to obtain identical uniformly random {N}-bit strings. What is this useful for? Not much — yet. But here comes the main insight in NW: suppose we could similarly force the provers to generate, not identical uniformly random strings, but a pair of {N}-bit strings {(X,Y)} that is distributed as a pair of questions from the verifier in the aforementioned interactive proof system for NEEXP with exponential-time (in {n}) verifier. Then we could use a polynomial-time (in {n}) verifier to “command” the provers to generate their exponentially-long questions {(X,Y)} by themselves. The provers would then compute answers {(A,B)} as in the NEEXP protocol. Finally, they would prove to the verifier, using a polynomial-length interaction, that {(A,B)} is a valid pair of answers to the pair of questions {(X,Y)} — indeed, the latter verification is an NEXP problem, hence can be verified using a protocol with polynomial-time verifier.

Sounds crazy? Yes. But they did it! Of course there are many issues with the brief summary above — for example, how does the verifier even know the questions {X,Y} sampled by the provers? The answer is that it doesn’t need to know the entire question; only that it was sampled correctly, and that the quadruple {(X,Y,A,B)} satisfies the verification predicate of the exponential-time verifier. This can be verified using a polynomial-time interactive proof.

Diving in, the most interesting insight in the NW construction is what they call “introspection”. What makes multi-prover proof systems powerful is the ability for the verifier to send correlated questions to the provers, in a way such that each prover has only partial information about the other’s question — informally, the verifier plays a variant of prisoner’s dilemma with the provers. In particular, any interesting distribution {(X,Y)} will have the property that {X} and {Y} are not fully correlated. For a concrete example think of the “planes-vs-lines” distribution, where {X} is a uniformly random plane and {Y} a uniformly random line in {X}. The aforementioned test for {N} EPR pairs can be used to force both provers to sample the same uniformly random plane {X}. But how does the verifier ensure that one of the provers “forgets” parts of the plane, to only remember a uniformly random line {Y} that is contained in it? NW’s insight is that the information present in a quantum state — such as the prover’s half-EPR pairs — can be “erased” by commanding the prover to perform a measurement in the wrong basis — a basis that is mutually unbiased with the basis used by the other prover to obtain its share of the query. Building on this idea, NW develop a battery of delicate tests that provide the verifier the ability to control precisely what information gets distributed to each prover. This allows a polynomial-time verifier to perfectly simulate the local environment that the exponential-time verifier would have created for the provers in a protocol for NEEXP, thus simulating the latter protocol with exponentially fewer resources.

One of the aspects of the NW result I like best is that they showed how the “history state barrier” could be overcome. Previous works attempting to establish strong lower bounds on the class MIP{^\star}, such as the paper by Yuen et al., rely on a compression technique that requires the provers to share a history state of the computation performed by a larger protocol. Unfortunately, history states are very non-robust, and as a result such works only succeeded in developing protocols with vanishing completeness-soundness gap. NW entirely bypass the use of history states, and this allows them to maintain a constant gap.

Seven years ago Tsuyoshi Ito and I showed that MIP{^\star} contains NEXP. At the time, we thought this may be the end of the story — although it seemed challenging, surely someone would eventually prove a matching upper bound. Natarajan and Wright have defeated this expectation by showing that MIP{^\star} contains NEEXP. What next? NEEEXP? The halting problem? I hope to make this the topic of a future post.

by Thomas at April 14, 2019 04:35 AM UTC

After about 14 years, ICE-TCS has finally decided to have its own blog. In the past, I have covered ICE-TCS related news here and I will continue to do so. Those posts will also appear in the centre's blog.

Have a look at the new blog if you are interested in the work we do and in the events at our little centre in Reykjavik, Iceland.

by Luca Aceto at April 13, 2019 12:14 PM UTC

The Department of Informatics at the University of Bergen (Norway) has announced a 3-year PhD position in Algorithms. The focus of the position will be on the algorithmic aspects of automated reasoning and, in particular, of automated theorem proving.


by shacharlovett at April 13, 2019 09:59 AM UTC

I am very pleased to inform you that Emilio Cruciani and Roberto Verdecchia, two third-year PhD students in CS at the GSSI,  and their coauthors Breno Miranda and Antonia Bertolino (member of the Scientific Committee of the PhD programme in CS at the GSSI) will receive an ICSE 2019 ACM SIGSOFT Distinguished Paper Award for their paper "Scalable Approaches for Test Suite Reduction". 
Distinguished Papers represent the very best contributions to the ICSE Technical Track, and are awarded to up to 10% of the papers. (ICSE is the premiere annual conference in the field of software engineering and is very competitive.) This is a remarkable achievement that reflects well on the authors, on CS@GSSI and on the institute as a whole. 

Congratulations to Antonia, Breno, Emilio and Roberto, not to mention the GSSI as a whole!

by Luca Aceto at April 13, 2019 09:13 AM UTC

ACM-SIAM Algorithmic Principles of Computer Systems (APoCS20)

January 8, 2020
Hilton Salt Lake City Center, Salt Lake City, Utah, USA
Colocated with SODA, SOSA, and Alenex

The First ACM-SIAM APoCS is sponsored by SIAM SIAG/ACDA and ACM SIGACT.

Important Dates:

August 9: Abstract Submission and Paper Registration Deadline
August 16: Full Paper Deadline
October 4: Decision Announcement

Program Chair:

Bruce Maggs, Duke University and Akamai Technologies


Contributed papers are sought in all areas of algorithms and architectures that offer insight into the performance and design of computer systems. Topics of interest include, but are not limited to, algorithms and data structures for:

Emerging Architectures
Energy Efficient Computing
High-performance Computing
Management of Massive Data
Networks, including Mobile, Ad-Hoc and Sensor Networks
Operating Systems
Parallel and Distributed Systems
Storage Systems

A submission must report original research that has not been published previously and is not concurrently submitted for publication elsewhere. Manuscripts must not exceed twelve (12) single-spaced double-column pages, in addition to the bibliography and any pages containing only figures. Submissions must be self-contained, and any extra details may be submitted in a clearly marked appendix.

Steering Committee:

Michael Bender
Guy Blelloch
Jennifer Chayes
Martin Farach-Colton (Chair)
Charles Leiserson
Don Porter
Jennifer Rexford
Margo Seltzer

by Paul Goldberg at April 12, 2019 08:27 AM UTC

I was saddened about the results of the Israeli election. The Beresheet lander, which lost its engine and crashed onto the moon as I was writing this, seems like a fitting symbol for where the country is now headed. Whatever he was at the start of his career, Netanyahu has become the Israeli Trump—a breathtakingly corrupt strongman who appeals to voters’ basest impulses, sacrifices his country’s future and standing in the world for short-term electoral gain, considers himself unbound by laws, recklessly incites hatred of minorities, and (despite zero personal piety) cynically panders to religious fundamentalists who help keep him in power. Just like with Trump, it’s probably futile to hope that lawyers will swoop in and free the nation from the demagogue’s grip: legal systems simply aren’t designed for assaults of this magnitude.

(If, for example, you were designing the US Constitution, how would you guard against a presidential candidate who openly supported and was supported by a hostile foreign power, and won anyway? Would it even occur to you to include such possibilities in your definitions of concepts like “treason” or “collusion”?)

The original Zionist project—the secular, democratic vision of Herzl and Ben-Gurion and Weizmann and Einstein, which the Nazis turned from a dream to a necessity—matters more to me than most things in this world, and that was true even before I’d spent time in Israel and before I had a wife and kids who are Israeli citizens. It would be depressing if, after a century of wildly improbable victories against external threats, Herzl’s project were finally to eat itself from the inside. Of course I have analogous worries (scaled up by two orders of magnitude) for the US—not to mention the UK, India, Brazil, Poland, Hungary, the Philippines … the world is now engaged in a massive test of whether Enlightenment liberalism can long endure, or whether it’s just a metastable state between one Dark Age and the next. (And to think that people have accused me of uncritical agreement with Steven Pinker, the world’s foremost optimist!)

In times like this, one takes one’s happiness where one can find it.

So, yeah: I’m happy that there’s now an “image of a black hole” (or rather, of radio waves being bent around a black hole’s silhouette). If you really want to understand what the now-ubiquitous image is showing, I strongly recommend this guide by Matt Strassler.

I’m happy that integer multiplication can apparently now be done in O(n log n), capping a decades-long quest (see also here).

I’m happy that there’s now apparently a spectacular fossil record of the first minutes after the asteroid impact that ended the Cretaceous period. Even better will be if this finally proves that, yes, some non-avian dinosaurs were still alive on impact day, and had not gone extinct due to unrelated climate changes slightly earlier. (Last week, my 6-year-old daughter sang a song in a school play about how “no one knows what killed the dinosaurs”—the verses ran through the asteroid and several other possibilities. I wonder if they’ll retire that song next year.) If you haven’t yet read it, the New Yorker piece on this is a must.

I’m happy that my good friend Zach Weinersmith (legendary author of SMBC Comics), as well as the GMU economist Bryan Caplan (rabblerousing author of The Case Against Education, which I reviewed here), have a new book out: a graphic novel that makes a moral and practical case for open borders (!). Their book is now available for pre-order, and at least at some point cracked Amazon’s top 10. Just last week I met Bryan for the first time, when he visited UT Austin to give a talk based on the book. He said that meeting me (having known me only from the blog) was like meeting a fictional character; I said the feeling was mutual. And as for Bryan’s talk? It felt great to spend an hour visiting a fantasyland where the world’s economies are run by humane rationalist technocrats, and where walls are going down rather than up.

I’m happy that, according to this Vanity Fair article, Facebook will still ban you for writing that “men are scum” or that “women are scum”—having ultimately rejected the demands of social-justice activists that it ban only the latter sentence, not the former. According to the article, everyone on Facebook’s Community Standards committee agreed with the activists that this was the right result: dehumanizing comments about women have no place on the platform, while (superficially) dehumanizing comments about men are an important part of feminist consciousness-raising that require protection. The problem was simply that the committee couldn’t come up with any general principle that would yield that desired result, without also yielding bad results in other cases.

I’m happy that the 737 Max jets are grounded and will hopefully be fixed, with no thanks to the FAA. Strangely, while this was still the top news story, I gave a short talk about quantum computing to a group of Boeing executives who were visiting UT Austin on a previously scheduled trip. The title of the session they put me in? “Disruptive Computation.”

I’m happy that Greta Thunberg, the 16-year-old Swedish climate activist, has attracted a worldwide following and might win the Nobel Peace Prize. I hope she does—and more importantly, I hope her personal story will help galvanize the world into accepting things that it already knows are true, as with the example of Anne Frank (or for that matter, Gandhi or MLK). Confession: back when I was an adolescent, I often daydreamed about doing exactly what Thunberg is doing right now, leading a worldwide children’s climate movement. I didn’t, of course. In my defense, I wouldn’t have had the charisma for it anyway—and also, I got sidetracked by even more pressing problems, like working out the quantum query complexity of recursive Fourier sampling. But fate seems to have made an excellent choice in Thunberg. She’s not the prop of any adult—just a nerdy girl with Asperger’s who formed the idea to sit in front of Parliament every day to protest the destruction of the world, because she couldn’t understand why everyone else wasn’t.

I’m happy that the college admissions scandal has finally forced Americans to confront the farcical injustice of our current system—a system where elite colleges pretend to peer into applicants’ souls (or the souls of the essay consultants hired by the applicants’ parents?), and where they preen about the moral virtue of their “holistic, multidimensional” admissions criteria, which amount in practice to shutting out brilliant working-class Asian kids in favor of legacies and rich badminton players. Not to horn-toot, but Steven Pinker and I tried to raise the alarm about this travesty five years ago (see for example this post), and were both severely criticized for it. I do worry, though, that people will draw precisely the wrong lessons from the scandal. If, for example, a few rich parents resorted to outright cheating on the SAT—all their other forms of gaming and fraud apparently being insufficient—then the SAT itself must be to blame so we should get rid of it. In reality, the SAT (whatever its flaws) is almost the only bulwark we have against the complete collapse of college admissions offices into nightclub bouncers. This Quillette article says it much better than I can.

I’m happy that there will be a symposium from May 6-9 at the University of Toronto, to honor Stephen Cook and the (approximate) 50th anniversary of the discovery of NP-completeness. I’m happy that I’ll be attending and speaking there. If you’re interested, there’s still time to register!

Finally, I’m happy about the following “Sierpinskitaschen” baked by CS grad student and friend-of-the-blog Jess Sorrell, and included with her permission (Jess says she got the idea from Debs Gardner).


by Scott at April 11, 2019 10:38 PM UTC

Elwyn R. Berlekamp entered this world on Sept 6, 1940, and left it on April 9, 2019.  Wikipedia calls him An American Mathematician, which seems too narrow to me. He worked on a variety of topics spanning math, computer science, finance, and likely others --- if you know any that I missed, leave comments.

Here are some of the things he worked on:

1) Factoring Polynomials over large finite fields (See here from 1970)

I quote the abstract
This paper reviews some of the known algorithms for factoring polynomials over finite fields and presents a new deterministic procedure for reducing the problem of factoring an arbitrary polynomial over the Galois field GF(p^m) to the problem of finding roots in GF(p) of certain other polynomials over GF(p). The amount of computation and the storage space required by these algorithms are algebraic in both the degree of the polynomial to be factored and the log of the order of the field.
Certain observations on the application of these methods to factorization of polynomials over the rational integers are also included. 
I think he meant what we would call polynomial time when he says algebraic.

2) Combinatorial Games. He co-wrote the classic Winning Ways for Your Mathematical Plays with John Conway and Richard Guy.  These books are like NIM on steroids. They are FUN and have some serious math in them. (My blog system thinks Combinatorial is not a word. It is wrong)

3) Mathematical Go. He was very interested in Go via combinatorial games. I have an article (alas, not online) by him entitled Introductory Overview of Mathematical Go Endgames.  There has been further work on this that is on the web, see here. I wonder what ERB would think of the ALPHAGO program.

4) Berlekamp-Massey Algorithm finds the shortest linear feedback shift register for a given binary output sequence.
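
For concreteness, here is a compact rendition of the algorithm over GF(2) (my own sketch of the standard textbook version, not Berlekamp or Massey's original presentation):

```python
def berlekamp_massey(s):
    """Shortest LFSR (over GF(2)) generating the bit sequence s.
    Returns (L, C): the register length L and the connection polynomial
    C with C[0] = 1, meaning s[i] = C[1]&s[i-1] XOR ... XOR C[L]&s[i-L]."""
    n = len(s)
    C = [1] + [0] * n  # current connection polynomial
    B = [1] + [0] * n  # polynomial from before the last length change
    L, m = 0, 1        # current register length, shifts since last change
    for i in range(n):
        # Discrepancy: does the current LFSR predict s[i] correctly?
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        else:
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]   # C(x) += x^m * B(x)
            if 2 * L <= i:         # the register length must grow
                L, B, m = i + 1 - L, T, 1
            else:
                m += 1
    return L, C[:L + 1]
```

For example, the alternating sequence 0,1,0,1,… is generated by a length-2 register with s[i] = s[i−2], and the algorithm reports exactly that.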

5) Berlekamp-Welch Algorithm efficiently corrects errors in Reed-Solomon codes.  (Berlekamp also wrote a book on Algebraic Coding Theory.)
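
Here is a sketch of Berlekamp-Welch decoding over a small prime field (a textbook rendition I have added for illustration, not the original presentation): given n noisy evaluations of a degree-<k polynomial P with at most (n−k)/2 errors, solve a linear system for Q = P·E and a monic error locator E, then divide.

```python
def gauss_solve(A, b, p):
    """Solve A u = b over GF(p) (free variables set to 0), or None."""
    n, m = len(A), len(A[0])
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    pivots, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)           # modular inverse via Fermat
        M[r] = [v * inv % p for v in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(v - f * w) % p for v, w in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == n:
            break
    if any(M[i][m] for i in range(r, n)):
        return None  # inconsistent system
    u = [0] * m
    for row, c in zip(M, pivots):
        u[c] = row[m]
    return u

def poly_div(num, den, p):
    """Exact polynomial division over GF(p); coefficients low-to-high."""
    num, out = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in range(len(out) - 1, -1, -1):
        out[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - out[i] * d) % p
    return out

def berlekamp_welch(xs, ys, k, p):
    """Recover the degree-<k message polynomial from n = len(xs)
    evaluations ys, at most (n-k)//2 of which are wrong, by solving
    Q(x_i) = y_i * E(x_i) for Q = P*E and a monic error locator E."""
    n = len(xs)
    e = (n - k) // 2
    A, b = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, p) for j in range(k + e)]           # Q coefficients
        row += [(-y * pow(x, j, p)) % p for j in range(e)]   # E coefficients
        A.append(row)
        b.append(y * pow(x, e, p) % p)   # from the monic x^e term of E
    u = gauss_solve(A, b, p)
    if u is None:
        raise ValueError("too many errors")
    return poly_div(u[:k + e], u[k + e:] + [1], p)
```

The key lemma is that Q − P·E has degree below k + e yet vanishes on all n − e uncorrupted points, so any solution of the linear system satisfies Q = P·E identically and the division is exact.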

6) I didn't know about his work in finance until I began writing this, but the following quote from Wikipedia is impressive
In 1989, Berlekamp purchased the largest interest in a trading company named Axcom Trading Advisors. After the firm's futures trading algorithms were rewritten, Axcom's Medallion Fund had a return (in 1990) of 55%, net of all management fees and transaction costs. The fund has subsequently continued to realize annualized returns exceeding 30% under management by James Harris Simons and his Renaissance Technologies Corporation.

by GASARCH at April 11, 2019 10:13 PM UTC

Draw a collection of quadrilaterals in the plane, meeting edge to edge, so that they don’t surround any open space (the region they cover is a topological disk) and every vertex interior to the disk touches at least four quadrilaterals. Is it always possible to color the corners of the quadrilaterals with four colors so that all four colors appear in each quadrilateral?

4-colored kinggraph

The graph of corners and quadrilateral edges is a squaregraph but this question is really about coloring a different and related graph, called a kinggraph, that also includes as edges the diagonals of each quad. It’s called that because one example of this kind of graph is the king’s graph describing the possible moves of a chess king on a chessboard.

The king's graph

The king’s graph, and kinggraphs more generally, are examples of 1-planar graphs, graphs drawn in the plane in such a way that each edge is crossed at most once. The edges of the underlying squaregraph are not crossed at all, and the diagonals of each quad only cross each other. Squaregraphs are bipartite (like every planar graph in which all faces have an even number of edges), so they can be colored with only two colors. 1-planar graphs, in general, can require six colors (for instance you can draw the complete graph K6 as a 1-planar graph by adding diagonals to the squares of a triangular prism) and this is tight. And you can easily 4-color the king’s graph by using two colors in alternation across the even rows of the chessboard, and a different two colors across the odd rows. So the number of colors for kinggraphs should be somewhere between these two extremes, but where?

One useful and general graph coloring method is based on the degeneracy of graphs. This is the largest number d such that every subgraph has a vertex with at most d neighbors; one can use a greedy coloring algorithm to color any graph with d + 1 colors. Kinggraphs themselves always have a vertex with at most three neighbors, but unfortunately they do not have degeneracy three. If you form a king’s graph on a chessboard, and then remove its four corners, you get a subgraph in which all vertices have at least four neighbors. This turns out to be as large as possible: every kinggraph has degeneracy at most four. This is because, if you consider the zones of the system of quads (strips of quads connected on opposite pairs of edges), there always exists an “outer zone”, a zone with nothing on one side of it (see the illustration, below). You can remove the vertices of the outer zone one at a time, in order from one end to the other, always removing a vertex of degree at most four, and then repeat on another outer zone until the whole graph is gone. So the degeneracy and greedy coloring method shows that you can 5-color every kinggraph, better than the 6-coloring that we get for arbitrary 1-planar graphs.

An outer zone
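
The degeneracy argument above is constructive. Here is a generic sketch of the resulting greedy algorithm (my own illustration, not specialized to zones or kinggraphs): compute a removal order by repeatedly deleting a minimum-degree vertex, then color in the reverse of that order.

```python
def greedy_degeneracy_coloring(adj):
    """Color a graph with at most degeneracy + 1 colors.
    adj maps each vertex to the set of its neighbors."""
    # Degeneracy ordering: repeatedly delete a minimum-degree vertex.
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    # Color in reverse order: each vertex then has at most `degeneracy`
    # already-colored neighbors, so some small color is always free.
    coloring = {}
    for v in reversed(order):
        used = {coloring[u] for u in adj[v] if u in coloring}
        c = 0
        while c in used:
            c += 1
        coloring[v] = c
    return coloring
```

On a kinggraph the degeneracy bound above guarantees that this uses at most five colors.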

This turns out to be optimal! For a while I thought that every kinggraph must be 4-colorable, because it was true of all the small examples that I tried. But it’s not true in general, and here’s why. If you look at the zones of the 4-colored kinggraph above, you might notice a pattern. The edges that connect pairs of quads from the same zone have colorings that alternate between two different pairs of colors. For instance, we might have a zone that has red–yellow edges alternating with blue–green edges, or another zone that has red–blue edges alternating with green–yellow edges. This is true whenever a kinggraph can be 4-colored. But there are only three ways of coloring a zone (that is, of partitioning the four colors into two disjoint pairs, which alternate along the zone). And when two zones cross each other, they must be colored differently. So every 4-coloring of a kinggraph turns into a 3-coloring of its zones. But the graph that describes the zones and its crossings is a triangle-free circle graph, and vice versa: every triangle-free circle graph describes a kinggraph. And triangle-free circle graphs may sometimes need as many as five colors, in which case so does the corresponding kinggraph.

I posted an example of a squaregraph whose circle graph needs five colors on this blog in 2008. Here’s a slightly different drawing of the same graph from a later post. Because its circle graph is not 3-colorable, the corresponding kinggraph is not 4-colorable.

A squaregraph whose kinggraph is not 4-colorable

There are simpler examples of squaregraphs whose circle graph needs four colors. As long as the number of colors of the circle graph is more than three, the number of colors of the kinggraph will be more than four.

On the other hand, if you can color the circle graph with three colors, then it’s also possible to translate this 3-coloring of zones back into a 4-coloring of the kinggraph. Just remove an outer zone, color the remaining graph recursively, add the removed zone back, and use the color of the zone you removed to decide which colors to assign to its vertices. Unfortunately, I don’t know the computational complexity of testing whether a circle graph is 3-colorable. A 1992 conference paper by Walter Unger claimed a polynomial-time algorithm, but it did not include enough details and was never published in a journal. I think we have to consider the problem of finding a coloring as still being open.

The same method also leads to an easy calculation of the number of 4-colorings (in the same sense) of the usual kind of chessboard with n × n squares, or of a king’s graph with n × n vertices. In this case, the zones are just the n − 1 rows and n − 1 columns of quads of the chessboard. We can use one color for the rows and two for the columns, or vice versa, so the number of 3-colorings of the zones (accounting for the fact that the 2-colorings get counted twice) is 3·2^(n−1) + 3·2^(n−1) − 6 = 6(2^(n−1) − 1). And once the coloring of the zones is chosen, the coloring of the chessboard itself is uniquely determined by the color of any of its squares, so the total number of chessboard colorings is 24(2^(n−1) − 1).

(Discuss on Mastodon)

by David Eppstein at April 11, 2019 09:28 PM UTC

Martin Farach-Colton asked me to mention this, which is definitely NOT a pox on computer systems. 
ACM-SIAM Algorithmic Principles of Computer Systems (APoCS20), January 8, 2020
Hilton Salt Lake City Center, Salt Lake City, Utah, USA
Colocated with SODA, SOSA, and Alenex 
The First ACM-SIAM APoCS is sponsored by SIAM SIAG/ACDA and ACM SIGACT. 
Important Dates:  
        August 9: Abstract Submission and Paper Registration Deadline
August 16: Full Paper Deadline
October 4: Decision Announcement 
Program Chair: Bruce Maggs, Duke University and Akamai Technologies 
Submissions: Contributed papers are sought in all areas of algorithms and architectures that offer insight into the performance and design of computer systems. Topics of interest include, but are not limited to, algorithms and data structures for:

  • Databases
  • Compilers
  • Emerging Architectures
  • Energy Efficient Computing
  • High-performance Computing
  • Management of Massive Data
  • Networks, including Mobile, Ad-Hoc and Sensor Networks
  • Operating Systems
  • Parallel and Distributed Systems
  • Storage Systems

A submission must report original research that has not been published previously and is not concurrently submitted for publication elsewhere. Manuscripts must not exceed twelve (12) single-spaced double-column pages, in addition to the bibliography and any pages containing only figures. Submissions must be self-contained, and any extra details may be submitted in a clearly marked appendix.
Steering Committee: 

  • Michael Bender
  • Guy Blelloch
  • Jennifer Chayes
  • Martin Farach-Colton (Chair)
  • Charles Leiserson
  • Don Porter
  • Jennifer Rexford
  • Margo Seltzer

by Suresh Venkatasubramanian at April 11, 2019 08:00 PM UTC

The EATCS has just announced that Thomas Henzinger is the EATCS Award 2019 recipient for "fundamental contributions to the theory and practice of formal verification and synthesis of reactive, real-time, and hybrid computer systems, and to the application of formal methods to biological systems." Congratulations to  the award committee---consisting of Artur Czumaj, Marta Kwiatkowska and Christos Papadimitriou (chair)---for their great choice, to Tom for a very well deserved award and to the TCS community at large.

Of course, Tom Henzinger needs no introduction. However, let me use this post to provide a bird's eye view of his career and of some of his many contributions to TCS, which would be enough for a good number of very successful research careers. The text below is largely due to Jean-Francois Raskin.

Biographical sketch Thomas A. Henzinger is the President of IST Austria (Institute of Science and Technology Austria), which, under his leadership, has quickly become one of the most impactful research institutes in the world. Before joining IST as its first president, Tom was Assistant Professor of Computer Science at Cornell University (1992-95), Assistant Professor (1996-97), Associate Professor (1997-98) and Professor (1998-2004) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was also the Director of the Max-Planck Institute for Computer Science in Saarbruecken, Germany (1999) and Professor of Computer and Communication Sciences at EPFL in Lausanne, Switzerland (2004-09).

Tom is an ISI highly cited researcher and his h-index is 103, according to Google Scholar. He is a member of Academia Europaea, a member of the German Academy of Sciences (Leopoldina), a member of the Austrian Academy of Sciences, a Fellow of the AAAS, a Fellow of the EATCS, a Fellow of the ACM, and a Fellow of the IEEE. He was the recipient of the Milner Award of the Royal Society in 2015, of the Wittgenstein Award of the Austrian Science Fund and was granted an ERC Advanced Investigator Grant in 2010. He received the SIGPLAN POPL Most Influential Paper Award (2014), Logic in Computer Science Test-of-Time Award (2012), ACM SIGSOFT Impact Paper Award (2011), and Best Paper awards at SIGSOFT FSE and CONCUR.

Main scientific achievements

Tom's research focuses on modern systems theory, especially models, algorithms, and tools for the design and verification of reliable software, hardware and embedded systems. His HyTech tool was the first model checker for mixed discrete-continuous systems.

Tom has made a large number of fundamental contributions to theoretical computer science. Here I will limit myself to mentioning a small number of research areas where he has been particularly prolific and influential.
  • The theory of timed and hybrid systems. Tom has defined and studied the expressiveness and the algorithmic complexity of several real-time extensions of temporal logics. He is one of the most important contributors to the theory of hybrid automata and to algorithms for the analysis of such models. His papers on the subject are among the most cited in the field of computer-aided verification. As an example, his paper “Symbolic model checking for real-time systems” received the LICS Test-of-Time Award in 2012. He has also studied the gap between mathematical models of systems (e.g. hybrid automata) and their implementation on physical hardware. To bridge this gap, he developed models of physical platforms, such as Giotto and E-machines, and devised ways to relate their semantics to that of the abstract mathematical models.
  • Games for verification and control. Tom has introduced the logic ATL*, which extends LTL and CTL* with the ability to reason about strategies that can be played by coalitions of agents/components in models of multi-agent/component systems. He has contributed to the understanding of the potential of adopting concepts from game theory for modeling and reasoning about open systems. He has contributed to a large number of algorithmic advances for solving game-graph problems and to a better understanding of their computational complexity.
  • From Boolean models to quantitative models for verification and synthesis. Tom has recently investigated how to shift from Boolean models to quantitative models. This research proposes quantitative generalizations of the paradigms that have been successful in reactive modeling, such as compositionality, property-preserving abstraction, model checking and synthesis. With those models, we can specify and reason about quantitative aspects, such as resource consumption, or compare the performance of different design solutions in embedded systems. He has obtained substantial funding from the European Research Council to proceed along this promising line of research.
  • Foundations of software model checking. Tom has contributed substantially to the algorithms underlying the analysis of software systems by introducing the concept of lazy abstraction. Those ideas have been implemented in the highly influential tool Blast. This line of work was honoured with the Most Influential POPL Paper Award (for his 2004 POPL paper), which he received in 2014.
  • Computational modelling of biological systems. Tom and his coworkers have shown that computational models are well suited to model the dynamics of biological systems. This is part of a broader research program that has the objective to show that concepts introduced to formalize reactive systems are helpful to model and reason about biological systems.
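To give a flavour of the ATL* style mentioned in the games bullet above, here is a generic textbook-style formula (an illustration of the logic's syntax, not a formula taken from Tom's papers):

```latex
% ATL* strategic operator: coalition A has a joint strategy that
% keeps the system out of the failure state forever, no matter
% how the remaining agents behave.
\langle\langle A \rangle\rangle \, \mathbf{G} \, \neg \mathit{fail}
```

In LTL one can only say "fail never happens on all runs"; the coalition operator ⟨⟨A⟩⟩ instead asserts that the agents in A can *enforce* this outcome against any behaviour of the other components, which is what makes the logic suitable for open systems.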
Those important theoretical contributions have always been developed with relevant practical applications in mind. Consequently, Thomas Henzinger has not only worked on the foundations of our subject, but he also transferred his theoretical ideas into practice by developing tools and by suggesting methodologies that could be used in applying his theoretical results.

Tom is a research leader who has had a profound influence on his PhD students and post-docs. To wit, several of his former students are now well-recognized researchers who enrich the life of our research community and lead successful research groups.

Addendum 12 April 2019: In case you are interested, you can find a short interview I had with Tom Henzinger a while back here.

by Luca Aceto ( at April 11, 2019 06:33 PM UTC

September 9-12, 2019 York, UK Registration deadline: May 26, 2019 This workshop will bring together AI and SE researchers from the UK and Russia (and potentially other countries) to discuss applications of AI to SE, specifically ML and NLP, and develop future directions of research in this emerging area. Application examples include program analysis … Continue reading UK-Russia Workshop on AI for Software Engineering

by shacharlovett at April 11, 2019 10:12 AM UTC

September 8-11, 2019 Swansea, Wales, UK Registration deadline: July 1, 2019 The aim of the Proof Society Summer School is to cover basic and advanced topics in proof theory. The focus of the second edition will be on philosophy of proof theory, proof theory of impredicative theories, structural proof theory, proof mining, reverse mathematics, … Continue reading 2nd Summer School on Proof Theory

by shacharlovett at April 11, 2019 09:01 AM UTC

July 21-27, 2019 Tübingen, Germany Registration deadline: April 30, 2019 The summer school is addressed at students of the subjects mathematics, philosophy and computer science, preferably undergraduate students in their final year and graduate students.

by shacharlovett at April 10, 2019 02:55 PM UTC

The next TCS+ talk will take place this coming Wednesday, April 17th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 18:00 Central European Time, 17:00 UTC). Thatchaphol Saranurak from TTI Chicago will speak about “Breaking Quadratic Time for Small Vertex Connectivity ” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Vertex connectivity is a classic, extensively studied problem. Given an integer k, its goal is to decide if an n-node m-edge graph can be disconnected by removing k vertices. Although an O(m)-time algorithm has been conjectured since 1974 [Aho, Hopcroft, and Ullman], and despite its sibling problem of edge connectivity being resolved over two decades ago [Karger STOC’96], so far no vertex connectivity algorithm is faster than O(n^2) time, even for k = 4 and m = O(n). In the simplest case, where m = O(n) and k = O(1), the O(n^2) bound dates back five decades to [Kleitman IEEE Trans. Circuit Theory’69]. For the general case, the best bound is \tilde{O}(\min\{kn^2, n^\omega + nk^\omega\}) [Henzinger, Rao, Gabow FOCS’96; Linial, Lovász, Wigderson FOCS’86].
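To make the problem statement concrete, here is a minimal brute-force checker in Python. It illustrates only the problem definition, not the algorithm in the talk: it tries every vertex set of size at most k, so it runs in time exponential in k and is hopeless beyond toy graphs.

```python
from itertools import combinations

def is_connected(adj, removed):
    """Is the graph connected after deleting the vertices in `removed`?
    `adj` maps each vertex to a list of its neighbours."""
    nodes = [v for v in adj if v not in removed]
    if len(nodes) <= 1:
        return True  # 0 or 1 remaining vertices: trivially connected
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:  # DFS from an arbitrary surviving vertex
        u = stack.pop()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def can_disconnect(adj, k):
    """Can the graph be disconnected by removing at most k vertices?"""
    for r in range(1, k + 1):
        for cut in combinations(adj, r):
            if not is_connected(adj, set(cut)):
                return True
    return False
```

For example, on a 4-cycle (vertex connectivity 2), `can_disconnect` returns False for k = 1 and True for k = 2. The talk's point is precisely that doing substantially better than such exhaustive or flow-based approaches — ideally near-linear time for small k — was open for decades.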

In this talk, I will present a randomized Monte Carlo algorithm with \tilde{O}(m + k^3 n) running time. This algorithm proves the conjecture by Aho, Hopcroft, and Ullman up to a polylog factor when k = O(1), breaks the 50-year-old bound by Kleitman, and is the fastest known for 4 < k < n^{0.456}. The story is similar for directed graphs, where we obtain an algorithm with running time at most \tilde{O}(k^2 m).

The key to our results is to avoid computing single-source connectivity, which was needed by all previous exact algorithms and is not known to admit an o(n^2)-time algorithm. Instead, we design the first local algorithm for computing vertex connectivity: without reading the whole graph, our algorithm can find a separator of size at most k, or certify that there is no separator of size at most k “near” a given seed node.

This talk is based on joint works with Danupon Nanongkai and Sorrachai Yingchareonthawornchai.

by plustcs at April 10, 2019 11:19 AM UTC

[Guest post from Virginia Vassilevska Williams –Boaz]

Barna Saha, Sofya Raskhodnikova and I are organizing the second annual TCS Women event at STOC’19. We had an event at STOC’18 and it went really well. We envision an exciting program for the TCS Women event at STOC 2019; the details are forthcoming. The new TCS Women website is here; there you can learn more about our initiative.

In addition to the event, we have secured generous funding (similar to last year) from the NSF and industrial partners such as Akamai, Amazon, Google and Microsoft for travel grants for women and underrepresented minority students and postdocs to attend STOC. These grants are separate from the STOC travel awards and you can apply to both.

The application deadline for the TCS Women travel scholarship is April 25th, 2019. The information on how to apply is available here. We hope to support as many women and underrepresented-minority students and postdocs as possible from all over the globe to come to STOC and FCRC. The participants will also have the opportunity to present their work at the STOC poster session.
If you are aware of eligible students (not only PhD) who are interested in attending STOC, please encourage them to apply.

Best wishes,

Virginia on behalf of the TCS Women organizers

by windowsontheory at April 10, 2019 02:24 AM UTC