Theory of Computing Blog Aggregator

(The following blog post serves as an introduction to a set of notes:)

Black Holes, Hawking Radiation, and the Firewall

There are many different types of “theoretical physicists.” There are theoretical astrophysicists, theoretical condensed matter physicists, and even theoretical biophysicists. However, the general public seems to be most interested in the exploits of what you might call “high energy theorists.” (Think Stephen Hawking.)

The holy grail for theoretical high energy physicists (who represent only a small fraction of all physicists) would be to find a theory of quantum gravity. As it stands now, physicists have two theories of nature: quantum field theory (or, more specifically, the “Standard Model”) and Einstein’s theory of general relativity. Quantum field theory describes elementary particles, like electrons, photons, quarks, gluons, etc. General relativity describes the force of gravity, which is really just a consequence of the curvature of spacetime.

Sometimes people like to say that quantum field theory describes “small stuff” like particles, while general relativity describes “big stuff” like planets and stars. This is maybe not the best way to think about it, though, because planets and stars are ultimately just made out of a bunch of quantum particles.

Theoretical physicists are unhappy with having two theories of nature. In order to describe phenomena that depend on both quantum field theory and general relativity, the two theories must be combined in an “ad hoc” way. A so-called “theory of everything,” another name for the currently unknown theory of “quantum gravity,” would hypothetically be able to describe all the phenomena we know about. Just so we’re all on the same page: physicists don’t even have a fully worked out hypothesis for what this theory is. (“String theory,” a popular candidate, is still not even a complete “theory” in the normal sense of the word, although it could become one eventually.)

So what should these high energy theoretical physicists do if they want to discover what this theory of quantum gravity is? For the time being, nobody can think up an experiment that would be sensitive to quantum gravitational effects and is also feasible with current technology. We are limited to so-called “thought experiments.”

This brings us to Hawking radiation. In the 1970s, Stephen Hawking considered what would happen to a black hole once quantum field theory was properly taken into account. (Of course, this involved a bit of “ad hoc” reasoning, as mentioned previously.) Hawking found that, much to everybody’s surprise, the black hole evaporates, releasing energy in the form of “Hawking radiation” (mostly low energy photons). More strangely, this radiation comes out exactly in the spectrum you would expect from something “hot.” For example, imagine heating a piece of metal. At low temperatures, it emits low energy photons invisible to the human eye. Once it gets hotter, it glows red, then yellow, then perhaps eventually blue. The spectrum of light emitted follows a very specific pattern. Amazingly, Hawking found that the radiation which black holes emit follows the exact same pattern. By analogy, they have a temperature too!
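To make “black holes have a temperature” concrete, here is a back-of-the-envelope numerical sketch. The formula is Hawking’s standard temperature formula for a Schwarzschild black hole; the rounded constants and the solar-mass example are mine, purely for illustration.

```python
# Hawking temperature of a Schwarzschild black hole:
#   T = hbar * c^3 / (8 * pi * G * M * k_B)
# All values in SI units, rounded.
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
c    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # gravitational constant, m^3/(kg*s^2)
k_B  = 1.381e-23    # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    """Temperature (in kelvin) of the thermal spectrum emitted by a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

M_sun = 1.989e30  # kg
# A solar-mass black hole is absurdly cold (~6e-8 K), far colder than the
# cosmic microwave background; lighter black holes are hotter.
print(hawking_temperature(M_sun))
```

Note the inverse dependence on mass: as the black hole radiates and shrinks, it gets hotter, which is why evaporation runs away at the end.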

This is more profound than you might realize, because things which have a temperature should also have an “entropy.” You see, there are two notions of “state” in physics: “microstates” and “macrostates.” A microstate gives you the complete physical information of what comprises a physical system. For example, imagine you have a box of gas, which contains many particles moving in a seemingly random manner. A “microstate” of this box would be a list of all the positions and momenta of every last particle in that box. This would be impossible to measure in practice. A “macrostate,” on the other hand, is a set of microstates. You may not know the exact microstate your box of gas is in, but you can measure macroscopic quantities (like the total internal energy, volume, and particle number) and consider the set of all possible microstates with those measured quantities.

The “entropy” of a macrostate is the logarithm of the number of possible microstates. If black holes truly are thermodynamic systems with some entropy, that means there should be some “hidden variables” or microstates to the black hole that we currently don’t understand. Perhaps if we understood the microstates of the black hole, we would be much closer to understanding quantum gravity!
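The “entropy = log of the number of microstates” definition is easy to play with numerically. Here is a toy sketch: the two-state “particles” below stand in for the box of gas and are my own invented example, not anything specific to black holes.

```python
# Entropy as log(number of microstates), for a toy "gas" of N two-state particles.
# The macrostate fixes only how many particles are excited (k of N); the
# microstates are the C(N, k) ways of choosing *which* particles are excited.
import math

def entropy(N, k):
    """Boltzmann entropy S = log W for the macrostate with k of N particles excited."""
    W = math.comb(N, k)  # number of microstates compatible with this macrostate
    return math.log(W)

# The half-excited macrostate has by far the most microstates, hence the most entropy.
print([round(entropy(100, k), 1) for k in (0, 25, 50)])
```

The k = 0 macrostate has exactly one microstate, so its entropy is zero; entropy peaks at k = N/2, where the number of compatible microstates is largest.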

However, Hawking also discovered something else. Because the black hole is radiating out energy, its mass will actually decrease as time goes on. Eventually, it should disappear entirely. This means that the information of what went into the black hole will be lost forever.

Physicists did not like this, however, because it seemed to them that the information of what went into the black hole should never be lost. Many physicists believe that the information of what went into the black hole should somehow be contained in the outgoing Hawking radiation, although they do not currently understand how. According to Hawking’s original calculation, the Hawking radiation depends only on a few parameters of the black hole (like its mass) and has nothing to do with the finer details of what went in, the exact “microstate” of what went in.

However, physicists eventually realized a problem with the idea that the black hole releases its information in the form of outgoing Hawking radiation. The problem has to do with quantum mechanics. In quantum mechanics, it is impossible to clone a qubit. That means that if you threw a qubit into a black hole and then waited for it to eventually come out in the form of Hawking radiation, then the qubit could no longer be “inside” the black hole. However, if Einstein is to be believed, you should also be able to jump into the black hole and see the qubit on the inside. This seems to imply that the qubit is cloned, as it is present on both the inside and outside of the black hole.
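The no-cloning obstruction is really just linearity, and can be checked with a few lines of arithmetic. A sketch (the state vectors and labels below are my own illustration): suppose some linear map CLONE sends |0⟩ to |00⟩ and |1⟩ to |11⟩; linearity then dictates what it does to a superposition, and that output differs from a true clone.

```python
# Why linearity forbids universal cloning: if CLONE|0> = |00> and CLONE|1> = |11>,
# then applied to |+> = (|0> + |1>)/sqrt(2), linearity forces (|00> + |11>)/sqrt(2).
# But a genuine clone would be |+>|+> = (|00> + |01> + |10> + |11>)/2.
import math

s = 1 / math.sqrt(2)

# States written as amplitude vectors over the basis |00>, |01>, |10>, |11>.
cloned_by_linearity = [s, 0.0, 0.0, s]      # what linearity forces CLONE(|+>) to be
true_clone          = [0.5, 0.5, 0.5, 0.5]  # |+> tensor |+>

# Inner product of the two states: 1 would mean they are the same state.
overlap = sum(a * b for a, b in zip(cloned_by_linearity, true_clone))
print(overlap)  # 1/sqrt(2), not 1, so no linear map can clone every state
```

So a single hypothetical cloning device already contradicts the linear structure of quantum mechanics, which is why having the qubit both inside and outside the black hole is a genuine paradox rather than a curiosity.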

Physicists eventually came up with a strange fix called “Black Hole Complementarity” (BHC). According to BHC, for observers who remain outside the black hole, the interior simply does not exist, while for observers who have entered the black hole, the outside ceases to exist. Both descriptions of the world are “correct” because once someone has entered the black hole, they will be unable to escape and compare notes with the person on the outside.

Of course, it must be emphasized that BHC remains highly hypothetical. People have been trying to poke holes in it for a long time. The largest hole is the so-called “Firewall Paradox,” first proposed in 2012. Essentially, the Firewall Paradox tries to show that the paradigm of BHC is self-contradictory. In fact, the argument uses basic quantum mechanics to show that, under some reasonable assumptions, the interior of the black hole truly doesn’t exist, and that anyone who tries to enter would be fried at an extremely hot “firewall!” Now, I don’t think most physicists actually believe that black holes really have a firewall (although this might depend on what day of the week you ask them). The interesting thing about the Firewall Paradox is that it derives a seemingly crazy result from seemingly harmless starting suppositions. So these suppositions would have to be tweaked in the true theory of quantum gravity in order to get rid of the firewall.

This is all to say that all this thinking about black holes really might help physicists figure out something about quantum gravity. (Then again, who can really say for sure?)

If you would like to know more about the Firewall paradox, I suggest you read my notes, pasted at the top of this post!

The goal of the notes was to write an introduction to the Black Hole Information Paradox and Firewall Paradox that could be read by computer scientists with no physics background.

The structure of the notes goes as follows:

  1. Special relativity
  2. General relativity
  3. Quantum Field Theory (in which I ambitiously tell you what QFT actually is!)
  4. Statistical Mechanics (this is the best part)
  5. Hawking radiation
  6. The information paradox

Because the information paradox touches on all areas of physics, I thought it was necessary to take a “zero background, infinite intelligence” approach, introducing all the necessary branches of physics (GR, QFT, Stat Mech) in order to understand what the deal with Hawking radiation really is, and why physicists think it is so important. I think it is safe to say that if you read these notes, you’ll learn a non-trivial amount of physics.

by noahmiller5490 at January 20, 2019 06:00 AM UTC

Authors: Yulun Tian, Kasra Khosoussi, Jonathan P. How
Download: PDF
Abstract: Inter-robot loop closure detection, e.g., for collaborative simultaneous localization and mapping (CSLAM), is a fundamental capability for many multirobot applications in GPS-denied regimes. In real-world scenarios, this is a resource-intensive process that involves exchanging observations and verifying potential matches. This poses severe challenges especially for small-size and low-cost robots with various operational and resource constraints that limit, e.g., energy consumption, communication bandwidth, and computation capacity. This paper presents resource-aware algorithms for distributed inter-robot loop closure detection. In particular, we seek to select a subset of potential inter-robot loop closures that maximizes a monotone submodular performance metric without exceeding computation and communication budgets. We demonstrate that this problem is in general NP-hard, and present efficient approximation algorithms with provable performance guarantees. A convex relaxation scheme is used to certify near-optimal performance of the proposed framework in real and synthetic SLAM benchmarks.
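The greedy template for monotone submodular maximization under a budget, which the abstract’s approximation algorithms build on, can be sketched as follows. The coverage objective, candidate names, and costs below are invented toy data; the paper’s actual algorithms handle both computation and communication budgets and come with the stated guarantees, none of which this sketch reproduces.

```python
# Budgeted greedy selection for a monotone submodular objective, illustrated with
# set coverage: repeatedly add the feasible candidate with the largest marginal gain.

def greedy_coverage(candidates, costs, budget):
    """Pick candidates maximizing the number of covered elements without exceeding the budget."""
    chosen, covered, spent = [], set(), 0
    while True:
        best, best_gain = None, 0
        for name, elems in candidates.items():
            if name in chosen or spent + costs[name] > budget:
                continue
            gain = len(elems - covered)  # marginal gain; submodularity means it only shrinks
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:          # nothing feasible improves coverage
            return chosen, covered
        chosen.append(best)
        covered |= candidates[best]
        spent += costs[best]

# Toy instance: "d" covers everything but is too expensive for the budget of 3.
cands = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2, 3, 4, 5}}
costs = {"a": 2, "b": 1, "c": 1, "d": 4}
print(greedy_coverage(cands, costs, 3))
```

For plain cardinality constraints this greedy rule is the classic (1 − 1/e)-approximation; with general costs one typically also compares against the best single feasible element, a refinement omitted here for brevity.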

at January 19, 2019 11:23 PM UTC

Authors: Ahad N. Zehmakan
Download: PDF
Abstract: Assume that you are given a graph $G=(V,E)$ with an initial coloring, where each node is black or white. Then, in discrete-time rounds all nodes simultaneously update their color following a predefined deterministic rule. This process is called two-way $r$-bootstrap percolation, for some integer $r$, if a node with at least $r$ black neighbors gets black and white otherwise. Similarly, in two-way $\alpha$-bootstrap percolation, for some $0<\alpha<1$, a node gets black if at least $\alpha$ fraction of its neighbors are black, and white otherwise. The two aforementioned processes are called respectively $r$-bootstrap and $\alpha$-bootstrap percolation if we require that a black node stays black forever. For each of these processes, we say a node set $D$ is a dynamic monopoly whenever the following holds: If all nodes in $D$ are black then the graph gets fully black eventually. We provide tight upper and lower bounds on the minimum size of a dynamic monopoly.
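The update rule in this abstract is easy to simulate directly. A minimal sketch of two-way r-bootstrap percolation (the 4-cycle example and the round cap are my own choices; two-way processes can oscillate, hence the cap):

```python
# Two-way r-bootstrap percolation: in each synchronous round, every node becomes
# black iff it currently has at least r black neighbors, and white otherwise.

def two_way_r_bootstrap(adj, black, r, rounds=50):
    """Iterate the update rule from the given black set; return the final black set."""
    black = set(black)
    for _ in range(rounds):
        new_black = {v for v in adj if sum(u in black for u in adj[v]) >= r}
        if new_black == black:  # reached a fixed point
            break
        black = new_black
    return black

# A 4-cycle: with r = 1, two adjacent black nodes form a dynamic monopoly
# (the whole graph eventually turns black); with r = 2 they die out.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(two_way_r_bootstrap(cycle, {0, 1}, r=1))
print(two_way_r_bootstrap(cycle, {0, 1}, r=2))
```

Note the contrast with the one-way variant, where black nodes stay black forever; in the two-way process a seed set can shrink, which is exactly what makes its dynamic monopolies harder to characterize.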

at January 19, 2019 11:23 PM UTC

Authors: Arkadev Chattopadhyay, Meena Mahajan, Nikhil Mande, Nitin Saurabh
Download: PDF
Abstract: We demonstrate a lower bound technique for linear decision lists, which are decision lists where the queries are arbitrary linear threshold functions. We use this technique to prove an explicit lower bound by showing that any linear decision list computing the function $MAJ \circ XOR$ requires size $2^{0.18 n}$. This completely answers an open question of Tur{\'a}n and Vatan [FoCM'97]. We also show that the spectral classes $PL_1, PL_\infty$, and the polynomial threshold function classes $\widehat{PT}_1, PT_1$, are incomparable to linear decision lists.

at January 19, 2019 11:21 PM UTC

Authors: Tomoyuki Yamakami
Download: PDF
Abstract: We focus our attention onto polynomial-time sub-linear-space computation for decision problems, which are parameterized by size parameters $m(x)$, where the informal term "sub linear" means a function of the form $m(x)^{\varepsilon}\cdot polylog(|x|)$ on input instances $x$ for a certain absolute constant $\varepsilon\in(0,1)$ and a certain polylogarithmic function $polylog(n)$. The parameterized complexity class PsubLIN consists of all parameterized decision problems solvable simultaneously in polynomial time using sub-linear space. This complexity class is associated with the linear space hypothesis. There are no known inclusion relationships between PsubLIN and para-NL (nondeterministic log-space class), where the prefix "para-" indicates the natural parameterization of a given complexity class. Toward circumstantial evidence for the inclusions and separations of the associated complexity classes, we seek their relativizations. However, the standard relativization of Turing machines is known to violate the relationships of L$\subseteq$NL=co-NL$\subseteq$DSPACE[O($\log^2{n}$)]$\cap$P. We instead consider special oracles, called NL-supportive oracles, which guarantee these relationships in the corresponding relativized worlds. This paper vigorously constructs such NL-supportive oracles that generate relativized worlds where, for example, para-L$\neq$para-NL$\nsubseteq$PsubLIN and para-L$\neq$para-NL$\subseteq$PsubLIN.

at January 19, 2019 11:20 PM UTC

Authors: Nikolaj Tatti, Pauli Miettinen
Download: PDF
Abstract: Boolean matrix factorization is a natural and a popular technique for summarizing binary matrices. In this paper, we study a problem of Boolean matrix factorization where we additionally require that the factor matrices have the consecutive ones property (OBMF). A major application of this optimization problem comes from graph visualization: standard techniques for visualizing graphs are circular or linear layout, where nodes are ordered in a circle or on a line. A common problem with visualizing graphs is clutter due to too many edges. The standard approach to deal with this is to bundle edges together and represent them as a ribbon. We also show that we can use OBMF for edge bundling combined with circular or linear layout techniques.

We demonstrate that not only is this problem NP-hard but we cannot have a polynomial-time algorithm that yields a multiplicative approximation guarantee (unless P = NP). On the positive side, we develop a greedy algorithm where at each step we look for the best 1-rank factorization. Since even obtaining a 1-rank factorization is NP-hard, we propose an iterative algorithm where we fix one side and find the other, reverse the roles, and repeat. We show that this step can be done in linear time using pq-trees. We also extend the problem to the cyclic ones property and to symmetric factorizations. Our experiments show that our algorithms find high-quality factorizations and scale well.
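The fix-one-side/alternate heuristic for a single rank-1 Boolean factor can be sketched as follows, leaving out the consecutive-ones constraint and the pq-tree machinery that the paper actually uses. The ±1 scoring below is one common choice (a covered 1 earns +1, a covered 0 costs −1), not necessarily the paper’s objective.

```python
# Alternating heuristic for a rank-1 Boolean factor of a 0/1 matrix:
# fix the column set, choose the rows with positive net score; then fix the rows
# and re-choose the columns; repeat.

def best_side(matrix, fixed, n):
    """Given the fixed index set on one side, pick indices on the other side
    whose net score (+1 per covered 1, -1 per covered 0) is positive."""
    return {i for i in range(n)
            if sum(1 if matrix[i][j] else -1 for j in fixed) > 0}

def rank1_bmf(matrix, iters=10):
    n, m = len(matrix), len(matrix[0])
    cols = set(range(m))  # start with every column in the factor
    transpose = [[matrix[i][j] for i in range(n)] for j in range(m)]
    for _ in range(iters):
        rows = best_side(matrix, cols, n)     # fix columns, choose rows
        cols = best_side(transpose, rows, m)  # fix rows, choose columns
    return rows, cols

A = [[1, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
print(rank1_bmf(A))  # recovers the 2x2 block of ones as the rank-1 factor
```

On this toy matrix the heuristic converges in one round; in general it only reaches a local optimum, which is why the paper wraps it in a greedy outer loop over successive rank-1 factors.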

at January 19, 2019 11:22 PM UTC

Authors: Hanan ElNaghy, Leo Dorst
Download: PDF
Abstract: We propose to employ scale spaces of mathematical morphology to hierarchically simplify fracture surfaces of complementarily fitting archaeological fragments. This representation preserves contact and is insensitive to different kinds of abrasion affecting the exact complementarity of the original fragments. We present a pipeline for morphologically simplifying fracture surfaces, based on their Lipschitz nature; its core is a new embedding of fracture surfaces to simultaneously compute both closing and opening morphological operations, using distance transforms.

at January 19, 2019 11:25 PM UTC

Authors: James Allen Fill, Daniel Q. Naiman
Download: PDF
Abstract: We present, (partially) analyze, and apply an efficient algorithm for the simulation of multivariate Pareto records. A key role is played by minima of the record-setting region (we call these generators) each time a new record is generated, and two highlights of our work are (i) efficient dynamic maintenance of the set of generators and (ii) asymptotic analysis of the expected number of generators at each time.

at January 19, 2019 11:23 PM UTC

Authors: James Allen Fill, Daniel Q. Naiman
Download: PDF
Abstract: For iid $d$-dimensional observations $X^{(1)}, X^{(2)}, \ldots$ with independent Exponential$(1)$ coordinates, consider the boundary (relative to the closed positive orthant), or "frontier", $F_n$ of the closed Pareto record-setting (RS) region \[ \mbox{RS}_n := \{0 \leq x \in {\mathbb R}^d: x \not\prec X^{(i)}\ \mbox{for all $1 \leq i \leq n$}\} \] at time $n$, where $0 \leq x$ means that $0 \leq x_j$ for $1 \leq j \leq d$ and $x \prec y$ means that $x_j < y_j$ for $1 \leq j \leq d$. With $x_+ := \sum_{j = 1}^d x_j$, let \[ F_n^- := \min\{x_+: x \in F_n\} \quad \mbox{and} \quad F_n^+ := \max\{x_+: x \in F_n\}, \] and define the width of $F_n$ as \[ W_n := F_n^+ - F_n^-. \] We describe typical and almost sure behavior of the processes $F^+$, $F^-$, and $W$. In particular, we show that $F^+_n \sim \ln n \sim F^-_n$ almost surely and that $W_n / \ln \ln n$ converges in probability to $d - 1$; and for $d \geq 2$ we show that, almost surely, the set of limit points of the sequence $W_n / \ln \ln n$ is the interval $[d - 1, d]$.

We also obtain modifications of our results that are important in connection with efficient simulation of Pareto records. Let $T_m$ denote the time that the $m$th record is set. We show that $\tilde{F}^+_m \sim (d! m)^{1/d} \sim \tilde{F}^-_m$ almost surely and that $W_{T_m} / \ln m$ converges in probability to $1 - d^{-1}$; and for $d \geq 2$ we show that, almost surely, the sequence $W_{T_m} / \ln m$ has $\liminf$ equal to $1 - d^{-1}$ and $\limsup$ equal to $1$.
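To get a feel for Pareto records, one can simulate them directly. This is a naive record counter under the abstract’s model (iid Exponential(1) coordinates, a point is a record iff no earlier point strictly dominates it in every coordinate), not the authors’ efficient generator-based algorithm.

```python
# Count Pareto records among n iid points in d dimensions with Exponential(1)
# coordinates. Naive O(n^2) check, for illustration only.
import random

def count_records(n, d, seed=0):
    rng = random.Random(seed)
    points, records = [], 0
    for _ in range(n):
        x = [rng.expovariate(1.0) for _ in range(d)]
        # x is a record iff no earlier point beats it in every coordinate
        if not any(all(p[j] > x[j] for j in range(d)) for p in points):
            records += 1
        points.append(x)
    return records

print(count_records(1000, d=2))
```

Unlike the classical one-dimensional case, where the number of records grows only logarithmically, higher dimensions produce many more records, which is what makes efficient simulation (the subject of the companion paper above) a nontrivial problem.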

at January 19, 2019 11:23 PM UTC

Authors: Bernardo Lopo Tavares
Download: PDF
Abstract: Graphs are interesting structures: extremely useful to depict real-life problems, extremely easy to understand given a sketch, extremely complicated to represent formally, extremely complicated to compare. Phylogeny is the study of the relations between biological entities. From it, the interest in comparing tree graphs grew more than in other fields of science. Since there is no definitive way to compare them, multiple distances have been formalized over the years since the early sixties, when the first effective numerical method to compare dendrograms was described. This work consists of formalizing, completing (with original work), and giving a universal notation to analyze and compare the discriminatory power and time complexity of computing the thirteen metrics formalized here. We also present a new way to represent tree graphs, reach deeper into the details of the Geodesic Distance, and discuss its worst-case time complexity in a suggested implementation. Our contribution ends up as a clean, valuable resource for anyone looking for an introduction to comparative metrics for tree graphs.

at January 19, 2019 11:23 PM UTC

UT Austin invites applications for a Postdoctoral Fellow in theoretical computer science for the 2019-20 academic year to work with David Zuckerman. Research interests should overlap with his: pseudorandomness, computational complexity, coding theory, and more. Applications will be considered until the position is filled, but review of applicants will begin on February 5.


by shacharlovett at January 18, 2019 04:52 PM UTC

From the SIGACT executive committee:

The deadlines to submit nominations for the Gödel Prize, Knuth Prize, and SIGACT Distinguished Service Award are coming soon. Calls for nominations for all three awards can be found at the links below. Note that March 1 is now the permanent deadline for SIGACT Distinguished Service Award nominations, this year and in future years.

  • Gödel Prize: deadline February 15, 2019
  • Knuth Prize: deadline February 15, 2019
  • SIGACT Distinguished Service Award: deadline March 1, every year (including 2019)
    Those who intend to submit a nomination for the Distinguished Service Award are strongly encouraged to inform the Selection Committee Chair at least two weeks in advance.

by shuchic at January 18, 2019 02:37 PM UTC

by robertkleinberg at January 18, 2019 03:35 AM UTC

An orientation of an undirected graph is the directed graph that you get by assigning a direction to each edge. Several kinds of orientations have been studied. For instance, in a graph with even vertex degrees, an Eulerian orientation makes the numbers of incoming and outgoing degrees equal at each vertex. In a bridgeless graph, a strong orientation makes the resulting directed graph strongly connected. In finite connected graphs, every Eulerian orientation is strong, but that is untrue in infinite graphs. Consider, for instance, the graph of unit distances on the integers, which is Eulerian (every vertex has degree two) but has no strong orientation (every edge is a bridge). Even when a strong orientation exists, an Eulerian orientation might not be strong: the graph of distance-1 and distance-2 integers, with the orientation from smaller to larger numbers, is Eulerian but not strong. So when does an Eulerian strong orientation exist? The answer turns out to be: whenever the obvious obstacles are not present. Every bridgeless connected even-degree infinite graph has an Eulerian strong orientation.
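The finite fact quoted above, that in finite connected graphs every Eulerian orientation is strong, can be checked directly on a small example: orient a connected even-degree graph along an Euler circuit (via Hierholzer's algorithm) and verify that the result is strongly connected. The choice of K5 as the test graph is arbitrary.

```python
# Orient a connected even-degree multigraph along an Euler circuit and check
# that the resulting orientation is strongly connected.
from collections import defaultdict

def euler_orientation(edges):
    """Orient edges along an Euler circuit (Hierholzer); assumes the graph is
    connected with all degrees even."""
    adj = defaultdict(list)
    for k, (u, v) in enumerate(edges):
        adj[u].append((v, k))
        adj[v].append((u, k))
    used, path, stack = set(), [], [edges[0][0]]
    while stack:
        u = stack[-1]
        while adj[u] and adj[u][-1][1] in used:
            adj[u].pop()                  # discard already-traversed edges
        if adj[u]:
            v, k = adj[u].pop()
            used.add(k)
            stack.append(v)
        else:
            path.append(stack.pop())
    path.reverse()
    return list(zip(path, path[1:]))      # consecutive circuit vertices = arcs

def is_strong(arcs, vertices):
    """Every vertex reachable from every other (fine for tiny graphs)."""
    out = defaultdict(set)
    for u, v in arcs:
        out[u].add(v)
    def reach(s):
        seen, frontier = {s}, [s]
        while frontier:
            for y in out[frontier.pop()]:
                if y not in seen:
                    seen.add(y)
                    frontier.append(y)
        return seen
    return all(reach(s) == set(vertices) for s in vertices)

K5_edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]  # K5: all degrees 4
arcs = euler_orientation(K5_edges)
print(is_strong(arcs, range(5)))  # True
```

The check is almost tautological once the orientation comes from an Euler circuit (the circuit itself is a closed walk through every vertex), but the theorem says more: any Eulerian orientation of a finite connected graph, however produced, is strong, and it is exactly this step that fails for infinite graphs.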

For infinite graphs, Eulerian orientations might not be strong

To prove this, we can use a convenient tool for dealing with infinite orientations by looking only at finite graphs, a result of Rado from 1949 that is simultaneously a predecessor and generalization of the De Bruijn–Erdős theorem on graph colorings. Suppose each element of some infinite set has a finite set of labels, and we choose an assignment of labels for each finite subset. These choices may be inconsistent with each other, so there may be no way of labeling the whole set consistently with all of the choices. But Rado proved (assuming the axiom of choice) that there exists a global labeling that, on every finite subset, is consistent with the assignment to one of its finite supersets. Another way of thinking about this is that if certain finite patterns must be avoided, and every finite subset has a labeling that avoids them, then some global labeling will also avoid them.[1] The De Bruijn–Erdős theorem is the case where the elements are vertices, the labels are colors, and the patterns to be avoided are pairs of equal color on adjacent vertices. In our case the set elements will be edges, the labels will be which way to orient each edge, and the choice of assignment will be some way of defining a “good” orientation for finite subgraphs.

Graph vertices can be labeled by color; edges, by orientation

So suppose we’re given a finite subgraph of an infinite 2-edge-connected even-degree graph. What kind of orientation of this subgraph should we look for? A complication is that it might have bridges or odd-degree vertices. So we’ll try to come as close as we can to what we want, an Eulerian strong orientation, while taking those deficiencies into account. Let’s define a good orientation of the subgraph to be an orientation that is almost Eulerian, in that at each vertex the in-degree and out-degree differ by at most one, and almost strong, in that each edge that belongs to an undirected cycle also belongs to a directed cycle. Another way of stating the almost strong condition is that the strongly connected components of the oriented graph should coincide with the 2-edge-connected components, or blocks, of the undirected graph.

A good orientation of a graph

The existence of a good orientation of an arbitrary finite undirected graph (also having some other stronger properties) was proven in the 1960s by Nash-Williams.[2] But now Rado’s method proves that every infinite graph also has a good orientation. If we find a global orientation that’s nearly-Eulerian on a supergraph of the star of neighbors of every vertex, then it must be nearly-Eulerian. And if it’s almost strong on a supergraph of every cycle, then it must be almost strong. This proves the theorem that a bridgeless connected even-degree infinite graph has an Eulerian strong orientation, because every good orientation in these graphs must be Eulerian and strong. Nash-Williams already considered the infinite case of his orientation theorem, but wrote that the details were too heavy to include. Perhaps he didn’t know about Rado’s theorem (despite it being published in the same journal), which makes the extension from finite to infinite easy.

The same method of extending finite to infinite orientations can also be used to study notions of graph sparseness including arboricity, degeneracy, and pseudoarboricity. Historically the pseudoarboricity of infinite graphs came first. A graph has pseudoarboricity at most $k$ if it can be oriented so that each vertex has out-degree at most $k$ (or equivalently if it can be partitioned into $k$ subgraphs each orientable with out-degree one). In the 1950 paper in which he first announced the De Bruijn–Erdős theorem, Erdős used it to prove that, when such an orientation is given, the resulting graph can be $(2k+1)$-colored.[3] But he didn’t write about the conditions under which such an orientation exists. Rado’s theorem shows that they are the same as in the finite case: an outdegree-$k$ orientation exists if and only if every finite $n$-vertex subgraph has at most $kn$ edges.

A graph has degeneracy at most $k$ if it has an acyclic orientation in which each vertex has out-degree at most $k$. Unlike in the finite case, infinite graphs with low degeneracy might have high minimum degree; for instance, there exist graphs with degeneracy one in which all vertices have infinite degree. A finite $k$-degenerate graph can be $(k+1)$-colored greedily, and the De Bruijn–Erdős theorem shows that even in the infinite case such a coloring exists. Rado’s theorem shows that infinite graphs with pseudoarboricity $k$ are $2k$-degenerate, and that a graph is $k$-degenerate if and only if every finite subgraph has a vertex with degree at most $k$. As in the case of Eulerian strong orientations, this involves checking the orientation only on two kinds of finite subgraphs, stars and cycles.
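The greedy coloring mentioned above can be sketched for the finite case: repeatedly peel off a minimum-degree vertex to build a degeneracy ordering, then color in reverse, giving each vertex the smallest color unused among its already-colored neighbors. The 5-cycle test graph is an arbitrary choice; the De Bruijn–Erdős theorem is what extends this color bound to infinite graphs.

```python
# Greedy coloring of a finite graph via its degeneracy ordering.
# A graph with degeneracy d gets at most d + 1 colors this way.

def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex; return the removal order."""
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # a minimum-degree vertex
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def greedy_color(adj):
    """Color in reverse degeneracy order; each vertex then sees at most d colored neighbors."""
    coloring = {}
    for v in reversed(degeneracy_order(adj)):
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = min(c for c in range(len(adj)) if c not in used)
    return coloring

# A 5-cycle has degeneracy 2, so the greedy coloring uses at most 3 colors.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_color(C5))
```

The key point is the ordering: when a vertex is colored, only its at-most-$d$ later-removed neighbors are already colored, so some color among the first $d+1$ is always free.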

Arboricity is based on forests, and there are multiple incompatible definitions of infinite forests. But the one we want to use is that a forest is a 1-degenerate graph. A graph has arboricity $k$ if its edges can be partitioned into $k$ forests. This is clearly at least as large as the pseudoarboricity (no Rado needed). Rado’s theorem shows that infinite graphs with arboricity $k$ are $(2k-1)$-degenerate, and that a graph has arboricity at most $k$ if and only if every finite $n$-vertex subgraph has at most $k(n-1)$ edges.

  1. Rado, R. (1949), “Axiomatic treatment of rank in infinite sets”, Canad. J. Math. 1: 337–343. 

  2. Nash-Williams, C. St. J. A. (1960), “On orientations, connectivity and odd-vertex-pairings in finite graphs”, Canad. J. Math., 12: 555–567; —— (1969), “Well-balanced orientations of finite graphs and unobtrusive odd-vertex-pairings”, Recent Progress in Combinatorics (Proc. Third Waterloo Conf. on Combinatorics, 1968), New York: Academic Press, pp. 133–149. 

  3. Erdős, P. (1950), “Some remarks on set theory”, Proc. AMS, 1: 127–141. 


by David Eppstein at January 17, 2019 08:39 PM UTC

Billboard at 2019 CES

Computer scientists tend to obsess about privacy and we've had a privacy/security debate for decades now. But now machine learning has really given us a whole new spin on what privacy protects and takes away.

I take an open approach and basically allow Google to know everything about my life. Google knows where I've been--sometimes my Pixel asks me which store in a shopping center I visited and I give up that info. Google knows who I communicate with, what websites I visit, what music and movies I listen to and watch, all my photos, what temperature makes me comfortable and so on.

What do I get? A Google ecosystem that knows me sometimes better than I know myself. Google works best when it learns and integrates. I get asked to download maps for trips Google knows I'm about to take. I have Google assistant throughout my house, in my phone, in my car, and it tailors answers and sometimes even the questions that I need answers to. If anything I wish there were further integration, like Google Voice ringing my office phone only when I'm in the office.

Georgia Tech now forces us to use Microsoft Exchange for email. Outlook is not a bad email program, but its capabilities, especially for search, do not work as well, and think of all that unused knowledge.

I trust Google to keep my information safe, with a random password and 2-factor authentication, and even if someone managed to break in they would find I'm a pretty boring person with an unhealthy obsession with opera (the musical form, not the browser).

Doesn't work for everyone and companies should make it easy to keep your info secure. But I say go use your machine learning on me and find ways to make my life easier and more fun, and sure send me some targeted ads as payment. The Internets will find a way to discover you anyway, might as well take advantage. 

by Lance Fortnow at January 17, 2019 05:45 PM UTC


June 12-13, 2019 Brno, Czech Republic Registration deadline: March 31, 2019

by shacharlovett at January 17, 2019 10:51 AM UTC

July 6-12, 2019 Gdansk, Poland The school aims at providing an opportunity for graduate and undergraduate students as well as young researchers to get together and attend advanced courses and talks on current topics in the field of algorithms and data structures for discrete optimization problems. During 8 days of the school, 6 advanced … Continue reading Gdańsk Summer School of Advanced Science on Algorithms for Discrete Optimization

by shacharlovett at January 17, 2019 10:51 AM UTC

July 8-19, 2019 Nice, France Registration deadline: May 1, 2019 This 2-week-summer school aims to explain concepts on random walks and random models for complex networks at a level suitable for second year master students and PhD students in mathematics and theoretical computer science.

by shacharlovett at January 17, 2019 10:51 AM UTC

We show that any Boolean function with approximate rank $r$ can be computed by bounded error quantum protocols without prior entanglement of complexity $O( \sqrt{r} \log r)$. In addition, we show that any Boolean function with approximate rank $r$ and discrepancy $\delta$ can be computed by deterministic protocols of complexity $O(r)$, and private coin bounded error randomized protocols of complexity $O((\frac{1}{\delta})^2 + \log r)$. Our results yield lower bounds on approximate rank. We also obtain a strengthening of Newman's theorem with respect to approximate rank.

at January 16, 2019 10:58 PM UTC

An $(n,k,\ell)$-vector MDS code is a $\mathbb{F}$-linear subspace of $(\mathbb{F}^\ell)^n$ (for some field $\mathbb{F}$) of dimension $k\ell$, such that any $k$ (vector) symbols of the codeword suffice to determine the remaining $r=n-k$ (vector) symbols. The length $\ell$ of each codeword symbol is called the sub-packetization of the code. Such a code is called minimum storage regenerating (MSR), if any single symbol of a codeword can be recovered by downloading $\ell/r$ field elements (which is known to be the least possible) from each of the other symbols. MSR codes are attractive for use in distributed storage systems, and by now a variety of ingenious constructions of MSR codes are available. However, they all suffer from exponentially large sub-packetization of at least $r^{k/r}$. Our main result is an almost tight lower bound showing that for an MSR code, one must have $\ell \ge \exp(\Omega(k/r))$. Previously, a lower bound of $\approx \exp(\sqrt{k/r})$, and a tight lower bound for a restricted class of optimal access MSR codes, were known. Our work settles a key question concerning MSR codes that has received much attention, with a short proof hinging on one key definition that is somewhat inspired by Galois theory.

at January 16, 2019 06:36 PM UTC

The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below.

Applications will be accepted until the positions are filled, but review of applicants will begin after Jan 21.


by shacharlovett at January 16, 2019 06:32 PM UTC

June 30 – July 6, 2019 Rogla, Slovenia The summer school will offer two mini-courses: “Combinatorial limits and their applications in extremal combinatorics” and “Coxeter groups”, as well as invited talks and student talks

by shacharlovett at January 15, 2019 11:09 PM UTC

July 29 – August 2, 2019 Hagenberg, Austria In the spirit of the AEC2014, AEC2015, AEC2016 and AEC2018, the goal of this summer school is to put forward the interplay between the fields of Enumerative Combinatorics, Analytic Combinatorics, and Algorithmics. This is a very active research area, which, aside from the three fields fueling … Continue reading 5th Algorithmic and Enumerative Combinatorics Summer School 2019

by shacharlovett at January 15, 2019 11:09 PM UTC

March 11-15, 2019 Bochum, Germany Registration deadline: February 11, 2019 The Spring School is designed for advanced bachelor, master and PhD students. The aim is to prepare the participants in such a way that they can follow the lectures of the subsequent workshop. It consists of three crash courses: Geometry of Polytopes, Monday (2x2h) … Continue reading Spring School and Workshop on Polytopes

by shacharlovett at January 15, 2019 11:08 PM UTC

For the new year, I’ve decided to try to get back into taking photos more frequently, and to make it lower-overhead I’m making individual Mastodon posts for some of them rather than writing a longer blog post for every batch of photos. So that’s why you see a couple of those images inline here.

by David Eppstein at January 15, 2019 10:07 PM UTC

Greetings from QIP’2019 in Boulder, Colorado! Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on.

I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus.

But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something.

Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more.

And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI.

And a fourth, in which I talk about whether reality is discrete or continuous.

Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.

by Scott at January 15, 2019 06:51 PM UTC

(I had been thinking of this for a post then Lance's post on search versus decision inspired me to write up these thoughts.)

When teaching NP-completeness we often say

The problem we really care about is, for example, given a weighted graph and two vertices s and t, find the optimal way to go from s to t while hitting every node. But it's cleaner mathematically to look at the decision problem:

{ (G,s,t,C) : there is a Ham Path from s to t that costs \le C }

The search and decision versions are poly-time equivalent, so it's fine to just look at the decision. Indeed, if our interest is in lower bounds, then clearly if Decision is hard then Find is hard.
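The easy direction of that equivalence can be made concrete with a sketch. Assuming integer edge costs and a (hypothetical) decision oracle for the set above, the optimal cost falls out of a binary search with logarithmically many oracle calls; the names `oracle` and `min_ham_path_cost` are my own, not from the post:

```python
def min_ham_path_cost(G, s, t, oracle, lo, hi):
    """Find the optimal Hamiltonian-path cost from s to t by binary search,
    using only a decision oracle for 'is there a Ham path of cost <= C?'.
    Assumes integer costs in [lo, hi]; `oracle` is a hypothetical black box."""
    if not oracle(G, s, t, hi):
        return None  # no Hamiltonian path from s to t at all
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(G, s, t, mid):
            hi = mid  # a path of cost <= mid exists; tighten the upper bound
        else:
            lo = mid + 1
    return lo
```

Recovering the path itself takes more work (e.g., fixing edges one at a time and re-querying), but the same oracle suffices.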

But here are some questions about search vs decision in general, not just with regard to P vs NP.

1) Is there ever a case where the real world actually cares about the decision version? I can think of just one: given a number, is it PRIME, which is used in Crypto. The real world does not need the witness that it's prime (or similar). They just want a prime. Any other cases?

2) How far apart can search and decision be? NP-decision and NP-search are poly equivalent. In other domains can they be very far apart? For example, does FINDING a k-clique or k-ind set in a graph on 2^{2k} vertices require roughly n^k steps (go through all k-sets), or can we do much better? I suspect this is unknown but would be delighted if a commenter tells me otherwise.
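The "go through all k-sets" baseline mentioned above is easy to make concrete. This sketch is only the naive enumeration (roughly n^k subsets), not a claim about the best possible algorithm; the function name is mine:

```python
from itertools import combinations

def find_k_clique(adj, k):
    """Brute-force search: examine every k-subset of vertices and return the
    first one that forms a clique (all pairs adjacent), or None if none does.
    `adj` is an n-by-n 0/1 adjacency matrix."""
    n = len(adj)
    for S in combinations(range(n), k):
        if all(adj[u][v] for u, v in combinations(S, 2)):
            return S
    return None
```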

by GASARCH ( at January 15, 2019 04:21 PM UTC

Roy J. Glauber, Harvard physics professor for 65 years, longtime Keeper of the Broom at the annual Ig Nobel ceremony, and winner of a non-Ig Nobel, has died at age 93. Glauber is known for his work in quantum optics; roughly speaking, he developed a mathematical theory of the laser at about the same time that device was invented, circa 1960. His two main papers on the subject, published in Physical Review in 1963, did not meet with instant acclaim; the Nobel committee’s recognition of their worth came more than 40 years later, in 2005. A third paper from 1963, titled “Time-dependent statistics of the Ising model,” also had a delayed impact. It is the basis of a modeling algorithm now called Glauber dynamics, which is well known in the cloistered community of statistical mechanics but deserves wider recognition.

Before digging into the dynamics, however, let us pause for a few words about the man himself, drawn largely from the obituaries in the New York Times and the Harvard Crimson.

Glauber was a member of the first class to graduate from the Bronx High School of Science, in 1941. From there he went to Harvard, but left in his sophomore year, at age 18, to work in the theory division at Los Alamos, where he helped calculate the critical mass of fissile material needed for a bomb. After the war he finished his degree at Harvard and went on to complete a PhD under Julian Schwinger. After a few brief adventures in Princeton and Pasadena, he was back at Harvard in 1952 and never left. A poignant aspect of his life is mentioned briefly in a 2009 interview, where Glauber discusses the challenge of sustaining an academic career while raising two children as a single parent.

Here’s a glimpse of Glauber dynamics in action. Click the Go button, then try fiddling with the slider.


In the computer program that drives this animation, the slider controls a variable representing temperature. At high temperature (slide the control all the way to the right), you’ll see a roiling, seething mass of colored squares, switching rapidly and randomly between light and dark shades. There are no large-scale or long-lived structures. At low temperature (slide to the left), the tableau congeals into a few writhing blobs of contrasting color. Then the minority blobs are likely to evaporate, and you’ll be left with an unchanging, monochromatic panel. Occasionally the end point is not a monochromatic field. Instead the panel is divided into broad stripes—horizontal, vertical, or diagonal. This is an artifact of the finite size of the lattice and the use of wraparound boundary conditions. On an infinite lattice, the stripes would not occur. Between these extremes there’s some interesting behavior. Adjust the slider to a temperature near 2.27 and you can expect to see persistent fluctuations at all possible scales, from isolated individual blocks to patterns that span the entire array.

What we’re looking at here is a simulation of a model of a ferromagnet—the kind of magnet that sticks to the refrigerator. The model was introduced almost 100 years ago by Wilhelm Lenz and his student Ernst Ising. They were trying to understand the thermal behavior of ferromagnetic materials such as iron. If you heat a block of magnetized iron above a certain temperature, called the Curie point, it loses all traces of magnetization. Slow cooling below the Curie point allows it to spontaneously magnetize again, perhaps with the poles in a different orientation. The onset of ferromagnetism at the Curie point is an abrupt phase transition.

Lenz and Ising created a stripped-down model of a ferromagnet. In the two-dimensional version shown here, each of the small squares represents the spin vector of an unpaired electron in an iron atom. The vector can point in either of two directions, conventionally called up and down, which for graphic convenience are represented by two contrasting colors. There are \(100 \times 100 = 10{,}000\) spins in the array. This would be a minute sample of a real ferromagnet. On the other hand, the system has \(2^{10{,}000}\) possible states—quite an enormous number.

The essence of ferromagnetism is that adjacent spins “prefer” to point in the same direction. To put that more formally: The energy of neighboring spins is lower when they are parallel, rather than antiparallel. (Figure: the four possible orientations of two spins—up up, up down, down up, down down—with their corresponding energies.) For the array as a whole, the energy is minimized if all the spins point the same way, either up or down. Each spin contributes a tiny magnetic moment. When the spins are parallel, all the moments add up and the system is fully magnetized.

If energy were the only consideration, the Ising model would always settle into a magnetized configuration, but there is a countervailing influence: Heat tends to randomize the spin directions. At infinite temperature, thermal fluctuations completely overwhelm the spins’ tendency to align, and all states are equally likely. Because the vast majority of those \(2^{10{,}000}\) configurations have nearly equal numbers of up and down spins, the magnetization is negligible. At zero temperature, nothing prevents the system from condensing into the fully magnetized state. The interval between these limits is a battleground where energy and entropy contend for supremacy. Clearly, there must be a transition of some kind. For Lenz and Ising in the 1920s, the crucial question was whether the transition comes at a sharply defined critical temperature, as it does in real ferromagnets. A more gradual progression from one regime to the other would signal the model’s failure to capture important aspects of ferromagnet physics.

In his doctoral dissertation Ising investigated the one-dimensional version of the model—a chain or ring of spins, each one holding hands with its two nearest neighbors. The result was a disappointment: He found no abrupt phase transition. And he speculated that the negative result would also hold in higher dimensions. The Ising model seemed to be dead on arrival.

It was revived a decade later by Rudolf Peierls, who gave suggestive evidence for a sharp transition in the two-dimensional lattice. Then in 1944 Lars Onsager “solved” the two-dimensional model, showing that the phase transition does exist. The phase diagram looks like this:

(Figure: magnetization vs. temperature phase diagram for the Ising model.)

As the system cools, the salt-and-pepper chaos of infinite temperature evolves into a structure with larger blobs of color, but the up and down spins remain balanced on average (implying zero magnetization) down to the critical temperature \(T_C\). At that point there is a sudden bifurcation, and the system will follow one branch or the other to full magnetization at zero temperature.

If a model is classified as solved, is there anything more to say about it? In this case, I believe the answer is yes. The solution to the two-dimensional Ising model gives us a prescription for calculating the probability of seeing any given configuration at any given temperature. That’s a major accomplishment, and yet it leaves much of the model’s behavior unspecified. The solution defines the probability distribution at equilibrium—after the system has had time to settle into a statistically stable configuration. It doesn’t tell us anything about how the lattice of spins reaches that equilibrium when it starts from an arbitrary initial state, or how the system evolves when the temperature changes rapidly.

It’s not just the solution to the model that has a few vague spots. When you look at the finer details of how spins interact, the model itself leaves much to the imagination. When a spin reacts to the influence of its nearest neighbors, and those neighbors are also reacting to one another, does everything happen all at once? Suppose two antiparallel spins both decide to flip at the same time; they will be left in a configuration that is still antiparallel. It’s hard to see how they’ll escape repeating the same dance over and over, like people who meet head-on in a corridor and keep making mirror-image evasive maneuvers. This kind of standoff can be avoided if the spins act sequentially rather than simultaneously. But if they take turns, how do they decide who goes first?

Within the intellectual traditions of physics and mathematics, these questions can be dismissed as foolish or misguided. After all, when we look at the procession of the planets orbiting the sun, or at the colliding molecules in a gas, we don’t ask who takes the first step; the bodies are all in continuous and simultaneous motion. Newton gave us a tool, calculus, for understanding such situations. If you make the steps small enough, you don’t have to worry so much about the sequence of marching orders.

However, if you want to write a computer program simulating a ferromagnet (or simulating planetary motions, for that matter), questions of sequence and synchrony cannot be swept aside. With conventional computer hardware, “let everything happen at once” is not an option. The program must consider each spin, one at a time, survey the surrounding neighborhood, apply an update rule that’s based on both the state of the neighbors and the temperature, and then decide whether or not to flip. Thus the program must choose a sequence in which to visit the lattice sites, as well as a sequence in which to visit the neighbors of each site, and those choices can make a difference in the outcome of the simulation. So can other details of implementation. Do we look at all the sites, calculate their new spin states, and then update all those that need to be flipped? Or do we update each spin as we go along, so that spins later in the sequence will see an array already modified by earlier actions? The original definition of the Ising model is silent on such matters, but the programmer must make a commitment one way or another.

This is where Glauber dynamics enters the story. Glauber presented a version of the Ising model that’s somewhat more explicit about how spins interact with one another and with the “heat bath” that represents the influence of temperature. It’s a theory of Ising dynamics because he describes the spin system not just at equilibrium but also during transitional stages. I don’t know if Glauber was the first to offer an account of Ising dynamics, but the notion was certainly not commonplace in 1963.

There’s no evidence Glauber was thinking of his method as an algorithm suitable for computer implementation. The subject of simulation doesn’t come up in his 1963 paper, where his primary aim is to find analytic expressions for the distribution of up and down spins as a function of time. (He did this only for the one-dimensional model.) Nevertheless, Glauber dynamics offers an elegant approach to programming an interactive version of the Ising model. Assume we have a lattice of \(N\) spins. Each spin \(\sigma\) is indexed by its coordinates \(x, y\) and takes on one of the two values \(+1\) and \(-1\). Thus flipping a spin is a matter of multiplying \(\sigma\) by \(-1\). The algorithm for updating the lattice looks like this:

Repeat \(N\) times:

  1. Choose a spin \(\sigma_{x, y}\) at random.
  2. Sum the values of the four neighboring spins, \(S = \sigma_{x+1, y} + \sigma_{x-1, y} + \sigma_{x, y+1} + \sigma_{x, y-1}\). The possible values of \(S\) are \(\{-4, -2, 0, +2, +4\}\).
  3. Calculate \(\Delta E = 2 \, \sigma_{x, y} \, S\), the change in interaction energy if \(\sigma_{x, y}\) were to flip.
  4. If \(\Delta E \lt 0\), set \(\sigma_{x, y} = -\sigma_{x, y}\).
  5. Otherwise, set \(\sigma_{x, y} = -\sigma_{x, y}\) with probability \(\exp(-\Delta E/T)\), where \(T\) is the temperature.

Display the updated lattice.

Step 4 says: If flipping a spin will reduce the overall energy of the system, flip it. Step 5 says: Even if flipping a spin raises the energy, go ahead and flip it in a randomly selected fraction of the cases. The probability of such spin flips is the Boltzmann factor \(\exp(-\Delta E/T)\). This quantity goes to \(0\) as the temperature \(T\) falls to \(0\), so that energetically unfavorable flips are unlikely in a cold lattice. The probability approaches \(1\) as \(T\) goes to infinity, which is why the model is such a seething mass of fluctuations at high temperature.
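The five steps above fit in a few lines of Python. This is my own minimal version (with wraparound boundaries, as in the animation), not the JavaScript program that drives the simulation:

```python
import math
import random

def glauber_sweep(spins, T, rng=random):
    """One sweep of Glauber dynamics: N = n*n random single-spin updates
    on an n-by-n lattice of +/-1 spins with wraparound boundaries."""
    n = len(spins)
    for _ in range(n * n):
        # 1. Choose a spin at random.
        x, y = rng.randrange(n), rng.randrange(n)
        # 2. Sum the four neighboring spins; S is in {-4, -2, 0, +2, +4}.
        S = (spins[(x + 1) % n][y] + spins[(x - 1) % n][y]
             + spins[x][(y + 1) % n] + spins[x][(y - 1) % n])
        # 3. Change in interaction energy if this spin were to flip.
        dE = 2 * spins[x][y] * S
        # 4-5. Flip if that lowers the energy; otherwise flip with the
        # Boltzmann probability exp(-dE/T). (The T > 0 guard avoids a
        # division by zero at exactly zero temperature.)
        if dE < 0 or (T > 0 and rng.random() < math.exp(-dE / T)):
            spins[x][y] = -spins[x][y]
    return spins
```

Starting from an all-up lattice at low temperature, essentially no flips are accepted (every flip costs energy and \(e^{-\Delta E/T}\) is astronomically small), while at high temperature nearly every attempted flip goes through.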

(If you’d like to take a look at real code rather than pseudocode—namely the JavaScript program running the simulation above—it’s on GitHub.)

Glauber dynamics belongs to a family of methods called Markov chain Monte Carlo (MCMC) algorithms. The idea of Markov chains was an innovation in probability theory in the early years of the 20th century, extending classical probability to situations where the next event depends on the current state of the system. Monte Carlo algorithms emerged at post-war Los Alamos, not long after Glauber left there to resume his undergraduate curriculum. He clearly kept up with the work of Stanislaw Ulam and other former colleagues in the Manhattan Project.

Within the MCMC family, the distinctive feature of Glauber dynamics is choosing spins at random. The obvious alternative is to march methodically through the lattice by columns and rows, examining every spin in turn. That procedure can certainly be made to work, but it requires care in implementation. At low temperature the Ising process is very nearly deterministic, since unfavorable flips are extremely rare. When you combine a deterministic flip rule with a deterministic path through the lattice, it’s easy to get trapped in recurrent patterns. For example, a subtle bug yields the same configuration of spins on every step, shifted left by a single lattice site, so that the pattern seems to slide across the screen. Another spectacular failure gives rise to a blinking checkerboard, where every spin is surrounded by four opposite spins and flips on every time step. Avoiding these errors requires much fussy attention to algorithmic details. (My personal experience is that the first attempt is never right.)

Choosing spins by throwing random darts at the lattice turns out to be less susceptible to clumsy mistakes. Yet, at first glance, the random procedure seems to have hazards of its own. In particular, choosing 10,000 spins at random from a lattice of 10,000 sites does not guarantee that every site will be visited once. On the contrary, a few sites will be sampled six or seven times, and you can expect that 3,679 sites (that’s \(1/e \times 10{,}000\)) will not be visited at all. Doesn’t that bias distort the outcome of the simulation? No, it doesn’t. After many iterations, all the sites will get equal attention.
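The 1/e figure is the standard with-replacement calculation, and it's easy to check empirically. A quick sketch (the function name is mine):

```python
import math
import random

def unvisited_fraction(n, rng=random):
    """Draw n sites uniformly, with replacement, from a lattice of n sites
    and return the fraction of sites never chosen. For large n this is
    about (1 - 1/n)^n, which approaches 1/e ~ 0.3679."""
    visited = {rng.randrange(n) for _ in range(n)}
    return 1 - len(visited) / n
```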

The nasty bit in all Ising simulation algorithms is updating pairs of adjacent sites, where each spin is the neighbor of the other. Which one goes first, or do you try to handle them simultaneously? The column-and-row ordering maximizes exposure to this problem: Every spin is a member of such a pair. Other sequential algorithms—for example, visiting all the black squares of a checkerboard followed by all the white squares—avoid these confrontations altogether, never considering two adjacent spins in succession. Glauber dynamics is the Goldilocks solution. Pairs of adjacent spins do turn up as successive elements in the random sequence, but they are rare events. Decisions about how to handle them have no discernible influence on the outcome.

Years ago, I had several opportunities to meet Roy Glauber. Regrettably, I failed to take advantage of them. Glauber’s office at Harvard was in the Lyman Laboratory of Physics, a small isthmus building connecting two larger halls. In the 1970s I was a frequent visitor there, pestering people to write articles for Scientific American. It was fertile territory; for a few years, the magazine found more authors per square meter in Lyman Lab than anywhere else in the world. But I never knocked on Glauber’s door. Perhaps it’s just as well. I was not yet equipped to appreciate what he had to say.

Now I can let him have the last word. This is from the introduction to the paper that introduced Glauber dynamics:

If the mathematical problems of equilibrium statistical mechanics are great, they are at least relatively well-defined. The situation is quite otherwise in dealing with systems which undergo large-scale changes with time. The principles of nonequilibrium statistical mechanics remain in largest measure unformulated. While this lack persists, it may be useful to have in hand whatever precise statements can be made about the time-dependent behavior of statistical systems, however simple they may be.

by Brian Hayes at January 15, 2019 12:30 PM UTC

The Department of Mathematics and Computer Science at TU Eindhoven is looking for new faculty members to expand its academic staff. We welcome applications in all areas of computer science and at all levels, ranging from (tenure-track) assistant professor to full professor.


by shacharlovett at January 15, 2019 08:57 AM UTC

June 17-28, 2019 Paris, France Submission deadline: March 27, 2019 Registration deadline: March 27, 2019 This two-week summer school will present an overview of a selection of topics in the crossroads between geometry, algebra, and combinatorics. It will consist of four one-week mini-courses given by leading experts, complemented with supervised exercise sessions, lectures by … Continue reading Summer School on Geometric and Algebraic Combinatorics

by shacharlovett at January 14, 2019 02:09 PM UTC

Open Problems in the Equational Logic of Processes

School of Computer Science, Reykjavik University

One Postdoc Position

Applications are invited for one post-doctoral position at the School of Computer Science, Reykjavik University. The position is part of a three-year research project funded by the Icelandic Research Fund, under the direction of Luca Aceto (Gran Sasso Science Institute and Reykjavik University) and Anna Ingolfsdottir (Reykjavik University) in cooperation with Bas Luttik (TU Eindhoven) and Alexandra Silva (University College London). The overarching goal of this project is to solve some of the challenging open problems in the equational axiomatization of behavioural equivalences over process calculi. Interested applicants can contact Luca Aceto (email: for further details on the research proposal.

The successful candidate will benefit from, and contribute to, the research environment at the Icelandic Centre of Excellence in Theoretical Computer Science (ICE-TCS). For information about ICE-TCS and its activities, see

Moreover, she/he will cooperate with Bas Luttik and Alexandra Silva  during the project work and will benefit from the interaction with their research groups at TU Eindhoven and University College London. The postdoc will also have a chance to interact with Clemens Grabmayer and the CS group at the Gran Sasso Science Institute (, L'Aquila, Italy. 

Qualification requirements
Applicants for the postdoctoral position should have, or be about to hold, a PhD degree in Computer Science or closely related fields. Previous knowledge of at least one of concurrency theory, process calculi, (structural) operational semantics and logic in computer science is highly desirable.

The wage for the postdoctoral position is 530,000 ISK (roughly 3,830 € at the present exchange rate) per month before taxes. (See for information on what the wage will be after taxes.) The position is for two years, starting as soon as possible, and is renewable for another year, based on good performance and mutual satisfaction.

Application details

Interested applicants should send their CV, including a list of publications, in PDF to all the addresses below, together with a statement outlining their suitability for the project and the names of at least two referees.

Luca Aceto

Anna Ingolfsdottir

We will start reviewing applications as soon as they arrive and will continue to accept applications until the position is filled. We strongly encourage interested applicants to send their applications as soon as possible and no later than 8 February 2019.

by Luca Aceto ( at January 14, 2019 10:10 AM UTC

The $2$-to-$2$ Games Theorem of [KMS-1, DKKMS-1, DKKMS-2, KMS-2] implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least a $(\frac{1}{2}-\varepsilon)$ fraction of the constraints vs. no assignment satisfying more than an $\varepsilon$ fraction of the constraints, for every constant $\varepsilon>0$. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: For at least a $(\frac{1}{2}-\varepsilon)$ fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert the known UG-hardness results to NP-hardness. We show: 1. Tight inapproximability of independent sets in degree-$d$ graphs within a factor of $\Omega\left(\frac{d}{\log^2 d}\right)$, where $d$ is a constant. 2. NP-hardness of the approximate Maximum Acyclic Subgraph problem within a factor of $\frac{2}{3}+\varepsilon$, improving the previous ratio of $\frac{14}{15}+\varepsilon$ by Austrin et al. [AMW15]. 3. For any predicate $P^{-1}(1) \subseteq [q]^k$ supporting a balanced pairwise independent distribution, given a $P$-CSP instance with value at least $\frac{1}{2}-\varepsilon$, it is NP-hard to satisfy more than a $\frac{|P^{-1}(1)|}{q^k}+\varepsilon$ fraction of constraints.

at January 13, 2019 07:45 AM UTC

A tribute to two premier analysts

From Flanders Today src1 and Ryle Trust Lecture src2

Baron Jean Bourgain and Sir Michael Atiyah passed away within the past three weeks. They became mathematical nobility by winning the Fields Medal, Atiyah in 1966 and Bourgain in 1994. Bourgain was created Baron by King Philippe of Belgium in 2015. Atiyah’s knighthood did not confer nobility, but he held the dynastic Order of Merit, which is limited to 24 living members and has had fewer than 200 total since its inception in 1902. Atiyah had been #2 by length of tenure after Prince Philip and ahead of Prince Charles.

Today we discuss how they ennobled mathematics by their wide contributions.

Bourgain was affiliated with IAS through the IBM John von Neumann Professorship. He had been battling cancer for a long time. Here is the middle section of the coat of arms he created for his 2015 investiture:

Detail from IAS source

The shield shows the beginning of an Apollonian circle packing, in which every radius is the reciprocal of an integer. This property continues as circles are recursively inscribed in the curvilinear regions—see this 2000 survey for a proof. To quote Bourgain’s words accompanying his design:

The theory of these [packings] is today a rich mathematical research area, at the interface of hyperbolic geometry, dynamics, and number theory.

Bourgain’s affinity to topics we hold dear in computing theory is shown by this 2009 talk titled, “The Search for Randomness.” It covers not only PRNGs and crypto but also expander graphs and succinctness in quantum computing. He has been hailed for diversity in other mathematical areas and editorships of many journals. We will talk about a problem in analysis which he helped solve not by analytical means but by connecting the problem to additive combinatorics.

From Analysis to Combinatorics

Sōichi Kakeya posed the problem of the minimum size of a subset of {\mathbb{R}^{2}} in which a unit-length needle can be rotated through 360 degrees. Abram Besicovitch showed in 1928 that such sets can have Lebesgue measure {\epsilon} for any {\epsilon > 0}. He had already shown that one can achieve measure zero with a weaker property, which he had used to show a strong failure of Fubini’s theorem for Riemann integrals:

For all {d} there is a measure-zero subset of {\mathbb{R}^d} that contains a unit line segment in every direction.

The surprise to many of us is that such strange sets would have important further consequences in analysis. A 2008 survey in the AMS Bulletin by Izabella Łaba, titled “From Harmonic Analysis to Arithmetic Combinatorics,” brings out breakthrough contributions by Bourgain to conjectures and problems that involve further properties of these sets, which seem to retain Kakeya’s name:

Conjecture: A Kakeya set in {\mathbb{R}^{d}} must have Hausdorff dimension {d}.

This and the formally weaker conjecture that the set must have Minkowski dimension {d} are proved in {\mathbb{R}^2} but open for all {d \geq 3}. Bourgain first proved that the restriction conjecture of Elias Stein, which is about extensions of the Fourier transform from certain subspaces of functions from {\mathbb{R}^d} to {\mathbb{C}} to operators from {L^q} to {L^p} functions on {\mathbb{R}^d}, implies the Kakeya conjecture. It is likewise open for {d,p \geq 3}. As Łaba writes, associated estimates “with {p > 2} require deeper geometrical information, and this is where we find Kakeya sets lurking under the surface.”

What Bourgain showed is that the restriction estimates place constraints on sets of lower Hausdorff dimension that force them to align “tubes” along discrete directions that can be approximated via integer lattices. This led to the following “key lemma”:

Lemma 1 Consider subsets {S} of {A \times B}, where {A} and {B} are finite subsets of {\mathbb{Z}^d}, and define

\displaystyle  S^{+} = \{a+b: (a,b) \in S\}, \qquad S^{-} = \{a - b: (a,b) \in S\}.

For every {C > 0} there is {C' > 0} such that whenever {|S^{+}| \leq Cn}, where {n = \max\{|A|,|B|\}}, we have {|S^{-}| \leq C'n^{2 - \frac{1}{13}}}.
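
As a toy illustration of the definitions in the lemma (our sketch only; the constant {C'} and the exponent {2 - \frac{1}{13}} come from the lemma itself, not from this computation), here is how {S^{+}} and {S^{-}} look in the one-dimensional case where {S} is all of {A \times B}:

```python
from itertools import product

def sum_and_diff_sets(S):
    """Given a relation S as a set of integer pairs (a, b),
    return the restricted sumset S+ and difference set S-."""
    s_plus = {a + b for (a, b) in S}
    s_minus = {a - b for (a, b) in S}
    return s_plus, s_minus

# Toy case: A = B = {0, ..., 9} in Z and S = A x B, so n = 10.
# Here |S+| = 19 <= 2n satisfies the hypothesis with C = 2,
# and |S-| = 19 is far below C' * n**(2 - 1/13).
A = set(range(10))
S = set(product(A, A))
s_plus, s_minus = sum_and_diff_sets(S)
print(len(s_plus), len(s_minus))  # 19 19
```

The content of the lemma, of course, is about sparse relations {S}, where keeping {S^{+}} linear in {n} forces {S^{-}} to be polynomially smaller than the trivial bound {n^2}.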

To quote Łaba: “Bourgain’s approach, however, provided a way out. Effectively, it said that our hypothetical set would have structure, to the extent that many of its lines would have to be parallel instead of pointing in different directions. Not a Kakeya set, after all.” She further says:

Bourgain’s argument was, to this author’s knowledge, the first application of additive number theory to Euclidean harmonic analysis. It was significant, not only because it improved Kakeya bounds, but perhaps even more so because it introduced many harmonic analysts to additive number theory, including [Terence] Tao who contributed so much to the subject later on, and jump-started interaction and communication between the two communities. The Green-Tao theorem [on primes] and many other developments might have never happened, were it not for Bourgain’s brilliant leap of thought in 1998.

Among many sources, note this seminar sponsored by Fan Chung and links from Tao’s own memorial post.

Michael Atiyah

Michael Atiyah was also much more than an analyst—indeed, he was first a topologist and algebraic geometer. He was also a theoretical physicist. Besides all these scientific hats, he engaged with society at large. After heading Britain’s Royal Society from 1990 to 1995, he became president of the Pugwash Conferences on Science and World Affairs. This organization was founded by Joseph Rotblat and Bertrand Russell in the 1950s to avert nuclear war and misuse of science, and won the 1995 Nobel Peace Prize.

The “misuse of science” aspect comes out separately in Atiyah’s 1999 article in the British Medical Journal titled, “Science for evil: the scientist’s dilemma.” It lays out a wider scope of ethical and procedural concerns than the original anti-war purpose. This is furthered in his 1999 book chapter, “The Social Responsibility of Scientists,” which lays out six points including:

  • First there is the argument of moral responsibility. If you create something you should be concerned with its consequences. This should apply as much to making scientific discoveries as to having children.

  • Scientists will understand the technical problems better than the average politician or citizen and knowledge brings responsibility.

  • [T]here is need to prevent a public backlash against science. Self-interest requires that scientists must be fully involved in public debate and must not be seen as “enemies of the people.”

As he says in its abstract:

In my own case, after many years of quiet mathematical research, working out of the limelight, a major change occurred when unexpectedly I found myself president of the Royal Society, in a very public position, and expected to act as a general spokesman for the whole of science.

Within physics and mathematics, he also ventured into a debate that comes closer to the theory-as-social-process topic we have discussed on this blog. In 1994 he led a collection of community responses to a 1993 article by Arthur Jaffe and Frank Quinn that began with the question, “Is speculative mathematics dangerous?” Atiyah replied by saying he agreed with many of their points, especially the need to distinguish between results based on rigorous proofs and heuristic arguments,

…But if mathematics is to rejuvenate itself and break exciting new ground it will have to allow for the exploration of new ideas and techniques which, in their creative phase, are likely to be as dubious as in some of the great eras of the past. …[I]n the early stages of new developments, we must be prepared to act in more buccaneering style.

Now we cannot help recalling his claim last September of heuristic arguments that would build toward a proof of the Riemann Hypothesis, which we covered in several posts. As we stated in our New Year’s post, nothing more of substance has come to our attention. We do not know how much more work was done on the promised longer paper. We close by briefly discussing how his most famous work is starting to matter in algorithms and complexity.

Indexes and Invariants

We will not try to go into even as much detail as we did for Kakeya sets about Atiyah’s signature contributions to topological K-theory, physical gauge theory, his celebrated index theorem with Isadore Singer, and much else. But we can evoke reasons why the last of these should interest us. We start with the simple statement from the essay by John Rognes of Oslo that accompanied the 2004 Abel Prize award to Atiyah and Singer:

Theorem 2 Let {P(f) = 0} be a system of differential equations. Then

\displaystyle  \text{analytical index}(P) = \text{topological index}(P).

Here the analytical index equals the dimension {d_k} of the kernel of {P} minus the dimension {d_c} of the co-kernel of {P}, which (again quoting Rognes) “is equal to the number of parameters needed to describe all the solutions of the equation, minus the number of relations there are between the expressions {P(f)}.” The topological index has a longer laundry list of items in its definition, but the point is that those items are usually all easy to calculate. It is further remarkable that in many cases we can get {d_k - d_c} without knowing how to compute {d_k} and {d_c} individually. The New York Times obituary quotes Atiyah from 2015:

It’s a bit of black magic to figure things out about differential equations even though you can’t solve them.

One thing it helps figure out is satisfiability: whether solutions exist at all. Besides cases where knowing the number of solutions does help in finding them, there are many theorems that need only information about the number and the parameterization of the solutions.
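
A standard textbook illustration of the index (our example, not drawn from Rognes’s essay) is {P(f) = f'} acting on smooth functions on the circle. The kernel is the constant functions, so {d_k = 1}; the image is exactly the functions of mean zero, so the co-kernel is one-dimensional and {d_c = 1}. Hence

\displaystyle  \text{analytical index}(P) = d_k - d_c = 1 - 1 = 0,

and this number can be read off without exhibiting any particular solution.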

We have an analogous situation in complexity theory with the lower bound theorem of Walter Baur and Volker Strassen, which we covered in this post. It says that the number of multiplication gates needed to compute an arithmetical function {f} is bounded below by a known constant times the log-base-2 of the maximum number of solutions to a system formed from the partial derivatives of {f} plus a certain number of linear equations, taken over cases where that number is finite. Furthermore, both theorems front on algebraic geometry and geometric invariant theory, whose rapid ascent in our field was witnessed by a workshop at IAS that we covered last June. That workshop mentioned not only Atiyah but also the further work in algebraic geometry by his student Frances Kirwan, who was contemporaneous with Ken at Oxford. Thus we may see more of the kind of connections in which Atiyah delighted, as noted in current tributes and in the “matchmaker” label promoted at last August’s ICM.
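
To give the logarithmic flavor of such bounds a concrete face (our toy example, using the classical degree argument rather than the full Baur–Strassen machinery): computing {f(x) = x^d} requires {\Omega(\log d)} multiplications, and repeated squaring matches this:

```python
def power_count_mults(x, d):
    """Compute x**d by repeated squaring, counting multiplications.
    The count is O(log d), matching the logarithmic lower bound
    that degree arguments give for computing x**d."""
    result, base, mults = 1, x, 0
    while d:
        if d & 1:            # fold in the current power of base
            result *= base
            mults += 1
        d >>= 1
        if d:                # square only while more bits remain
            base *= base
            mults += 1
    return result, mults

value, mults = power_count_mults(3, 1024)
print(mults)  # 11 multiplications for d = 1024 = 2**10
```

So the upper bound is within a constant factor of the lower bound, and neither side requires writing out the (astronomically large) value being computed.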

Open Problems

Our condolences go out to their families and colleagues.

[more tribute links]

by RJLipton+KWRegan at January 13, 2019 06:11 AM UTC

The algorithms and computational economics research groups at Duke University invite applications for one or potentially more postdoctoral positions, starting on or after July 1, 2019. The AcademicJobsOnline URL has full details as well as an online application form.


by shacharlovett at January 11, 2019 08:32 PM UTC

I'm a bit disappointed that Michael describes al-Nayrizi's and Perigal's (families of) simple perfect squarings of the (flat square) torus as different. They are, in fact, identical—not just “congruent” or “equivalent” or “isomorphic”, but actually indistinguishable. They only look different because al-Nayrizi and Perigal cut the torus into a square in two different ways.

by Jeff Erickson at January 11, 2019 01:13 PM UTC

Please consider nominating graduating Ph.D. students for the SIGecom Dissertation Award.  If you are a graduating student, consider asking your adviser or other senior mentor to nominate you.
Nominations are due on February 28, 2019.  This award is given to a student who defended a thesis in 2018.  It is a prestigious award and is accompanied by a $1500 prize.  In the past, the grand prize has been awarded to:
2017: Aviad Rubinstein, “Hardness of Approximation Between P and NP”
2016: Peng Shi, “Prediction and Optimization in School Choice”
2015: Inbal Talgam-Cohen, “Robust Market Design: Information and Computation”
2014: S. Matthew Weinberg, “Algorithms for Strategic Agents”
2013: Balasubramanian Sivan, “Prior Robust Optimization”
And the award has had seven runners-up: Rachel Cummings, Christos Tzamos, Bo Waggoner, James Wright, Xi (Alice) Gao, Yang Cai, and Sigal Oren.  You can find detailed information about the nomination process at: We look forward to reading your nominations!
Your Award Committee,
Renato Paes Leme
Aaron Roth (Chair)
Inbal Talgam-Cohen

by Kevin Leyton-Brown at January 11, 2019 03:27 AM UTC


I spent the last few days at SODA-ANALCO-ALENEX-SOSA in San Diego.  (Nice location choice, I'd say!)  Here's some news.

This will be the last ANALCO (Analytic Algorithms and Combinatorics).  Apparently submissions have been decreasing, so they've decided it will halt and the work on these topics will go into SODA and other conferences.  I'm not sure how to think of it -- I think we as a community have far too many conferences/workshops generally, but I think the SODA model of having ANALCO and ALENEX (and now SOSA, I imagine) folded in cleanly into the main conference is an excellent model.  I also like the ANALCO topics.  But I can understand the time may have come to do something else.  Thanks to everyone who worked to organize ANALCO and keep it going these many years.

It looks like SOSA (Symposium on Simplicity in Algorithms) will be taking its place in the SODA lineup.  I co-chaired the symposium with Jeremy Fineman this year, the second for the symposium.  I was surprised by the high quality of the submissions, and was then further surprised by the strong turnout at SODA.  The room was quite full for the Tuesday afternoon sessions, and there were easily 75+ people at several of the talks.  I do think there's a need for SOSA -- no other workshop/conference hits the theme of simplicity in our area, and it's a really nice fit with the rest of SODA.  I'm hoping it will last, and in particular that they'll continue to have a good number of high quality submissions, but that depends on all of you.  Ideally, there will be a positive feedback loop here -- now that there's a good home for this type of work (besides notes on the arxiv), people will be more inclined to write up and submit things to SOSA.  For Tuesday's talks, I'll call out Josh Alman's great presentation on "An Illuminating Algorithm for the Light Bulb Problem" as my favorite for the day.

With ANALCO exiting, though, I think there's more room for additional satellite events at SODA, so hopefully some people will get creative.

If I had thought about it I should have live-blogged the business meeting.  I'd say as highlights, first, Sandy Irani presented the report of the ad hoc committee to combat harassment and discrimination in the theory of computing community.   (See here for the report.)  There was an overwhelming vote to adopt their recommendations going forward.  It's good to see progress in addressing these community concerns.  Second, Shuchi Chawla will be the next PC chair, and she brought forward a plan to have SODA PC members be allowed to submit papers (with a higher bar) that was voted on favorably as well.

I suppose the last note is that Jon Kleinberg's invited talk was the conference highlight you expect a Jon Kleinberg talk to be, with interesting results and models related to fairness and implicit bias.

Thanks to SIAM and all the organizers for their hard work.

by Michael Mitzenmacher ( at January 09, 2019 10:25 PM UTC

June 25, 2018 – June 28, 2019, Ischia, Italy. CNR-IASI, as part of the MINOA project, announces a school for PhD students and post-docs on the theme “Mixed-Integer Nonlinear Optimization meets Data Science.” The school will cover the following topics: Deep Learning for AI; Clustering for Big Data; Machine Learning for Combinatorial … Continue reading Mixed-Integer Nonlinear Optimization meets Data Science

by shacharlovett at January 09, 2019 08:51 PM UTC