Theory of Computing Blog Aggregator

A few days ago an historic 165-page paper with a very short title MIP*=RE was uploaded to the arXiv by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen.  I am thankful to Dorit Aharonov and Alon Rosen for telling me about it. The paper simultaneously settles several major open problems in quantum computational complexity, the mathematical foundations of quantum physics, and mathematics. Congratulations to Zhengfeng, Anand, Thomas, John, and Henry!

A tweet by Ryan

The new paper dramatically improves the 2019 result by Anand Natarajan and John Wright asserting that NEEXP ⊆ MIP*.

In this post I will talk a little about two corners of this work and neglect many others:  I will describe a few conjectures about infinite groups related to the new result, and I will give a very gentle beginning-of-an-introduction to interactive proofs.  I will also give some useful links to papers, presentations and blog posts.

Boris Tsirelson is one of my mathematical heroes. The new paper gives a negative answer to an important problem posed by Tsirelson. (Here is a great interview with Boris.)

Resources

My Qstate: a master project (Vidick’s memories of the road since Julia Kempe offered him the problem 14 years ago; with a lot of scientific content). Older related posts on My Qstate  I, II, III.

A Notices AMS article by Vidick: From Operator Algebras to Complexity Theory and Back.

Shtetl Optimized: MIP*=RE; GLL (Ken Regan): Halting Is Poly-Time Quantum Provable; other posts on blogs: WOT (Boaz Barak); CC (Lance Fortnow); WN. Posts on the earlier 2019 result MIP* contains NEEXP.

Quanta Magazine: an article by Kevin Hartnett about the earlier result MIP* contains NEEXP; an article about the new result.

Older posts here: about Vidick’s 2012 paper (among various updates); a 2008 post mentioning sofic groups (among various updates).

Videotaped lectures from our recent winter school: Thomas Vidick on quantum protocols, Video 1, Video 2, Video 3.

A mathematical context (of one corner of the work) and wish list: stability theory for groups.

(I am thankful to Alex Lubotzky for telling me about the algebra background.)

Links: Finitary approximations of groups and their applications, by Andreas Thom, Andreas’ ICM 2018 videotaped lecture. And a great video: Best of Andreas Thom. See also this paper Stability, cohomology vanishing, and non-approximable groups by Marcus De Chiffre, Lev Glebsky, Alex Lubotzky, and Andreas Thom.

The assertion of Connes’ embedding conjecture, refuted in the MIP*=RE paper, would imply several (outrageous :)) weaker conjectures that are still open. One is the conjecture of Connes that every group is “hyperlinear.” Another famous conjecture (an affirmative answer to a question posed by Gromov) is that every group is sofic. As sofic groups are hyperlinear, we can now expect (even more than before) that non-sofic and even non-hyperlinear groups will be found. Here is a rough short explanation of what these conjectures are about. (Kirchberg’s conjecture is another group-theoretic question of this nature.)

Every finite group is a permutation group and is a linear group. This is not the case for infinite groups, and there are various interesting notions of being “approximately a permutation group” (this is “sofic”) and “approximately linear” (this is called “hyperlinear”).

Given a group Γ we want to find a sequence of functions f_n

  1. From Γ to symmetric groups S_n,
  2. or from Γ to the unitary groups U(n).

Such that, asymptotically as n grows, these functions are “almost homomorphisms” with respect to certain metrics DIST on S_n or U(n), respectively. This means that for every two elements a, b \in \Gamma,

DIST( f_n(ab), f_n(a)f_n(b) ) tends to zero as n tends to infinity.

Now,

  1. Sofic groups refer to the normalized Hamming metric on the symmetric groups.
  2. Hyperlinear groups refer to the metric given by the normalized Hilbert-Schmidt norm on the unitary groups.
  3. MF groups refer to the unitary groups again, but with the operator norm this time.

And there are various other metrics that were also considered. The assertion of the famous embedding conjecture of Connes on von Neumann algebras (now refuted by the new result) implies that every group is hyperlinear.
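For concreteness, the metrics in items 1 and 2 can be taken to be (one standard choice of normalization; the precise convention is not spelled out above)

d_{\mathrm{Ham}}(\sigma,\tau) = \frac{1}{n}\,|\{i : \sigma(i) \neq \tau(i)\}| on S_n, and d_{\mathrm{HS}}(U,V) = \frac{1}{\sqrt{n}}\,\|U-V\|_{\mathrm{HS}} on U(n), where \|A\|_{\mathrm{HS}} = \sqrt{\mathrm{Tr}(A^*A)}.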

A remaining wish list: find a non-sofic group; find a non-hyperlinear group; refute Kirchberg’s conjecture (if it has not already been refuted).

Interactive proofs and some computational complexity background.

P, NP, IP, MIP

Links: here are slides of a great talk by Yael Kalai: The evolution of proof in computer science; a blog post on this topic by Yael Kalai; and a post here about Yael’s 2018 ICM paper and lecture.

A decision problem is in P if there is a polynomial-time algorithm (in terms of the input size) to test whether the answer is yes or no. A decision problem is in NP if there is a proof for a YES answer that can be verified in polynomial time.

Here are two examples: the question of whether a graph has a perfect matching is in P. The question of whether a graph has a Hamiltonian cycle is in NP: if the answer is yes, a prover can give a proof that the verifier can check in a polynomial number of steps.
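To make the NP example concrete, here is a minimal verifier sketch in Python (the function name and graph encoding are my own illustration, not taken from any of the linked sources); it checks a claimed Hamiltonian cycle in time polynomial in the input size:

    def verify_hamiltonian_cycle(n, edges, cycle):
        # n vertices; edges is a set of frozenset pairs; cycle is a claimed vertex ordering.
        if sorted(cycle) != list(range(n)):          # must visit every vertex exactly once
            return False
        return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edges
                   for i in range(n))                # consecutive vertices must be adjacent

    # Example: the 4-cycle on vertices 0-1-2-3.
    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
    print(verify_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))  # True

Finding such a cycle is another matter entirely; the point is only that checking a given certificate is fast.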

IP is a complexity class based on a notion of interactive proof where, through a protocol of questions and answers, the prover can convince the verifier (with arbitrarily high probability) that the answer is yes. Following a sequence of startling developments, Adi Shamir proved that IP equals the quite large complexity class PSPACE. When we consider several non-interacting provers (two provers suffice), the computational power, denoted MIP, is even larger: László Babai, Lance Fortnow, and Carsten Lund proved that MIP=NEXP! NEXP is the class of decision problems where, if the answer is yes, a prover can give a proof that requires the verifier (at most) an exponential number of steps to verify.

Enter quantum computation and entanglement

We replace the model of classical computation with quantum computation. Each of the two provers, Prover1 and Prover2, has access to a separate set of m qubits, but they can prepare in advance a complicated quantum state on those 2m qubits. When we run the verification protocol, each prover has access only to its own m qubits and, as in the classical case, the two provers cannot communicate. These types of verification protocols give the complexity class MIP*. In 2012, Tsuyoshi Ito and Thomas Vidick proved that MIP* contains NEXP. In this 2012 post I reported on an unusual seminar we ran on the problem.

Interactive quantum lecture: We had an unusual quantum seminar presentation by Michael Ben-Or on the work A multi-prover interactive proof for NEXP sound against entangled provers by Tsuyoshi Ito and Thomas Vidick. Michael ran Vidick’s recent videotaped talk on the matter, and from time to time he and Dorit acted as a pair of provers with the rest of the audience as the verifier. (Michael was one of the authors of the very first paper introducing multi-prover interactive proofs.)

Let me mention also a related 2014 paper by Yael Kalai, Ran Raz, and Ron Rothblum: How to delegate computations: the power of no-signaling proofs. They considered two provers that are limited by the “non-signaling principle” and showed that the power of interactive proofs contains NEXP. (Here is a videotaped lecture by Ran Raz.)

In April 2019, Anand Natarajan and John Wright uploaded a paper with a proof that MIP* contains NEEXP. (NEEXP is the class of decision problems where, if the answer is yes, a prover can give a proof that requires the verifier (at most) a doubly exponential number of steps to verify.)

Here is a nice quote from Hartnett’s Quanta article regarding the Natarajan-Wright breakthrough:

Some problems are too hard to solve in any reasonable amount of time. But their solutions are easy to check. Given that, computer scientists want to know: How complicated can a problem be while still having a solution that can be verified?

Turns out, the answer is: Almost unimaginably complicated.

In a paper released in April, two computer scientists dramatically increased the number of problems that fall into the hard-to-solve-but-easy-to-verify category. They describe a method that makes it possible to check answers to problems of almost incomprehensible complexity. “It seems insane,” said Thomas Vidick, a computer scientist at the California Institute of Technology who wasn’t involved in the new work.

Now, with the new result, I wonder whether this bold philosophical interpretation is sensible: there is a shared quantum state that will allow two non-interacting provers (with unlimited computational power) to convince a mathematician that a given mathematical statement has a proof (whenever it does have one), and also to convince a historian or a futurist about any question regarding the past or future evolution of the universe.

A little more information and links

The negative answer to Tsirelson’s problem asserts, roughly, that there are types of correlations that can be produced by an infinite quantum system but cannot even be approximated by finite systems. Connes’ 1976 embedding conjecture (now refuted) from the theory of von Neumann algebras asserts that “every type \mathrm{II}_1 von Neumann factor embeds into an ultrapower of the hyperfinite \mathrm{II}_1 factor.”

The abstract of the new paper mentions a few other works that are important for the new proof.

Anand Natarajan and Thomas Vidick. Low-degree testing for quantum states, and a quantum entangled games PCP for QMA. (FOCS, 2018.)

Anand Natarajan and John Wright. NEEXP ⊆ MIP∗ (FOCS 2019) (We mentioned it above.)

Joe Fitzsimons, Zhengfeng Ji, Thomas Vidick, and Henry Yuen. Quantum proof systems for iterated exponential time, and beyond.

The abstract also mentions two papers about the connections with Tsirelson’s problem and Connes’ embedding conjecture:

Tobias Fritz. Tsirelson’s problem and Kirchberg’s conjecture. Reviews in Mathematical
Physics, 24(05):1250012, 2012.

Marius Junge, Miguel Navascues, Carlos Palazuelos, David Perez-Garcia, Volkher B Scholz, and Reinhard F Werner. Connes’ embedding problem and Tsirelson’s problem. Journal of Mathematical Physics, 52(1):012102, 2011.

Let me also mention

Narutaka Ozawa. About the Connes embedding conjecture. Japanese Journal of Mathematics, 8(1):147–183, 2013.

by Gil Kalai at January 17, 2020 01:10 PM UTC

In the field of constraint satisfaction problems (CSP), promise CSPs are an exciting new direction of study. In a promise CSP, each constraint comes in two forms: "strict" and "weak," and in the associated decision problem one must distinguish between being able to satisfy all the strict constraints versus not being able to satisfy all the weak constraints. The most commonly cited example of a promise CSP is the approximate graph coloring problem, which has recently seen exciting progress [BKO19, WZ20] benefiting from a systematic algebraic approach to promise CSPs based on "polymorphisms": operations that map tuples in the strict form of each constraint to tuples in the corresponding weak form. In this work, we present a simple algorithm which in polynomial time solves the decision problem for all promise CSPs that admit infinitely many symmetric polymorphisms, that is, polymorphisms that are invariant under permutations of their coordinates. This generalizes previous work of the first two authors [BG19]. We also extend this algorithm to a more general class of block-symmetric polymorphisms. As a corollary, this single algorithm solves all polynomial-time tractable Boolean CSPs simultaneously. These results give a new perspective on Schaefer's classic dichotomy theorem and shed further light on how symmetries of polymorphisms enable algorithms. Finally, we show that block-symmetric polymorphisms are not only sufficient but also necessary for this algorithm to work, thus establishing its precise power.
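To unpack the notion of a polymorphism in the simplest (non-promise) special case, where the strict and weak forms of a constraint coincide, here is a tiny Python check (an illustration of mine, not taken from the paper) that the ternary majority operation is a symmetric polymorphism of the Boolean constraint "x OR y":

    from itertools import product

    def majority(a, b, c):
        # A symmetric operation: permuting its inputs does not change the value.
        return 1 if a + b + c >= 2 else 0

    R = {(0, 1), (1, 0), (1, 1)}   # the tuples satisfying the constraint x OR y
    # Polymorphism check: applying majority coordinatewise to satisfying tuples
    # must yield another satisfying tuple.
    assert all(tuple(majority(*column) for column in zip(t1, t2, t3)) in R
               for t1, t2, t3 in product(R, repeat=3))
    print("majority is a polymorphism of x OR y")

In fact "x OR y" is preserved by the k-ary majority for every odd k, which is the kind of infinite family of symmetric polymorphisms that the theorem requires.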

at January 17, 2020 02:50 AM UTC

Authors: Bartłomiej Dudek, Paweł Gawrychowski, Tatiana Starikovskaya
Download: PDF
Abstract: In the problem of $\texttt{Generalised Pattern Matching}\ (\texttt{GPM})$ [STOC'94, Muthukrishnan and Palem], we are given a text $T$ of length $n$ over an alphabet $\Sigma_T$, a pattern $P$ of length $m$ over an alphabet $\Sigma_P$, and a matching relationship $\subseteq \Sigma_T \times \Sigma_P$, and must return all substrings of $T$ that match $P$ (reporting) or the number of mismatches between each substring of $T$ of length $m$ and $P$ (counting). In this work, we improve over all previously known algorithms for this problem for various parameters describing the input instance:

* $\mathcal{D}\,$ being the maximum number of characters that match a fixed character,

* $\mathcal{S}\,$ being the number of pairs of matching characters,

* $\mathcal{I}\,$ being the total number of disjoint intervals of characters that match the $m$ characters of the pattern $P$.

At the heart of our new deterministic upper bounds for $\mathcal{D}\,$ and $\mathcal{S}\,$ lies a faster construction of superimposed codes, which solves an open problem posed in [FOCS'97, Indyk] and can be of independent interest.

To conclude, we demonstrate first lower bounds for $\texttt{GPM}$. We start by showing that any deterministic or Monte Carlo algorithm for $\texttt{GPM}$ must use $\Omega(\mathcal{S})$ time, and then proceed to show higher lower bounds for combinatorial algorithms. These bounds show that our algorithms are almost optimal, unless a radically new approach is developed.
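As a point of reference for the problem statement (purely illustrative, and unrelated to the improved algorithms of the paper), a naive O(nm)-time solution to the reporting version of GPM can be written in a few lines of Python:

    def gpm_report(text, pattern, match):
        # match is the set of (text_char, pattern_char) pairs declared to be matching.
        n, m = len(text), len(pattern)
        return [i for i in range(n - m + 1)
                if all((text[i + j], pattern[j]) in match for j in range(m))]

    # Wildcard matching as a special case: '?' in the pattern matches any text character.
    alphabet = "ab"
    match = {(t, t) for t in alphabet} | {(t, "?") for t in alphabet}
    print(gpm_report("abab", "a?", match))  # [0, 2]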

at January 17, 2020 12:00 AM UTC

Authors: Marc Hellmuth, Carsten R. Seemann, Peter F. Stadler
Download: PDF
Abstract: Binary relations derived from labeled rooted trees play an important role in mathematical biology as formal models of evolutionary relationships. The (symmetrized) Fitch relation formalizes xenology as the pairs of genes separated by at least one horizontal transfer event. As a natural generalization, we consider symmetrized Fitch maps, that is, symmetric maps $\varepsilon$ that assign a subset of colors to each pair of vertices in $X$ and that can be explained by a tree $T$ with edges that are labeled with subsets of colors in the sense that the color $m$ appears in $\varepsilon(x,y)$ if and only if $m$ appears in a label along the unique path between $x$ and $y$ in $T$. We first give an alternative characterization of the monochromatic case and then give a characterization of symmetrized Fitch maps in terms of compatibility of a certain set of quartets. We show that recognition of symmetrized Fitch maps is NP-complete but FPT in general. In the restricted case where $|\varepsilon(x,y)|\leq 1$ the problem becomes polynomial, since such maps coincide with the class of monochromatic Fitch maps whose graph-representations form precisely the class of complete multi-partite graphs.

at January 17, 2020 12:00 AM UTC

Authors: Kohei Yamada, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda
Download: PDF
Abstract: The longest common subsequence (LCS) problem is a central problem in stringology that finds the longest common subsequence of two given strings $A$ and $B$. More recently, a set of four constrained LCS problems (called the generalized constrained LCS problems) was proposed by Chen and Chao [J. Comb. Optim, 2011]. In this paper, we consider the substring-excluding constrained LCS (STR-EC-LCS) problem. A string $Z$ is said to be an STR-EC-LCS of two given strings $A$ and $B$ excluding $P$ if $Z$ is one of the longest common subsequences of $A$ and $B$ that does not contain $P$ as a substring. Wang et al. proposed a dynamic programming solution which computes an STR-EC-LCS in $O(mnr)$ time and space where $m = |A|, n = |B|, r = |P|$ [Inf. Process. Lett., 2013]. In this paper, we show a new solution for the STR-EC-LCS problem. Our algorithm computes an STR-EC-LCS in $O(n|\Sigma| + (L+1)(m-L+1)r)$ time where $\Sigma$ denotes the set of distinct characters occurring in both $A$ and $B$ (so $|\Sigma| \leq \min\{m, n\}$), and $L$ is the length of the STR-EC-LCS. This algorithm is faster than the $O(mnr)$-time algorithm for short/long STR-EC-LCS (namely, $L \in O(1)$ or $m-L \in O(1)$), and is at least as efficient as the $O(mnr)$-time algorithm in all cases.
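For background, here is a compact sketch (my own reconstruction, assumed rather than taken from either paper) of an O(mnr)-style dynamic program in the spirit of the Wang et al. solution; it adds the KMP automaton state of the forbidden string P as a third dimension and blocks any transition that would complete an occurrence of P:

    def kmp_failure(P):
        fail, k = [0] * len(P), 0
        for i in range(1, len(P)):
            while k and P[i] != P[k]:
                k = fail[k - 1]
            if P[i] == P[k]:
                k += 1
            fail[i] = k
        return fail

    def str_ec_lcs_length(A, B, P):
        # Longest common subsequence of A and B avoiding P as a substring (P nonempty).
        m, n, r = len(A), len(B), len(P)
        fail = kmp_failure(P)

        def step(k, c):                  # KMP state reached after reading c in state k < r
            while k and P[k] != c:
                k = fail[k - 1]
            return k + 1 if P[k] == c else 0

        NEG = float("-inf")
        # prev[j][k]: best length for prefixes A[:i], B[:j] whose chosen subsequence
        # currently matches a length-k proper prefix of P.
        prev = [[0 if k == 0 else NEG for k in range(r)] for _ in range(n + 1)]
        for i in range(1, m + 1):
            cur = [row[:] for row in prev]                    # option: skip A[i-1]
            for j in range(1, n + 1):
                for k in range(r):                            # option: skip B[j-1]
                    cur[j][k] = max(cur[j][k], cur[j - 1][k])
                if A[i - 1] == B[j - 1]:                      # option: take the common character
                    for k in range(r):
                        if prev[j - 1][k] == NEG:
                            continue
                        k2 = step(k, A[i - 1])
                        if k2 < r:                            # otherwise P would occur as a substring
                            cur[j][k2] = max(cur[j][k2], prev[j - 1][k] + 1)
            prev = cur
        return max(0, max(max(row) for row in prev))

    print(str_ec_lcs_length("abcde", "abcde", "bc"))  # 4, e.g. 'acde'

(With a precomputed automaton transition table the running time is exactly O(mnr); the failure-function stepping above is enough for illustration.)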

at January 17, 2020 12:00 AM UTC

Authors: Lars Jaffke, Mateus de Oliveira Oliveira, Hans Raj Tiwary
Download: PDF
Abstract: It can be shown that each permutation group $G \sqsubseteq S_n$ can be embedded, in a well defined sense, in a connected graph with $O(n+|G|)$ vertices. Some groups, however, require much fewer vertices. For instance, $S_n$ itself can be embedded in the $n$-clique $K_n$, a connected graph with n vertices. In this work, we show that the minimum size of a context-free grammar generating a finite permutation group $G \sqsubseteq S_n$ can be upper bounded by three structural parameters of connected graphs embedding $G$: the number of vertices, the treewidth, and the maximum degree. More precisely, we show that any permutation group $G \sqsubseteq S_n$ that can be embedded into a connected graph with $m$ vertices, treewidth k, and maximum degree $\Delta$, can also be generated by a context-free grammar of size $2^{O(k\Delta\log\Delta)}\cdot m^{O(k)}$. By combining our upper bound with a connection between the extension complexity of a permutation group and the grammar complexity of a formal language, we also get that these permutation groups can be represented by polytopes of extension complexity $2^{O(k \Delta\log \Delta)}\cdot m^{O(k)}$. The above upper bounds can be used to provide trade-offs between the index of permutation groups, and the number of vertices, treewidth and maximum degree of connected graphs embedding these groups. In particular, by combining our main result with a celebrated $2^{\Omega(n)}$ lower bound on the grammar complexity of the symmetric group $S_n$ we have that connected graphs of treewidth $o(n/\log n)$ and maximum degree $o(n/\log n)$ embedding subgroups of $S_n$ of index $2^{cn}$ for some small constant $c$ must have $n^{\omega(1)}$ vertices. This lower bound can be improved to exponential on graphs of treewidth $n^{\varepsilon}$ for $\varepsilon<1$ and maximum degree $o(n/\log n)$.

at January 17, 2020 12:00 AM UTC

Authors: Joon-Seok Kim, Carola Wenk
Download: PDF
Abstract: Simplification is one of the fundamental operations used in geoinformation science (GIS) to reduce the size or representation complexity of geometric objects. Although different simplification methods can be applied depending on one's purpose, many applications employ a simplification designed to preserve the spatial properties of the objects. This article addresses a 2D simplification method that works especially well on human-made structures such as 2D footprints of buildings and indoor spaces. The method simplifies polygons in an iterative manner. The simplification is segment-wise and takes account of intrusion, extrusion, offset, and corner portions of 2D structures, preserving their dominant frame.

at January 17, 2020 12:00 AM UTC

Authors: João F. Doriguello, Ashley Montanaro
Download: PDF
Abstract: In this work we revisit the Boolean Hidden Matching communication problem, which was the first communication problem in the one-way model to demonstrate an exponential classical-quantum communication separation. In this problem, Alice's bits are matched into pairs according to a partition that Bob holds. These pairs are compressed using a Parity function and it is promised that the final bit-string is equal either to another bit-string Bob holds, or its complement. The problem is to decide which case is the correct one. Here we generalize the Boolean Hidden Matching problem by replacing the parity function with an arbitrary function $f$. Efficient communication protocols are presented depending on the sign-degree of $f$. If its sign-degree is less than or equal to 1, we show an efficient classical protocol. If its sign-degree is less than or equal to $2$, we show an efficient quantum protocol. We then completely characterize the classical hardness of all symmetric functions $f$ of sign-degree greater than or equal to $2$, except for one family of specific cases. We also prove, via Fourier analysis, a classical lower bound for any function $f$ whose pure high degree is greater than or equal to $2$. Similarly, we prove, also via Fourier analysis, a quantum lower bound for any function $f$ whose pure high degree is greater than or equal to $3$. These results give a large family of new exponential classical-quantum communication separations.
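To unpack the promise in the problem statement, here is a toy instance generator (illustrative only; the sizes and variable names are arbitrary):

    import random

    n = 8
    x = [random.randint(0, 1) for _ in range(n)]                 # Alice's input bits

    idx = list(range(n))
    random.shuffle(idx)
    M = [(idx[2 * i], idx[2 * i + 1]) for i in range(n // 2)]    # Bob's perfect matching
    parities = [x[i] ^ x[j] for i, j in M]                       # each pair compressed by Parity

    # Promise: Bob's string w equals the parity string or its bitwise complement,
    # and the problem is to decide which case holds.
    if random.random() < 0.5:
        w, case = parities[:], "equal"
    else:
        w, case = [1 - b for b in parities], "complement"

    print("matching:", M)
    print("parities:", parities, " w:", w, " case:", case)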

at January 17, 2020 12:00 AM UTC

Scott’s preface: Imagine that every time you turned your blog over to a certain topic, you got denounced on Twitter and Reddit as a privileged douchebro, entitled STEMlord, counterrevolutionary bourgeoisie, etc. etc. The sane response would simply be to quit blogging about that topic. But there’s also an insane (or masochistic?) response: the response that says, “but if everyone like me stopped talking, we’d cede the field by default to the loudest, angriest voices on all sides—thereby giving those voices exactly what they wanted. To hell with that!”

A few weeks ago, while I was being attacked for sharing Steven Pinker’s guest post about NIPS vs. NeurIPS, I received a beautiful message of support from a PhD student in physical chemistry and quantum computing named Karen Morenz. Besides her strong words of encouragement, Karen wanted to share with me an essay she had written on Medium about why too many women leave STEM.

Karen’s essay, I found, marshaled data, logic, and her own experience in support of an insight that strikes me as true and important and underappreciated—one that dovetails with what I’ve heard from many other women in STEM fields, including my wife Dana. So I asked Karen for permission to reprint her essay on this blog, and she graciously agreed.

Briefly: anyone with a brain and a soul wants there to be many more women in STEM. Karen outlines a realistic way to achieve this shared goal. Crucially, Karen’s way is not about shaming male STEM nerds for their deep-seated misogyny, their arrogant mansplaining, or their gross, creepy, predatory sexual desires. Yes, you can go the shaming route (God knows it’s being tried). If you do, you’ll probably snare many guys who really do deserve to be shamed as creeps or misogynists, along with many more who don’t. Yet for all your efforts, Karen predicts, you’ll no more solve the original problem of too few women in STEM, than arresting the kulaks solved the problem of lifting the masses out of poverty.

For you still won’t have made a dent in the real issue: namely that, the way we’ve set things up, pursuing an academic STEM career demands fanatical devotion, to the exclusion of nearly everything else in life, between the ages of roughly 18 and 35. And as long as that’s true, Karen says, the majority of talented women are going to look at academic STEM, in light of all the other great options available to them, and say “no thanks.” Solving this problem might look like more money for maternity leave and childcare. It might also look like re-imagining the academic career trajectory itself, to make it easier to rejoin it after five or ten years away. Way back in 2006, I tried to make this point in a blog post called Nerdify the world, and the women will follow. I’m grateful to Karen for making it more cogently than I did.

Without further ado, here’s Karen’s essay. –SA

Is it really just sexism? An alternative argument for why women leave STEM

by Karen Morenz

Everyone knows that you’re not supposed to start your argument with ‘everyone knows,’ but in this case, I think we ought to make an exception:

Everyone knows that STEM (Science, Technology, Engineering and Mathematics) has a problem retaining women (see, for example Jean, Payne, and Thompson 2015). We pour money into attracting girls and women to STEM fields. We pour money into recruiting women, training women, and addressing sexism, both overt and subconscious. In 2011, the United States spent nearly $3 billion tax dollars on STEM education, of which roughly one third was spent supporting and encouraging underrepresented groups to enter STEM (including women). And yet, women are still leaving at alarming rates.

Alarming? Isn’t that a little, I don’t know, alarmist? Well, let’s look at some stats.

A recent report by the National Science Foundation (2011) found that women received 20.3% of the bachelor’s degrees and 18.6% of the PhD degrees in physics in 2008. In chemistry, women earned 49.95% of the bachelor’s degrees but only 36.1% of the doctoral degrees. By comparison, in biology women received 59.8% of the bachelor’s degrees and 50.6% of the doctoral degrees. A recent article in Chemical and Engineering News showed a chart based on a survey of life sciences workers by Liftstream and MassBio demonstrating how women are vastly underrepresented in science leadership despite earning degrees at similar rates, which I’ve copied below. The story is the same in academia, as you can see on the second chart — from comparable or even larger number of women at the student level, we move towards a significantly larger proportion of men at the more and more advanced stages of an academic career.

Although 74% of women in STEM report “loving their work,” more than half (56%, in fact) leave over the course of their career — largely at the “mid-level” point, when the loss of their talent is most costly as they have just completed training and begun to contribute maximally to the work force.

A study by Dr. Flaherty found that women who obtain a faculty position in astronomy spent on average 1 year less than their male counterparts between completing their PhD and obtaining their position — but he concluded that this is because women leave the field at a rate 3 to 4 times greater than men, and in particular, if they do not obtain a faculty position quickly, will simply move to another career. So, women and men are hired at about the same rate during the early years of their post docs, but women stop applying to academic positions and drop out of the field as time goes on, pulling down the average time to hiring for women.

There are many more studies to this effect. At this point, the assertion that women leave STEM at an alarming rate after obtaining PhDs is nothing short of an established fact. In fact, it’s actually a problem across all academic disciplines, as you can see in this matching chart showing the same phenomenon in humanities, social sciences, and education. The phenomenon has been affectionately dubbed the “leaky pipeline.”

But hang on a second, maybe there just aren’t enough women qualified for the top levels of STEM? Maybe it’ll all get better in a few years if we just wait around doing nothing?

Nope, sorry. This study says that 41% of highly qualified STEM people are female. And also, it’s clear from the previous charts and stats that a significantly larger number of women are getting PhDs than going on to be professors, in comparison to their male counterparts. Dr. Laurie Glimcher, when she started her professorship at Harvard University in the early 1980s, remembers seeing very few women in leadership positions. “I thought, ‘Oh, this is really going to change dramatically,’ ” she says. But 30 years later, “it’s not where I expected it to be.” Her experiences are similar to those of other leading female faculty.

So what gives? Why are all the STEM women leaving?

It is widely believed that sexism is the leading problem. A quick google search of “sexism in STEM” will turn up a veritable cornucopia of articles to that effect. And indeed, around 60% of women report experiencing some form of sexism in the last year (Robnett 2016). So, that’s clearly not good.

And yet, if you ask leading women researchers like Nobel Laureate in Physics 2018, Professor Donna Strickland, or Canada Research Chair in Advanced Functional Materials (Chemistry), Professor Eugenia Kumacheva, they say that sexism was not a barrier in their careers. Moreover, extensive research has shown that sexism has overall decreased since Professors Strickland and Kumacheva (for example) were starting their careers. Even more interestingly, Dr. Rachael Robnett showed that more mathematical fields such as Physics have a greater problem with sexism than less mathematical fields, such as Chemistry, a finding which rings true with the subjective experience of many women I know in Chemistry and Physics. However, as we saw above, women leave the field of Chemistry in greater proportions following their BSc than they leave Physics. On top of that, although 22% of women report experiencing sexual harassment at work, the proportion is the same among STEM and non-STEM careers, and yet women leave STEM careers at a much higher rate than non-STEM careers.

So, it seems that sexism cannot fully explain why women with STEM PhDs are leaving STEM. At the point when women have earned a PhD, for the most part they have already survived the worst of the sexism. They’ve already proven themselves to be generally thick-skinned and, as anyone with a PhD can attest, very stubborn in the face of overwhelming difficulties. Sexism is frustrating, and it can limit advancement, but it doesn’t fully explain why we have so many women obtaining PhDs in STEM, and then leaving. In fact, at least in the U of T chemistry department, faculty hires are directly proportional to the applicant pool — although the exact number of applicants is not made public, from public information we can see that approximately one in four interview invitees are women, and approximately one in four hires are women. Our hiring committees have received bias training, and it seems that it has been largely successful. That’s not to say that we’re done, but it’s time to start looking elsewhere to explain why there are so few women sticking around.

So why don’t more women apply?

Well, one truly brilliant researcher had the groundbreaking idea of asking women why they left the field. When you ask women why they left, the number one reason they cite is balancing work/life responsibilities — which as far as I can tell is a euphemism for family concerns.

The research is in on this. Women who stay in academia expect to marry later, and delay or completely forego having children, and if they do have children, plan to have fewer than their non-STEM counterparts (Sassler et al. 2016; Owens 2012). Men in STEM have no such difference compared to their non-STEM counterparts; they marry and have children at about the same ages and rates as their non-STEM counterparts (Sassler et al. 2016). Women leave STEM in droves in their early to mid thirties (Funk and Parker 2018) — the time when women’s fertility begins to decrease, and risks of childbirth complications begin to skyrocket for both mother and child. Men don’t see an effect on their fertility until their mid forties. Of the 56% of women who leave STEM, 50% wind up self-employed or using their training in a not-for-profit or in government, 30% leave for a non-STEM, more ‘family friendly’ career, and 20% leave to be stay-at-home moms (Ashcraft and Blithe 2002). Meanwhile, institutions with better childcare and maternity leave policies have twice (!) the number of female faculty in STEM (Troeger 2018). In analogy to the affectionately named “leaky pipeline,” the challenge of balancing motherhood and career has been titled the “maternal wall.”

To understand the so-called maternal wall better, let’s take a quick look at the sketch of a typical academic career.

For the sake of this exercise, let’s all pretend to be me. I’m a talented 25 year old PhD candidate studying Physical Chemistry — I use laser spectroscopy to try to understand atypical energy transfer processes in innovative materials that I hope will one day be used to make vastly more efficient solar panels. I got my BSc in Chemistry and Mathematics at the age of 22, and have published 4 scientific papers in two different fields already (Astrophysics and Environmental Chemistry). I’ve got a big scholarship, and a lot of people supporting me to give me the best shot at an academic career — a career I dearly want. But, I also want a family — maybe two or three kids. Here’s what I can expect if I pursue an academic career:

With any luck, 2–3 years from now I’ll graduate with a PhD, at the age of 27. Academics are expected to travel a lot, and to move a lot, especially in their 20s and early 30s — all of the key childbearing years. I’m planning to go on exchange next year, and then the year after that I’ll need to work hard to wrap up research, write a thesis, and travel to several conferences to showcase my work. After I finish my PhD, I’ll need to undertake one or two post doctoral fellowships, lasting one or two years each, probably in completely different places. During that time, I’ll start to apply for professorships. In order to do this, I’ll travel around to conferences to advertise my work and to meet important leaders in my field, and then, if I am invited for interviews, I’ll travel around to different universities for two or three days at a time to undertake these interviews. This usually occurs in a person’s early 30s — our helpful astronomy guy, Dr. Flaherty, found the average time to hiring was 5 years, so let’s say I’m 32 at this point. If offered a position, I’ll spend the next year or two renovating and building a lab, buying equipment, recruiting talented graduate students, and designing and teaching courses. People work really, really hard during this time and have essentially no leisure time. Now I’m 34. Within usually 5 years I’ll need to apply for tenure. This means that by the time I’m 36, I’ll need to be making significant contributions in my field, and then in the final year before applying for tenure, I will once more need to travel to many conferences to promote my work, in order to secure tenure — if I fail to do so, my position at the university would probably be terminated. Although many universities offer a “tenure extension” in cases where an assistant professor has had a child, this does not solve all of the problems. Taking a year off during that critical 5 or 6 year period often means that the research “goes bad” — students flounder, projects that were promising get “scooped” by competitors at other institutions, and sometimes, in biology and chemistry especially, experiments literally go bad. You wind up needing to rebuild much more than just a year’s worth of effort.

At no point during this time do I appear stable enough, career-wise, to take even six months off to be pregnant and care for a newborn. Hypothetical future-me is travelling around, or even moving, conducting and promoting my own independent research and training students. As you’re likely aware, very pregnant people and newborns don’t travel well. And academia has a very individualistic and meritocratic culture. Starting at the graduate level, huge emphasis is placed on independent research and independent contributions, rather than on valuing team efforts. This feature of academia is both a blessing and a curse. The individualistic culture means that people have the independence and the freedom to pursue whatever research interests them — in fact this is the main draw for me personally. But it also means that there is often no one to fall back on when you need extra support, and because of biological constraints, this winds up impacting women more than men.

At this point, I need to make sure that you’re aware of some basics of female reproductive biology. According to Wikipedia, the unquestionable source of all reliable knowledge, at age 25, my risk of conceiving a baby with chromosomal abnormalities (including Down’s Syndrome) is 1 in about 1400. By 35, that risk more than quadruples to 1 in 340. At 30, I have a 75% chance of a successful birth in one year, but by 35 it has dropped to 66%, and by 40 it’s down to 44%. Meanwhile, 87 to 94% of women report at least 1 health problem immediately after birth, and 1.5% of mothers have a severe health problem, while 31% have long-term persistent health problems as a result of pregnancy (defined as lasting more than six months after delivery). Furthermore, mothers over the age of 35 are at higher risk for pregnancy complications like preterm delivery, hypertension, superimposed preeclampsia, severe preeclampsia (Cavazos-Rehg et al 2016). Because of factors like these, pregnancies in women over 35 are known as “geriatric pregnancies” due to the drastically increased risk of complications. This tight timeline for births is often called the “biological clock” — if women want a family, they basically need to start before 35. Now, that’s not to say it’s impossible to have a child later on, and in fact some studies show that it has positive impacts on the child’s mental health. But it is riskier.

So, women with a PhD in STEM know that they have the capability to make interesting contributions to STEM, and to make plenty of money doing it. They usually marry someone who also has or expects to make a high salary as well. But this isn’t the only consideration. Such highly educated women are usually aware of the biological clock and the risks associated with pregnancy, and are confident in their understanding of statistical risks.

The Irish say, “The common challenge facing young women is achieving a satisfactory work-life balance, especially when children are small. From a career perspective, this period of parenthood (which after all is relatively short compared to an entire working life) tends to coincide exactly with the critical point at which an individual’s career may or may not take off. […] All the evidence shows that it is at this point that women either drop out of the workforce altogether, switch to part-time working or move to more family-friendly jobs, which may be less demanding and which do not always utilise their full skillset.”

And in the Netherlands, “The research project in Tilburg also showed that women academics have more often no children or fewer children than women outside academia.” Meanwhile, in Italy, “On a personal level, the data show that for a significant number of women there is a trade-off between family and work: a large share of female economists in Italy do not live with a partner and do not have children.”

Most jobs available to women with STEM PhDs offer greater stability and a larger salary earlier in the career. Moreover, most non-academic careers have less emphasis on independent research, meaning that employees usually work within the scope of a larger team, and so if a person has to take some time off, there are others who can help cover their workload. By and large, women leave to go to a career where they will be stable, well funded, and well supported, even if it doesn’t fulfill their passion for STEM — or they leave to be stay-at-home moms or self-employed.

I would presume that if we made academia a more feasible place for a woman with a family to work, we could keep almost all of those 20% of leavers who leave to just stay at home, almost all of the 30% who leave to self-employment, and all of those 30% who leave to more family friendly careers (after all, if academia were made to be as family friendly as other careers, there would be no incentive to leave). Of course, there is nothing wrong with being a stay at home parent — it’s an admirable choice and contributes greatly to our society. One estimate valued the equivalent salary benefit of stay-at-home parenthood at about $160,000/year. Moreover, children with a stay-at-home parent show long term benefits such as better school performance — something that most academic women would want for their children. But a lot of people only choose it out of necessity — about half of stay-at-home moms would prefer to be working (Ciciolla, Curlee, & Luthar 2017). When the reality is that your salary is barely more than the cost of daycare, then a lot of people wind up giving up and staying home with their kids rather than paying for daycare. In a heterosexual couple it will usually be the woman that winds up staying home since she is the one who needs to do things like breast feed anyways. And so we lose these women from the workforce.

And yet, somehow, during this informal research adventure of mine, most scholars and policy makers seem to be advising that we try to encourage young girls to be interested in STEM, and to address sexism in the workplace, with the implication that this will fix the high attrition rate in STEM women. But from what I’ve found, the stats don’t back up sexism as the main reason women leave. There is sexism, and that is a problem, and women do leave STEM because of it — but it’s a problem that we’re already dealing with pretty successfully, and it’s not why the majority of women who have already obtained STEM PhDs opt to leave the field. The whole family planning thing is huge and for some reason, almost totally swept under the rug — mostly because we’re too shy to talk about it, I think.

In fact, I think that the plethora of articles suggesting that the problem is sexism actually contributes to our unwillingness to talk about the family planning problem, because it reinforces the perception that men in power will not hire a woman for fear that she’ll get pregnant and take time off. Why would anyone talk about how they want to have a family when they keep hearing that even the mere suggestion of such a thing will limit their chances of being hired? I personally know women who have avoided bringing up the topic with colleagues or supervisors for fear of professional repercussions. So we spend all this time and energy talking about how sexism is really bad, and very little time trying to address the family planning challenge, because, I guess, as the stats show, if women are serious enough about science then they just give up on the family (except for the really, really exceptional ones who can handle the stresses of both simultaneously).

To be very clear, I’m not saying that sexism is not a problem. What I am saying is that, thanks to the sustained efforts of a large number of people over a long period of time, we’ve reduced the sexism problem to the point where, at least at the graduate level, it is no longer the largest major barrier to women’s advancement in STEM. Hurray! That does not mean that we should stop paying attention to the issue of sexism, but does mean that it’s time to start paying more attention to other issues, like how to properly support women who want to raise a family while also maintaining a career in STEM.

So what can we do to better support STEM women who want families?

A couple of solutions have been tentatively tested. From a study mentioned above, it’s clear that providing free and conveniently located childcare makes a colossal difference to women’s choices of whether or not to stay in STEM, alongside extended and paid maternity leave. Another popular and successful strategy was implemented by a leading woman in STEM, Laurie Glimcher, a past Harvard Professor in Immunology and now CEO of Dana-Farber Cancer Institute. While working at NIH, Dr. Glimcher designed a program to provide primary caregivers (usually women) with an assistant or lab technician to help manage their laboratories while they cared for children. Now, at Dana-Farber Cancer Institute, she has created a similar program to pay for a technician or postdoctoral researcher for assistant professors. In the academic setting, Dr. Glimcher’s strategies are key for helping to alleviate the challenges associated with the individualistic culture of academia without compromising women’s research and leadership potential.

For me personally, I’m in the ideal situation for an academic woman. I graduated my BSc with high honours in four years, and with many awards. I’ve already had success in research and have published several peer reviewed papers. I’ve faced some mild sexism from peers and a couple of TAs, but nothing that’s seriously held me back. My supervisors have all been extremely supportive and feminist, and all of the people that I work with on a daily basis are equally wonderful. Despite all of this support, I’m looking at the timelines of an academic career, and the time constraints of female reproduction, and honestly, I don’t see how I can feasibly expect to stay in academia and have the family life I want. And since I’m in the privileged position of being surrounded by supportive and feminist colleagues, I can say it: I’m considering leaving academia, if something doesn’t change, because even though I love it, I don’t see how it can fit in to my family plans.

But wait! All of these interventions are really expensive. Money doesn’t just grow on trees, you know!

It doesn’t in general, but in this case it kind of does — well, actually, we already grew it. We spend billions of dollars training women in STEM. By not making full use of their skills, if we look at only the American economy, we are wasting about $1.5 billion USD per year in economic benefits they would have produced if they had stayed in STEM. So here’s a business proposal: let’s spend half of that on better family support and scientific assistants for primary caregivers, and keep the other half in profit. Heck, let’s spend 99% — $1.485 billion (in the States alone) — on better support. That should put a dent in the support bill, and I’d sure pick up $15 million if I saw it lying around. Wouldn’t you?

By demonstrating that we will support women in STEM who choose to have a family, we will encourage more women with PhDs to apply for the academic positions that they are eminently qualified for. Our institutions will benefit from the wider applicant pool, and our whole society will benefit from having the skills of these highly trained and intelligent women put to use innovating new solutions to our modern day challenges.

by Scott at January 16, 2020 08:21 PM UTC

The Theory Group in the Dept. of Computing Science at U. of Alberta invites applications for a postdoc position.
The successful applicant is expected to work closely with Zachary Friggstad and Mohammad Salavatipour in the areas of: approximation algorithms, hardness of approximation, combinatorial optimization. For details see https://webdocs.cs.ualberta.ca/~mreza/pdf-ad7.pdf

Website: https://webdocs.cs.ualberta.ca/~mreza/
Email: mrs@ualberta.ca

by shacharlovett at January 16, 2020 07:48 PM UTC



How does this intersect David Deutsch’s thoughts in 1985?

Composite crop from homepages

Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen (JNVWY) have just posted a paper titled {\mathsf{MIP}^* = \mathsf{RE}}. The title means that multiple provers sharing quantum entanglement, given any Turing machine {M} and string {x} accepted by {M}, can convince a polynomial-time bounded verifier with high probability that {x \in L(M)}. The time is polynomial in {|x|} regardless of how long {M} takes to halt on {x}.

Today we applaud this work and try to convey some short way of apprehending it.

Yoking a classic undecidable problem to a polynomial-time task is not the only surprise. The proof refutes a conjecture in classical functional analysis that had apparently been widely believed. Thus this story continues the theme of surprises and possibly working the wrong way on conjectures, as we also just mentioned in our previous post. The new work subsumes a major paper last year showing that {\mathsf{MIP}^*} contains nondeterministic double exponential time ({\mathsf{NEEXP}}), which proves it different from its classical counterpart {\mathsf{MIP}}, which László Babai, Lance Fortnow, and Carsten Lund proved equal to {\mathsf{NEXP}}.

The developments have been covered by Scott Aaronson here, Lance here, Boaz Barak here, and in a personal way by Vidick here. The new paper weighs in at 165 pages. We will give our own snap summary and try to add a little from the side.

A Halting Attempt to Explain

The refuted conjecture was made by the Fields Medalist Alain Connes in a context having no overt involvement of quantum mechanics. In these 2013 eight-lecture course notes on the conjecture, the word “quantum” appears only once, to say on page 2 of lecture 1:

Other very recent discoveries include the fact that Connes’ embedding conjecture is related to an important problem in Quantum Information Theory, the so-called Tsirelson’s problem…

The problem of Boris Tsirelson ultimately harks back to the theorem of John Bell about correlations that are physically realizable using quantum entanglement but not by any classical physical system. In the CHSH game form of Bell’s theorem, our old friends Alice and Bob can win the game over 85% of the time using quantum, only 75% otherwise. They can get this with just one pair of entangled qubits per trial. Tsirelson proved that the 85% (to wit, {\cos^2(\pi/8) = 0.85355\dots}) is optimal. In extensions of these games to larger-size cases, the question becomes: what are the gaps between quantum and classical?
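As a quick sanity check on those numbers (a sketch of mine assuming the standard optimal measurement angles for the CHSH game), the quantum winning probability {\cos^2(\pi/8)} can be computed directly:

    import math

    def p_agree(theta_a, theta_b):
        # For a maximally entangled pair measured in real bases at these angles,
        # the two outcomes agree with probability cos^2(theta_a - theta_b).
        return math.cos(theta_a - theta_b) ** 2

    alice = {0: 0.0, 1: math.pi / 4}           # Alice's measurement angle on input x
    bob = {0: math.pi / 8, 1: -math.pi / 8}    # Bob's measurement angle on input y

    win = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p = p_agree(alice[x], bob[y])
            win += 0.25 * ((1 - p) if x == y == 1 else p)   # they want a XOR b = x AND y

    print(win, math.cos(math.pi / 8) ** 2)   # both are about 0.8536; classical strategies cap at 0.75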

Whether there is a gap of more than a fixed {\epsilon} then feeds into interactive protocols. We can have parties trying to prove these gaps using their own entanglement. Where it gets tricky is when you allow Alice and Bob to use larger and larger quantum states and ask, can they achieve the gap with some large enough state? The limiting behavior of the gaps is complex. What JNVWY proved is that this becomes like a halting problem. Not just a halting problem, the Halting Problem. Yet two quantum provers, working for a given gap that is achievable, can prove this to a polynomial-time classical verifier. This is the magic of the theorem.

The reduction from halting to the problem about limits and gaps comes before introducing two-prover systems, as is reflected by JNVWY and also in the wonderful introduction of a 2017 paper by William Slofstra which they reference. In advance of saying more about it, we’ll remark that the new work may provide a new dictionary for translating between (i) issues of finite/infinite precision and other continuous matters, and (ii) possible evolutions of a system of finite size {n} in discrete steps of time and size, where both are unbounded but (in positive cases) finite.

A Flashback?

The result strikes Dick and me as shedding new light on a principle stated by David Deutsch in a 1985 paper:

Every finitely realisable physical system can be perfectly simulated by a universal model computing machine operating by finite means.

I was a student at Oxford alongside Deutsch in 1984–1985, and I remember more going on than the searchable record seems to reflect. Deutsch believed that his model of a quantum computer could solve the Halting problem in finite time. He gave at least one talk at the Oxford Mathematical Institute on that claim. As far as I know the claim stayed local to Oxford and generated intense discussion led by Robin Gandy, Roger Penrose, and (if I recall) Angus Macintyre and at least one other person who was versed in random physical processes.

My recollection is that the nub of the technical argument turned on a property of infinite random sequences that, when hashed out, made some associated predicates decidable, so that Deutsch’s functions were classically total computable after all. Thus the hypercomputation claim was withdrawn.

Now, however, I wonder whether the two-prover system constitutes the kind of “machine” that Deutsch was intuitively thinking of. As I recall, his claim was not deemed wrong from first principles but from how theorems about random sequences interacted with machine model definitions. The theory of interactive provers as computational systems was then in its infancy. Could Deutsch have had some inkling of it?

Open Problems

Again we congratulate JNVWY on this achievement of a long-term research goal. Looking at the past, does it relate to the discussion of hypercomputation stemming from the 1980s? We mean a stronger connection than treated here or in this 2018 paper. Is it much different from cases where “the mystery … vanishes when the level of precision is explicitly taken into account” (quoting this)? Looking ahead, are there any connections to the physical issues of infinity in finite time that we recently discussed here?


Updates 1/17: Gil Kalai has a post with background on further conjectures impacted by (the failure of) Connes’s conjecture and on quantum prover systems, plus a plethora of links.

A new article in Nature includes the following quotations:

  • Alain Connes, after saying he was not (yet) able to digest all the material involved in the proof: “It is amazing that the [embedding] problem went so deep and I never foresaw that!”

  • Toby Cubitt, University College London: “What’s amazing is that quantum complexity theory has been the key to the proof.”

  • Joseph Fitzsimons, Horizon Quantum Computing: “I thought it would turn out to be one of those complexity-theory questions that might take 100 years to answer.”

The article and comments on Scott’s blog include interpretations that seem to oppose rather than support Deutsch’s principle on the finiteness of nature. The tandem of two unlimited provers may not qualify as a “finite machine.”

There are comments below querying whether the theorem is in first-order arithmetic or how strong a choice axiom it may need.


[added to first paragraph in second section, added updates]

by KWRegan at January 15, 2020 09:24 PM UTC

This semester I taught another iteration of my “Introduction to Theoretical Computer Science” course, based on my textbook in progress. The book was also used in University of Virginia CS 3102 by David Evans and Nathan Brunelle.

The main differences I made in the text and course since its original version were to make it less “idiosyncratic”: while I still think using programming language terminology is the conceptually “right” way to teach this material, there is a lot to be said for sticking with well-established models. So, I used Boolean circuits as the standard model for finite-input non-uniform computation, and Turing machines as the standard model for unbounded-input uniform computation. (I do talk about the equivalent programming-languages view of both models, which can be a more useful perspective for some results, and is also easier to work with in code.)

In any course on intro to theoretical CS, there are always beautiful topics that are left on the “cutting room floor”. To partially compensate for that, we had an entirely optional “advanced section” where guest speakers talked about topics such as error correcting codes, circuit lower bounds, communication complexity, interactive proofs, and more. The TA in charge of this section – an amazing sophomore named Noah Singer – wrote very detailed lecture notes for this section.

This semester, students in CS 121 could also do an optional project. Many chose to do a video about topics related to the course; some examples are embedded in the original post.

There is much work still to do on both the text and the course. Though the text has improved a lot (we do have 267 closed issues after all), some students still justifiably complained about typos, which can throw off people who are just getting introduced to the topic. I also want to add significantly more solved exercises and examples, since students do find them extremely useful. I need to significantly beef up the NP-completeness chapter with more examples of reductions, though I do have Python implementations of several reductions and of the Cook-Levin theorem.
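To give the flavor of such a reduction (a sketch of my own, not taken from the book’s code), here is the classic reduction from graph 3-colorability to CNF-SAT, written as a small clause generator:

    # Variable var(v, c) means "vertex v receives color c"; literals use DIMACS-style integers.
    def three_coloring_to_cnf(n, edges):
        var = lambda v, c: 3 * v + c + 1
        clauses = []
        for v in range(n):
            clauses.append([var(v, c) for c in range(3)])                 # at least one color
            clauses += [[-var(v, a), -var(v, b)]
                        for a in range(3) for b in range(a + 1, 3)]       # at most one color
        for u, v in edges:
            clauses += [[-var(u, c), -var(v, c)] for c in range(3)]       # adjacent vertices differ
        return clauses

    print(three_coloring_to_cnf(2, [(0, 1)]))   # a satisfiable instance: a single edge is 3-colorable

The graph is 3-colorable if and only if the generated CNF formula is satisfiable, which is what makes this a reduction.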

This type of course is often known as a “great ideas” in computer science, and so in the book I also added a “Big Idea” environment to highlight those. Of course some of those ideas are bigger than others, but I think the list below reflects well the contents of the course:

  • If we can represent objects of type T as strings, then we can represent tuples of objects of type T as strings as well.
  • A function is not the same as a program. A program computes a function.
  • Two models are equivalent in power if they can be used to compute the same set of functions.
  • Every finite function can be computed by a large enough Boolean  circuit.
  • A program is a piece of text, and so it can be fed as input to other programs.
  • Some functions  f:\{0,1\}^n \rightarrow \{0,1\}  cannot be computed by a Boolean circuit using fewer than exponential (in n) number of gates.
  • We can precisely define what it means for a function to be computable by any possible algorithm.
  • Using equivalence results such as those between Turing and RAM machines, we can “have our cake and eat it too”: We can use a simpler model such as Turing machines when we want to prove something can’t be done, and use a feature-rich model such as RAM machines when we want to prove something can be done.
  • There is a “universal” algorithm that can evaluate arbitrary algorithms on arbitrary inputs.
  • There are some functions that cannot be computed by any algorithm.
  • If a function F is uncomputable, we can show that another function H is uncomputable by giving a way to reduce the task of computing F to computing H.
  • We can use restricted computational models to bypass limitations such as the uncomputability of the Halting problem and Rice’s Theorem. Such models can compute only a restricted subclass of functions, but allow us to answer at least some semantic questions about programs.
  • A proof is just a string of text whose meaning is given by a verification algorithm.
  • The running time of an algorithm is not a number, it is a function of the length of the input.
  • For a function F:\{0,1\}^* \rightarrow \{0,1\} and T:\mathbb{N} \rightarrow \mathbb{N}, we can formally define what it means for F to be computable in time at most T(n), where n is the size of the input.
  • All “reasonable” computational models are equivalent if we only care about the distinction between polynomial and exponential. (The book immediately notes quantum computers as a possible exception to this.)
  • If we have more time, we can compute more functions.
  • By “unrolling the loop” we can transform an algorithm that takes T(n) steps to compute F into a circuit that uses poly(T(n)) gates to compute the restriction of F to {0,1}^n.
  • A reduction F \leq_p G shows that F is “no harder than G”, or equivalently that G is “no easier than F”.
  • If a single \mathbf{NP}-complete problem has a polynomial-time algorithm, then there is such an algorithm for every decision problem that corresponds to the existence of an efficiently verifiable solution.
  • If \mathbf{P}=\mathbf{NP}, we can efficiently solve a fantastic number of decision, search, optimization, counting, and sampling problems from all areas of human endeavors.
  • A randomized algorithm outputs the correct value with good probability on every possible input.
  • We can amplify the success probability of randomized algorithms to a value that is arbitrarily close to 1 (see the majority-vote sketch just after this list).
  • There is no secrecy without randomness.
  • Computational hardness is necessary and sufficient for almost all cryptographic applications.
  • Just as we did with classical computation, we can define mathematical models for quantum computation, and represent quantum algorithms as binary strings.
  • Quantum computers are not a panacea and are unlikely to solve \mathbf{NP}-complete problems, but they can provide exponential speedups for certain structured problems.
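As a small illustration of the amplification idea flagged above (an illustrative sketch, not taken from the course materials): repeat a two-sided-error randomized algorithm an odd number of times and output the majority answer; a Chernoff bound shows the error probability drops exponentially in the number of repetitions.

```python
import random
from collections import Counter

def amplify(randomized_alg, x, k=101):
    """Run a randomized decision algorithm k times (k odd) and return the majority answer.
    If each run is correct with probability at least 2/3, a Chernoff bound shows the
    majority vote is wrong with probability exponentially small in k."""
    votes = Counter(randomized_alg(x) for _ in range(k))
    return votes.most_common(1)[0][0]

# Toy example: a "noisy" zero-tester that errs with probability 1/4 on every input.
def noisy_is_zero(x):
    correct = (x == 0)
    return correct if random.random() < 0.75 else not correct

print(amplify(noisy_is_zero, 0))  # True, except with tiny probability
print(amplify(noisy_is_zero, 7))  # False, except with tiny probability
```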

These are all ideas that I believe are important for Computer Science undergraduates to be exposed to, but covering all of them does make for a very challenging course, which gets literally mixed reviews from the students, with some loving it and some hating it. (I post all reviews on the course home page.) Running a 200-student class is definitely something that I’m still learning how to do.

by Boaz Barak at January 15, 2020 08:31 PM UTC

Update: The Fellowship has been extended to all IBM Research locations, including the Almaden laboratory. The Mathematical Sciences Council of IBM Research invites applications for the Herman Goldstine Memorial Postdoctoral Fellowship for research in mathematical and computer sciences. Areas of interest include theoretical computer science and optimization.

Website: https://www.research.ibm.com/goldstine/
Email: postdoc@us.ibm.com

by shacharlovett at January 15, 2020 08:09 PM UTC

In this paper, we study the static cell probe complexity of non-adaptive data structures that maintain a subset of $n$ points from a universe consisting of $m=n^{1+\Omega(1)}$ points. A data structure is defined to be non-adaptive when the memory locations that are chosen to be accessed during a query depend only on the query inputs and not on the contents of memory. We prove an $\Omega(\log m / \log (sw/n\log m))$ static cell probe complexity lower bound for non-adaptive data structures that solve the fundamental dictionary problem where $s$ denotes the space of the data structure in the number of cells and $w$ is the cell size in bits. Our lower bounds hold for all word sizes including the bit probe model ($w = 1$) and are matched by the upper bounds of Boninger et al. [FSTTCS'17]. Our results imply a sharp dichotomy between dictionary data structures with one round of adaptive and at least two rounds of adaptivity. We show that $O(1)$, or $O(\log^{1-\epsilon}(m))$, overhead dictionary constructions are only achievable with at least two rounds of adaptivity. In particular, we show that many $O(1)$ dictionary constructions with two rounds of adaptivity such as cuckoo hashing are optimal in terms of adaptivity. On the other hand, non-adaptive dictionaries must use significantly more overhead. Finally, our results also imply static lower bounds for the non-adaptive predecessor problem. Our static lower bounds peak higher than the previous, best known lower bounds of $\Omega(\log m / \log w)$ for the dynamic predecessor problem by Boninger et al. [FSTTCS'17] and Ramamoorthy and Rao [CCC'18] in the natural setting of linear space $s = \Theta(n)$ where each point can fit in a single cell $w = \Theta(\log m)$. Furthermore, our results are stronger as they apply to the static setting unlike the previous lower bounds that only applied in the dynamic setting.

at January 15, 2020 02:42 PM UTC

The Internets are buzzing about the new paper MIP* = RE by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen. See posts by Scott and Boaz, not to mention a wonderful backstory by Vidick himself and a tweet stream by Yuen. I'm not enough of an expert to verify or even try to explain the proof, so I'll just give a brief overview of the result.

For those not familiar with the classes, RE (recursively enumerable) is the simplest of all complexity classes: a language L is in RE if there is some Turing machine M such that x is in L if and only if M on input x accepts. For x not in L, M on x can reject or run forever. The classic halting problem, the set of descriptions of Turing machines that halt on empty input, is RE-complete. To nitpick the notation, it should have been r.e., or even c.e. (computably enumerable), the more standard notation these days. But given the importance of the result, we can give the authors a pass.
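As a toy illustration of why halting is the canonical RE problem, here is a hedged sketch (it uses Python source text as the "description of a Turing machine" and assumes, purely as a convention of this example, that the program defines a function named main): simulate the program and accept if the simulation ever finishes; on non-halting inputs the procedure runs forever, which is exactly what membership in RE permits.

```python
# Hypothetical sketch of a semi-decider for "program P halts on input x".
# The program is given as Python source that is assumed to define a function
# named `main`; the format and the name are conventions of this illustration only.

def semi_decide_halting(program_source: str, x):
    env = {}
    exec(program_source, env)   # load the program's definitions
    env["main"](x)              # simulate it; this returns only if the program halts
    return True                 # reaching this line means it halted, so accept

print(semi_decide_halting("def main(x):\n    return x + 1\n", 41))  # True: it halts
# On a non-halting program such as "def main(x):\n    while True: pass\n"
# the call would run forever, which a semi-decider is allowed to do.
```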

MIP* is the set of things provable to a classical randomized polynomial-time verifier by two separated provers sharing an unlimited number of quantumly entangled qubits. Without the quantum entanglement, MIP = NEXP, nondeterministic exponential time, and last year Natarajan and Wright showed that MIP* could do at least exponentially better in their paper, NEEXP in MIP*. NEEXP seems large but still only consists of computable sets. RE gets outside of the computable realm.

I found the first paper more surprising, as it showed that quantum entanglement actually buys you more, much more, than classical provers alone. The second paper gets a much stronger, and indeed tight, result, still highly surprising in its own right, as it requires disproving Connes' embedding conjecture. In the end we may just consider this one result, as the second paper subsumes the first in both theorem and authors.

We didn't award the 2019 theorem of the year to Natarajan and Wright, instead opting for a paper that had more, how should I say this, sensitivity. This new paper is certainly the front runner for the 2020 honors, though it is only mid-January.

by Lance Fortnow (noreply@blogger.com) at January 14, 2020 11:41 PM UTC

I've spent Sunday/Monday at ITCS, or Innovations in Theoretical Computer Science, where I am giving a talk on this paper on Scheduling with Prediction and the Price of Misprediction (LIPIcs page) (which is one of several recent works on Algorithms with Predictions (powerpoint slides)).

I'm told it's 10 years since ITCS started as a conference, and I was one of the skeptics that really did not think it was a good idea 10 years ago.  So as I'm sitting in the sessions, what do I think now?  What are the pros and cons of ITCS?

On the negative side, ITCS is not very big.  There are just over 100 people registered, so it's about the size of a big workshop or a small conference.  (And considering that it's usually held in central places with lots of students, those numbers are buffered by locals.)  Somehow, it's scheduled the 2nd week of January, right after SODA, which seems problematic, and certainly (if it's kept that way) may keep it from getting any larger.  The number of "senior people" around at any one time seemed generally small, a problem for things like "Graduating Bits" (see below).  As ITCS, at least 10 years ago, was supposed to build up to another "premier" TCS conference, focused on "innovative" work, the attendance seems a bit disappointing.

On the neutral side, the conference is single-session, and to make that work, talks this year are limited to 12 minutes.  Your mileage may vary on whether you think this is good or bad;  it seems to work.  (My take:  I slightly prefer parallel sessions, because it means there's more times where there's something I want to see, and that's more important than the times where there are two talks at the same time I want to see.  And 12 minutes is certainly workable but maybe a few minutes short.  But again, that's just my opinion.)  This year (and possibly going forward), some papers were accepted without a full talk -- instead they have a 3-minute talk and a poster in a poster session.  Again, it's not clear to me if this is good or bad (though more paper acceptances makes me lean to the good side), but it seemed to work fine and people were happy with it.  (Such papers are considered full publications and treated the same in the proceedings.)

On the positive side, the conference seems well run.  It's held in a university building, so no expensive hotel costs;  instead they're generous with food and keep registration costs reasonably low.  They use LIPIcs, which provides a good system and approach for publishing papers at low cost.  (Note, I was until recently part of the LIPIcs editorial board, so I'm biased there.)  They seem to be covering their expenses from what I understand.  The business meeting was blessedly short.  They're recording all the talks.  They're doing interesting things like "Graduating Bits" for people who are job-looking (where people graduating or coming out of a postdoc give short talks about their work).

In terms of content, it seems really good.  I've seen several good talks and interesting papers.  While I'm not sure how to quantify whether ITCS work is more "innovative" than the work at other major TCS conferences, I do actually think they are noticeably more open at ITCS than other conferences about accepting papers based on the paper's underlying idea rather than on "technical mastery".

My thoughts 10 years ago were that ITCS was not a great outcome for the community, and that instead the community should push for:

1)  Aiming to do better about opening up the criteria for paper acceptance, including weighing innovation/practical relevance in reviewing papers at FOCS/STOC/SODA.
2)  Increasing the number of papers accepted to these conferences, as too many good papers were being rejected.

Viewed under this lens, ITCS could, I think, be viewed as a success.  The theory community seems unwilling to expand conferences by accepting more papers.  (I note that while the STOC theory-fest has changed and expanded STOC, it hasn't really seemed to increase attendance, and while the number of accepted papers has increased slightly, it hasn't kept pace with the growth in the field.)  ITCS provides another venue for high-quality theory papers, thereby increasing the number of such papers published each year within the theory community, and I think it is generally viewed as a high-quality conference.  And, as I mentioned, ITCS seems at least somewhat more flexible in its criteria for what is an acceptable paper.  ITCS has, I think, in these regards been a benefit for the theory community.

However, despite the success of ITCS, I think it's a band-aid on structural issues in the theory community.  While these issues are complex and many-sided, just comparing with the growth and excitement in the AI community is a little depressing.  Indeed, what I see is AI absorbing significant parts of the theory community;  lots of theory papers now end up at NeurIPS, AAAI, ICML, or AISTATS, because the theory community doesn't seem to have room for them, or doesn't judge them as sufficiently significant.  I view this as a problem for the theory community, the result of the problems I saw 10 years ago for which I didn't think ITCS was the right response.  (Though perhaps it was/is not really viewed as a problem by others;  all of CS, including theory, seems to continue growing;  theory just seems to me to be growing less than it could or should.)

To conclude, my thanks and congratulations to those many people that have organized and maintained ITCS over the years;  I appreciate your work, as I think the whole community does. 

By the way, I thought it never snowed in Seattle, so I'm confused; what is all the cold white stuff outside?

by Michael Mitzenmacher (noreply@blogger.com) at January 14, 2020 08:27 PM UTC

In an exciting manuscript just posted on the arxiv, Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen prove that there is a 2-prover quantum protocol (with shared entanglement) for the halting problem. As a consequence they resolve negatively a host of open problems in quantum information theory and operator algebra, including refuting the longstanding Connes embedding conjecture. See also Scott’s post and this blog post of Thomas Vidick discussing his personal history with these questions, that started with his Masters project under Julia Kempe’s supervision 14 years ago.

I am not an expert in this area, and still have to look at the paper beyond the first few pages, but I find the result astounding. In particular, the common intuition is that since all physical quantities are “nice” functions (continuous, differentiable, etc.), we could never distinguish between the case that the universe is infinite and the case that it is discretized on a fine enough grid. The new work (as far as I understand) provides a finite experiment that can potentially succeed with probability 1 if the two provers use an infinite amount of shared entangled state, but would succeed with probability at most 1/2 if they use only a finite amount. A priori you would expect that if there is a strategy that succeeds with probability 1 with infinite entanglement, then you could succeed with probability at least 1-\epsilon with a finite entangled state whose dimension depends only on \epsilon.

The result was preceded by Ito and Vidick’s 2012 result that \mathbf{NEXP} \subseteq \mathbf{MIP^*} and Natarajan and Wright’s result last year that \mathbf{NEEXP} (nondeterministic doubly exponential time) is contained in \mathbf{MIP^*}. This brings to mind Edmonds’ classic quote:

“For practical purposes the difference between algebraic and exponential order is often more crucial than the difference between finite and non-finite”

Sometimes, the difference between double-exponential and infinite turns out to be nonexistent…

by Boaz Barak at January 14, 2020 05:40 PM UTC

Another Update (Jan. 16): Yet another reason to be excited about this result—one that somehow hadn’t occurred to me—is that, as far as I know, it’s the first-ever fully convincing example of a non-relativizing computability result. See this comment for more.

Update: If you’re interested in the above topic, then you should probably stop reading this post right now, and switch to this better post by Thomas Vidick, one of the authors of the new breakthrough. (Or this by Boaz Barak or this by Lance Fortnow or this by Ken Regan.) (For background, also see Thomas Vidick’s excellent piece for the AMS Notices.)

Still here? Alright, alright…

Here’s the paper, which weighs in at 165 pages. The authors are Zhengfeng Ji, Anand Natarajan, my former postdoc Thomas Vidick, John Wright (who will be joining the CS faculty here at UT Austin this fall), and my wife Dana’s former student Henry Yuen. Rather than pretending that I can provide intelligent commentary on this opus in the space of a day, I’ll basically just open my comment section to discussion and quote the abstract:

We show that the class MIP* of languages that can be decided by a classical verifier interacting with multiple all-powerful quantum provers sharing entanglement is equal to the class RE of recursively enumerable languages. Our proof builds upon the quantum low-degree test of (Natarajan and Vidick, FOCS 2018) by integrating recent developments from (Natarajan and Wright, FOCS 2019) and combining them with the recursive compression framework of (Fitzsimons et al., STOC 2019).
An immediate byproduct of our result is that there is an efficient reduction from the Halting Problem to the problem of deciding whether a two-player nonlocal game has entangled value 1 or at most 1/2. Using a known connection, undecidability of the entangled value implies a negative answer to Tsirelson’s problem: we show, by providing an explicit example, that the closure C_{qa} of the set of quantum tensor product correlations is strictly included in the set C_{qc} of quantum commuting correlations. Following work of (Fritz, Rev. Math. Phys. 2012) and (Junge et al., J. Math. Phys. 2011) our results provide a refutation of Connes’ embedding conjecture from the theory of von Neumann algebras.

To say it differently (in response to a commenter’s request), some of the major implications are as follows.

(1) There is a protocol by which two entangled provers can convince a polynomial-time verifier of the answer to any computable problem whatsoever (!!), or indeed that a given Turing machine halts.

(2) There is a two-prover game, analogous to the Bell/CHSH game, for which Alice and Bob can do markedly better with a literally infinite amount of entanglement than they can with any finite amount of entanglement. (For calibration, a brute-force calculation of the classical value of the CHSH game itself is sketched after item (5) below.)

(3) There is no algorithm even to approximate the entangled value of a two-prover game (i.e., the probability that Alice and Bob win the game, if they use the best possible strategy and as much entanglement as they like). Instead, this problem is equivalent to the halting problem.

(4) There are types of correlations between Alice and Bob that can be produced using infinite entanglement, but that can’t even be approximated using any finite amount of entanglement.

(5) The Connes embedding conjecture, a central conjecture from the theory of operator algebras dating back to the 1970s, is false.
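For a sense of scale on the games being discussed, here is a small illustrative sketch (not from the paper): brute-forcing the classical, unentangled value of the CHSH game over all deterministic strategies gives 3/4, while sharing a single entangled pair already lets the provers win with probability cos²(π/8) ≈ 0.854; the new result concerns games where even approximating the entangled value is as hard as the halting problem.

```python
from itertools import product

# Brute-force the classical value of the CHSH game: Alice and Bob each commit to a
# deterministic strategy {0,1} -> {0,1}; on questions (x, y) they win iff
# a XOR b == x AND y. The best classical winning probability is 3/4, whereas an
# entangled strategy achieves cos^2(pi/8), roughly 0.854.

def chsh_classical_value():
    strategies = list(product([0, 1], repeat=2))  # strategy[q] = answer to question q
    best = 0.0
    for alice in strategies:
        for bob in strategies:
            wins = sum((alice[x] ^ bob[y]) == (x & y) for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

print(chsh_classical_value())  # 0.75
```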

Note that all of these implications—including the ones for pure math and the foundations of quantum physics—were obtained using tools that originated in theoretical computer science, specifically the study of interactive proof systems.

I can remember when the class MIP* was first defined and studied, back around 2003, and people made the point that we didn’t know any reasonable upper bound on the class’s power—not NEXP, not NEEEEXP, not even the set of all computable languages. Back then, the joke was how far our proof techniques were from what was self-evidently the truth. I don’t remember a single person who seriously contemplated that two entangled provers could convince a polynomial-time verifier that an arbitrary Turing machine halts.

Still, ever since Natarajan and Wright’s NEEXP in MIP* breakthrough last year, all of us in quantum computing theory knew that MIP*=RE was a live possibility—and all through the summer and fall, I heard many hints that such a breakthrough was imminent.

It’s worth pointing out that, with only classical correlations between the provers, MIP gives “merely” the power of NEXP (Nondeterministic Exponential Time), while with arbitrary non-signalling correlations between the provers, the so-called MIP^ns gives the power of EXP (Deterministic Exponential Time). So it’s particularly striking that quantum entanglement, which is “intermediate” between classical correlations and arbitrary non-signalling correlations, yields such wildly greater computational power than either of those two.

The usual proviso applies: when I’ve blogged excitedly about preprints with amazing new results, most have stood, but at least two ended up being retracted. Still, assuming this one stands (as I’m guessing it will), I regard it as easily one of the biggest complexity-theoretic (and indeed computability-theoretic!) surprises so far in this century. Huge congratulations to the authors on what looks to be a historic achievement.

In unrelated news, for anyone for whom the 165-page MIP* paper is too heavy going (really??), please enjoy this CNBC video on quantum computing, which features several clips of yours truly speaking in front of a fake UT tower.

In other unrelated news, I’m also excited about this preprint by Avishay Tal, which sets a new record for the largest known separation between quantum query complexity and classical randomized query complexity, making substantial progress toward proving a conjecture by me and Andris Ambainis from 2015. (Not the “Aaronson-Ambainis Conjecture,” a different conjecture.)

by Scott at January 14, 2020 04:59 PM UTC

The ACM EC20 conference, to be held on July 13-17, 2020 in Budapest, is now calling for proposals for tutorials and workshops. The deadline for submission of such proposals is March 2nd, 2020.

by Noam Nisan at January 14, 2020 09:56 AM UTC

In a previous post I reported on the beautiful recent result by Natarajan and Wright showing the astounding power of multi-prover interactive proofs with quantum provers sharing entanglement: in letters, {\text{NEEXP} \subseteq \text{MIP}^\star}. In this post I want to report on follow-up work with Ji, Natarajan, Wright, and Yuen, that we just posted to arXiv. This time, however, I will tell the story from a personal point of view, with all the caveats that this implies: the “hard science” will be limited (but there could be a hint as to how “science”, to use a big word, “progresses”, to use an ill-defined one), the story is far too long, and it might be mostly of interest to me only. It’s a one-sided story, but that has to be. (In particular below I may at times attribute credit in the form “X had this idea”. This is my recollection only, and it is likely to be inaccurate. Certainly I am ignoring a lot of important threads.) I wrote this because I enjoyed recollecting some of the best moments in the story just as much as some of the hardest; it is fun to look back and find meanings in ideas that initially appeared disconnected. Think of it as an example of how different lines of work can come together in unexpected ways; a case for open-ended research. It’s also an antidote against despair that I am preparing for myself: whenever I feel I’ve been stuck on a project for far too long, I’ll come back to this post and ask myself if it’s been 14 years yet — if not, then press on.

It likely comes as a surprise to me only that I am no longer fresh out of the cradle. My academic life started in earnest some 14 years ago, when in the Spring of 2006 I completed my Masters thesis in Computer Science under the supervision of Julia Kempe, at Orsay in France. I had met Julia the previous term: her class on quantum computing was, by far, the best-taught and most exciting course in the Masters program I was attending, and she had gotten me instantly hooked. Julia agreed to supervise my thesis, and suggested that I look into some interesting recent result by Stephanie Wehner that linked the study of entanglement and nonlocality in quantum mechanics to complexity-theoretic questions about interactive proof systems (specifically, this was Stephanie’s paper showing that {\text{XOR-MIP}^\star \subseteq \text{QIP}(2)}).

At the time the topic was very new. It had been initiated the previous year with a beautiful paper by Cleve et al. (that I have recommended to many a student since!). It was a perfect fit for me: the mathematical aspects of complexity theory and quantum computing connected to my undergraduate background, while the relative concreteness of quantum mechanics (it is a physical theory after all) spoke to my desire for real-world connection (not “impact” or even “application” — just “connection”). Once I got myself up to speed in the area (which consisted of three papers: the two I already mentioned, together with a paper by Kobayashi and Matsumoto where they studied interactive proofs with quantum messages), Julia suggested looking into the “entangled-prover” class {\text{MIP}^\star} introduced in the aforementioned paper by Cleve et al. Nothing was known about this class! Nothing besides the trivial inclusion of single-prover interactive proofs, IP, and the containment in…ALL, the trivial class that contains all languages.
Yet the characterization MIP=NEXP of its classical counterpart by Babai et al. in the 1990s had led to one of the most productive lines of work in complexity of the past few decades, through the PCP theorem and its use from hardness of approximation to efficient cryptographic schemes. Surely, studying {\text{MIP}^\star} had to be a productive direction? In spite of its well-established connection to classical complexity theory, via the formalism of interactive proofs, this was a real gamble. The study of entanglement from the complexity-theoretic perspective was entirely new, and bound to be fraught with difficulty; very few results were available and the existing lines of work, from the foundations of nonlocality to more recent endeavors in device-independent cryptography, provided little other starting point than strong evidence that even the simplest examples came with many unanswered questions. But my mentor was fearless, and far from a novice when it came to breaking ground in new areas, having done pioneering work in areas ranging from quantum random walks to Hamiltonian complexity through adiabatic computation. Surely this would lead to something?

It certainly did. More sleepless nights than papers, clearly, but then the opposite would only indicate dullness. Julia’s question led to far more unexpected consequences than I, or I believe she, could have imagined at the time. I am writing this post to celebrate, in a personal way, the latest step in 15 years of research by dozens of researchers: today my co-authors and I uploaded to the quant-ph arXiv what we consider a complete characterization of the power of entangled-prover interactive proof systems by proving the equality {\text{MIP}^\star = \text{RE}}, the class of all recursively enumerable languages (a complete problem for RE is the halting problem). Without going too much into the result itself (if you’re interested, we have a long introduction waiting for you), and since this is a personal blog, I will continue on with some personal thoughts about the path that got us there.

When Julia & I started working on the question, our main source of inspiration was the results by Cleve et al. showing that the nonlocal correlations of entanglement had interesting consequences when seen through the lens of interactive proof systems in complexity theory. Since the EPR paper a lot of work in understanding entanglement had already been accomplished in the Physics community, most notably by Mermin, Peres, Bell, and more recently the works in device-independent quantum cryptography by Acin, Pironio, Scarani and many others stimulated by Ekert’s proposal for quantum key distribution and Mayers and Yao’s idea for “device-independent cryptography”. By then we certainly knew that “spooky action-at-a-distance” did not entail any faster-than-light communication, and indeed was not really “action-at-a-distance” in the first place but merely “correlation-at-a-distance”. What Cleve et al. recognized is that these “spooky correlations-at-a-distance” were sufficiently special so as to not only give numerically different values in “Bell inequalities”, the tool invented by Bell to evidence nonlocality in quantum mechanics, but also have some potentially profound consequences in complexity theory. In particular, examples such as the “Magic Square game” demonstrated that enough correlation could be gained from entanglement so as to defeat basic proof systems whose soundness relied only on the absence of communication between the provers, an assumption that until then had been wrongly equated with the assumption that any computation performed by the provers could be modeled entirely locally. I think that the fallacy of this implicit assumption came as a surprise to complexity theorists, who may still not have entirely internalized it. Yet the perfect quantum strategy for the Magic Square game provides a very concrete “counter-example” to the soundness of the “clause-vs-variable” game for 3SAT. Indeed this game, a reformulation by Aravind and Cleve-Mermin of a Bell Inequality discovered by Mermin and Peres in 1990, can be easily re-framed as a 3SAT system of equations that is not satisfiable and yet is such that the associated two-player clause-vs-variable game has a perfect quantum strategy. It is this observation, made in the paper by Cleve et al., that gave the first strong hint that the use of entanglement in interactive proof systems could make many classical results in the area go awry.

By importing the study of non-locality into complexity theory Cleve et al. immediately brought it into the realm of asymptotic analysis. Complexity theorists don’t study fixed objects, they study families of objects that tend to have a uniform underlying structure and whose interesting properties manifest themselves “in the limit”. As a result of this new perspective focus shifted from the study of single games or correlations to infinite families thereof. Some of the early successes of this translation include the “unbounded violations” that arose from translating asymptotic separations in communication complexity to the language of Bell inequalities and correlations (e.g. this paper). These early successes attracted the attention of some physicists working in foundations as well as some mathematical physicists, leading to a productive exploration that combined tools from quantum information, functional analysis and complexity theory.

The initial observations made by Cleve et al. had pointed to {\text{MIP}^\star} as a possibly interesting complexity class to study. Rather amazingly, nothing was known about it! They had shown that under strong restrictions on the verifier’s predicate (it should be an XOR of two answer bits), a collapse took place: by the work of Hastad, XOR-MIP equals NEXP, but {\text{XOR-MIP}^\star} is included in EXP. This seemed very fortuitous (the inclusion is proved via a connection with semidefinite programming that seems tied to the structure of XOR-MIP protocols): could entanglement induce a collapse of the entire, unrestricted class? We thought (at this point mostly Julia thought, because I had no clue) that this ought not to be the case, and so we set ourselves to show that the equality {\text{MIP}^\star=\text{NEXP}}, that would directly parallel Babai et al.’s characterization MIP=NEXP, holds. We tried to show this by introducing techniques to “immunize” games against entanglement: modify an interactive proof system so that its structure makes it “resistant” to the kind of “nonlocal powers” that can be used to defeat the clause-vs-variable game (witness the Magic Square). This was partially successful, and led to one of the papers I am most proud of — I am proud of it because I think it introduced elementary techniques (such as the use of the Cauchy-Schwarz inequality — inside joke — more seriously, basic things such as “prover-switching”, “commutation tests”, etc.) that are now routine manipulations in the area. The paper was a hard sell! It’s good to remember the first rejections we received. They were not unjustified: the main point of criticism was that we were only able to establish a hardness result for exponentially small completeness-soundness gap. A result for such a small gap in the classical setting follows directly from a very elementary analysis based on the Cook-Levin theorem. So then why did we have to write so many pages (and so many applications of Cauchy-Schwarz!) to arrive at basically the same result (with a {^\star})?

Eventually we got lucky and the paper was accepted to a conference. But the real problem, of establishing any non-trivial lower bound on the class {\text{MIP}^\star} with constant (or, in the absence of any parallel repetition theorem, inverse-polynomial) completeness-soundness gap, remained. By that time I had transitioned from a Masters student in France to a graduate student in Berkeley, and the problem (pre-)occupied me during some of the most difficult years of my Ph.D. I fully remember spending my first year entirely thinking about this (oh and sure, that systems class I had to pass to satisfy the Berkeley requirements), and then my second year — yet, getting nowhere. (I checked the arXiv to make sure I’m not making this up: two full years, no posts.) I am forever grateful to my fellow student Anindya De for having taken me out of the cycle of torture by knocking on my door with one of the most interesting questions I have studied, that led me into quantum cryptography and quickly resulted in an enjoyable paper. It was good to feel productive again! (Though the paper had fun reactions as well: after putting it on the arXiv we quickly heard from experts in the area that we had solved an irrelevant problem, and that we better learn about information theory — which we did, eventually leading to another paper, etc.) The project had distracted me and I set interactive proofs aside; clearly, I was stuck.

About a year later I visited IQC in Waterloo. I don’t remember in what context the visit took place. What I do remember is a meeting in the office of Tsuyoshi Ito, at the time a postdoctoral scholar at IQC. Tsuyoshi asked me to explain our result with Julia. He then asked a very pointed question: the bedrock for the classical analysis of interactive proof systems is the “linearity test” of Blum-Luby-Rubinfeld (BLR). Is there any sense in which we could devise a quantum version of that test?

What a question! This was great. At first it seemed fruitless: in what sense could one argue that quantum provers apply a “linear function”? Sure, quantum mechanics is linear, but that is beside the point. The linearity is a property of the prover’s answers as a function of their question. So what to make of the quantum state, the inherent randomness, etc.?

It took us a few months to figure it out. Once we got there however, the answer was relatively simple — the prover should be making a question-independent measurement that returns a linear function that it applies to its question in order to obtain the answer returned to the verifier — and it opened the path to our subsequent paper showing that the inclusion of NEXP in {\text{MIP}^\star} indeed holds. Tsuyoshi’s question about linearity testing had allowed us to make the connection with PCP techniques; from there to MIP=NEXP there was only one step to make, which is to analyze multi-linearity testing. That step was suggested by my Ph.D. advisor, Umesh Vazirani, who was well aware of the many pathways towards the classical PCP theorem, since the theorem had been obtained in great part by his former student Sanjeev Arora. It took a lot of technical work, yet conceptually a single question from my co-author had sufficed to take me out of a 3-year slumber.

This was in 2012, and I thought we were done. For some reason the converse inclusion, of {\text{MIP}^\star} in NEXP, seemed to resist our efforts, but surely it couldn’t resist much longer. Navascues et al. had introduced a hierarchy of semidefinite programs that seemed to give the right answer (technically they could only show convergence to a relaxation, the commuting value, but that seemed like a technicality; in particular, the values coincide when restricted to finite-dimensional strategies, which is all we computer scientists cared about). There were no convergence bounds on the hierarchy, yet at the same time commutative SDP hierarchies were being used to obtain very strong results in combinatorial optimization, and it seemed like it would only be a matter of time before someone came up with an analysis of the quantum case. (I had been trying to solve a related “dimension reduction problem” with Oded Regev for years, and we were making no progress; yet it seemed someone ought to!)

In Spring 2014, during an open questions session at a workshop at the Simons Institute in Berkeley, Dorit Aharonov suggested that I ask the question of the possible inclusion of QMA-EXP, the exponential-sized-proofs analogue of QMA, in {\text{MIP}^\star}. A stronger result than the inclusion of NEXP (under assumptions), wouldn’t it be a more natural “fully quantum” analogue of MIP=NEXP? Dorit’s suggestion was motivated by research on the “quantum PCP theorem”, which aims to establish similar hardness results in the realm of the local Hamiltonian problem; see e.g. this post for the connection. I had no idea how to approach the question — I also didn’t really believe the answer could be positive — but what can you do, if Dorit asks you something… So I reluctantly went to the board and asked the question. Joe Fitzsimons was in the audience, and he immediately picked it up! Joe had the fantastic idea of using quantum error-correction, or more specifically secret-sharing, to distribute a quantum proof among the provers. His enthusiasm overcame my skepticism, and we eventually showed the desired inclusion. Maybe {\text{MIP}^\star} was bigger than {\text{NEXP}} after all.

Our result, however, had a similar deficiency as the one with Julia, in that the completeness-soundness gap was exponentially small. Obtaining a result with a constant gap took a couple more years of work and the fantastic energy and insights of a Ph.D. student at MIT, Anand Natarajan. Anand is the first person I know of to have had the courage to dive into the most technical aspects of the analysis of the aforementioned results, while also bringing in the insights of a “true quantum information theorist” that were supported by Anand’s background in Physics and upbringing in the group of Aram Harrow at MIT. (In contrast I think of myself more as a “raw” mathematician; I don’t really understand quantum states other than as psd matrices…not that I understand math either of course; I suppose I’m some kind of a half-baked mish-mash.) Anand had many ideas but one of the most beautiful ones led to what he poetically called the “Pauli braiding test”, a “truly quantum” analogue of the BLR linearity test that amounts to doing two linearity tests in conjugate bases and piecing the results together into a robust test for {n}-qubit entanglement (I wrote about our work on this here).

At approximately the same time Zhengfeng Ji had another wonderful idea, that was in some sense orthogonal to our work. (My interpretation of) Zhengfeng’s idea is that one can see an interactive proof system as a computation (verifier-prover-verifier) and use Kitaev’s circuit-to-Hamiltonian construction to transform the entire computation into a “quantum CSP” (in the same sense that the local Hamiltonian problem is a quantum analogue of classical constraint satisfaction problems (CSP)) that could then itself be verified by a quantum multi-prover interactive proof system…with exponential gains in efficiency! Zhengfeng’s result implied an exponential improvement in complexity compared to the result by Julia and myself, showing inclusion of NEEXP, instead of NEXP, in {\text{MIP}^\star}. However, Zhengfeng’s technique suffered from the same exponentially small completeness-soundness gap as we had, so that the best lower bound on {\text{MIP}^\star} per se remained NEXP.

Both works led to follow-ups. With Natarajan we promoted the Pauli braiding test into a “quantum low-degree test” that allowed us to show the inclusion of QMA-EXP into {\text{MIP}^\star}, with constant gap, thereby finally answering the question posed by Aharonov 4 years after it was asked. (I should also say that by then all results on {\text{MIP}^\star} started relying on a sequence of parallel repetition results shown by Bavarian, Yuen, and others; I am skipping this part.) In parallel, with Ji, Fitzsimons, and Yuen we showed that Ji’s compression technique could be “iterated” an arbitrary number of times. In fact, by going back to “first principles” and representing verifiers uniformly as Turing machines we realized that the compression technique could be used iteratively to (up to small caveats) give a new proof of the fact (first shown by Slofstra using an embedding theorem for finitely presented groups) that the zero-gap version of {\text{MIP}^\star} contains the halting problem. In particular, the entangled value is uncomputable! This was not the first time that uncomputability crops up in a natural problem in quantum computing (e.g. the spectral gap paper), yet it still surprises when it shows up. Uncomputable! How can anything be uncomputable!

As we were wrapping up our paper, Henry Yuen realized that our “iterated compression of interactive proof systems” was likely optimal, in the following sense. Even a mild improvement of the technique, in the form of a slower closing of the completeness-soundness gap through compression, would yield a much stronger result: undecidability of the constant-gap class {\text{MIP}^\star}. It was already known by work of Navascues et al., Fritz, and others, that such a result would have consequences that, if not surprising, certainly seemed like they would take us out of our depth. In particular, undecidability of any language in {\text{MIP}^\star} would imply a negative resolution to a series of equivalent conjectures in functional analysis, from Tsirelson’s problem to Connes’ Embedding Conjecture through Kirchberg’s QWEP conjecture. While we liked our result, I don’t think that we believed it could resolve any conjecture(s) in functional analysis.

So we moved on. At least I moved on, I did some cryptography for a change. But Anand Natarajan and his co-author John Wright did not stop there. They had the last major insight in this story, which underlies their recent STOC best paper described in the previous post. Briefly, they were able to combine the two lines of work, by Natarajan & myself on low-degree testing and by Ji et al. on compression, to obtain a compression that is specially tailored to the existing {\text{MIP}^\star} protocol for NEXP and compresses that protocol without reducing its completeness-soundness gap. This then let them show Ji’s result that {\text{MIP}^\star} contains NEEXP, but this time with constant gap! The result received well-deserved attention. In particular, it is the first in this line of works to not suffer from any caveats (such as a closing gap, or randomized reductions, or some kind of “unfair” tweak on the model that one could attribute the gain in power to), and it implies an unconditional separation between MIP and {\text{MIP}^\star}.

As they were putting the last touches on their result, suddenly something happened, which is that a path towards a much bigger result opened up. What Natarajan & Wright had achieved is a one-step gapless compression. In our iterated compression paper we had observed that iterated gapless compression would lead to {\text{MIP}^\star=\text{RE}}, implying negative answers to the aforementioned conjectures. So then?

I suppose it took some more work, but in some way all the ideas had been laid out in the previous 15 years of work in the complexity of quantum interactive proof systems; we just had to put it together. And so a decade after the characterization QIP = PSPACE of single-prover quantum interactive proof systems, we have arrived at a characterization of quantum multiprover interactive proof systems, {\text{MIP}^\star = \text{RE}}. With one author in common between the two papers: congratulations Zhengfeng!

Even though we just posted a paper, in a sense there is much more left to do. I am hopeful that our complexity-theoretic result will attract enough interest from the mathematicians’ community, and especially operator algebraists, for whom CEP is a central problem, that some of them will be willing to devote time to understanding the result. I also recognize that much effort is needed on our own side to make it accessible in the first place! I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences; yet I am hopeful that some of the ideas may eventually find their way into the construction of interesting mathematical objects (such as, who knows, a non-hyperlinear group).

That was a good Masters project…thanks Julia!

by Thomas at January 14, 2020 01:32 AM UTC

Lance has often said (and also in this) that if P=NP, that would be great for the world: much more efficient ways to build things, science could be done better, etc., and that this is much more important than the fact that modern crypto would no longer work. We now have the technology to do private key really well, like a thumb drive that has a billion bits for one-time pads.

I agree that the world would be better off in some ways, but I wonder how much damage would be done in the transition period from public to private key. Would the world recover enough to reap the benefits of P=NP?

First, think of what YOU would do if you showed P=NP (and let's assume your algorithm is either reasonable or could be made reasonable with some time and effort).

The novel Factor Man is about what someone who has solved P=NP does. I won't tell you how it goes, but they deal with the issue intelligently. So if I solved P=NP, then I would first re-read it and think through whether I would do that, or modify what is done, or what. It's a good start.

I reviewed the book in SIGACT News, or you can read my review here.

On a slightly diff note, here is the latest argument I've heard for why P=NP:

Planar 2-coloring is in P

Planar 4-coloring is in P

So

Planar 3-coloring should be in P.

This was said by a very good math/cs ugrad at UMCP. I do not know if he was kidding.


by GASARCH (noreply@blogger.com) at January 14, 2020 01:28 AM UTC

January 20, 2020, UC Riverside (https://www.cs.ucr.edu/~silas/): an all-day event to celebrate the TCS research community in Southern California.

by shacharlovett at January 14, 2020 12:51 AM UTC

Abstraction is an abstraction. You can’t touch it or taste it or photograph it. You can barely talk about it without resorting to metaphors and analogies. Yet this ghostly concept is an essential tool in both mathematics and computer science. Oddly, it seems to inspire quite different feelings and responses in those two fields. I’ve been wondering why.

In mathematics abstraction serves as a kind of stairway to heaven—as well as a test of stamina for those who want to get there. [Image: West stairs to Grand View Park, San Francisco, October 2017.] You begin the climb at an early age, at ground level, with things that are not at all abstract. Jelly beans, for example. You learn the important life lesson that if you have five jelly beans and you eat three jelly beans, you will have only two jelly beans left. After absorbing this bitter truth, you are invited to climb the stairs of abstraction as far as the first landing, where you replace the tasty tangible jelly beans with sugar-free symbols: \(5 - 3 = 2\).

Some years later you reach higher ground. The symbols representing particular numbers give way to the \(x\)s and \(y\)s that stand for quantities yet to be determined. They are symbols for symbols. Later still you come to realize that this algebra business is not just about “solving for \(x\),” for finding a specific number that corresponds to a specific letter. It’s a magical device that allows you to make blanket statements encompassing all numbers: \(x^2 - 1 = (x + 1)(x - 1)\) is true for any value of \(x\).

Continuing onward and upward, you learn to manipulate symbolic expressions in various other ways, such as differentiating and integrating them, or constructing functions of functions of functions. Keep climbing the stairs and eventually you’ll be introduced to areas of mathematics that openly boast of their abstractness. There’s abstract algebra, where you build your own collections of numberlike things: groups, fields, rings, vector spaces. [Cartoon by Ben Orlin, mathwithbaddrawings.com, reprinted under Creative Commons license: “Sorry, I only do abstractions, not numbers.” “But numbers are abstractions.” “Let me clarify: I only do abstractions of abstractions of abstractions.”] Another route up the stairway takes you to category theory, where you’ll find a collection of ideas with the disarming label abstract nonsense.

Not everyone is filled with admiration for this Jenga tower of abstractions teetering atop more abstractions. Consider Andrew Wiles’s proof of Fermat’s last theorem, and its reception by the public. The theorem, first stated by Pierre de Fermat in the 1630s, makes a simple claim about powers of integers: If \(x, y, z, n\) are all integers greater than \(0\), then \(x^n + y^n = z^n\) has solutions only if \(n \le 2\). The proof of this claim, published in the 1990s, is not nearly so simple. Wiles (with contributions from Richard Taylor) went on a scavenger hunt through much of modern mathematics, collecting a truckload of tools and spare parts needed to make the proof work: elliptic curves, modular forms, Galois groups, functions on the complex plane, L-series. It is truly a tour de force.

[Figure: Outline of the Wiles-Taylor proof of Fermat’s last theorem.] Diagram (borrowed from Kenneth A. Ribet and Brian Hayes, “Fermat’s Last Theorem and Modern Arithmetic”) outlines the overall strategy of the Wiles proof. If you had a counterexample to FLT, you could construct an elliptic curve E with certain properties. But the properties deduced on the left and right branches of the diagram turn out to be inconsistent, implying that E does not exist, nor does the counterexample that gave rise to it.

Is all that heavy machinery really needed to prove such an innocent-looking statement? Many people yearn for a simpler and more direct proof, ideally based on methods that would have been available to Fermat himself. Ken Ribet will be presenting “A 2020 View of Fermat’s Last Theorem” at the Joint Mathematics Meetings later this week. In a preview of the talk, he notes that advances made since 1994 allow a more succinct statement of the proof. But those recent advances are no easier to understand than the original proof. At least nine attempts to construct an elementary proof have been posted on the arXiv in the past 20 years, and there are lots more elsewhere. I think the sentiment motivating much of this work is, “You shouldn’t be allowed to prove a theorem I care about with methods I don’t understand.” Marilyn vos Savant, the Parade columnist, takes an even more extreme position, arguing that Wiles strayed so far from the subject matter of the theorem as to make his proof invalid. (For a critique of her critique, see Boston and Granville.)

Almost all of this grumbling about illegitimate methods and excess complexity comes from outside the community of research mathematicians. Insiders see the Wiles proof differently. For them, the wide-ranging nature of the proof is actually what’s most important. The main accomplishment, in this view, was cementing a connection between those far-flung areas of mathematics; resolving FLT was just a bonus.

Yet even mathematicians can have misgivings about the intricacy of mathematical arguments and the ever-taller skyscrapers of abstraction. Jeremy Gray, a historian of mathematics, believes anxiety over abstraction was already rising in the 19th century, when mathematics seemed to be “moving away from reality, into worlds of arbitrary dimension, for example, and into the habit of supplanting intuitive concepts (curves that touch, neighboring points, velocity) with an opaque language of mathematical analysis that bought rigor at a high cost in intelligibility.”

For a view of abstraction in contemporary mathematics, we have a vivid image from Piper Harron, a young mathematician who wrote an extraordinarily candid PhD thesis in 2016. (Quite apart from its comments on abstraction, the thesis is well worth reading. It offers alternating sections of “mathsplaining” and “laysplaining.” See also a review in MAA Focus by Adriana Salerno. The thesis was to be published in book form last fall by Birkhäuser, but the book doesn’t seem to be available yet.) The introductory chapter begins, “The hardest part about math is the level of abstraction required.” She goes on to explain:

I like to imagine abstraction (abstractly ha ha ha) as pulling the strings on a marionette. The marionette, being “real life,” is easily accessible. Everyone understands the marionette whether it’s walking or dancing or fighting. We can see it and it makes sense. But watch instead the hands of the puppeteers. Can you look at the hand movements of the puppeteers and know what the marionette is doing?… Imagine it gets worse. Much, much worse. Imagine that the marionettes we see are controlled by marionettoids we don’t see which are in turn controlled by pre-puppeteers which are finally controlled by actual puppeteers.

Keep all those puppetoids in mind. I’ll be coming back to them, but first I want to shift my attention to computer science, where the towers of abstraction are just as tall and teetery, but somehow less scary.


Suppose your computer is about to add two numbers…. No, wait, there’s no need to suppose or imagine. In the orange panel below, type some numbers into the \(a\) and \(b\) boxes, then press the “+” button to get the sum in box \(c\). Now, please describe what’s happening inside the machine as that computation is performed.

[Interactive widget from the original post: input boxes \(a\) and \(b\), a “+” button, and an output box \(c\).]

You can probably guess that somewhere behind the curtains there’s a fragment of code that looks like c = a + b. And, indeed, that statement appears verbatim in the JavaScript program that’s triggered when you click on the plus button. But if you were to go poking around among the circuit boards under the keyboard of your laptop, you wouldn’t find anything resembling that sequence of symbols. The program statement is a high-level abstraction. If you really want to know what’s going on inside the computing engine, you need to dig deeper—down to something as tangible as a jelly bean.

How about an electron? In truth, electrons are not so tangible. The proper mental image is not a hard sphere like a BB but a diffuse probability distribution. In other words, the electron itself is an abstraction. During the computation, clouds of electrons drift through the machine’s circuitry, like swarms of migrating butterflies. Their movements are regulated by the switching action of transistors, and the transistors in turn are controlled by the moving electrons. It is this dance of the electrons that does the arithmetic and produces an answer. Yet it would be madness to describe the evaluation of c = a + b by tracing the motions of all the electrons (perhaps \(10^{23}\) of them) through all the transistors (perhaps \(10^{11}\)).

To understand how electrons are persuaded to do arithmetic for us, we need to introduce a whole sequence of abstractions.

  • First, step back from the focus on individual electrons, and reformulate the problem in terms of continuous quantities: voltage, current, capacitance, inductance.
  • Replace the physical transistors, in which voltages and currents change smoothly, with idealized devices that instantly switch from totally off to fully on.
  • Interpret the two states of a transistor as logical values (true and false) or as numerical values (\(1\) and \(0\)).
  • Organize groups of transistors into “gates” that carry out basic functions of Boolean logic, such as and, or, and not.
  • Assemble the gates into larger functional units, including adders, multipliers, comparators, and other components for doing base-\(2\) arithmetic. (A toy version of this step, building an adder from Boolean gates, is sketched just after this list.)
  • Build higher-level modules that allow the adders and such to be operated under the control of a program. This is the conceptual level of the instruction-set architecture, defining the basic operation codes (add, shift, jump, etc.) recognized by the computer hardware.
  • Graduating from hardware to software, design an operating system, a collection of services and interfaces for abstract objects such as files, input and output channels, and concurrent processes.
  • Create a compiler or interpreter that knows how to translate programming language statements such as c = a + b into sequences of machine instructions and operating-system requests.

From the point of view of most programmers, the abstractions listed above represent computational infrastructure: They lie beneath the level where you do most of your thinking—the level where you describe the algorithms and data structures that solve your problem. But computational abstractions are also a tool for building superstructure, for creating new functions beyond what the operating system and the programming language provide. For example, if your programming language handles only numbers drawn from the real number line, you can write procedures for doing arithmetic with complex numbers, such as \(3 + 5i\). (Go ahead, try it in the orange box above.) And, in analogy with the mathematical practice of defining functions of functions, we can build compiler compilers and schemes for metaprogramming—programs that act on other programs.
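As a minimal illustration of that superstructure point (my own sketch; Python already has a built-in complex type, so this is purely didactic), complex arithmetic can be layered on top of plain real-number arithmetic as pairs:

# Complex numbers as a user-built abstraction over pairs of reals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Complex:
    re: float
    im: float

    def __add__(self, other):
        return Complex(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

a = Complex(3, 5)            # the 3 + 5i from the text
b = Complex(1, -2)
print(a + b)                 # Complex(re=4, im=3)
print(a * b)                 # Complex(re=13, im=-1)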

In both mathematics and computation, rising through the various levels of abstraction gives you a more elevated view of the landscape, with wider scope but less detail. Even if the process is essentially the same in the two fields, however, it doesn’t feel that way, at least to me. In mathematics, abstraction can be a source of anxiety; in computing, it is nothing to be afraid of. In math, you must take care not to tangle the puppet strings; in computing, abstractions are a defense against such confusion. For the mathematician, abstraction is an intellectual challenge; for the programmer, it is an aid to clear thinking.

Why the difference? How can abstraction have such a friendly face in computation and such a stern mien in math? One possible answer is that computation is just plain easier than mathematics. In speaking of “computation,” what I have in mind is the design of algorithms and data structures suitable for a machine we can build out of material components. If you are playing with Turing machines and other toys of theoretical computer science, the game is altogether different. But in my view theoretical computer science is just a funny-looking branch of mathematics. (With apologies to those of my friends who grimace to hear me say it.) Anything that fits into the computer is necessarily discrete and finite. In principle, any computer program could be reduced to a big table mapping all possible inputs to the corresponding outputs. Mathematics is invulnerable to this kind of trivialization by brute force. It has infinities hiding under the bed and lurking behind the closet door, and that’s what makes it both fun and frightening.

Another possible explanation is that computer systems are engineered artifacts; we can build them to our own specifications. If a concept is just too hairy for the human mind to master, we can break it down into simpler pieces. Math is not so complaisant—not even for those who hold that mathematical objects are invented rather than discovered. We can’t just design number theory so that the Riemann hypothesis will be true.

But I think the crucial distinction between math abstractions and computer abstractions lies elsewhere. It’s not in the abstractions themselves but in the boundaries between them.


Warning from the abstraction police on the office door of Radhika Nagpal, Harvard University. (Photographed November 2013.)

I believe I first encountered the term abstraction barrier in Abelson and Sussman’s Structure and Interpretation of Computer Programs, circa 1986. The underlying idea is surely older; it’s implicit in the “structured programming” literature of the 1960s and 70s. But SICP still offers the clearest and most compelling introduction.

In building computer systems, we are urged to compartmentalize, to create self-contained and sealed-off modules—black boxes whose inner workings are concealed from outside observers. In this world, information hiding is considered a virtue, not an impeachable offense. If a design has a layered structure, with abstractions piled one atop the other, the layers are separated by abstraction barriers. A high-level module can reach across the barrier to make use of procedures from lower levels, but it won’t know anything about the implementation of those procedures. When you are writing programs in Lisp or Python, you shouldn’t need to think about how the operating system carries out its chores; and when you’re writing routines for the operating system, you needn’t think about the physics of electrons meandering through the crystal lattice of a semiconductor. Each level of the hierarchy can be treated (almost) independently.

Mathematics also has its abstraction barriers, although I’ve never actually heard the term used by mathematicians. A notable example comes from Giuseppe Peano’s formulation of the foundations of arithmetic, circa 1900. Peano posits the existence of a number \(0\), and a function called successor, \(S(n)\), which takes a number \(n\) and returns the next number in the counting sequence. Thus the natural numbers begin \(0, S(0), S(S(0)), S(S(S(0)))\), and so on. Peano deliberately refrains from saying anything more about what these numbers look like or how they work. They might be implemented as sets, with \(0\) being the empty set and successor the operation of adjoining an element to a set. Or they could be unary lists: (), (|), (||), (|||), . . . The most direct approach is to use Church numerals, in which the successor function itself serves as a counting token, and the number \(n\) is represented by \(n\) nested applications of \(S\).

From these minimalist axioms we can define the rest of arithmetic, starting with addition. In calculating \(a + b\), if \(b\) happens to be \(0\), the problem is solved: \(a + 0 = a\). If \(b\) is not \(0\), then it must be the successor of some number, which we can call \(c\). Then \(a + S(c) = S(a + c)\). Notice that this definition doesn’t depend in any way on how the number \(0\) and the successor function are represented or implemented. Under the hood, we might be working with sets or lists or abacus beads; it makes no difference. An abstraction barrier separates the levels. From addition you can go on to define multiplication, and then exponentiation, and again abstraction barriers protect you from the lower-level details. There’s never any need to think about how the successor function works, just as the computer programmer doesn’t think about the flow of electrons.
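As a hedged little sketch of that barrier (mine, not from the article), here is the same addition-and-multiplication code running unchanged over two different implementations of zero and successor:

# Peano arithmetic written against an opaque interface: only zero, succ,
# is_zero, and pred are used, never the representation underneath.
def make_arithmetic(zero, succ, is_zero, pred):
    def add(a, b):
        # a + 0 = a ;  a + S(c) = S(a + c)
        return a if is_zero(b) else succ(add(a, pred(b)))
    def mul(a, b):
        return zero() if is_zero(b) else add(mul(a, pred(b)), a)
    return add, mul

# Implementation 1: numbers as nested tuples ("unary lists").
tuple_impl = dict(zero=lambda: (), succ=lambda n: (n,),
                  is_zero=lambda n: n == (), pred=lambda n: n[0])

# Implementation 2: numbers as ordinary machine integers.
int_impl = dict(zero=lambda: 0, succ=lambda n: n + 1,
                is_zero=lambda n: n == 0, pred=lambda n: n - 1)

for impl in (tuple_impl, int_impl):
    add, mul = make_arithmetic(**impl)
    two = impl["succ"](impl["succ"](impl["zero"]()))
    three = impl["succ"](two)
    print(add(two, three), mul(two, three))   # same high-level code, both representations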

The importance of not thinking was stated eloquently by Alfred North Whitehead, more than a century ago:

Alfred North Whitehead, An Introduction to Mathematics, 1911, pp. 45–46.

It is a profoundly erroneous truism, repeated by all copybooks and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.


If all of mathematics were like the Peano axioms, we would have a watertight structure, compartmentalized by lots of leakproof abstraction barriers. And abstraction would probably not be considered “the hardest part about math.” But, of course, Peano described only the tiniest corner of mathematics. We also have the puppet strings.

In Piper Harron’s unsettling vision, the puppeteers high above the stage pull strings that control the pre-puppeteers, who in turn operate the marionettoids, who animate the marionettes. Each of these agents can be taken as representing a level of abstraction. The problem is, we want to follow the action at both the top and the bottom of the hierarchy, and possibly at the middle levels as well. The commands coming down from the puppeteers on high embody the abstract ideas that are needed to build theorems and proofs, but the propositions to be proved lie at the level of the marionettes. There’s no separating these levels; the puppet strings tie them together.

In the case of Fermat’s Last Theorem, you might choose to view the Wiles proof as nothing more than an elevated statement about elliptic curves and modular forms, but the proof is famous for something else—for what it tells us about the elementary equation \(x^n + y^n = z^n\). Thus the master puppeteers work at the level of algebraic geometry, but our eyes are on the dancing marionettes of simple number theory. What I’m suggesting, in other words, is that abstraction barriers in mathematics sometimes fail because events on both sides of the barrier make simultaneous claims on our interest.

In computer science, the programmer can ignore the trajectories of the electrons because those details really are of no consequence. Indeed, the electronic guts of the computing machinery could be ripped out and replaced by fluidic devices or fiber optics or hamsters in exercise wheels, and that brain transplant would have no effect on the outcome of the computation. Few areas of mathematics can be so cleanly floated away and rebuilt on a new foundation.

Can this notion of leaky abstraction barriers actually explain why higher mathematics looks so intimidating to most of the human population? It’s surely not the whole story, but maybe it has a role.

In closing I would like to point out an analogy with a few other areas of science, where problems that cross abstraction barriers seem to be particularly difficult. Physics, for example, deals with a vast range of spatial scales. At one end of the spectrum are the quarks and leptons, which rattle around comfortably inside a particle with a radius of \(10^{-15}\) meter; at the other end are galaxy clusters spanning \(10^{24}\) meters. In most cases, effective abstraction barriers separate these levels. When you’re studying celestial mechanics, you don’t have to think about the atomic composition of the planets. Conversely, if you are looking at the interactions of elementary particles, you are allowed to assume they will behave the same way anywhere in the universe. But there are a few areas where the barriers break down. For example, near a critical point where liquid and gas phases merge into an undifferentiated fluid, forces at all scales from molecular to macroscopic become equally important. Turbulent flow is similar, with whirls upon whirls upon whirls. It’s not a coincidence that critical phenomena and turbulence are notoriously difficult to describe.

Biology also covers a wide swath of territory, from molecules and single cells to whole organisms and ecosystems on a planetary scale. Again, abstraction barriers usually allow the biologist to focus on one realm at a time. To understand a predator-prey system you don’t need to know about the structure of cytochrome c. But the barriers don’t always hold. Evolution spans all these levels. It depends on molecular events (mutations in DNA), and determines the shape and fate of the entire tree of life. We can’t fully grasp what’s going on in the biosphere without keeping all these levels in mind at once.

by Brian Hayes at January 13, 2020 11:00 PM UTC

The Computer Science Department at UT Austin invites applications for a Postdoctoral Fellow in theoretical computer science for the 2020-21 academic year. The Fellow will work with Dana Moshkovitz and David Zuckerman on pseudorandomness and computational complexity. Review of applicants will begin on January 15, but applications will be accepted until the position is filled.

Website: https://utaustin.wd1.myworkdayjobs.com/UTstaff/job/UT-MAIN-CAMPUS/Postdoctoral-Fellow_R_00006957
Email: maguilar@cs.utexas.edu

by shacharlovett at January 13, 2020 02:40 PM UTC


The Clay prize anniversary is soon.

SME keynote lecture source

Evelyn Lamb is a mathematician who is also a journalist. She has a blog called Roots of Unity on the Scientific American blog network. Also impressive is that she designed the AMS Page-a-Day Calendar on behalf of the American Mathematical Society. It is available from the AMS with member discounts and is filled with wonderful tidbits on math.

The other day she interviewed Ken and me on P=NP.

Ken just happened to be visiting me that afternoon in New York and we spoke to Evelyn about P=NP. The call was delayed because of equipment issues on both ends. Perhaps the fundamental problem is not P=NP after all, but making computerized equipment work. Oh well.

Evelyn is writing an article for the New Scientist magazine about P=NP. She said it was driven by the approaching 20th anniversary of the Clay Prizes. Recall there were seven of these, each with a million dollar prize. One, the Poincaré conjecture, was already solved. The others are still open—the million dollars is still there waiting for a brilliant researcher.

Questions and Answers

Here is our own rendition of some questions that came up. We did not keep a recording or notes on our end, and we have paraphrased and expanded some things that we said:

• What is the update on P=NP?

Ken: The update is that there is no update. There is no recent progress on resolving P=NP—seemingly none this decade, I’d say. We are still light-years away from it, and you could even say that the difficulty of making even yea-much progress is discouraging. There are some conjectures that elaborate on P ≠ NP, including Unique Games and (S)ETH, but the status of those two has become less clear.


• Does the Clay prize help researchers?

Dick: I do not see that the prize gets anyone to work on P=NP.
Ken: I disagree. The prize does help people explain quickly what they are working on to others and why. This is quite valuable.


• Could P=NP be undecidable?

Dick: Who knows. I note that known proofs that some combinatorial problem is unprovable in Peano arithmetic somehow rely on a function that grows too fast. The famous Ramsey problem is a perfect example. I do not see any way to get such a function in the P=NP area. Of course I could easily be wrong.

Ken: This is the subject by which Dick and I first came to know each other in the 1980s, for instance this paper of Dick’s with Rich DeMillo vis-à-vis one of mine. I now believe it is resolvable but will need deep and complex techniques.


Here I, Ken, went into the topic of “Natural Proofs,” as Dick did again later.


• When will P=NP be solved?

Ken: We just discussed this on our blog. The upshot is that the conjecture has been open long enough—coming to its 50th anniversary if you date it by Steve Cook’s famous 1970–71 paper, 64 years if you date it by Kurt Gödel’s 1956 letter to John von Neumann—that it is going outside the range where data on other conjectures has any predictive value.

Dick: There is a related issue with P=NP. Perhaps we have guessed the wrong direction. Most believe that P is not equal to NP. But many conjectures were finally resolved when someone worked in the right direction.


• Who believes P=NP vs P ≠ NP?

Ken: Our friend Bill Gasarch has polled this three times since 2002. His article finds 80% support for P ≠ NP, which he says goes higher among those who have worked more on it. I believe unequal, but Dick’s opinion is fluid and Don Knuth recently said he takes the “equal” possibility seriously.


I (Ken) started hunting for how we’ve covered Knuth’s opinion on GLL—it seems only a brief mention here and in comments here—but Dick related hearing it in person from Knuth.


• Why so Hard?

Ken: If you believe P ≠ NP, then it is hard because Nature—mathematical nature—does a bang-up job of making it seem like P = NP. Most instances of NP-complete problems are easy; so-called SAT-solvers have had much practical success. The larger issue is that nature has frustrated us from proving any nontrivial general lower bounds at all. You can allow a \(2^{O(n)}\) exponential-time algorithm to make exponentially-long queries to NP and yet no one has proved that the resulting language cannot be decided by linear-sized Boolean circuits. Ryan Williams needed a lot of effort to prove that this class does not have constant-depth poly-size modular-counting circuits, but those could be weaker in relevant ways than general linear-size circuits. But this class is a lot bigger than NP.


I then said another factor is that sometimes algorithms seem to make no visible progress until at the very end when they suddenly come up with a correct answer. Dick and I had tried to quantify a notion of progress. I then started talking about the “hardness versus randomness” phenomenon and the “Natural Proofs” barrier again (for which this 2016 paper by Ryan is a good reference) but Dick cut in with the nub of all these matters.

Dick: A key issue is what I call “Bait-and-Switch” (indeed, in a post on the first day of GLL). Suppose an output x is believed to be hard. Add a random y to it. The result z is also random. One branch of an algorithm computing y and another working on z seem to have nothing to do with x. Yet when you do y+z bitwise you have x. This destroys any lower-bound argument that would be based on metering progress toward x.
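A tiny illustration of the point (my own, not from the call): mask a fixed string with a random one and both the mask and the masked result look featureless, yet together they give the string back exactly.

# Bait-and-switch in miniature: z = x XOR y is as random-looking as y,
# but y XOR z recovers x bit for bit.
import secrets

x = b"hard instance!!!"                    # stand-in for a "hard" output
y = secrets.token_bytes(len(x))            # uniformly random mask
z = bytes(a ^ b for a, b in zip(x, y))     # also uniformly random on its own
assert bytes(a ^ b for a, b in zip(y, z)) == x   # progress on y or z alone says nothing about x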


• Guessing the wrong way?

Dick continued saying that this issue only affects the “not equal” position and maybe it’s a hint of people guessing the wrong way. This went back into some things that were said before, and then the call started winding up.

Things We Didn’t Say

We had made mental notes while walking back from lunch across the street in time for the call, but forgot some of them. To recycle an old saying, a mental note isn’t worth the paper it’s written on.

One of them was to remark on Gerhard Woeginger’s P Versus NP claims page and the relative balance of claims of “Equal” and “Not equal.” As of its last update in September 2016, the 116 listed claims are (by Ken’s count) divided 62 for “Equal,” 49 for “Not equal,” 3 for unprovable/undecidable, 1 claiming no claim, and 1 for both “Equal” and “Not equal.” It may be that “Equal” predominates because its logical formula begins with ∃ and it seems easier to imagine one has found a single X that works rather than to exclude all X—an infinite number of them.

I (Ken) had intended to connect this and the P=NP poll results to our post two months ago about cases of open questions where one direction seems overwhelmingly supported both by evidence and sentiment. Whatever one thinks about the value of all the P-vs.-NP claims, they witness that P-vs.-NP is certainly not one of those cases.

Last, I had intended to mention the deepest evidence in favor of “not equal.” This is the sinuous thin line between hard and easy cases of problems. Going back at least to Thomas Schaefer’s great 1978 work classifying cases of SAT, we’ve been aware of a surprisingly sharp dichotomy between “hard” and “in P” with seemingly no buffer zone. And the line sometimes seems to fall capriciously. In the second half of this post we mentioned Jin-Yi Cai’s work on dichotomy and a case relevant also to quantum where counting solutions to a fixed family of quadratic mod-4 polynomials in \(\{0,1,2,3\}^n\) is easy, but counting those in \(\{0,1\}^n\) to the same polynomial is hard. For another instance of this widely-appreciated point, see Scott Aaronson’s discussion of 3SAT here.

The point is that if P=NP (or counting is in P) then all of this finely filigreed structure would vanish—poof—an illusion all along. Like if the Mandelbrot Set became a smeary blob. But then we’d be left with: why did we have this illusion in the first place?

Open Problems

We suggested that she speak to some others—people more expert than we are. For this and other reasons the article may be quite different. We hope giving our own summary here is a help. What points would you add to it?

We also forgot to ask her about her own work in Teichmüller theory. Here are a couple of her articles on the ABC Conjecture. But that is not a Clay problem and is a subject for another decade.

[Edited 50 to 20]

by RJLipton+KWRegan at January 12, 2020 10:01 PM UTC

St. Petersburg State University invites applications for four associate professorships within the new Department of Mathematics and Computer Science, where students are some of the best in the world.

All areas of mathematics and TCS will be considered, with preference given to candidates working in probability, mathematical physics, mathematical logic or TCS.

Website: https://math-cs.spbu.ru/en/news/news-2019-12-11/
Email: chebyshev.msc@gmail.com

by shacharlovett at January 12, 2020 07:26 PM UTC

A polygonalization of a point set is a simple polygon having all the points as its vertices. Another way to think about it is that it’s a non-crossing Hamiltonian cycle in a complete geometric graph. My recent work on the hardness of counting polygon triangulations was motivated in large part by the problem of counting polygonalizations. Counting polygonalizations should also be hard, but I don’t know how to prove it, and I was hoping that proving other similar problems hard might either lead to more insights or at least provide something to prove a reduction from.

More recently I’ve been attacking the problem from a different direction: looking at the number of polygonalizations for special point sets that are structured enough to make it possible to compute the answer easily while not being completely trivial. The point sets I chose were the 3×n integer grids. Smaller grids are uninteresting: 1×n grids don’t have any polygonalizations and 2×n grids have only one, the outer rectangle.

In a 3×n grid, any polygonalization must touch the outer points of the grid in clockwise order. And any subset of the inner points that are between the same pair of outer points can only be connected in consecutive order. So, to describe a polygonalization, we only need to determine for each inner point which pair of outer points it is between. One way to do this is to assign letters to the consecutive pairs of outer points (for instance along the upper left, top, and upper right, and along the lower left, bottom, and lower right). Then, identify each polygonalization with a sequence of letters describing the positions of its inner points, in their left-to-right order.

Polygonalizations of a 3×5 grid labeled by the positions of the three inner points in the cyclic sequence of outer points

This gives a one-to-one correspondence between polygonalizations and strings, where:

  • Each string has length .
  • The letters of the strings are drawn from the alphabet .
  • If a letter appears in a string, all its appearances are consecutive.
  • The subsequence of letters from appear in that order, as does the subsequence of letters from .
  • It is not allowed to have both and , nor to have both and .

These conditions are a little messy, but it’s possible to set up a recurrence for the number of strings that don’t use the left and right side symbols, with choices for the top (lower case) symbols, choices for the bottom (upper case) symbols, and labeled points in the middle. In the recurrence below, represents the number of symbols in the last contiguous block of equal letters, and or represents the number of letters that remain available with values earlier in the alphabetic order than the one that was used for this block. The recurrence is:

with base case . Then the total number of polygonalizations can be found by splitting up the overall length of the string into substrings of left, top or bottom, and right symbols, using this recurrence to count the number of ways of filling in the middle part, multiplying by two for each nonempty left or right part (because there is a choice of upper or lower case for each of these parts), and summing over all splits. My code to do this tells me that the sequence of numbers of polygonalizations for grids, for , is

(not yet in OEIS but I will submit it update 2020-01-14: OEIS A331235).
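For the very smallest grids one can sanity-check such counts by brute force. The sketch below is mine, not the recurrence above: it enumerates Hamiltonian cycles of the point set and keeps the non-crossing ones, using exact integer orientation tests; it is hopeless beyond roughly a dozen points.

# Naive cross-check: count polygonalizations of a small point set directly.
from itertools import permutations

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(a, b, p):
    # p is assumed collinear with a and b
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_touch(p, q, r, s):
    """Do the closed segments pq and rs share any point?"""
    d1, d2 = orient(r, s, p), orient(r, s, q)
    d3, d4 = orient(p, q, r), orient(p, q, s)
    if d1 * d2 < 0 and d3 * d4 < 0:
        return True                      # proper crossing
    return ((d1 == 0 and on_segment(r, s, p)) or (d2 == 0 and on_segment(r, s, q)) or
            (d3 == 0 and on_segment(p, q, r)) or (d4 == 0 and on_segment(p, q, s)))

def is_polygonalization(cycle):
    m = len(cycle)
    edges = [(cycle[i], cycle[(i + 1) % m]) for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            (a, b), (c, d) = edges[i], edges[j]
            shared = {a, b} & {c, d}
            if len(shared) == 1:         # consecutive edges of the cycle
                s = shared.pop()
                u = a if b == s else b
                v = c if d == s else d
                # they may meet only at the shared vertex, not overlap along a line
                if orient(s, u, v) == 0 and \
                   (u[0] - s[0]) * (v[0] - s[0]) + (u[1] - s[1]) * (v[1] - s[1]) > 0:
                    return False
            elif segments_touch(a, b, c, d):
                return False             # non-adjacent edges must be disjoint
    return True

def count_polygonalizations(points):
    first, rest = points[0], points[1:]
    total = sum(is_polygonalization((first,) + perm) for perm in permutations(rest))
    return total // 2                    # each undirected cycle was seen in both orientations

grid = [(x, y) for x in range(3) for y in range(3)]      # the 3x3 grid
print(count_polygonalizations(grid))                     # 8! cycles tested; small enough to run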

The next question is: how quickly does this sequence grow? It’s singly exponential (because that’s true of all 2d non-crossing geometric graph counting problems) but what is the base of the exponential? We can make a rough estimate by ignoring factors that are polynomial or smaller. The first estimate I tried was wrong: it was that there are roughly ways of choosing whether each letter is upper or lower case, most of which assign roughly half of the letters to each case, and roughly ways of choosing an upper or lower case substring of length , for a total of choices. But this is an overestimate, because it doesn’t take into account the requirement that each letter be contiguous. It would allow strings like which don’t correspond to polygonalizations, and that turns out to matter in the estimation.

To get a better estimate, we need to break down how we choose a string in a different way:

  • First, choose how many of the lower case letters are actually used, and then which subset of them is used.
  • Second, do the same thing with the upper case letters.
  • Finally, repeat times choosing for each letter of the string whether it repeats the previous letter, takes the next lower case letter from the chosen subset, or takes the next upper case letter from the chosen subset, making sure that the first choice is not a repeat and that the total number of next-letter choices matches the number of different letters to choose from.

There are polynomially many choices for how many letters of each type to use, so the total number of choices is within a polynomial factor of the number of choices after we fix these numbers of letters to a single value, the one that maximizes the number of remaining choices. And we can estimate the number of remaining choices using Stirling’s approximation. Estimating the number of polygonalizations in this way shows that it is within a polynomial factor of , where

and where the number of letters we use from each case is . Here, some magic happens: It doesn’t look like and should have nice simple formulas, but they do. With (the golden ratio), we get a maximum at , for which . So our sequence is approximately asymptotic to .

Probably if you go through the same estimation process more carefully, you can recover the polynomial factor as well. I wasn’t so careful; instead I just computed how far off from the numbers in my sequence are, fit a line to these factors on a log-log scale, and observed that its exponent was approximately 1. So, the polynomial factor is , and numerically the sequence of numbers of polygonalizations above is a very good fit to

Why the golden ratio? I don’t know.

(Discuss on Mastodon)

by David Eppstein at January 12, 2020 01:33 PM UTC

Recently, using spectral techniques, H. Huang proved that every subgraph of the hypercube of dimension n induced on more than half the vertices has maximum degree at least the square root of n. Combined with some earlier work, this completed a proof of the sensitivity conjecture. In this work we show how to derive a proof of Huang’s result using only linear dependency and independence of vectors associated with the vertices of the hypercube. Our approach leads to several improvements of the result. In particular we prove that in any induced subgraph of the hypercube with more than half the number of vertices, there are two vertices, one of odd parity and the other of even parity, each with at least n vertices at distance at most 2. As an application we show that for any Boolean function f, the polynomial degree of f is bounded above by the product of the 0- and 1-sensitivities, a strictly stronger statement which implies the sensitivity conjecture.

at January 12, 2020 05:01 AM UTC

June 16-19, 2020 Simons Institute, Berkeley, CA https://womenintheory.wordpress.com/ Submission deadline: February 7, 2020 Registration deadline: February 7, 2020 The Women in Theory (WIT) Workshop is intended for graduate and exceptional undergraduate students in the area of theory of computer science. The workshop will feature technical talks and tutorials by senior and junior women in the field, as well as social events and activities.

by shacharlovett at January 09, 2020 04:45 PM UTC

The wonderful Women in Theory (WIT) biennial series of workshops started in 2008 and the 7th meeting will take place at   Simons Institute at Berkeley, Jun 16 – 19, 2020. Please see below the call for application.

WIT is one of my favorite (if not the favorite) programs in the theory community. Many in our community share my enthusiasm (and theory groups fight for the honor of hosting these meetings). The reactions from past participants leave no room for doubt – this is an important and great experience. So if you fit the workshop’s qualifications – please do yourself a favor and apply!


 

The Women in Theory (WIT) Workshop is intended for graduate and exceptional undergraduate students in the area of theory of computer science. The workshop will feature technical talks and tutorials by senior and junior women in the field, as well as social events and activities. The motivation for the workshop is twofold. The first goal is to deliver an invigorating educational program; the second is to bring together theory women students from different departments and foster a sense of kinship and camaraderie.

The 7th WIT workshop will take place at  Simons Institute at Berkeley, Jun 16 – 19, 2020.

Confirmed Speakers: Michal Feldman (Tel-Aviv University), Shafi Goldwasser (Simons, UC Berkeley)
Organizers: Tal Rabin (IBM), Shubhangi Saraf (Rutgers), and Lisa Zhang (Bell Labs).
Local Host Institution:  Simons Institute at Berkeley.
Local Arrangements:
Special Guest:  Omer Reingold (Stanford).
Contact us: womenintheory2020@gmail.com.

To apply: click here.

Important dates:
Application deadline: Feb 7, 2020
Notification of acceptance: March 15, 2020
Workshop: June 16-19, 2020.

by Omer Reingold at January 09, 2020 04:33 PM UTC

Spoiler Alert: This post has details from the final episodes of the HBO television series Silicon Valley

A few times I've gotten emails from people claiming they have shown P = NP and asking whether they should keep their algorithm a secret to protect the cryptography out there. My typical response is that they should use their algorithm to mine a few bitcoins and then get back to me.

The fictional characters of Pied Piper faced this dilemma when the AI they created "developed a general solution to discrete log in polynomial time" with some nice complexity class diagrams in the background.


Pied Piper was about to roll out its new internet, a distributed network that communicated between cell phones based on a compression algorithm developed by Pied Piper's CEO. Rolling out the network would reveal even more advanced compression based on breaking discrete log. "If we cancel it or shut it down, then others will try to copy or reverse engineer everything that we've built ... Our launch has to fail, publicly and spectacularly."

But here comes the P v NP dilemma: "And what about all the other stuff we're gonna do? I mean, give internet to underserved communities, students in the homework gap, refugees, genomic research. Pied Piper can help scientists cure cancer."

I'd take broken encryption over cancer any day. You can still do encryption even if P = NP, using one-time pads distributed via USB drives or quantum key distribution. And cancer sucks.

They should have mined a few bitcoins.

by Lance Fortnow (noreply@blogger.com) at January 09, 2020 01:35 AM UTC

Greetings from Oberwolfach.  This week, there is a great meeting here on combinatorics. In this post I want to state the Brown-Erdős-Sós conjecture and one of its variants. The trigger was a beautiful talk I heard from Lior Gishboliner on some impressive progress toward this very difficult problem.  Yet the solution of the problem seems far, and some people even doubt if the conjecture is true. Here is the paper “A New Bound for the Brown-Erdős-Sós problem” by David Conlon, Lior Gishboliner, Yevgeny Levanzov, and Asaf Shapira.

The Brown-Erdős-Sós conjecture and strong versions

Lior’s talk reminded me that Peter Frankl and Vojta Rodl told me in the late 80s about strong variants of the Brown-Erdős-Sós problem that imply the Szemeredi theorem. With the kind help of Vojta Rodl, who is here, I can tell you about two such variants. (But I am solely responsible for all mistakes.)

Let f(n,v,e) be the largest number of edges in a 3-uniform hypergraph with n vertices and no v vertices spanning e edges or more.

Brown-Erdős-Sós Conjecture: f(n,e+3,e)=o(n^2) for every e ≥ 3.

For e=3 the BES conjecture is the famous Ruzsa-Szemeredi theorem (that implies Roth’s theorem on 3-term arithmetic progressions). For e=4 the conjecture is painfully open (and is stronger than Szemeredi theorem for 4-term APs).

Here is an even stronger conjecture that implies Szemeredi’s theorem for arithmetic progressions of arbitrary length.

Consider the following 3-uniform hypergraph H(v) with v+3 vertices and v edges: Start with a path with v+1 vertices (and v edges). Color the edges alternately with red and brown and add a red vertex and a brown vertex. Now form a 3-uniform hypergraph whose edges are the red edges of the path joined with the red vertex and the brown edges of the path joined with the brown vertex.
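For concreteness (this unpacking of the construction is mine, not from the talk), take v=3: the path x_1x_2x_3x_4 has edges x_1x_2, x_2x_3, x_3x_4, colored red, brown, red. Adding a red vertex r and a brown vertex b gives the 3-uniform hypergraph H(3) with the six vertices x_1, x_2, x_3, x_4, r, b and the three edges {x_1,x_2,r}, {x_3,x_4,r}, {x_2,x_3,b}. So H(3) is one particular configuration of 6 vertices spanning 3 edges, and forbidding only this single configuration is a weaker hypothesis than forbidding all such configurations, which is why the conjecture below is stronger than the e=3 case of Brown-Erdős-Sós (the Ruzsa-Szemeredi theorem).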

Conjecture: Every H(v)-free 3-uniform hypergraph with n vertices has o(n^2) edges.

For another strengthening of the BES-conjecture that implies the Szemeredi theorem see the paper A remark of Pisier-type theorems of Erdos, Nesetril and Rodl. Famously, the celebrated hypergraph regularity lemma (and associated counting lemma) implies another Turan type theorem for hypergraphs that in turn implies Szemeredi’s theorem and its high dimensional extensions.

The new bounds by Conlon, Gishboliner, Levanzov, and Shapira

Although the theorem is for 3-uniform hypergraphs, the proof relies on the hypergraph regularity lemma for higher dimensional hypergraphs.

With Vojta in 1985 (or 86) and today (2020)

by Gil Kalai at January 08, 2020 07:58 PM UTC


Who can remember passwords anyway?

Real Bernie Sanders reaction source

Larry David is an American comedian. He was the lead writer and producer of the Seinfeld TV series. During the previous and current election cycles he has played Presidential candidate Bernie Sanders in skits on Saturday Night Live. His “Bernie” rails about issues with passwords.

Today I want to talk about reducing our dependence on passwords.

On SNL in October 2015, his “Bernie” branched off the topic of Hillary Clinton’s e-mails to famously rant:

What’s the deal with e-mails anyway? I forgot my password the other day, so they say “We’ll email you a new one.” But I can’t get into my email to get the password. I mean, talk about a ball-buster.

In the SNL cold open about the most recent debate three weeks ago, “Bernie” reprised the complaint:

Apple lies, Amazon lies, even my I-Phone lies. Every time it says it’s at 1 percent battery, it stays on for at least 20 minutes. The other times it’s at 7 percent—it shuts down immediately. Apple, what are you trying to hide? And what’s my password?!!

Passwords are still needed. But, and this is the key, we wish to reduce our dependency on them and still be safe. Also the method fits nicely with today’s need for access to encrypted information by law enforcement. We have already discussed this recently here. The trick is to make information safe enough to please us, without needing as many passwords to remember. “Bernie” would be happy.

Let’s next review why we use passwords at all.

Why We Have Passwords

Passwords have been used forever. See our friends at Wikipedia for a story by Polybius, an ancient historian, on how eons ago the Roman military used watchwords. Passwords have been used since the beginning of computing, forever in our world, to safeguard computer systems. The MIT computer system, CTSS, used them starting in 1961 to protect users. This is the reason Fernando Corbató is credited with the invention of computer passwords.

The goal of passwords and watchwords is simple: Control access. Both the real and fake Bernies use e-mail passwords to keep their communications private. We try to select good passwords, but the management of them can be challenging. It’s not just that people like “Bernie” forget theirs—it’s that our efforts to keep them in our heads often make them too easy to figure out. Jimmy Kimmel showed this in a 2017 segment on his own show.

Passwords are Dead

Managing passwords has led to the theme, “passwords are dead.” If they are dead, then we must have alternative methods. Some are based on biometrics such as fingerprints and eye scans; others are based on additional hardware. Fingerprints are one of the most popular, but have major drawbacks:

  • They can be unreliable.

  • They can be attacked.

  • They cannot be changed.

I can attest that fingerprints are unreliable. I use a Global Pass to speed my re-entry into the US. After my flight arrives at the airport, I use a Global Pass machine that checks my fingerprints. The machine fails to recognize my fingerprints, every time. This forces me to talk to a customs agent and convince them I am who I claim to be. Thus defeating the reason for using the Global Pass.

A 2012 study by Joseph Bonneau, Cormac Herley, Paul van Oorschot, and Frank Stajano titled “The Quest to Replace Passwords” compared 35 alternatives to passwords on their security, usability, and deployability. They showed that most do better on security, many do better on usability. But all alternatives do worse on deployability. The authors conclude:

“Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery.”

Colorful language, but the point is that passwords are hard to beat.

How to Remove Them

I have a proposal on how to avoid reliance on just passwords. At least for many applications. The idea is: You would use a password and add a secret password that you do not remember. You do not even know this password. You do not write it down. Clearly, this is additional security if you do not even know the secret password. No one can steal it from you or guess it.

So the issue is: How do you access your own stuff without knowing the secret password? The answer is simple: You run a program that tries all the possibilities. Let this take some time T. The point is that for many situations you will not mind if T is large. For example, when the access that you are protecting occurs rarely. The security that this affords is based on this: An attacker can and will definitely be able to get access by expending T time, if they also know your standard password. But this protects against mass attacks. An attacker may be able to get your data, but will not be able to get millions of people’s data.
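Here is a toy sketch of the idea (my own illustration, not a vetted protocol; the alphabet, suffix length, and iteration count are made-up knobs). The secret part is generated, used to derive a key, and thrown away; “recovery” is a deliberate brute-force search over it, and T is tuned by the size of the secret and the per-guess stretching cost.

# The "secret password you never know" idea, in miniature.
import hashlib, itertools, os, string

ALPHABET = string.ascii_lowercase
SECRET_LEN = 3          # unknown suffix: 26^3 = 17,576 possibilities (tiny, for the demo)
ITERATIONS = 1_000      # per-guess stretching cost; raise both knobs until T is hours or years

def derive_key(known_password, secret, salt):
    """Stretch (known password + never-stored secret) into a key."""
    return hashlib.pbkdf2_hmac("sha256", (known_password + secret).encode(), salt, ITERATIONS)

def setup(known_password):
    """Make a key; the secret suffix is generated, used, and immediately forgotten."""
    salt = os.urandom(16)
    secret = "".join(ALPHABET[b % len(ALPHABET)] for b in os.urandom(SECRET_LEN))
    key = derive_key(known_password, secret, salt)
    verifier = hashlib.sha256(key).digest()   # store only salt + verifier, never the secret
    return salt, verifier                     # `key` would encrypt the data, then be discarded

def recover(known_password, salt, verifier):
    """Recovery is an intentional brute-force search over every possible suffix."""
    for guess in itertools.product(ALPHABET, repeat=SECRET_LEN):
        key = derive_key(known_password, "".join(guess), salt)
        if hashlib.sha256(key).digest() == verifier:
            return key
    raise ValueError("no suffix matched; wrong known password?")

salt, verifier = setup("correct horse battery staple")
print(recover("correct horse battery staple", salt, verifier).hex())   # slow on purpose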

Let’s examine a few applications:

• End-to-end Encryption: The secret password could be protected by a T of order years or even decades of computer time. This would allow law enforcement to get information it needs, while stopping casual attackers.

• Recovery of your data: The secret password could be protected by a T of order days of computer time. Your computer backup data is secured by such a password. Then your computer crashes. You need to run such a computation to recover the data. In many situations this would be fine.

• A banking site: You would still use a good password. The extra secret one could have a T of an hour, say. Then when you need to pay an on-line bill, you would have to wait a reasonable amount of time.

• No need to place passwords in your will: Omada King, in his 2013 book The Making Identity, wrote:

[A]ccording to a recent survey from the University of London, one in ten people are now leaving their passwords in their wills to pass on this important information when they die.

This could be avoided by using a secret password with T of order months. When it is needed the heirs would run the recovery program and get the access they need.

Ken’s Words

Ken points out that merely trying more than a few possibilities will usually generate a suspicious-activity message and often a shutdown. So a system would have to be set up to permit “self-hacking.” Ken has a small rotation of “password extenders” not written down and often has to try two or three before gaining access to non-financial sites where he is registered.

Ken is sometimes asked to monitor chess tournaments for possible cheating using the initial “screening” phase of his system. This phase requires minimal effort from Ken as it is supposed to be automated on a server that any chess tournament director would have quick access to, but for various reasons the International Chess Federation (FIDE) has not (yet) erected such a server. So Ken posts the auto-generated reports at a private location known only to him and the arbiter(s) of a particular tournament.

Ken does not maintain a password scheme for the reports. This would become a headache precisely because of the difficulties people have with passwords. Ken does not want to assume the responsibility for managing them. Instead he includes a “quasi-password” as part of the URL he creates. These are often multi-lingual puns or references to quirky artists or factoids in the home country of the tournament. Being memorable and unusual enables the arbiter’s browser to learn the word and link without collision.

This maximizes convenience: For Ken to e-mail reports or zipped folders as attachments would be cumbersome under daily updates. With Ken’s way, the arbiter can view updates even without having to pull up Ken’s previous e-mails with links, just by typing some letters of the weird word in the browser address bar. Hiding directory listings and a “no robots” directive completes minimal security for temporary use during the tournament.

The First and Last Word?

We mentioned Fernando Corbató having originated passwords. But before his passing last July 12, he came to regret them. Here he is quoted:

“It’s become a kind of nightmare. I don’t think anybody can possibly remember all the passwords.”

Open Problems

How practical do you find the “no-password” idea? At least the above suggestion may save us from having to place passwords in our wills.

by RJLipton+KWRegan at January 08, 2020 05:31 AM UTC

Earlier this academic year, CATCS started a project to collect the profiles of TCS candidates on the job market this academic year. Those profiles can now be found at this webpage. The webpage offers a limited amount of search functionality, and we are continuing to improve the layout and add features.

If you are at an institution that is hiring in theory, please feel free to use this database and share it with your colleagues as appropriate.

If you are going on the job market and would like to fill out a profile, please use this form. We will periodically update the webpage adding newer profiles.

Many thanks to Nicole Immorlica, Jack Snoeyink, and Chris Umans for putting together the website, and to all those who contributed their profiles. Questions and comments may be emailed to Shuchi Chawla.

by shuchic at January 06, 2020 11:23 PM UTC

Have you just finished your PhD and are you considering a post-doc position in the theory of informatics? Science is your passion and you would like to spend most of your post-doc researching whatever interests you? You are in the right place.

At MIM UW we offer you:

  1. Great FREEDOM OF CHOICE related to what to work on;
  2. Just A FEW or NO teaching duties;
  3. A lot of TIME FOR RESEARCH;
  4. Chance to cooperate with VERY EXPERIENCED and TALENTED scientists;
  5. FRIENDLY environment;
  6. Excellent SUPPORT from our administrative staff;

If you still hesitate, here are two interviews with former post-docs in ERC GRANT TUgbOAT.

Watch a video and find out more about benefits of working at MIM UW.

Watch an interview with Krzysztof Fleszar, post-doc at the Faculty of Mathematics, Informatics and Mechanics, University of Warsaw and find out how the participation in the project has changed his life.
For more information about ERC GRANT TUgbOAT visit our blog https://duch.mimuw.edu.pl/~tugboat/.
Adam Karczmarz told us about his experience as a post-doc at the Faculty of Mathematics, Informatics and Mechanics, University of Warsaw in ERC GRANT „Towards Unification of Algorithmic Tools”.
To find out more about TUgbOAT project visit our blog https://duch.mimuw.edu.pl/~tugboat/.

by Renata Czarniecka at January 06, 2020 07:39 PM UTC

September 1-4, 2020 Czech Republic http://users.math.cas.cz/talebanfard/workshop2020/ The workshop, which will take place in Tábor, Czech Republic, consists of excellent speakers giving enlightening tutorials on delectable aspects of complexity theory. The event is co-organized with Krajíček’s Fest celebrating the 60th birthday of Jan Krajíček.

by shacharlovett at January 06, 2020 01:42 PM UTC

For someone based in the UK who works in any kind of international market, it looks reasonable to consider how his business strategy should be affected by leaving the EU. In the case of CS theory research, this perhaps runs counter to an idealised view in which all research is global, and should not be affected by squalid political considerations. The interest inherent in any specific problem or result ought to be independent of where it was studied. On a related note, it may be felt that it’s the research topic that chooses the researcher, not the researcher who chooses the topic. On the other hand, even in CS theory, one’s political environment and associated social networks may have a stronger effect than we would like to acknowledge.

When I was a graduate student, Algorithms and Computational Complexity was relatively under-represented in the UK, compared with today. The UK theory community was dominated by so-called “Euro-theory”, which at the time did not seem to exhibit obvious points of contact with algorithms research. People like me had to look west for assurance that our research was of wider interest than what was apparent in our own backyard. To further justify that west-looking approach, it was clear that the USA was, in terms of research, the undisputed world leader. Then as now, it collected the lion’s share of Nobel prizes. It had industrial research labs producing leading CS theory research, such as IBM, AT&T, and NEC, while Europe had nothing similar. For me, this sowed the seeds of a defiantly Atlanticist attitude to CS theory research — appropriate for Brexit Britain, perhaps? — that the best way to pursue high-quality research was via links with colleagues in the USA.

Fast-forward to about five years ago, and my attitude had softened. The UK’s algorithms-and-complexity community steadily grew, and is much larger than it was in the early 90’s. Travel within Europe is relatively quick and cheap, with no visa issues. The European research community became a bit of a comfort-zone, while the USA’s research ethic seemed comparatively intense and high-pressure. The perennial question of “Where’s my next STOC/FOCS paper going to come from?” has always seemed less urgent in Europe.

The EU has attempted to unify Europe’s academic research activity, which is supported by diverse governments, and gives rise to diverse complaints among European colleagues. I am reminded of the “unhappy families” quote from Anna Karenina. In an attempt to give it a bit of unity, there’s some EU funding for research, concerning which we have this criticism from Andre Geim’s Nobel prize speech: “I can offer no nice words for the EU Framework programmes which, except for the European Research Council, can be praised only by Europhobes for discrediting the whole idea of an effectively working Europe.” For my part, I recently tried to get a grant from the European Research Council but they turned me down. If I were a rational agent, I should at this point be one of Geim’s Europhobes; of course in reality things are not quite so simple. But at this point I reckon the US research funding system looks like the least worst. A recent article at Athene Donald’s blog discusses the post-EU era and the idea of a DARPA-like research agency for the UK.

So, leaving the EU looks like an opportune point to dust off the above-mentioned “Atlanticist attitude”. These days China is also becoming more important. But I hope that Europe will not give up on us, but will compete strenuously with them for our attention.

by Paul Goldberg (noreply@blogger.com) at January 06, 2020 12:21 PM UTC

I recently needed to look at what NP problems were possibly intermediate (neither in P nor NP-complete). So I went to Wikipedia and found this.

They had many problems, though some I had never heard of. Those that I had never heard of—should they be on the list?

That is, are they natural? That is hard to define rigorously, but I will take you through my train of thought as I read the first few:

Factoring Integers. Yes, quite possibly intermediate: if it's NPC then PH collapses, and, at least so far, it does not seem to be in P. (The NPC → PH collapse result: We take

FACT = { (n,x) : n has a nontrivial factor ≤ x }

FACT is clearly in NP:
a complete factorization of n provides evidence that some nontrivial factor is ≤ x.

FACT is clearly in coNP:
a complete factorization of n (into certified primes) provides evidence that no nontrivial factor is ≤ x,

so if FACT is NP-complete then SAT is in coNP.)
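To make the two directions concrete, here is a toy verifier sketch (mine, not from the Wikipedia discussion). One certificate, a complete prime factorization, settles membership either way; primality is checked by naive trial division, so this only makes sense for small numbers.

# FACT = { (n,x) : n has a nontrivial factor <= x }: NP and coNP verifiers.
def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def consistent(n, factorization):
    """Is `factorization` really a complete prime factorization of n?"""
    prod = 1
    for p in factorization:
        if not is_prime(p):
            return False
        prod *= p
    return prod == n

def verify_in_fact(n, x, factorization):
    """Accept iff the certificate proves (n, x) IS in FACT."""
    if not consistent(n, factorization) or len(factorization) < 2:
        return False                       # n prime (or 1) has no nontrivial factor
    return min(factorization) <= x         # smallest prime factor is a nontrivial factor

def verify_not_in_fact(n, x, factorization):
    """Accept iff the certificate proves (n, x) is NOT in FACT."""
    if not consistent(n, factorization):
        return False
    if len(factorization) < 2:
        return True                        # no nontrivial factors exist at all
    return min(factorization) > x

print(verify_in_fact(91, 10, [7, 13]))     # True: 7 <= 10
print(verify_not_in_fact(91, 5, [7, 13]))  # True: 91 has no nontrivial factor <= 5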

Factoring is clearly an important and well studied problem. It even has its own Wikipedia entry!

Discrete Log. Similar to Factoring. And it is also an important and well studied problem. It even has its own Wikipedia Entry!

Isomorphism Problems They list Group and Ring isomorphism. They don't list Graph, which is odd. (ADDED LATER- my bad, they do mention Graph Isom in the section on Graph Algorithms) Anyway, if Graph Isom is NPC then PH collapses, and, at least so far, there is no algorithm for Graph Isom in P. (I do not think it is known whether Group Isom being NPC implies PH collapses, or whether Ring Isom being NPC implies PH collapses---if you know of such a proof leave a comment and a pointer to it.)

Graph Isomorphism is a well studied problem and seems important and natural (I don't know if Graph Isomorphism has any real applications the way that factoring and DL do).  It even has its own Wikipedia entry! Group and Ring Isomorphism also seem important and natural. And they have their own Wikipedia entry!

Numbers in Boxes Problem My first reaction: Gee, what's that? For Factoring, DL, and the Isomorphism problems they did not define the problem--they gave pointers to the Wikipedia entries on them. For this one there was no Wikipedia entry. There was one reference. I went to it. It was a blog entry of mine! Here it is: here, and to save you time I'll say what it is:

{ (1^n, 1^k) : you can partition 1,...,n into k boxes so that no box has x,y,z with x + y = z }

Is this problem important? Does it exist anywhere outside of my blog entry? Yes--- a special case of it was in Dr. Ecco's Cyperpuzzles by Dennis Shasha (note- Dennis was a classmate of mine in graduate school at Harvard). I think the case was to try to partition {1,...,100} as best you can. Actually I first saw the case of the problem in his book and then generalized it.
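Here is a small backtracking sketch (mine) for checking instances of the numbers-in-boxes problem; it treats x = y as allowed, which is the Schur-number convention, so adjust the check if the intended variant wants x and y distinct.

# Can {1,...,n} be split into k boxes with no box containing x, y, z with x + y = z?
def creates_violation(box, m):
    # would adding m to this box create some x + y = z inside it?
    if 2 * m in box:                 # m + m = z
        return True
    for x in box:
        if m - x in box:             # x + (m - x) = m
            return True
        if m + x in box:             # m + x = z
            return True
    return False

def can_partition(n, k):
    boxes = [set() for _ in range(k)]
    def place(m):
        if m > n:
            return True
        tried_empty = False
        for box in boxes:
            if not box:
                if tried_empty:      # empty boxes are interchangeable; try only one
                    break
                tried_empty = True
            if not creates_violation(box, m):
                box.add(m)
                if place(m + 1):
                    return True
                box.remove(m)
        return False
    return place(1)

print(can_partition(13, 3))   # True  -- the classical Schur number S(3) is 13
print(can_partition(14, 3))   # False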

The problem is sparse so if it was NP-complete then P = NP, very good evidence that it's not NPC. And it's been studied for thousands of years, with people looking for poly time algorithms (I think Pythagoras studied it) without success, so it's almost surely not in P. OR that last sentence was complete nonsense. Indeed, I don't think anyone has studied the problem computationally, or, for that matter, at all. So the evidence that it's not in P is... sparse.

But it's worse than that. One could devise MANY sparse problems that are, since sparse, likely NOT NPC, and hardly studied, so as-of-now, not in P. Should those count? Only if (a) more people study them so there are attempts one can point to to get them into P, and (b) the problem is natural (which is hard to define).

Note that I can vary the problem: x+2y=z (this relates to lower bounds on VDW numbers), or any other combination of x,y,z or more that I like.



This raises a question:

When is a problem worthy of being put on lists of problems?

Here are some possibly criteria. One can take ANDS and ORS of them.

1) The problem has a Wikipedia entry. This might fall victim to Goodhart's law: when a measure becomes a target, it ceases to be a good measure.  That is, I could make a Wikipedia entry on the Number-in-boxes problem and then say LOOK, it's on Wikipedia!

2) More than X people have worked on the problem for some value of X. But here is a reason this might not be a good criterion: look at the problem

{ α : α is a regular expression that allows numerical exponents (so a^1000 is fine, which makes regular expressions VERY succinct) such that L(α)=Σ* }

This problem looks natural, and was proven by Meyer and Stockmeyer to be EXPSPACE complete.
That is the only paper on this problem, yet the problem really does look natural, and the result is rightly celebrated as a natural problem that is provably not in P.

3) When people in the field look at the problem they say YEAH, that's a good problem.

4) The problem relates to other problems or other fields.

I doubt the Number-in-boxes problem satisfies any of these criteria. The variant with x+2y=z relates to Ramsey Theory. Great.

NOW, back to the list-- I won't go through any more on the list, but I note that for some of them the only reference seems to be a conversation on stack-exchange.  Some of those end up referring to real papers so are more likely natural, but some do not.

Having said that, is there any harm in the list having on it some problems that are not ... worthy? Is that even the right word to use?

Note that I don't have strong opinions on any of these matters, I am just wondering what criteria Wikipedia, and other sources, uses, when they have lists of problems.




by GASARCH (noreply@blogger.com) at January 06, 2020 04:54 AM UTC