Theory of Computing Blog Aggregator

The Department of Computer Science at the Rochester Institute of Technology invites applications for full-time tenure-track assistant professor positions starting in Fall 2021. We are looking to hire in all areas of computer science that strengthen our department.

Website: https://sjobs.brassring.com/TGnewUI/Search/Home/Home?partnerid=25483&siteid=5291#jobDetails=1534060_5291
Email: csfacsearch@cs.rit.edu

by shacharlovett at December 02, 2020 01:44 AM UTC

After discussing postdoc opportunities with me and opportunities as part of the Simons Collaboration on the Theory of Algorithmic Fairness, let me conclude with postdoc opportunities in the Stanford theory group:

The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below. Applications will be accepted until the positions are filled, but review of applicants will begin after Dec 15.

Website: https://academicjobsonline.org/ajo/jobs/17685
Email: theory.stanford@gmail.com

by Omer Reingold at December 01, 2020 06:59 PM UTC

The theory group at Stanford invites applications for the Motwani postdoctoral fellowship in theoretical computer science. Information and application instructions below. Applications will be accepted until the positions are filled, but review of applicants will begin after Dec 15.

Website: https://academicjobsonline.org/ajo/jobs/17685
Email: theory.stanford@gmail.com

by shacharlovett at December 01, 2020 12:48 AM UTC

Authors: Daniel Neuen
Download: PDF
Abstract: We give an isomorphism test that runs in time $n^{\operatorname{polylog}(h)}$ on all $n$-vertex graphs excluding some $h$-vertex graph as a topological subgraph. Previous results state that isomorphism for such graphs can be tested in time $n^{\operatorname{polylog}(n)}$ (Babai, STOC 2016) and $n^{f(h)}$ for some function $f$ (Grohe and Marx, SIAM J. Comp., 2015).

Our result also unifies and extends previous isomorphism tests for graphs of maximum degree $d$ running in time $n^{\operatorname{polylog}(d)}$ (FOCS 2018) and for graphs of Hadwiger number $h$ running in time $n^{\operatorname{polylog}(h)}$ (FOCS 2020).

at December 01, 2020 10:40 PM UTC

Authors: Moses Charikar, Shivam Garg, Deborah M. Gordon, Kirankumar Shiragur
Download: PDF
Abstract: We introduce a model for ant trail formation, building upon previous work on biologically feasible local algorithms that plausibly describe how ants maintain trail networks. The model is a variant of a reinforced random walk on a directed graph, where ants lay pheromone on edges as they traverse them and the next edge to traverse is chosen based on the pheromone level; this pheromone decays with time. There is a bidirectional flow of ants: the forward flow proceeds along forward edges from source (e.g. the nest) to sink (e.g. a food source), and the backward flow in the opposite direction. Some fraction of ants are lost as they pass through each node (modeling the loss of ants due to exploration). We initiate a theoretical study of this model.

We first consider the linear decision rule, where the flow divides itself among the next set of edges in proportion to their pheromone level. Here, we show that the process converges to the path with minimum leakage when the forward and backward flows do not change over time. When the forward and backward flows increase over time (caused by positive reinforcement from the discovery of a food source, for example), we show that the process converges to the shortest path. These results are for graphs consisting of two parallel paths (a case that has been investigated before in experiments). Through simulations, we show that these results hold for more general graphs drawn from various random graph models. Further, we consider a general family of decision rules, and show that there is no advantage of using a non-linear rule from this family, if the goal is to find the shortest or the minimum leakage path. We also show that bidirectional flow is necessary for convergence to such paths. Our results provide a plausible explanation for field observations, and open up new avenues for further theoretical and experimental investigation.
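
To make the linear decision rule concrete, here is a toy, heavily aggregated simulation (not from the paper) of the two-parallel-path setting: each path is collapsed to a single pheromone value, a fixed fraction of ants is lost at every internal node, and the bidirectional flow splits across paths in proportion to pheromone. The function name and all parameter values are illustrative.

    def simulate_two_paths(nodes=(2, 5), loss=0.1, decay=0.2,
                           forward_flow=1.0, backward_flow=1.0, steps=200):
        # pheromone[i] summarizes the pheromone on path i; the two paths run in
        # parallel from source to sink and have nodes[i] internal nodes each.
        pheromone = [1.0, 1.0]
        for _ in range(steps):
            total = sum(pheromone)
            new = []
            for p, n in zip(pheromone, nodes):
                share = p / total                      # linear decision rule
                survive = (1.0 - loss) ** n            # leakage along the path
                through = (forward_flow + backward_flow) * share * survive
                new.append((1.0 - decay) * p + through)
            pheromone = new
        return [p / sum(pheromone) for p in pheromone]

    # With equal loss per node, the shorter path has minimum leakage and ends up
    # with (nearly) all of the pheromone.
    print(simulate_two_paths())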

at December 01, 2020 10:40 PM UTC

Authors: Julio Araujo, Alexandre Cezar, Carlos V. G. C. Lima, Vinicius F. dos Santos, Ana Silva
Download: PDF
Abstract: An orientation $D$ of a graph $G=(V,E)$ is a digraph obtained from $G$ by replacing each edge by exactly one of the two possible arcs with the same end vertices. For each $v \in V(G)$, the indegree of $v$ in $D$, denoted by $d^-_D(v)$, is the number of arcs with head $v$ in $D$. An orientation $D$ of $G$ is proper if $d^-_D(u)\neq d^-_D(v)$, for all $uv\in E(G)$. An orientation with maximum indegree at most $k$ is called a $k$-orientation. The proper orientation number of $G$, denoted by $\overrightarrow{\chi}(G)$, is the minimum integer $k$ such that $G$ admits a proper $k$-orientation. We prove that determining whether $\overrightarrow{\chi}(G) \leq k$ is NP-complete for chordal graphs of bounded diameter, but can be solved in linear time in the subclass of quasi-threshold graphs. When parameterizing by $k$, we show that this problem is FPT for chordal graphs and argue that no polynomial kernel exists unless $NP \subseteq coNP/poly$. We present a better kernel for the subclass of split graphs and a linear kernel for the class of cobipartite graphs.
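
As a quick illustration of the definitions above, the following brute-force sketch (illustrative only; it is exponential in the number of edges) computes $\overrightarrow{\chi}(G)$ for a tiny graph by trying every orientation. The function name is ours, not the authors'.

    from itertools import product

    def proper_orientation_number(vertices, edges):
        # Try all 2^|E| orientations; keep the smallest maximum indegree among
        # orientations in which adjacent vertices receive distinct indegrees.
        best = None
        for choice in product((0, 1), repeat=len(edges)):
            indeg = {v: 0 for v in vertices}
            for (u, v), c in zip(edges, choice):
                indeg[v if c == 0 else u] += 1
            if all(indeg[u] != indeg[v] for u, v in edges):   # proper orientation
                k = max(indeg.values())
                best = k if best is None else min(best, k)
        return best

    # The 4-cycle: every proper orientation needs some vertex of indegree 2.
    print(proper_orientation_number(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2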

Concerning bounds, we prove tight upper bounds for subclasses of block graphs. We also present new families of trees having proper orientation number at most 2 and at most 3. In fact, we prove a general bound stating that any graph $G$ having no adjacent vertices of degree at least $c+1$ has proper orientation number at most $c$. This implies new classes of (outer)planar graphs with bounded proper orientation number. We also prove that maximal outerplanar graphs $G$ whose weak dual is a path satisfy $\overrightarrow{\chi}(G)\leq 13$. Finally, we present simple bounds for the classes of chordal claw-free graphs and cographs.

at December 01, 2020 10:39 PM UTC

Authors: Konstantin Makarychev, Miklos Z. Racz, Cyrus Rashtchian, Sergey Yekhanin
Download: PDF
Abstract: Large pools of synthetic DNA molecules have been recently used to reliably store significant volumes of digital data. While DNA as a storage medium has enormous potential because of its high storage density, its practical use is currently severely limited because of the high cost and low throughput of available DNA synthesis technologies.

We study the role of batch optimization in reducing the cost of large scale DNA synthesis, which translates to the following algorithmic task. Given a large pool $\mathcal{S}$ of random quaternary strings of fixed length, partition $\mathcal{S}$ into batches in a way that minimizes the sum of the lengths of the shortest common supersequences across batches.

We introduce two ideas for batch optimization that both improve (in different ways) upon a naive baseline: (1) using both $(ACGT)^{*}$ and its reverse $(TGCA)^{*}$ as reference strands, and batching appropriately, and (2) batching via the quantiles of an appropriate ordering of the strands. We also prove asymptotically matching lower bounds on the cost of DNA synthesis, showing that one cannot improve upon these two ideas. Our results uncover a surprising separation between two cases that naturally arise in the context of DNA data storage: the asymptotic cost savings of batch optimization are significantly greater in the case where strings in $\mathcal{S}$ do not contain repeats of the same character (homopolymers), as compared to the case where strings in $\mathcal{S}$ are unconstrained.
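
For intuition about the cost model, here is a small sketch (not the authors' code) of the reference-strand view of synthesis: under one simple deposition strategy the machine appends nucleotides in a fixed periodic order, so the cost of a batch is the number of synthesis cycles needed to embed every strand in the batch as a subsequence of the periodic reference, e.g. $(ACGT)^{*}$ or its reverse $(TGCA)^{*}$. Function names and the example batch are illustrative.

    def synthesis_cost(batch, period="ACGT"):
        # Number of reference characters consumed so that every strand in the
        # batch embeds as a subsequence of the periodic reference strand.
        def embed_len(strand):
            t = 0  # reference characters consumed so far
            for ch in strand:
                while period[t % len(period)] != ch:   # wait for the next cycle of ch
                    t += 1
                t += 1
            return t
        return max(embed_len(s) for s in batch)

    batch = ["ACGTACGT", "AACCGGTT", "TTTT"]
    print(synthesis_cost(batch))                  # cost with reference (ACGT)*
    print(synthesis_cost(batch, period="TGCA"))   # cost with the reversed reference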

at December 01, 2020 10:44 PM UTC

Authors: Nello Blaser, Erlend Raa Vågset
Download: PDF
Abstract: Finding a cycle of lowest weight that represents a homology class in a simplicial complex is known as homology localization (HL). Here we address this NP-complete problem using parameterized complexity theory. We show that it is W[1]-hard to approximate the HL problem when it is parameterized by solution size. We have also designed and implemented two treewidth-based algorithms that solve the HL problem in FPT time. Both algorithms are ETH-tight, but our results show that one outperforms the other in practice.

at December 01, 2020 12:00 AM UTC

Authors: Neeraj Kumar, Daniel Lokshtanov, Saket Saurabh, Subhash Suri
Download: PDF
Abstract: Given two points $s$ and $t$ in the plane and a set of obstacles defined by closed curves, what is the minimum number of obstacles touched by a path connecting $s$ and $t$? This is a fundamental and well-studied problem arising naturally in computational geometry, graph theory (under the names Min-Color Path and Minimum Label Path), wireless sensor networks (Barrier Resilience) and motion planning (Minimum Constraint Removal). It remains NP-hard even for very simple-shaped obstacles such as unit-length line segments. In this paper we give the first constant factor approximation algorithm for this problem, resolving an open problem of [Chan and Kirkpatrick, TCS, 2014] and [Bandyapadhyay et al., CGTA, 2020]. We also obtain a constant factor approximation for the Minimum Color Prize Collecting Steiner Forest where the goal is to connect multiple request pairs $(s_1, t_1), \ldots, (s_k, t_k)$ while minimizing the number of obstacles touched by any $(s_i, t_i)$ path plus a fixed cost of $w_i$ for each pair $(s_i, t_i)$ left disconnected. This generalizes the classic Steiner Forest and Prize-Collecting Steiner Forest problems on planar graphs, for which intricate PTASes are known. In contrast, no PTAS is possible for Min-Color Path even on planar graphs since the problem is known to be APX-hard [Eiben and Kanj, TALG, 2020]. Additionally, we show that generalizations of the problem to disconnected obstacles

at December 01, 2020 10:50 PM UTC

Authors: Joshua Sobel, Noah Bertram, Chen Ding, Fatemeh Nargesian, Daniel Gildea
Download: PDF
Abstract: Analyzing patterns in a sequence of events has applications in text analysis, computer programming, and genomics research. In this paper, we consider the all-window-length analysis model, which analyzes a sequence of events with respect to windows of all lengths. We study the exact co-occurrence counting problem for the all-window-length analysis model. Our first algorithm is an offline algorithm that counts all-window-length co-occurrences by performing multiple passes over a sequence and computing single-window-length co-occurrences. This algorithm has time complexity $O(n)$ for each window length, and thus total time complexity $O(n^2)$ and space complexity $O(|I|)$, for a sequence of size $n$ and an itemset of size $|I|$. We propose AWLCO, an online algorithm that computes all-window-length co-occurrences in a single pass with expected time complexity $O(n)$ and space complexity $O( \sqrt{ n|I| })$. Following this, we generalize our use case to patterns, proposing an algorithm that computes all-window-length co-occurrences with expected time complexity $O(n|I|)$ and space complexity $O( \sqrt{n|I|} + e_{max}|I|)$, where $e_{max}$ is the length of the largest pattern.
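
For context, the single-window-length subroutine behind the naive baseline can be a simple sliding-window count; below is a sketch (illustrative, not AWLCO itself), together with the $O(n^2)$ all-window-length profile obtained by repeating it for every window length. Names and the example sequence are ours.

    from collections import Counter

    def cooccurrence_count(seq, items, w):
        # Number of length-w windows of seq that contain every item in `items`.
        items = set(items)
        counts = Counter()
        missing = len(items)          # distinct items of interest absent from the window
        total = 0
        for i, x in enumerate(seq):
            if x in items:
                counts[x] += 1
                if counts[x] == 1:
                    missing -= 1
            if i >= w:                # seq[i - w] slides out of the window
                y = seq[i - w]
                if y in items:
                    counts[y] -= 1
                    if counts[y] == 0:
                        missing += 1
            if i >= w - 1 and missing == 0:
                total += 1
        return total

    # Naive all-window-length profile: repeat the O(n) pass for every window length.
    seq = list("abcabxab")
    print([cooccurrence_count(seq, {"a", "b"}, w) for w in range(1, len(seq) + 1)])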

at December 01, 2020 10:46 PM UTC

Authors: Noah Brüstle, Tal Elbaz, Hamed Hatami, Onur Kocer, Bingchan Ma
Download: PDF
Abstract: Let $H$ be a fixed undirected graph on $k$ vertices. The $H$-hitting set problem asks to delete a minimum number of vertices from a given graph $G$ so that the resulting graph has no copies of $H$ as a subgraph. This problem is a special case of the hypergraph vertex cover problem on $k$-uniform hypergraphs, and thus admits an efficient $k$-factor approximation algorithm. The purpose of this article is to investigate for which graphs $H$ this trivial approximation factor $k$ can be improved.
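
The trivial $k$-factor approximation mentioned above can be made explicit: repeatedly find any copy of $H$ and delete all $k$ of its vertices. The copies found this way are vertex-disjoint, so any optimal solution must hit each of them, which gives the factor $k$. A brute-force sketch (illustrative only, with an $O(n^k)$ search for a copy of $H$; names are ours):

    from itertools import combinations, permutations

    def find_copy(adj, H_edges, k):
        # Brute-force search for a (not necessarily induced) copy of H.
        for subset in combinations(sorted(adj), k):
            for perm in permutations(subset):
                if all(perm[b] in adj[perm[a]] for a, b in H_edges):
                    return set(subset)
        return None

    def greedy_h_hitting_set(adj, H_edges, k):
        # Trivial k-approximation: the copies destroyed are vertex-disjoint,
        # so |deleted| <= k * OPT.
        deleted = set()
        live = {v: set(adj[v]) for v in adj}
        while True:
            copy = find_copy(live, H_edges, k)
            if copy is None:
                return deleted
            deleted |= copy
            live = {v: live[v] - copy for v in live if v not in copy}

    # Example: H is a triangle (k = 3), with H's edges given on labels 0, 1, 2.
    G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(greedy_h_hitting_set(G, [(0, 1), (1, 2), (0, 2)], 3))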

at December 01, 2020 10:45 PM UTC

Authors: Amit Levi, Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, Nithin Varma
Download: PDF
Abstract: We investigate sublinear-time algorithms that take partially erased graphs represented by adjacency lists as input. Our algorithms make degree and neighbor queries to the input graph and work with a specified fraction of adversarial erasures in adjacency entries. We focus on two computational tasks: testing if a graph is connected or $\varepsilon$-far from connected and estimating the average degree. For testing connectedness, we discover a threshold phenomenon: when the fraction of erasures is less than $\varepsilon$, this property can be tested efficiently (in time independent of the size of the graph); when the fraction of erasures is at least $\varepsilon,$ then a number of queries linear in the size of the graph representation is required. Our erasure-resilient algorithm (for the special case with no erasures) is an improvement over the previously known algorithm for connectedness in the standard property testing model and has optimal dependence on the proximity parameter $\varepsilon$. For estimating the average degree, our results provide an "interpolation" between the query complexity for this computational task in the model with no erasures in two different settings: with only degree queries, investigated by Feige (SIAM J. Comput. `06), and with degree queries and neighbor queries, investigated by Goldreich and Ron (Random Struct. Algorithms `08) and Eden et al. (ICALP `17). We conclude with a discussion of our model and open questions raised by our work.

at December 01, 2020 10:42 PM UTC

Authors: Sergey Bereg, Mohammadreza Haghpanah
Download: PDF
Abstract: A given order type in the plane can be represented by a point set. However, it might be difficult to recognize the orientations of some point triples. Recently, Aichholzer et al. introduced exit graphs for visualizing order types in the plane. We present a new class of geometric graphs, called OT-graphs, using abstract order types and their axioms described in the well-known book by Knuth. Each OT-graph corresponds to a unique order type. We develop efficient algorithms for recognizing OT-graphs and computing a minimal OT-graph for a given order type in the plane. We provide experimental results on all order types of up to nine points in the plane, including a comparative analysis of exit graphs and OT-graphs.

at December 01, 2020 10:47 PM UTC

Authors: Yuxuan Yu, Xiaodong Wei, Angran Li, Jialei Ginny Liu, Jeffrey He, Yongjie Jessica Zhang
Download: PDF
Abstract: In this paper, we present two software packages, HexGen and Hex2Spline, that seamlessly integrate geometry design with isogeometric analysis (IGA) in LS-DYNA. Given a boundary representation of a solid model, HexGen creates a hexahedral mesh by utilizing a semi-automatic polycube-based mesh generation method. Hex2Spline takes the output hexahedral mesh from HexGen as the input control mesh and constructs volumetric truncated hierarchical splines. Through Bézier extraction, Hex2Spline transfers spline information to LS-DYNA and performs IGA therein. We explain the underlying algorithms in each software package and use a rod model to explain how to run the software. We also apply our software to several other complex models to test its robustness. Our goal is to provide a robust volumetric modeling tool and thus expand the boundary of IGA to volume-based industrial applications.

at December 01, 2020 10:49 PM UTC

Authors: Palash Dey, Arnab Maiti, Amatya Sharma
Download: PDF
Abstract: In liquid democracy, each voter either votes herself or delegates her vote to some other voter. This gives rise to what is called a delegation graph. To decide which voters eventually vote, along with the set of voters whose votes they carry, we need to resolve the cycles in the delegation graph. This gives rise to the Resolve Delegation problem, where we need to find an acyclic sub-graph of the delegation graph such that the number of votes any voter casts is bounded above by some integer $\lambda$. Putting a cap on the number of votes a voter casts enables the system designer to restrict the power of any individual voter. The Resolve Delegation problem is already known to be NP-hard. In this paper we study the parameterized complexity of this problem. We show that Resolve Delegation is para-NP-hard with respect to the parameters $\lambda$, the number of sink nodes, and the maximum degree of the delegation graph. We also show that Resolve Delegation is W[1]-hard even with respect to the treewidth of the delegation graph. We complement our negative results by exhibiting FPT algorithms with respect to some other parameters. We finally show that a related problem, which we call Resolve Fractional Delegation, is polynomial time solvable.

at December 01, 2020 12:00 AM UTC

Authors: Spyros Angelopoulos, Malachi Voss
Download: PDF
Abstract: We study the setting in which a mobile agent must locate a hidden target in a bounded or unbounded environment, with no information about the hider's position. In particular, we consider online search, in which the performance of the search strategy is evaluated by its worst case competitive ratio. We introduce a multi-criteria search problem in which the searcher has a budget on its allotted search time, and the objective is to design strategies that are competitively efficient, respect the budget, and maximize the total searched ground. We give analytically optimal strategies for the line and the star environments, and efficient heuristics for general networks.

at December 01, 2020 10:45 PM UTC

Authors: Huu Phuoc Le, Mohab Safey El Din
Download: PDF
Abstract: We design a new algorithm for solving parametric systems having finitely many complex solutions for generic values of the parameters. More precisely, let $f = (f_1, \ldots, f_m)\subset \mathbb{Q}[y][x]$ with $y = (y_1, \ldots, y_t)$ and $x = (x_1, \ldots, x_n)$, $V\subset \mathbb{C}^{t+n}$ be the algebraic set defined by $f$ and $\pi$ be the projection $(y, x) \to y$. Under the assumptions that $f$ admits finitely many complex roots for generic values of $y$ and that the ideal generated by $f$ is radical, we solve the following problem. On input $f$, we compute semi-algebraic formulas defining semi-algebraic subsets $S_1, \ldots, S_l$ of the $y$-space such that $\cup_{i=1}^l S_i$ is dense in $\mathbb{R}^t$ and the number of real points in $V\cap \pi^{-1}(\eta)$ is invariant when $\eta$ varies over each $S_i$.

This algorithm exploits properties of some well chosen monomial bases in the algebra $\mathbb{Q}(y)[x]/I$, where $I$ is the ideal generated by $f$ in $\mathbb{Q}(y)[x]$, and the specialization property of the so-called Hermite matrices. This allows us to obtain compact representations of the sets $S_i$ by means of semi-algebraic formulas encoding the signature of a symmetric matrix. When $f$ satisfies extra genericity assumptions, we derive complexity bounds on the number of arithmetic operations in $\mathbb{Q}$ and the degree of the output polynomials. Let $d$ be the maximal degree of the $f_i$'s and $D = n(d-1)d^n$; we prove that, on a generic $f=(f_1,\ldots,f_n)$, one can compute those semi-algebraic formulas with $\widetilde{O}\left( \binom{t+D}{t}\, 2^{3t}\, n^{2t+1}\, d^{3nt+2(n+t)+1}\right)$ operations in $\mathbb{Q}$ and that the polynomials involved have degree bounded by $D$.

We report on practical experiments which illustrate the efficiency of our algorithm on generic systems and systems from applications. It allows us to solve problems which are out of reach of the state-of-the-art.

at December 01, 2020 10:48 PM UTC

Authors: Benjamin Laufer
Download: PDF
Abstract: In the criminal legal context, risk assessment algorithms are touted as data-driven, well-tested tools. Studies known as validation tests are typically cited by practitioners to show that a particular risk assessment algorithm has predictive accuracy, establishes legitimate differences between risk groups, and maintains some measure of group fairness in treatment. To establish these important goals, most tests use a one-shot, single-point measurement. Using a Pólya urn model, we explore the implications of feedback effects in sequential scoring-decision processes. We show through simulation that risk can propagate over sequential decisions in ways that are not captured by one-shot tests. For example, even a very small or undetectable level of bias in risk allocation can amplify over sequential risk-based decisions, leading to observable group differences after a number of decision iterations. Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity. We conclude from this study that these tools do not properly account for compounding effects, and require new approaches to development and auditing.
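
To illustrate the kind of feedback effect at issue, here is a toy Pólya-urn-style simulation (not the paper's model or code; all names and parameters are made up): a small per-decision bias in favor of one group compounds because each decision reinforces the scores that drive the next decision, so the average gap between groups grows with the number of decision iterations.

    import random

    def biased_urn(rounds, bias=0.02, seed=0):
        # Group shares start equal; each round one group is selected with probability
        # proportional to its current share, nudged by a small bias, and the selected
        # group is reinforced.
        rng = random.Random(seed)
        a = b = 1.0
        for _ in range(rounds):
            if rng.random() < min(1.0, a / (a + b) + bias):
                a += 1.0
            else:
                b += 1.0
        return a / (a + b)

    def average_share(rounds, trials=200, bias=0.02):
        # Average over many runs, since a single urn run is highly path-dependent.
        return sum(biased_urn(rounds, bias, seed=s) for s in range(trials)) / trials

    # The average advantage of the favored group grows with the number of decisions,
    # even though the per-decision bias is tiny.
    for rounds in (100, 1000, 10000):
        print(rounds, round(average_share(rounds), 3))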

at December 01, 2020 10:45 PM UTC

As promised, more postdoctoral positions now available for 2021

The Simons Collaboration on the Theory of Algorithmic Fairness seeks highly qualified candidates (within five years of the award of their PhD) for a postdoctoral research position. Appointments will begin in Summer or Fall 2021.

This multi-year program will host several postdoctoral researchers working on modeling and theory aimed at understanding (a) the sources of discriminatory behavior of algorithms, and (b) how best to mitigate such impacts.

Descriptions of the scientific agendas of this research collaboration can be found at https://toc4fairnesses.org/.

Fellows will be able to collaborate broadly, including with researchers at partner institutions: Stanford University, Toyota Technological Institute at Chicago, Massachusetts Institute of Technology, Harvard University, UC Berkeley, Cornell University, Hebrew University of Jerusalem, Weizmann Institute of Science, University of Toronto, University of Washington, and University of Pennsylvania.

 The anticipated term for a fellowship is one or two years – to be decided at the time of appointment, with the possibility of extension based on mutual agreement. In addition to competitive salary and benefits, the fellowship also includes funding for independent travel to workshops, conferences and other universities and research labs.

To apply, please email a CV and research statement to jamiemmt@cs.washington.edu, and have two reference letters sent to the same address. Applications and reference letters are due Dec. 31, 2020, though we will consider applications that arrive after that date. Decisions will be made in February.

Jamie Morgenstern, chair of the postdoctoral search committee
Simons Collaboration on the Foundations of Fairness in Machine Learning

by Omer Reingold at November 30, 2020 09:38 PM UTC

The Computer Science & Engineering department at UC Santa Cruz is recruiting for six junior faculty positions, two in theoretical computer science, two in applied machine learning, and two in experimental systems.
(The TCS position URL is given below. For the others, look at the open recruitments in Computer Science and Engineering.)

Website: https://recruit.ucsc.edu/JPF00962
Email: C. Seshadhri

by shacharlovett at November 30, 2020 09:15 PM UTC


It is known that the size of monotone arithmetic $(+,\ast)$ circuits can be exponentially decreased by allowing just one division "at the very end," at the output gate. A natural question is: can the size of $(+,\ast)$ circuits be substantially reduced if we allow divisions "at the very beginning," that is, if besides nonnegative real constants and variables $x_1,\ldots,x_n$, the circuits can also use their reciprocals $1/x_1,\ldots,1/x_n$ as inputs. We answer this question in the negative: the gain in circuit size is then always at most quadratic. Over tropical $(\min,+)$ and $(\max,+)$ semirings, division turns into subtraction; so, reciprocal inputs are then $-x_1,\ldots,-x_n$. We give the same negative answer also for tropical circuits. The question of whether reciprocal inputs can substantially speed up tropical $(\min,+,\max)$ circuits, using both $\min$ and $\max$ gates, remains open.

at November 30, 2020 05:48 PM UTC

The Department of Computer Science at Aalto University invites applications for tenure-track positions at the Assistant Professor level. We are a diverse community welcoming applications in ALL AREAS of Computer Science.

Our CS Theory group (https://research.cs.aalto.fi/theory/) has received, for example, best paper awards at FOCS 2019 and ICALP 2017, as well as ERC Starting Grants in 2014 and 2017.

Website: https://www.aalto.fi/en/open-positions/tenure-track-assistant-professors-in-computer-science
Email: laura.kuusisto-noponen@aalto.fi

by shacharlovett at November 30, 2020 03:42 PM UTC

Applications are invited for research associate positions in Algorithms and Complexity, funded by the European Research Council (ERC) starting grant “New Approaches to Counting and Sampling” (NACS), led by Dr. Heng Guo in the School of Informatics, University of Edinburgh.

Website: https://elxw.fa.em3.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1001/job/123/
Email: hguo@inf.ed.ac.uk

by shacharlovett at November 30, 2020 10:30 AM UTC

We are looking for excellent postgraduate researchers for a year-long postdoc position in the Computer Science Department at the Technion. The main research theme is the intersection of theoretical computer science with learning, information, and data.

Website: https://nailon.net.technion.ac.il/openings/
Email: mayasidis@cs.technion.ac.il

by shacharlovett at November 30, 2020 07:43 AM UTC


James "The Amazing" Randi died on October 20, 2020, at the age of 92. He is survived by his husband Jose Alvarez. His Wikipedia page is here.

 
A few Randi Points:


0) Wikipedia lists his careers as Magician, Author, Skeptic. I didn't know that skeptic was a career.

1) Randi debunked many paranormal claims, though he did not like the term debunker. He preferred investigator.

2) Martin Gardner and James Randi co-founded the Committee for the Scientific Investigation of Claims of the Paranormal, which publishes The Skeptical Inquirer: see here.


3) The internet is both a place where unchecked claims of paranormal activity (and more dangerous lies) can grow faster than in an earlier time, and a place where magazines like The Skeptical Inquirer, and fact-checking websites, can help check those unchecked claims. Which is winning? I leave that as an exercise for the reader.

4) I suspect most (all?) people reading this blog do not believe in astrology, UFOs, ESP, or other crank theories. Hence I was surprised to read that Alan Turing thought the evidence for ESP was overwhelming. This was mentioned in passing in his paper on The Turing Test (called there The Imitation Game) as something the Turing Test will have to account for. I've tried to find out why he believed this, without success. Some websites mentioned that belief in the paranormal was more... normal in those days. One suggested that after the counter-intuitive theories of quantum mechanics and relativity were out there, other counter-intuitive theories took hold, like ESP. Even so, what was the evidence he was referring to?

5) Claims that I was abducted by a UFO or I saw a UFO have decreased since people now have cell phones and so ALWAYS have a way to take pictures. Also, rumors like (I had heard this one)

There is an alternative ending to the movie BLAH which made its way to a few DVDs by mistake.

are no longer made, since IF true you could EASILY produce evidence of such (post it to YouTube or elsewhere).

6) The term skeptic just means someone who doubts something, and is not necessarily a positive thing.

I am a skeptic when it comes to Global Warming

being one example.

Randi largely debunked things that were obviously false and not political. (That the very existence of Global Warming is political is appalling. At some future point the question of whether or not we ever got to the moon will be political: something done by big government that worked is impossible, hence it did not happen. See Scott's blog post on disbelief that we ever went to the moon here. And people like Randi will be needed to debunk the notion that the moon landing was faked.)

7) Back to Turing: there is a large difference between believing in ESP and believing in astrology.

For ESP Turing mentioned overwhelming evidence. While he was WRONG, he did see the need to HAVE evidence. And note that ESP CAN be tested and found to NOT be true. Also note that it is plausible (though I really doubt it) that some humans somehow have some level of ESP. Astrology has NO redeeming value or hope whatsoever. (I am reminded that in Martin Gardner's book Fads and Fallacies in the Name of Science he noted that most people would say things like "YES, I liked your debunking of A, but you are wrong about B -- B is for real!")

UFOs: I do not believe that aliens have come here and abducted people or left crop circles or anything of the sort. The intelligent question of whether there is intelligent life in the universe is quite another matter.

8) When I saw magicians as a kid (1960s) I knew that it was all tricks, though very skillful tricks which were impressive. Sometimes they would indicate that it was real magic but I did not know what they meant. Since then I have learned that in an earlier time it was common for magicians to claim they used real magic. I still don't quite know what that means, which is just as well since it does not exist.

9) Randi has been sued by people whose tricks he has debunked. Randi seems to have always won. I say seems to since legal cases are not as clear cut as mathematics. I also looked up Uri Geller. He has sued A LOT of people, and not just people who deny his claims. Things like using his likeness without permission (he may have a point there). Very hard to tell how he is doing on balance.

10) According to Wikipedia, Randi dropped out of high school. I assume he learned A LOT on his own.

(Trivia-- who was the last president who did not have a college degree? I will answer at the end.)

11) This seems like a paradox... or something (quoted from Wikipedia):

BEGIN

Randi has been accused of actually using psychic powers to perform acts such as spoon bending. According to James Alcock, at a meeting where Randi was duplicating the performances of Uri Geller, a professor from the University at Buffalo shouted out that Randi was a fraud. Randi said: "Yes, indeed, I'm a trickster, I'm a cheat, I'm a charlatan, that's what I do for a living. Everything I've done here was by trickery." The professor shouted back:

That's not what I mean. You're a fraud because you're pretending to do these things through trickery, but you're actually using psychic powers and misleading us by not admitting it.

A similar event involved Senator Claiborne Pell, a confirmed believer in psychic phenomena.  When Randi personally demonstrated to Pell that he could reveal—by simple trickery—a concealed drawing that had been secretly made by the senator, Pell refused to believe that it was a trick, saying: "I think Randi may be a psychic and doesn't realize it." Randi consistently denied having any paranormal powers or abilities.

END

Reminds me of this blog entry where I speculate about someone who codes up a great new classical factoring algorithm and claims he has a quantum computer, or someone who has a working quantum computer and claims it's a great new classical factoring algorithm.


12) The last president who did not have a college degree: Harry Truman.



by gasarch (noreply@blogger.com) at November 30, 2020 04:36 AM UTC

I wanted to give a pointer to a new preprint on bioRxiv on developing diagnostic assays for viruses, by (first author) Hayden Metsky (and others!) out of the Sabeti Lab at the Broad Institute (that I've been a bit involved with).  Hayden, who somehow is both a computer science PhD and an expert in virology, has devised a novel software pipeline for developing diagnostics that are designed from the start to deal with genomic diversity (a virus evolves to have many somewhat different variants) and the challenge of false matches (you don't want to get false positives from matching some other different virus) -- also known as sensitivity and specificity.  Algorithmically, he uses machine learning to determine scores for possible tests for matches to small pieces of the genome, or probes, and utilizes locality-sensitive hashing, combinatorial optimization algorithms for submodular maximization, and sharding pattern matching across tries as substages in the overall design.  

I am always excited to see algorithmic ideas being used to solve real-world problems, and this is a deep and difficult example of the "algorithmic lens"  at work.  I am optimistically hopeful that this type of technology will help drive the development of viral diagnostic and monitoring methods forward.     

by Michael Mitzenmacher (noreply@blogger.com) at November 30, 2020 03:25 AM UTC

We consider the problem of counting the number of copies of a fixed graph $H$ within an input graph $G$. This is one of the most well-studied algorithmic graph problems, with many theoretical and practical applications. We focus on solving this problem when the input $G$ has bounded degeneracy. This is a rich family of graphs, containing all graphs without a fixed minor (e.g. planar graphs), as well as graphs generated by various random processes (e.g. preferential attachment graphs). We say that $H$ is easy if there is a linear-time algorithm for counting the number of copies of $H$ in an input $G$ of bounded degeneracy. A seminal result of Chiba and Nishizeki from '85 states that every $H$ on at most 4 vertices is easy. Bera, Pashanasangi, and Seshadhri recently extended this to all $H$ on 5 vertices, and further proved that for every $k > 5$ there is a $k$-vertex $H$ which is not easy. They left open the natural problem of characterizing all easy graphs $H$. Bressan has recently introduced a framework for counting subgraphs in degenerate graphs, from which one can extract a sufficient condition for a graph $H$ to be easy. Here we show that this sufficient condition is also necessary, thus fully answering the Bera--Pashanasangi--Seshadhri problem. We further resolve two closely related problems; namely characterizing the graphs that are easy with respect to counting induced copies, and with respect to counting homomorphisms. Our proofs rely on several novel approaches for proving hardness results in the context of subgraph-counting.

at November 29, 2020 04:43 PM UTC

In this post, we explore the Lock-Commit paradigm for consensus protocols. This approach is probably the most celebrated and widely used technique for reaching consensus in a safe manner. This approach is one of the key techniques in DLS88 and Lamport’s Paxos. We exemplify this paradigm by showing a single...

at November 29, 2020 01:02 PM UTC

We acquired this small blue glass bottle nearly 15 years ago, as a prop for a Harry Potter themed birthday party. For much of the time since then it’s been decorating our bathroom window with several other blue bottles. Like most bottles these days, it’s molded rather than blown, but a little roughly so the welding seams are more visible than they usually are on wine or liquor bottles.

[Photos: the triangular blue glass bottle, shown from several angles]

As you can see, it has an unusual triangular cross-section. The final image, with a Reuleaux triangle overlaid, shows that the curvature of this cross-section is less than a Reuleaux triangle would have. I don’t think its outline was chosen with any particular mathematics in mind, but only (as so many other curved triangles) to make an interesting shape.

(Discuss on Mastodon)

by David Eppstein at November 27, 2020 06:20 PM UTC

The extremal number for surfaces

Andrey Kupavskii, Alexandr Polyanskii, István Tomon, Dmitriy Zakharov: The extremal number of surfaces

Abstract: In 1973, Brown, Erdős and Sós proved that if $\mathcal{H}$ is a 3-uniform hypergraph on $n$ vertices which contains no triangulation of the sphere, then $\mathcal{H}$ has at most $O(n^{5/2})$ edges, and this bound is the best possible up to a constant factor. Resolving a conjecture of Linial, also reiterated by Keevash, Long, Narayanan, and Scott, we show that the same result holds for triangulations of the torus. Furthermore, we extend our result to every closed orientable surface $\mathcal{S}$.

Remarks:

1) When $\mathcal{S}$ is fixed, Nati proved that there is a simplicial complex with $n$ vertices and $c n^{5/2}$ 2-faces that does not contain a triangulation of $\mathcal{S}$.

2) It is not known if there is an example with $n$ vertices and $c n^{5/2}$ 2-faces which does not contain a triangulation of any orientable closed surface $\mathcal{S}$.

We can ask the same question also for pseudomanifolds. What is the number of edges that guarantees a subcomplex with the property that every 1-face is included in precisely two 2-faces?

3) Nati conjectured that for every 2-dimensional simplicial complex $S$ there is a constant $C_S$ such that a 2-dimensional simplicial complex with $n$ vertices and $C_S n^{5/2}$ 2-faces always contains a homeomorph of $S$. The paper by Andrey, Alexandr, István, and Dmitriy asserts that proving the exponent $5/2$ for the real projective plane would imply the same exponent for all nonorientable closed surfaces.

4) The paper also mentions an unpublished result by Friedgut and Linial that $n^{8/9}$ 2-faces suffice for a torus.

5) Here is another problem I am curious about: How many 2-faces guarantee a subcomplex $K$ with $H_2(K) \ne 0$ and $H_1(K) = 0$? (You can choose the field of coefficients.) Without the second requirement the answer is roughly $n^2$.

Universal exponent for homeomorphs.

Nati’s Problem 2: Given a $k$-dimensional complex $S$, how many facets can a $k$-complex on $n$ vertices have if it contains no topological copy of $S$? (Topological copy = homeomorphic image.)

Jason Long, Bhargav Narayanan, and Corrine Yap:  Simplicial homeomorphs and trace-bounded hypergraphs 

Abstract: Our first main result is a uniform bound, in every dimension $k \in \mathbb{N}$, on the topological Turán numbers of $k$-dimensional simplicial complexes: for each $k \in \mathbb{N}$, there is a $\lambda_k \ge k^{-2k^2}$ such that for any $k$-dimensional simplicial complex $S$, every $k$-complex on $n \ge n_0(S)$ vertices with at least $n^{k+1-\lambda_k}$ facets contains a homeomorphic copy of $S$.

This was previously known only in dimensions one and two, both by highly dimension-specific arguments: the existence of $\lambda_1$ is a result of Mader from 1967, and the existence of $\lambda_2$ was suggested by Linial in 2006 and recently proved by Keevash-Long-Narayanan-Scott.

Jason Long, Bhargav Narayanan, and Corrine Yap deduce this theorem  from a very interesting purely combinatorial result about trace-bounded hypergraphs.

Here is the link to the aforementioned paper by Peter Keevash, Jason Long, Bhargav Narayanan, and Alex Scott: A universal exponent for homeomorphs. The main theorem asserts that the exponent $k+1-\lambda_2$ can be taken to be $14/5$, i.e. $\lambda_2 = 1/5$.

Let me also mention a 2018 result by Agelos Georgakopoulos, John Haslegrave, Richard Montgomery, and Bhargav Narayanan in their paper Spanning surfaces in 3-graphs, where they prove a topological extension of Dirac's theorem about Hamiltonian cycles in graphs proposed by Gowers in 2005.

Finally, finding the correct value of $\lambda_k$ would be very interesting.

by Gil Kalai at November 27, 2020 12:12 PM UTC

We hope you are all doing well and, in the case of our US audience, are slowly emerging from a comfortable Thanksgiving food-induced stupor! Our last TCS+ talk of the semester will take place this coming Wednesday, December 2nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Yang Liu from Stanford University will speak about “Faster Algorithms for Unit Maximum Flow” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: The maximum flow problem is one of the most well-studied problems in combinatorial optimization, encompassing a broad range of cut, matching, and scheduling problems. Here we present a recent line of work obtaining provably faster algorithms for solving the maximum flow problem using interior point methods. In particular, we show how to solve the maximum flow problem in $m$-edge unit capacity graphs in time almost $m^{4/3}$, improving over the breakthrough $m^{10/7}$ time algorithm of Mądry.

This is based on joint work with Aaron Sidford (arxiv.org/abs/1910.14276, arxiv.org/abs/2003.08929).

by plustcs at November 27, 2020 09:37 AM UTC

This post is based on my recent paper with Noam Razin (to appear at NeurIPS 2020), studying the question of whether norms can explain implicit regularization in deep learning. TL;DR: we argue they cannot.

Implicit regularization = norm minimization?

Understanding the implicit regularization induced by gradient-based optimization is possibly the biggest challenge facing theoretical deep learning these days. In classical machine learning we typically regularize via norms, so it seems only natural to hope that in deep learning something similar is happening under the hood, i.e. the implicit regularization strives to find minimal norm solutions. This is actually the case in the simple setting of overparameterized linear regression $-$ there, by a folklore analysis (cf. Zhang et al. 2017), gradient descent (and any other reasonable gradient-based optimizer) initialized at zero is known to converge to the minimal Euclidean norm solution. A spate of recent works (see our paper for a thorough review) has shown that for various other models an analogous result holds, i.e. gradient descent (when initialized appropriately) converges to solutions that minimize a certain (model-dependent) norm. On the other hand, as discussed last year in posts by Sanjeev as well as Wei and myself, mounting theoretical and empirical evidence suggests that it may not be possible to generally describe implicit regularization in deep learning as minimization of norms. Which is it then?

A standard test-bed: matrix factorization

A standard test-bed for theoretically studying implicit regularization in deep learning is matrix factorization $-$ matrix completion via linear neural networks. Wei and I already presented this model in our previous post, but for self-containedness I will do so again here.

In matrix completion, we are given entries $\{ M_{i, j} : (i, j) \in \Omega \}$ of an unknown matrix $M$, and our job is to recover the remaining entries. This can be seen as a supervised learning (regression) problem, where the training examples are the observed entries of $M$, the model is a matrix $W$ trained with the loss: [ \qquad \ell(W) = \sum\nolimits_{(i, j) \in \Omega} (W_{i, j} - M_{i, j})^2 ~, \qquad\qquad \color{purple}{\text{(1)}} ] and generalization corresponds to how similar $W$ is to $M$ in the unobserved locations. In order for the problem to be well-posed, we have to assume something about $M$ (otherwise the unobserved locations can hold any values, and guaranteeing generalization is impossible). The standard assumption (which has many practical applications) is that $M$ has low rank, meaning the goal is to find, among all global minima of the loss $\ell(W)$, one with minimal rank. The classic algorithm for achieving this is nuclear norm minimization $-$ a convex program which, given enough observed entries and under certain technical assumptions (“incoherence”), recovers $M$ exactly (cf. Candes and Recht).

Matrix factorization represents an alternative, deep learning approach to matrix completion. The idea is to use a linear neural network (fully-connected neural network with linear activation), and optimize the resulting objective via gradient descent (GD). More specifically, rather than working with the loss $\ell(W)$ directly, we choose a depth $L \in \mathbb{N}$, and run GD on the overparameterized objective: [ \phi ( W_1 , W_2 , \ldots , W_L ) := \ell ( W_L W_{L - 1} \cdots W_1) ~. ~~\qquad~ \color{purple}{\text{(2)}} ] Our solution to the matrix completion problem is then: [ \qquad\qquad W_{L : 1} := W_L W_{L - 1} \cdots W_1 ~, \qquad\qquad\qquad \color{purple}{\text{(3)}} ] which we refer to as the product matrix. While (for $L \geq 2$) it is possible to constrain the rank of $W_{L : 1}$ by limiting dimensions of the parameter matrices $\{ W_j \}_j$, from an implicit regularization standpoint, the case of interest is where rank is unconstrained (i.e. dimensions of $\{ W_j \}_j$ are large enough for $W_{L : 1}$ to take on any value). In this case there is no explicit regularization, and the kind of solution GD will converge to is determined implicitly by the parameterization. The degenerate case $L = 1$ is obviously uninteresting (nothing is learned in the unobserved locations), but what happens when depth is added ($L \geq 2$)?
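
For readers who want to experiment, below is a minimal numpy sketch (not our paper's code) of GD over the overparameterized objective of Equation (2), with the gradients written out explicitly. The function name, depth, learning rate, initialization scale, number of steps, and the toy completion task are all illustrative choices and may need tuning.

    import numpy as np

    def gd_matrix_factorization(observed, n, L=3, lr=0.01, init=0.1, steps=50000, seed=0):
        # observed: dict {(i, j): value} of known entries of an n x n matrix.
        # Runs GD on phi(W_1, ..., W_L) = loss(W_L ... W_1) and returns W_{L:1}.
        rng = np.random.default_rng(seed)
        Ws = [init * rng.standard_normal((n, n)) for _ in range(L)]   # W_1, ..., W_L
        for _ in range(steps):
            prefixes = [np.eye(n)]                    # prefixes[l] = W_l ... W_1
            for W in Ws:
                prefixes.append(W @ prefixes[-1])
            P = prefixes[-1]                          # product matrix W_{L:1}
            G = np.zeros((n, n))                      # d loss / d P  (square loss of Eq. (1))
            for (i, j), v in observed.items():
                G[i, j] = 2.0 * (P[i, j] - v)
            suffixes = [np.eye(n)]                    # suffixes[j] = W_L ... W_{j+1}
            for W in reversed(Ws):
                suffixes.append(suffixes[-1] @ W)
            suffixes = suffixes[::-1]
            for l in range(L):                        # gradient w.r.t. W_{l+1}
                Ws[l] -= lr * (suffixes[l + 1].T @ G @ prefixes[l].T)
        return P

    # Toy task: complete a 3 x 3 rank-1 matrix from six observed entries.
    M = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, -1.0])
    obs = {(i, j): M[i, j] for (i, j) in [(0, 0), (0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]}
    print(np.round(gd_matrix_factorization(obs, 3), 2))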

In their NeurIPS 2017 paper, Gunasekar et al. showed empirically that with depth $L = 2$, if GD is run with small learning rate starting from near-zero initialization, then the implicit regularization in matrix factorization tends to produce low-rank solutions (yielding good generalization under the standard assumption of $M$ having low rank). They conjectured that behind the scenes, what takes place is the classic nuclear norm minimization algorithm:

Conjecture 1 (Gunasekar et al. 2017; informally stated): GD (with small learning rate and near-zero initialization) over a depth $L = 2$ matrix factorization finds solution with minimum nuclear norm.

Moreover, they were able to prove the conjecture in a certain restricted setting, and others (e.g. Li et al. 2018) later derived proofs for additional specific cases.

Two years after Conjecture 1 was made, in a NeurIPS 2019 paper with Sanjeev, Wei and Yuping Luo, we presented empirical and theoretical evidence (see previous blog post for details) which led us to hypothesize the opposite, namely, that for any depth $L \geq 2$, the implicit regularization in matrix factorization can not be described as minimization of a norm:

Conjecture 2 (Arora et al. 2019; informally stated): Given a depth $L \geq 2$ matrix factorization, for any norm $\|{\cdot}\|$, there exist matrix completion tasks on which GD (with small learning rate and near-zero initialization) finds solution that does not minimize $\|{\cdot}\|$.

Due to technical subtleties in their formal statements, Conjectures 1 and 2 do not necessarily contradict. However, they represent opposite views on the question of whether or not norms can explain implicit regularization in matrix factorization. The goal of my recent work with Noam was to resolve this open question.

Implicit regularization can drive all norms to infinity

The main result in our paper is a proof that there exist simple matrix completion settings where the implicit regularization in matrix factorization drives all norms towards infinity. By this we affirm Conjecture 2, and in fact go beyond it in the following sense: (i) not only is each norm disqualified by some setting, but there are actually settings that jointly disqualify all norms; and (ii) not only are norms not necessarily minimized, but they can grow towards infinity.

The idea behind our analysis is remarkably simple. We prove the following:

Theorem (informally stated): During GD over matrix factorization (i.e. over $\phi ( W_1 , W_2 , \ldots , W_L)$ defined by Equations $\color{purple}{\text(1)}$ and $\color{purple}{\text(2)}$), if the learning rate is sufficiently small and the initialization sufficiently close to the origin, then the determinant of the product matrix $W_{L : 1}$ (Equation $\color{purple}{\text(3)}$) doesn't change sign.

A corollary is that if $\det ( W_{L : 1} )$ is positive at initialization (an event whose probability is $0.5$ under any reasonable initialization scheme), then it stays that way throughout. This seemingly benign observation has far-reaching implications. As a simple example, consider the following matrix completion problem ($*$ here stands for unobserved entry): [ \qquad\qquad \begin{pmatrix} * & 1 \newline 1 & 0 \end{pmatrix} ~. \qquad\qquad \color{purple}{\text{(4)}} ] Every solution to this problem, i.e. every matrix that agrees with its observations, must have determinant $-1$. It is therefore only logical to expect that when solving the problem using matrix factorization, the determinant of the product matrix $W_{L : 1}$ will converge to $-1$. On the other hand, we know that (with probability $0.5$ over initialization) $\det ( W_{L : 1} )$ is always positive, so what is going on? This conundrum can only mean one thing $-$ as $W_{L : 1}$ fits the observations, its value in the unobserved location (i.e. $(W_{L : 1})_{11}$) diverges to infinity, which implies that all norms grow to infinity!
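
Reusing the gd_matrix_factorization sketch from the previous section (it assumes that sketch's import and function), one can try to reproduce this effect on the pattern of Equation (4); this is purely illustrative.

    # Observations of Equation (4): entries (0, 1), (1, 0) equal 1 and (1, 1) equals 0;
    # entry (0, 0) is unobserved, and any exact solution has determinant -1.
    obs = {(0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
    for seed in range(4):
        W = gd_matrix_factorization(obs, 2, L=3, seed=seed)
        print(seed, np.round(W, 2))
    # For seeds where the product matrix starts with positive determinant (roughly half
    # of them), the unobserved (0, 0) entry grows large as the loss decreases, in line
    # with Figure 1; for the others, an exact solution with determinant -1 is reached.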

The above idea goes way beyond the simple example given in Equation $\color{purple}{\text(4)}$. We use it to prove that in a wide array of matrix completion settings, the implicit regularization in matrix factorization leads norms to increase. We also demonstrate it empirically, showing that in such settings unobserved entries grow during optimization. Here’s the result of an experiment with the setting of Equation $\color{purple}{\text(4)}$:


Figure 1: Solving matrix completion problem defined by Equation $\color{purple}{\text(4)}$ using matrix factorization leads absolute value of unobserved entry to increase (which in turn means norms increase) as loss decreases.

What is happening then?

If the implicit regularization in matrix factorization is not minimizing a norm, what is it doing? While a complete theoretical characterization is still lacking, there are signs that a potentially useful interpretation is minimization of rank. In our aforementioned NeurIPS 2019 paper, we derived a dynamical characterization (and showed supporting experiments) suggesting that matrix factorization is implicitly conducting some kind of greedy low-rank search (see previous blog post for details). This phenomenon actually facilitated a new autoencoding architecture suggested in a recent empirical paper (to appear at NeurIPS 2020) by Yann LeCun and his team at Facebook AI. Going back to the example in Equation $\color{purple}{\text(4)}$, notice that in this matrix completion problem all solutions have rank $2$, but it is possible to essentially minimize rank to $1$ by taking (absolute value of) unobserved entry to infinity. As we’ve seen, this is exactly what the implicit regularization in matrix factorization does!

Intrigued by the rank minimization viewpoint, Noam and I empirically explored an extension of matrix factorization to tensor factorization. Tensors can be thought of as high dimensional arrays, and they admit natural factorizations similarly to matrices (two dimensional arrays). We found that on the task of tensor completion (defined analogously to matrix completion $-$ see Equation $\color{purple}{\text(1)}$ and surrounding text), GD on a tensor factorization tends to produce solutions with low rank, where rank is defined in the context of tensors (for a formal definition, and a general intro to tensors and their factorizations, see this excellent survey by Kolda and Bader). That is, just like in matrix factorization, the implicit regularization in tensor factorization also strives to minimize rank! Here’s a representative result from one of our experiments:


Figure 2: In analogy with matrix factorization, the implicit regularization of tensor factorization (high dimensional extension) strives to find a low (tensor) rank solution. Plots show reconstruction error and (tensor) rank of final solution on multiple tensor completion problems differing in the number of observations. GD over tensor factorization is compared against "linear" method $-$ GD over direct parameterization of tensor initialized at zero (this is equivalent to fitting observations while placing zeros in unobserved locations).

So what can tensor factorizations tell us about deep learning? It turns out that, similarly to how matrix factorizations correspond to prediction of matrix entries via linear neural networks, tensor factorizations can be seen as prediction of tensor entries with a certain type of non-linear neural networks, named convolutional arithmetic circuits (in my PhD I worked a lot on analyzing the expressive power of these models, as well as showing that they work well in practice $-$ see this survey for a soft overview).


Figure 3: The equivalence between matrix factorizations and linear neural networks extends to an equivalence between tensor factorizations and a certain type of non-linear neural networks named convolutional arithmetic circuits.

Analogously to how the input-output mapping of a linear neural network can be thought of as a matrix, that of a convolutional arithmetic circuit is naturally represented by a tensor. The experiment reported in Figure 2 (and similar ones presented in our paper) thus provides a second example of a neural network architecture whose implicit regularization strives to lower a notion of rank for its input-output mapping. This leads us to believe that implicit rank minimization may be a general phenomenon, and developing notions of rank for input-output mappings of contemporary models may be key to explaining generalization in deep learning.

Nadav Cohen

at November 27, 2020 09:00 AM UTC

I wanted to link to a survey that is now up, entitled Committee on TCS Connections Questionnaire.  They are examining modifying approaches to publishing in the theoretical computer science community, and they are focusing on FOCS/STOC.

I personally approve of the idea of the committee, though I admit I am concerned that it's too little, too late.  For years, FOCS/STOC has had a culture concerned with some sense of "prestige" -- the number of accepted papers has to be kept low, because we want people outside of theory to take FOCS/STOC as an imprimatur for the top theory work.  Because of this, FOCS/STOC has stayed essentially the same size, while the field (whether you view the field as TCS or computer science writ large) has expanded.  This has led to a proliferation of additional conferences (ITCS, HALG, various theory workshops...) that reduce the importance of FOCS/STOC and their role in creating community cohesion.  It has also led to other areas (most notably AI) becoming the home for work that should be quite at home in major TCS conferences.

I don't think FOCS/STOC is what it used to be (the central home for theory results, when theory was smaller) or what it has supposedly wanted to be (the home for the best theory results).  I think it makes a lot of sense to stop and think about what they should be for the future.  Hence the survey is important, and I encourage the theoretical computer science community to respond.  I'm not sure, though, that there are great answers -- external forces, and the community's general aversion to change, may mean that there is not much to be done.

by Michael Mitzenmacher (noreply@blogger.com) at November 26, 2020 03:46 PM UTC

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" (what is the probability of 5-year survival after cancer diagnosis?) and which increasingly form the basis for life-altering decisions. Drawing on an understanding of computational indistinguishability developed in complexity theory and cryptography, we introduce Outcome Indistinguishability. Predictors that are Outcome Indistinguishable yield a generative model for outcomes that cannot be efficiently refuted on the basis of the real-life observations produced by Nature. We investigate a hierarchy of Outcome Indistinguishability definitions, whose stringency increases with the degree to which distinguishers may access the predictor in question. Our findings reveal that Outcome Indistinguishability behaves qualitatively differently than previously studied notions of indistinguishability. First, we provide constructions at all levels of the hierarchy. Then, leveraging recently-developed machinery for proving average-case fine-grained hardness, we obtain lower bounds on the complexity of the more stringent forms of Outcome Indistinguishability. This hardness result provides the first scientific grounds for the political argument that, when inspecting algorithmic risk prediction instruments, auditors should be granted oracle access to the algorithm, not simply historical predictions.

at November 26, 2020 04:54 AM UTC

While a lot of pain is still ahead, this year I’m thankful that a dark chapter in American history might be finally drawing to a close. I’m thankful that the mRNA vaccines actually work. I’m thankful that my family has remained safe, and I’m thankful for all the essential workers who’ve kept our civilization running.

A few things:

  1. Friend-of-the-blog Jelani Nelson asked me to advertise an important questionnaire for theoretical computer scientists, about what the future of STOC and FOCS should look like (for example, should they become all virtual?). It only takes 2 or 3 minutes to fill out (I just did).
  2. Here’s a podcast that I recently did with UT Austin undergraduate Dwarkesh Patel. (As usual, I recommend 2x speed to compensate for my verbal tics.)
  3. Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity! Like, I dunno, a (sub)exponential black-box speedup for the adiabatic algorithm, or anti-concentration for log-depth random quantum circuits, or an improved shadow tomography procedure, or a quantum algorithm for nonlinear differential equations, or a barrier to proving strong 3-party parallel repetition, or equivalence of one-way functions and time-bounded Kolmogorov complexity, or turning any hard-on-average NP problem into one that’s guaranteed to have solutions.
  4. It’s funny how quantum computing, P vs. NP, and so forth can come to feel like just an utterly mundane day job, not something anyone outside a small circle could possibly want to talk about while the fate of civilization hangs in the balance. Sometimes it takes my readers to remind me that not only are these topics what brought most of you here in the first place, they’re also awesome! So, I’ll mark that down as one more thing to be thankful for.

by Scott at November 26, 2020 02:03 AM UTC

[Announcement from Jelani Nelson –Boaz]

TL;DR: https://tinyurl.com/tcs-connections

A task force has been convened by CATCS to investigate possible
approaches to modifying aspects of the TCS community, especially our
publishing culture, to enhance connections with other areas of CS and
be as welcoming as possible to a broad range of contributions within
theory. This committee will collect and synthesize feedback from the
community via the questionnaire at https://tinyurl.com/tcs-connections, and then make suggestions.

Since adjusting conference formats may help with this, we would like to get
a general idea of community opinion on format choices. If you have
some opinions you would like to share, please use this form. Though
the questions below focus primarily on FOCS/STOC, we welcome any and
all suggestions that would make the TCS community as broad and as
well-connected to other areas of CS as possible (see the
last question of the survey).

Task force members:
Ken Clarkson (IBM Research)
Sandy Irani (UC Irvine)
Bobby Kleinberg (Cornell)
Adam Klivans (UT Austin)
Ravi Kumar (Google)
Jelani Nelson (UC Berkeley)
Yuval Rabani (Hebrew University)

by Boaz Barak at November 25, 2020 08:20 PM UTC

I (Dan Spielman) have an opening for a postdoc to start in July or September 2021. The exact start date will be flexible. The postdoc will be hosted at the Yale Institute for Network Science.

The position will have an initial appointment of one year, but will be extendible to two or three years. We will try to create opportunities to teach if the postdoc is interested.

Website: http://cs-www.cs.yale.edu/homes/spielman/postdoc.html
Email: daniel.spielman@yale.edu

by shacharlovett at November 25, 2020 07:05 PM UTC

In November 2020 we had a very nice open problem session in our weekly combinatorics seminar at HUJI, so I thought I would write a series of posts describing the problems presented there.  This is the first post in the series. Later I will add a link to the Zoom recording of the session, and to a writeup of all the problems.

While we are on the topic of open problems, let me mention that in connection with the OPAC 2020 conference (now delayed to May 2021), a community blog to discuss open problems in algebraic combinatorics was created. Everybody is invited to check out the problems posted there, to subscribe, and to contribute their own problems. And, of course, to solve the problems! The list of problems so far is impressive.

Nati’s problem

Problem 1: Given a 2-manifold M, what is the number of 2-faces of a 2-dimensional simplicial complex S on n vertices that guarantees that S contains a triangulation of M?

The case where M is the two-dimensional sphere goes back to a theorem of Brown, Erdős and Sós, who proved that the answer is Cn^{5/2}. Nati proved that the lower bound applies to every 2-manifold M.
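
To pin down the quantities (our paraphrase, not Nati's exact wording; the extremal function f(n, M) is notation we introduce here), the problem and the bounds mentioned above can be written as:

```latex
% Our paraphrase of Problem 1; f(n,M) is notation introduced here, not from the talk.
\[
  f(n,M) \;=\; \min\bigl\{\, m \;:\; \text{every 2-dimensional simplicial complex on } n
  \text{ vertices with } m \text{ 2-faces contains a triangulation of } M \,\bigr\},
\]
\[
  f(n,S^2) \;=\; \Theta\!\bigl(n^{5/2}\bigr) \ \text{(Brown--Erd\H{o}s--S\'os)},
  \qquad
  f(n,M) \;=\; \Omega\!\bigl(n^{5/2}\bigr) \ \text{for every 2-manifold } M \ \text{(Linial)}.
\]
```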

Problem 2: Given a k-dimensional complex S, how many facets can a k-complex on n vertices have if it contains no topological copy of S? (A topological copy is a homeomorphic image.)

Nati mentioned some further background and recent progress regarding these problems, which I will describe in a separate post.

“High dimensional combinatorics”

Problems 1 and 2 are part of "Linial's programme" of systematically studying "high-dimensional combinatorics".  Here are links to a few papers, slides of talks, and videotaped talks. Among the highlights of this theory are the study, starting with a paper by Linial and Roy Meshulam, of the Erdős–Rényi model for simplicial complexes (which is closely related to high-dimensional expanders), the study of high-dimensional permutations, and high-dimensional tournaments.
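
For concreteness, here is a minimal sketch (our own illustration, with made-up parameters) of sampling from the Linial–Meshulam random 2-complex Y_2(n, p) mentioned above: take the full 1-skeleton on n vertices, and include each of the C(n, 3) possible 2-faces independently with probability p.

```python
# Minimal sketch (our illustration) of the Linial–Meshulam random 2-complex Y_2(n, p):
# full 1-skeleton on n vertices, each triple included as a 2-face with probability p.
from itertools import combinations
import random

def linial_meshulam(n, p, seed=0):
    rng = random.Random(seed)
    vertices = range(n)
    edges = list(combinations(vertices, 2))                        # all edges (1-skeleton)
    faces = [f for f in combinations(vertices, 3) if rng.random() < p]
    return edges, faces

edges, faces = linial_meshulam(n=10, p=0.3)
print(len(edges), "edges,", len(faces), "2-faces")
```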

Challenges of high-dimensional combinatorics (slides), Laszlo Lovasz 70th birthday conference, Budapest, July 2018; video (covering 2/3 of the slides).

What is high-dimensional combinatorics?, Random-Approx, August 2008.

On high-dimensional acyclic tournaments, Nati Linial and Avraham Morgenstern, Discrete and Computational Geometry 50 (2013), 1085–1100.

Homological connectivity of random 2-dimensional complexes, Nati Linial and Roy Meshulam, Combinatorica 26 (2006), 475–487.

Random simplicial complexes – around the phase transition, Nati Linial and Yuval Peled, A Journey Through Discrete Mathematics: A Tribute to Jiri Matousek, 543–570 (2017).

(See Nati’s homepage for more.)

by Gil Kalai at November 25, 2020 03:32 PM UTC

Galileo Galilei has many self-appointed intellectual heirs these days. Whether it’s a claim that the election has been stolen, that COVID-19 is less fatal than the flu, that climate change or evolution are hoaxes, or that P=NP, we keep hearing from people who consider themselves bold truth-tellers railing against conventional wisdom. We are encouraged to “teach the debate” and told that, if we only paid attention, we would see that their tweet, declaration, or arXiv paper contains an irrefutable proof of their assertions.

In the words of Ted Cruz, “They brand you a heretic. Today, the global warming alarmists are the equivalent of the flat-Earthers. It used to be [that] it is accepted scientific wisdom the Earth is flat, and this heretic named Galileo was branded a denier”.

Of course, by Galileo’s time it was well known that the earth was spherical, and Magellan’s expedition had circumnavigated the earth more than 40 years before Galileo was born. But putting aside Cruz’s confusion of flat-earthism and geocentrism, the story of heliocentric theory is not one of an outsider railing against the scientific mainstream. Galileo himself was a chaired professor of mathematics at the University of Padua, and later philosopher and mathematician to the Grand Duke of Tuscany. He was very much part of the scientific establishment of his time. Moreover, though Galileo did provide important evidence for heliocentrism, he was not the only one doing so. Kepler found a heliocentric model with elliptical orbits that actually made correct predictions, and, though it took a decade or so, Kepler’s book eventually became the standard textbook for astronomy.

My point in this post is not to rehash the history of heliocentrism or Galileo but rather to call out a misconception which, to use Sean Carroll’s phrasing, amounts to valorization of puzzle-solving over wisdom.

It is tempting to think that an argument, regardless of whether it comes from an expert or a random person on Twitter, can be presented in a self-contained way and judged on its merits. However, this is not how things work in any interesting setting. Even in the case of a purported P vs NP proof, there is background knowledge on computational complexity without which it would be hard to spot holes in the argument. This is doubly so for any claim involving empirical facts, whether it’s about elections, infections, climate, etc. It is not possible to evaluate such claims without context, and to get this context you need to turn to the experts who have studied the topic.

I have written before in defense of expertise (see also here) but Carroll puts it very well. Another way to say it is that the operational interpretation of the common refrain

Extraordinary claims require extraordinary evidence

is

Treat claims conforming to conventional wisdom with charity, and claims disputing it with skepticism.

(There is a question of how to define “conventional wisdom”, but interestingly there is usually agreement in practice on both sides. Most “deniers” of various sorts are proud of going against conventional wisdom, but don’t acknowledge that this means they are more likely to be wrong.)

As an example, even if someone has expertise in analytic number theory, and so presumably has plenty of so-called “puzzle-solving intelligence”, that doesn’t mean that they can evaluate a statistical claim about election fraud, or that their analysis should be considered evidence (apparently at this point the number theorist himself agrees). We can try to read and debunk what they wrote, or we can assume that if there were evidence of large-scale fraud, then the president of the United States and his well-funded campaign would have managed to find actual statisticians and election experts to make the case.

There can be debate over whether Trump’s attempt to overturn the election should be considered dangerous or merely absurd, but the constant attacks on the very notions of truth, science, and expertise are causing far-reaching harm.

(H/T: Scott Aaronson, who makes a similar point around the election conspiracies.)

by Boaz Barak at November 24, 2020 10:30 PM UTC