Theory of Computing Blog Aggregator

The Euler International Mathematical Institute in St. Petersburg is seeking
postdocs in Math, TCS, and Mathematical and Theoretical Physics. St. Petersburg is the most beautiful city in the world and has multiple mathematical institutions, including the Steklov Institute of Mathematics and the Dept. of Mathematics and CS at St. Petersburg State Univ. Preference is given to applications sent before 11/30/19.


by shacharlovett at October 22, 2019 08:52 AM UTC

Authors: Ulrich Bauer, Abhishek Rathod, Jonathan Spreer
Download: PDF
Abstract: Deciding whether two simplicial complexes are homotopy equivalent is a fundamental problem in topology, which is famously undecidable. There exists a combinatorial refinement of this concept, called simple-homotopy equivalence: two simplicial complexes are of the same simple-homotopy type if they can be transformed into each other by a sequence of moves of two basic types, elementary collapses and their inverses, elementary expansions. In this article we consider the following related problem: given a 2-dimensional simplicial complex, is there a simple-homotopy equivalence to a 1-dimensional simplicial complex using at most p expansions? We show that this problem is W[P]-complete in the natural parameter p, which we call the erasability expansion height.

at October 22, 2019 12:00 AM UTC

Authors: N. V. Kitov, M. V. Volkov
Download: PDF
Abstract: Kauffman monoids $\mathcal{K}_n$ and Jones monoids $\mathcal{J}_n$, $n=2,3,\dots$, are two families of monoids relevant in knot theory. We prove a somewhat counterintuitive result that the Kauffman monoids $\mathcal{K}_3$ and $\mathcal{K}_4$ satisfy exactly the same identities. This leads to a polynomial time algorithm to check whether a given identity holds in $\mathcal{K}_4$. As a byproduct, we also find a polynomial time algorithm for checking identities in the Jones monoid $\mathcal{J}_4$.

at October 22, 2019 01:20 AM UTC

Authors: Zhenwei Dai, Anshumali Shrivastava
Download: PDF
Abstract: Recent work suggests improving the performance of Bloom filters by incorporating a machine learning model as a binary classifier. However, such a learned Bloom filter does not take full advantage of the predicted probability scores. We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of score regions. We prove that our algorithms have a lower False Positive Rate (FPR) and memory usage compared with existing approaches to learned Bloom filters. We also demonstrate the improved performance of our algorithms on real-world datasets.
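The score-region idea can be sketched with a toy example (an illustration of the general approach, not the authors' algorithms; the thresholds and routing scheme here are invented for exposition): a plain Bloom filter, plus a wrapper that routes each query to a per-region backup filter according to where its classifier score falls, accepting the highest-score region outright.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter with k hash functions over an m-bit array."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indices(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for j in self._indices(item):
            self.bits[j] = True

    def query(self, item):
        return all(self.bits[j] for j in self._indices(item))

def region_query(item, score, region_filters, thresholds):
    """Hypothetical score-region query: ascending thresholds split [0, 1]
    into regions, each with its own backup Bloom filter (which could be
    sized per region); scores above the top threshold are accepted with
    no filter at all, trusting the model outright."""
    for t, f in zip(thresholds, region_filters):
        if score < t:
            return f.query(item)
    return True
```

The point of using several regions rather than one threshold is that backup filters for high-confidence regions can be made much smaller, trading the classifier's score information for memory.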

at October 22, 2019 12:00 AM UTC

Authors: Aram Harrow, Saeed Mehraban, Mehdi Soleimanifar
Download: PDF
Abstract: In this paper, we present a quasi-polynomial time classical algorithm that estimates the partition function of quantum many-body systems at temperatures above the thermal phase transition point. It is known that in the worst case, the same problem is NP-hard below this point. Together with our work, this shows that the transition in the phase of a quantum system is also accompanied by a transition in the hardness of approximation. We also show that in a system of n particles above the phase transition point, the correlation between two observables whose distance is at least log(n) decays exponentially. We can improve the factor of log(n) to a constant when the Hamiltonian has commuting terms or is on a 1D chain. The key to our results is a characterization of the phase transition and the critical behavior of the system in terms of the complex zeros of the partition function. Our work extends a seminal work of Dobrushin and Shlosman on the equivalence between the decay of correlations and the analyticity of the free energy in classical spin models. On the algorithmic side, our result extends the scope of a recent approach due to Barvinok for solving classical counting problems to quantum many-body systems.

at October 22, 2019 12:00 AM UTC

Authors: Jacob Holm, Eva Rotenberg
Download: PDF
Abstract: We show that every labelled planar graph $G$ can be assigned a canonical embedding $\phi(G)$, such that for any planar $G'$ that differs from $G$ by the insertion or deletion of one edge, the number of local changes to the combinatorial embedding needed to get from $\phi(G)$ to $\phi(G')$ is $O(\log n)$.

In contrast, there exist embedded graphs where $\Omega(n)$ changes are necessary to accommodate one inserted edge. We provide a matching lower bound of $\Omega(\log n)$ local changes, and although our upper bound is worst-case, our lower bound holds in the amortized case as well.

Our proof is based on BC trees and SPQR trees, and we develop \emph{pre-split} variants of these for general graphs, based on a novel biased heavy-path decomposition, where the structural changes corresponding to edge insertions and deletions in the underlying graph consist of at most $O(\log n)$ basic operations of a particularly simple form.

As a secondary result, we show how to maintain the pre-split trees under edge insertions in the underlying graph deterministically in worst case $O(\log^3 n)$ time. Using this, we obtain deterministic data structures for incremental planarity testing, incremental planar embedding, and incremental triconnectivity, that each have worst case $O(\log^3 n)$ update and query time, answering an open question by La Poutr\'e and Westbrook from 1998.

at October 22, 2019 12:00 AM UTC

Authors: Zhiyang Dou, Shiqing Xin, Rui Xu, Jian Xu, Yuanfeng Zhou, Shuangmin Chen, Wenping Wang, Xiuyang Zhao, Changhe Tu
Download: PDF
Abstract: Motivated by the fact that the medial axis transform is able to encode nearly the complete shape, we propose to use as few medial balls as possible to approximate the original volume enclosed by the boundary surface. We progressively select new medial balls, in a top-down style, to enlarge the region spanned by the existing medial balls. The key idea of the selection strategy is to encourage large medial balls while imposing the given geometric constraints. We further propose a speedup technique based on a provable observation that the intersection of medial balls implies the adjacency of power cells (in the sense of the power crust). We further elaborate the selection rules in combination with two closely related applications. One application is an easy-to-use ball-stick modeling system that helps non-professional users quickly build a shape with only balls and wires, where any penetration between two medial balls must be suppressed. The other application is to generate porous structures with convex, compact (with a high isoperimetric quotient) and shape-aware pores, where two adjacent spherical pores may penetrate each other as long as mechanical rigidity is preserved.

at October 22, 2019 01:25 AM UTC

Authors: Jeremiah Blocki, Mike Cinkoske
Download: PDF
Abstract: We create a graph reduction that transforms an $(e, d)$-edge-depth-robust graph with $m$ edges into a $(e/4,d)$-depth-robust graph with $O(m)$ nodes and constant indegree. An $(e,d)$-depth-robust graph is a directed, acyclic graph with the property that after removing any $e$ nodes of the graph there remains a path of length at least $d$. Similarly, an $(e, d)$-edge-depth-robust graph is a directed, acyclic graph with the property that after removing any $e$ edges of the graph there remains a path of length at least $d$. Our reduction relies on constructing graphs with a property we define and analyze called ST-Robustness. We say that a directed, acyclic graph with $n$ inputs and $n$ outputs is $(k_1, k_2)$-ST-Robust if after removing any $k_1$ nodes there exists a subgraph containing at least $k_2$ inputs and $k_2$ outputs such that each of the $k_2$ inputs is connected to all of the $k_2$ outputs. We use our reduction on a well-known edge-depth-robust graph to construct an $(\frac{n \log \log n}{\log n}, \frac{n}{\log n (\log n)^{\log \log n}})$-depth-robust graph.
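For small instances the depth-robustness definition can be checked directly. The brute-force sketch below (illustrative only; it enumerates all $\binom{n}{e}$ removals, so it is nowhere near the scale such constructions target) tests $(e, d)$-depth-robustness of a DAG given in topological order.

```python
from itertools import combinations

def longest_path_len(order, adj):
    """Length in edges of the longest path in a DAG whose nodes are
    listed in topological order, with adjacency lists in `adj`."""
    dist = {v: 0 for v in order}
    for u in order:
        for v in adj.get(u, []):
            dist[v] = max(dist[v], dist[u] + 1)
    return max(dist.values(), default=0)

def is_depth_robust(order, edges, e, d):
    """Brute-force test of (e, d)-depth-robustness: after removing ANY
    e nodes, some path of length at least d must survive."""
    for removed in combinations(order, e):
        gone = set(removed)
        kept = [v for v in order if v not in gone]
        adj = {}
        for u, v in edges:
            if u not in gone and v not in gone:
                adj.setdefault(u, []).append(v)
        if longest_path_len(kept, adj) < d:
            return False
    return True
```

For example, the path on five nodes is $(1,1)$-depth-robust but not $(1,2)$-depth-robust: removing the middle node leaves only paths with a single edge.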

at October 22, 2019 12:00 AM UTC

Authors: Tomer Kotek, Johann A. Makowsky
Download: PDF
Abstract: This is a survey on the exact complexity of computing the Tutte polynomial. It is the longer 2017 version of Chapter 25 of the CRC Handbook on the Tutte polynomial and related topics, edited by J. Ellis-Monaghan and I. Moffatt, which is due to appear in the first quarter of 2020. In the version to be published in the Handbook, Sections 5 and 6 are shortened and merged into a single section.

at October 22, 2019 01:20 AM UTC

Authors: Anand Louis, Rakesh Venkat
Download: PDF
Abstract: Graph partitioning problems are a central topic of study in algorithms and complexity theory. Edge expansion and vertex expansion, two popular graph partitioning objectives, seek a $2$-partition of the vertex set of the graph that minimizes the considered objective. However, for many natural applications, one might require a graph to be partitioned into $k$ parts, for some $k \geq 2$. For a $k$-partition $S_1, \ldots, S_k$ of the vertex set of a graph $G = (V,E)$, the $k$-way edge expansion (resp. vertex expansion) of $\{S_1, \ldots, S_k\}$ is defined as $\max_{i \in [k]} \Phi(S_i)$, and the balanced $k$-way edge expansion (resp. vertex expansion) of $G$ is defined as \[ \min_{ \{S_1, \ldots, S_k\} \in \mathcal{P}_k} \max_{i \in [k]} \Phi(S_i) \, , \] where $\mathcal{P}_k$ is the set of all balanced $k$-partitions of $V$ (i.e., each part of a $k$-partition in $\mathcal{P}_k$ should have cardinality $|V|/k$), and $\Phi(S)$ denotes the edge expansion (resp. vertex expansion) of $S \subset V$. We study a natural planted model for graphs where the vertex set of a graph has a $k$-partition $S_1, \ldots, S_k$ such that the graph induced on each $S_i$ has large expansion, but each $S_i$ has small edge expansion (resp. vertex expansion) in the graph. We give bi-criteria approximation algorithms for computing the balanced $k$-way edge expansion (resp. vertex expansion) of instances in this planted model.
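Given a concrete partition, the quantities above are straightforward to evaluate. A small sketch, using one common normalization $\Phi(S) = |E(S, V \setminus S)|/|S|$ (conventions for expansion vary; this is only an illustration of the definitions, not the paper's algorithms):

```python
def edge_expansion(S, edges):
    """Phi(S) = (# edges with exactly one endpoint in S) / |S|."""
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    return cut / len(S)

def k_way_edge_expansion(parts, edges):
    """k-way edge expansion of a given k-partition: max_i Phi(S_i)."""
    return max(edge_expansion(set(S), edges) for S in parts)
```

For two triangles joined by a single bridge edge, the natural 2-partition into the triangles has $k$-way edge expansion $1/3$: each part has one cut edge and three vertices.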

at October 22, 2019 12:00 AM UTC

Authors: Eunjin Oh, Hee-Kap Ahn
Download: PDF
Abstract: We study the following range searching problem: Preprocess a set $P$ of $n$ points in the plane with respect to a set $\mathcal{O}$ of $k$ orientations in the plane so that given an $\mathcal{O}$-oriented convex polygon $Q$, the convex hull of $P\cap Q$ can be computed efficiently, where an $\mathcal{O}$-oriented polygon is a polygon whose edges have orientations in $\mathcal{O}$. We present a data structure with $O(nk^3\log^2n)$ space and $O(nk^3\log^2n)$ construction time, and an $O(h+s\log^2 n)$-time query algorithm for any query $\mathcal{O}$-oriented convex $s$-gon $Q$, where $h$ is the complexity of the convex hull.

Also, we can compute the perimeter or area of the convex hull of $P\cap Q$ in $O(s\log^2n)$ time using the data structure.

at October 22, 2019 12:00 AM UTC

Authors: Ibrahim Jubran, Alaa Maalouf, Dan Feldman
Download: PDF
Abstract: A coreset (or core-set) of an input set is a small summary of it, such that solving a problem on the coreset as input provably yields the same result as solving the same problem on the original (full) set, for a given family of problems (models, classifiers, loss functions). Over the past decade, coreset construction algorithms have been suggested for many fundamental problems in, e.g., machine/deep learning, computer vision, graphics, databases, and theoretical computer science. This introductory paper was written following requests from readers (usually non-experts, but also colleagues) regarding the many inconsistent coreset definitions, the lack of available source code, the required deep theoretical background from different fields, and the dense papers that make it hard for beginners to apply coresets and develop new ones.

The paper provides folklore, classic and simple results, including step-by-step proofs and figures, for the simplest (accurate) coresets of very basic problems, such as: sum of vectors, minimum enclosing ball, SVD/PCA and linear regression. Nevertheless, we did not find most of these constructions in the literature. Moreover, we expect that putting them together in a retrospective context will help the reader grasp modern results that usually extend and generalize these fundamental observations. Experts might appreciate the unified notation and a comparison table that links existing results.

Open source code with example scripts is provided for all the presented algorithms, to demonstrate their practical usage and to support readers who are more familiar with programming than math.
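To give the flavor of an "accurate" coreset, here is a folklore example in the spirit of the ones the paper surveys (a sketch written for this post, not code from the paper): for the 1-mean cost $\sum_p \lVert p - x \rVert^2$, the mean, the point count, and a single residual scalar form a size-one coreset that reproduces the cost exactly for every query point $x$.

```python
def one_mean_coreset(points):
    """Accurate coreset for the 1-mean cost: (mean, n, residual), where
    residual is the sum of squared distances of the points to their mean."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    residual = sum(sum((p[j] - mean[j]) ** 2 for j in range(d))
                   for p in points)
    return mean, n, residual

def coreset_cost(coreset, x):
    """Cost recovered from the coreset alone, using the identity
    sum_p ||p - x||^2 = n * ||mean - x||^2 + residual."""
    mean, n, residual = coreset
    return n * sum((mean[j] - x[j]) ** 2 for j in range(len(x))) + residual

def true_cost(points, x):
    return sum(sum((p[j] - x[j]) ** 2 for j in range(len(p)))
               for p in points)
```

The identity behind `coreset_cost` is the usual bias-variance decomposition, so the compression here is lossless for this particular cost, which is what "accurate" means in this context.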

at October 22, 2019 12:00 AM UTC

Authors: Yujin Choi, Seungjun Lee, Hee-Kap Ahn
Download: PDF
Abstract: We study the problem of finding maximum-area rectangles contained in a polygon in the plane. There has been a fair amount of work for this problem when the rectangles have to be axis-aligned or when the polygon is convex. We consider this problem in a simple polygon with $n$ vertices, possibly with holes, and with no restriction on the orientation of the rectangles. We present an algorithm that computes a maximum-area rectangle in $O(n^3\log n)$ time using $O(kn^2)$ space, where $k$ is the number of reflex vertices of $P$. Our algorithm can report all maximum-area rectangles in the same time using $O(n^3)$ space.

We also present a simple algorithm that finds a maximum-area rectangle contained in a convex polygon with $n$ vertices in $O(n^3)$ time using $O(n)$ space.

at October 22, 2019 12:00 AM UTC

Authors: Nesreen K. Ahmed, Nick Duffield, Ryan A. Rossi
Download: PDF
Abstract: Temporal networks representing a stream of timestamped edges are seemingly ubiquitous in the real world. However, the massive size and continuous nature of these networks make them fundamentally challenging to analyze and leverage for descriptive and predictive modeling tasks. In this work, we propose a general framework for temporal network sampling with unbiased estimation. We develop online, single-pass sampling algorithms and unbiased estimators for temporal network sampling. The proposed algorithms enable fast, accurate, and memory-efficient statistical estimation of temporal network patterns and properties. In addition, we propose a temporally decaying sampling algorithm with unbiased estimators for studying networks that evolve in continuous time, where the strength of links is a function of time, and the motif patterns are temporally weighted. In contrast to the prior notion of a $\Delta t$-temporal motif, the proposed formulation and algorithms for counting temporally weighted motifs are useful for forecasting tasks in networks such as predicting future links, or a future time-series variable of nodes and links. Finally, extensive experiments on a variety of temporal networks from different domains demonstrate the effectiveness of the proposed algorithms.
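The single-pass sampling primitive underlying such frameworks is easy to illustrate. Below is classic reservoir sampling over an edge stream (a generic sketch, not the authors' temporally decaying algorithm or their estimators):

```python
import random

def reservoir_edges(stream, k, seed=0):
    """Algorithm R: maintain a uniform sample of k edges from a stream.
    After t edges have arrived, each edge seen so far is in the sample
    with probability k/t, which supports unbiased (Horvitz-Thompson
    style) estimation of totals over the stream."""
    rng = random.Random(seed)
    sample = []
    for t, edge in enumerate(stream, start=1):
        if t <= k:
            sample.append(edge)
        else:
            j = rng.randrange(t)  # uniform in [0, t)
            if j < k:
                sample[j] = edge
    return sample
```

Temporally decaying variants replace the uniform inclusion probability with one that depends on an edge's age, and adjust the estimators' weights accordingly.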

at October 22, 2019 01:21 AM UTC

Authors: Titus Dose
Download: PDF
Abstract: We build on a working program initiated by Pudl\'ak [Pud17] and construct an oracle relative to which each set in $\mathrm{coNP}$ has $\mathrm{P}$-optimal proof systems and $\mathrm{NP}\cap\mathrm{coNP}$ does not have complete problems.

at October 22, 2019 01:20 AM UTC

Our 1977 paper on the role of formal methods

[ Harvard ]

Harry Lewis is known for his research in mathematical logic, and for his wonderful contributions to teaching. He had two students that you may have heard of before: a Bill Gates and a Mark Zuckerberg.

Today I wish to talk about a recent request from Harry about a book that he is editing.

The book is the “Classic Papers of CS” based on a course that he has been teaching for years. It will contain 46 papers with short introductions by Harry. My paper from 1977 with Alan Perlis and Rich DeMillo will be included. The paper is “Social Processes and Proofs of Theorems and Programs”.

Harry says that “A valued colleague believes this paper displays such polemical overreach that it should not appear in this collection”. I hope that it does still appear anyway. Harry goes on to say

And though verification techniques are widely used today for hardware designs, formal verification of large software systems is still a rarity.


Our Point

I have mixed feelings about our paper, which is now getting close to fifty years old. I believe we had some good points to make then, and that these are still relevant today. Our paper starts with:

Many people have argued that computer programming should strive to become more like mathematics. Maybe so, but not in the way they seem to think.

Our point was just this: Proofs in mathematics are not just formal arguments that show that a theorem is correct. They are much more. They must show why and how something is true. They must explain and extend our understanding of why something is true. They must do more than just demonstrate that something is correct.

They must also make it clear what they claim to prove. A difficulty we felt, then, was that care must be given to what one is claiming to prove. In mathematics often what is being proved is simple to state. In practice that is less clear. A long complex statement may not correctly capture what one is trying to prove.

Who proves that the specification is correct?

Proof Of Our Point

I have often wondered why some do not see this point: that proofs are more than “correctness checks”. I thought I would list some “proofs” of this point.

• The great Carl Gauss gave the first proof of the law of quadratic reciprocity. He later published six more proofs, and two more were found in his posthumous papers. There are now over two hundred published proofs.

So much for saying that a proof is just a check.

• Thomas Hales solved the Kepler conjecture on sphere packing in three-dimensional Euclidean space. He faced some comments that his proof might not be certain—it was said to be 99% certain. So he used formal methods to get a formal proof. But…

• Maryna Viazovska solved the related problem in eight dimensions. Her proof is here. The excitement over this packing result is striking compared with Hales's result. No need for correctness checks in her proof.

Henry Cohn says here:

One measure of the complexity of a proof is how long it takes the community to digest it. By this standard, Viazovska’s proof is remarkably simple. It was understood by a number of people within a few days of her arXiv posting, and within a week it led to further progress: Abhinav Kumar, Stephen Miller, Danylo Radchenko, and I worked with Viazovska to adapt her methods to prove that the Leech lattice is an optimal sphere packing in twenty-four dimensions. This is the only other case above three dimensions in which the sphere packing problem has been solved.

So, one proof that a proof is great is that it helps create new proofs of something else. Okay, that is a clumsy way to say it. What I mean is that a great proof is one that enables new insights, that enables further progress, that advances the field. Not just a result that “checks” for correctness.

• A proof of the famous ABC conjecture of Joseph Oesterlé and David Masser has been claimed by Shinichi Mochizuki. Arguments continue about his proof. Peter Scholze and Jakob Stix believe the proof is flawed and unfixable. Mochizuki claims they are wrong.

Will a formal proof solve this impasse? Perhaps not. A proof that explains why it is true might. A proof that advances number theory elsewhere might; a proof whose ideas could solve other problems likely would.

Open Problems

What do you think about the role of proofs? Did we miss the point years ago?

Will formal verification become effective in the near future? And when it does, will it help provide explanations? We note this recent discussion of a presentation by Kevin Buzzard of Imperial College, London, at the one-day workshop on “The Mechanization of Math” which took place two weeks ago in New York City.

[Typo fixed]

by rjlipton at October 21, 2019 03:51 PM UTC

Recently there was an excellent xkcd about differentiation and integration, see here.

This brings up thoughts on diff and int:

1) For some students, integration is when math gets hard.

Diff (at least at the level of Calc I) is rote memorization. A computer program can EASILY do diff.

Integration by humans requires more guesswork and thought. Computers can now do it very well, but I think it was harder to get to work.

Someone who has worked on programs for both, please comment.
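To illustrate just how rote diff is, here is a toy symbolic differentiator (a sketch; expressions are encoded as nested tuples) that does Calc-I differentiation by blind rule application, with a numeric evaluator to check its output:

```python
import math

# Expressions: ('x',), ('const', c), ('add', f, g), ('mul', f, g), ('sin', f).
def d(e):
    """Differentiate by mechanically applying the textbook rules."""
    op = e[0]
    if op == 'x':
        return ('const', 1)
    if op == 'const':
        return ('const', 0)
    if op == 'add':      # sum rule
        return ('add', d(e[1]), d(e[2]))
    if op == 'mul':      # product rule
        return ('add', ('mul', d(e[1]), e[2]), ('mul', e[1], d(e[2])))
    if op == 'sin':      # chain rule
        return ('mul', ('cos', e[1]), d(e[1]))
    raise ValueError(op)

def ev(e, x):
    """Evaluate an expression at the point x."""
    op = e[0]
    if op == 'x':
        return x
    if op == 'const':
        return e[1]
    if op == 'add':
        return ev(e[1], x) + ev(e[2], x)
    if op == 'mul':
        return ev(e[1], x) * ev(e[2], x)
    if op == 'sin':
        return math.sin(ev(e[1], x))
    if op == 'cos':
        return math.cos(ev(e[1], x))
    raise ValueError(op)
```

Integration has no comparably complete set of local rewrite rules, which is part of why computer algebra systems needed much heavier machinery to get it working well.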

2) When I took Honors Calculus back in 1976 (from Jeff Cheeger at SUNY Stony Brook) he made a comment which really puzzled the class, myself included, but later I understood it:

             Integration is easier than Differentiation

The class thought this was very odd, since the problem of, GIVEN a function, find its diff, seemed easier than, GIVEN a function, find its int. And of course I am talking about the kinds of functions one is given in Calc I and Calc II, so this is not meant to be a formal statement.

What he meant was that integration has better mathematical properties than differentiation. For example, differentiating the function f(x)=abs(x) (absolute value of x) is problematic at 0, whereas it has no problem with integration anywhere (alas, if only our society were as relaxed about integration as f(x)=abs(x) is).

So I would say that the class and Dr. Cheeger were both right (someone else might say they were both wrong); we were just looking at different notions of easy and hard.

Are there other cases in math where `easy' and `hard' can mean very different things?

by GASARCH ( at October 21, 2019 03:47 PM UTC

This year we have a targeted search in all areas of quantum computing, with a particular emphasis on quantum algorithms and quantum complexity theory. Candidates interested in a faculty position should apply here.

by James at October 21, 2019 08:37 AM UTC

Authors: Shaohua Li, Marcin Pilipczuk, Manuel Sorge
Download: PDF
Abstract: Given a graph $G=(V,E)$ and an integer $k$, the Cluster Editing problem asks whether we can transform $G$ into a union of vertex-disjoint cliques by at most $k$ modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph $G=(V,E)$, a packing $\mathcal{H}$ of modification-disjoint induced $P_3$s (no pair of $P_3$s in $\cal H$ share an edge or non-edge) and an integer $\ell$. The task is to decide whether $G$ can be transformed into a union of vertex-disjoint cliques by at most $\ell+|\cal H|$ modifications (edge deletions or insertions). We show that this problem is NP-hard even when $\ell=0$ (in which case the problem asks to turn $G$ into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of $\cal H$). This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by Komusiewicz at Shonan meeting no. 144 in March 2019.
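For intuition about the target of the modifications: a graph is a disjoint union of cliques exactly when it contains no induced $P_3$, equivalently when every vertex's neighborhood is a clique, which gives a quick test (a sketch for illustration, unrelated to the paper's hardness proof):

```python
def is_cluster_graph(n, edges):
    """True iff the graph on vertices 0..n-1 is a disjoint union of
    cliques, i.e. contains no induced path on three vertices (P3)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # every two neighbors of a common vertex must themselves be adjacent
    return all(w in adj[u]
               for v in range(n)
               for u in adj[v]
               for w in adj[v]
               if u < w)
```

Cluster Editing asks how few edge insertions or deletions are needed to make this test succeed; the packing $\mathcal{H}$ certifies a lower bound of $|\mathcal{H}|$ modifications, since each induced $P_3$ needs at least one.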

at October 21, 2019 11:21 PM UTC

Authors: Sarah Tymochko, Elizabeth Munch, Firas A. Khasawneh
Download: PDF
Abstract: As the field of Topological Data Analysis continues to show success in theory and in applications, there has been increasing interest in using tools from this field with methods for machine learning. Using persistent homology, specifically persistence diagrams, as inputs to machine learning techniques requires some mathematical creativity. The space of persistence diagrams does not have the desirable properties for machine learning, thus methods such as kernel methods and vectorization methods have been developed. One such featurization of persistence diagrams by Perea, Munch and Khasawneh uses continuous, compactly supported functions, referred to as "template functions," which results in a stable vector representation of the persistence diagram. In this paper, we provide a method of adaptively partitioning persistence diagrams to improve these featurizations based on localized information in the diagrams. Additionally, we provide a framework to adaptively select parameters required for the template functions in order to best utilize the partitioning method. We present results for application to example data sets comparing classification results between template function featurizations with and without partitioning, in addition to other methods from the literature.

at October 21, 2019 12:00 AM UTC

Authors: Sam Buss, Anupam Das, Alexander Knop
Download: PDF
Abstract: This paper studies propositional proof systems in which lines are sequents of decision trees or branching programs - deterministic and nondeterministic. The systems LDT and LNDT are propositional proof systems in which lines represent deterministic or non-deterministic decision trees. Branching programs are modeled as decision dags. Adding extension to LDT and LNDT gives systems eLDT and eLNDT in which lines represent deterministic and non-deterministic branching programs, respectively.

Deterministic and non-deterministic branching programs correspond to log-space (L) and nondeterministic log-space (NL). Thus the systems eLDT and eLNDT are propositional proof systems that reason with (nonuniform) L and NL properties.

The main results of the paper are simulation and non-simulation results for tree-like and dag-like proofs in the systems LDT, LNDT, eLDT, and eLNDT. These systems are also compared with Frege systems, constant-depth Frege systems, and extended Frege systems.

at October 21, 2019 11:20 PM UTC

Authors: Muhammad Irfan Yousuf, Raheel Anwar
Download: PDF
Abstract: Graph sampling provides an efficient yet inexpensive solution for analyzing large graphs. When extracting small representative subgraphs from large graphs, the challenge is to capture the properties of the original graph. Several sampling algorithms have been proposed in previous studies, but they often fail to extract good samples. In this paper, we propose a new sampling method called Weighted Edge Sampling. In this method, we give equal weight to all the edges in the beginning. During the sampling process, we sample an edge with probability proportional to its weight. When an edge is sampled, we increase the weight of its neighboring edges, which increases their probability of being sampled. Our method extracts the neighborhood of a sampled edge more efficiently than previous approaches. We evaluate the efficacy of our sampling approach empirically using several real-world data sets and compare it with some of the previous approaches. We find that our method produces samples that better match the original graphs. We also calculate the Root Mean Square Error and Kolmogorov-Smirnov distance to compare the results quantitatively.
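The weighting scheme described in the abstract can be sketched as follows (a toy reconstruction from the description above, not the paper's exact algorithm; the boost factor is invented for illustration):

```python
import random

def weighted_edge_sample(edges, k, boost=1.5, seed=0):
    """Sketch of weighted edge sampling: draw k distinct edges, each with
    probability proportional to its current weight; whenever an edge is
    drawn, boost the weights of edges sharing an endpoint with it, so the
    sample tends to grow around already-sampled neighborhoods."""
    rng = random.Random(seed)
    w = {e: 1.0 for e in edges}
    sample = []
    for _ in range(k):
        pool = [e for e in edges if e not in sample]
        r = rng.uniform(0, sum(w[e] for e in pool))
        pick, acc = pool[-1], 0.0
        for e in pool:              # roulette-wheel selection by weight
            acc += w[e]
            if acc >= r:
                pick = e
                break
        sample.append(pick)
        u, v = pick
        for e in pool:              # boost edges touching the picked edge
            if e is not pick and (u in e or v in e):
                w[e] *= boost
    return sample
```

The boost step is what distinguishes this from plain uniform edge sampling: it biases later draws toward the neighborhood of edges already in the sample.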

at October 21, 2019 12:00 AM UTC

The Johns Hopkins University Department of Computer Science seeks applicants for tenure-track faculty positions at all levels and across all areas of computer science. The department will consider applications in two tracks: (1) an open track seeking excellent candidates across all areas of computer science; and (2) a track seeking candidates in the area of human-computer interaction (HCI).


by shacharlovett at October 20, 2019 04:15 PM UTC

Like many UC Irvine faculty I live in University Hills, a faculty housing complex associated with UC Irvine. It’s a great place to live: the prices are significantly lower than the surrounding area, I like my neighbors, and I love living so close to my office (ten minutes by foot) that I can walk to work instead of having to deal with the twin headaches of Southern California traffic and university parking.

Because it’s so convenient for walking, University Hills is filled with footpaths, many of which pass through greenbelts instead of running alongside the roads. The main footpath leading to the campus from the neighborhood heads towards a building designed in the shape of a giant arch, with the intent of providing a gateway into the central campus. Because the building is part of the engineering school, it’s called the Engineering Gateway. Here it is from the campus side:

Engineering Gateway from the UC Irvine campus

It looks inviting, but you wouldn’t know from this view that it’s now a dead end. Here’s a view from the other side, from the end of the footpath that used to connect to it via a crosswalk across the ring road around campus. The crosswalk has been ripped out, replaced by a fence, and planted with ivy to discourage anyone from crossing that way.

Engineering Gateway from University Hills

Instead, the path has been rerouted to dump you onto the ring road, where a little farther along there’s a new replacement crosswalk. You can get into the campus by crossing there and following a service road (creatively named “Engineering Service Road”) past this lovely view:

Engineering Service Road, UC Irvine

Alternatively, you can still get to the Engineering Gateway by walking a half-block out of your way down the ring road, crossing, and then following this inviting sidewalk another half-block back the way you came:

Along Peltason Road to the Engineering Gateway

I don’t usually take either of those two routes. Instead, I take a different path down a different service road, between two loading docks, where a narrow gap between the backs of two buildings (the University Club and the computer science department) leads into the campus. Here’s what it looks like on weekends; on weekdays, it’s often completely blocked by delivery trucks.

Los Trancos Drive, UC Irvine

It’s almost as if by making these routes so awkward and ugly, the campus offices of transportation and physical and environmental planning, which pride themselves on their sustainability, are trying to send the faculty a message. But what could that message be?

Stop · Do Not Enter · Don't Walk · No Pedestrians · No Pedestrian Access

(Discuss on Mastodon or more likely on the UHills mailing list)

by David Eppstein at October 19, 2019 08:11 PM UTC

The position is open from 1 June 2020 or as soon as possible thereafter.

Using the power of mathematics, we strive to create fundamental breakthroughs in algorithmic thinking. While the focus of BARC is algorithms theory, we do have a track record of surprising algorithmic discoveries leading to major industrial applications. Please find the full job advertisement at


by shacharlovett at October 18, 2019 07:20 AM UTC

Starting PhD students would always assume that the computer science academic job market would be as strong or as weak when they graduate as it was when they started, and they would always be wrong. That may have changed. We've had such a stretch of strong growth in computer science, starting as we pulled out of the financial crisis in 2012, that students who started in the strong market back then see only a much stronger market today.

Every fall I recap advice for students, and others, looking for academic jobs. The best sources are the ads from the CRA and the ACM. For postdoc and faculty positions specific to theoretical computer science, check out TCS Jobs and Theory Announcements. If you have jobs to announce, please post to the above and/or feel free to leave a comment on this post. Even if you don't see an ad, almost surely your favorite university is looking to hire computer scientists. Check out their website or email someone at the department. The CRA just published a member book, a collection of one-pagers for several departments, almost all of which are trying to grow.

Needless to say we're trying to greatly expand computing at Illinois Tech, come join us.

Something new this year: CATCS is collecting and disseminating profiles of junior theory researchers on the job market this year. Definitely take advantage, whether by signing up as a job seeker or by reaching out to theorists on the market once the profiles are posted. The CRA also maintains a CV database for candidates for academic, industrial and government research positions.

While this is a job-seeker's market, you still need to put your best foot forward. Reach out to professors at conferences, such as the upcoming FOCS. Polish your CV and get your Google Scholar page in good shape. Practice your job talk; a bad one can kill your visit. Research ahead of time the people you will see during the interview; I like to write down one interesting discussion topic for each. You'll need to sell yourself to non-theorists. Data, cybersecurity and quantum are hot this year; highlight your work in those areas without making it look fake.

In any case have fun! You'll meet lots of interesting people in your job search and eat way too much.

by Lance Fortnow ( at October 17, 2019 08:50 PM UTC

The mathematical equations in my blog posts, and the ones you see on many other web sites, are formatted with MathJax, a JavaScript library that lets web developers write LaTeX formulas and turns them into nicely formatted math within your browser. The web pages of my blog are generated by Jekyll, a static web site generation system (meaning that it doesn’t query a database for its content; the pages are just files stored somewhere). I can write my posts in more than one format, but since the April 2017 LiveJournal apocalypse I’ve been writing them using kramdown, a system built into Jekyll for transforming marked-up text files into html ready for browsers to read and display. And so far those different systems have mostly been getting along really well together. Kramdown knows about MathJax and can handle equations in its input without trying to interpret their syntax as kramdown codes, Jekyll only needs me to modify a template somewhere so that my blog pages include an invocation of the MathJax library, and MathJax in your browser happily formats the equations in my posts. But recently, the MathJax people released MathJax version 3.0.0, and that doesn’t work so well with Jekyll and kramdown. Despite some difficulty, I seem to have gotten them working again. So I thought it might be helpful to post here what went wrong and how I fixed it, in case others run into the same issues.

There are multiple ways of invoking MathJax, but the one I’ve been using is simply to put a line in my html headers saying to load the MathJax library from a content distribution network (asynchronously, so that it doesn’t delay the pages from being shown to readers). Once MathJax loads, it scans through the html that it has been applied to, looking for blocks of math to reformat. The default way of marking these blocks is to include them in \( ... \) or \[ ... \] delimiters (for inline formulas and display formulas that go on a line of their own, as you might use in LaTeX if you aren’t still using $ ... $ or $$ ... $$ instead). There are ways of changing the defaults, and those ways have also changed between MathJax 2 and MathJax 3, but I wasn’t using them.

In kramdown, you don’t use the same delimiters for math. Kramdown expects to see mathematical formulas delimited by $$ ... $$ in its marked-up text input, always. It will determine from context whether it’s an inline formula or a display formula. It also doesn’t use the default delimiters in the html that it generates. Instead it outputs html that puts inline formulas inside <script type="math/tex"> ... </script> html tags, and, similarly, puts display formulas inside <script type="math/tex; mode=display"> ... </script> tags. This all worked in MathJax 2, and these script delimiters are still recommended in the MathJax 3 documentation, but they don’t work any more.

The right way to fix this would be either to get MathJax 3 to understand the script delimiters, or to get kramdown to know how to generate something that works in MathJax 3, but I don’t have a lot of control over either. And the second-best fix might be to use some other software after kramdown runs, to change the delimiters in the static html files before they get served to anyone, but I don’t have that option on my blog host. Instead, I followed a suggestion in the kramdown documentation for working with KaTeX, a competing JavaScript library to MathJax for formatting mathematical equations in web pages. The suggestion is to add to your html files a little bit of glue JavaScript code that recognizes the formula delimiters produced by kramdown and does something with them. In my case, the something that I want to do is just to convert them to the delimiters that MathJax recognizes by default.

Timing is crucial here. If I try to run the JavaScript to convert the delimiters too early, they won’t yet be part of the html document that the JavaScript is running on and won’t be found and converted. In particular, running it at the time the html headers are parsed is too early. If I run it too late, the web page will already have been shown to the person viewing it, and each conversion step of each delimiter will also be shown as a slow and unsightly change of the text, on top of the later changes performed by MathJax. You can put JavaScript code at the end of the body of an html page, but that would be too late. Additionally, MathJax should be loaded asynchronously (to prevent slowdowns before the viewer sees something useful from the web page) but must not run until all of the delimiter conversions are complete, because otherwise it won’t see the converted delimiters. So I ended up with the following chunk of JavaScript code, in the Jekyll file _includes/head.html that gets copied into the headers of my html pages. It waits until the entire document is loaded, converts the delimiters, and then loads the MathJax library.

<script type="text/javascript">
document.addEventListener('DOMContentLoaded', function(){
  // Convert kramdown's inline-math script tags to MathJax \( ... \) delimiters
  document.querySelectorAll("script[type='math/tex']").forEach(function(el){
    el.outerHTML = "\\(" + el.textContent + "\\)";
  });
  // Convert display-math script tags to \[ ... \] delimiters
  document.querySelectorAll("script[type='math/tex; mode=display']").forEach(function(el){
    el.outerHTML = "\\[" + el.textContent + "\\]";
  });
  // Only after the conversions are done, load the MathJax library
  var script = document.createElement('script');
  script.src = "";
  document.head.appendChild(script);
}, false);
</script>

This could be simplified somewhat with jQuery, but I didn’t do that because this is the only JavaScript in my files and the overhead of loading jQuery seemed too much for that small use. It’s my first JavaScript code ever, so it could probably be done better by someone with more experience. And it’s a bit of a hack, but it seems to work. One other change that I made implies that you won’t see this code in the html for this post, though. The reason is that I don’t want MathJax incorrectly interpreting the example delimiters in my post and in the code block above as actual mathematics formula delimiters. So I also added some Jekyll conditionals that, with the right keyword in the header of a post, disable including the MathJax JavaScript, and I’m using that keyword on this post.

…and I thought I was done, until I started looking at some mathematics-intensive older posts, and found some more problems. In a few cases, kramdown has been putting more than just the script delimiters around its math formulas. Within the script tags, the math has been surrounded by a second level of delimiters, % <![CDATA[ ... %]]>. This coding tells the html parser not to worry about weird special characters in the formula, and it was ignored by the old MathJax because the percent signs cause the rest of their lines to be treated as a comment. But the new MathJax parser doesn’t like the comments (or maybe treats the whole formula as a comment despite the newline characters within it) and displays a blank. This behavior is triggered in kramdown when a formula uses < instead of \lt (easy enough to avoid), or when it uses & (e.g. in an aligned set of equations, not easy to avoid). So the actual code I ended up with is a little more complicated:

<script type="text/javascript">
document.addEventListener('DOMContentLoaded', function(){
  // Remove the CDATA wrapper that kramdown adds around some formulas
  function stripcdata(x) {
    if (x.startsWith('% <![CDATA[') && x.endsWith('%]]>'))
      return x.substring(11, x.length - 4);
    return x;
  }
  document.querySelectorAll("script[type='math/tex']").forEach(function(el){
    el.outerHTML = "\\(" + stripcdata(el.textContent) + "\\)";
  });
  document.querySelectorAll("script[type='math/tex; mode=display']").forEach(function(el){
    el.outerHTML = "\\[" + stripcdata(el.textContent) + "\\]";
  });
  var script = document.createElement('script');
  script.src = "";
  document.head.appendChild(script);
}, false);
</script>

If you see any mathematics glitches in any of my old or new posts, please tell me; they could be more interactions like this that I haven’t spotted yet.

(Discuss on Mathstodon, which also recently switched to MathJax 3)

by David Eppstein at October 17, 2019 08:30 PM UTC

If an origami crease pattern tells you where to put the folds, but not which way to fold them, you may have many choices left to make. A familiar example is the square grid. You can pleat (accordion-fold) the horizontal lines of the grid, and then pleat the resulting folded strip of paper along the vertical lines; the result will be that each horizontal line is consistently a mountain fold or a valley fold, but each vertical line has folds that alternate between mountain and valley. Or you could pleat the vertical lines first, and then the horizontal lines, getting a different folded state. There are many other choices beyond these two.

The famous Miura-ori is another grid-like fold made out of parallelograms instead of squares, and known for its ability to continuously unfold from its folded state to an expanded and completely open state. Like the square grid, a pattern of crease lines in the same positions has many alternative foldings. In fact, this multiplicity of folded states can be helpful in making the Miura-ori out of paper. To make it, you can start with pleating a set of parallel lines (like the square grid) to form a folded strip of paper, and then pleat in a different direction that forms a non-right angle to the first pleating direction. The result will be a fold that is not the Miura-ori, but that has its folds in the same places as the Miura-ori. By reversing the orientation of some of these folds, you get the Miura-ori itself.

When working with the space of all foldings of a crease pattern, it’s unfortunately a bit complicated to understand which patterns fold flat (globally, as an entire structure) and which don’t. In a celebrated result from SODA 1996, Bern and Hayes showed that, for arbitrary crease patterns, even determining whether there exists a globally flat-foldable state is NP-complete. So it’s easier to work with “local flat foldings”, meaning a labeling of all of the creases as mountain or valley folds with the property that the creases surrounding each vertex of the folding pattern could be folded flat, if only all of that other stuff farther away from the vertex didn’t get in the way. It’s easy to check whether a single vertex can be folded flat using Maekawa’s theorem, Kawasaki’s theorem and related results.
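The single-vertex tests mentioned above are easy to state in code. Here is a minimal sketch (function names are mine; angles in degrees): Kawasaki's theorem says that a vertex whose consecutive crease angles sum to 360° folds flat exactly when the alternating sum of the angles is zero, and Maekawa's theorem says that at a flat-folded vertex the numbers of mountain and valley creases differ by exactly two.

```javascript
// Kawasaki's theorem: alternating sum of consecutive crease angles is zero.
function kawasaki(angles) {
  if (angles.length % 2 !== 0) return false; // need an even number of creases
  let alt = 0;
  angles.forEach((a, i) => { alt += (i % 2 === 0 ? a : -a); });
  return Math.abs(alt) < 1e-9;
}

// Maekawa's theorem: mountain and valley counts differ by exactly 2.
function maekawa(mountains, valleys) {
  return Math.abs(mountains - valleys) === 2;
}

// An interior vertex of the pleated square grid: four creases at right angles.
console.log(kawasaki([90, 90, 90, 90])); // true
console.log(maekawa(3, 1)); // true: three mountains, one valley
```

These are necessary conditions checked independently at each vertex, which is exactly why they only certify local (not global) flat-foldability.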

In an earlier paper, my co-authors and I studied the space of all local flat foldings of the Miura-ori crease pattern, from the point of view of seeking forcing sets, mountain-valley assignments to small subsets of the creases with the property that there is only one way to extend them to a locally flat-foldable mountain-valley assignment on the whole crease pattern. My new preprint, “Face flips in origami tessellations” (with Akitaya, Dujmović, Hull, Jain, and Lubiw, arXiv:1910.05667) instead looks at the connectivity of the system of all local flat foldings. If you’re in one locally flat-folded state (say, the state that you get from pleating the paper once and then pleating the folded strip a second time in a non-orthogonal direction) and you want to get to a different flat-folded state (say, the Miura-ori itself), how many moves does it take? Here, we’re not just allowing any change of a single crease from mountain to valley or vice versa to count as a move. Instead, a move is what happens when you change all of the folds surrounding a single face of the crease pattern, in such a way that the new mountain-valley assignment remains flat-foldable.

The results vary dramatically according to the folding pattern. For the square grid, every square can be flipped, and given any two mountain-valley assignments you can color the squares black or white according to whether they are surrounded an even or odd number of times by cycles of creases that need to be changed. Then the shortest way to get from one assignment to the other is either to flip all the black squares or to flip all the white squares. For a grid of equilateral triangles, it is possible to flip from any mountain-valley assignment to any other one within a polynomial number of steps, but finding the shortest sequence is NP-complete. And for the Miura-ori, it’s again always possible, and there’s a nontrivial polynomial time algorithm for finding the shortest flip sequence. We also have examples of crease patterns forming periodic tilings of the plane where nothing is flippable (so the state space becomes totally disconnected) or where the flippable faces form an independent set (so you merely have to flip the faces whose surrounding mountain-valley assignments differ, in any order). The image below shows two crease patterns of the latter type (called square twist tessellations) with the flippable faces colored blue.

Square twist tessellations

I think the results for the Miura-ori are particularly neat, so I want to outline them in a little more detail. There’s a natural bijection between local flat-foldings of the Miura crease pattern and 3-colorings of the squares of a square grid, which we used in the earlier paper, and flips of Miura faces correspond in the same way to recoloring steps in which we change the color of a single square. There’s also a natural correspondence (but not a bijection!) between 3-colorings of a grid and “height functions”, giving an integer height to each square, with each two adjacent squares having heights that are one step apart. In one direction, you can get a coloring from a height function by taking the heights mod 3. In the other direction, starting from a colored grid and a choice of the height of one of the squares (with the correct value modulo 3) you can go from there to adjacent squares step by step and figure out what their height has to be. It all works out so that, no matter how you do it, you always get a consistent height function from each 3-coloring. But the function depends on the starting height of the first square. If you add a multiple of 3 to this height, you translate the whole function up or down by that amount, and the translation turns out to be important.
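The coloring-to-height-function direction can be sketched in a few lines of JavaScript (a hedged sketch; the function and variable names are mine, and the grid is assumed to be a proper 3-coloring with colors 0, 1, 2 in which adjacent squares never share a color):

```javascript
// Recover a height function from a 3-coloring of a square grid.
// Heights of adjacent squares differ by exactly 1, and height = color (mod 3).
function heightFunction(colors, startHeight) {
  const rows = colors.length, cols = colors[0].length;
  if (((startHeight % 3) + 3) % 3 !== colors[0][0]) {
    throw new Error('start height must match the first color mod 3');
  }
  const h = colors.map(row => row.map(() => null));
  h[0][0] = startHeight;
  // Propagate square by square: the step to an adjacent square of color c
  // is +1 if c = current height + 1 (mod 3), and -1 otherwise.
  const queue = [[0, 0]];
  while (queue.length) {
    const [i, j] = queue.shift();
    for (const [di, dj] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const x = i + di, y = j + dj;
      if (x < 0 || y < 0 || x >= rows || y >= cols || h[x][y] !== null) continue;
      const step = ((colors[x][y] - h[i][j]) % 3 + 3) % 3 === 1 ? 1 : -1;
      h[x][y] = h[i][j] + step;
      queue.push([x, y]);
    }
  }
  return h;
}

// A 2x2 example: coloring [[0,1],[1,2]] with starting height 0
console.log(heightFunction([[0, 1], [1, 2]], 0)); // heights [[0, 1], [1, 2]]
```

Adding 3 to `startHeight` translates the whole height function up by 3, which is exactly the translation freedom discussed above.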

Height function of a 3-coloring of a square grid

So if you have two different local flat foldings of the Miura crease pattern, you can translate them in this way into two different 3-colorings, and two different height functions. Then you can convert one of the height functions into the other one, move by move, by repeatedly finding the square whose height is farthest away from its final value and shifting it two steps closer. The total number of steps equals half the volume of the three-dimensional space between the two height functions, and that’s the best you can do to get one height function to the other. But it might not be the best you can do to get one 3-coloring to the other, because of the choice of starting heights. To find the minimum number of moves to get from one local flat folding to another there’s one more computation that you have to do first: find two starting heights for the two height functions that makes the volume between them as small as possible.
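That last computation can be sketched by brute force (a hedged sketch, with names of my own choosing): the flip distance is half the volume between the two height functions, minimized over shifting one of them up or down by a multiple of 3.

```javascript
// Minimum number of face flips between two local flat foldings, given as
// height functions h1, h2 on the same grid: half the L1 volume between
// them, minimized over vertical shifts of h2 by multiples of 3.
function flipDistance(h1, h2) {
  const flat1 = h1.flat(), flat2 = h2.flat();
  let best = Infinity;
  // The optimal shift sits near the median difference; for a sketch,
  // scanning a generous range of multiples of 3 is enough.
  for (let k = -30; k <= 30; k += 3) {
    let vol = 0;
    for (let i = 0; i < flat1.length; i++) vol += Math.abs(flat1[i] - (flat2[i] + k));
    best = Math.min(best, vol / 2);
  }
  return best;
}

console.log(flipDistance([[0, 1], [1, 2]], [[0, 1], [1, 0]])); // 1
```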

For a bit more on height functions, 3-colorings, and the “arctic circle” that one gets by choosing 3-colorings of the square grid randomly, you can read the web page “random 3-coloring of the square and cubic lattice”, by Joakim Linde, describing his joint work with Cris Moore in this area.

(Discuss on Mastodon)

by David Eppstein at October 16, 2019 09:52 PM UTC

  • Spanning Trees with Low (Shallow) Stabbing Number is the master’s thesis of Johannes Obenaus at the Free University of Berlin and ETH Zürich. The stabbing number of a tree is the maximum number of its edges that a single line can cross. Point sets always have spanning trees with low stabbing number, which is useful in some data structures. The thesis includes a solution to Open Problem 17.5 of my book Forbidden Configurations in Discrete Geometry: removing points from a point set might cause the minimum stabbing number of a spanning tree to increase.

  • A pretty result in inversive geometry from the Japanese “Wasan” period, relating the diameters of circles in Steiner chains between two parallel lines to regular polygons.

  • Counting Memories by Chiharu Shiota, an installation art piece in Katowice, Poland that prompts visitors to reflect on how numbers “connect us universally, comfort us, and help us understand ourselves” by writing down their feelings and memories about numbers that are meaningful to them.

  • Revisiting Minesweeper. As Uncle Colin shows, calculating the probabilities of different scenarios for the boundary of the cleared region needs to consider as well the number of mines in non-boundary cells. Based on that, one can find the safest move, at least when there are few enough scenarios to list them all. But it looks much harder to find the move most likely to lead to clearing the whole board, even for simple initial situations like the one he shows.

  • Blind folks and the evolving elephant. Guest post by my colleague Vijay Vazirani on the “Turing’s Invisible Hand” blog, on the different perspectives brought by economics and computer science to problems of matching resource providers with resource consumers.

  • My new dining room ceiling lamp is a trefoil knot! It’s the “Vornado” LED lamp from WAC lighting. We chose it to replace a halogen lamp that shorted out, burned through its power cable, fell onto the table below it, and shattered hot glass all over the room, fortunately without causing a fire or seriously damaging the table and while the room was unoccupied.

    Vornado lamp from WAC lighting

  • Two speakers censored at AISA, an Australian information security conference. One of them is Australian, the other not. Both talks were scheduled long in advance and were cancelled after a last-minute demand from the Australian Cyber Security Centre. As Bruce Schneier writes, this kind of action merely calls attention to their work and makes the Australian government look stupid and repressive while doing nothing to actually increase security.

  • A history of mathematical crankery, excerpted from David S. Richeson’s book Tales of Impossibility: The 2000-Year Quest to Solve the Mathematical Problems of Antiquity.

  • Incenters of chocolate-iced cakes and more fair cake-cutting. If you want to divide both cake and frosting (area and perimeter) into equal pieces, it helps to start with a shape that has an inscribed circle.

    Relatedly, Erel Segal has written up for Wikipedia a collection of open problems in fair division.

  • The Trump administration wants to roll back fair housing laws by allowing racist algorithms to discriminate on behalf of racist landlords. The deadline for telling them this is a stupid idea is this Friday, October 18.

  • Japanese KitKats are replacing plastic packaging with origami paper, in a bid to be both more fun and more environmentally conscious.

  • Living Proof, a free e-book collecting stories of mathematicians about the roadblocks on their paths to where they are now.

  • Kotzig’s theorem. Every convex polyhedron has an edge whose endpoints have total degree at most 13. You might think that (because the average vertex degree in a convex polyhedron is < 6) there will always be an edge whose endpoints have total degree at most 11, but it’s not true. As Anton Kotzig proved in 1955, the answer is 13. A worst-case example is the triakis icosahedron, whose minimum-degree edges connect vertices of degrees 3 and 10.

by David Eppstein at October 15, 2019 10:00 PM UTC

The next TCS+ talk will take place this coming Tuesday, October 22nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC) (note the unusual day). Hao Huang from Emory University will speak about “A proof of the Sensitivity Conjecture” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: In the n-dimensional hypercube graph, one can easily choose half of the vertices such that they induce an empty graph. However, having even just one more vertex would cause the induced subgraph to contain a vertex of degree at least \sqrt{n}. This result is best possible, and improves a logarithmic lower bound shown by Chung, Furedi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of it.

As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy.
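The combinatorial statement in the abstract is small enough to check exhaustively for n = 3. A hedged brute-force sketch (the helper name is mine), where the vertices of the 3-cube are the integers 0–7 and two vertices are adjacent iff they differ in exactly one bit:

```javascript
// Maximum degree of the subgraph of the 3-cube induced by `vertices`.
function maxInducedDegree(vertices) {
  let best = 0;
  for (const v of vertices) {
    let deg = 0;
    for (const u of vertices) {
      const x = u ^ v;
      if (x !== 0 && (x & (x - 1)) === 0) deg++; // x is a power of two: one differing bit
    }
    best = Math.max(best, deg);
  }
  return best;
}

// Half of the vertices can induce an empty graph (the even-weight vertices):
console.log(maxInducedDegree([0, 3, 5, 6])); // 0

// But every set of 2^(n-1) + 1 = 5 vertices forces degree >= ceil(sqrt(3)) = 2:
let ok = true;
for (let mask = 0; mask < 256; mask++) {
  const subset = [];
  for (let v = 0; v < 8; v++) if (mask & (1 << v)) subset.push(v);
  if (subset.length === 5 && maxInducedDegree(subset) < 2) ok = false;
}
console.log(ok); // true
```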


by plustcs at October 15, 2019 09:23 PM UTC

It gives me great pleasure to inform you that the project "Static prediction of test flakiness" submitted by Breno Alexandro Ferreira de Miranda (Federal University of Pernambuco, Brazil), Antonia Bertolino (Istituto di Scienza e Tecnologie dell’Informazione – CNR), Emilio Cruciani (Gran Sasso Science Institute) and Roberto Verdecchia (Gran Sasso Science Institute) has been selected for a Facebook Testing and Verification research award. Congratulations to everyone and to CS@GSSI as a whole!

The winners have been selected from a pool of over 100 proposals and are listed at 

Quoting from that web page: 

“We are excited that, once again, we received over 100 proposals, and of such high quality,” says Mark Harman, Research Scientist and Facebook TAV co-chair. “We are really excited to see this research develop, and to see the deployment of advanced research in industry.”

“Despite doubling the funding available this year, there were many excellent proposals that we were sadly unable to fund this round. Such was the quality of the proposals we received,” says Mark. “We are very grateful to the scientific research community for engaging so enthusiastically with this call for proposals.”

The award will be presented at the upcoming Facebook Testing and Verification Symposium.

by Luca Aceto ( at October 15, 2019 02:51 PM UTC

BARC is a leading center for fundamental algorithmic research, headed by VILLUM Investigator Mikkel Thorup, aiming to attract top talent from around the world to an ambitious, creative, collaborative, and fun environment. Using the power of mathematics, we strive to create fundamental breakthroughs in algorithmic thinking.


by shacharlovett at October 15, 2019 07:44 AM UTC

November 23-24, 2019, Emory University, Atlanta, GA. Registration deadline: November 20, 2019. Emory University, Georgia Tech and Georgia State University, with support from the National Science Foundation and the National Security Agency, will continue the series of mini-conferences in 2019-2020. The next, and 24th overall, of these mini-conferences will be held at Emory University.

by shacharlovett at October 14, 2019 09:20 PM UTC

Chapter 5 of Problems with a Point (by Gasarch and Kruskal) is about how mathematical objects get their names. If it were an e-book that I could edit and add to (is this a good idea or not? More on that later), then I would have added the following.

The Sheldon Conjecture

Background: Nobel Laureate Sheldon Cooper has said that 73 is the best number because

a) 73 is prime.

b) 73 is the 21st prime and note that 7*3=21.

c) The mirror of 73, namely 37, is prime.

d) 37 is the 12th prime, and 12 is the mirror of 21.

Sheldon never quite said it's the only such number; that was conjectured by Jessie Byrnes, Chris Spicer, and Alyssa Turnquist here. They called it Sheldon's Conjecture, probably since Sheldon Cooper should have conjectured it.
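The arithmetic in (a)-(d) is easy to verify mechanically. A quick sketch (the helper names are mine):

```javascript
// Trial-division primality test, fine for numbers this small.
function isPrime(n) {
  if (n < 2) return false;
  for (let d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
}

// 1-based index of a prime p in the sequence of primes.
function primeIndex(p) {
  let count = 0;
  for (let k = 2; k <= p; k++) if (isPrime(k)) count++;
  return count;
}

// Decimal-digit mirror of n.
function mirror(n) {
  return Number(String(n).split('').reverse().join(''));
}

console.log(isPrime(73));         // true: (a)
console.log(primeIndex(73));      // 21, and 7 * 3 === 21: (b)
console.log(isPrime(mirror(73))); // true: 37 is prime, (c)
console.log(primeIndex(37));      // 12, the mirror of 21: (d)
```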

Why didn't Sheldon make Sheldon's conjecture? This kind of question has been asked before:

Could Euler have conjectured the prime number theorem?

Why didn't Hilbert (or others) pursue Ramsey Theory?

(readers are encouraged to give other examples)

I doubt we'll be asking this about Sheldon Cooper since he is a fictional character.

I am delighted that

a) There is a Sheldon's Conjecture.

b) It has been solved by Pomerance and Spicer, see here

Actually (b) might not be so delightful--- once a conjecture is proven it stops being called by the name of the conjecturer. If you don't believe me just ask Vazsonyi or Baudet. If you don't know who they are then (1) see here and (2) that proves my point. So perhaps I wish it had not been solved, so The Sheldon Conjecture would live on as a name.

Another issue this brings up: Let's say that Problems with a Point were an online book that I was able to edit easily. Then I might add material on The Sheldon Conjecture. And while I am at it, I would add The Monty Hall Paradox to the chapter on how theorems get their names. Plus, I would fix some typos and references, and perhaps update some references. Now let's say that all books were online and the authors could modify them. Would this be good or bad?

1) Good- The book would get better and better as errors got removed.

2) Good- The book would get to include material that is appropriate but came out after it was published.

3) Good- The book would get to include material that is appropriate but the authors forgot to include the first time around.

4) Bad- For referencing the book or for book reviews of the book, you are looking at different objects. The current system has First Edition, Second Edition, etc, so you can point to which one you are looking at. The easily-edited books would have more of a continuous update process, so it would be harder to point to things.

5) Bad- When Clyde and I emailed the final version to the publisher we were almost done. When we got the galleys and commented on them we were DONE DONE! For typos and errors maybe I want to fix them online, but entire new sections--- when we're done we are DONE.

6) Bad- at what point is it (i) a corrected version of the old book, (ii) a new edition of the old book, (iii) an entirely new book? Life is complicated enough.

I would probably like a system where you can fix errors but can't add new material. Not sure if that's really a clean distinction.

by GASARCH ( at October 14, 2019 02:11 AM UTC

Google’s supremacy stone soup

Here is a little update on the Google supremacy claims that we discussed in this earlier post (see especially this remark). Don’t miss our previous post on combinatorics.

Recall that a quantum supremacy demonstration would be an experiment where a quantum computer can compute in 100 seconds something that requires a classical computer 10 hours. (Say.)

The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classical computer running for 1000 hours can compute something that requires a classical computer 10 hours. (So, of course, this has no computational value; the Google experiment is a sort of stone soup.)

The crucial mistake in the supremacy claims is that what the researchers believed was a calibration method improving the quality of the quantum computer was in reality a tuning of the device toward a specific statistical goal for a specific circuit.

Let me remark that the mathematics of this mistake seems rather interesting and I plan to come back to it (see at the end of the post for a brief tentative account), and also that while the supremacy claim is false, Google’s calibration method is an interesting piece of experimental physics.

I personally hope that in spite of what appears (to me) to be a major setback, Google will maintain and enhance its investments in quantum information research. We are still at a basic-science stage where we can expect the unexpected.

Detecting statistical flaws using statistics

Now, how can we statistically test such a flaw in a statistical experiment? This is also an interesting question, and it reminded me of the following legend (see also here (source of pictures below), here, and here) about Poincaré and the baker, which is often told in the context of using statistics for detection. I first heard it from Maya Bar-Hillel in the late 90s. Since this story never really happened, I tell it here a little differently. Poincaré did famously testify as an expert in a well-known trial, and his testimony was on matters related to statistics.

The story of Poincaré and his friend the baker

“My friend the baker,” said Poincaré, “I weighed every loaf of bread that I bought from you in the last year and the distribution is Gaussian with mean 950 grams. How can you claim that your average loaf is 1 kilogram?”

“You are so weird, dear Henri,” the baker replied, “but I will take what you say into consideration.”*

A year later the two pals met again.

“How are you doing, dear Henri?” asked the baker. “Are my bread loaves heavy enough for you?”

“Yes, for me they are,” answered Poincaré, “but when I weighed all the loaves last year I discovered that your mean value is still 950 grams.”

“How is this possible?” asked the baker

“I weighed your loaves all year long and I discovered that the weights represent a Gaussian distribution with mean 950 grams truncated at 1 kilogram. You make the same bread loaves as before but you keep the heavier ones for me!”

“Ha ha ha,” said the baker, “touché!”** And the baker continued: “I also have something that will surprise you, Henri. I think there is a gap in your proof that a 3-manifold with the homology of a sphere is a sphere. So if you don’t tell the bread police, I won’t tell the wrong-mathematical-proofs police :)” joked the baker.

The rest of the story is history, the baker continued to bake bread loaves with an average weight of 950 grams and Poincaré constructed his famous Dodecahedral sphere and formulated the Poincaré conjecture. The friendship of Poincaré and the baker continued for the rest of their lives.

*  “There are many bakeries in Paris,” thought the baker, “and every buyer can weigh the quality, weight, cost, and convenience.”

** While the conversation was originally in French, here, the French word touché is used in its English meaning.

The mathematics of Google’s calibration in a few sentences.

The ideal distribution D_C for a (frozen) random circuit C can be seen as exponentially distributed probabilities depending on C.

The first order effect of the noise is to replace D_C by a convex combination tD_C+(1-t)J with a uniform distribution J. (For low fidelity t is rather small.)

The second order effect of the noise is adding a Gaussian fluctuation described by Gaussian-distributed probabilities G_C. Like D_C, these probabilities also depend on the circuit C.

For low fidelity, as in our case, the calibration mainly works in the range where G_C is dominant, and it (slightly) “cancels” this Gaussian fluctuation. This does not calibrate the quantum computer but rather tweaks it toward the specific Gaussian contribution that depends on the circuit C.

Trivia question: what is the famous story in which a mathematician is described as objecting to children dreaming?

by Gil Kalai at October 13, 2019 12:43 PM UTC

Gérard Cornuéjols

Gérard Cornuéjols‘s beautiful (and freely available) book from 2000, Optimization: Packing and Covering, is about an important area of combinatorics, which is lovingly described in the preface to the book:

The integer programming models known as set packing and set covering have a wide range of applications, such as pattern recognition, plant location and airline crew scheduling. Sometimes, due to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integer, thus solving the problem. Sometimes, both the linear programming relaxation and its dual have integer optimal solutions. Under which conditions do such integrality properties hold? This question is of both theoretical and practical interest. Min-max theorems, polyhedral combinatorics and graph theory all come together in this rich area of discrete mathematics. In addition to min-max and polyhedral results, some of the deepest results in this area come in two flavors: “excluded minor” results and “decomposition” results. In these notes, we present several of these beautiful results. Three chapters cover min-max and polyhedral results. The next four cover excluded minor results. In the last three, we present decomposition results.

The last sentence of the preface gives this post some urgency:

In particular, we state 18 conjectures. For each of these conjectures, we offer $5000 as an incentive for the first correct solution or refutation before December 2020.

The book starts with Kőnig’s theorem, the first figure is the Petersen graph, and among the other mathematical heroes mentioned in the book are Edmonds, Johnson, Seymour, Lovász, Lehman, Camion, Tutte, and Truemper.

The title of this post refers to the baker’s dozen. In the 13th century, bakers found to have shortchanged customers could be liable to severe punishment, and to guard against losing a hand to an axe, a baker would give 13 for the price of 12, to be certain of not being known as a cheat (Wikipedia). In this post we mention a 19th problem for which Gérard offered 5000 dollars. (I am not sure whether there is a time limit for that problem. I am thankful to Maria Chudnovsky for telling me about it.)


What happened to the 18 problems in the list?

Perhaps the most difficult problem on the list was solved first: two of the problems were about perfect graphs and were settled with the solution of the strong perfect graph conjecture by Chudnovsky, Robertson, Seymour, and Thomas. Three of the problems were about balanced bipartite graphs; they were solved by Chudnovsky and Seymour in 2006. Conjecture 4.14 in Chapter 4 was solved by Jonathan Wang (2010). Thus 30,000 dollars have been collected, and 60,000 dollars are still offered (until December 2020).



Balanced bipartite graphs and the 19th problem.

Balanced bipartite graphs are, roughly, bipartite analogs of perfect graphs: they are bipartite graphs in which every induced cycle has length divisible by four. Gérard’s 19th prize-money problem is also about balanced bipartite graphs.

Conjecture: Let G be balanced. Then there is e \in E(G) such that G\backslash e is a balanced graph.

In other words every balanced bipartite graph contains an edge which is not a unique chord in any cycle.

This conjecture is Conjecture 5.20 in

M. Conforti, G. Cornuéjols, and K. Vušković, Balanced matrices.

In that paper, this conjecture is attributed to:

M. Conforti and M. R. Rao, “Structural properties and decomposition of linear balanced matrices”, Mathematical Programming 55 (1992), 129–168.

by Gil Kalai at October 13, 2019 05:43 AM UTC

This was the first Crimson editorial I can recall about the upcoming move computer science will be making to Allston next year. 

I'd encourage students (both undergraduate and graduate, but perhaps especially undergraduates -- there are more of you!) to find ways to make their concerns heard by the powers-that-be at Harvard, both before the move and, eventually, as issues with the Allston move arise. You could interpret this as cynicism: that I think Harvard will mess this up significantly in various ways. Or you could interpret this as optimism: that I think that, while of course problems will arise, Harvard will work to fix them when they are brought to its attention by enough people.

Under either interpretation, I think the students making their own voice heard will be very important to helping make this move a success.

by Michael Mitzenmacher at October 12, 2019 08:53 PM UTC

I was getting ready to start writing a paper proving that Hamiltonian decomposition and linear arboricity are both NP-complete for planar graphs of maximum degree four. But then I realized that there’s a trivial proof, based on known results:

  1. Finding a Hamiltonian cycle is NP-complete in 3-regular planar graphs.1

  2. A 3-regular graph has a Hamiltonian cycle if and only if its line graph has a Hamiltonian decomposition.2,3

  3. The line graph of a 3-regular planar graph (its medial graph) is a 4-regular planar graph.

  4. A 4-regular graph has a Hamiltonian decomposition if and only if the subgraph formed by removing an arbitrarily chosen single vertex has linear arboricity two.
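Facts 2 and 3 can be sanity-checked by brute force in the smallest case, K4, whose line graph is the octahedron. (This is just an illustrative check, not part of the reduction.)

```python
from itertools import combinations, permutations

# Vertices of the line graph are the 6 edges of K4; two are adjacent
# exactly when the corresponding K4 edges share an endpoint (this is
# the octahedron, the medial graph of K4).
k4_edges = list(combinations(range(4), 2))
adj = {e: {f for f in k4_edges if f != e and set(e) & set(f)} for e in k4_edges}
assert all(len(nbrs) == 4 for nbrs in adj.values())  # 4-regular, as fact 3 says

def hamiltonian_cycles(graph):
    """All Hamiltonian cycles of a small graph, as frozensets of undirected edges."""
    verts = list(graph)
    start, rest = verts[0], verts[1:]
    found = set()
    for perm in permutations(rest):
        tour = (start,) + perm
        if all(tour[(i + 1) % len(tour)] in graph[tour[i]] for i in range(len(tour))):
            found.add(frozenset(frozenset((tour[i], tour[(i + 1) % len(tour)]))
                                for i in range(len(tour))))
    return found

all_edges = {frozenset((u, v)) for u in adj for v in adj[u]}
cycles = hamiltonian_cycles(adj)
# A Hamiltonian decomposition = two edge-disjoint Hamiltonian cycles covering every edge.
decomps = [(a, b) for a in cycles for b in cycles
           if a.isdisjoint(b) and a | b == all_edges]
assert decomps  # K4 is Hamiltonian, so by fact 2 its line graph decomposes
```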

That’s not enough for a paper, even though the linear arboricity result resolves a conjecture of Cygan, Hou, Kowalik, Lužar, and Wu (2012).4 So instead I’ll just leave this here, together with two illustrations I already drew of the gadgets for my proof. It’s a reduction from monotone planar 3-sat, in which the variables are connected in a cycle in the plane, the 3-sat clauses with all variables positive are on one side of the cycle, and the 3-sat clauses with all variables negative are on the other side. Here “planar” means that if you represent each variable and clause as a vertex, draw edges from each variable to each clause that uses it, and also draw edges connecting consecutive variables in the cycle, the result is a planar graph.

My reduction adds a special “truth-setting gadget” to the cycle of variables. It represents the variables by regions of the plane, bounded by edges that are forced to belong to the same cycle in any Hamiltonian decomposition. The truth-setting gadget also forms regions of the plane, again bounded by edges that belong to the same cycle. The truth-setting regions extend through the clause gadgets so that they can reach the other clause gadgets on the other sides of them. A variable or negated variable is true if its region’s bounding edges are in the other cycle than the truth-setting gadget.

Here is the truth-setting gadget (the colored parts of the figure), and the (very simple) variable gadgets (gray shaded regions). Each variable gadget will have one loop of one color above the midline, and another loop of the opposite color below the midline.

Truth-setting and variable gadgets for a reduction from monotone planar 3-sat to Hamiltonian decomposition

And here is the clause gadget:

Clause gadget for a reduction from monotone planar 3-sat to Hamiltonian decomposition

The 4- and 8-vertex subunits in the light yellow disks force the partition into two parts of Hamiltonian cycles to be uniquely determined within each gadget, and the 5-vertex subunits between the variable and clause gadget allow the variable to either connect to the parts of the cycles within the clause gadget or to remain separate from them. To construct a Hamiltonian decomposition, you have to have at least one true variable per clause, so that the clause gadget can be connected to the other cycle than the one from the truth-setting gadget. The reduction produces a multigraph rather than a graph, but that’s easy to fix by replacing some of the vertices by wheels.

Maybe the same idea of having a truth-setting gadget that passes through the clause gadgets can be useful in some other reductions?

  1. Garey, M. R., Johnson, D. S., and Stockmeyer, L. (1974), “Some simplified NP-complete problems”, Proc. 6th ACM Symposium on Theory of Computing (STOC ‘74), pp. 47–63, doi:10.1145/800119.803884

  2. Kotzig, Anton (1957), “Aus der Theorie der endlichen regulären Graphen dritten und vierten Grades”, Časopis Pěst. Mat. 82: 76–92. As cited by Bondy and Häggkvist.5 

  3. Martin, Pierre (1976), “Cycles hamiltoniens dans les graphes 4-réguliers 4-connexes”, Aequationes Mathematicae 14 (1/2): 37–40, doi:10.1007/BF01836203

  4. Cygan, Marek, Hou, Jian-Feng, Kowalik, Łukasz, Lužar, Borut, and Wu, Jian-Liang (2012), “A planar linear arboricity conjecture”, Journal of Graph Theory 69 (4): 403–425, doi:10.1002/jgt.20592

  5. Bondy, J. A. and Häggkvist, R. (1981), “Edge-disjoint Hamilton cycles in 4-regular planar graphs”, Aequationes Mathematicae, 22 (1): 42–45, doi:10.1007/BF02190157


by David Eppstein at October 12, 2019 04:13 PM UTC

It has been six weeks since I moved to Milan, and I am not completely settled in yet.

For example, although, as of yesterday, I finally have working wired internet access in my place, I still do not have a bus card (obtaining the latter has been one of the most stubbornly intractable problems I have encountered) and all the stuff that I did not carry in two bags is still in transit in a container.

Meanwhile, the busyness of handling the move, getting settled, trying to get a bus card, and teaching two courses, has meant that I did not really have time to sit down with my thoughts and process my feelings about such a major life change. If people ask me what I miss about San Francisco I will, truthfully, say something like UberX, or Thai food, or getting a bus card from a vending machine, because I still have not had a chance to miss the bigger stuff. Similarly, this post will be about random small stuff.

Milan is lovely this time of the year. Unlike most other Italian cities, there is no well-preserved historical neighborhood: the city was devastated by WW2 bombings, and then rebuilt; in the center, majestic old buildings are scattered among generic 1960s era condos. Nonetheless, walking along the Naviglio Grande (a canal running roughly in an East-West direction flanked mostly by 100+ year old buildings) at sunset is quite beautiful.

Milan is one of the most expensive cities in Italy, but one can have an awesome sit-down multi-course lunch for 12 euros (tax included, no tip), or a sit-down coffee for one euro. In one of my new favorite dinner places, the old but energetic lady that runs the place comes to the table to offer a choice of a handful of first courses, a couple of second courses and a couple of desserts (no menu); last time I was there she charged 4 euros for two glasses of wine.

Bocconi keeps rolling out its strategy to expand into computing. After introducing an undergraduate degree in Economics, Management and Computer Science, and a masters degree in Data Science, next year will see the start of a new PhD Program in Statistics and Computer Science. The deadline to apply is in February, and I would be happy to talk to students interested in applying. The language of instruction, like for almost all degree programs at Bocconi, is English. A new undergraduate program on computing is in the final planning stages, and I will say more about it when it is official.

Between teaching two courses, getting settled, and trying to get a bus card, I have not had a lot of time for research, but the local physicists have been giving us a crash course on the replica method to study optimization problems on random instances. I plan, in the near future, to complete my series of posts on online optimization, and to write whatever I understand of the replica method. An example of what is going on:

physicist: “so the smallest eigenvalue of this n \times n matrix is {n \choose 2}, which is negative, and this is a problem because it means that the matrix is not PSD”
me: “wait, what?”
physicist: “don’t forget that we are interested in the limit n \rightarrow 0.”

Coming here, I thought that my command of the Italian language, which I haven’t spoken on a regular basis for a while, would be almost perfect, and this has been true, except that the “almost” part has come from unexpected directions.

For example, one can find a number of places serving real (and very good) Chinese food, which was not the case at the time I grew up in Italy. However, I have no idea how Chinese dishes are called in Italian, for the most part I don’t know how they are called in Chinese either, so I have to resort to the English menu, if available, or to guesswork. Quite reasonably, dumplings are called “ravioli” and noodles are called “pasta” or “spaghetti.” In some places, buns are called, also quite reasonably, “panini”. That dish that I like that is usually called, with much understatement, “boiled fish with preserved vegetables” in America? “Zuppa di pesce”. How do you say “bok choy”? I am still not sure.

In Italian, like in French, you can address someone in a formal or an informal way (in Italian the formal way is to use the third person) and the subtle cues of when to use which form have shifted a little with time, so I have been feeling a bit confused and I have mostly been following other people’s lead.

Bocconi is an exceptionally well run institution. This is true in many ways, from maintenance and cleaning all the way to campus strategic planning, but one of the things that most impressed me, a luxury you do not see on American campuses any more, is that they have secretaries. Not directors of operations, not grant administrators, not coordinators of this or that with lots of underlings, but people that you can go talk to, describe a problem you are having or something you need, and then walk away and forget about it, because they will take care of it, with lots of common sense and professionalism. This is amazing! The cognitive load of worrying about lots of things other than teaching and research is something that it feels just wonderful not to have.

Despite this, the Bocconi administration operates on a shoestring budget compared to American universities. How does it do it? I am collecting figures and I plan to write a separate post about it. If only the Bocconi administration were in charge of issuing bus cards for the city!

It should be noted that, while it has a lot of autonomy as a private university, Bocconi is not a sovereign entity and it is subject to Italian law. For example, I recently had to see two doctors who had to certify that I had no medical condition that would make it dangerous for me to be a professor, and the procedure to hire postdocs is a bit Byzantine (but the secretaries take care of most of it!).

In conclusion, I had some very intense, very exciting, and very rewarding six weeks and, while I do not yet have a bus card, I have a medical certificate that I can teach and do research. Now I am off to eat some Chinese ravioli and maybe a Chinese panino.

by luca at October 12, 2019 01:27 PM UTC

With a short solution that was hard for me to find.

[ Royal Society ]

Sir Timothy Gowers is a Fields medalist and fellow blogger. Sometimes he (too) writes about simple topics.

Today I would like to talk about a simple problem that came up recently.

The problem is a simple to state “obvious fact”. The reason I thought you might be interested is that I had a tough time finding the solution. I hope you find the explanation below interesting.

The general issue of proving obviously true statements is discussed here for example. Here is an example from Gowers:

Let {I_{1},I_{2},\dots } be intervals of real numbers whose lengths sum to less than {1}; then their union cannot be all of {[0,1]}.

He says:

It is quite common for people to think this statement is more obvious than it actually is. (The “proof” is this: just translate the intervals so that the end point of {I_{1}} is the beginning point of {I_{2}}, and so on, and that will clearly maximize the length of interval you can cover. The problem is that this argument works just as well in the rationals, where the conclusion is false.)

The Problem’s Statement

The following simple problem came up the other day. Suppose that {t} is an odd number. Show there is some {x} so that

\displaystyle  (x,t) = 1 \text{ and } (4x+1,t) = 1.

Here {(a, b)} is the gcd, the greatest common divisor of {a} and {b}. For example,

\displaystyle  (21, 14) = 7.

This result seems totally obvious, must be true, but I had trouble finding a proof.

The Problem’s Cousin

There is an unproved conjecture in number theory that says: there are infinitely many {x} so that both {x} and {4x+1} are prime. If true, this would immediately show that there is an {x} for our problem.

I like conjectures like this since they give you an immediate insight that a statement is likely to be true. But we would like a proof that does not use any unproved conjectures. Our problem can be viewed as a poor version of some of these conjectures. Suppose that you have a conjecture that there are an infinite number of {x} so that

\displaystyle  f_{1}(x),f_{2}(x),\dots,f_{m}(x),

are all prime for some given functions {f_{k}(x)}. Then the poor version is to prove that there are {x} so that these numbers are all relatively prime to some given {t}. There are some partial results to the prime version by Ben Green and Terence Tao.

The Problem’s Failed Proofs

My initial idea was to try to set {x} to something like {ty + 1}. The point is that this always satisfies the first constraint: that is {(ty+1, t)=1} for any {y}. Then I planned to try and show there must be some {y} that satisfies the second constraint. Thus the goal is to prove there is some {y} so that

\displaystyle  (4(ty+1)+1, t) = 1.

But this is false. Note that if {5} divides {t} then

\displaystyle  4(ty+1)+1 = 4ty + 5

and so {4x+1} is always divisible by {5}. Oops.
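A one-loop check of this failure, just illustrating the computation above for the case {t=5}:

```python
from math import gcd

# With 5 | t, the candidate x = t*y + 1 satisfies the first constraint
# but never the second: 4x + 1 = 4ty + 5 is always a multiple of 5.
t = 5
for y in range(100):
    x = t * y + 1
    assert gcd(x, t) == 1          # (x, t) = 1 always holds
    assert (4 * x + 1) % 5 == 0    # but 5 | 4x + 1 for every y
```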

My next idea was to set {x} to a more “clever” value. I tried

\displaystyle  x = ty + d.

Here I thought that I could make {d} special and control the situation. Now

\displaystyle  4(ty+d)+1 = 4ty + 4d+1.

This looked promising. I then said to myself: why not make {4d+1} a large prime {p}? Then clearly

\displaystyle  4x + 1 = (4t)y + p.

Since {4t} and {p} are relatively prime, by Dirichlet’s famous theorem on arithmetic progressions we could make {4x+1} a prime too by selecting {y}. This would satisfy the second constraint, and we would be done.

Not quite. The trouble is that we need to have also that

\displaystyle  (x,t) = 1.

Now this is

\displaystyle  (ty + d, t) = 1.

The trouble is that {d} might not be relatively prime to {t}. So we could just {\dots} This seems like a recursion and I realized that it might not work.

The Problem’s Solution

I finally found a solution thanks to Delta Airlines. My dear wife, Kathryn Farley, and I were stuck in DC for several hours waiting for our delayed flight home. This time was needed for me to find a solution.

The key for me was to think about the value {t}. It is usually a good idea to look at the simplest case first. So suppose that {t=1}, then clearly the constraints

\displaystyle  (x,t) = 1 \text{ and } (4x+1,t) = 1,

are now trivial. The next simplest case seems to be when {t} is a prime. Let’s try {t=3}. Now {x=1} works. Let’s generalize this to any prime {t>3}. The trick is to set {x} so that

\displaystyle  x \equiv 2 \bmod t.

Then {4x+1} is equal to {9} modulo {t}, which is not divisible by {t}. This shows that when {t} is an odd prime there is always some {x}.

Okay, how do we get the full result? What if {t} is the product of several primes? The Chinese remainder theorem to the rescue. Suppose that {t} is divisible by the distinct odd primes {p_{1},p_{2}, \dots, p_{m}}. We can easily see that we do not care if there are repeated factors, since repetitions cannot change the relative-primality constraints.

Then we constrain {x} by:

\displaystyle  x \equiv 1 \bmod 3


\displaystyle  x \equiv 2 \bmod p_{k}

for all primes {p_{k}>3}. Then the Chinese remainder theorem proves there is some {x}. Done.
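The construction is easy to sanity-check by machine. Here is a short sketch (the helper find_x is mine, not from any reference; it finds the CRT solution by linear search rather than by an explicit CRT computation):

```python
from math import gcd

def odd_prime_factors(t):
    """Distinct odd prime factors of an odd t, by trial division."""
    primes, m, d = [], t, 3
    while d * d <= m:
        if m % d == 0:
            primes.append(d)
            while m % d == 0:
                m //= d
        d += 2
    if m > 1:
        primes.append(m)
    return primes

def find_x(t):
    """For odd t, return x with gcd(x, t) = 1 and gcd(4x + 1, t) = 1,
    following the recipe above: x = 1 (mod 3), x = 2 (mod p) for primes p > 3."""
    primes = odd_prime_factors(t)
    x = 1
    while True:  # the CRT guarantees a solution below the product of the primes
        if all(x % p == (1 if p == 3 else 2) for p in primes):
            return x
        x += 1

# Sanity check over all odd t below 500
for t in range(1, 500, 2):
    x = find_x(t)
    assert gcd(x, t) == 1 and gcd(4 * x + 1, t) == 1
```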

Open Problems

Is there a one-line proof of the problem? Do you know any references? There are several obvious generalizations of this simple problem; perhaps someone might look into them.

by rjlipton at October 11, 2019 04:53 PM UTC