The Euler International Mathematical Institute in St. Petersburg is seeking
postdocs in Math, TCS, and Mathematical and Theoretical Physics. St. Petersburg is the most beautiful city in the world and has multiple mathematical locations, including the Steklov Institute of Mathematics and the Dept. of Mathematics and CS at St. Petersburg State Univ. Preference is given to applications sent before 11/30/19.
Website: http://math-cs.spbu.ru/en/news/news-2019-10-22/
Email: euler.postdoc@gmail.com
Authors: Ulrich Bauer, Abhishek Rathod, Jonathan Spreer
Download: PDF
Abstract: Deciding whether two simplicial complexes are homotopy equivalent is a
fundamental problem in topology, which is famously undecidable. There exists a
combinatorial refinement of this concept, called simple-homotopy equivalence:
two simplicial complexes are of the same simple-homotopy type if they can be
transformed into each other by a sequence of two basic homotopy equivalences,
an elementary collapse and its inverse, an elementary expansion. In this
article we consider the following related problem: given a 2-dimensional
simplicial complex, is there a simple-homotopy equivalence to a 1-dimensional
simplicial complex using at most p expansions? We show that this problem, which
we call computing the erasability expansion height, is W[P]-complete in the
natural parameter p.
Authors: N. V. Kitov, M. V. Volkov
Download: PDF
Abstract: Kauffman monoids $\mathcal{K}_n$ and Jones monoids $\mathcal{J}_n$,
$n=2,3,\dots$, are two families of monoids relevant in knot theory. We prove a
somewhat counterintuitive result that the Kauffman monoids $\mathcal{K}_3$ and
$\mathcal{K}_4$ satisfy exactly the same identities. This leads to a polynomial
time algorithm to check whether a given identity holds in $\mathcal{K}_4$. As a
byproduct, we also find a polynomial time algorithm for checking identities in
the Jones monoid $\mathcal{J}_4$.
Authors: Zhenwei Dai, Anshumali Shrivastava
Download: PDF
Abstract: Recent work suggests improving the performance of Bloom filters by
incorporating a machine learning model as a binary classifier. However, such a
learned Bloom filter does not take full advantage of the predicted probability
scores. We propose new algorithms that generalize the learned Bloom filter by
using the complete spectrum of the score regions. We prove that our algorithms
have a lower False Positive Rate (FPR) and memory usage compared with the
existing approaches to learned Bloom filters. We also demonstrate the improved
performance of our algorithms on real-world datasets.
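The score-region idea can be sketched as follows. This is a minimal illustration of partitioning classifier scores into regions with their own backup filters, not the authors' exact construction; the class names, thresholds, and per-region sizing are assumptions made for the example.

```javascript
// Hypothetical sketch of a score-partitioned learned Bloom filter.
// A classifier score in [0, 1] routes each key either to automatic
// acceptance (highest region) or to a backup Bloom filter for its
// score region; lower-scoring regions get more hash functions.

class BloomFilter {
  constructor(bits, numHashes) {
    this.bits = new Uint8Array(bits);
    this.numHashes = numHashes;
  }
  // Simple seeded string hash; fine for a sketch, not for production.
  hash(key, seed) {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < key.length; i++) {
      h = Math.imul(h ^ key.charCodeAt(i), 16777619);
    }
    return (h >>> 0) % this.bits.length;
  }
  add(key) {
    for (let s = 0; s < this.numHashes; s++) this.bits[this.hash(key, s)] = 1;
  }
  mightContain(key) {
    for (let s = 0; s < this.numHashes; s++) {
      if (!this.bits[this.hash(key, s)]) return false;
    }
    return true;
  }
}

class ScoreRegionBloomFilter {
  // thresholds: ascending score boundaries, e.g. [0.3, 0.7, 0.95];
  // scores at or above the last threshold are accepted with no filter.
  constructor(score, thresholds, bitsPerRegion) {
    this.score = score; // score(key) -> [0, 1], the learned model
    this.thresholds = thresholds;
    // Lower-score regions get more hash functions (tighter filters).
    this.filters = thresholds.map(
      (_, i) => new BloomFilter(bitsPerRegion, thresholds.length - i + 1)
    );
  }
  region(key) {
    const s = this.score(key);
    let r = 0;
    while (r < this.thresholds.length && s >= this.thresholds[r]) r++;
    return r;
  }
  add(key) {
    const r = this.region(key);
    if (r < this.filters.length) this.filters[r].add(key);
    // Keys in the top region rely on the model alone.
  }
  mightContain(key) {
    const r = this.region(key);
    if (r >= this.filters.length) return true; // top region: model says yes
    return this.filters[r].mightContain(key);
  }
}
```

As with any Bloom filter, there are no false negatives within a region; the gain comes from spending fewer bits on keys the model is already confident about.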
Authors: Aram Harrow, Saeed Mehraban, Mehdi Soleimanifar
Download: PDF
Abstract: In this paper, we present a quasi-polynomial time classical algorithm that
estimates the partition function of quantum many-body systems at temperatures
above the thermal phase transition point. It is known that in the worst case,
the same problem is NP-hard below this point. Together with our work, this
shows that the transition in the phase of a quantum system is also accompanied
by a transition in the hardness of approximation. We also show that in a system
of n particles above the phase transition point, the correlation between two
observables whose distance is at least log(n) decays exponentially. We can
improve the factor of log(n) to a constant when the Hamiltonian has commuting
terms or is on a 1D chain. The key to our results is a characterization of the
phase transition and the critical behavior of the system in terms of the
complex zeros of the partition function. Our work extends a seminal work of
Dobrushin and Shlosman on the equivalence between the decay of correlations and
the analyticity of the free energy in classical spin models. On the algorithmic
side, our result extends the scope of a recent approach due to Barvinok for
solving classical counting problems to quantum many-body systems.
Authors: Jacob Holm, Eva Rotenberg
Download: PDF
Abstract: We show that every labelled planar graph $G$ can be assigned a canonical
embedding $\phi(G)$, such that for any planar $G'$ that differs from $G$ by the
insertion or deletion of one edge, the number of local changes to the
combinatorial embedding needed to get from $\phi(G)$ to $\phi(G')$ is $O(\log
n)$.
In contrast, there exist embedded graphs where $\Omega(n)$ changes are necessary to accommodate one inserted edge. We provide a matching lower bound of $\Omega(\log n)$ local changes, and although our upper bound is worst-case, our lower bound holds in the amortized case as well.
Our proof is based on BC trees and SPQR trees, and we develop \emph{pre-split} variants of these for general graphs, based on a novel biased heavy-path decomposition, where the structural changes corresponding to edge insertions and deletions in the underlying graph consist of at most $O(\log n)$ basic operations of a particularly simple form.
As a secondary result, we show how to maintain the pre-split trees under edge insertions in the underlying graph deterministically in worst case $O(\log^3 n)$ time. Using this, we obtain deterministic data structures for incremental planarity testing, incremental planar embedding, and incremental triconnectivity, that each have worst case $O(\log^3 n)$ update and query time, answering an open question by La Poutr\'e and Westbrook from 1998.
Authors: Zhiyang Dou, Shiqing Xin, Rui Xu, Jian Xu, Yuanfeng Zhou, Shuangmin Chen, Wenping Wang, Xiuyang Zhao, Changhe Tu
Download: PDF
Abstract: Motivated by the fact that the medial axis transform is able to encode nearly
the complete shape, we propose to use as few medial balls as possible to
approximate the original enclosed volume by the boundary surface. We
progressively select new medial balls, in a top-down style, to enlarge the
region spanned by the existing medial balls. The key spirit of the selection
strategy is to encourage large medial balls while imposing given geometric
constraints. We further propose a speedup technique based on a provable
observation that the intersection of medial balls implies the adjacency of
power cells (in the sense of the power crust). We further elaborate the
selection rules in combination with two closely related applications. One
application is to develop an easy-to-use ball-stick modeling system that helps
non-professional users to quickly build a shape with only balls and wires, but
any penetration between two medial balls must be suppressed. The other
application is to generate porous structures with convex, compact (with a high
isoperimetric quotient) and shape-aware pores where two adjacent spherical
pores may have penetration as long as the mechanical rigidity can be well
preserved.
Authors: Jeremiah Blocki, Mike Cinkoske
Download: PDF
Abstract: We create a graph reduction that transforms an $(e, d)$-edge-depth-robust
graph with $m$ edges into a $(e/4,d)$-depth-robust graph with $O(m)$ nodes and
constant indegree. An $(e,d)$-depth robust graph is a directed, acyclic graph
with the property that after removing any $e$ nodes of the graph there
remains a path with length at least $d$. Similarly, an $(e, d)$-edge-depth
robust graph is a directed, acyclic graph with the property that after removing
any $e$ edges of the graph there remains a path with length at least $d$. Our
reduction relies on constructing graphs with a property we define and analyze
called ST-Robustness. We say that a directed, acyclic graph with $n$ inputs and
$n$ outputs is $(k_1, k_2)$-ST-Robust if we can remove any $k_1$ nodes and
there exists a subgraph containing at least $k_2$ inputs and $k_2$ outputs such
that each of the $k_2$ inputs is connected to all of the $k_2$ outputs. We use
our reduction on a well known edge-depth-robust graph to construct an $(\frac{n
\log \log n}{\log n}, \frac{n}{\log n (\log n)^{\log \log n}})$-depth-robust
graph.
Authors: Tomer Kotek, Johann A. Makowsky
Download: PDF
Abstract: This is a survey on the exact complexity of computing the Tutte polynomial.
It is the longer 2017 version of Chapter 25 of the CRC Handbook on the Tutte
polynomial and related topics, edited by J. Ellis-Monaghan and I. Moffatt,
which is due to appear in the first quarter of 2020. In the version to be
published in the Handbook the Sections 5 and 6 are shortened and made into a
single section.
Authors: Anand Louis, Rakesh Venkat
Download: PDF
Abstract: Graph partitioning problems are a central topic of study in algorithms and
complexity theory. Edge expansion and vertex expansion, two popular graph
partitioning objectives, seek a $2$-partition of the vertex set of the graph
that minimizes the considered objective. However, for many natural
applications, one might require a graph to be partitioned into $k$ parts, for
some $k \geq 2$. For a $k$-partition $S_1, \ldots, S_k$ of the vertex set of a
graph $G = (V,E)$, the $k$-way edge expansion (resp. vertex expansion) of
$\{S_1, \ldots, S_k\}$ is defined as $\max_{i \in [k]} \Phi(S_i)$, and the
balanced $k$-way edge expansion (resp. vertex expansion) of $G$ is defined as
\[ \min_{ \{S_1, \ldots, S_k\} \in \mathcal{P}_k} \max_{i \in [k]} \Phi(S_i) \,
, \] where $\mathcal{P}_k$ is the set of all balanced $k$-partitions of $V$
(i.e., each part of a $k$-partition in $\mathcal{P}_k$ should have cardinality
$|V|/k$), and $\Phi(S)$ denotes the edge expansion (resp. vertex expansion) of
$S \subset V$. We study a natural planted model for graphs where the vertex set
of a graph has a $k$-partition $S_1, \ldots, S_k$ such that the graph induced
on each $S_i$ has large expansion, but each $S_i$ has small edge expansion
(resp. vertex expansion) in the graph. We give bi-criteria approximation
algorithms for computing the balanced $k$-way edge expansion (resp. vertex
expansion) of instances in this planted model.
Authors: Eunjin Oh, Hee-Kap Ahn
Download: PDF
Abstract: We study the following range searching problem: Preprocess a set $P$ of $n$
points in the plane with respect to a set $\mathcal{O}$ of $k$ orientations in
the plane, for a constant $k$, so that given an $\mathcal{O}$-oriented convex
polygon $Q$, the convex hull of $P\cap Q$ can be computed efficiently, where an
$\mathcal{O}$-oriented polygon is a polygon whose edges have orientations in
$\mathcal{O}$. We present a data structure with $O(nk^3\log^2n)$ space and
$O(nk^3\log^2n)$ construction time, and an $O(h+s\log^2 n)$-time query
algorithm for any query $\mathcal{O}$-oriented convex $s$-gon $Q$, where $h$ is
the complexity of the convex hull.
Also, we can compute the perimeter or area of the convex hull of $P\cap Q$ in $O(s\log^2n)$ time using the data structure.
Authors: Ibrahim Jubran, Alaa Maalouf, Dan Feldman
Download: PDF
Abstract: A coreset (or core-set) of an input set is a small summary of it, such that
solving a problem on the coreset as its input, provably yields the same result
as solving the same problem on the original (full) set, for a given family of
problems (models, classifiers, loss functions). Over the past decade, coreset
construction algorithms have been suggested for many fundamental problems in
e.g. machine/deep learning, computer vision, graphics, databases, and
theoretical computer science. This introductory paper was written following
requests from readers (usually non-experts, but also colleagues) regarding the
many inconsistent coreset definitions, the lack of available source code, the
required deep theoretical background from different fields, and the dense
papers that make it hard for beginners to apply coresets and develop new ones.
The paper provides folklore, classic and simple results including step-by-step proofs and figures, for the simplest (accurate) coresets of very basic problems, such as: sum of vectors, minimum enclosing ball, SVD/ PCA and linear regression. Nevertheless, we did not find most of their constructions in the literature. Moreover, we expect that putting them together in a retrospective context would help the reader to grasp modern results that usually extend and generalize these fundamental observations. Experts might appreciate the unified notation and comparison table that links between existing results.
Open source code with example scripts is provided for all the presented algorithms, to demonstrate their practical usage and to support readers who are more familiar with programming than math.
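The "sum of vectors" problem mentioned in this abstract admits the simplest possible accurate coreset, and it is worth seeing why. By linearity, a single weighted point (the mean with weight $n$) reproduces the sum of inner products with any query vector exactly. The sketch below is an illustration of that folklore fact; the function names are mine, not from the paper.

```javascript
// Accurate coreset for the sum-of-vectors problem: one weighted point.
// By linearity, sum_i <x_i, q> = n * <mean, q> for every query vector q.

function sumCoreset(points) {
  const n = points.length;
  const d = points[0].length;
  const mean = new Array(d).fill(0);
  for (const p of points) {
    for (let j = 0; j < d; j++) mean[j] += p[j] / n;
  }
  return { point: mean, weight: n }; // a coreset of size one
}

function dot(a, b) {
  return a.reduce((s, ai, i) => s + ai * b[i], 0);
}

// Cost on the full set vs. cost on the coreset, for a query q.
function fullCost(points, q) {
  return points.reduce((s, p) => s + dot(p, q), 0);
}
function coresetCost(coreset, q) {
  return coreset.weight * dot(coreset.point, q);
}
```

The two costs agree (up to floating-point error) for every query, which is what "accurate" means here; the harder problems in the paper replace exact equality with a $(1\pm\varepsilon)$ guarantee.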
Authors: Yujin Choi, Seungjun Lee, Hee-Kap Ahn
Download: PDF
Abstract: We study the problem of finding maximum-area rectangles contained in a
polygon in the plane. There has been a fair amount of work for this problem
when the rectangles have to be axis-aligned or when the polygon is convex. We
consider this problem in a simple polygon with $n$ vertices, possibly with
holes, and with no restriction on the orientation of the rectangles. We present
an algorithm that computes a maximum-area rectangle in $O(n^3\log n)$ time
using $O(kn^2)$ space, where $k$ is the number of reflex vertices of $P$. Our
algorithm can report all maximum-area rectangles in the same time using
$O(n^3)$ space.
We also present a simple algorithm that finds a maximum-area rectangle contained in a convex polygon with $n$ vertices in $O(n^3)$ time using $O(n)$ space.
Authors: Nesreen K. Ahmed, Nick Duffield, Ryan A. Rossi
Download: PDF
Abstract: Temporal networks representing a stream of timestamped edges are seemingly
ubiquitous in the real-world. However, the massive size and continuous nature
of these networks make them fundamentally challenging to analyze and leverage
for descriptive and predictive modeling tasks. In this work, we propose a
general framework for temporal network sampling with unbiased estimation. We
develop online, single-pass sampling algorithms and unbiased estimators for
temporal network sampling. The proposed algorithms enable fast, accurate, and
memory-efficient statistical estimation of temporal network patterns and
properties. In addition, we propose a temporally decaying sampling algorithm
with unbiased estimators for studying networks that evolve in continuous time,
where the strength of links is a function of time, and the motif patterns are
temporally weighted. In contrast to the prior notion of a $\Delta
t$-temporal motif, the proposed formulation and algorithms for counting
temporally weighted motifs are useful for forecasting tasks in networks such as
predicting future links, or a future time-series variable of nodes and links.
Finally, extensive experiments on a variety of temporal networks from different
domains demonstrate the effectiveness of the proposed algorithms.
Authors: Titus Dose
Download: PDF
Abstract: We build on a working program initiated by Pudl\'ak [Pud17] and construct an
oracle relative to which each set in $\mathrm{coNP}$ has $\mathrm{P}$-optimal
proof systems and $\mathrm{NP}\cap\mathrm{coNP}$ does not have complete
problems.
Our 1977 paper on the role of formal methods
Harry Lewis is known for his research in mathematical logic, and for his wonderful contributions to teaching. He had two students that you may have heard of before: a Bill Gates and a Mark Zuckerberg.
Today I wish to talk about a recent request from Harry about a book that he is editing.
The book is the “Classic Papers of CS” based on a course that he has been teaching for years. It will contain 46 papers with short introductions by Harry. My paper from 1977 with Alan Perlis and Rich DeMillo will be included. The paper is “Social Processes and Proofs of Theorems and Programs”.
Harry says that “A valued colleague believes this paper displays such polemical overreach that it should not appear in this collection”. I hope that it does still appear anyway. Harry goes on to say
And though verification techniques are widely used today for hardware designs, formal verification of large software systems is still a rarity.
Indeed.
I have mixed feelings about our paper, which is now getting close to fifty years old. I believe we had some good points to make then. And that these are still relevant today. Our paper starts with:
Many people have argued that computer programming should strive to become more like mathematics. Maybe so, but not in the way they seem to think.
Our point was just this: Proofs in mathematics are not just formal arguments that show that a theorem is correct. They are much more. They must show why and how something is true. They must explain and extend our understanding of why something is true. They must do more than just demonstrate that something is correct.
They must also make it clear what they claim to prove. A difficulty we felt, then, was that care must be given to what one is claiming to prove. In mathematics often what is being proved is simple to state. In practice that is less clear. A long complex statement may not correctly capture what one is trying to prove.
Who proves that the specification is correct?
I have often wondered why some do not see this point. That proofs are more than “correctness checks”. I thought I would list some “proofs” of this point.
The great Carl Gauss gave the first proof of the law of quadratic reciprocity. He later published six more proofs, and two more were found in his posthumous papers. There are now over two hundred published proofs.
So much for saying that a proof is just a check.
Thomas Hales solved the Kepler conjecture on sphere packing in three-dimensional Euclidean space. He faced comments that his proof might not be completely certain, so he used formal methods to get a formal proof.
Maryna Viazovska solved the related problem in eight dimensions. Her proof is here. The excitement of this packing result is striking compared with Hales’s result. No need for correctness checks in her proof.
Henry Cohn says here:
One measure of the complexity of a proof is how long it takes the community to digest it. By this standard, Viazovska’s proof is remarkably simple. It was understood by a number of people within a few days of her arXiv posting, and within a week it led to further progress: Abhinav Kumar, Stephen Miller, Danylo Radchenko, and I worked with Viazovska to adapt her methods to prove that the Leech lattice is an optimal sphere packing in twenty-four dimensions. This is the only other case above three dimensions in which the sphere packing problem has been solved.
So, a great proof is a proof that helps create new proofs of something else. Okay, a clumsy way to say it. What I mean is that a great proof is one that enables new insights, that enables further progress, that advances the field. Not just a result that “checks” for correctness.
A proof of the famous ABC conjecture of Joseph Oesterlé and David Masser has been claimed by Shinichi Mochizuki. Arguments continue about his proof. Peter Scholze and Jakob Stix believe his proof is flawed and unfixable. Mochizuki claims they are wrong.
Will a formal proof solve this impasse? Perhaps not. A proof that explains why it is true might. A proof that advances number theory elsewhere might; a proof that could solve other problems likely would.
What do you think about the role of proofs? Did we miss the point years ago?
Will formal verification become effective in the near future? And when it does, will it help provide explanations? We note this recent discussion of a presentation by Kevin Buzzard of Imperial College, London, and a one-day workshop on “The Mechanization of Math” which took place two weeks ago in New York City.
This year we have a targeted search in all areas of quantum computing, with a particular emphasis on quantum algorithms and quantum complexity theory. Candidates interested in a faculty position should apply here.
Authors: Shaohua Li, Marcin Pilipczuk, Manuel Sorge
Download: PDF
Abstract: Given a graph $G=(V,E)$ and an integer $k$, the Cluster Editing problem asks
whether we can transform $G$ into a union of vertex-disjoint cliques by at most
$k$ modifications (edge deletions or insertions). In this paper, we study the
following variant of Cluster Editing. We are given a graph $G=(V,E)$, a packing
$\mathcal{H}$ of modification-disjoint induced $P_3$s (no pair of $P_3$s in
$\cal H$ share an edge or non-edge) and an integer $\ell$. The task is to
decide whether $G$ can be transformed into a union of vertex-disjoint cliques
by at most $\ell+|\cal H|$ modifications (edge deletions or insertions). We
show that this problem is NP-hard even when $\ell=0$ (in which case the problem
asks to turn $G$ into a disjoint union of cliques by performing exactly one
edge deletion or insertion per element of $\cal H$). This answers negatively a
question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated
by Komusiewicz at Shonan meeting no. 144 in March 2019.
Authors: Sarah Tymochko, Elizabeth Munch, Firas A. Khasawneh
Download: PDF
Abstract: As the field of Topological Data Analysis continues to show success in theory
and in applications, there has been increasing interest in using tools from
this field with methods for machine learning. Using persistent homology,
specifically persistence diagrams, as inputs to machine learning techniques
requires some mathematical creativity. The space of persistence diagrams does
not have the desirable properties for machine learning, thus methods such as
kernel methods and vectorization methods have been developed. One such
featurization of persistence diagrams by Perea, Munch and Khasawneh uses
continuous, compactly supported functions, referred to as "template functions,"
which results in a stable vector representation of the persistence diagram. In
this paper, we provide a method of adaptively partitioning persistence diagrams
to improve these featurizations based on localized information in the diagrams.
Additionally, we provide a framework to adaptively select parameters required
for the template functions in order to best utilize the partitioning method. We
present results for application to example data sets comparing classification
results between template function featurizations with and without partitioning,
in addition to other methods from the literature.
Authors: Sam Buss, Anupam Das, Alexander Knop
Download: PDF
Abstract: This paper studies propositional proof systems in which lines are sequents of
decision trees or branching programs - deterministic and nondeterministic. The
systems LDT and LNDT are propositional proof systems in which lines represent
deterministic or non-deterministic decision trees. Branching programs are
modeled as decision dags. Adding extension to LDT and LNDT gives systems eLDT
and eLNDT in which lines represent deterministic and non-deterministic
branching programs, respectively.
Deterministic and non-deterministic branching programs correspond to log-space (L) and nondeterministic log-space (NL). Thus the systems eLDT and eLNDT are propositional proof systems that reason with (nonuniform) L and NL properties.
The main results of the paper are simulation and non-simulation results for tree-like and dag-like proofs in the systems LDT, LNDT, eLDT, and eLNDT. These systems are also compared with Frege systems, constant-depth Frege systems, and extended Frege systems.
Authors: Muhammad Irfan Yousuf, Raheel Anwar
Download: PDF
Abstract: Graph Sampling provides an efficient yet inexpensive solution for analyzing
large graphs. While extracting small representative subgraphs from large
graphs, the challenge is to capture the properties of the original graph.
Several sampling algorithms have been proposed in previous studies, but they
often fail to extract good samples. In this paper, we propose a new sampling
method called Weighted Edge Sampling. In this method, we give equal weight to
all the edges in the beginning. During the sampling process, we sample an edge
with probability proportional to its weight. When an edge is sampled, we
increase the weight of its neighboring edges, which increases their
probability of being sampled. Our method extracts the neighborhood of a sampled
edge more efficiently than previous approaches. We evaluate the efficacy of our
sampling approach empirically using several real-world data sets and compare it
with some of the previous approaches. We find that our method produces samples
that better match the original graphs. We also calculate the Root Mean Square
Error and Kolmogorov Smirnov distance to compare the results quantitatively.
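The sampling loop described in this abstract (equal initial weights, weight-proportional draws, boosting edges adjacent to a sampled edge) can be sketched directly. This is an illustration of the stated idea only; the boost factor and function names are assumptions, not the paper's tuned parameters.

```javascript
// Hypothetical sketch of weighted edge sampling. Edges start with equal
// weight; each draw is proportional to weight; sampling an edge boosts
// the weight of the unsampled edges that share an endpoint with it.

function weightedEdgeSample(edges, sampleSize, boost = 2.0, rand = Math.random) {
  const weights = edges.map(() => 1.0);
  const sampled = new Set();
  while (sampled.size < sampleSize && sampled.size < edges.length) {
    // Draw an unsampled edge index with probability proportional to weight.
    let total = 0;
    for (let i = 0; i < edges.length; i++) {
      if (!sampled.has(i)) total += weights[i];
    }
    let r = rand() * total;
    let pick = -1;
    for (let i = 0; i < edges.length; i++) {
      if (sampled.has(i)) continue;
      r -= weights[i];
      if (r <= 0) { pick = i; break; }
    }
    // Guard against floating-point underflow in the subtraction loop.
    if (pick < 0) pick = edges.findIndex((_, i) => !sampled.has(i));
    sampled.add(pick);
    // Boost edges adjacent to the sampled edge (sharing an endpoint).
    const [u, v] = edges[pick];
    for (let i = 0; i < edges.length; i++) {
      if (sampled.has(i)) continue;
      const [a, b] = edges[i];
      if (a === u || a === v || b === u || b === v) weights[i] *= boost;
    }
  }
  return [...sampled].map((i) => edges[i]);
}
```

The boost step is what biases the sample toward the neighborhood of already-sampled edges, which is the property the abstract credits for better-matching samples.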
The Johns Hopkins University Department of Computer Science seeks applicants for tenure-track faculty positions at all levels and across all areas of computer science. The department will consider offers in two tracks: (1) an open track seeking excellent candidates across all areas of computer science; and (2) a track seeking candidates in the area of human-computer interaction (HCI).
Website: http://www.cs.jhu.edu/about/employment-opportunities/
Email: mdinitz@cs.jhu.edu
Like many UC Irvine faculty I live in University Hills, a faculty housing complex associated with UC Irvine. It’s a great place to live: the prices are significantly lower than the surrounding area, I like my neighbors, and I love living so close to my office (ten minutes by foot) that I can walk to work instead of having to deal with the twin headaches of Southern California traffic and university parking.
Because it’s so convenient for walking, University Hills is filled with footpaths, many of which pass through greenbelts instead of running alongside the roads. The main footpath leading to the campus from the neighborhood heads towards a building designed in the shape of a giant arch, with the intent of providing a gateway into the central campus. Because the building is part of the engineering school, it’s called the Engineering Gateway. Here it is from the campus side:
It looks inviting, but you wouldn’t know from this view that it’s now a dead end. Here’s a view from the other side, from the end of the footpath that used to connect to it via a crosswalk across the ring road around campus. The crosswalk has been ripped out, replaced by a fence, and planted with ivy to discourage anyone from crossing that way.
Instead, the path has been rerouted to dump you onto the ring road, where a little farther along there’s a new replacement crosswalk. You can get into the campus by crossing there and following a service road (creatively named “Engineering Service Road”) past this lovely view:
Alternatively, you can still get to the Engineering Gateway by walking a half-block out of your way down the ring road, crossing, and then following this inviting sidewalk another half-block back the way you came:
I don’t usually take either of those two routes. Instead, I take a different path down a different service road, between two loading docks, where a narrow gap between the backs of two buildings (the University Club and the computer science department) leads into the campus. Here’s what it looks like on weekends; on weekdays, it’s often completely blocked by delivery trucks.
It’s almost as if by making these routes so awkward and ugly, the campus offices of transportation and physical and environmental planning, which pride themselves on their sustainability, are trying to send the faculty a message. But what could that message be?
(Discuss on Mastodon or more likely on the UHills mailing list)
The position is open from 1 June 2020 or as soon as possible thereafter.
Using the power of mathematics, we strive to create fundamental breakthroughs in algorithmic thinking. While the focus of BARC is algorithms theory, we do have a track record of surprising algorithmic discoveries leading to major industrial applications. Please find the full job advertisement at http://employment.ku.dk/.
Website: https://candidate.hr-manager.net/ApplicationInit.aspx?cid=1307&ProjectId=150516&DepartmentId=18971&MediaId=4642
Email: mthorup@di.ku.dk
The mathematical equations in my blog posts, and the ones you see on many other web sites, are formatted with MathJax, a JavaScript library that lets web developers write LaTeX formulas and turns them within your browser into nicely formatted math. The web pages of my blog are generated by Jekyll, a static web site generation system (meaning that it doesn’t go querying a database for its content, they are just web pages stored in files somewhere). I can write my posts in more than one format, but since the April 2017 LiveJournal apocalypse I’ve been writing them using kramdown, a system built into Jekyll for transforming marked-up text files into html ready for browsers to read and display. And so far mostly those different systems have been getting along really well together. Kramdown knows about MathJax and can handle equations in its input without trying to interpret their syntax as kramdown codes, Jekyll only needs me to modify a template somewhere so that my blog pages include an invocation of the MathJax library, and MathJax in your browser happily formats the equations in my posts. But recently, the MathJax people released MathJax version 3.0.0, and that doesn’t work so well with Jekyll and Kramdown. Despite some difficulty, I seem to have gotten them working again. So I thought it might be helpful to post here what went wrong and how I fixed it, in case others run into the same issues.
There are multiple ways of invoking MathJax, but the one I’ve been using is simply to put a line in my html headers saying to load the MathJax library from a content distribution network (asynchronously, so that it doesn’t delay the pages from being shown to readers). Once MathJax loads, it scans through the html that it has been applied to, looking for blocks of math to reformat. The default way of marking these blocks is to include them in \( ... \)
or \[ ... \]
delimiters (for inline formulas and display formulas that go on a line of their own, as you might use in LaTeX if you aren’t still using $ ... $
or $$ ... $$
instead). There are ways of changing the defaults, and those ways have also changed between MathJax 2 and MathJax 3, but I wasn’t using them.
In kramdown, you don’t use the same delimiters for math. Kramdown expects to see mathematical formulas delimited by $$ ... $$
in its marked-up text input, always. It will determine from context whether it’s an inline formula or a display formula. It also doesn’t use the default delimiters in the html that it generates. Instead it outputs html that puts inline formulas inside <script type="math/tex"> ... </script>
html tags, and, similarly, puts display formulas inside <script type="math/tex; mode=display"> ... </script>
tags. This all worked in MathJax 2, and these script delimiters are still recommended in the MathJax 3 documentation, but they don’t work any more.
The right way to fix this would be either to get MathJax 3 to understand the script delimiters, or to get kramdown to know how to generate something that works in MathJax 3, but I don’t have a lot of control over either. And the second-best fix might be to use some other software after kramdown runs, to change the delimiters in the static html files before they get served to anyone, but I don’t have that option on my blog host. Instead, I followed a suggestion in the kramdown documentation for working with KaTeX, a competing JavaScript library to MathJax for formatting mathematical equations in web pages. The suggestion is to add to your html files a little bit of glue JavaScript code that recognizes the formula delimiters produced by kramdown and does something with them. In my case, the something that I want to do is just to convert them to the delimiters that MathJax recognizes by default.
Timing is crucial here. If I try to run the JavaScript to convert the delimiters too early, they won’t yet be part of the html document that the JavaScript is running on and won’t be found and converted. In particular, running it at the time the html headers are parsed is too early. If I run it too late, the web page will already have been shown to the person viewing it, and each conversion step of each delimiter will also be shown as a slow and unsightly change of the text, on top of the later changes performed by MathJax. You can put JavaScript code at the end of the body of an html page, but that would be too late. Additionally, MathJax should be loaded asynchronously (to prevent slowdowns before the viewer sees something useful from the web page) but must not run until all of the delimiter conversions are complete, because otherwise it won’t see the converted delimiters. So I ended up with the following chunk of JavaScript code, in the Jekyll file _includes/head.html
that gets copied into the headers of my html pages. It waits until the entire document is loaded, converts the delimiters, and then loads the MathJax library.
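Roughly, such glue code can look like the following sketch (the function names and the MathJax CDN URL here are illustrative choices, not necessarily the ones in my actual file):

```javascript
// Convert kramdown's MathJax 2 script delimiters into the plain TeX
// delimiters that MathJax 3 recognizes by default.
function delimitersFor(scriptType, math) {
  // kramdown marks display math with "mode=display" in the type attribute.
  if (scriptType.indexOf('mode=display') !== -1) {
    return '\\[' + math + '\\]';
  }
  return '\\(' + math + '\\)';
}

function convertMathScripts(doc) {
  // Copy the NodeList first: replacing nodes while iterating a live
  // NodeList would skip elements.
  var scripts = Array.prototype.slice.call(
      doc.querySelectorAll('script[type^="math/tex"]'));
  scripts.forEach(function (script) {
    var text = doc.createTextNode(
        delimitersFor(script.getAttribute('type'), script.textContent));
    script.parentNode.replaceChild(text, script);
  });
}

if (typeof document !== 'undefined') {
  // Wait until the whole document is parsed, convert the delimiters,
  // and only then load MathJax asynchronously.
  window.addEventListener('DOMContentLoaded', function () {
    convertMathScripts(document);
    var mj = document.createElement('script');
    mj.src = 'https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js';
    mj.async = true;
    document.head.appendChild(mj);
  });
}
```

Deferring the MathJax load until after `convertMathScripts` runs is what guarantees that MathJax only ever sees the converted delimiters.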
This could be simplified somewhat with jQuery, but I didn’t do that because this is the only JavaScript in my files and the overhead of loading jQuery seemed too much for that small use. It’s my first JavaScript code ever, so it could probably be done better by someone with more experience. And it’s a bit of a hack, but it seems to work. One other change that I made implies that you won’t see this code in the html for this post, though. The reason is that I don’t want MathJax incorrectly interpreting the example delimiters in my post and in the code block above as actual mathematics formula delimiters. So I also added some Jekyll conditionals that, with the right keyword in the header of a post, disable including the MathJax JavaScript, and I’m using that keyword on this post.
…and I thought I was done, until I started looking at some mathematics-intensive older posts, and found some more problems. In a few cases, kramdown has been putting more than just the script delimiters around its math formulas. Within the script tags, the math has been surrounded by a second level of delimiters, % <![CDATA[ ... %]]>. This coding tells the html parser not to worry about weird special characters in the formula, and it was ignored by the old MathJax because the percent signs cause the rest of their lines to be treated as comments. But the new MathJax parser doesn’t like the comments (or maybe treats the whole formula as a comment despite the newline characters within it) and displays a blank. This behavior is triggered in kramdown when a formula uses < instead of \lt (easy enough to avoid), or when it uses & (e.g. in an aligned set of equations, not easy to avoid). So the actual code I ended up with is a little more complicated:
If you see any mathematics glitches in any of my old or new posts, please tell me; they could be more interactions like this that I haven’t spotted yet.
(Discuss on Mathstodon, which also recently switched to MathJax 3)
If an origami crease pattern tells you where to put the folds, but not which way to fold them, you may have many choices left to make. A familiar example is the square grid. You can pleat (accordion-fold) the horizontal lines of the grid, and then pleat the resulting folded strip of paper along the vertical lines; the result will be that each horizontal line is consistently a mountain fold or a valley fold, but each vertical line has folds that alternate between mountain and valley. Or you could pleat the vertical lines first, and then the horizontal lines, getting a different folded state. There are many other choices beyond these two.
The famous Miura-ori is another grid-like fold, made out of parallelograms instead of squares, and known for its ability to continuously unfold from its folded state to an expanded and completely open state. Like the square grid, a pattern of crease lines in the same positions has many alternative foldings. In fact, this multiplicity of folded states can be helpful in making the Miura-ori out of paper. To make it, you can start by pleating a set of parallel lines (like the square grid) to form a folded strip of paper, and then pleat in a different direction, at a non-right angle to the first pleating direction. The result will be a fold that is not the Miura-ori, but that has its folds in the same places as the Miura-ori. By reversing the orientation of some of these folds, you get the Miura-ori itself.
When working with the space of all foldings of a crease pattern, it’s unfortunately a bit complicated to understand which patterns fold flat (globally, as an entire structure) and which don’t. In a celebrated result from SODA 1996, Bern and Hayes showed that, for arbitrary crease patterns, even determining whether there exists a globally flat-foldable state is NP-complete. So it’s easier to work with “local flat foldings”, meaning a labeling of all of the creases as mountain or valley folds with the property that the creases surrounding each vertex of the folding pattern could be folded flat, if only all of that other stuff farther away from the vertex didn’t get in the way. It’s easy to check whether a single vertex can be folded flat using Maekawa’s theorem, Kawasaki’s theorem and related results.
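Kawasaki’s theorem, for instance, gives a one-line angle condition: a single vertex in flat paper folds flat exactly when the alternating sum of the consecutive angles between its creases is zero. A sketch (this tests only the single-vertex angle condition, in degrees; the mountain-valley counting of Maekawa’s theorem is a separate check):

```javascript
// Kawasaki's theorem: a single-vertex crease pattern in flat paper can
// fold flat iff there are evenly many creases and the alternating sum
// of the angles between consecutive creases is zero.
function kawasakiFlatFoldable(angles) {
  if (angles.length % 2 !== 0) return false;
  var alt = angles.reduce(function (acc, a, i) {
    return acc + (i % 2 === 0 ? a : -a);
  }, 0);
  return Math.abs(alt) < 1e-9;  // tolerance for floating-point input
}
```

For example, four creases at right angles ([90, 90, 90, 90]) fold flat, while [100, 80, 100, 80] (alternating sum 40) do not.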
In an earlier paper, my co-authors and I studied the space of all local flat foldings of the Miura-ori crease pattern, from the point of view of seeking forcing sets, mountain-valley assignments to small subsets of the creases with the property that there is only one way to extend them to a locally flat-foldable mountain-valley assignment on the whole crease pattern. My new preprint, “Face flips in origami tessellations” (with Akitaya, Dujmović, Hull, Jain, and Lubiw, arXiv:1910.05667) instead looks at the connectivity of the system of all local flat foldings. If you’re in one locally flat-folded state (say, the state that you get from pleating the paper once and then pleating the folded strip a second time in a non-orthogonal direction) and you want to get to a different flat-folded state (say, the Miura-ori itself), how many moves does it take? Here, we’re not just allowing any change of a single crease from mountain to valley or vice versa to count as a move. Instead, a move is what happens when you change all of the folds surrounding a single face of the crease pattern, in such a way that the new mountain-valley assignment remains flat-foldable.
The results vary dramatically according to the folding pattern. For the square grid, every square can be flipped, and given any two mountain-valley assignments you can color the squares black or white according to whether they are surrounded an even or odd number of times by cycles of creases that need to be changed. Then the shortest way to get from one assignment to the other is either to flip all the black squares or to flip all the white squares. For a grid of equilateral triangles, it is possible to flip from any mountain-valley assignment to any other one within a polynomial number of steps, but finding the shortest sequence is NP-complete. And for the Miura-ori, it’s again always possible, and there’s a nontrivial polynomial time algorithm for finding the shortest flip sequence. We also have examples of crease patterns forming periodic tilings of the plane where nothing is flippable (so the state space becomes totally disconnected) or where the flippable faces form an independent set (so you merely have to flip the faces whose surrounding mountain-valley assignments differ, in any order). The image below shows two crease patterns of the latter type (called square twist tessellations) with the flippable faces colored blue.
I think the results for the Miura-ori are particularly neat, so I want to outline them in a little more detail. There’s a natural bijection between local flat-foldings of the Miura crease pattern and 3-colorings of the squares of a square grid, which we used in the earlier paper, and flips of Miura faces correspond in the same way to recoloring steps in which we change the color of a single square. There’s also a natural correspondence (but not a bijection!) between 3-colorings of a grid and “height functions”, giving an integer height to each square, with each two adjacent squares having heights that are one step apart. In one direction, you can get a coloring from a height function by taking the heights mod 3. In the other direction, starting from a colored grid and a choice of the height of one of the squares (with the correct value modulo 3) you can go from there to adjacent squares step by step and figure out what their height has to be. It all works out so that, no matter how you do it, you always get a consistent height function from each 3-coloring. But the function depends on the starting height of the first square. If you add a multiple of 3 to this height, you translate the whole function up or down by that amount, and the translation turns out to be important.
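The step-by-step height propagation can be sketched as follows (helper names are my own; it assumes a proper 3-coloring, so that, as described above, the heights come out consistent regardless of the order in which the squares are filled in):

```javascript
// Given a proper 3-coloring of a grid of squares (colors 0, 1, 2, with
// adjacent squares differently colored) and a starting height for the
// top-left square (congruent to its color mod 3), propagate heights so
// that adjacent squares differ by exactly one and height mod 3 = color.
function mod3(x) { return ((x % 3) + 3) % 3; }  // nonnegative remainder

function heightsFromColoring(colors, startHeight) {
  if (mod3(startHeight) !== colors[0][0]) {
    throw new Error('start height must match color mod 3');
  }
  var h = colors.map(function (row) {
    return row.map(function () { return null; });
  });
  h[0][0] = startHeight;
  // Fill row by row; each square copies from its left or upper neighbor.
  for (var i = 0; i < colors.length; i++) {
    for (var j = 0; j < colors[i].length; j++) {
      if (i === 0 && j === 0) continue;
      var prev = (j > 0) ? h[i][j - 1] : h[i - 1][j];
      // Exactly one of prev+1, prev-1 has the right color mod 3.
      h[i][j] = (mod3(prev + 1) === colors[i][j]) ? prev + 1 : prev - 1;
    }
  }
  return h;
}
```

Starting the same coloring at a height 3 larger simply translates the whole height function up by 3, which is exactly the dependence on the starting choice mentioned above.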
So if you have two different local flat foldings of the Miura crease pattern, you can translate them in this way into two different 3-colorings, and two different height functions. Then you can convert one of the height functions into the other one, move by move, by repeatedly finding the square whose height is farthest away from its final value and shifting it two steps closer. The total number of steps equals half the volume of the three-dimensional space between the two height functions, and that’s the best you can do to get one height function to the other. But it might not be the best you can do to get one 3-coloring to the other, because of the choice of starting heights. To find the minimum number of moves to get from one local flat folding to another there’s one more computation that you have to do first: find two starting heights for the two height functions that make the volume between them as small as possible.
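Under this correspondence, the distance computation can be sketched as follows (my own formulation of the volume calculation described above: half the L1 volume between the two height functions, minimized over relative vertical shifts by multiples of 3):

```javascript
// Minimum number of face flips between two local flat foldings of the
// Miura-ori, given as height functions h1 and h2 on the same grid.
function flipDistance(h1, h2) {
  var diffs = [];
  for (var i = 0; i < h1.length; i++)
    for (var j = 0; j < h1[i].length; j++)
      diffs.push(h1[i][j] - h2[i][j]);
  var lo = Math.min.apply(null, diffs);
  var hi = Math.max.apply(null, diffs);
  var best = Infinity;
  // sum |d - s| is convex in s with its minimum at the median of the
  // differences, so scanning multiples of 3 covering [lo, hi] suffices.
  for (var s = 3 * Math.floor(lo / 3); s <= 3 * Math.ceil(hi / 3); s += 3) {
    var vol = diffs.reduce(function (acc, d) {
      return acc + Math.abs(d - s);
    }, 0);
    best = Math.min(best, vol / 2);  // each flip moves one square by 2
  }
  return best;
}
```

In particular, two height functions that differ only by a translation by a multiple of 3 represent the same folding and get distance zero.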
For a bit more on height functions, 3-colorings, and the “arctic circle” that one gets by choosing 3-colorings of the square grid randomly, you can read the web page “random 3-coloring of the square and cubic lattice”, by Joakim Linde, describing his joint work with Cris Moore in this area.
Spanning Trees with Low (Shallow) Stabbing Number is the master’s thesis of Johannes Obenaus at the Free University of Berlin and ETH Zürich. The stabbing number of a tree is the maximum number of its edges that a single line can cross. Any $n$ points in $\mathbb{R}^d$ have a spanning tree with stabbing number $O(n^{1-1/d})$, useful in some data structures. The thesis includes a solution to Open Problem 17.5 of my book Forbidden Configurations in Discrete Geometry: removing points from a point set might cause the minimum stabbing number of a spanning tree to increase.
A pretty result in inversive geometry from the Japanese “Wasan” period, relating the diameters of circles in Steiner chains between two parallel lines to regular polygons.
Counting Memories by Chiharu Shiota, an installation art piece in Katowice, Poland that prompts visitors to reflect on how numbers “connect us universally, comfort us, and help us understand ourselves” by writing down their feelings and memories about numbers that are meaningful to them.
Revisiting Minesweeper. As Uncle Colin shows, calculating the probabilities of different scenarios for the boundary of the cleared region must also take into account the number of mines in non-boundary cells. Based on that, one can find the safest move, at least when there are few enough scenarios to list them all. But it looks much harder to find the move most likely to lead to clearing the whole board, even for simple initial situations like the one he shows.
Blind folks and the evolving elephant. Guest post by my colleague Vijay Vazirani on the “Turing’s Invisible Hand” blog, on the different perspectives brought by economics and computer science to problems of matching resource providers with resource consumers.
My new dining room ceiling lamp is a trefoil knot! It’s the “Vornado” LED lamp from WAC lighting. We chose it to replace a halogen lamp that shorted out, burned through its power cable, fell onto the table below it, and shattered hot glass all over the room, fortunately without causing a fire or seriously damaging the table and while the room was unoccupied.
Two speakers censored at AISA, an Australian information security conference. One of them is Australian, the other not. Both talks had been scheduled long in advance, and were cancelled after a last-minute demand from the Australian Cyber Security Centre. As Bruce Schneier writes, this kind of action merely calls attention to their work and makes the Australian government look stupid and repressive while doing nothing to actually increase security.
A history of mathematical crankery, excerpted from David S. Richeson’s book Tales of Impossibility: The 2000-Year Quest to Solve the Mathematical Problems of Antiquity.
Incenters of chocolate-iced cakes and more fair cake-cutting. If you want to divide both cake and frosting (area and perimeter) into equal pieces, it helps to start with a shape that has an inscribed circle.
Relatedly, Erel Segal has written up for Wikipedia a collection of open problems in fair division.
The Trump administration wants to roll back fair housing laws by allowing racist algorithms to discriminate on behalf of racist landlords. The deadline for telling them this is a stupid idea is this Friday, October 18.
Japanese KitKats are replacing plastic packaging with origami paper, in a bid to be both more fun and more environmentally conscious.
Living Proof, a free e-book collecting stories of mathematicians about the roadblocks on their paths to where they are now.
Kotzig’s theorem. Every convex polyhedron has an edge whose endpoints have total degree at most 13. You might think that (because the average vertex degree in a convex polyhedron is < 6) there will always be an edge whose endpoints have total degree at most 11, but it’s not true. As Anton Kotzig proved in 1955, the answer is 13. A worst-case example is the triakis icosahedron, whose minimum-degree edges connect vertices of degrees 3 and 10.
The next TCS+ talk will take place this coming Tuesday, October 22nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC) (note the unusual day). Hao Huang from Emory University will speak about “A proof of the Sensitivity Conjecture” (abstract below).
Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.
Abstract: In the $n$-dimensional hypercube graph, one can easily choose half of the vertices such that they induce an empty graph. However, having even just one more vertex would cause the induced subgraph to contain a vertex of degree at least $\sqrt{n}$. This result is best possible, and improves on a logarithmic lower bound shown by Chung, Füredi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of it.
As a direct corollary of this purely combinatorial result, the sensitivity and degree of every boolean function are polynomially related. This solves an outstanding foundational problem in theoretical computer science, the Sensitivity Conjecture of Nisan and Szegedy.