Theory of Computing Blog Aggregator

Authors: Shaleen Deep, Xiao Hu, Paraschos Koutris
Download: PDF
Abstract: In this paper, we investigate space-time tradeoffs for answering boolean conjunctive queries. The goal is to create a data structure in an initial preprocessing phase and use it for answering (multiple) queries. Previous work has developed data structures that trade off space usage for answering time and has proved conditional space lower bounds for queries of practical interest such as the path and triangle query. However, most of these results cater only to those queries, lack a comprehensive framework, and are not generalizable. The isolated treatment of these queries also fails to utilize the connections with extensive research on related problems within the database community. The key insight in this work is to exploit the formalism of relational algebra by casting the problems as answering join queries over a relational database. Using the notion of boolean adorned queries and access patterns, we propose a unified framework that captures several widely studied algorithmic problems. Our main contribution is threefold. First, we present an algorithm that recovers existing space-time tradeoffs for several problems. The algorithm is based on an application of the join size bound to capture the space usage of our data structure. We combine our data structure with query decomposition techniques to further improve the tradeoffs and show that it is readily extensible to queries with negation. Second, we falsify two conjectures proposed in the existing literature that relate to the space-time lower bound for path queries and triangle detection by proposing an unexpectedly better algorithm. This result opens a new avenue for improving several algorithmic results that have so far been assumed to be (conditionally) optimal. Finally, we prove new conditional space-time lower bounds for star and path queries.

at September 23, 2021 01:23 AM UTC

Authors: Matthew Bright, Andrew I Cooper, Vitaliy Kurlin
Download: PDF
Abstract: A periodic lattice in Euclidean space is the infinite set of all integer linear combinations of basis vectors. Any lattice can be generated by infinitely many different bases. Motivated by rigid crystal structures, we consider lattices up to rigid motion or isometry, which preserves inter-point distances. Then all isometry classes of lattices form a continuous space. There are several parameterisations of this space in dimensions two and three, but ours is the first that avoids discontinuities in singular cases. We introduce new continuous coordinates (root products) on the space of lattices and new metrics between root forms satisfying all metric axioms and continuity under all perturbations. The root forms allow visualisations of hundreds of thousands of real crystal lattices from the Cambridge Structural Database for the first time.

at September 23, 2021 01:35 AM UTC

Authors: Kunal Marwaha, Stuart Hadfield
Download: PDF
Abstract: We consider the power of local algorithms for approximately solving Max $k$XOR, a generalization of two constraint satisfaction problems previously studied with classical and quantum algorithms (MaxCut and Max E3LIN2). On instances with either random signs or no overlapping clauses and $D+1$ clauses per variable, we calculate the average satisfying fraction of the depth-1 QAOA and compare with a generalization of the local threshold algorithm. Notably, the quantum algorithm outperforms the threshold algorithm for $k > 4$.

On the other hand, we highlight potential difficulties for the QAOA to achieve computational quantum advantage on this problem. We first compute a tight upper bound on the maximum satisfying fraction of nearly all large random regular Max $k$XOR instances by numerically calculating the ground state energy density $P(k)$ of a mean-field $k$-spin glass. The upper bound grows with $k$ much faster than the performance of both one-local algorithms. We also identify a new obstruction result for low-depth quantum circuits (including the QAOA) when $k=3$, generalizing a result of Bravyi et al [arXiv:1910.08980] when $k=2$. We conjecture that a similar obstruction exists for all $k$.

at September 23, 2021 01:23 AM UTC

Authors: Subeesh Vasu, Nicolas Talabot, Artem Lukoianov, Pierre Baque, Jonathan Donier, Pascal Fua
Download: PDF
Abstract: CAD modeling typically involves the use of simple geometric primitives whereas recent advances in deep-learning based 3D surface modeling have opened new shape design avenues. Unfortunately, these advances have not yet been accepted by the CAD community because they cannot be integrated into engineering workflows. To remedy this, we propose a novel approach to effectively combining geometric primitives and free-form surfaces represented by implicit surfaces for accurate modeling that preserves interpretability, enforces consistency, and enables easy manipulation.

at September 23, 2021 01:28 AM UTC

Authors: Josephine M. Thomas, Silvia Beddar-Wiesing, Alice Moallemy-Oureh, Rüdiger Nather
Download: PDF
Abstract: Graph representations have gained importance in almost every scientific field, ranging from mathematics, biology, social sciences and physics to computer science. In contrast to other data formats, graphs offer the possibility to model relations between entities. Together with the continuously rising amount of available data, graphs therefore open up a wide range of modeling capabilities for theoretical and real-world problems. However, the modeling possibilities of graphs have not been fully exploited. One reason for this is that there is neither an easily comprehensible overview of graph types nor an analysis of their modeling capacities available. As a result, the potential of modeling with certain graph types is not exhausted, and many users overlook the greater modeling freedom and more efficient computation that a transformation to another graph type can offer. In order to clarify the modeling possibilities of graphs, we order the different graph types, collate their memory complexity and provide an expressivity measure on them. Furthermore, we introduce transformation algorithms between the graph types, from which equal expressivity of all graph types can be inferred, i.e., they are able to represent the same information or properties, respectively. Finally, we provide a guideline for deciding when a graph type transformation is efficient by defining a cost function, dependent on the memory complexity and the transformation runtime, as a decision-making tool.

at September 23, 2021 12:00 AM UTC

Authors: Oswin Aichholzer, Jan Kynčl, Manfred Scheucher, Birgit Vogtenhuber, Pavel Valtr
Download: PDF
Abstract: A $k$-crossing family in a point set $S$ in general position is a set of $k$ segments spanned by points of $S$ such that all $k$ segments mutually cross. In this short note we present two statements on crossing families which are based on sets of small cardinality: (1)~Any set of at least 15 points contains a crossing family of size~4. (2)~There are sets of $n$ points which do not contain a crossing family of size larger than~$8\lceil \frac{n}{41} \rceil$. Both results improve the previously best known bounds.

at September 23, 2021 01:25 AM UTC

Authors: Jonathan J. Mize
Download: PDF
Abstract: The Curry-Howard correspondence is often called the proofs-as-programs result. I offer a generalization of this result, something which may be called machines as programs. Utilizing this insight, I introduce two new Turing Machines called "Ceiling Machines." The formal ingredients of these two machines are nearly identical. But there are crucial differences, splitting the two into a "Higher Ceiling Machine" and a "Lower Ceiling Machine." A potential graph of state transitions of the Higher Ceiling Machine is then offered. This graph is termed the "canonically nondeterministic solution" or CNDS, whose accompanying problem is its own replication, i.e., the problem, "Replicate CNDS" (whose accompanying algorithm is cast in Martin-Löf type theory). I then show that while this graph can be replicated (solved) in polynomial time by a nondeterministic machine -- of which the Higher Ceiling Machine is a canonical example -- it cannot be solved in polynomial time by a deterministic machine, of which the Lower Ceiling Machine is also canonical. It is consequently proven that P $\neq$ NP.

at September 23, 2021 01:20 AM UTC

Authors: Florian Jacob, Saskia Bayreuther, Hannes Hartenstein
Download: PDF
Abstract: We explore the property of equivocation tolerance for Conflict-Free Replicated Data Types (CRDTs). We show that a subclass of CRDTs is equivocation-tolerant and can thereby cope with any number of Byzantine faults: Without equivocation detection, prevention or remediation, they still fulfill strong eventual consistency (SEC). We also conjecture that there is only one operation-based CRDT design supporting non-commutative operations that fulfills SEC in Byzantine environments with any number of faults.

at September 23, 2021 01:21 AM UTC

Authors: Daniel Lemire, Wojciech Muła
Download: PDF
Abstract: In software, text is often represented using Unicode formats (UTF-8 and UTF-16). We frequently have to convert text from one format to the other, a process called transcoding. Popular transcoding functions are slower than state-of-the-art disks and networks. These transcoding functions make little use of the single-instruction-multiple-data (SIMD) instructions available on commodity processors. By designing transcoding algorithms for SIMD instructions, we multiply the speed of transcoding on current systems (x64 and ARM). To ensure reproducibility, we make our software freely available as an open source library.
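
For readers unfamiliar with the term, transcoding is simply re-encoding the same text in a different Unicode format. The scalar round-trip below, using Python's built-in codecs, only illustrates the operation itself; it has nothing to do with the SIMD kernels developed in the paper.

    # UTF-8 <-> UTF-16 transcoding with built-in codecs (a scalar baseline, not the paper's method).
    utf8_bytes = "naïve café 🙂".encode("utf-8")

    utf16_bytes = utf8_bytes.decode("utf-8").encode("utf-16-le")   # UTF-8 -> UTF-16
    roundtrip = utf16_bytes.decode("utf-16-le").encode("utf-8")    # UTF-16 -> UTF-8

    assert roundtrip == utf8_bytes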

at September 23, 2021 01:20 AM UTC

Authors: Will Ma, Pan Xu, Yifan Xu
Download: PDF
Abstract: Matching markets involve heterogeneous agents (typically from two parties) who are paired for mutual benefit. During the last decade, matching markets have emerged and grown rapidly through the medium of the Internet. They have evolved into a new format, called Online Matching Markets (OMMs), with examples ranging from crowdsourcing to online recommendations to ridesharing. There are two features distinguishing OMMs from traditional matching markets. One is the dynamic arrival of one side of the market: we refer to these as online agents, while the rest are offline agents. Examples of online and offline agents include keywords (online) and sponsors (offline) in Google Advertising; workers (online) and tasks (offline) in Amazon Mechanical Turk (AMT); riders (online) and drivers (offline when restricted to a short time window) in ridesharing. The second distinguishing feature of OMMs is the real-time decision-making element. However, studies have shown that the algorithms making decisions in these OMMs leave disparities in the match rates of offline agents. For example, tasks in neighborhoods of low socioeconomic status rarely get matched to gig workers, and drivers of certain races/genders get discriminated against in matchmaking. In this paper, we propose online matching algorithms which optimize for either individual or group-level fairness among offline agents in OMMs. We present two linear-programming (LP) based sampling algorithms, which achieve online competitive ratios of at least 0.725 for individual fairness maximization (IFM) and 0.719 for group fairness maximization (GFM), respectively. We conduct extensive numerical experiments, and the results show that our boosted versions of the sampling algorithms are not only conceptually easy to implement but also highly effective in practical instances of fairness-maximization-related models.

at September 23, 2021 01:21 AM UTC

We propose a diagonalization-based approach to several important questions in proof complexity. We illustrate this approach in the context of the algebraic proof system IPS and in the context of propositional proof systems more generally. We give an explicit sequence of CNF formulas $\{\phi_n\}$ such that VNP$\neq$VP iff there are no polynomial-size IPS proofs for the formulas $\phi_n$. This provides the first natural equivalence between proof complexity lower bounds and standard algebraic complexity lower bounds. Our proof of this fact uses the implication from IPS lower bounds to algebraic complexity lower bounds due to Grochow and Pitassi together with a diagonalization argument: the formulas $\phi_n$ themselves assert the non-existence of short IPS proofs for formulas encoding VNP$\neq$VP at a different input length. Our result also has meta-mathematical implications: it gives evidence for the difficulty of proving strong lower bounds for IPS within IPS. More generally, for any strong enough propositional proof system $R$ we propose a new explicit hard candidate, the iterated $R$-lower bound formulas, which inductively assert the non-existence of short $R$ proofs for formulas encoding this same statement at a different input length. We show that these formulas are unconditionally hard for Resolution, following recent results of Atserias and Muller and of Garlik. We further give evidence in favour of this hypothesis for other proof systems.

at September 22, 2021 11:41 PM UTC

Authors: Michael Lampis, Valia Mitsou
Download: PDF
Abstract: Vertex Integrity is a graph measure which sits squarely between two more well-studied notions, namely vertex cover and tree-depth, and that has recently gained attention as a structural graph parameter. In this paper we investigate the algorithmic trade-offs involved with this parameter from the point of view of algorithmic meta-theorems for First-Order (FO) and Monadic Second Order (MSO) logic. Our positive results are the following: (i) given a graph $G$ of vertex integrity $k$ and an FO formula $\phi$ with $q$ quantifiers, deciding if $G$ satisfies $\phi$ can be done in time $2^{O(k^2q+q\log q)}+n^{O(1)}$; (ii) for MSO formulas with $q$ quantifiers, the same can be done in time $2^{2^{O(k^2+kq)}}+n^{O(1)}$. Both results are obtained using kernelization arguments, which pre-process the input to sizes $2^{O(k^2)}q$ and $2^{O(k^2+kq)}$ respectively.

The complexities of our meta-theorems are significantly better than the corresponding meta-theorems for tree-depth, which involve towers of exponentials. However, they are worse than the roughly $2^{O(kq)}$ and $2^{2^{O(k+q)}}$ complexities known for corresponding meta-theorems for vertex cover. To explain this deterioration we present two formula constructions which lead to fine-grained complexity lower bounds and establish that the dependence of our meta-theorems on $k$ is best possible. More precisely, we show that it is not possible to decide FO formulas with $q$ quantifiers in time $2^{o(k^2q)}$, and that there exists a constant-size MSO formula which cannot be decided in time $2^{2^{o(k^2)}}$, both under the ETH. Hence, the quadratic blow-up in the dependence on $k$ is unavoidable and vertex integrity has a complexity for FO and MSO logic which is truly intermediate between vertex cover and tree-depth.

at September 22, 2021 12:00 AM UTC

Authors: Martin Dvorak, Vladimir Kolmogorov
Download: PDF
Abstract: Given a fixed finite metric space $(V,\mu)$, the minimum $0$-extension problem, denoted as ${\tt 0\mbox{-}Ext}[\mu]$, is equivalent to the following optimization problem: minimize a function of the form $\min\limits_{x\in V^n} \sum_i f_i(x_i) + \sum_{ij}c_{ij}\mu(x_i,x_j)$ where $c_{ij},c_{vi}$ are given nonnegative costs and $f_i:V\rightarrow \mathbb R$ are functions given by $f_i(x_i)=\sum_{v\in V}c_{vi}\mu(x_i,v)$. The computational complexity of ${\tt 0\mbox{-}Ext}[\mu]$ has been recently established by Karzanov and by Hirai: if metric $\mu$ is orientable modular then ${\tt 0\mbox{-}Ext}[\mu]$ can be solved in polynomial time, otherwise ${\tt 0\mbox{-}Ext}[\mu]$ is NP-hard. To prove the tractability part, Hirai developed a theory of discrete convex functions on orientable modular graphs generalizing several known classes of functions in discrete convex analysis, such as $L^\natural$-convex functions.

We consider a more general version of the problem in which unary functions $f_i(x_i)$ can additionally have terms of the form $c_{uv;i}\mu(x_i,\{u,v\})$ for $\{u,v\}\in F$, where set $F\subseteq\binom{V}{2}$ is fixed. We extend the complexity classification above by providing an explicit condition on $(\mu,F)$ for the problem to be tractable. In order to prove the tractability part, we generalize Hirai's theory and define a larger class of discrete convex functions. It covers, in particular, another well-known class of functions, namely submodular functions on an integer lattice.

Finally, we improve the complexity of Hirai's algorithm for solving ${\tt 0\mbox{-}Ext}$ on orientable modular graphs.

at September 22, 2021 11:22 PM UTC

Authors: S Raja, Sumukha Bharadwaj G V
Download: PDF
Abstract: In this paper, we study the computational complexity of the commutative determinant polynomial computed by a class of set-multilinear circuits which we call regular set-multilinear circuits. Regular set-multilinear circuits are commutative circuits with a restriction on the order in which they can compute polynomials. A regular circuit can be seen as the commutative analogue of the ordered circuit defined by Hrubes, Wigderson and Yehudayoff [HWY10]. We show that if the commutative determinant polynomial has a small representation as a sum of constantly many regular set-multilinear circuits, then the commutative permanent polynomial also has a small arithmetic circuit.

at September 22, 2021 11:22 PM UTC

Authors: Stephan Helfrich, Arne Herzel, Stefan Ruzika, Clemens Thielen
Download: PDF
Abstract: In a widely studied class of multi-parametric optimization problems, the objective value of each solution is an affine function of real-valued parameters. For many important multi-parametric optimization problems, an optimal solution set with minimum cardinality can contain super-polynomially many solutions. Consequently, any exact algorithm for such problems must output a super-polynomial number of solutions.

We propose an approximation algorithm that is applicable to a general class of multi-parametric optimization problems and outputs a number of solutions that is bounded polynomially in the instance size and the inverse of the approximation guarantee. This method lifts approximation algorithms for non-parametric optimization problems to their parametric formulations, providing an approximation guarantee that is arbitrarily close to the approximation guarantee for the non-parametric problem. If the non-parametric problem can be solved exactly in polynomial time or if an FPTAS is available, the method yields an FPTAS. We discuss implications for important multi-parametric combinatorial optimization problems. Remarkably, we obtain a $(\frac{3}{2} + \varepsilon)$-approximation algorithm for the multi-parametric metric travelling salesman problem, whereas the non-parametric version is known to be APX-complete. Furthermore, we show that the cardinality of a minimal size approximation set is in general not $\ell$-approximable for any natural number $\ell$.

at September 22, 2021 11:23 PM UTC

Authors: Arne Herzel, Stephan Helfrich, Stefan Ruzika, Clemens Thielen
Download: PDF
Abstract: This article investigates the approximation quality achievable for biobjective minimization problems with respect to the Pareto cone by solutions that are (approximately) optimal with respect to larger ordering cones. When simultaneously considering $\alpha$-approximations for all closed convex ordering cones of a fixed inner angle $\gamma \in [\frac \pi 2, \pi]$, an approximation guarantee between $\alpha$ and $2 \alpha$ is achieved, which depends continuously on $\gamma$. The analysis is best-possible for any inner angle and it generalizes and unifies the known results that the set of supported solutions is a 2-approximation and that the efficient set itself is a 1-approximation. Moreover, it is shown that, for maximization problems, no approximation guarantee is achievable by considering larger ordering cones in the described fashion, which again generalizes a known result about the set of supported solutions.

at September 22, 2021 11:23 PM UTC

Authors: Hirotoshi Yasuoka
Download: PDF
Abstract: In this paper, we study the computational complexity of the quadratic unconstrained binary optimization (QUBO) problem under the functional problem FP^NP categorization. We focus on three sub-classes: (1) When all coefficients are integers, QUBO is FP^NP-complete. (2) When every coefficient is an integer lower bounded by a constant k, QUBO is FP^NP[log]-complete. (3) When coefficients can only be in the set {1, 0, -1}, QUBO is again FP^NP[log]-complete. With recent results in quantum annealing able to solve QUBO problems efficiently, our results provide a clear connection between quantum annealing algorithms and the FP^NP complexity class categorization. We also study the computational complexity of the decision version of the QUBO problem with integer coefficients. We prove that this problem is DP-complete.
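
For concreteness, QUBO asks for a binary vector $x \in \{0,1\}^n$ minimizing $x^\top Q x$ for a given coefficient matrix $Q$. The brute-force sketch below, on a tiny made-up instance, is only meant to illustrate the objective that these complexity results classify.

    import itertools
    import numpy as np

    # A tiny QUBO instance: minimize x^T Q x over x in {0,1}^n (coefficients are illustrative).
    Q = np.array([[-1,  2, -1],
                  [ 0, -1,  2],
                  [ 0,  0, -1]])

    def objective(x):
        x = np.asarray(x)
        return int(x @ Q @ x)

    best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=objective)
    print(best, objective(best))   # (1, 0, 1) with value -3 for this instance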

at September 22, 2021 11:20 PM UTC

Authors: Mark de Berg, Sándor Kisfaludi-Bak, Morteza Monemizadeh, Leonidas Theocharous
Download: PDF
Abstract: Let $F$ be a set of $n$ objects in the plane and let $G(F)$ be its intersection graph. A balanced clique-based separator of $G(F)$ is a set $S$ consisting of cliques whose removal partitions $G(F)$ into components of size at most $\delta n$, for some fixed constant $\delta<1$. The weight of a clique-based separator is defined as $\sum_{C\in S}\log (|C|+1)$. Recently De Berg et al. (SICOMP 2020) proved that if $F$ consists of convex fat objects, then $G(F)$ admits a balanced clique-based separator of weight $O(\sqrt{n})$. We extend this result in several directions, obtaining the following results. Map graphs admit a balanced clique-based separator of weight $O(\sqrt{n})$, which is tight in the worst case. Intersection graphs of pseudo-disks admit a balanced clique-based separator of weight $O(n^{2/3}\log n)$. If the pseudo-disks are polygonal and of total complexity $O(n)$ then the weight of the separator improves to $O(\sqrt{n}\log n)$. Intersection graphs of geodesic disks inside a simple polygon admit a balanced clique-based separator of weight $O(n^{2/3}\log n)$. Visibility-restricted unit-disk graphs in a polygonal domain with $r$ reflex vertices admit a balanced clique-based separator of weight $O(\sqrt{n}+r\log(n/r))$, which is tight in the worst case. These results immediately imply sub-exponential algorithms for MAXIMUM INDEPENDENT SET (and, hence, VERTEX COVER), for FEEDBACK VERTEX SET, and for $q$-COLORING for constant $q$ in these graph classes.

at September 22, 2021 11:26 PM UTC

Authors: Lifeng Zhou, Vijay Kumar
Download: PDF
Abstract: The problem of multi-robot target tracking asks for actively planning the joint motion of robots to track targets. In this paper, we focus on such target tracking problems in adversarial environments, where attacks or failures may deactivate robots' sensors and communications. In contrast to the previous works that consider no attacks or sensing attacks only, we formalize the first robust multi-robot tracking framework that accounts for any fixed numbers of worst-case sensing and communication attacks. To secure against such attacks, we design the first robust planning algorithm, named Robust Active Target Tracking (RATT), which approximates the communication attacks to equivalent sensing attacks and then optimizes against the approximated and original sensing attacks. We show that RATT provides provable suboptimality bounds on the tracking quality for any non-decreasing objective function. Our analysis utilizes the notion of curvature for set functions introduced in combinatorial optimization. In addition, RATT runs in polynomial time and terminates with the same running time as state-of-the-art algorithms for (non-robust) target tracking. Finally, we evaluate RATT with both qualitative and quantitative simulations across various scenarios. In the evaluations, RATT exhibits a tracking quality that is near-optimal and superior to varying non-robust heuristics. We also demonstrate RATT's superiority and robustness against varying attack models (e.g., worst-case and bounded rational attacks).

at September 22, 2021 11:24 PM UTC

Authors: Ingo Wald
Download: PDF
Abstract: We describe a simple yet highly parallel method for re-indexing "indexed" data sets like triangle meshes or unstructured-mesh data sets -- which is useful for operations such as removing duplicate or unused vertices, merging different meshes, etc. In particular, our method is parallel and GPU-friendly in the sense that all its steps are either trivially parallel or use GPU-parallel primitives like sorting and prefix sums, making it well suited for highly parallel architectures like GPUs.
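
As a rough illustration of what such a re-indexing pass does, here is a sequential NumPy sketch that removes unused vertices and merges exact duplicates in an indexed triangle mesh. It mirrors the sort-based structure described in the abstract (np.unique sorts internally), but it is not the paper's GPU implementation.

    import numpy as np

    def reindex_mesh(vertices, triangles):
        """vertices: (V, 3) float array; triangles: (T, 3) int array of indices into vertices."""
        # 1. Drop vertices that no triangle references and compact the index range.
        used = np.unique(triangles)                      # sorted ids of referenced vertices
        remap = np.full(len(vertices), -1, dtype=np.int64)
        remap[used] = np.arange(len(used))               # old id -> compacted id
        vertices, triangles = vertices[used], remap[triangles]

        # 2. Merge duplicate vertices; np.unique over rows is sort-based, the same kind of
        #    sort + compaction a GPU version would express with parallel primitives.
        unique_verts, inverse = np.unique(vertices, axis=0, return_inverse=True)
        return unique_verts, inverse.reshape(-1)[triangles]

    # Example: two triangles sharing an edge, with one duplicated and one unused vertex.
    V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 0., 0.], [5., 5., 5.]])
    T = np.array([[0, 1, 2], [2, 3, 0]])
    print(reindex_mesh(V, T))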

at September 22, 2021 11:25 PM UTC

Though I singled out a mask study in the last post, I've had a growing discomfort with statistical modeling and significance more generally. Statistical models explicitly describe the probability of outcomes of experiments in terms of specific variables of individuals from a population. Such statistical models bring with them a host of assumptions and powers. But when are they appropriate for deducing significance of the outcomes of experiments? Here, I'll argue that they are almost never appropriate, and, moreover, that they can result in confidence intervals that rarely contain the correct parameters.

Note that in order to sample from some population, we need not assume that the population is itself random. An experimenter can choose to sample one individual from a larger pool at random even when the pool is a deterministic collection. Similarly, the experimenter can randomly assign each member of the pool to a group for experimentation even when the pool is deterministic. We need not build a statistical model of our population in order to study it experimentally.

But the convention in much scientific practice entangles the randomness used in survey and experiment design with the randomness of the natural world. The dominant convention in experimental sciences is to assert the existence of a probability distribution from which all observations in an experiment are samples. That is, most analyses begin with the assumption that the population is itself a stochastic entity whose statistical properties can be accurately modeled. Such probabilistic modeling is immediately problematic. We are forced to confront the notion of probability itself. What does the probability of an event mean? Is a Bayesian or Frequentist viewpoint more correct? What does it mean to sample a natural process?

My perspective on the pitfalls of probabilistic modeling has been heavily influenced by the writings of David Freedman. Freedman advocates for a pragmatic approach to statistical modeling. “Probability is just a property of a mathematical model intended to describe some features of the natural world. For the model to be useful, it must be shown to be in good correspondence with the system it describes.” This is the less pithy, but more prescriptive version of Box’s famous and tiresome aphorism “All models are wrong, but some are useful.”

Model building and validation requires deep domain knowledge for a task at hand. One of my favorite models is Ohm’s law, which states that the current that flows across a conductive material is proportional to the voltage drop across the resistor. Due to thermal noise, the actual model is

\[\small{ \text{current} = \text{material constant} \times \text{voltage} + \text{noise}}\]

And the noise is Gaussian white noise. This model has been extensively tested and is a foundation of all circuit design. Remarkably, this simple formula describes complex electronic behavior. Physics is full of amazing examples of statistical models that accurately predict the outcome of experiments to a dozen significant figures.
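
To make the "good correspondence" point concrete, here is a toy simulation of the noisy Ohm's law model above together with a least-squares fit that recovers the material constant; the numbers are made up purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    conductance = 0.01                                  # hypothetical material constant (amps per volt)
    voltage = np.linspace(0.0, 5.0, 200)                # applied voltages
    current = conductance * voltage + rng.normal(scale=1e-4, size=voltage.shape)  # Gaussian noise

    # Least-squares fit of current = constant * voltage recovers the constant from noisy data.
    estimate = np.sum(voltage * current) / np.sum(voltage ** 2)
    print(estimate)   # ~0.01, matching the constant used to generate the data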

But in biology, medicine, social science, and economics, our models are much less accurate and less grounded in natural laws. Most of the time, models are selected because they are convenient, not because they are plausible, well motivated from phenomenological principles, or even empirically validated. Freedman built a cottage industry around pointing out how poorly motivated many of the common statistical models are.

A particular example called out by Freedman is the ubiquitous logistic regression model. Freedman observed that in a variety of randomized control trials, experimenters would “correct” the estimates of their randomized controlled trials by “controlling for fixed effects” with logistic regression. Controlling for fixed effects is pernicious jargon that means “set up a regression problem with all of the variables that one thinks make their effect size look too small, and then solve the regression as an attempt to remove the influence of these variables.” It is most common in observational studies, but many insist it is appropriate in the context of randomized controlled trials as well. This convention of correction persists in the present, and I highlighted similar corrections in my last post. Freedman argues that such corrections are misguided, especially in the context of randomized trials.

To understand the motivation for using logistic regression in the first place, I need to tell you what an odds ratio is. Experimenters are often interested in estimating an odds ratio of particular outcomes. The odds of an outcome is the number of individuals with a positive outcome divided by the number of individuals with a negative outcome. If the odds of winning a lottery are one million to one, that means that for every person that wins, one million people lose.

In the context of randomized experiments, let's suppose that when given a treatment, M out of N individuals will have a favorable outcome, and without treatment only Q out of N will have a positive outcome. Then the odds ratio is simply the ratio of the odds of the outcome with and without treatment:

\[\small{ \frac{M}{N-M} \cdot \frac{N-Q}{Q}\,.}\]

When the odds ratio is large, we deem a treatment to be highly effective.

Logistic regression posits that individuals have a vector of features $Z$ that influence the odds of the effectiveness of treatment. Specifically, if $Y$ is the indicator of the desired outcome and $X$ is the indicator of treatment, the logistic regression model asserts that the log of the odds of the outcome is a linear function of the treatment and the selected features. Since the model assumes the population is random, we can write the fraction of individuals with a positive outcome as a probability. With this identification, the assumption of logistic regression is

\[\small{ \log \frac{p(Y=1 | X,Z ) }{ 1-p(Y=1 | X,Z) } = \beta X + \gamma^\top Z + \alpha \,.}\]

This model is convenient if we are interested in odds ratios. In the logistic model, no matter what the covariate $Z$, the odds ratio is

\[\small{ \frac{p(Y=1 | X=1,Z ) } {1-p(Y=1| X=1,Z) } \cdot \frac{1-p(Y=1 | X=0, Z)}{p(Y=1 | X=0, Z)} = e^\beta\,.}\]
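
To see why, note that under the model above the log odds at $X=1$ minus the log odds at $X=0$ is

\[\small{ (\beta + \gamma^\top Z + \alpha) - (\gamma^\top Z + \alpha) = \beta\,,}\]

so exponentiating this difference gives the constant odds ratio $e^\beta$.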

Hence, if we can estimate $\beta$, we can estimate the odds ratio. And if we can estimate the variance of $\beta$, we can compute confidence intervals over our odds ratio estimates.

But all of this assumes that the model is correct! If the logistic model is not correct, then our confidence intervals are meaningless. Or at least, they are not confidence intervals around any true notion of odds ratios.

It’s almost always reasonable to be suspicious of this logistic model. It is first asserting that the odds ratio is a constant for a fixed covariate $Z$. That is, all subjects experience the same proportional effect of treatment. Even less realistically, the model asserts that the outcome is positive for an individual $i$ with treatment value $X_i$ and covariate value $Z_i$ if

\[\small{ U_i \leq \alpha + \beta X_i + \gamma^\top Z_i\,,}\]

where $U_i$ is a random variable sampled from the logistic distribution. The model assumes the $U_i$ are independent of the treatment and the covariates, the $U_i$ are independent across all of the individuals, and the $U_i$ have a common logistic distribution. The only thing that differs between treatment and control is the value of the threshold on the right hand side. These are a lot of assumptions, and they are seldom verified or tested in papers where logistic regression is applied.

The question remains: does this matter? Do these modeling assumptions actually affect our inferences about effect size? Freedman provides an elegant argument demonstrating that logistic regression always overestimates the true odds ratio, and sometimes can do so quite poorly. He also shows that simply computing a plugin estimator of the odds ratio with no covariate adjustment at all is consistent in randomized controlled trials. That is, there is no need to adjust for covariates in randomized controlled trials if you want an accurate estimate of the odds ratio. Furthermore, covariate adjustments can lead to significant overestimation of the treatment’s effect size.

Rather than going through Freedman's theoretical argument, it's a bit more evocative to give an example. The following is gleaned from a helpful discussion with Joel Middleton. Suppose you sample 10000 children from a larger population where each child in the population has an equal chance of liking or disliking vegetables. We propose a treatment of bribing kids with a cookie to eat their veggies, and randomly assign this treatment to half of the subjects. Left untreated, veggie haters have a 20% probability of finishing their veggies, but veggie lovers have an 80% probability of eating their greens. When bribed with a cookie, veggie haters and veggie lovers have 25% and 85% probabilities of eating their vegetables, respectively.

An average child in the study has a probability of 50% of eating their veggies if in control and 55% if in treatment. The log odds ratio is thus

\[\small{ \log \frac{.55}{1-.55}-\log\frac{.5}{1-.5} \approx 0.2\,.}\]

Now, when you instead run logistic regression, the coefficient of the treatment variable is larger than 0.2. Indeed, when I try this in python, running 1000 synthetic experiments, I find that the median point estimate is 0.32, which is, as promised, larger than the true log odds. Even more worrisome is that the 95% confidence interval contains 0.2 only 38% of the time. Clearly, the confidence intervals are not accurate when the model is wrong.
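
For readers who want to run this themselves, here is a minimal sketch of this kind of synthetic experiment (not the author's notebook; it assumes the statsmodels package is available for the logistic fit, and the parameters follow the vegetable example above):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 10_000

    def one_experiment():
        lover = rng.random(n) < 0.5                      # half the children are veggie lovers
        treat = np.zeros(n, dtype=int)
        treat[rng.permutation(n)[: n // 2]] = 1          # randomly bribe half with a cookie
        p = np.where(lover, 0.80, 0.20) + 0.05 * treat   # outcome probabilities from the example
        y = (rng.random(n) < p).astype(int)
        # Logistic regression "controlling" for the veggie-lover covariate.
        X = sm.add_constant(np.column_stack([treat, lover.astype(int)]))
        return sm.Logit(y, X).fit(disp=0).params[1]      # coefficient on the treatment variable

    estimates = [one_experiment() for _ in range(1000)]
    print(np.median(estimates))   # lands well above the true log odds ratio of about 0.2

With these parameters, the median treatment coefficient should land noticeably above 0.2, consistent with the 0.32 reported above.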

When the true effect size is large, this discrepancy between the logistic regression estimate and the true log odds might not be that big a deal: your error bars are wrong, but the effect size is estimated in the correct direction. But the results of such logistic regression analyses are regularly quoted as estimates of odds ratios (for example, look at any observational study of vaccine effectiveness). The precision of these estimates is unfortunately lacking and misleading.

Even if the model were true, the same argument shows that the coefficient of the treatment variable overestimates the log odds; this is demonstrated in the second example in the python notebook. If the model were true, one could at least say they had constructed a reasonable estimate of the change in odds for an individual under treatment. But if the model is wrong, there's nothing we can say at all other than that we have overestimated the effect size and our error bars might not contain the true odds with any certainty.

So what is the remedy here? The thing is, we already know the answer: if we randomize the assignment, we can estimate the log odds by counting the number of positive outcomes under treatment and control, and then just plugging these values into the odds ratio formula. If you do this, you find an estimate whose median is precisely equal to the true log odds. No covariate adjustment is required.
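
A sketch of that plug-in estimate, for outcome and treatment arrays like the ones simulated above:

    import numpy as np

    def plugin_log_odds_ratio(y, treat):
        # Count positive outcomes under treatment and control, then plug into the odds ratio.
        p1 = y[treat == 1].mean()
        p0 = y[treat == 0].mean()
        return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))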

This is merely one example showing why it is critical to decouple the randomness used to probe a system from the randomness inherent in the system itself. Statistical models are not necessary for statistical inference, but randomness itself is amazingly… let's say… useful for understanding natural phenomena. I will spend the next few blogs reminding myself and you faithful blog readers that proper experiments, machine learning predictions, and statistical summarizations can all be designed without statistical models of populations. Perhaps the mathematical formalizations of randomized algorithms can suggest paths to reform conventional experimental formalism.

at September 21, 2021 12:00 AM UTC

Communication complexity of Byzantine Agreement (BA) has been studied for decades. Dolev and Reischuk showed that quadratic communication is necessary for BA with perfect security, i.e., without error. Berman, Garay, and Perry showed that this lower bound is tight for the unauthenticated setting (without signatures) with $f < n/3$. However,...

at September 20, 2021 07:28 PM UTC

This post is the first of two on Information Theoretic HotStuff (IT-HS). Information Theoretic HotStuff is a Byzantine Consensus protocol in the partially synchronous model. It replaces all of HotStuff’s cryptographic signatures with simple information theoretic message passing techniques over authenticated channels. Information theoretic protocols are often easy to reason...

at September 20, 2021 12:07 PM UTC

As of Sept 17, Matt Amodio has won 23 straight games on Jeopardy and won over $800,000 in regular play. The following website is not quite up to date, but it's close: here. Of course, that website will change.

1) They refer to him as A CS grad student from a school in New Haven. My first thought was probably Yale, but whatever it is, they should just say it. I looked it up and it is Yale. So why aren't they just saying A CS grad student from Yale?  If someone works for an airline company they do not tell you which airline- prob to avoid giving that airline free publicity. But I would think a school is different. And I remember (perhaps incorrectly) that they DO say what school someone teaches at or is a student at. 

(ADDED LATER: a colleague of mine who was on Jeop (he lost his only game) tells me that YES, you're NOT ALLOWED to say the company you work for. He was from Riverdale Park, MD, which might make some people think there is a Univ of MD at Riverdale Park. He also told me that when he was on the show the following happened: on one of the shows of the game before I played, Alex was curious which LA-area restaurant somebody worked at (to see if he had eaten there--- he hadn't), and sure enough, they edited the name of the restaurant out.)

2) Longest streak: Ken Jennings: 74. Also most money in reg play: roughly 2.5 Mill

    2nd longest: James Holzhauer: 32. Also  2nd most money in reg play: roughly 2.4 Mill

    3rd longest: Matt Amodio: 23. Also  3rd most money in reg play: roughly 0.8 Mill

3) I do not think Matt will move into second place in any of these categories. He bets big on the daily doubles and it has paid off, but either (a) he will miss and it will lead to a loss, or (b) he will just not get the daily double and be against a very good opponent. Item (b) happened to James H, and the person who beat him did have a good enough win streak to be in the Tournament of Champions. I wonder if they try to stop a long streak by picking really good opponents. I also wonder if they can even tell who will be a really good opponent.

4) Matt has played in front of (or will, counting tomorrow) six hosts: Robin Roberts, LeVar Burton, David Faber, Joe Buck, Mike Richards, and Mayim Bialik. Six is a record which I suspect won't be broken, except possibly by Matt himself if he also plays in front of Ken Jennings (the rest of 2021 will be Mayim B and Ken J as hosts, see here).

5) Matt works in AI. When he gets his PhD and is on the job market will his Jeopardy success help him, hurt him, or neither?  

6) James H and Matt A are both very good at calculating how much to bet. I think Ken J is not quite as good but still good. Generally the players on Jeop are not that good at that aspect. I had the chance to ask a champion (not any of those three) why that was, and she said that most people get into it because of the trivia aspect, not the betting aspect. I wonder if, just as players now study lots of facts to prep, they will also learn how to bet better.

7) Ken J as host is a bit odd in that, if he says (as Alex T did sometimes) That category looks hard I won't believe him. I also have this suspicion that when a contestant gets something wrong Ken might be thinking what a moron; however, (a) by all accounts Ken is a nice guy, and (b)  I might be projecting. 

by gasarch (noreply@blogger.com) at September 20, 2021 12:35 AM UTC

I (Dean Doron) invite applicants for a postdoctoral position at Ben-Gurion University’s Computer Science department.
Candidates with a strong background in TCS whose research interests include pseudorandomness, complexity, coding theory, or related combinatorial constructions, are welcome to apply.
Start date is flexible. To apply, please email me your CV and 2-3 representative papers.

Website: https://www.cs.bgu.ac.il/~deand
Email: deand@bgu.ac.il

by shacharlovett at September 19, 2021 09:53 AM UTC

Mad Hatter: Now, statistics prove, prove that you’ve one birthday.
March Hare: Imagine, just one birthday every year.
Mad Hatter: Ah, but there are three hundred and sixty four unbirthdays!
March Hare: Precisely why we’re gathered here to cheer.

Ken Regan just had his birthday the other day. Today is another unbirthday.

Please join me in wishing Ken many more happy birthdays as well as many many more unbirthdays.

Ken is a Mathematician

Ken is of course a co-author of this blog. We try to cover computer science and mathematics, but we are open to other areas. Ken started his career in Mathematics; he obtained a PhD in math from Oxford University in 1986. It was under Dominic Welsh and was titled On the separation of complexity classes. Welsh's family tree is here.

Ken is a Programmer

Ken is of course famous for being one of the world's top chess cheating detectors. Recall that to cheat at chess is to use a computer program to make your moves in a game against a human opponent. This is strictly illegal.

Ken was featured on the cover of a chess magazine. His work gets a critical review here by David Barnes and Julio Hernandez-Castro in their 2015 article On the limits of engine analysis for cheating detection in chess in the journal Computers and Security.

Ken is Unique

Ken is special in many ways. He is smart, he is a wonderful partner, and he is a great friend. I would claim that he is one of the few people that have the following three properties:

  1. He is an international master in chess with rating 2372;
  2. He is a strong researcher in complexity theory;
  3. He is a terrific programmer.

The last I would claim is pretty remarkable. There are many strong chess players, and there are many of us working in complexity theory. But few have written as much production-quality code as Ken has. For his work on chess cheating he has had to write thousands of lines of code. This code must run fast, and be correct. The latter, correctness, is really important. This is because his work on detecting cheating has led players to be suspended from chess for years. It also has helped protect players who did not cheat, even though many thought they had.

Open Problems

Again I wish him a wonderful unbirthday day.

by rjlipton at September 17, 2021 01:45 PM UTC

Department of Computer Science, Reykjavik University 

PhD scholarship in Data Science for sustainability by predicting food consumption to reduce food waste 

The Department of Computer Science at Reykjavík University is looking for a PhD student to build, apply and evaluate a global model that predicts future food consumption by geographic area. The goal of the project is twofold: (1) reduce food waste in restaurants and grocery stores by using global machine learning algorithms; and (2) understand food consumption trends in restaurants and grocery stores by analyzing the global model's input data and their results. The ideal candidate should possess data science expertise, be passionate about sustainability and enthusiastic about research. The project is a collaboration with GreenBytes and will be carried out in Reykjavik, Iceland.

See https://jobs.50skills.com/ru/en/10068 for full details and for information on how to apply.

by Luca Aceto (noreply@blogger.com) at September 16, 2021 12:23 PM UTC

The School of Computer Science at the University of Sydney (Australia) is seeking applications for several academic positions at all levels. Successful applicants will have an excellent research record and be able to teach in a range of courses.

Website: https://usyd.wd3.myworkdayjobs.com/en-US/USYD_EXTERNAL_CAREER_SITE/job/Camperdown-Campus/Multiple-Continuing-Academic-Positions–School-of-Computer-Science_0085639
Email: hr.recruitmentcampaign@sydney.edu.au

by shacharlovett at September 16, 2021 04:01 AM UTC

This Erev Yom Kippur, I wish to repent for not putting enough quantum computing content on this blog. Of course, repentance is meaningless unless accompanied by genuine reform. That being the case, please enjoy the YouTube video of my ACM TechTalk from last week about quantum supremacy—although, as you’ll see if you watch the thing, I oscillate between quantum supremacy and other terms like “quantum advantage” and even “quantum supremadvantage.” This represents the first time ever that I got pushback about a talk before I’d delivered it for political reasons—the social-justice people, it turns out, are actually serious about wanting to ban the term “quantum supremacy”—but my desire to point out all the difficulties with their proposal competed with my desire not to let that issue overshadow my talk.

And there’s plenty to talk about! While regular Shtetl-Optimized readers will have already heard (or read) most of what I say, I make some new comments, including about the new paper from last week, the night before my talk (!), by the USTC group in China, where they report a quantum supremacy experiment based on random circuit sampling with a superconducting chip, this time with a record-setting 60 qubits and 24 layers of gates. On the other hand, I also stress how increasing the circuit fidelity has become a much more urgent issue than further increasing the number of qubits (or in the case of BosonSampling, the number of photons), if our goal is for these experiments to remain a couple steps ahead of classical spoofing algorithms.

Anyway, I hope you enjoy my lovingly handcrafted visuals. Over the course of this pandemic, I’ve become a full convert to writing out my talks with a stylus pen rather than PowerPointing them—not only is it faster for me, not only does it allow for continuous scrolling rather than arbitrary divisions into slides, but it enforces simplicity and concision in ways they should be enforced.

While there was only time for me to field a few questions at the end of the talk, I later supplied written answers to 52 questions (!!) that I hadn’t gotten to. If you have a question, please check to see if it’s already there, and otherwise ask away in the comments!

Thanks so much to Yan Timanovsky for inviting and organizing this talk, and to whurley for hosting it.

by Scott at September 15, 2021 11:12 PM UTC

Way back in 2005, I posed Ten Semi-Grand Challenges for Quantum Computing Theory, on at least half of which I’d say there’s been dramatic progress in the 16 years since (most of the challenges were open-ended, so that it’s unclear when to count them as “solved”). I posed more open quantum complexity problems in 2010, and some classical complexity problems in 2011. In the latter cases, I’d say there’s been dramatic progress on about a third of the problems. I won’t go through the problems one by one, but feel free to ask in the comments about any that interest you.

Shall I push my luck as a problem-poser? Shall or shall not, I have.

My impetus, this time around, was a kind invitation by Travis Humble, the editor-in-chief of the new ACM Transactions on Quantum Computing, to contribute a perspective piece to that journal on the occasion of my ACM Prize. I agreed—but only on the condition that, rather than ponderously pontificate about the direction of the field, I could simply discuss a bunch of open problems that I wanted to see solved. The result is below. It’s coming soon to an arXiv near you, but Shtetl-Optimized readers get it first.

Open Problems Related to Quantum Query Complexity (11 pages, PDF)

by Scott Aaronson

Abstract: I offer a case that quantum query complexity still has loads of enticing and fundamental open problems—from relativized QMA versus QCMA and BQP versus IP, to time/space tradeoffs for collision and element distinctness, to polynomial degree versus quantum query complexity for partial functions, to the Unitary Synthesis Problem and more.

Some of the problems on my new hit-list are ones that I and others have flogged for years or even decades, but others, as far as I know, appear here for the first time. If your favorite quantum query complexity open problem, or a problem I’ve discussed in the past, is missing, that doesn’t mean that it’s been solved or is no longer interesting—it might mean I simply ran out of time or energy before I got to it.

Enjoy! And tell me what I missed or got wrong or has a trivial solution that I overlooked.

by Scott at September 15, 2021 04:09 AM UTC

We study a simple and general template for constructing affine extractors by composing a linear transformation with resilient functions. Using this we show that good affine extractors can be computed by non-explicit circuits of various types, including AC0-Xor circuits: AC0 circuits with a layer of parity gates at the input. We also show that one-sided extractors can be computed by small DNF-Xor circuits, and separate these circuits from other well-studied classes. As a further motivation for studying DNF-Xor circuits we show that if they can approximate inner product then small AC0-Xor circuits can compute it exactly – a long-standing open problem.

at September 14, 2021 04:42 PM UTC

September 21 – December 10, 2021. Virtual. https://www.ideal.northwestern.edu/special-quarters/fall-2021/ IDEAL – The Institute for Data, Econometrics, Algorithms, and Learning (an NSF-funded collaborative institute across Northwestern, TTIC and U Chicago) – is organizing a Fall 2021 special quarter on "Robustness in High-dimensional Statistics and Machine Learning". The special-quarter activities include mini-workshops, seminars, graduate courses, and a reading group.

by shacharlovett at September 13, 2021 10:47 PM UTC


Mathematical edges to edgy laws

Cropped from Lex Fridman podcast

Scott Aaronson is one of the top researchers in complexity theory and especially in quantum computing. He is famous for more than his research; he is famous also for his blog Shtetl-Optimized. Its scope includes just about everything that touches on complexity theory, but it also includes many topics that span just about anything that is interesting these days.

For a whole week, he has had the topic: Exciting opportunities at Kabul University!

Scott is kidding about Kabul of course. Rather, his topic is the recent Texas law against abortion. Here are his initial comments:

Now, of course, Texas has effectively outlawed abortion—well, after the 6th week, which is before many women even realize they’re pregnant, and when the fetus is still the size of a grain of rice…

There are no exceptions for rape or incest, and—this is the “novel” part—there’s a bounty system, with $10,000 fines for anyone who helps in any way with an abortion, payable to anyone who snitches on them. Texas has openly defied Roe v. Wade and, for the first time in half a century, has gotten five Supreme Court justices (three appointed by Donald Trump) to go along with it. Roe v. Wade is de facto no longer the law of the United States.

We will not talk about abortion, but rather the novel aspect of the Texas law. This will not lead to quantum computing, not even along lines of a famous SMBC cartoon that Scott created with Zach Weinersmith, from which we give the second panel:


We promise it will lead to some math, however.

The Texas Law

What can we do to stop the Texas law? Many are worried by the fact that the US Supreme Court has refused to even temporarily block the Texas law, which is unheard of. Does this mean that the new Texas law is somehow valid? Does this mean that abortions will be made illegal? Not clear.

One part of the Texas law is clever—we can disagree with it and still note that the new law is clever. The law does not hand enforcement to the usual suspects, the local law enforcement authorities, who are usually the ones responsible for enforcing a law. Rather, the law insists that anyone can raise the issue that X has violated the law against an abortion. Say S raises the claim that X helped make an abortion happen. Then S gets a $10,000 payment provided the case is upheld in court. Scott calls such people the snitches.

As recognized by the Emergency Medical Network (Emnet), this step in moving the enforcement from the usual police to the general public is one of the reasons the Texas law is so hard to fight. The usual way to challenge a new law is to immediately file some counter suit stopping those charged with enforcement. This seems to be impossible with this trick. Who can be S?—by construction that could be just about anyone. There is no single body to force to defend the law in court. Clever.

The Texas Law—The Trick

We are experts in theory of computation. Can we use that ability to find a way to attack the new law? Can we see any weakness that is inherent in the law? Some trick? Or is the Texas idea somehow a breakthrough in ways to state a law against abortion?

Ken and I have an insight. A standard trick in theory is, given a new idea, to ask whether we can generalize it. If we can use the new idea to solve other problems, then we should shed some light on the new idea. If the idea is too powerful, then we essentially prove that it cannot work in general. If it does work elsewhere, then we can use it to solve previously unsolved problems.

The Texas Law for Speeding

Speeding is a serious problem just about everywhere. I live in a rural area where the limit is 35 mph.


Yet when I am driving I am often passed by cars that cross a yellow line going 50 or faster. The issue is that the laws insist that some officer witness a violation. But since that rarely happens, the speeders almost always get no ticket.

Let’s use the same trick that is used in the new Texas law. If a car races by me then I will send a digital message to the police that says: “Today I was driving my car number 1236335356 at time 11:19.42am and witnessed a car with plate number ERRW5365 speeding at 55.11mph while passing me. I was at location …

This would be sent to the police and they would then have the task of checking it out. Provided it checks out, I would be awarded a bounty of $500, say. And the speeder would get a fine and …

How could I prove the speed? I wouldn’t have time to take a video in the moment. But Ken’s smartphone—and those of many of his colleagues—have an app that is solving problems of this type 24/7. His is a University at Buffalo product but there are many similar ones. Quoting the website makes everything quickly clear:

PocketCare S is a bluetooth low energy (BLE) solution which enables smartphones to send and receive anonymous beacon signals. It checks the distance between a smartphone and another beacon (or smartphone running PocketCare S) … [and] … records the duration a close encounter with another beacon. PocketCare S is designed to report social distance information without collecting or revealing any personally identifiable information about any specific individual.

The car would only need to have a detectable beacon, not the same app—and the info giving proof of speed over said duration might already be in the authorities’ data banks even before the complaint is filed.

Bipartite Epidemiology?

Of course, what we are trying to say is that this kind of lawsuit would become a new kind of virus. It might extend to many other transgressions besides speeding. What strikes us further is that this would assume a binary structure that we do not know to exist in medical pandemics.

The first point is that the people who voted for the Texas law believe they are not the ones who would be violating it. Call them Team A. The remaining people, Team B, would want to get back at them. Maybe speeding is an equal vice for the two teams, so that wouldn’t work. We’ve seen suggestions like Team B suing over open carry of firearms. Let us just suppose Team B finds something.

The second point is that a “snitch” lawsuit is like one person infecting another. But unlike our present pandemic, it’s one where people in A can only infect those in B, and vice versa. We have a bipartite graph {(A,B)}. How does this change the math?
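
One way to start on that question is to write down a toy SIR-style model in which an “infection” (a suit) can only cross from A to B or from B to A. The sketch below is purely illustrative; every parameter is made up and nothing is calibrated to real litigation or epidemic data.

```python
import random

def bipartite_sir(n_a=1000, n_b=1000, beta=0.3, gamma=0.1,
                  initial_infected_a=5, steps=60, seed=0):
    """Discrete-time SIR-style toy model on a bipartite contact structure:
    infections only cross between Team A and Team B."""
    rng = random.Random(seed)
    sus_a, inf_a = n_a - initial_infected_a, initial_infected_a
    sus_b, inf_b = n_b, 0
    history = []
    for _ in range(steps):
        # Each infected A member "infects" someone on the other side with
        # probability beta, scaled by the fraction of B still susceptible
        # (and symmetrically for B -> A). Recovery happens at rate gamma.
        new_b = min(sus_b, sum(rng.random() < beta * sus_b / n_b
                               for _ in range(inf_a)))
        new_a = min(sus_a, sum(rng.random() < beta * sus_a / n_a
                               for _ in range(inf_b)))
        rec_a = sum(rng.random() < gamma for _ in range(inf_a))
        rec_b = sum(rng.random() < gamma for _ in range(inf_b))
        sus_a -= new_a
        sus_b -= new_b
        inf_a += new_a - rec_a
        inf_b += new_b - rec_b
        history.append((inf_a, inf_b))
    return history

# The infected counts on the two sides leapfrog each other with a one-step
# lag, a pattern a single well-mixed population cannot show.
for step, (a, b) in enumerate(bipartite_sir()[:8]):
    print(step, a, b)
```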

One issue noted by Scott in a comment is whether the law allows someone to be sued multiple times. If not, then being sued once would be like acquiring immunity. The US recognizes principles of “no double jeopardy” but the Texas law appears to avoid them.

We’ve looked for related analysis of the popular college game “Humans Versus Zombies” (HvZ). But in that game, the zombies only increase. Scientific American once published an analysis titled “Modelling a werewolf epidemic” that also mentioned zombies, but it did not say that werewolves could only infect zombies and vice versa.

Binary infection situations do of course exist when two groups of people suddenly mix, each having already acquired resistance to a disease that infects the other. There have been great tragedies when the mixing was a relatively small group from A colonizing into B. But we still do not know of models that would apply when the kinds of “infections” involved arise spontaneously en masse, in the manner of a pandemic with a high {R_0} factor, as might happen if the courts really legitimize this kind of “snitch suit” law.

Open Problems

Is this type of snitching a viable adjunct to law enforcement for speeding? Could it save lives? What about the associated dimension of propagating personal information into massive repositories?

by RJLipton+KWRegan at September 13, 2021 09:26 PM UTC

The Alon-Edmonds-Luby distance amplification procedure (FOCS 1995) is an algorithm that transforms a code with vanishing distance to a code with constant distance. AEL was invoked by Kopparty, Meir, Ron-Zewi, and Saraf (J. ACM 2017) for obtaining their state-of-the-art LDC, LCC and LTC. Cohen and Yankovitz (CCC 2021) devised a procedure that can amplify inverse-polynomial distances, exponentially extending the regime of distances that can be amplified by AEL. However, the improved procedure only works for LDC and assuming rate $1-\frac1{\mathrm{poly} \log n}$. In this work we devise a distance amplification procedure for LCC with inverse-polynomial distances even for vanishing rate $\frac1{\mathrm{poly} \log\log n}$. For LDC, we obtain a more modest improvement and require rate $1-\frac1{\mathrm{poly} \log\log n}$. Thus, the tables have turned and it is now LCC that can be better amplified. Our key idea for accomplishing this, deviating from prior work, is to tailor the distance amplification procedure to the code at hand. Our second result concerns the relation between linear LDC and LCC. We prove the existence of linear LDC that are not LCC, qualitatively extending a separation by Kaufman and Viderman (RANDOM 2010).

at September 13, 2021 02:58 PM UTC

NYU Shanghai invites applications for open rank Tenured/Tenure-Track positions in Computer Science. We seek candidates who have completed a Ph.D. in Computer Science, or closely related discipline. We welcome candidates in all subfields of CS, with particular interest in Human-Computer Interaction (HCI), Operating and Distributed Systems, Blockchain, Quantum Computing, and Deep Learning.

Website: https://apply.interfolio.com/93616
Email: shanghai.faculty.recruitment@nyu.edu

by shacharlovett at September 13, 2021 02:14 PM UTC

We suggest a general framework to study dependency schemes for dependency quantified Boolean formulas (DQBF). As our main contribution, we exhibit a new infinite collection of implication-free DQBF dependency schemes that generalise the reflexive resolution path dependency scheme. We establish soundness of all these schemes, implying that they can be used in any DQBF proof system. We further explore the power of QBF and DQBF resolution systems parameterised by implication-free dependency schemes and show that the hierarchical structure naturally present among the dependency schemes translates isomorphically to a hierarchical structure of parameterised proof systems with respect to p-simulation. As a special case, we demonstrate that our new schemes are exponentially stronger than the reflexive resolution path dependency scheme when used in Q-resolution, thus resulting in the strongest QBF dependency schemes known to date.

at September 13, 2021 10:56 AM UTC

A massive cluster-randomized controlled trial run in Bangladesh to test the efficacy of mask wearing in reducing coronavirus transmission released its initial results, and the covid pundits have been buzzing with excitement. There have already been too many hot takes on the report, but, after wrangling with the 94 pages, I came to a different conclusion than most who have used this study as evidence for their mask predilections. I worry that because of statistical ambiguity, there’s not much that can be deduced at all.

The Bangladesh mask study was a cluster-randomized controlled trial. Rather than randomizing patients, this study randomized villages. Though the sample size looked enormous (340,000 individuals), the effective number of samples was only 600 because the treatment was applied to individual villages. Villages were paired using demographic attributes and one of each pair was randomly assigned to an intervention and the other to no intervention. The 300 intervention villages received free masks, information on the importance of masking, role modeling by community leaders, and in-person reminders for 8 weeks. The 300 villages in the control group did not receive any interventions.

The goal was to determine how much these interventions reduced the number of individuals who reported symptoms, subsequently consented to antibody testing, and tested positive. The study reports the precise number of people who reported symptoms, down to the single person (13,273 in treatment, 13,893 in control). The study reports the precise number of symptomatic people who consented to blood draws (5,414 in treatment, 5,538 in control). And the study reports the precise number of blood draws that were tested for covid antibodies (5,006 in treatment, 4,971 in control). But here’s where the picture gets murky: the number of actual positive tests appears nowhere in the preprint.

The study reports that 0.76% of the people in the control villages were symptomatic and seropositive, whereas 0.69% of the people in the treatment villages were symptomatic and seropositive. This corresponds to about a 1.1-fold reduction in risk, and this result was deemed statistically significant by the authors.

Where do these seropositivity percentages come from? The paper does not make clear what is being counted. Do the authors compute the number of cases in treatment divided by the number of individuals treated? Or do they compute the prevalence in each cluster and average these up? These two estimates of prevalence can give different answers. For instance, imagine a simplified scenario with two cluster pairs. One treatment-control pair consists of two villages with 10,000 people each, and the second pair consists of two villages with 6,000 people each. In the larger treatment village, there is an outbreak with 136 cases, but in the smaller treatment village there are no cases. Among the control villages, the larger village sees 75 cases and the smaller 46 cases. If we count over individuals, the prevalence ratio is 1.1 in favor of the control arm. However, if we first compute prevalence at the village level and then average those estimates, we find a prevalence ratio of 1.1 in favor of the treatment arm. Which is better: reducing prevalence at the village level or at the population level? This question is especially difficult when the actual magnitude of the outcome is so small. In either case we are talking about a difference of 15 cases between the treatment and control villages in a population of 32,000 individuals.

[Figure: an illustration of how different summary statistics can yield different effect sizes.]
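
In code, the two summaries of the hypothetical two-pair example look like this (a minimal sketch in Python; the counts are the made-up ones from the example above, not study data):

```python
# Two ways to summarize prevalence in the hypothetical two-pair example.
treatment = [(136, 10_000), (0, 6_000)]   # (cases, population) per village
control = [(75, 10_000), (46, 6_000)]

def pooled_prevalence(villages):
    """Count over individuals: total cases divided by total population."""
    cases = sum(c for c, _ in villages)
    population = sum(n for _, n in villages)
    return cases / population

def mean_village_prevalence(villages):
    """Average the per-village prevalences, weighting villages equally."""
    return sum(c / n for c, n in villages) / len(villages)

# Pooled over individuals, the control arm looks better (ratio about 1.1)...
print(pooled_prevalence(treatment) / pooled_prevalence(control))
# ...but averaged over villages, the treatment arm looks better (ratio about 1.1).
print(mean_village_prevalence(control) / mean_village_prevalence(treatment))
```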

When effect sizes are small and sensitive to measurement, convention compels us to find refuge in an argument over statistical significance. So let’s examine how significance is derived in the working paper. The authors say that they model the count of symptomatic seropositive individuals as “a generalized linear model (GLM) with a normal family and identity link.” This is a fancy way of saying that they modeled the count as if it were sampled from a normal distribution and ran ordinary least-squares regression. Based on the captions of the tables, it appears that they modeled the rate of symptomatic seroprevalence in each village as a normal distribution whose mean is a function of the village cluster and some other covariates. They then apparently computed estimates of village-level seroprevalence under the model and averaged these estimates to yield a final estimate of seroprevalence in treatment and control.
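
For readers who want to see the equivalence, here is a minimal sketch on simulated village-level rates (made-up data and a single made-up covariate, not the study's): a Gaussian-family GLM with identity link returns exactly the OLS coefficients.

```python
# Gaussian GLM with identity link vs. ordinary least squares on the same
# simulated village-level rates: the point estimates coincide.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_villages = 300
treated = rng.integers(0, 2, n_villages)          # 1 = intervention village
baseline = rng.normal(0.008, 0.002, n_villages)   # made-up covariate
rate = (0.0076 - 0.0007 * treated + 0.1 * (baseline - 0.008)
        + rng.normal(0, 0.003, n_villages))       # symptomatic-seropositive rate

X = sm.add_constant(np.column_stack([treated, baseline]))
ols = sm.OLS(rate, X).fit()
glm = sm.GLM(rate, X, family=sm.families.Gaussian()).fit()
print(ols.params)  # identical coefficients...
print(glm.params)  # ...normal family + identity link is just least squares
```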

While the Gaussian model made the authors’ coding simple and allowed them to report results in standard econometric formatting, the model is also certainly wrong. Counts cannot be normally distributed: normal random variables can take negative values and counts cannot. Indeed, 36 out of the 300 villages had zero infections, an event that is exceedingly unlikely when the distribution is well approximated by a Gaussian. Rather than adjust their modeling assumptions, the authors simply removed these villages from the regression, leading to an overestimate of the average seroprevalence.

The report computes p-values and confidence intervals from these regressions. What exactly do these quantities mean? P-values and confidence intervals associated with a regression are valid only if the model is true. What if the model is not true? If the model is wrong, the error bars are meaningless. And if the confidence intervals and p-values are meaningless, then I’m not sure what we can conclude.

The authors provided several robustness checks to fend off claims like mine. They also estimated the effect using a non-preregistered model in which the count is Poisson distributed and found the effects were similar. Again, however, modeling infectious diseases with Poisson distributions is not realistic. A Poisson distribution models independent events that occur at some fixed rate. Such models are reasonable for noninfectious conditions such as heart attacks, but we know that infections do not occur as random independent events: interactions with other sick individuals create the complex dynamic spread and the epidemic curves we see thrown around online far too often. Mathematically, it is not surprising that the two models yield the same estimated effect sizes: they are both generalized linear models and their estimates are computed with similar algorithms. But since both models are wrong, it’s not clear why including both provides much in the way of reassurance.
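
To see why the agreement is unsurprising, here is a minimal sketch on simulated village counts. The data are generated with exactly the independence assumption criticized above, and neither the covariates nor the estimator details of the preprint are reproduced; the only point is that the two GLM families recover essentially the same prevalence ratio from the same data.

```python
# Gaussian GLM on rates vs. Poisson GLM on counts, fit to the same
# simulated villages: both land near the simulated prevalence ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
treated = np.repeat([0, 1], n // 2)
population = rng.integers(2_000, 12_000, n)
true_rate = np.where(treated == 1, 0.0069, 0.0076)   # ~1.1-fold difference
counts = rng.poisson(true_rate * population)

X = sm.add_constant(treated)

# Gaussian GLM (identity link) on village-level rates.
gauss = sm.GLM(counts / population, X, family=sm.families.Gaussian()).fit()
ratio_gaussian = (gauss.params[0] + gauss.params[1]) / gauss.params[0]

# Poisson GLM (log link) on counts, with population as exposure.
poisson = sm.GLM(counts, X, family=sm.families.Poisson(),
                 exposure=population).fit()
ratio_poisson = np.exp(poisson.params[1])

print(ratio_gaussian, ratio_poisson)   # both close to 0.0069/0.0076 ~ 0.91
```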

Rather than providing all of this statistical analysis, why not report the unadjusted counts of positive individuals for the reader to interpret, especially considering that symptoms were reported precisely down to the person?

It’s useful to compare this mask study to the RCTs for vaccines. Vaccine studies are fortunate to be the purest of randomized controlled trials. If RCTs are a “gold standard” for causal inference, then vaccine studies are the “gold standard” of RCTs. Vaccine trials are easy to blind, almost always have clinical equipoise, only measure direct effects on individuals who can be nearly uniformly sampled from the world’s population, and are trivial to verify statistically. Looking at the example of the Pfizer vaccine, the effect size was enormous (a 20x risk reduction) and the confidence intervals were just based on exact calculations for independent, binary random variables. And the CIs didn’t matter much because the effect size was so large. You could just stare at the Kaplan-Meier curve and bask in the amazing effectiveness of mRNA vaccines.
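
As an illustration of what “exact calculations for independent, binary random variables” can look like, here is a sketch using the roughly 8-versus-162 case split reported for the Pfizer trial. The numbers are used only as an example, the split between arms is treated as binomial under an assumption of roughly equal person-time, and the trial’s own primary analysis was more elaborate than this.

```python
# Exact (Clopper-Pearson) confidence interval for vaccine efficacy from the
# case split between arms, assuming roughly equal person-time in each arm.
from statsmodels.stats.proportion import proportion_confint

cases_vaccine, cases_placebo = 8, 162
total_cases = cases_vaccine + cases_placebo

# Conditional on the total number of cases, the count in the vaccine arm is
# Binomial(total_cases, theta) with efficacy VE = 1 - theta / (1 - theta).
theta_lo, theta_hi = proportion_confint(cases_vaccine, total_cases,
                                        alpha=0.05, method="beta")
ve_point = 1 - cases_vaccine / cases_placebo
ve_lo = 1 - theta_hi / (1 - theta_hi)
ve_hi = 1 - theta_lo / (1 - theta_lo)
print(f"VE ~ {ve_point:.1%}, exact 95% CI ~ ({ve_lo:.1%}, {ve_hi:.1%})")
```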

Unfortunately, of course, most effect sizes are not a factor of 20. Indeed, they are usually less than a factor of 2. As we saw in the mask study, the effect size was less than a factor of 1.1. I’m picking on the mask study only because it has been so attention-grabbing. It’s a convenient example to illustrate how statistical modeling can muddy the waters in randomized controlled trials. But it is only one of many examples I’ve come across in the past few months. If you pick a random paper out of the New England Journal of Medicine or the American Economic Review, you will likely find similar statistical muddiness.

We lose sight of the forest for the trees when we fight over p-values rather than considering effect sizes. Ernest Rutherford is famously quoted as proclaiming, “If your experiment needs statistics, you ought to have done a better experiment.” Rutherford, who discovered the structure of the atom, is famous for pioneering innovations in experimental design and measurement that amplify small effects. He was not opposed to probing the minuscule; he was interested in how best to produce empirical evidence.

I think a milder version of Rutherford’s aphorism should guide scientific investigation: If the effect size is so small that we need sophisticated statistics, maybe that means the effect isn’t real. Using sophisticated statistical scaffolding clouds our judgement. We end up using statistical methods as a crutch, not to dig signals out of noise, but to convince ourselves of signals when there are none. And this matters for recommendations on policy. If the effects of an intervention are modest at best, and only manifest themselves after statistical correction, how can we use such a study to guide policy?

Misapplication of statistics can lead to bad policy decisions. Medical devices can be approved even when the statistics are done in error. And statistical errors can even dampen confidence in effective vaccines.

Of course, there is an existential problem arguing for large effect sizes. If most effect sizes are small or zero, then most interventions are useless. And this forces us scientists to confront our cosmic impotence, which remains a humbling and frustrating experience.

I’ve probably been thinking too much about statistics and models. I’m not the first mid-career professor to realize that statistical modeling is widely misapplied (For example Doug Altman, David Freedman, or Joseph Romano, among many others). But I’m hoping to write more about this with a bit of an algorithmic lens, discussing some of the earlier worries about modeling in the process. Can we revisit suggestions for improving our evidentiary framework with contemporary computational statistics tools? Where might this lead us in both the experimental sciences and in machine learning?

at September 13, 2021 12:00 AM UTC

There is a blog called lesswrong. Many people contribute to it (how many people must contribute to a website before it stops being called a blog and starts being called... not sure what?). The theme is rationality. They (not sure who “they” are) have made a best-of collection from lesswrong which is named

A Map that Reflects the Territory (available here)

Actually, it’s not one book, but five mini-books. I quote the titles and paraphrase the first sentence of each:

Epistemology:  How we come to know the world.

Agency: The ability to take action in the world and control the future.

Coordination:  The ability of multiple agents to work together.

Curiosity: The desire to understand how the world works.

Alignment: The problem of aligning the thoughts and goals of artificial intelligences with those of humans.

I have written a review of the book. The book was my first exposure to the blog, aside from occasionally reading about it, probably on Scott’s blog.

I am posting this to both the complexity blog and lesswrong, though for lesswrong I will have a different intro, since they know lesswrong but might not know complexityblog.

My review is here.

I would appreciate intelligent comments and suggestions, which I will use to improve the review.

by gasarch (noreply@blogger.com) at September 12, 2021 07:38 PM UTC

For a function $t : 2^\star \to 1^\star$, let $C_t$ be the set of problems decidable on input $x$ in time at most $t(x)$ almost everywhere. The Union Theorem of Meyer and McCreight asserts that any union $\bigcup_{i < \omega} C_{t_i}$ for a uniformly recursive sequence of bounds $t_i$ is equal to $C_L$ for some single recursive function $L$. In particular the class PTIME of polynomial-time relations can be expressed as $C_L$ for some total recursive function $L : 2^\star \to 1^\star$. By controlling the complexity of the construction, we show that in fact PTIME equals $C_L$ for some $L$ computable in quasi-polynomial time.

at September 12, 2021 12:21 PM UTC