Theory of Computing Blog Aggregator

Just like I did last year, and the year before, I’m putting up a post to let y’all know about opportunities in our growing Quantum Information Center at UT Austin.

I’m proud to report that we’re building something pretty good here. This fall Shyam Shankar joined our Electrical and Computer Engineering (ECE) faculty to do experimental superconducting qubits, while (as I blogged in the summer) the quantum complexity theorist John Wright will join me on the CS faculty in Fall 2020. Meanwhile, Drew Potter, an expert on topological qubits, rejoined our physics faculty after a brief leave. Our weekly quantum information group meeting now regularly attracts around 30 participants—from the turnout, you wouldn’t know it’s not MIT or Caltech or Waterloo. My own group now has five postdocs and six PhD students—as well as some amazing undergrads striving to meet the bar set by Ewin Tang. Course offerings in quantum information currently include Brian La Cour’s Freshman Research Initiative, my own undergrad Intro to Quantum Information Science honors class, and graduate classes on quantum complexity theory, experimental realizations of QC, and topological matter (with more to come). We’ll also be starting an undergraduate Quantum Information Science concentration next fall.

So without further ado:

(1) If you’re interested in pursuing a PhD focused on quantum computing and information (and/or classical theoretical computer science) at UT Austin: please apply! If you want to work with me or John Wright on quantum algorithms and complexity, apply to CS (I can also supervise physics students in rare cases). Also apply to CS, of course, if you want to work with our other CS theory faculty: David Zuckerman, Dana Moshkovitz, Adam Klivans, Anna Gal, Eric Price, Brent Waters, Vijaya Ramachandran, or Greg Plaxton. If you want to work with Drew Potter on nonabelian anyons or suchlike, or with Allan MacDonald, Linda Reichl, Elaine Li, or others on many-body quantum theory, apply to physics. If you want to work with Shyam Shankar on superconducting qubits, apply to ECE. Note that the deadline for CS and physics is December 1, while the deadline for ECE is December 15.

You don’t need to ask me whether I’m on the lookout for great students: I always am! If you say on your application that you want to work with me, I’ll be sure to see it. Emailing individual faculty members is not how it works and won’t help. Admissions are extremely competitive, so I strongly encourage you to apply broadly to maximize your options.

(2) If you’re interested in a postdoc in my group, I’ll have approximately two openings starting in Fall 2020. To apply, just send me an email by January 1, 2020 with the following info:
– Your CV
– 2 or 3 of your best papers (links or PDF attachments)
– The names of two recommenders (who should email me their letters separately)

(3) If you’re on the faculty job market in quantum computing and information—well, please give me a heads-up if you’re potentially interested in Austin! Our CS, physics, and ECE departments are all open to considering additional candidates in quantum information, both junior and senior. I can’t take credit for this—it surely has to do with developments beyond my control, both at UT and beyond—but I’m happy to relay that, in the three years since I arrived in Texas, the appetite for strengthening UT’s presence in quantum information has undergone jaw-dropping growth at every level of the university.

Also, Austin-Bergstrom International Airport now has direct flights to London, Frankfurt, and (soon) Amsterdam.

Hook ’em Hadamards!

by Scott at November 12, 2019 07:02 AM UTC

Authors: Cyrille W. Combettes, Sebastian Pokutta
Download: PDF
Abstract: The approximate Carath\'eodory theorem states that given a polytope $\mathcal{P}$, each point in $\mathcal{P}$ can be approximated within $\epsilon$-accuracy in $\ell_p$-norm as the convex combination of $\mathcal{O}(pD_p^2/\epsilon^2)$ vertices, where $p\geq2$ and $D_p$ is the diameter of $\mathcal{P}$ in $\ell_p$-norm. A solution satisfying these properties can be built using probabilistic arguments [Barman, 2015] or by applying mirror descent to the dual problem [Mirrokni et al., 2017]. We revisit the approximate Carath\'eodory problem by solving the primal problem via the Frank-Wolfe algorithm, providing a simplified analysis and leading to an efficient practical method. Sublinear to linear sparsity bounds are derived naturally using existing convergence results of the Frank-Wolfe algorithm in different scenarios.
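A minimal numerical sketch of the Frank-Wolfe route (my own toy example on the probability simplex, not the authors' code): minimizing $\frac{1}{2}\|x-x^*\|_2^2$ with the simplex's linear minimization oracle adds at most one vertex per iteration, so after $T$ iterations the iterate is a convex combination of at most $T+1$ vertices.

```python
import numpy as np

def frank_wolfe_caratheodory(x_star, T):
    """Approximate x_star (a point of the probability simplex) by a convex
    combination of at most T+1 vertices, by running Frank-Wolfe on
    f(x) = 0.5 * ||x - x_star||_2^2.  The simplex's linear minimization
    oracle just picks the coordinate with the smallest gradient entry."""
    n = len(x_star)
    x = np.zeros(n); x[0] = 1.0          # start at an arbitrary vertex e_0
    weights = {0: 1.0}                    # sparse convex combination built so far
    for t in range(1, T + 1):
        grad = x - x_star                 # gradient of f at x
        i = int(np.argmin(grad))          # LMO: vertex e_i of the simplex
        gamma = 2.0 / (t + 2)             # standard step size
        x = (1 - gamma) * x
        x[i] += gamma
        weights = {j: (1 - gamma) * w for j, w in weights.items()}
        weights[i] = weights.get(i, 0.0) + gamma
    return x, weights

rng = np.random.default_rng(0)
x_star = rng.dirichlet(np.ones(1000))     # a dense target point of the simplex
x, weights = frank_wolfe_caratheodory(x_star, T=200)
print(len(weights), "vertices used, l2 error:", np.linalg.norm(x - x_star))
```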

at November 12, 2019 12:00 AM UTC

Authors: Zhuo Feng
Download: PDF
Abstract: Spectral graph sparsification aims to find ultra-sparse subgraphs whose Laplacian matrix can well approximate the original Laplacian eigenvalues and eigenvectors. In recent years, spectral sparsification techniques have been extensively studied for accelerating various numerical and graph-related applications. Prior nearly-linear-time spectral sparsification methods first extract a low-stretch spanning tree from the original graph to form the backbone of the sparsifier, and then recover a small portion of spectrally-critical off-tree edges into the spanning tree to significantly improve the approximation quality. However, it is not clear how many off-tree edges should be recovered for achieving a desired spectral similarity level within the sparsifier. Motivated by recent graph signal processing techniques, this paper proposes a similarity-aware spectral graph sparsification framework that leverages efficient spectral off-tree edge embedding and filtering schemes to construct spectral sparsifiers with a guaranteed spectral similarity (relative condition number) level. An iterative graph densification scheme is also introduced to facilitate efficient and effective filtering of off-tree edges for highly ill-conditioned problems. The proposed method has been validated using various kinds of graphs obtained from public-domain sparse matrix collections relevant to VLSI CAD and finite element analysis, as well as social and data networks frequently studied in machine learning and data mining applications. For instance, a sparse SDD matrix with 40 million unknowns and 180 million nonzeros can be solved (to a 1E-3 accuracy level) within two minutes using a single CPU core and about 6GB of memory.
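The notion of spectral similarity used above can be checked directly on small examples. The sketch below (an illustrative check only, not the paper's embedding or filtering schemes) computes the relative condition number of a graph against a candidate sparsifier by solving a generalized eigenproblem restricted to the complement of the all-ones nullspace; both graphs are assumed connected.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def laplacian(n, edges):
    """Unweighted graph Laplacian built from an edge list."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def relative_condition_number(L_g, L_s):
    """max/min generalized eigenvalue of the pencil (L_g, L_s), restricted to
    the complement of the all-ones nullspace (both graphs must be connected)."""
    n = L_g.shape[0]
    Q = null_space(np.ones((1, n)))            # orthonormal basis of 1-perp
    vals = eigh(Q.T @ L_g @ Q, Q.T @ L_s @ Q, eigvals_only=True)
    return vals[-1] / vals[0]

n = 8
tree = [(i, i + 1) for i in range(n - 1)]      # spanning-tree backbone
off_tree = [(0, 7), (0, 4), (2, 6)]
L_g = laplacian(n, tree + off_tree)            # original graph
L_s = laplacian(n, tree + off_tree[:1])        # sparsifier: tree plus one off-tree edge
print("relative condition number:", relative_condition_number(L_g, L_s))
```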

at November 12, 2019 12:00 AM UTC

Authors: Vladan Majerech
Download: PDF
Abstract: We analyze priority queues that include a DecreaseKey method in their interface. The paper is inspired by Strict Fibonacci Heaps [2], where G. S. Brodal, G. Lagogiannis, and R. E. Tarjan implemented a heap with a DecreaseKey and Meld interface in asymptotically optimal worst-case time (based on key comparisons). At the end of that paper, possible variants of structural properties and violations other than those used in their analysis are mentioned. In the main variant a lot of information is wasted during the violation reduction steps. Our goal is to concentrate on other variants and to devise a natural strategy that does not lose as much information. In other words, we try to choose among them the one that corresponds to the superexpensive comparison principle as closely as possible. The principle was described in my paper [5], but after publication I found the same ideas in [4] by H. Kaplan, R. E. Tarjan, and U. Zwick.

at November 12, 2019 12:00 AM UTC

Authors: Jungho Ahn, Eduard Eiben, O-joung Kwon, Sang-il Oum
Download: PDF
Abstract: A graph $G$ is an $\ell$-leaf power of a tree $T$ if $V(G)$ is equal to the set of leaves of $T$, and distinct vertices $v$ and $w$ of $G$ are adjacent if and only if the distance between $v$ and $w$ in $T$ is at most $\ell$. Given a graph $G$, the $3$-leaf Power Deletion problem asks whether there is a set $S\subseteq V(G)$ of size at most $k$ such that $G\setminus S$ is a $3$-leaf power of some tree $T$. We provide a polynomial kernel for this problem. More specifically, we present a polynomial-time algorithm for an input instance $(G,k)$ to output an equivalent instance $(G',k')$ such that $k'\le k$ and $G'$ has at most $O(k^{14}\log^{12}k)$ vertices.

at November 12, 2019 12:00 AM UTC

Authors: Nieves R. Brisaboa, Antonio Fariña, Adrián Gómez-Brandón, Gonzalo Navarro, Tirso V. Rodeiro
Download: PDF
Abstract: We present Dv2v, a new dynamic (one-pass) variable-to-variable compressor. Variable-to-variable compression aims at using a modeler that gathers variable-length input symbols and a variable-length statistical coder that assigns shorter codewords to the more frequent symbols. In Dv2v, we process the input text word-wise to gather variable-length symbols that can be either terminals (new words) or non-terminals, subsequences of words seen before in the input text. Those input symbols are kept in a vocabulary sorted by frequency, so they can be easily encoded with dense codes. Dv2v permits real-time transmission of data, i.e. compression/transmission can begin as soon as data become available. Our experiments show that Dv2v is able to surpass the compression ratios of v2vDC, the state-of-the-art semi-static variable-to-variable compressor, and to almost reach p7zip values. It also achieves competitive performance at both compression and decompression.

at November 12, 2019 02:32 AM UTC

Authors: Nieves R. Brisaboa, Adrián Gómez-Brandón, Gonzalo Navarro, José R. Paramá
Download: PDF
Abstract: We introduce a compressed data structure for the storage of free trajectories of moving objects (such as ships and planes) that efficiently supports various spatio-temporal queries. Our structure, dubbed GraCT, stores the absolute positions of all the objects at regular time intervals (snapshots) using a $k^2$-tree, which is a space- and time-efficient version of a region quadtree. Positions between snapshots are represented as logs of relative movements and compressed using Re-Pair, a grammar-based compressor. The nonterminals of this grammar are enhanced with MBR information to enable fast queries.

The GraCT structure of a dataset occupies less space than the raw data compressed with a powerful traditional compressor such as p7zip. Further, instead of requiring full decompression to access the data like a traditional compressor, GraCT supports direct access to object trajectories or to their positions at specific time instants, as well as spatial range and nearest-neighbor queries on time instants and/or time intervals.

Compared to traditional methods for storing and indexing spatio-temporal data, GraCT requires two orders of magnitude less space, and is competitive in query times. In particular, thanks to its compressed representation, the GraCT structure may reside in main memory in situations where any classical uncompressed index must resort to disk, thereby being one or two orders of magnitude faster.

at November 12, 2019 02:44 AM UTC

Authors: Shi-Xin Zhang
Download: PDF
Abstract: In this note, we provide a unifying framework to investigate the computational complexity of classical spin models and give a full classification of spin models in terms of system dimensions, randomness, external magnetic fields, and types of spin coupling. We further discuss the implications of NP-complete Hamiltonian models in physics and the fundamental limitations that such models impose on all numerical methods. We conclude with a brief discussion of the picture when quantum computation and quantum complexity theory are included.

at November 12, 2019 12:00 AM UTC

Authors: Nian-Ze Lee, Jie-Hong R. Jiang
Download: PDF
Abstract: Stochastic Boolean Satisfiability (SSAT) is a logical formalism to model decision problems with uncertainty, such as Partially Observable Markov Decision Process (POMDP). SSAT, however, is limited by its descriptive power within the PSPACE complexity class. More complex problems, such as the NEXPTIME-complete Decentralized POMDP (Dec-POMDP), cannot be succinctly encoded with SSAT. To provide a logical formalism of such problems, we generalize Dependency Quantified Boolean Formula (DQBF), a representative problem in the NEXPTIME-complete class, to its stochastic variant, named Dependency SSAT (DSSAT), and show that DSSAT is also NEXPTIME-complete. To demonstrate the descriptive power of DSSAT, we further establish a polynomial-time reduction from Dec-POMDP to DSSAT. Our results may encourage DSSAT solver development to enable potential broad applications.

at November 12, 2019 12:00 AM UTC

Authors: Guoxuan Zhang
Download: PDF
Abstract: This paper proposes an A*SLAM system that combines two sets of fisheye stereo cameras and uses image edges as the SLAM features. The dual fisheye stereo camera sets cover the full environmental view of the SLAM system. From each fisheye stereo image pair, a panoramic depth image can be directly extracted to initialize the SLAM features. Edges are illumination-invariant features. The paper presents an edge-based simultaneous localization and mapping process that uses the normal and inverted images interchangeably.

at November 12, 2019 12:00 AM UTC

Authors: Daniel Leivant
Download: PDF
Abstract: We propose a generic imperative programming language STR that captures PTime computations on both infinite inductive structures and families of finite structures. The approach, set up in [29] for primitive-recursive complexity, construes finite partial-functions as a universal canonical form of data, and uses structure components for loop variants. STR is obtained by the further refinement that assigns ranks to finite partial-functions, which regulate the interaction of loops, yielding programs that run in polynomial time. STR captures algorithms that have eluded ramified recurrence, and is promising as an artifact of Implicit Complexity that is amenable to static analysis implementations.

at November 12, 2019 12:00 AM UTC

Authors: Yuval Dagan, Vitaly Feldman
Download: PDF
Abstract: Local differential privacy (LDP) is a model where users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model the protocol consists of a single round in which the server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest. Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction this lower bound implies an exponential lower bound for stochastic convex optimization and specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed in \citep{SmithTU17,DanielyF18}. Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large margin hyperplane. These lower bounds also hold in the model where communication from each user is limited and follow from a lower bound on learning using non-adaptive \emph{statistical queries}.

at November 12, 2019 12:00 AM UTC

Authors: Christian Ikenmeyer, Umangathan Kandasamy
Download: PDF
Abstract: Understanding the difference between group orbits and their closures is a key difficulty in geometric complexity theory (GCT): While the GCT program is set up to separate certain orbit closures, many beautiful mathematical properties are only known for the group orbits, in particular close relations with symmetry groups and invariant spaces, while the orbit closures seem much more difficult to understand. However, in order to prove lower bounds in algebraic complexity theory, considering group orbits is not enough.

In this paper we tighten the relationship between the orbit of the power sum polynomial and its closure, so that we can separate this orbit closure from the orbit closure of the product of variables by just considering the symmetry groups of both polynomials and their representation theoretic decomposition coefficients. In a natural way our construction yields a multiplicity obstruction that is neither an occurrence obstruction, nor a so-called vanishing ideal occurrence obstruction. All multiplicity obstructions so far have been of one of these two types.

Our paper is the first implementation of the ambitious approach that was originally suggested in the first papers on geometric complexity theory by Mulmuley and Sohoni (SIAM J Comput 2001, 2008): Before our paper, all existence proofs of obstructions only took into account the symmetry group of one of the two polynomials (or tensors) that were to be separated. In our paper the multiplicity obstruction is obtained by comparing the representation theoretic decomposition coefficients of both symmetry groups.

Our proof uses a semi-explicit description of the coordinate ring of the orbit closure of the power sum polynomial in terms of Young tableaux, which enables its comparison to the coordinate ring of the orbit.

at November 12, 2019 12:00 AM UTC

Authors: Bahman Kalantari
Download: PDF
Abstract: We show that the {\it semidefinite programming} (SDP) feasibility problem is equivalent to solving a {\it convex hull relaxation} (CHR) for a system of quadratic equations. On the one hand this offers a simple description of SDP. On the other hand, this equivalence makes it possible to describe a version of the {\it Triangle Algorithm} for SDP feasibility based on solving the CHR. Specifically, the Triangle Algorithm either computes an approximation to the least-norm feasible solution of the SDP, or, using its {\it distance duality}, provides a separation when no solution within a prescribed norm exists. The worst-case complexity of each iteration is that of computing the largest eigenvalue of a symmetric matrix arising in that iteration. Alternate complexity bounds on the total number of iterations are derived. Further applications include solving an SDP optimization problem. The Triangle Algorithm thus provides an alternative to the existing interior-point algorithms for SDP. Finally, building on these results and insights, we discuss potential extensions to solving general systems of polynomial equations.

at November 12, 2019 12:00 AM UTC

Authors: Venkatesan Guruswami, Andrii Riazanov, Min Ye
Download: PDF
Abstract: Let $W$ be a binary-input memoryless symmetric (BMS) channel with Shannon capacity $I(W)$ and fix any $\alpha > 0$. We construct, for any sufficiently small $\delta > 0$, binary linear codes of block length $O(1/\delta^{2+\alpha})$ and rate $I(W)-\delta$ that enable reliable communication on $W$ with quasi-linear time encoding and decoding. Shannon's noisy coding theorem established the existence of such codes (without efficient constructions or decoding) with block length $O(1/\delta^2)$. This quadratic dependence on the gap $\delta$ to capacity is known to be best possible. Our result thus yields a constructive version of Shannon's theorem with near-optimal convergence to capacity as a function of the block length. This resolves a central theoretical challenge associated with the attainment of Shannon capacity. Previously such a result was only known for the erasure channel.

Our codes are a variant of Ar{\i}kan's polar codes based on multiple carefully constructed local kernels, one for each intermediate channel that arises in the decoding. A crucial ingredient in the analysis is a strong converse of the noisy coding theorem when communicating using random linear codes on arbitrary BMS channels. Our converse theorem shows extreme unpredictability of even a single message bit for random coding at rates slightly above capacity.

at November 12, 2019 12:00 AM UTC

Authors: Nathaniel Harms
Download: PDF
Abstract: We introduce a communication model called universal SMP, in which Alice and Bob receive a function $f$ belonging to a family $\mathcal{F}$, and inputs $x$ and $y$. Alice and Bob use shared randomness to send a message to a third party who cannot see $f, x, y$, or the shared randomness, and must decide $f(x,y)$. Our main application of universal SMP is to relate communication complexity to graph labeling, where the goal is to give a short label to each vertex in a graph, so that adjacency or other functions of two vertices $x$ and $y$ can be determined from the labels $\ell(x),\ell(y)$. We give a universal SMP protocol using $O(k^2)$ bits of communication for deciding whether two vertices have distance at most $k$ on distributive lattices (generalizing the $k$-Hamming Distance problem in communication complexity), and explain how this implies an $O(k^2\log n)$ labeling scheme for determining $\mathrm{dist}(x,y) \leq k$ on distributive lattices with size $n$; in contrast, we show that a universal SMP protocol for determining $\mathrm{dist}(x,y) \leq 2$ in modular lattices (a superset of distributive lattices) has super-constant $\Omega(n^{1/4})$ communication cost. On the other hand, we demonstrate that many graph families known to have efficient adjacency labeling schemes, such as trees, low-arboricity graphs, and planar graphs, admit constant-cost communication protocols for adjacency. Trees also have an $O(k)$ protocol for deciding $\mathrm{dist}(x,y) \leq k$ and planar graphs have an $O(1)$ protocol for $\mathrm{dist}(x,y) \leq 2$, which implies a new $O(\log n)$ labeling scheme for the same problem on planar graphs.
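For readers unfamiliar with the SMP setting, here is a toy simulation of the textbook public-coin SMP protocol for Equality, chosen as a simple illustration of the model; it is not one of the lattice or adjacency protocols of this paper.

```python
import random

def smp_equality(x, y, t=20, seed=None):
    """Simulate the classic shared-randomness SMP protocol for Equality on
    n-bit strings: for t shared random vectors r, Alice sends <x, r> mod 2 and
    Bob sends <y, r> mod 2; the referee (who sees neither input nor the
    randomness) accepts iff all t message bits agree.  When x != y the error
    probability is at most 2**-t."""
    rng = random.Random(seed)
    n = len(x)
    alice_msg, bob_msg = [], []
    for _ in range(t):
        r = [rng.randrange(2) for _ in range(n)]   # shared randomness
        alice_msg.append(sum(a * b for a, b in zip(x, r)) % 2)
        bob_msg.append(sum(a * b for a, b in zip(y, r)) % 2)
    return alice_msg == bob_msg                    # referee's decision

x = [1, 0, 1, 1, 0, 0, 1, 0]
y = list(x); y[3] ^= 1
print(smp_equality(x, x, seed=1), smp_equality(x, y, seed=1))   # True, then (w.h.p.) False
```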

at November 12, 2019 12:00 AM UTC

Authors: Nathan Keller, Ohad Klein
Download: PDF
Abstract: We prove the following conjecture, raised by Aaronson and Ambainis in 2008: Let $f:\{-1,1\}^n \rightarrow [-1,1]$ be a multilinear polynomial of degree $d$. Then there exists a variable $x_i$ whose influence on $f$ is at least $\mathrm{poly}(\mathrm{Var}(f)/d)$.

As was shown by Aaronson and Ambainis, this result implies the following well-known conjecture on the power of quantum computing, dating back to 1999: Let $Q$ be a quantum algorithm that makes $T$ queries to a Boolean input and let $\epsilon,\delta > 0$. Then there exists a deterministic classical algorithm that makes $\mathrm{poly}(T,1/\epsilon,1/\delta)$ queries to the input and that approximates $Q$'s acceptance probability to within an additive error $\epsilon$ on a $1-\delta$ fraction of inputs. In other words, any quantum algorithm can be simulated on most inputs by a classical algorithm which is only polynomially slower, in terms of query complexity.

at November 12, 2019 12:00 AM UTC

Authors: Eduard Eiben, William Lochet, Saket Saurabh
Download: PDF
Abstract: For a fixed graph $H$, the $H$-free-editing problem asks whether we can modify a given graph $G$ by adding or deleting at most $k$ edges such that the resulting graph does not contain $H$ as an induced subgraph. The problem is known to be NP-complete for all fixed $H$ with at least $3$ vertices and it admits a $2^{O(k)}n^{O(1)}$ algorithm. Cai and Cai showed that the $H$-free-editing problem does not admit a polynomial kernel whenever $H$ or its complement is a path or a cycle with at least $4$ edges or a $3$-connected graph with at least $1$ edge missing. Their results suggest that if $H$ is not an independent set or a clique, then $H$-free-editing admits polynomial kernels only for few small graphs $H$, unless $\textsf{coNP} \subseteq \textsf{NP/poly}$. Therefore, resolving the kernelization of $H$-free-editing for small graphs $H$ plays a crucial role in obtaining a complete dichotomy for this problem. In this paper, we positively answer the question of compressibility for one of the last two unresolved graphs $H$ on $4$ vertices. Namely, we give the first polynomial kernel for paw-free editing with $O(k^{6})$ vertices.

at November 12, 2019 12:00 AM UTC

Authors: Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni
Download: PDF
Abstract: Adaptive sequential decision making is one of the central challenges in machine learning and artificial intelligence. In such problems, the goal is to design an interactive policy that plans for an action to take, from a finite set of $n$ actions, given some partial observations. It has been shown that in many applications such as active learning, robotics, sequential experimental design, and active detection, the utility function satisfies adaptive submodularity, a notion that generalizes the notion of diminishing returns to policies. In this paper, we revisit the power of adaptivity in maximizing an adaptive monotone submodular function. We propose an efficient batch policy that with $O(\log n \times\log k)$ adaptive rounds of observations can achieve an almost tight $(1-1/e-\epsilon)$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential setting. To complement our results, we also show that it is impossible to achieve a constant factor approximation with $o(\log n)$ adaptive rounds. We also extend our result to the case of adaptive stochastic minimum cost coverage where the goal is to reach a desired utility $Q$ with the cheapest policy. We first prove the conjecture by Golovin and Krause that the greedy policy achieves the asymptotically tight logarithmic approximation guarantee without resorting to stronger notions of adaptivity. We then propose a batch policy that provides the same guarantee in polylogarithmic adaptive rounds through a similar information-parallelism scheme. Our results shrink the adaptivity gap in adaptive submodular maximization by an exponential factor.
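For context, the fully sequential baseline being matched is the classic greedy rule for monotone submodular maximization. Below is a toy instantiation for coverage functions, a sketch of that baseline rather than the authors' batch policy.

```python
def greedy_max_coverage(sets, k):
    """Classic greedy for monotone submodular maximization under a cardinality
    constraint, instantiated for max coverage: repeatedly pick the set with
    the largest marginal coverage.  Achieves a (1 - 1/e) approximation."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break                      # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 5}}
print(greedy_max_coverage(sets, k=2))   # picks "c" then "a"
```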

at November 12, 2019 12:00 AM UTC

Authors: Justin Y. Chen, Gregory Valiant, Paul Valiant
Download: PDF
Abstract: We introduce a framework for studying how distributional assumptions on the process by which data is partitioned into a training and test set can be leveraged to provide accurate estimation or learning algorithms, even for worst-case datasets. We consider a setting of $n$ datapoints, $x_1,\ldots,x_n$, together with a specified distribution, $P$, over partitions of these datapoints into a training set, test set, and irrelevant set. An algorithm takes as input a description of $P$ (or sample access), the indices of the test and training sets, and the datapoints in the training set, and returns a model or estimate that will be evaluated on the datapoints in the test set. We evaluate an algorithm in terms of its worst-case expected performance: the expected performance over potential test/training sets, for worst-case datapoints, $x_1,\ldots,x_n.$ This framework is a departure from more typical distributional assumptions on the datapoints (e.g. that data is drawn independently, or according to an exchangeable process), and can model a number of natural data collection processes, including processes with dependencies such as "snowball sampling" and "chain sampling", and settings where test and training sets satisfy chronological constraints (e.g. the test instances were observed after the training instances).

Within this framework, we consider the setting where datapoints are bounded real numbers, and the goal is to estimate the mean of the test set. We give an efficient algorithm that returns a weighted combination of the training set---whose weights depend on the distribution, $P$, and on the training and test set indices---and show that the worst-case expected error achieved by this algorithm is at most a multiplicative $\pi/2$ factor worse than the optimal of such algorithms. The algorithm, and its proof, leverage a surprising connection to the Grothendieck problem.

at November 12, 2019 12:00 AM UTC

Authors: Philip Bille, Jonas Ellert, Johannes Fischer, Inge Li Gørtz, Florian Kurpicz, Ian Munro, Eva Rotenberg
Download: PDF
Abstract: We present the first linear time algorithm to construct the $2n$-bit version of the Lyndon array using only $o(n)$ bits of working space. A simpler variant of this algorithm computes the plain ($n\lg n$-bit) version of the Lyndon array using only $\mathcal{O}(1)$ words of additional working space. All previous algorithms are either not linear, or use at least $n\lg n$ bits of additional working space. Also in practice, our new algorithms outperform the previous best ones by an order of magnitude, both in terms of time and space.
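For context, the Lyndon array entry $\lambda[i]$ is the length of the longest Lyndon word starting at position $i$; a standard characterization is $\lambda[i] = j - i$, where $j$ is the smallest position after $i$ whose suffix is lexicographically smaller than the suffix at $i$ (and $j = n$ if no such position exists). Below is a quadratic reference implementation of that characterization, purely for illustration; it makes no attempt at the linear time or sublinear working space of the paper.

```python
def lyndon_array(s):
    """Quadratic reference implementation of the Lyndon array: lam[i] is the
    length of the longest Lyndon word starting at position i, computed via the
    "next smaller suffix" characterization: lam[i] = j - i for the smallest
    j > i with s[j:] < s[i:], and n - i if no such j exists."""
    n = len(s)
    lam = [0] * n
    for i in range(n):
        lam[i] = n - i
        for j in range(i + 1, n):
            if s[j:] < s[i:]:
                lam[i] = j - i
                break
    return lam

print(lyndon_array("banana"))   # [1, 2, 1, 2, 1, 1]
```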

at November 12, 2019 12:00 AM UTC

Authors: Moreno Marzolla, Gabriele D'Angelo
Download: PDF
Abstract: The problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles appears frequently in the context of agent-based simulation studies. For this reason, the High Level Architecture (HLA) specification -- a standard framework for interoperability among simulators -- includes a Data Distribution Management (DDM) service whose responsibility is to report all intersections between a set of subscription and update regions. The algorithms at the core of the DDM service are CPU-intensive, and could greatly benefit from the large computing power of modern multi-core processors. In this paper we propose two parallel solutions to the DDM problem that can operate effectively on shared-memory multiprocessors. The first solution is based on a data structure (the Interval Tree) that allows concurrent computation of intersections between subscription and update regions. The second solution is based on a novel parallel extension of the Sort Based Matching algorithm, whose sequential version is considered among the most efficient solutions to the DDM problem. Extensive experimental evaluation of the proposed algorithms confirms their effectiveness in taking advantage of multiple execution units in a shared-memory architecture.
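In one dimension the Sort-Based Matching idea can be illustrated by an endpoint sweep: sort all interval endpoints, scan them in order, and report every subscription/update pair whose intervals are simultaneously open. Below is a sequential toy version (my own sketch, not the parallel extension proposed in the paper).

```python
def sort_based_matching(subscriptions, updates):
    """Report all intersecting (subscription, update) pairs of 1-D closed
    intervals [lo, hi] by sweeping sorted endpoints.  A begin event opens an
    interval; an opening interval intersects every currently open interval of
    the other kind; an end event closes the interval."""
    events = []
    for idx, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, "S", idx)); events.append((hi, 1, "S", idx))
    for idx, (lo, hi) in enumerate(updates):
        events.append((lo, 0, "U", idx)); events.append((hi, 1, "U", idx))
    events.sort()                      # begin (0) sorts before end (1) at equal coords
    open_s, open_u, matches = set(), set(), []
    for _, kind, side, idx in events:
        if kind == 0:                  # begin event
            if side == "S":
                matches += [(idx, u) for u in open_u]; open_s.add(idx)
            else:
                matches += [(s, idx) for s in open_s]; open_u.add(idx)
        else:                          # end event
            (open_s if side == "S" else open_u).discard(idx)
    return matches

subs = [(0, 5), (4, 9)]
upds = [(3, 4), (8, 12)]
print(sort_based_matching(subs, upds))   # [(0, 0), (1, 0), (1, 1)]
```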

at November 12, 2019 12:00 AM UTC

The classic TQBF problem can be viewed as a game in which two players alternate turns assigning truth values to a CNF formula's variables in a prescribed order, and the winner is determined by whether the CNF gets satisfied. The complexity of deciding which player has a winning strategy in this game is well-understood: it is NL-complete for 2-CNFs and PSPACE-complete for 3-CNFs. We continue the study of the unordered variant of this game, in which each turn consists of picking any remaining variable and assigning it a truth value. The complexity of deciding who can win on a given CNF is less well-understood; prior work by the authors showed it is in L for 2-CNFs and PSPACE-complete for 5-CNFs. We conjecture it may be efficiently solvable on 3-CNFs, and we make progress in this direction by proving the problem is in P, indeed in L, for 3-CNFs with a certain restriction, namely that each width-3 clause has at least one variable that appears in no other clause. Another (incomparable) restriction of this problem was previously shown to be tractable by Kutz.
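To make the game concrete, here is a brute-force memoized solver for the unordered variant (exponential time, purely illustrative): on each turn the player to move picks any unassigned variable and a truth value, and the satisfier, who is assumed here to move first, wins iff the CNF ends up satisfied.

```python
from functools import lru_cache

def satisfier_wins(cnf, n):
    """Decide whether the satisfier (moving first) has a winning strategy in
    the unordered CNF game: players alternately pick ANY unassigned variable
    and assign it True/False; the satisfier wins iff every clause is satisfied
    once all variables are set.  cnf is a list of clauses, each a tuple of
    nonzero ints (+v = variable v, -v = its negation).  Brute force."""

    @lru_cache(maxsize=None)
    def solve(assignment, satisfier_to_move):
        if all(a is not None for a in assignment):          # game over: evaluate CNF
            return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in cl)
                       for cl in cnf)
        outcomes = []
        for v in range(n):
            if assignment[v] is None:
                for val in (True, False):
                    nxt = list(assignment); nxt[v] = val
                    outcomes.append(solve(tuple(nxt), not satisfier_to_move))
        return any(outcomes) if satisfier_to_move else all(outcomes)

    return solve(tuple([None] * n), True)

# (x1 or x2) and (not x1 or x2): the satisfier wins by setting x2 = True first.
print(satisfier_wins([(1, 2), (-1, 2)], n=2))   # True
```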

at November 11, 2019 10:26 PM UTC

We show that assuming the strong exponential-time hypothesis (SETH), there are no non-trivial algorithms for the nearest codeword problem (NCP), the minimum distance problem (MDP), or the nearest codeword problem with preprocessing (NCPP) on linear codes over any finite field. More precisely, we show that there are no NCP, MDP, or NCPP algorithms running in time $q^{(1-\epsilon)n}$ for any constant $\epsilon>0$ for codes with $q^n$ codewords. (In the case of NCPP, we assume non-uniform SETH.) We also show that there are no sub-exponential-time algorithms for $\gamma$-approximate versions of these problems for some constant $\gamma > 1$, under different versions of the exponential-time hypothesis.

at November 11, 2019 07:23 PM UTC

Applications are invited for multiple tenure-track positions at all levels across all areas of theoretical computer science. Our department is looking to grow rapidly in several areas, and theory is one of them.

Website: https://academicjobsonline.org/ajo/jobs/14482
Email: ablanca@cse.psu.edu

by shacharlovett at November 11, 2019 06:50 PM UTC

Many discrete minimization problems, including various versions of the shortest path problem, can be efficiently solved by dynamic programming (DP) algorithms that are ``pure'' in that they only perform basic operations, such as $\min$, $\max$, $+$, but no conditional branchings via if-then-else in their recursion equations. It is known that any pure $(\min,+)$ DP algorithm solving the minimum weight spanning tree problem on undirected $n$-vertex graphs must perform at least $2^{\Omega(\sqrt{n})}$ operations. We show that this problem \emph{can} be solved by a pure $(\min,\max,+)$ DP algorithm performing only $O(n^3)$ operations. The algorithm is essentially a $(\min,\max)$ algorithm: addition operations are only used to output the final values. The presence of both $\min$ and $\max$ operations means that now DP algorithms can sort: this explains the title of the paper.
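To see what a pure $(\min,\max)$ recursion can do here, recall the cycle property: with distinct edge weights, an edge $\{u,v\}$ belongs to the minimum spanning tree exactly when its weight equals the minimax (bottleneck) path weight between $u$ and $v$. The sketch below runs a standard Floyd-Warshall-style $(\min,\max)$ recursion and uses addition only to report the final MST weight; it is offered as an illustration of the idea, not as a reconstruction of the paper's algorithm.

```python
import math

def mst_weight_via_minmax_dp(n, edges):
    """edges: dict {(u, v): w} with u < v and DISTINCT weights.
    B[u][v] = min over all u-v paths of the maximum edge weight on the path,
    computed by a Floyd-Warshall-style pure (min, max) DP.  By the cycle
    property, edge (u, v) is in the MST iff w(u, v) == B[u][v]; addition is
    used only to sum those weights at the end."""
    B = [[math.inf] * n for _ in range(n)]
    for u in range(n):
        B[u][u] = 0
    for (u, v), w in edges.items():
        B[u][v] = B[v][u] = w
    for k in range(n):
        for i in range(n):
            for j in range(n):
                B[i][j] = min(B[i][j], max(B[i][k], B[k][j]))
    return sum(w for (u, v), w in edges.items() if w == B[u][v])

edges = {(0, 1): 4, (1, 2): 1, (0, 2): 3, (2, 3): 2, (1, 3): 5}
print(mst_weight_via_minmax_dp(4, edges))   # 1 + 2 + 3 = 6
```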

at November 11, 2019 06:19 PM UTC

Microsoft Research has multiple research intern positions in the Economics and Computation group and Microeconomics group for the Summer of 2020. Positions are available in both the New England and New York City labs.

Research areas include market design, algorithmic game theory, social network theory, prediction markets, connections between economics and machine learning, applied and theoretical microeconomics, public economics, industrial organization, behavioral economics, and applied and theoretical econometrics (including ALICE).

Potential mentors in MSR NE include  Hunt Allcott, Greg Lewis, Brendan Lucier, Markus Mobius, and Vasilis Syrgkanis.  Potential mentors in MSR NYC include Nicole Immorlica, David Rothschild, Alex Slivkins and Jenn Wortman Vaughan.

Candidates must be currently enrolled in a PhD program in Computer Science, Economics, or a related field.  Please apply by December 13 for full consideration.  The application for MSR NE can be found here; the application for MSR NYC can be found here.  Candidates are encouraged to apply to both postings.

by Yannai A. Gonczarowski at November 11, 2019 06:08 PM UTC


Models of the primes

[ Montreal ]

Andrew Granville writes brilliant papers that explain hard results in number theory. He also proves hard results in number theory.

Today, Ken and I use the famous Goldbach conjecture to discuss a third rail: how to identify which results “should be” true even though they have been too hard to prove.

Granville has just recently published a graphic novel, Prime Suspects: The Anatomy of Integers and Permutations, with his sister Jennifer Granville and illustrator Robert Lewis. It features constables named Green and Tao, a detective Jack von Neumann, and students named Emmy Germain and Sergei Langer among other allusions to real (and complex) mathematicians. It grew out of a 2009 musical play that premiered at IAS.

The driver of the plot is a deep connection between the frequency of primes below a number {x} and that of permutations in {S_N} that are “prime” in the sense of having only one cycle. Substituting {\log(x)} for {N} and vice-versa tends to create correspondences of known results in number theory vis-à-vis permutation group theory. See this MAA review. Going beyond the known theorems described in the novel, we wonder how far such heuristic sleuthing methods can go on long-unsolved cases.

The Goldbach Reminder

The statement we know as the (Strong) Goldbach Conjecture is that every even number {n \ge 4} can be written as the sum of two prime numbers. It was made in 1742 by Christian Goldbach. He wrote to Leonhard Euler:

“Every integer that can be written as the sum of two primes can also be written as the sum of as many primes as desired, until all terms are {1}.”

Well, that is not what we call Goldbach’s conjecture. He and many others at the time considered {1} to be a prime number. What he’s getting at can be seen from this snippet of his letter:

Above his sums, Goldbach put an asterisk * pointing to the note in the margin, in which he asserts his conjecture:

“Every integer greater than 2 can be written as the sum of three primes.”

Wait—that is not the Goldbach conjecture either. It is the “Weak” one and was apparently proved in 2013 by Harald Helfgott. We discussed this in a 2014 post whose larger theme we are continuing here. The weak conjecture also implies the first conjecture above, but not the strong one.

But what Goldbach seems to be driving at with his drawing of sums is having one of the “primes” be {1}. Then the strong conjecture is needed. Euler pointed this out in his reply to Goldbach’s letter. But Euler, who was a saint in many ways, charitably reminded Goldbach of a communication earlier that year when Goldbach had observed that his first conjecture followed from the strong statement. Euler went on to say:

“That every even number should be a sum of two primes I hold to be a completely certain theorem, irrespective of my not being able to prove it.”

Ken has translated this a little differently from Wikipedia’s article and its source, reading into Euler’s words the stance of truth shining apart from proof. How one can justify this stance is what we want to discuss.

A Curious Conjecture

The conjecture is curious on several fronts. For one, it is usually said to be “obviously correct.” It has been checked up to about {10^{18}} by computation. Many open conjectures in number theory are likely to be true, but few are claimed to be “true” with such a strong bias, and none is regarded as being as likely as the Goldbach.
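Checking the conjecture for small numbers is a pleasant exercise. Here is a tiny sieve-based script in that spirit (nothing like the clever methods used to reach {10^{18}}, of course):

```python
def goldbach_check(limit):
    """Verify that every even number 4 <= n <= limit is a sum of two primes,
    using a simple sieve of Eratosthenes.  Returns the first counterexample
    found, or None."""
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    for n in range(4, limit + 1, 2):
        if not any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_check(100000))   # None: no counterexample below 100000
```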

In 1975, Hugh Montgomery and Robert Vaughan proved that the Goldbach is true for most even numbers. That is, the number of even numbers {x} less than some {N} that are not sums of two primes grows like {o(N)}. Thus if one picks a random even number {x}, it is likely to be the sum of two primes. Here the “likely” is a mathematical certainty.

How do we “know” that it is likely to be true? One source is the method of prime models. Primes are quite mysterious and hard to understand. So there are heuristic models that suggest we think of the primes as “random”. Of course this is silly, since the primes are a deterministic fixed sequence of numbers. But the hope is that the following is true.

If {\Gamma} is a statement about the primes that holds with high probability in the random model, then it is true of the actual primes.

Of course this is nonsense.

But it is interesting nonsense. Harald Cramér has a model that is simple. Granville added some refinements to this model here and here. More recently William Banks, Kevin Ford, and Terence Tao have given a new model for the primes here.

These models are useful for making and thinking about number theory conjectures. Perhaps one day they will be able to really be used to determine truth. They are certainly good heuristics to have when studying the prime numbers. We are jealous. In complexity theory it would be wonderful to have anything like these models. Perhaps {\dots}

A Model Example

Cramér’s model is simple to state. Imagine that the primes {\cal P} are replaced by a random set {\cal R}, formed by placing each {n} into {\cal R} with probability {1/\ln n}, with these choices made independently. The Fermat numbers are the numbers

\displaystyle  2^{2^{n}} + 1.

The first of these

\displaystyle  3, 5, 17, 257, 65537

are prime. Fermat thought this pattern continued, but it does not. Euler showed that the next one is composite:

\displaystyle  641 \times 6,700,417.

An interesting open problem is whether there are any more prime Fermat numbers. Many believe that there are no more, or at most finitely many in total. Let’s use the model to understand the Fermat numbers:

\displaystyle\sum_{n=0}^{\infty} \frac{1}{\ln F_{n}} = \frac{1}{\ln 2} \sum_{n=0}^{\infty} \frac{1}{\log_{2}\left(2^{2^{n}}+1 \right)} \le \frac{1}{\ln 2} \sum_{n=0}^{\infty} 2^{-n}  = \frac{2}{\ln 2}.

Therefore, in the model, the expected total number of Fermat primes is finite, at most {2/\ln 2 \approx 2.89}. Of course this is assuming the model is predictive.
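Numerically the heuristic is easy to evaluate: the expected number of Fermat primes under the model is {\sum_n 1/\ln F_n}, which is dominated by its first few terms. A small script that just evaluates the sum (under the model's assumptions):

```python
import math

# Expected number of Fermat primes F_n = 2^(2^n) + 1 under Cramer's model,
# where each integer m is "prime" independently with probability 1/ln m.
# Since ln F_n is essentially 2^n * ln 2, the series is dominated by a
# geometric series, whose sum gives the 2/ln 2 bound from the display above.
exact_terms = sum(1 / math.log(2 ** (2 ** n) + 1) for n in range(15))
tail_bound = sum(2 ** (-n) / math.log(2) for n in range(15, 60))
print("expected count <=", exact_terms + tail_bound)   # roughly 2.25
print("geometric bound  =", 2 / math.log(2))           # 2 / ln 2, about 2.885
```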

Proved?

Our friends at Wikipedia say:

As with many famous conjectures in mathematics, there are a number of purported proofs of the Goldbach conjecture, none of which are accepted by the mathematical community.

Try a Google search yourself for “Goldbach conjecture proved”. The top hits include several “proofs” that the conjecture is true. The proofs are all short and simple, and all are believed to be wrong. I find it interesting that many of them use a random-like argument. The trouble is that the above models are only heuristics, so such proofs are necessarily incomplete.

Open Problems

Can we imagine getting heuristic models for complexity theory? For quantum algorithms perhaps. What would such heuristic models even look like? We wonder.

by rjlipton at November 11, 2019 04:25 PM UTC

I often give two versions of an exam and TELL THE STUDENTS I am doing this so that they don't even try to cheat. I've even had two different classes take the midterm at the same time, same room, every other seat, so the person next to you is in a different course. And I TELL THE STUDENTS that I am doing this.  A colleague of mine says I shouldn't TELL THE STUDENTS. Here are our arguments

1) Don't tell: students cheat a lot and this is a way to catch them.

2) Tell:  Dealing with cheating distracts from our mission of teaching so best to be preventative so it does not happen. Less noble- tell them so that you don't have to deal with the cheating issue.

I have heard of the following case at a diff school some years ago and want your take on it:
there was one question on the midterm that was different on the two exams- the prof changed the key number, but it was really the same question. The prof was in a hurry for some reason and FORGOT TO TELL THE STUDENTS. You can probably guess what happened next, but not what happened after that.

One of the students' exams had the solution to THE OTHER PROBLEM on it. Clearly cheating. When called in, the student said:

Since you didn't tell us that they were different exams the cheating claim is unfair!

They DID admit their guilt, but they DID NOT have any contrition.

 Options for what penalty to go for:

1) A 0 on the exam itself

2) An F in the course

3) A notation on the transcript indicating Failed-because-cheated. I don't know what that notation was at the school where the story took place, but at UMCP it's XF. (Side Note- it's not clear that someone outside of UMCP who looks at a transcript and sees an XF will know what that means. But the F part makes it look bad.)

4) Expulsion from school. (This might not be the prof's call- it may depend on whether it's a first offense.)

The lack of contrition bothers me, though the prof who told me the story said that the student may have said it out of shock- the first thing that came into their mind. I asked the prof how the student was doing in the class and the prof said, CORRECTLY, that that is irrelevant.

SO- what penalty would you go for?

The professor went for XF. The student, at the hearing, once again said


Since you didn't tell us that they were different exams the cheating claim is unfair!

The professor told me that he thinks the student was trying to claim it was entrapment, though he had a hard time expressing this coherently. If the student had been a coherent thinker, he probably wouldn't have needed to cheat.

He got the equivalent of an XF.

But here is my real question: Should we TELL THE STUDENTS that they are different exams (I think yes) or
should we NOT tell them so we can catch them?

by GASARCH (noreply@blogger.com) at November 11, 2019 03:09 PM UTC

Authors: Jacob Holm, Eva Rotenberg
Download: PDF
Abstract: Given a dynamic graph subject to insertions and deletions of edges, a natural question is whether the graph presently admits a planar embedding. We give a deterministic fully-dynamic algorithm for general graphs, running in amortized $O(\log^3 n)$ time per edge insertion or deletion, that maintains a bit indicating whether or not the graph is presently planar. This is an exponential improvement over the previous best algorithm [Eppstein, Galil, Italiano, Spencer, 1996] which spends amortized $O(\sqrt{n})$ time per update.
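For a sense of scale, the naive alternative is to re-run a static planarity test after every update, which costs roughly linear time per operation rather than polylogarithmic amortized time. Below is a baseline sketch using networkx (illustrative only; it has nothing to do with the paper's data structure).

```python
import networkx as nx

class NaiveDynamicPlanarity:
    """Maintain a graph under edge insertions/deletions and answer 'is it
    planar?' by rerunning a static planarity test after each update.
    Linear work per operation: the baseline the dynamic algorithm improves on."""
    def __init__(self):
        self.g = nx.Graph()
        self.planar = True
    def insert(self, u, v):
        self.g.add_edge(u, v)
        self.planar, _ = nx.check_planarity(self.g)
        return self.planar
    def delete(self, u, v):
        self.g.remove_edge(u, v)
        self.planar, _ = nx.check_planarity(self.g)
        return self.planar

d = NaiveDynamicPlanarity()
for u in range(5):
    for v in range(u + 1, 5):
        d.insert(u, v)
print(d.planar)          # False: K5 is not planar
print(d.delete(0, 1))    # True: K5 minus an edge is planar
```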

at November 11, 2019 12:00 AM UTC

Authors: Jacob Fox, Jonathan Tidor, Yufei Zhao
Download: PDF
Abstract: We prove an arithmetic analog of the induced graph removal lemma for complexity 1 patterns over finite fields. Informally speaking, we show that given a fixed collection of $r$-colored complexity 1 arithmetic patterns over $\mathbb F_q$, every coloring $\phi \colon \mathbb F_q^n \setminus\{0\} \to [r]$ with $o(1)$ density of every such pattern can be recolored on an $o(1)$-fraction of the space so that no such pattern remains.

at November 11, 2019 12:00 AM UTC

Authors: Marc Khoury, Jonathan Richard Shewchuk
Download: PDF
Abstract: How good is a triangulation as an approximation of a smooth curved surface or manifold? We provide bounds on the {\em interpolation error}, the error in the position of the surface, and the {\em normal error}, the error in the normal vectors of the surface, as approximated by a piecewise linearly triangulated surface whose vertices lie on the original, smooth surface. The interpolation error is the distance from an arbitrary point on the triangulation to the nearest point on the original, smooth manifold, or vice versa. The normal error is the angle separating the vector (or space) normal to a triangle from the vector (or space) normal to the smooth manifold (measured at a suitable point near the triangle). We also study the {\em normal variation}, the angle separating the normal vectors (or normal spaces) at two different points on a smooth manifold. Our bounds apply to manifolds of any dimension embedded in Euclidean spaces of any dimension, and our interpolation error bounds apply to simplices of any dimension, although our normal error bounds apply only to triangles. These bounds are expressed in terms of the sizes of suitable medial balls (the {\em empty ball size} or {\em local feature size} measured at certain points on the manifold), and have applications in Delaunay triangulation-based algorithms for provably good surface reconstruction and provably good mesh generation. Our bounds have better constants than the prior bounds we know of---and for several results in higher dimensions, our bounds are the first to give explicit constants.

at November 11, 2019 12:00 AM UTC

Authors: Eugenio Angriman, Alexander van der Grinten, Henning Meyerhenke
Download: PDF
Abstract: In network analysis and graph mining, closeness centrality is a popular measure to infer the importance of a vertex. Computing closeness efficiently for individual vertices has received considerable attention. The NP-hard problem of group closeness maximization, in turn, is more challenging: the objective is to find a vertex group that is central as a whole, and state-of-the-art heuristics for it do not yet scale to very big graphs.

In this paper, we present new local search heuristics for group closeness maximization. By using randomized approximation techniques and dynamic data structures, our algorithms are often able to perform locally optimal decisions efficiently. The final result is a group with high (but not optimal) closeness centrality.

We compare our algorithms to the current state-of-the-art greedy heuristic both on weighted and on unweighted real-world graphs. For graphs with hundreds of millions of edges, our local search algorithms take only around ten minutes, while greedy requires more than ten hours. Overall, our new algorithms are between one and two orders of magnitude faster, depending on the desired group size and solution quality. For example, on weighted graphs and $k = 10$, our algorithms yield solutions of $12.4\%$ higher quality, while also being $793.6\times$ faster. For unweighted graphs and $k = 10$, we achieve solutions within $99.4\%$ of the state-of-the-art quality while being $127.8\times$ faster.
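For readers unfamiliar with the objective, here is a tiny, unoptimized version of the greedy approach the paper compares against, using networkx BFS distances on an unweighted connected graph (a toy sketch, not the authors' local-search algorithms or their tuned greedy implementation; the closeness formula used is one common convention).

```python
import networkx as nx

def greedy_group_closeness(G, k):
    """Greedy heuristic for group closeness maximization on a connected
    unweighted graph: repeatedly add the vertex that most reduces the total
    shortest-path distance from the group to all other vertices.  Group
    closeness is reported as (n - |S|) / sum of those distances."""
    n = G.number_of_nodes()
    dist_to_group = {v: float("inf") for v in G}
    S = set()
    for _ in range(k):
        best, best_total = None, None
        for u in G:
            if u in S:
                continue
            du = nx.single_source_shortest_path_length(G, u)
            total = sum(min(dist_to_group[v], du[v]) for v in G)
            if best_total is None or total < best_total:
                best, best_total = u, total
        S.add(best)
        du = nx.single_source_shortest_path_length(G, best)
        for v in G:
            dist_to_group[v] = min(dist_to_group[v], du[v])
    return S, (n - len(S)) / sum(dist_to_group[v] for v in G if v not in S)

G = nx.karate_club_graph()
print(greedy_group_closeness(G, k=3))
```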

at November 11, 2019 11:25 PM UTC

Authors: Guilherme C. M. Gomes, Matheus R. Guedes, Vinícius F. dos Santos
Download: PDF
Abstract: An $n$-vertex graph is equitably $k$-colorable if there is a proper coloring of its vertices such that each color is used either $\left\lfloor{n/k}\right\rfloor$ or $\left\lceil{n/k}\right\rceil$ times. While classic Vertex Coloring is fixed parameter tractable under well established parameters such as pathwidth and feedback vertex set, equitable coloring is W[1]-hard. We prove that Equitable Coloring is fixed parameter tractable when parameterized by distance to cluster or co-cluster graphs, improving on the FPT algorithm of Fiala et al. (2011) parameterized by vertex cover. In terms of intractability, we adapt the proof of Fellows et al. (2011) to show that Equitable Coloring is W[1]-hard when simultaneously parameterized by distance to disjoint paths and number of colors. We also revisit the literature and derive other results on the parameterized complexity of the problem through minor reductions or other simple observations.

at November 11, 2019 12:00 AM UTC

Authors: Marco Gaboardi, Kobbi Nissim, David Purser
Download: PDF
Abstract: We study the problem of verifying differential privacy for straight line programs with probabilistic choice. Programs in this class can be seen as randomized Boolean circuits. We focus on two different questions: first, deciding whether a program satisfies a prescribed level of privacy; second, approximating the privacy parameters a program realizes.

We show that the problem of deciding whether a program satisfies $\varepsilon$-differential privacy is $coNP^{\#P}$-complete. In fact, this is the case when either the input domain or the output range of the program is large. Further, we show that deciding whether a program is $(\varepsilon,\delta)$-differentially private is $coNP^{\#P}$-hard, and in $coNP^{\#P}$ for small output domains, but always in $coNP^{\#P^{\#P}}$. Finally, we show that the problem of approximating the level of differential privacy is both $NP$-hard and $coNP$-hard.
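The decision problem is easy to state concretely for tiny programs: enumerate the output distribution on every pair of neighboring inputs and check that all probability ratios are at most $e^\varepsilon$. Below is a brute-force checker of this flavor, shown on bitwise randomized response; it is only an illustration of the decision problem, not the paper's $coNP^{\#P}$ machinery.

```python
import itertools, math

def is_eps_dp(program, n_bits, eps):
    """Brute-force check of pure epsilon-DP for a 'program' mapping an n-bit
    input to a finite output distribution given as a dict {output: prob}.
    Neighboring inputs differ in exactly one bit.  Exponential in n_bits --
    this only illustrates the decision problem, not an efficient procedure."""
    bound = math.exp(eps)
    for x in itertools.product([0, 1], repeat=n_bits):
        for i in range(n_bits):
            y = list(x); y[i] ^= 1; y = tuple(y)
            px, py = program(x), program(y)
            for o in set(px) | set(py):
                if px.get(o, 0.0) > bound * py.get(o, 0.0):
                    return False
    return True

def randomized_response(x, p=0.75):
    """Output distribution of bitwise randomized response: each input bit is
    reported truthfully with probability p and flipped with probability 1-p."""
    dist = {}
    for out in itertools.product([0, 1], repeat=len(x)):
        dist[out] = math.prod(p if o == b else 1 - p for o, b in zip(out, x))
    return dist

# Flipping one input bit changes each output's probability by a factor of at
# most p/(1-p) = 3, so the mechanism is ln(3)-DP (about 1.0986) but not 1.0-DP.
print(is_eps_dp(randomized_response, n_bits=2, eps=1.1))   # True
print(is_eps_dp(randomized_response, n_bits=2, eps=1.0))   # False
```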

at November 11, 2019 11:22 PM UTC

Authors: Miguel Romero, Marcin Wrochna, Stanislav Živný
Download: PDF
Abstract: We identify a sufficient condition, treewidth-pliability, that gives a polynomial-time approximation scheme (PTAS) for a large class of Max-2-CSPs parametrised by the class of allowed constraint graphs (with arbitrary constraints on an unbounded alphabet). Our result applies more generally to the maximum homomorphism problem between two rational-valued structures.

The condition unifies the two main approaches for designing PTASes. One is Baker's layering technique, which applies to sparse graphs such as planar or excluded-minor graphs. The other is based on Szemer\'{e}di's regularity lemma and applies to dense graphs. Albeit with some limitations, we extend the applicability of both techniques to new classes of Max-CSPs.

Treewidth-pliability turns out to be a robust notion that can be defined in several equivalent ways, including characterisations via size, treedepth, or the Hadwiger number. We show connections to the notions of fractional-treewidth-fragility from structural graph theory, hyperfiniteness from the area of property testing, and regularity partitions from the theory of dense graph limits. These may be of independent interest. In particular we show that a monotone class of graphs is hyperfinite if and only if it is fractionally-treewidth-fragile and has bounded degree.

at November 11, 2019 12:00 AM UTC

Authors: Miguel E. Coimbra, Alexandre P. Francisco, Luís M. S. Russo, Guillermo de Bernardo, Susana Ladra, Gonzalo Navarro
Download: PDF
Abstract: We address the problem of representing dynamic graphs using $k^2$-trees. The $k^2$-tree data structure is one of the succinct data structures proposed for representing static graphs, and binary relations in general. It relies on compact representations of bit vectors. Hence, by relying on compact representations of dynamic bit vectors, we can also represent dynamic graphs. In this paper we follow instead the ideas by Munro {\em et al.}, and we present an alternative implementation for representing dynamic graphs using $k^2$-trees. Our experimental results show that this new implementation is competitive in practice.
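As background, a static $k^2$-tree stores a square adjacency matrix as one bitmap per level: the matrix is split into $k \times k$ submatrices, a bit records whether each submatrix contains any 1, and only nonempty submatrices are subdivided further. Below is a compact sketch of that static construction for $k = 2$ (illustration only, not the dynamic structure studied in the paper).

```python
import numpy as np

def k2_tree_levels(matrix, k=2):
    """Level-wise bitmaps of a k^2-tree for a square 0/1 matrix whose side is
    a power of k (and at least k).  At each level, every surviving block is
    split into k x k submatrices; a 1-bit means the submatrix contains some 1
    and, if larger than a single cell, is subdivided at the next level."""
    levels, frontier = [], [np.asarray(matrix)]
    while frontier:
        bits, children = [], []
        for block in frontier:
            step = block.shape[0] // k
            for i in range(k):
                for j in range(k):
                    sub = block[i * step:(i + 1) * step, j * step:(j + 1) * step]
                    nonempty = int(sub.any())
                    bits.append(nonempty)
                    if nonempty and step > 1:
                        children.append(sub)
        levels.append(bits)
        frontier = children
    return levels

A = np.zeros((8, 8), dtype=int)
A[0, 1] = A[1, 0] = A[6, 7] = 1
for level in k2_tree_levels(A):
    print(level)
```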

at November 11, 2019 11:42 PM UTC

Authors: Christopher Harshaw, Fredrik Sävje, Daniel Spielman, Peng Zhang
Download: PDF
Abstract: The paper introduces a class of experimental designs that allows experimenters to control the robustness and efficiency of their experiments. The designs build on a recently introduced algorithm in discrepancy theory, the Gram-Schmidt walk. We provide a tight analysis of this algorithm, allowing us to prove important properties of the designs it produces. These designs aim to simultaneously balance all linear functions of the covariates, and the variance of an estimator of the average treatment effect is shown to be bounded by a quantity that is proportional to the loss function of a ridge regression of the potential outcomes on the covariates. No regression is actually conducted, and one may see the procedure as regression adjustment by design. The class of designs is parameterized so as to give experimenters control over the worst-case performance of the treatment effect estimator. Greater covariate balance is attained by allowing for a less robust design in terms of worst-case variance. We argue that the trade-off between robustness and efficiency is an inherent aspect of experimental design. Finally, we provide non-asymptotic tail bounds for the treatment effect estimator under the class of designs we describe.

at November 11, 2019 11:25 PM UTC

Authors: Rong Ge, Holden Lee, Jianfeng Lu
Download: PDF
Abstract: Estimating the normalizing constant of an unnormalized probability distribution has important applications in computer science, statistical physics, machine learning, and statistics. In this work, we consider the problem of estimating the normalizing constant $Z=\int_{\mathbb{R}^d} e^{-f(x)}\,\mathrm{d}x$ to within a multiplication factor of $1 \pm \varepsilon$ for a $\mu$-strongly convex and $L$-smooth function $f$, given query access to $f(x)$ and $\nabla f(x)$. We give both algorithms and lower bounds for this problem. Using an annealing algorithm combined with a multilevel Monte Carlo method based on underdamped Langevin dynamics, we show that $\widetilde{\mathcal{O}}\Bigl(\frac{d^{4/3}\kappa + d^{7/6}\kappa^{7/6}}{\varepsilon^2}\Bigr)$ queries to $\nabla f$ are sufficient, where $\kappa= L / \mu$ is the condition number. Moreover, we provide an information-theoretic lower bound, showing that at least $\frac{d^{1-o(1)}}{\varepsilon^{2-o(1)}}$ queries are necessary. This provides a first nontrivial lower bound for the problem.

at November 11, 2019 12:00 AM UTC

Authors: Jason Bentley, Daniel Gibney, Sharma V. Thankachan
Download: PDF
Abstract: We present the first set of results on the computational complexity of minimizing BWT-runs via alphabet reordering. We prove that the decision version of this problem is NP-complete and cannot be solved in time $2^{o(\sigma)}n$ unless the Exponential Time Hypothesis fails, where $\sigma$ is the size of the alphabet. Moreover, we show that optimization variations of this problem yield strong inapproximability results. In doing so we relate two previously disparate topics: the size of a path cover of a graph and the number of runs in the BWT of a text. This provides a surprising connection between problems on graphs and string compression. As a result we are able to prove (all assuming P $\neq$ NP): (i) No PTAS exists if we define the cost of a solution as exactly the number of runs exceeding $\sigma$; (ii) For all $\delta > 0$, no polytime $\epsilon n^{1/2}$-approximation algorithm exists for $\epsilon > 0$ small enough if we consider the number of runs exceeding $(1+\delta)\sigma$ as the cost of a solution. In this case the problem is APX-hard as well. To the best of our knowledge these are the first ever inapproximability results pertaining to the BWT. In addition, by relating recent results in the field of dictionary compression, we demonstrate that if we define cost purely as the number of runs, we obtain a $\log^2 n$-approximation algorithm. Finally, we provide an efficient algorithm for the more restricted problem of finding an optimal ordering on a subset of symbols (occurring only once) under ordering constraints which runs in optimal time for small values of $\sigma$. We also look at a version of the problem on the newly discovered class of graphs with BWT like properties called Wheeler graphs. Here also we show NP-hardness results on a related problem which we call Source Ordering.
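The objective being optimized is easy to state in code: fix an ordering of the alphabet, build the BWT under that ordering, and count the runs. The naive sketch below (quadratic rotation sort, brute force over orderings, feasible only for tiny alphabets) merely illustrates that objective, not the paper's algorithms or hardness constructions.

```python
from itertools import permutations

def bwt_runs(text, order):
    """Number of runs in the BWT of `text` (with sentinel '$' smallest) when
    the alphabet is reordered according to `order`.  Naive construction by
    sorting all rotations under the reordered alphabet."""
    s = text + "$"
    rank = {"$": -1, **{c: i for i, c in enumerate(order)}}
    rotations = sorted((s[i:] + s[:i] for i in range(len(s))),
                       key=lambda r: [rank[c] for c in r])
    last = [r[-1] for r in rotations]
    return 1 + sum(last[i] != last[i - 1] for i in range(1, len(last)))

text = "mississippi"
standard = tuple(sorted(set(text)))
best = min(permutations(standard), key=lambda o: bwt_runs(text, o))
print(bwt_runs(text, standard), "runs with the standard order")
print(bwt_runs(text, best), "runs with the best reordering", best)
```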

at November 11, 2019 12:00 AM UTC