Theory of Computing Blog Aggregator

Authors: Naor Alaluf, Moran Feldman
Download: PDF
Abstract: In this paper we consider the problem of maximizing a non-negative submodular function subject to a cardinality constraint in the data stream model. Previously, the best known algorithm for this problem was a $5.828$-approximation semi-streaming algorithm based on a local search technique (Feldman et al., 2018). For the special case of this problem in which the objective function is also monotone, the state-of-the-art semi-streaming algorithm is an algorithm known as Sieve-Streaming, which is based on a different technique (Badanidiyuru, 2014). Adapting the technique of Sieve-Streaming to non-monotone objective functions has turned out to be a challenging task, which has so far prevented an improvement over the local search based $5.828$-approximation. In this work, we overcome the above challenge, and manage to adapt Sieve-Streaming to non-monotone objective functions by introducing a "just right" amount of randomness into it. Consequently, we get a semi-streaming polynomial time $4.282$-approximation algorithm for non-monotone objectives. Moreover, if one allows our algorithm to run in super-polynomial time, then its approximation ratio can be further improved to $3 + \varepsilon$.

at June 27, 2019 01:27 AM UTC

Authors: Claudson F. Bornstein, Martin Charles Golumbic, Tanilson D. Santos, Uéverton S. Souza, Jayme L. Szwarcfiter
Download: PDF
Abstract: Golumbic, Lipshteyn and Stern defined the class of EPG graphs in 2009 as the intersection graphs of edge paths on a grid. An EPG graph $G$ is a graph that admits a representation where its vertices correspond to paths in a grid $Q$, such that two vertices of $G$ are adjacent if and only if their corresponding paths in $Q$ have a common edge. If the paths in the representation have at most $k$ changes of direction (bends), we say that it is a $B_k$-EPG representation. A collection $C$ of sets satisfies the Helly property when every sub-collection of $C$ that is pairwise intersecting has at least one common element. In this paper we show that the problem of recognizing $B_k$-EPG graphs $G=(V,E)$ whose edge-intersections of paths in a grid satisfy the Helly property, so-called Helly-$B_k$ EPG graphs, is in $\mathcal{NP}$, for every $k$ bounded by a polynomial in $|V(G)|$. In addition, we show that recognizing Helly-$B_1$ EPG graphs is $NP$-complete, and it remains $NP$-complete even when restricted to 2-apex and 3-degenerate graphs.

at June 27, 2019 12:00 AM UTC

Authors: Thomas Heinis
Download: PDF
Abstract: Key to DNA storage is encoding the information into a sequence of nucleotides before it can be synthesised for storage. The definition of such an encoding or mapping must adhere to multiple design restrictions. First, not all possible sequences of nucleotides can be synthesised: homopolymers, i.e., runs of the same nucleotide, of length more than two cannot be synthesised without potential errors. Similarly, the G-C content of the resulting sequences should be higher than 50%. Second, given that synthesis is expensive, the encoding must map as many bits as possible to one nucleotide. Third, synthesis (as well as sequencing) is error prone, leading to substitutions, deletions and insertions. An encoding must therefore be designed to be resilient to errors through error correction codes or replication. Fourth, for the purpose of computation and selective retrieval, encodings should result in substantially different sequences across all data, even for very similar data. In the following we discuss the history and evolution of encodings.
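
To make the first two constraints concrete, here is a toy sketch (my own illustration, not code from the paper) of a rotation-based mapping in the spirit of earlier DNA-storage codes: data is first rewritten in base 3, and each trit selects one of the three nucleotides that differ from the previously emitted one, so no homopolymer is ever produced and each nucleotide carries roughly 1.58 bits.

# Toy sketch (not from the paper): a rotation-based mapping. Data is written
# in base 3; each trit then selects one of the three nucleotides that differ
# from the previously emitted one, so no run of identical nucleotides occurs.
NUCLEOTIDES = "ACGT"

def to_trits(data: bytes):
    """Re-express a byte string as a stream of base-3 digits."""
    value = int.from_bytes(data, "big")
    trits = []
    while value:
        value, r = divmod(value, 3)
        trits.append(r)
    return trits[::-1] or [0]

def encode(data: bytes) -> str:
    sequence = []
    previous = None
    for trit in to_trits(data):
        # The candidates are the nucleotides other than the previous one.
        candidates = [n for n in NUCLEOTIDES if n != previous]
        nucleotide = candidates[trit]
        sequence.append(nucleotide)
        previous = nucleotide
    return "".join(sequence)

print(encode(b"hi"))  # a short homopolymer-free sequence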

at June 27, 2019 01:22 AM UTC

Authors: Giulia Bernardini, Huiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Nadia Pisanti, Solon P. Pissis, Giovanna Rosone
Download: PDF
Abstract: String data are often disseminated to support applications such as location-based service provision or DNA sequence analysis. This dissemination, however, may expose sensitive patterns that model confidential knowledge (e.g., trips to mental health clinics from a string representing a user's location history). In this paper, we consider the problem of sanitizing a string by concealing the occurrences of sensitive patterns, while maintaining data utility. First, we propose a time-optimal algorithm, TFS-ALGO, to construct the shortest string preserving the order of appearance and the frequency of all non-sensitive patterns. Such a string allows accurately performing tasks based on the sequential nature and pattern frequencies of the string. Second, we propose a time-optimal algorithm, PFS-ALGO, which preserves a partial order of appearance of non-sensitive patterns but produces a much shorter string that can be analyzed more efficiently. The strings produced by either of these algorithms may reveal the location of sensitive patterns. In response, we propose a heuristic, MCSR-ALGO, which replaces letters in these strings with carefully selected letters, so that sensitive patterns are not reinstated and occurrences of spurious patterns are prevented. We implemented our sanitization approach that applies TFS-ALGO, PFS-ALGO and then MCSR-ALGO and experimentally show that it is effective and efficient.

at June 27, 2019 01:24 AM UTC

Authors: Fabrizio Grandoni, Stefan Kratsch, Andreas Wiese
Download: PDF
Abstract: The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+eps)-approximations in f(k,eps)n^{O(1)} time where k is some parameter of the input. We obtain the following results on parameterized approximability: 1) In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(loglog n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant eps>0 and integer k>0, in time f(k,eps)n^{g(eps)}, either outputs a solution of size at least k/(1+eps), or declares that the optimum solution has size less than k. 2) In the (2-dimensional) geometric knapsack problem (TDK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of TDK with rotations (TDKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factors are 558/325+eps and 4/3+eps, resp. [Galvez et al., FOCS'17]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for TDKR. For all considered problems, getting time f(k,eps)n^{O(1)}, rather than f(k,eps)n^{g(eps)}, would give FPT time f'(k)n^{O(1)} exact algorithms using eps=1/(k+1), contradicting W[1]-hardness.

at June 27, 2019 01:23 AM UTC

Authors: Adam Lev-Libfeld
Download: PDF
Abstract: As demand for Real-Time applications rises among the general public, enabling large-scale, unbound algorithms to solve conventional problems with low to no latency is critical for product viability. Timer algorithms are prevalent in the core mechanisms behind operating systems, network protocol implementations, stream processing, and several database capabilities. This paper presents a field-tested algorithm for a low-latency, unbound-range timer structure, based upon the well-accepted Timing Wheel algorithm. Using a set of queues hashed by TTL, the algorithm allows for a simpler implementation, minimal overhead, no overflow, and no performance degradation in comparison to the current state-of-the-art algorithms under typical use cases.
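
The "set of queues hashed by TTL" can be illustrated with a minimal sketch (my own, not the paper's implementation): because every timer inserted into a given queue has the same TTL, each queue is automatically ordered by expiration time, so expiring timers only requires inspecting queue heads.

# Minimal sketch (my illustration, not the paper's code) of timers kept in
# FIFO queues hashed by TTL. Since all timers in a queue share the same TTL,
# each queue stays sorted by expiration, so only queue heads are inspected.
import time
from collections import defaultdict, deque

class TTLTimerStore:
    def __init__(self):
        self.queues = defaultdict(deque)  # ttl -> deque of (expiry, payload)

    def add(self, ttl: float, payload) -> None:
        self.queues[ttl].append((time.monotonic() + ttl, payload))

    def pop_expired(self):
        """Yield payloads of all timers that have expired."""
        now = time.monotonic()
        for ttl, queue in list(self.queues.items()):
            while queue and queue[0][0] <= now:
                yield queue.popleft()[1]
            if not queue:
                del self.queues[ttl]

store = TTLTimerStore()
store.add(0.0, "fires immediately")
store.add(60.0, "fires in a minute")
print(list(store.pop_expired()))  # ['fires immediately']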

at June 27, 2019 01:26 AM UTC

Authors: Rafael Pass, Muthuramakrishnan Venkitasubramaniam
Download: PDF
Abstract: Consider the following two fundamental open problems in complexity theory: 1) Does a hard-on-average language in NP imply the existence of one-way functions? 2) Does a hard-on-average language in NP imply a hard problem in TFNP (i.e., the class of total NP search problems)? We show that the answer to (at least) one of these questions is yes. In other words, in Impagliazzo's Pessiland (where NP is hard-on-average, but one-way functions do not exist), TFNP is unconditionally hard (on average). This result follows from a more general theory of interactive average-case complexity, and in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of O(1)-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.

at June 27, 2019 01:21 AM UTC

Authors: Mingyu Xiao
Download: PDF
Abstract: A mixed dominating set of a graph $G = (V, E)$ is a mixed set $D$ of vertices and edges, such that for every edge or vertex, if it is not in $D$, then it is adjacent or incident to at least one vertex or edge in $D$. The mixed domination problem is to find a mixed dominating set with a minimum cardinality. It has applications in system control and some other scenarios and it is $NP$-hard to compute an optimal solution. This paper studies approximation algorithms and hardness of the weighted mixed dominating set problem. The weighted version is a generalization of the unweighted version, where all vertices are assigned the same nonnegative weight $w_v$ and all edges are assigned the same nonnegative weight $w_e$, and the question is to find a mixed dominating set with a minimum total weight. Although the mixed dominating set problem has a simple 2-approximation algorithm, few approximation results for the weighted version are known. The main contributions of this paper include: [1.] for $w_e\geq w_v$, a 2-approximation algorithm; [2.] for $w_e\geq 2w_v$, inapproximability within ratio 1.3606 unless $P=NP$ and within ratio 2 under UGC; [3.] for $2w_v > w_e\geq w_v$, inapproximability within ratio 1.1803 unless $P=NP$ and within ratio 1.5 under UGC; [4.] for $w_e< w_v$, inapproximability within ratio $(1-\epsilon)\ln |V|$ unless $P=NP$ for any $\epsilon >0$.

at June 27, 2019 01:27 AM UTC

Authors: H. Philathong, V. Akshay, I. Zacharov, J. Biamonte
Download: PDF
Abstract: Gibbs sampling is fundamental to a wide range of computer algorithms. Such algorithms are set to be replaced by physics-based processors, be they quantum or stochastic annealing devices, which embed problem instances and evolve a physical system into an ensemble to recover a probability distribution. At a critical constraint-to-variable ratio, decision problems, such as propositional satisfiability, appear to statistically exhibit an abrupt transition in required computational resources. This so-called algorithmic or computational phase transition signature has yet to be observed in contemporary physics-based processors. We found that the computational phase transition admits a signature in Gibbs distributions, and hence we predict and prescribe the physical observation of this effect. We simulate such an experiment which, when realized experimentally, we believe would represent a milestone in the physical theory of computation.

at June 27, 2019 01:20 AM UTC

This event was a great success! Thank you to all of the participants for contributing your time. Please keep up the momentum and continue to edit the pages you made a start on. Please continue to record your progress on the list of topics. Special thanks to Aviad Rubinstein and Yuval Filmus for offering expert advice at the event.

We plan to organize this event again at future STOCs, and hope many more people can participate. Even an hour of your time can have a huge impact on the community!


by shuchic at June 26, 2019 03:54 PM UTC

Authors: Erva Ulu, James McCann, Levent Burak Kara
Download: PDF
Abstract: We introduce a method to design lightweight shell objects that are structurally robust under the external forces they may experience during use. Given an input 3D model and a general description of the external forces, our algorithm generates a structurally-sound minimum weight shell object. Our approach works by altering the local shell thickness repeatedly based on the stresses that develop inside the object. A key issue in shell design is that large thickness values might result in self-intersections on the inner boundary creating a significant computational challenge during optimization. To address this, we propose a shape parametrization based on the solution to Laplace's equation that guarantees smooth and intersection-free shell boundaries. Combined with our gradient-free optimization algorithm, our method provides a practical solution to the structural design of hollow objects with a single inner cavity. We demonstrate our method on a variety of problems with arbitrary 3D models under complex force configurations and validate its performance with physical experiments.

at June 26, 2019 11:31 PM UTC

Authors: Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
Download: PDF
Abstract: A landmark result of non-smooth convex optimization is that gradient descent is an optimal algorithm whenever the number of computed gradients is smaller than the dimension $d$. In this paper we study the extension of this result to the parallel optimization setting. Namely we consider optimization algorithms interacting with a highly parallel gradient oracle, that is one that can answer $\mathrm{poly}(d)$ gradient queries in parallel. We show that in this case gradient descent is optimal only up to $\tilde{O}(\sqrt{d})$ rounds of interactions with the oracle. The lower bound improves upon a decades old construction by Nemirovski which proves optimality only up to $d^{1/3}$ rounds (as recently observed by Balkanski and Singer), and the suboptimality of gradient descent after $\sqrt{d}$ rounds was already observed by Duchi, Bartlett and Wainwright. In the latter regime we propose a new method with improved complexity, which we conjecture to be optimal. The analysis of this new method is based upon a generalized version of the recent results on optimal acceleration for highly smooth convex optimization.

at June 26, 2019 11:20 PM UTC

Authors: Ronen Eldan, Assaf Naor
Download: PDF
Abstract: Answering a question of Abbasi-Zadeh, Bansal, Guruganesh, Nikolov, Schwartz and Singh (2018), we prove the existence of a slowed-down sticky Brownian motion whose induced rounding for MAXCUT attains the Goemans--Williamson approximation ratio. This is an especially simple particular case of the general rounding framework of Krivine diffusions that we investigate elsewhere.

at June 26, 2019 11:25 PM UTC

Authors: David Durfee, Yu Gao, Gramoz Goranci, Richard Peng
Download: PDF
Abstract: We study \emph{dynamic} algorithms for maintaining spectral vertex sparsifiers of graphs with respect to a set of terminals $T$ of our choice. Such objects preserve pairwise resistances, solutions to systems of linear equations, and energy of electrical flows between the terminals in $T$. We give a data structure that supports insertions and deletions of edges, and terminal additions, all in sublinear time. Our result is then applied to the following problems.

(1) A data structure for maintaining solutions to Laplacian systems $\mathbf{L} \mathbf{x} = \mathbf{b}$, where $\mathbf{L}$ is the Laplacian matrix and $\mathbf{b}$ is a demand vector. For a bounded degree, unweighted graph, we support modifications to both $\mathbf{L}$ and $\mathbf{b}$ while providing access to $\epsilon$-approximations to the energy of routing an electrical flow with demand $\mathbf{b}$, as well as query access to entries of a vector $\tilde{\mathbf{x}}$ such that $\left\lVert \tilde{\mathbf{x}}-\mathbf{L}^{\dagger} \mathbf{b} \right\rVert_{\mathbf{L}} \leq \epsilon \left\lVert \mathbf{L}^{\dagger} \mathbf{b} \right\rVert_{\mathbf{L}}$ in $\tilde{O}(n^{11/12}\epsilon^{-5})$ expected amortized update and query time.

(2) A data structure for maintaining All-Pairs Effective Resistance. For an intermixed sequence of edge insertions, deletions, and resistance queries, our data structure returns $(1 \pm \epsilon)$-approximation to all the resistance queries against an oblivious adversary with high probability. Its expected amortized update and query times are $\tilde{O}(\min(m^{3/4},n^{5/6} \epsilon^{-2}) \epsilon^{-4})$ on an unweighted graph, and $\tilde{O}(n^{5/6}\epsilon^{-6})$ on weighted graphs.

These results represent the first data structures for maintaining key primitives from the Laplacian paradigm for graph algorithms in sublinear time without assumptions on the underlying graph topologies.

at June 26, 2019 11:22 PM UTC

Authors: Dekel Tsur
Download: PDF
Abstract: In the $l$-path vertex cover problem the input is an undirected graph $G$ and an integer $k$. The goal is to decide whether there is a set of vertices $S$ of size at most $k$ such that $G-S$ does not contain a path with $l$ vertices. In this paper we give parameterized algorithms for $l$-path vertex cover for $l = 5,6,7$, whose time complexities are $O^*(3.945^k)$, $O^*(4.947^k)$, and $O^*(5.951^k)$, respectively.

at June 26, 2019 11:23 PM UTC

Authors: Joshua Alan Cook
Download: PDF
Abstract: In this paper, I take a step toward answering the following question: given m different small circuits that compute m orthogonal n-qubit states, is there a small circuit that maps m computational basis states to these m states while leaving all auxiliary bits unchanged on every input? While this may seem simple, the constraint that auxiliary bits always be returned to 0 on any input (even ones besides the m we care about) led me to use sophisticated techniques. I give an approximation of such a unitary in the m = 2 case that has size polynomial in the approximation error and the number of qubits n.

at June 26, 2019 11:20 PM UTC

Authors: Ashley Montanaro
Download: PDF
Abstract: Branch-and-bound is a widely used technique for solving combinatorial optimisation problems where one has access to two procedures: a branching procedure that splits a set of potential solutions into subsets, and a cost procedure that determines a lower bound on the cost of any solution in a given subset. Here we describe a quantum algorithm that can accelerate classical branch-and-bound algorithms near-quadratically in a very general setting. We show that the quantum algorithm can find exact ground states for most instances of the Sherrington-Kirkpatrick model in time $O(2^{0.226n})$, which is substantially more efficient than Grover's algorithm.
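
For readers less familiar with the classical setting, here is a generic branch-and-bound skeleton built from the two procedures named above (my own illustration on a toy two-machine scheduling instance, not Montanaro's quantum algorithm):

# Generic classical branch-and-bound sketch (my illustration, not the paper's
# algorithm). Toy instance: assign weights to two machines to minimize the
# larger load. The "branching procedure" splits on where the next item goes;
# the "cost procedure" gives a lower bound on any completion of a node.
import heapq

def branch_and_bound(weights):
    total = sum(weights)
    best = float("inf")

    def lower_bound(loads):
        # Every completion has max load at least the current max load,
        # and at least half of the total weight of all items.
        return max(max(loads), (total + 1) // 2)

    # Each heap entry: (lower bound, index of next item to place, machine loads).
    heap = [(lower_bound((0, 0)), 0, (0, 0))]
    while heap:
        bound, index, loads = heapq.heappop(heap)
        if bound >= best:
            continue  # prune: nothing below this node can beat `best`
        if index == len(weights):
            best = max(loads)  # all items placed and bound < best
            continue
        # Branching procedure: place the next item on machine 0 or machine 1.
        for machine in (0, 1):
            child = list(loads)
            child[machine] += weights[index]
            child = tuple(child)
            heapq.heappush(heap, (lower_bound(child), index + 1, child))
    return best

print(branch_and_bound([3, 1, 4, 2, 2]))  # the optimal maximum load is 6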

at June 26, 2019 11:22 PM UTC

Authors: Rasmus Kyng, Richard Peng, Sushant Sachdeva, Di Wang
Download: PDF
Abstract: We present algorithms for solving a large class of flow and regression problems on unit weighted graphs to $(1 + 1 / poly(n))$ accuracy in almost-linear time. These problems include $\ell_p$-norm minimizing flow for $p$ large ($p \in [\omega(1), o(\log^{2/3} n) ]$), and their duals, $\ell_p$-norm semi-supervised learning for $p$ close to $1$.

As $p$ tends to infinity, $\ell_p$-norm flow and its dual tend to max-flow and min-cut respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives.

This algorithm demonstrates that many tools previously viewed as limited to linear systems are in fact applicable to a much wider range of convex objectives. It is based on the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC '04, SIMAX '14), but requires several new tools: adaptive non-linear preconditioning, tree-routing based ultra-sparsification for mixed $\ell_2$ and $\ell_p$ norm objectives, and decomposing graphs into uniform expanders.

at June 26, 2019 11:23 PM UTC

Authors: Abusayeed Saifullah
Download: PDF
Abstract: Self-stabilization for non-masking fault-tolerant distributed systems has received considerable research interest over the last decade. In this paper, we propose a self-stabilizing algorithm for 2-edge-connectivity and 2-vertex-connectivity of an asynchronous distributed computer network. It is based on a self-stabilizing depth-first search, and is not a composite algorithm in the sense that it is not composed of a number of self-stabilizing algorithms that run concurrently. The time and space complexities of the algorithm are the same as those of the underlying self-stabilizing depth-first search algorithm, which are $O(dn\Delta)$ rounds and $O(n\log \Delta)$ bits per processor, respectively, where $\Delta \leq n$ is an upper bound on the degree of a node, $d \leq n$ is the diameter of the graph, and $n$ is the number of nodes in the network.

at June 26, 2019 11:22 PM UTC

Authors: Ke Chen, Adrian Dumitrescu, Wolfgang Mulzer, Csaba D. Tóth
Download: PDF
Abstract: Let $P=(p_1, p_2, \dots, p_n)$ be a polygonal chain. The stretch factor of $P$ is the ratio between the total length of $P$ and the distance of its endpoints, $\sum_{i = 1}^{n-1} |p_i p_{i+1}|/|p_1 p_n|$. For a parameter $c \geq 1$, we call $P$ a $c$-chain if $|p_ip_j|+|p_jp_k| \leq c|p_ip_k|$, for every triple $(i,j,k)$, $1 \leq i<j<k \leq n$. The stretch factor is a global property: it measures how close $P$ is to a straight line, and it involves all the vertices of $P$; being a $c$-chain, on the other hand, is a fingerprint-property: it only depends on subsets of $O(1)$ vertices of the chain.

We investigate how the $c$-chain property influences the stretch factor in the plane: (i) we show that for every $\varepsilon > 0$, there is a noncrossing $c$-chain that has stretch factor $\Omega(n^{1/2-\varepsilon})$, for sufficiently large constant $c=c(\varepsilon)$; (ii) on the other hand, the stretch factor of a $c$-chain $P$ is $O\left(n^{1/2}\right)$, for every constant $c\geq 1$, regardless of whether $P$ is crossing or noncrossing; and (iii) we give a randomized algorithm that can determine, for a polygonal chain $P$ in $\mathbb{R}^2$ with $n$ vertices, the minimum $c\geq 1$ for which $P$ is a $c$-chain in $O\left(n^{2.5}\ {\rm polylog}\ n\right)$ expected time and $O(n\log n)$ space.
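
To make the two definitions concrete, here is a small brute-force sketch (mine, not the paper's algorithm, and cubic time rather than the paper's $O(n^{2.5}\ {\rm polylog}\ n)$ randomized bound) that computes the stretch factor and the minimum $c$ for which a chain is a $c$-chain:

# Brute-force illustration (mine, not the paper's algorithm): the stretch
# factor is a global ratio over the whole chain, while the minimum c is the
# worst ratio (|p_i p_j| + |p_j p_k|) / |p_i p_k| over all triples i < j < k.
from itertools import combinations
from math import dist, inf

def stretch_factor(chain):
    total = sum(dist(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
    return total / dist(chain[0], chain[-1])

def min_c(chain):
    worst = 1.0
    for i, j, k in combinations(range(len(chain)), 3):
        denom = dist(chain[i], chain[k])
        if denom == 0:
            return inf
        worst = max(worst, (dist(chain[i], chain[j]) + dist(chain[j], chain[k])) / denom)
    return worst

zigzag = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(stretch_factor(zigzag), min_c(zigzag))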

at June 26, 2019 11:26 PM UTC

For more than 20 years we’ve had n^{1+c^{-d}} lower bounds for threshold circuits of depth d [IPS97], for a fixed c. There have been several “explanations” for the lack of progress [AK10]. Recently Chen and Tell have given a better explanation showing that you can’t even improve the result to a better c without proving “the whole thing.”

Say you have a finite group G and you want to compute the iterated product of n elements.

Warm-up [AK10]..

Suppose you can compute this with circuits of size s(n)=n^{10} and depth 10. Now we show how you can trade size for depth. Put a complete tree with fan-in f on top of the group product, where each node computes the product of its children (this is correct by associativity; in general this works for a monoid). This tree has depth \log _{f}n. If you stick your circuit (of size s(f) and depth O(1), since each node takes f inputs) at each node, the depth of the overall circuit is obviously O(\log _{f}n) and the overall size is dominated by the input layer, which costs s(f)\cdot n/f<s(f)\cdot n. If you are aiming for overall depth d, you need f=n^{O(1/d)}. This gives size n^{1+O(1/d)}.

Hence we have shown that proving a lower bound of n^{1+\omega (1/d)} for some depth d suffices to prove an n^{10} lower bound for depth 10.

Chen and Tell..

The above is not the most efficient way to build a tree! I am writing this post following their paper to understand what they do. As they say, the idea is quite simple. Whereas above the size was dominated by the input layer, here we want to balance things so that every layer has roughly the same contribution.

Let’s say we are aiming for size n^{1+\epsilon } and let’s see what depth we can get. Let’s say now that the original circuits have size s(n)=n^{k}. Let us denote by n_{i} the number of nodes at level i, with i=0 being the root. The fan-in at level i is (n^{1+\epsilon }/n_{i})^{1/k}, so that the cost of each level is n^{1+\epsilon } as desired. We have the recursion n_{i+1}=n_{i}\cdot (n^{1+\epsilon }/n_{i})^{1/k}.

The solution to this recursion is n_{i}=n^{(1+\epsilon )(1-(1-1/k)^{i})}, see below.

So that’s it. We need to get to n nodes. So if you set i=O(k\log (1/\epsilon )) you get say n_{i}=n^{(1+\epsilon )(1-\epsilon ^{2})}>n. Going back to k=10, we have exhibited circuits of size n^{1+\epsilon } and depth just O(\log 1/\epsilon ). So proving stronger bounds than this would rule out circuits of size n^{10} and depth 10.

Added later: About the recurrence.

Letting a_{i}:=\log _{n}n_{i} we have the following recurrence for the exponents of n_{i}.

\begin{aligned} a_{0} & =0\\ a_{i+1} & =a_{i}(1-1/k)+(1+\epsilon )/k=:a_{i}b+c. \end{aligned}

This gives

\begin{aligned} a_{i}=c\sum _{j<i}b^{j}=c\frac {1-b^{i}}{1-b}=(1+\epsilon )(1-b^{i}). \end{aligned}

If it were a'_{i+1}=a'_{i}+(1+\epsilon )/k, then obviously a'_{k} would already be 1+\epsilon . Instead, for a_{i} we need to go up to i\approx k\log (1/\epsilon ).
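
A quick numerical sanity check of this recurrence (a throwaway Python script of mine, not from the Chen-Tell paper): iterating a_{i+1}=a_{i}(1-1/k)+(1+\epsilon )/k and counting how many levels are needed before a_{i}\ge 1 matches the k\log (1/\epsilon ) prediction up to small constant factors.

# Sanity check (not from the paper): iterate the exponent recurrence
# a_{i+1} = a_i*(1 - 1/k) + (1 + eps)/k and count levels until a_i >= 1,
# i.e., until the balanced tree has reached all n = n^1 inputs.
import math

def levels_needed(k, eps):
    a, i = 0.0, 0
    while a < 1.0:
        a = a * (1 - 1.0 / k) + (1 + eps) / k
        i += 1
    return i

for k, eps in [(10, 0.5), (10, 0.1), (10, 0.01)]:
    print(f"k={k}, eps={eps}: {levels_needed(k, eps)} levels; "
          f"k*log(1/eps) = {k * math.log(1 / eps):.1f}")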

My two cents..

I am not sure I need more evidence that making progress on long-standing bounds in complexity theory is hard, but I do find it interesting to prove these links; we have quite a few by now! The fact that we have been stuck forever just short of proving “the whole thing” makes me think that these long-sought bounds may in fact be false. Would love to be proved wrong, but it’s 2019, this connection is proved by balancing a tree better, and you feel confident that P \ne NP?

References

[AK10]   Eric Allender and Michal Koucký. Amplifying lower bounds by means of self-reducibility. J. of the ACM, 57(3), 2010.

[IPS97]   Russell Impagliazzo, Ramamohan Paturi, and Michael E. Saks. Size-depth tradeoffs for threshold circuits. SIAM J. Comput., 26(3):693–707, 1997.

by Emanuele at June 25, 2019 06:02 PM UTC

 

 

On the Importance of Disciplinary Pride for Multidisciplinary Collaboration

I am a big fan of collaborations, even if they come with their own challenges. I always got further and enjoyed research much more because of my collaborators. I’m forever indebted to so many colleagues and dear, dear friends. Each and every one of them was better than me in some ways. To contribute, I had to remember my own strengths and bring them to the table. The premise of this post is that the same holds for collaboration between fields. It should be read as a call for theoreticians to bring the tools and the powerful way of thinking of TOC into collaborations. We shouldn’t be blind to the limitations of our field, but obsessing over those limitations is misguided and would only limit our impact. Instead we should bring our best and trust the other disciplines we collaborate with to do the same (allowing each to complement and compensate for the other).

The context in which these thoughts came to my mind is Algorithmic Fairness. In this and other areas at the interface between society and computing, true collaboration is vital. Not surprisingly, attending multidisciplinary programs on Algorithmic Fairness is a major part of my professional activities these days. And I love it – I get to learn so much from people and disciplines that have been thinking about fairness for many decades and centuries. In addition, the Humanities are simply splendid. Multidisciplinary collaborations come with even more challenges than other collaborations: the language, tools and perspectives are different. But for exactly the same reasons they can be even more rewarding. Nevertheless, my fear and the reason for this post is that my less experienced TOC colleagues might come away from those interdisciplinary meetings frustrated and might lose confidence in what TOC can contribute. It feels to me that old lessons about the value of TOC need to be learned again. There is a lot to be proud of, and holding on to this pride would in fact make us better collaborators, not worse.

In the context of Algorithmic Fairness, we should definitely acknowledge (as we often do) that science exists within political structures, that algorithms are not objective and that mathematical definitions cannot replace social norms as expressed by policy makers. But let’s not take these as excuses for inaction, and let’s not withdraw to the role of spectators. In this era of algorithms, other disciplines need us just as much as we need them.

 

by Omer Reingold at June 24, 2019 04:43 PM UTC

(title of this blog is due to Henry Baker who posted an article about this elsewhere)

Amoeba finds approximate solution to TSP in linear time: here.

Over the years we have seen models of computation that claim to solve NPC or other hard problems quickly. I ask non-rhetorically and with an open mind how they have panned out.

In no particular order:

1) Parallelism. For solving problems faster, YES. For speeding up the solution of NPC problems, I think YES. For somehow making P=NP, NO. Even so, parallel computers have been a definite practical success.

2) Quantum Computing. Will they factor large numbers anytime soon? Ever? Should we view the effort to build them as an awesome and insightful Physics experiment? Are there any problems that they are NOW doing faster? Is Quantum Crypto (I know, not the same thing) actually used? Will other things of interest come out of the study of quantum computing? It already has, see here.

3) DNA computing. Did that lead to practical solutions to NPC problems? I do not think it did. Did that lead to interesting math? Interesting biology? Interesting science?  I do not know.

4) Autistic computing for finding primes: see here. Oliver Sacks, the neurologist, claimed that two autistic twin brothers could generate large primes quickly. This story was never put to a rigorous test and may not be quite right.

5) Amoeba computing: Too early to tell. The article seems to say it succeeded on 8 cities.

The problem with all of these non-standard models of computing is SCALE.  And the more powerful classic computers get, the harder it is for these nonstandard models to compete.

Are these models interesting even if they don't end up getting us fast algorithms? They can be:

1) Do they lead to mathematics of interest? (Quantum- Yes, Parallelism- Yes)

2) Did they inspire algorithms for classical computers? (Quantum- Yes)

3) Do they give insight into other fields? (Quantum for Physics yes, DNA-computing for bio-??)

4) Have they ACTUALLY sped up computations in meaningful ways for problems we care about? (Parallelism has.)

If you know of any result which I missed (e.g., Amoeba-computing giving insight into evolution, Autistic computing being used by the NSA to find primes, DNA computing leading to interesting mathematics), then leave polite comments!

by GASARCH (noreply@blogger.com) at June 24, 2019 05:03 AM UTC



Lucy Lu Wang is the lead author of a paper released this Friday on gender parity in computer science. The paper is from the Allen Institute for Artificial Intelligence. The authors are Wang, Gabriel Stanovsky, Luca Weihs, and Oren Etzioni. We will call them WSWE for short.

Today we will discuss some of the issues this study raises.

The paper was highlighted by the New York Times in an article titled, “The Gender Gap in Computer Science Research Won’t Close for 100 Years.” The news article begins with an equally sobering statement of this conclusion:

Women will not reach parity with men in writing published computer science research in this century if current trends hold, according to a study released on Friday.

We are for gender-neutral opportunities and have always promoted this—see our earlier discussions here and here. We are for doing a better job in supporting women in computer science. The study by WSWE is an important paper that helps frame the problem. Quoting them:

The field has made more of an effort to reach a more balanced gender status. But the data seems to show that even with all the progress, we are still not making the change fast enough.

I suggest that you might wish to read the paper. Unfortunately there are many papers with similar conclusions—see this by Natalie Schluter, for example.

Main Points

Here are some points of the paper by WSWE:

• There is a measurable gap. No one would, I believe, doubt this. But it is important to see that it is measurable.

• The gap is shrinking, but slowly. Again this seems correct, but whether it is shrinking in all relevant measures of publication weight is still an issue.

• The predictions. Perhaps it will not be closed for over a century.

• Modern technology allows such a study. This is one aspect that we can all applaud. WSWE used automated tools that allowed this study to search millions of papers.

Issues

WSWE filtered a corpus of 2.87 million papers tagged as in computer science. The volume constrained their approaches to handling several basic issues.


• How to tell the gender of an author? They use first names and try to detect gender from that alone. This is not easy. Not only can differently-gendered names in different nations or language groups have the same Romanized form, many names apply to both genders within those groups. The names Taylor and Kelley are perfect examples of the latter.

WSWE used a statistical weighting method. So “Taylor,” for example, would be weighted as 55 percent female, 45 percent male. The weightings come from a large database called Gender API compiled from government and social agencies not directly related to computer science.
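
As a rough illustration of how such per-name weightings could be aggregated (my sketch, with made-up probabilities rather than actual Gender API values, and not WSWE's pipeline): each author contributes the probability that their first name is female to an expected count.

# Rough illustration (my sketch, not WSWE's pipeline): aggregate per-name
# gender probabilities into an expected share of female authorships.
# The probabilities here are made-up placeholders, not Gender API values.
NAME_P_FEMALE = {"taylor": 0.55, "kelley": 0.50, "lucy": 0.98, "oren": 0.02}

def expected_female_share(author_first_names):
    known = [NAME_P_FEMALE[n.lower()] for n in author_first_names
             if n.lower() in NAME_P_FEMALE]
    return sum(known) / len(known) if known else None

print(expected_female_share(["Lucy", "Taylor", "Oren"]))  # ~0.52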


• Another issue concerns the prediction part of their paper. They attempt to extrapolate and guess when there will be parity between female and male authorship.

As with all predictions, this is not easy. It is my main complaint with this and other papers on the gender-gap issue. They predict that parity will not be reached until 2167, 168 years from now. An earlier study puts the parity point at 280 years away.

Open Problems

I believe that a major issue is hiring by computer science departments and other institutions. A major CS department just hired N>1 assistant professors, of which N were male. This is a problem.

Should studies on the gender gap count all papers? Perhaps they should weight the papers by some citation indices. Are women writing more impactful papers? What percent of papers by gender have citation rates above X?—you get the idea.

Finally, I wonder whether parity is even the right goal. How about aiming for more papers by women than by men? Why not?


[various formatting and word edits]

by rjlipton at June 23, 2019 10:33 PM UTC

Spring semester is over, yay! To celebrate summer, I’ve compiled lecture notes from the graduate course COS 598D, a.k.a. “optimization for machine learning“.

The course is an outgrowth of a few lectures and summer school tutorials given in various locations. The goal of the course was to present the most useful methods and ideas in a rigorous-but-not-tedious way:

  • Suitable for a mathematically-prepared undergraduate student, and/or researcher beginning their endeavor into machine learning.
  • Focus on the important: we start with stochastic gradient descent for training deep neural networks.
  • Bring out the main ideas: i.e. projection-free methods, second-order optimization, the three main acceleration techniques; qualitatively discuss the advantages and applicability of each.
  • *short* proofs that everyone can follow and extend in their future research. I prefer being off by a constant/log factor from the best known bound with a more insightful proof.
  • Not lose track of the goal: generalization/regret, rather than final accuracy. This means we talked about online learning, generalization and precision issues in ML.

The most recent version can be downloaded here:
OPTtutorial

This is still work in progress, please feel free to send me typos/corrections, as well as other topics you’d like to see (on my todos already: lower bounds, quasi-convexity, and the homotopy method).

Note: zero-order/bandit optimization is an obvious topic that’s not addressed. The reason is purely subjective – it appears as a chapter in this textbook (that also started as lecture notes!).

 

by Elad Hazan at June 23, 2019 09:49 PM UTC

We are looking for strong theory candidates working in the areas of machine learning, optimization, high dimensional statistics, privacy, fairness, and broadly interpreted data science. The postdoc is part of the Data Science fellows program at UCSD.

Expected start date is October 1, 2019. As this is very late in the season, please apply online on the website and ALSO email Shachar.

Website: http://dsfellows.ucsd.edu/
Email: shachar.lovett@gmail.com

by shacharlovett at June 23, 2019 08:34 PM UTC

The Department of Computer Science and Engineering, jointly with the Astrophysics and HPC Group of European University Cyprus, announces one scholarship for the Ph.D. Program in Computing/Computer Science. The topics are:
• Parallel Algorithms and HPC
• Graph Theory and Applications
• Big Data Applications in Astrophysics
Opportunities for extra funding are also available.

Website: http://ahpc.euc.ac.cy/call-of-phd-positions-2/
Email: A.Efstathiou@euc.ac.cy

by shacharlovett at June 23, 2019 08:34 PM UTC

Recently, Bravyi, Gosset, and König (Science, 2018) exhibited a search problem called the 2D Hidden Linear Function (2D HLF) problem that can be solved exactly by a constant-depth quantum circuit using bounded fan-in gates (or QNC^0 circuits), but cannot be solved by any constant-depth classical circuit using bounded fan-in AND, OR, and NOT gates (or NC^0 circuits). In other words, they exhibited a search problem in QNC^0 that is not in NC^0.

We strengthen their result by proving that the 2D HLF problem is not contained in AC^0, the class of classical, polynomial-size, constant-depth circuits over the gate set of unbounded fan-in AND and OR gates, and NOT gates. We also supplement this worst-case lower bound with an average-case result: There exists a simple distribution under which any AC^0 circuit (even of nearly exponential size) has exponentially small correlation with the 2D HLF problem.

Our results are shown by constructing a new problem in QNC^0, which we call the Relaxed Parity Halving Problem, which is easier to work with. We prove our AC^0 lower bounds for this problem, and then show that it reduces to the 2D HLF problem. As a step towards even stronger lower bounds, we present a search problem that we call the Parity Bending Problem, which is in QNC^0/qpoly (QNC^0 circuits that are allowed to start with a quantum state of their choice that is independent of the input), but is not even in AC^0[2] (the class AC^0 with unbounded fan-in XOR gates).

All the quantum circuits in our paper are simple, and the main difficulty lies in proving the classical lower bounds. For this we employ a host of techniques, including a refinement of Håstad's switching lemmas for multi-output circuits that may be of independent interest, the Razborov-Smolensky AC^0[2] lower bound, Vazirani's XOR lemma, and lower bounds for non-local games.

at June 23, 2019 04:09 AM UTC

Sign up for the new posts via the RSS feed.

I would like to bring up a STOC 2019 workshop Data Science through a Geometric Lens (co-organized with Sanjoy Dasgupta and Cyrus Rashchtian). It will happen on Sunday at 9am. We have a stellar line-up of speakers:

Some of the topics are a bit unusual for the STOC audience, and we hope you will be inspired by fresh ideas. Please do attend and bring your friends!

at June 23, 2019 03:12 AM UTC

As I mentioned in my previous post, I just finished attending the Symposium on Computational Geometry in Portland. The conference proceedings are open access through LIPIcs.

The conference started on Tuesday (after some earlier social activities) with the best paper talk by Arnaud de Mesmay, “Almost tight lower bounds for hard cutting problems in embedded graphs” (with V. Cohen-Addad, É. Colin de Verdière, and D. Marx) proving that, to find the shortest set of cuts to slice a genus-g surface into a disk, the exponent must depend linearly on g (assuming the exponential time hypothesis).

Several of the contributed talks from throughout the conference particularly caught my attention:

  • Shay Moran’s paper with A. Yehudayoff, “On weak ε-nets and the Radon number” concerned abstract convexity spaces where the convex sets are any set family closed under intersection. A space has Radon number r if every r points can be partitioned into two subsets whose convex hulls (smallest containing convex sets) intersect, and a point in the intersection is called a Radon point. Having bounded Radon number turns out to be equivalent to having weak ε-nets, subsets of points that (for a given measure on the space) intersect every large convex set.

  • Mitchell Jones’ “Journey to the center of the point set” (with Sariel Har-Peled) improved an old paper of mine on using Radon points to find points of high Tukey depth in high-dimensional point sets in time polynomial in the dimension, improving both the polynomial and the depth of the resulting points.

  • Sariel Har-Peled spoke on his work with Timothy Chan, “Smallest k-enclosing rectangle revisited”. As well as shaving logs from the time bounds for finding rectangles that enclose a given number k of points and minimize their area or perimeter, they found a general reduction from problems on n points to problems on O(k) points, allowing one to turn factors of n in the time bounds into factors of k. Previously such reductions were only known for k-point subset problems where the optimal solution lies among the nearest neighbors of some input point, true for rectangle perimeter but not rectangle area.

  • Timothy Chan’s “Computing Shapley values in the plane” concerns an interesting combination of game theory and computational geometry. The Shapley value is a method for assigning credit when multiple people collaborate to produce some value (given as input a function from subsets of people to the value of a subset). It can be defined by adding contributors one-by-one in a random order and setting each contributor’s Shapley value to the expected increase in value when that contributor is added. It sums to the total value and is the unique function with this property that obeys several other natural and desirable axioms for credit assignment. For arbitrary functions from subsets of contributors to subset values, it takes exponential time to compute (a toy sketch of this definition appears after this list), but Timothy described polynomial time algorithms for cases when the value function has some geometric meaning, such as when it measures the area of the convex hull of a subset of points. There’s probably a lot more to be done in the same direction.

  • Patrick Schnider’s “Ham-Sandwich cuts and center transversals in subspaces” has some partial results towards a conjectured generalization of the ham sandwich theorem: given probability distributions in -dimensional Euclidean space, it should be possible to find hyperplanes whose checkerboard partition of space simultaneously bisects all of the distributions.

  • Jie Xue spoke on “Near-optimal algorithms for shortest paths in weighted unit-disk graphs” (with Haitao Wang). Here “weighted” means that overlapping unit disks are connected by an edge whose length is the Euclidean distance between their centers. Previous methods obtained near-linear algorithms by using bichromatic closest pair data structures to simulate Dijkstra’s algorithm. Instead, Wang and Xue use a related relaxation algorithm that partitions the plane into a grid and at each step relaxes all edges between certain pairs of grid squares, using data structures for additively weighted nearest neighbors. By avoiding the overhead of closest pairs they shave several logs from the runtime.

  • Arnaud de Mesmay spoke again towards the end of the conference on “The unbearable hardness of unknotting”, with Yo’av Rieck, Eric Sedgwick, and Martin Tancer. They used a reduction from 3-satisfiability, with Hopf links for variables and Borromean links for clauses (connected to each other by ribbons), to prove that it’s NP-complete to find an unlinked subset of a link with a maximum number of components. By a second level of replacement that doubles the strands of each link, they also showed that it’s NP-complete to find the minimum number of crossing changes or of Reidemeister moves to unlink a link or to unknot a knot.
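
As promised in the Shapley value item above, here is a toy sketch of the permutation definition (my own illustration, not Chan’s algorithm): average, over all orders of adding contributors, the increase in value caused by each contributor, with the value of a subset of points taken to be the area of its axis-aligned bounding box. The factorial-time enumeration is exactly the exponential cost that the geometric algorithms avoid.

# Toy sketch of the permutation definition of the Shapley value (my own
# illustration, not Chan's algorithm): average, over all orders of adding
# contributors, the increase in value caused by each contributor. Here the
# value of a subset of points is the area of its axis-aligned bounding box.
from itertools import permutations

def bbox_area(points):
    if len(points) < 2:
        return 0.0
    xs, ys = zip(*points)
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def shapley_values(points):
    n = len(points)
    totals = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        chosen = []
        value = 0.0
        for idx in order:
            chosen.append(points[idx])
            new_value = bbox_area(chosen)
            totals[idx] += new_value - value   # marginal contribution
            value = new_value
    return [t / len(orders) for t in totals]

pts = [(0, 0), (4, 0), (0, 3), (1, 1)]
print(shapley_values(pts))  # the values sum to bbox_area(pts) = 12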

On Tuesday afternoon I went to the workshop on open problems. After the obligatory open problem session, we debated whether we should collect problems on The Open Problems Project, The Open Problem Garden, or some new system that, since it doesn’t exist, can be more perfect than anything that does. Then I switched to the Young Researcher’s Forum (unfortunately missing the open problem implementation challenge); at the YRF, my student Daniel Frishberg spoke about our work on using nearest neighbor chains to speed up greedy algorithms.

The invited talks on Wednesday and Friday were by Sanjoy Dasgupta and Bruce Donald, respectively. Dasgupta spoke on using concepts from geometric data structures to interpret the neural structure of the olfactory system in fruit flies (and hopefully, eventually, vice versa: to use our understanding of neural structures to develop new data structures). For instance, the first three layers of neurons in this system appear to implement an “expand-and-sparsify” data structure that represents low-dimensional normalized vectors by randomly mapping them to much higher dimensional vectors and then listing as a set a smaller number of high-value coordinates. This appears to be closely analogous to known methods for locality-sensitive hashing. Donald spoke on using geometric methods to represent and infer the shapes of proteins, and to design proteins with desired shapes and functions.

Wednesday evening was the conference banquet, at the famed Crystal Ballroom, conveniently located near Powell’s Books. Matthew Dickerson showed up (after many years absence from SoCG) and brought some musical family members for a live concert.

Marquee for Michael Dickerson and Shaky Situation

On Thursday afternoon I stopped by the workshop on algebraic methods where I had the pleasure of seeing Josh Zahl give his talk on a whiteboard instead of using prepared slides. It was on methods for proving bounds on the number of incidences among geometric objects by combining naive bounds based on forbidden complete bipartite subgraphs of the incidence graphs, cuttings of space into smaller regions within which one applies these naive bounds, and cuttings of the objects into pieces in order to make the forbidden subgraphs smaller.

The conference business meeting was also Thursday, and ran very smoothly. Next year SoCG will be in Zurich; the year after that, in Buffalo. We now have an officially incorporated society, The Society for Computational Geometry, so that we can maintain buffer funds for the conference from one year to the next; it will be supported by an increase of $30-$35 in non-student registration fees. And the attendees voted overwhelmingly in favor of both the SafeTOC anti-harassment guidelines and ensuring, and better advertising, the availability of childcare at future conferences.

The conference was dedicated to the memory of Ricky Pollack, but it also included a celebration on Friday of a living computational geometer: John Hershberger, who led the effort to bring SoCG to Oregon, and turned 60 just before the conference began. Happy birthday, John!

(Discuss on Mastodon)

by David Eppstein at June 21, 2019 10:09 PM UTC

Guest post by Mark Sellke.

In the comments of the previous blog post we asked if the new viewpoint on best of both worlds can be used to get clean “interpolation” results. The context is as follows: in a STOC 2018 paper followed by a COLT 2019 paper, the following corruption model was discussed: stochastic bandits, except for C rounds which are adversarial. The state of the art bounds were of the form: optimal (or almost optimal) stochastic term plus K C, and it was mentioned as an open problem whether KC could be improved to C (there is a lower bound showing that C is necessary — when C = O(\sqrt{T})). As was discussed in the comment section, it seemed that indeed this clean best of both worlds approach should certainly shed light on the corruption model. It turns out that this is indeed the case, and a one-line calculation resolves positively the open problem from the COLT paper. The formal result is as follows (recall the notation/definitions from the previous blog post):

Lemma: Consider a strategy whose regret with respect to the optimal action i^* is upper bounded by

(1)   \begin{equation*} c \sum_{t=1}^T \sum_{i \neq i^*} \sqrt{\frac{x_{i,t}}{t}} \,. \end{equation*}

Then in the C-corruption stochastic bandit model one has that the regret is bounded by:

    \[ C + 2 c \sqrt{K C} + c^2 \sum_{i \neq i^*} \frac{\log(T)}{\Delta_i} \]

 

Note that by the previous blog post we know strategies that satisfy (1) with c=10 (see Lemma 2 in the previous post).

Proof: In equation (1), let us apply Jensen's inequality over the corrupt rounds; this yields a term c \sqrt{K C}. For the non-corrupt rounds, let us use that

    \[ c \sqrt{\frac{x_{i,t}}{t}} \leq \frac{1}{2} \left( \Delta_i x_{i,t} + \frac{c^2}{t \Delta_i} \right) \]

The sum of the second term on the right-hand side is upper bounded by c^2 \sum_{i \neq i^*} \frac{\log(T)}{2 \Delta_i}. On the other hand, the sum (over non-corrupt rounds) of the first term is equal to 1/2 of the regret over the non-corrupt rounds, which is certainly at most 1/2 of (the total regret plus C), since the C adversarial rounds can decrease the regret by at most C. Thus we obtain (denoting R for the total regret):

    \[ R \leq c \sqrt{K C} + c^2 \sum_{i \neq i^*} \frac{\log(T)}{2 \Delta_i} + \frac{C}{2} + \frac{R}{2} \]

which concludes the proof.

 

by Sebastien Bubeck at June 20, 2019 06:43 PM UTC

While I’m a relative newcomer to differential privacy (my first paper on it was only in 2017), I’ve found the community to be a pleasure to interact with: paradoxically, simultaneously tight-knit yet highly welcoming to newcomers. I partially credit this culture to the number of workshops and programs which bring people together, including, but not limited to, a BIRS workshop, the Privacy Tools project at Harvard, a semester at the Simons Institute, the forthcoming Shonan workshop, and the Theory and Practice of Differential Privacy (TPDP) Workshop.

I’m writing this post to draw attention to the imminent deadline of TPDP 2019, co-located with CCS 2019 in London. I’ll spare you the full details (click the link for more information), but most pressing is the deadline tomorrow, June 21, 2019, anywhere on Earth (let me know if this presents hardship for you, and I can pass concerns on to the chair). Essentially anything related to the theory or practice of differential privacy is welcome. Submissions are limited to four pages in length and are lightly refereed, based on originality, relevance, interest, and clarity. There are no published proceedings, and previously published results are welcome. If you’ve been looking to get to know the community, consider either submitting or attending the workshop!

by Gautam at June 20, 2019 06:05 PM UTC

I’ve been in Portland, Oregon this week for Computational Geometry Week. Here are a few photos of local street art, the first set I’ve taken with my new Pixel 3 XL cellphone (I neglected to bring an actual camera for this trip):

Photos: Portland Oregon street art at Clay & 2nd, Hall & 5th, and College & 4th.

Here are the rest of my photos.

(Discuss on Mastodon)

by David Eppstein at June 20, 2019 05:04 PM UTC

In Quanta magazine, Kevin Hartnett has a recent article entitled A New Law to Describe Quantum Computing’s Rise? The article discusses “Neven’s Law”—a conjecture, by Hartmut Neven (head of Google’s quantum computing effort), that the number of integrated qubits is now increasing exponentially with time, so that the difficulty of simulating a state-of-the-art QC on a fixed classical computer is increasing doubly exponentially with time. (Jonathan Dowling tells me that he expressed the same thought years ago.)

Near the end, the Quanta piece quotes some UT Austin professor whose surname starts with a bunch of A’s as follows:

“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work. They’re the ones who need to articulate where and why the progress will stop.”

The quote is perfectly accurate, but in context, it might give the impression that I’m endorsing Neven’s Law. In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50. I say only that, regardless of what anyone believes is the ultimate rate of progress in QC, what’s already happened today puts the ball firmly in the skeptics’ court.

Also in Quanta, Anil Ananthaswamy has a new article out on How to Turn a Quantum Computer Into the Ultimate Randomness Generator. This piece covers two schemes for using a quantum computer to generate “certified random bits”—that is, bits you can prove are random to a faraway skeptic: one due to me, the other due to Brakerski et al. The article cites my paper with Lijie Chen, which shows that under suitable computational assumptions, the outputs in my protocol are hard to spoof using a classical computer. The randomness aspect will be addressed in a paper that I’m currently writing; for now, see these slides.

As long as I’m linking to interesting recent Quanta articles, Erica Klarreich has A 53-Year-Old Network Coloring Conjecture is Disproved. Briefly, Hedetniemi’s Conjecture stated that, given any two finite, undirected graphs G and H, the chromatic number of the tensor product G⊗H is just the minimum of the chromatic numbers of G and H themselves. This reasonable-sounding conjecture has now been falsified by Yaroslav Shitov. For more, see also this post by Gil Kalai—who appears here not in his capacity as a quantum computing skeptic.

In interesting math news beyond Quanta magazine, the Berkeley alumni magazine has a piece about the crucial, neglected topic of mathematicians’ love for Hagoromo-brand chalk (hat tip: Peter Woit). I can personally vouch for this. When I moved to UT Austin three years ago, most offices in CS had whiteboards, but I deliberately chose one with a blackboard. I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything. With the Hagoromo brand, though, you pretty much get all the benefits of chalk with none of the downsides, so it just strictly dominates whiteboards.

Jan Kulveit asked me to advertise the European Summer Program on Rationality (ESPR), which will take place this August 13-23, and which is aimed at students ages 16-19. I’ve lectured both at ESPR and at a similar summer program that ESPR was modeled after (called SPARC)—and while I was never there as a student, it looked to me like a phenomenal experience. So if you’re a 16-to-19-year-old who reads this blog, please consider applying!

I’m now at the end of my annual family trip to Tel Aviv, returning to the Eastern US tonight, and then on to STOC’2019 at the ACM Federated Computing Research Conference in Phoenix (which I can blog about if anyone wants me to). It was a good trip, although marred by my two-year-old son Daniel falling onto sun-heated metal and suffering a second-degree burn on his leg, and then by the doctor botching the treatment. Fortunately Daniel’s now healing nicely. For future reference, whenever bandaging a burn wound, be sure to apply lots of Vaseline to prevent the bandage from drying out, and also to change the bandage daily. Accept no fancy-sounding substitute.

by Scott at June 20, 2019 03:54 PM UTC

[Guest post by Virgi Vassilevska Williams on the TCS women program at STOC. In particular the TCS Women Spotlight workshop has a great program and is open to all. –Boaz]
Dear all,
The TCS Women 2019 program is finalized: https://sigact.org/tcswomen/tcs-women-2019/. Here are some details:
On June 23rd, we have our TCS Women Spotlight workshop from 2 pm to 5 pm. This will feature an inspirational talk by Ronitt Rubinfeld (MIT & Tel Aviv University)–“Comedy of Errors”, and rising stars talks by Naama Ben-David (CMU), Debarati Das (Charles University), Andrea Lincoln (MIT) and Oxana Poburinnaya (Boston University). The workshop is open to all and we would love to see you all there.
 
On June 24th, we have the TCS Women lunch and a panel of distinguished researchers from academia and industry. This year’s panelists include Shuchi Chawla (University of Wisconsin), Julia Chuzhoy (TTIC), Edith Cohen (Google), Faith Ellen (University of Toronto), Rachel Lin (University of Washington) and Nutan Limaye (Indian Institute of Technology, Mumbai). The event is open to all women participants of FCRC.
 
In addition to these, there will be a TCS Women poster session held jointly with the STOC poster session on June 24th. Fourteen women researchers will be presenting posters on their research. More information is available on our website. All are welcome. 
 
We are providing travel scholarships to more than forty women students this year to attend STOC and FCRC. We secured funding from the NSF, Akamai, Amazon, Google and Microsoft. Thank you!
 
Looking forward to seeing you soon!
 
Best wishes,
The TCS Women organizers

by Boaz Barak at June 20, 2019 02:33 PM UTC

Just a reminder that Grigory Yaroslavtsev has taken over the Theory Jobs Spreadsheet. Check out who is moving where and let everyone know where you will continue your research career.

In 1999, when I considered leaving the University of Chicago for NEC Research, I had a conversation with Bob Zimmer, then VP of Research and current U of C President. Zimmer said it was a shame I didn't live in Hyde Park, the Chicago South Side neighborhood where the university resides, and thus wasn't fully connected with the school. At the time I didn't fully understand his point and did leave for NEC, only to return in 2003 and leave again in 2008. I never did live in Hyde Park.

Recently I served on a review committee for a computer science department in a rural college town. You couldn't help but notice the great collegiality among the faculty. Someone told me their theory that you generally get more faculty collegiality in rural versus urban locations. Why?

In urban locations, faculty tend to live farther from campus, to get bigger homes and better schools, and they face more traffic. They are likely to have more connections with people unconnected to the university. There are more consulting opportunities in large cities, a larger variety of coffee houses to hang out in, and better-connected airports that make leaving town easier. Particularly in computer science, where you can do most of your research remotely, faculty find themselves less likely to come in every day, and you lose those constant informal connections with the rest of the faculty.

This is a recent phenomenon: even going back to when I was a young faculty member, you needed to come to the office for access to research papers, better computers to write papers, and good printers. Interactions with students and colleagues are always better in person, but in the past the methods of electronic communication proved more limiting.

The University of Chicago helped create and promote its own neighborhood and ran a very strong private school with reduced tuition for faculty children. Maybe my life would have been different had I immersed myself in that lifestyle. 

Or maybe we should go the other extreme. If we can find great ways to do on-line meetings and teaching, why do we need the physical university at all?

by Lance Fortnow (noreply@blogger.com) at June 20, 2019 01:18 PM UTC


Postdoctoral position financed by the GSSI

Scientific profile: The successful candidate will have a strong research record in one or more of the research areas within the Computer Science group at the GSSI, namely algorithms for modern networks, formal methods for reactive systems and model-based software engineering. We welcome applications in any of those areas.


The position is for two years (renewable for a third year, if this is mutually agreeable) and the salary is 36,000€ per year before taxes.

How to apply: The application must be submitted through the online form available at https://applications.gssi.it/postdoc/ by Thursday, 11 July 2019 at 6 p.m. (Central European Time). For more information, please consult the Call for Applications at https://applications.gssi.it/postdoc/.
Requirements: Candidates are expected to have a PhD degree in Computer Science or related disciplines.

by Luca Aceto (noreply@blogger.com) at June 20, 2019 10:36 AM UTC

The Computer Science group at the Gran Sasso Science Institute (GSSI, http://cs.gssi.it) --- an international PhD school and centre for advanced studies located in L’Aquila, Italy --- has one available postdoctoral position in the context of the PRIN project “ALGADIMAR: Algorithms, Games, and Digital Markets” (Gianlorenzo D’Angelo, PI at GSSI), funded by the Italian Ministry for University and Research.

The position is briefly described below. It is for two years and the salary is 36,000€ per year before taxes. The position is renewable for a third year, if this is mutually agreeable.

How to apply: The application must be submitted through the online form available at https://applications.gssi.it/postdoc/ by Thursday, 11 July 2019 at 6 p.m. (Central European Time). For more information, please consult the Call for Applications at https://applications.gssi.it/postdoc/ or write an email to Gianlorenzo D’Angelo.

Requirements: Candidates are expected to have a PhD degree in Computer Science or related disciplines. Preference will be given to applicants who have previous research experience on topics related to the theme of the project.

Postdoctoral position within the project “ALGADIMAR: Algorithms, Games, and Digital Markets”, supervised by Gianlorenzo D’Angelo

Brief description of the project: Digital markets form an increasingly important component of the global economy. The Internet has enabled new markets with previously unknown features (e-commerce, web-based advertisement, viral marketing, sharing economy, real-time trading), and algorithms play a central role in many decision processes involved in these markets. For example, algorithms run electronic auctions, adjust prices dynamically, trade stocks, harvest big data to decide market strategies and to provide recommendations to customers. The focus of the proposal ALGADIMAR is on the development of new methods and tools in research areas that are critical to the understanding of digital markets: algorithmic game theory, market and mechanism design, machine learning, algorithmic data analysis, optimization in strategic settings. We plan to apply these methods so as to solve fundamental algorithmic problems motivated by Internet advertisement, sharing economy, mechanism design for social good, security games. While our research is focused on foundational work —with rigorous design and analysis of algorithms, mechanisms and games— it will also include empirical validation on large-scale datasets from real-world applications.

Main activities: The research activity will be on one or more of the following topics: design and analysis of algorithms; algorithms and data structures for big data; approximation algorithms; combinatorial optimization; algorithms for graphs and networks; autonomous agents and mobile computing; distributed algorithms; algorithmic game theory, algorithmic aspects of internet and social networks.

by Luca Aceto (noreply@blogger.com) at June 20, 2019 10:07 AM UTC


Complexity of solving polynomial equations

Jeff Lagarias is a mathematician or a professor at the University of Michigan.

Today I wish to discuss Diophantine equations.

Note, the “or” is a poor joke: for mathematicians, {A \text{ or } B} is also true when both {A} and {B} are true.

Jeff has a paper titled Complexity of Diophantine Equations and a related talk version. It is a nice review of some of the issues surrounding Diophantine equations.

He has worked on number theory, on complexity theory, and on many other problems, some applied and some theoretical. Together with Peter Shor, he solved an open problem years ago that was first stated by Ott-Heinrich Keller in 1930. Our friends at Wikipedia state:

In geometry, Keller’s conjecture is the conjecture that in any tiling of Euclidean space by identical hypercubes there are two cubes that meet face to face.

For dimensions ten or more, the conjecture is now known to be false thanks to Jeff and Peter. I am amazed by such geometric results, since I have no geometric intuition. Ten dimensions is way beyond me—although curiously I am okay in {n} dimensions. Strange? Jeff has a relevant quote:

Every dimension is special.

Complexity of Diophantine Equations

Recall that the main problem is to find solutions to equations where the solutions are usually restricted to be integers or rationals. This restriction makes the problems hard: hard as in open, and hard as in computationally difficult.

Factoring-Hard

Jeff mentions the following hardness result in his talk: Solving in nonnegative integers for {x,y}

\displaystyle  (x+2)(y+2) = N,

is as hard as integer factoring. This follows since {x+2} and {y+2} must be non-trivial factors of {N}. Note this relies on the fact that {x+2} and {y+2} are both greater than or equal to {2}.
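To make the reduction concrete, here is a small Python sketch (my own, not taken from Jeff’s paper or talk): any procedure that solves {(x+2)(y+2) = N} over the nonnegative integers immediately hands you a non-trivial factor of {N}. The “solver” below is just a brute-force stand-in so the script actually runs on small composites.

```python
# Hypothetical reduction from factoring to solving (x+2)(y+2) = N in
# nonnegative integers. solve_nonneg is a brute-force placeholder for
# whatever equation-solving oracle one has in mind.

def solve_nonneg(N):
    """Return nonnegative integers (x, y) with (x+2)*(y+2) == N, or None."""
    for x in range(N):
        if N % (x + 2) == 0:
            y = N // (x + 2) - 2
            if y >= 0:
                return x, y
    return None

def factor_via_equation(N):
    sol = solve_nonneg(N)
    if sol is None:
        # No solution means N has no factorization d*e with d, e >= 2,
        # i.e. N is prime (or smaller than 4).
        return None
    x, y = sol
    return x + 2, y + 2  # both factors are at least 2, hence non-trivial

print(factor_via_equation(91))  # (7, 13)
```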

We can delete “nonnegative” in the above by the following simple idea. Suppose that {N} has two factors that are both congruent to {3} modulo {8}. (If not, then we can still factor, but I believe the idea is clearer without adding extra complications.) Then solve in integers

\displaystyle  (8x + 3)(8y+3) = N.

By the assumption on {N} there is a solution of this equation. Now suppose that {x,y} is any solution. The key is that {8x + 3} cannot be {\pm 1}, since it is congruent to {3} modulo {8}, and likewise {8y+3} cannot be {\pm 1}. It then follows that neither factor can equal {N} in absolute value, since that would force the other factor to be {\pm 1}. Thus

\displaystyle  1 < |8x+3| < N,

and we have factored {N}.
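Here is a quick sanity check in Python (again my own illustration) for a concrete {N} whose prime factors are both congruent to {3} modulo {8}, say {N = 33}: it enumerates all integer solutions of {(8x+3)(8y+3) = N} and confirms that each one yields a factor strictly between {1} and {N} in absolute value.

```python
# Verify the "delete nonnegative" trick for N = 33 = 3 * 11 (both factors
# are 3 mod 8): every integer solution of (8x+3)(8y+3) = N gives a
# non-trivial factor, since a number that is 3 mod 8 can never be +-1.

def integer_solutions(N):
    """All integer pairs (x, y) with (8x+3)*(8y+3) == N, via divisors of N."""
    sols = []
    for d in range(1, abs(N) + 1):
        for a in (d, -d):                      # candidate value of 8x + 3
            if N % a == 0:
                b = N // a                     # forced value of 8y + 3
                if a % 8 == 3 and b % 8 == 3:  # Python's % maps negatives into 0..7
                    sols.append(((a - 3) // 8, (b - 3) // 8))
    return sols

N = 33
for x, y in integer_solutions(N):
    a = 8 * x + 3
    assert 1 < abs(a) < N, "every solution must give a non-trivial factor"
    print(f"x={x}, y={y} gives the factor {abs(a)} of {N}")
```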

NP-hard

Jeff points out that it is unlikely that Diophantine problems are going to be classified by the NP-hard machinery. He says

{\dots} shows the (possible) mismatch of “natural” Diophantine problems with the P versus NP question.

Factoring is one of those in-between problems: it could conceivably be in P, but most believe that it is not NP-hard.

Rational Case

The problem of deciding whether a polynomial has a rational root is still open. That is, given a polynomial with integer coefficients, can we decide whether the equation

\displaystyle  F(x,y,z,\dots)= 0,

has a rational solution {x,y,z,\dots}? Of course, the famous Hilbert’s Tenth Problem asks for integer solutions of polynomials and is undecidable. The rational case remains open. We have discussed it before here. See this for a survey.

I had tried to see if we could at least prove the following: the decision problem over the rationals is at least NP-hard. For a while I thought I could show this by replacing (x+2)(y+2) with an equation that works over the rationals, but the attempt failed. I was also unable to find a reference for such a result, so perhaps it is open.

Open Problems

Is it undecidable to determine whether a polynomial has a root over the rationals? Can we at least get that it is factoring hard?

by rjlipton at June 19, 2019 10:32 PM UTC

With John Wright’s talk last week, the Spring ’19 season of TCS+ concluded. Thanks to all our followers who tuned in, everyone who suggested a talk or spread the word, and, of course, thanks to all our speakers!

For those who missed a talk, or would like to watch them again in the comfort of your home, institution, or on the seaside: all past talks are now uploaded and available, along with the speakers’ slides.

Have a great summer, and see you in the Fall!

by plustcs at June 19, 2019 06:07 PM UTC