 book.tex | 113
 1 file changed, 105 insertions(+), 8 deletions(-)
diff --git a/book.tex b/book.tex
index cbd0d95..9d588cb 100644
--- a/book.tex
+++ b/book.tex
@@ -977,7 +977,8 @@ We end up with the same query time $O(\log^{d-1} n + k)$, since the procedure is
-\chapter{Connectivity in Dynamic Graphs}
+\chapter{Connectivity in Dynamic Graphs}%
+\label{ch:connectivity}
\topics{Dynamic connectivity on trees, Euler tour trees}.
Before starting, we point out that this chapter is subject to fewer proofs, and more stated results.
@@ -1160,15 +1161,111 @@ The general problem of maintaining a minimum spanning forest can be solved dynam
-
-
-
-
-
-
\chapter{Lower Bounds}
+\topics{Dynamic Partial Sums, Dynamic Connectivity.}
+
+In this chapter we will look at lower bounds for basic computational problems.
+We will use the \emph{Cell Probe} model, in which computation is ``for free'' while we pay for memory accesses.
+
+\section{Dynamic Partial Sum}%
+\label{sec:partial-sum}
+The problem is to maintain an array $A$ of length $n$ containing numbers, while supporting
+two operations: \code{Update}{i, x}, which sets $A[i] = x$, and \code{Query}{i}, which returns the prefix sum $A[1] + \cdots + A[i]$.
+We will show that there is a lower bound of $\Omega(\log n)$ on either \algo{Update} or \algo{Query}.
+In the proof we will use the universe ${[n]}$ and $+$ as operator, but any group of polynomial size with an arbitrary operator works.
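For contrast with the lower bound, recall that a binary indexed (Fenwick) tree matches it from above with $O(\log n)$ per operation. A minimal Python sketch, not from the text (the \code{Update} of this chapter sets $A[i] = x$, which a Fenwick tree realizes by adding $x$ minus the old value):

```python
class Fenwick:
    """Prefix sums with O(log n) add and query; indices are 0-based."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def add(self, i, delta):
        """Add delta to A[i]."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # jump to the next covering node

    def query(self, i):
        """Return A[0] + ... + A[i] (inclusive)."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return s
```

Both loops touch one tree node per set bit of the index, hence $O(\log n)$ probes per operation.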
+
+The idea of the proof is to construct an access sequence where we update with random numbers, which will break any data structure.
+Imagine we list out the queries done through time, and build a mental binary tree over them.
+We want the sequence to be \emph{maximally interleaved}, meaning that every update in a left subtree affects some query in the corresponding right subtree.
+
+The way we choose this sequence is by iterating $i$ from $0$ through $n-1$ and reversing its bits to get the index.
+We assume $n$ is a power of two.
+For instance, if $n = 8$, we get the sequence
+\[\texttt{0b000},\
+\texttt{0b001},\
+\texttt{0b010},\
+\texttt{0b011},\
+\texttt{0b100},\dots
+\implies
+0, 4, 2, 6, 1,\dots\]
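The bit-reversal sequence above can be generated directly; a small sketch, assuming as in the text that $n$ is a power of two:

```python
def bit_reversal_sequence(n):
    """Indices 0..n-1 with their log2(n)-bit binary representations reversed."""
    bits = n.bit_length() - 1  # n is assumed to be a power of two
    return [int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)]
```

For $n = 8$ this yields exactly the sequence $0, 4, 2, 6, 1, 5, 3, 7$ shown above.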
+
+The operations we perform on each cell are as follows: we first \algo{Query} the prefix sum at that index, and then set the cell to a totally random number.
+The argument we will make is that the information transfer guarantees our lower bound.
+
+We claim that for a given node in our mental tree, $\Omega(T)$ memory values are written in the left subtree and then read in the right subtree, where $T$ is the size of the subtrees.
+This implies that $\Omega(n \log n)$ total work is being done, simply from moving $\log n$ bits of information (one number) for each of the $n$ operations.
+We prove the claim:
+each $x_i$ has $\Theta(\log n)$ bits of entropy, since they are totally random numbers.
+Let $R$ be the memory cells read in the right subtree, and $W$ the memory cells written in the left subtree.
+We see that if we are given the state of the data structure before the operations in the left subtree, together with
+the operations we are going to perform in the right subtree, then we can recover the values written in the left subtree without being told what they are.
+That is, with the state before and the queries on the right, we can recover the $x_i$ written on the left.
+
+To see this, consider again the case where $n = 8$, so the left subtree performs the sequence $0, 4, 2, 6$ and the right subtree $1, 5, 3, 7$.
+$Q(1) = x_0$ (since $x_1 = 0$ still), so we learn $x_0$, which was just written.
+$Q(5) = x_0 + x_1 + x_2 + x_4$; we have to wait for this one.
+$Q(3) = x_0 + x_1 + x_2$: we know $x_0$ and $x_1$, so now we know $x_2$, which in turn gets us $x_4$ from $Q(5)$.
+$Q(7)$ gives us $x_6$.
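The recovery argument above can be checked mechanically. The following toy simulation (illustrative only, not part of the proof) replays the $n = 8$ example: the decoder sees only the right-subtree query answers and the values it writes itself, yet recovers all four left-subtree writes:

```python
import random

def simulate(seed=0):
    """Replay the n = 8 interleaving and decode the left-subtree writes."""
    rng = random.Random(seed)
    A = [0] * 8
    # Left subtree: write random values at indices 0, 4, 2, 6.
    left = {}
    for i in (0, 4, 2, 6):
        left[i] = rng.randrange(8)
        A[i] = left[i]
    # Right subtree: record each Query answer at indices 1, 5, 3, 7,
    # then write the (known) values x_1, x_5, x_3, x_7.
    answers, right = {}, {}
    for i in (1, 5, 3, 7):
        answers[i] = sum(A[:i + 1])  # Query(i): prefix sum through index i
        right[i] = rng.randrange(8)
        A[i] = right[i]
    # Decode using only `answers` and `right`.
    x0 = answers[1]                       # Q(1) = x_0 (x_1 still 0)
    x2 = answers[3] - x0 - right[1]       # Q(3) = x_0 + x_1 + x_2
    x4 = answers[5] - x0 - right[1] - x2  # Q(5) = x_0 + x_1 + x_2 + x_4
    x6 = answers[7] - x0 - right[1] - x2 - right[3] - x4 - right[5]
    return {0: x0, 2: x2, 4: x4, 6: x6} == left
```

The decoder succeeds for every random choice of left-subtree values, which is exactly the entropy-extraction step of the proof.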
+
+Now we can explicitly encode the addresses and contents of the cells in $R \cap W$, using $O(|R \cap W| \log n)$ bits, and from this extract $\Omega(T \log n)$ bits of entropy: the $x_i$s.
+Since $\Omega(T \log n)$ bits of entropy cannot be recovered from fewer bits,
+we have $|R \cap W| = \Omega(T)$, which proves the claim.
+
+
+\section{Dynamic Connectivity}
+We have already looked at this problem in Chapter~\ref{ch:connectivity},
+and we claimed that there is a lower bound of $\Omega(\log n)$ on dynamic connectivity.
+We will now prove it.
+
+The graph we will consider is laid out on a grid, such that adjacent columns have a perfect matching;
+the graph can be seen as a composition of permutations.
+We have $n$ vertices, and the grid is a perfect square, so there are $\sqrt{n}$ permutations permuting $\sqrt{n}$ elements.
+Each permutation requires $\sqrt{n} \log n$ bits.
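The phrase ``composition of permutations'' can be made concrete on a toy instance: a path from column $0$ follows each matching in turn, so which row it reaches in the last column is determined by the composed permutation. A small illustrative sketch (the $3$-row, $3$-column instance is made up):

```python
def compose(p, q):
    """Composition (p after q): the result maps j to p[q[j]]."""
    return [p[j] for j in q]

# Toy grid: vertex (c, j) is matched to (c+1, perms[c][j]).
perms = [[1, 2, 0], [2, 0, 1]]

def reaches(j):
    """Row reached in the last column starting from row j in column 0."""
    for p in perms:
        j = p[j]
    return j

# Connectivity across the grid is exactly the composed permutation.
total = compose(perms[1], perms[0])
assert all(reaches(j) == total[j] for j in range(3))
```

A query about a prefix of the permutations thus amounts to $\sqrt{n}$ connectivity queries, one per row.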
+
+The operations we consider are updating the $i$th permutation, and querying whether the composition of a prefix of the permutations equals a given permutation.
+These are \emph{block operations}, in the sense that they do not act on single edges or vertices, but groups of them;
+more specifically they correspond to $O(\sqrt{n})$ regular operations.
+This can only make our problem easier, since we can now apply amortization schemes, or somehow exploit that we have information about nearby operations.
+The reason we do not simply query for the total permutation is that this is hard to do using connectivity queries: we would have to search for it, using a lot of queries.
+
+We claim that $\sqrt{n}$ updates and $\sqrt{n}$ verification queries require $\Omega(\sqrt{n} \cdot \sqrt{n} \log n)$ cell probes,
+which implies a lower bound of $\Omega(\log n)$ per operation, since we do $\sqrt{n}$ block operations, each of which corresponds to $\sqrt{n}$ graph operations.
+
+\subsection{The Proof}
+\todo{wtf is this}
+As in Section~\ref{sec:partial-sum}, we will consider the interleaved access pattern.
+We will look at how much information has to be carried over from the left to the right subtree for a given node.
+We claim that for every node, the operations in its right subtree have to perform $\Omega(l\sqrt{n})$ expected cell probes reading cells that were written
+in its left subtree, where $l$ is the number of leaves in the subtree.
+We then sum this lower bound over every node in the tree.
+Note that there is no double counting of reads, since we count each read only at the \LCA{} of the read and the latest write to that cell.
+Since every leaf is in $\Theta(\log n)$ subtrees, and there are $\sqrt{n}$ leaves, we have $\sum l = \Theta(\sqrt{n} \log n)$,
+so we end up with $\Omega(\sqrt{n} \cdot \sqrt{n} \log n)$, which is the claim.
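The count $\sum l$ can be sanity-checked numerically: in a complete binary tree, the subtrees at each level contain every leaf exactly once, so the subtree-leaf counts sum to (number of leaves) times (number of levels). A quick check, illustrative only:

```python
import math

def sum_subtree_leaves(m):
    """Sum, over all nodes of a complete binary tree with m leaves
    (m a power of two), of the number of leaves in that node's subtree."""
    total, count, size = 0, 1, m
    while size >= 1:
        total += count * size  # `count` nodes at this level, `size` leaves each
        count, size = count * 2, size // 2
    return total

m = 1024  # playing the role of sqrt(n)
assert sum_subtree_leaves(m) == m * (int(math.log2(m)) + 1)
```

With $m = \sqrt{n}$ leaves this is $\sqrt{n}(\log_2 \sqrt{n} + 1) = \Theta(\sqrt{n} \log n)$, as used above.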
+
+Now we have to prove the claim from the previous paragraph.
+We can do this in a similar manner as in Section~\ref{sec:partial-sum}.
+The left subtree for a given node contains $l/2$ updates with $l/2$ independent, totally random permutations.
+An encoding of these updates must use $\Omega(l\sqrt{n} \log n)$ bits ($l/2$ updates, each a permutation of $\sqrt{n}$ elements, i.e.\ $\sqrt{n}$ numbers of $\Theta(\log n)$ bits each).
+We show that if the claim is false, there is a smaller encoding, which contradicts information theory.
+We will encode verified sums in the right subtree with which we can recover the permutations in the left subtree.
+This is done by encoding the \emph{query} permutations on the right.
+
+However, what we assume to know is a little different this time.
+We still assume we know everything before the left subtree, but we cannot assume to know what we pass into the queries on the right,
+because this is exactly what we are trying to figure out!
+In fact, these two things convey the same information, as each is reconstructible from the other.
+However, this time we know that the queries, which ask whether a composed prefix of the permutations equals the given permutation,
+always answer ``Yes''.
+
+\todo{how does this work??}
+
+Short version:
+We end up also encoding a separator of the sets $R\setminus W$ and $W\setminus R$, where $R$ and $W$ are the cells read in the right subtree and written in the left subtree, respectively.
+Then we can simulate all possible input permutations and use the separator to quit early whenever we see that a cell access is inconsistent.
+The encoding has size $O((|R| + |W|) \log n)$; since it must be $\Omega(l \sqrt{n} \log n)$ bits, we get $|R| + |W| = \Omega(l \sqrt{n})$,
+which was the claim we needed to prove.
+
-Dynamic partial sums, dynamic connectivity.
\chapter{Integer Structures}