
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

For an updated version of these notes, see http://www.cs.yale.edu/homes/aspnes/classes/469/notes.pdf. For practical implementation of hash tables in C, see C/HashTables.

These are theoretical notes on hashing based largely on MotwaniRaghavan §§8.4–8.5 (which is in turn based on the work of Carter and Wegman on universal hashing and of Fredman, Komlós, and Szemerédi on O(1) worst-case hashing).

1. Hashing: basics

Hashing only works if we are working in a RAM (random-access machine model), where we can access arbitrary memory locations in time O(1) and similarly compute arithmetic operations on O(log M)-bit values in time O(1). There is an argument that in reality any actual RAM machine requires either Ω(log N) time to read one of N memory locations (routing costs) or, if one is particularly pedantic, Ω(N^{1/3}) time (speed of light plus finite volume for each location). We will ignore this argument.

2. Universal hash families
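
In the Carter-Wegman sense, a family H of hash functions from keys to [m] is 2-universal if for any fixed x ≠ y, Pr[h(x) = h(y)] ≤ 1/m when h is drawn uniformly from H; a k-universal family (as used for cuckoo hashing below) makes any k fixed distinct keys look as if they were hashed independently and uniformly. As a concrete illustration (a minimal sketch, not from the notes), here is the classic 2-universal family h_{a,b}(x) = ((ax+b) mod p) mod m, assuming keys below the prime p = 2^61 − 1 and a compiler with 128-bit integers; the names uhash, uhash_random, etc. are made up here and reused by the later sketches:

    #include <stdint.h>
    #include <stdlib.h>

    /* Prime modulus; the Mersenne prime 2^61 - 1 is big enough for
     * 61-bit keys. */
    #define UH_PRIME ((((uint64_t)1) << 61) - 1)

    struct uhash {
        uint64_t a, b; /* random coefficients: 1 <= a < p, 0 <= b < p */
        uint64_t m;    /* size of the range [0, m) */
    };

    /* Draw a random member of the family.  rand() is only a
     * placeholder for a decent source of random bits. */
    static struct uhash
    uhash_random(uint64_t m)
    {
        struct uhash h;
        h.a = (((uint64_t)rand() << 31) ^ (uint64_t)rand()) % (UH_PRIME - 1) + 1;
        h.b = (((uint64_t)rand() << 31) ^ (uint64_t)rand()) % UH_PRIME;
        h.m = m;
        return h;
    }

    /* h(x) = ((a*x + b) mod p) mod m; for fixed x != y below p, a
     * collision occurs with probability at most 1/m over a and b. */
    static uint64_t
    uhash(const struct uhash *h, uint64_t x)
    {
        unsigned __int128 t = (unsigned __int128)h->a * x + h->b;
        return (uint64_t)(t % UH_PRIME) % h->m;
    }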

3. FKS hashing with O(s) space and O(1) worst-case search time

The goal is to hash a static set S so that we never pay more than constant time for a search (not just in expectation).
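
The standard two-level construction of Fredman, Komlós, and Szemerédi hashes S into s first-level buckets, then gives each bucket with n_i keys its own table of size n_i^2 with a secondary hash function redrawn until it is collision-free; with a table of size n_i^2 the collision probability is below 1/2, so O(1) redraws suffice in expectation, and E[∑ n_i^2] = O(s), so the whole structure fits in O(s) space. Here is a sketch in C of the two key pieces, reusing the hypothetical uhash family above (partitioning S by the first-level hash, and retrying that hash until ∑ n_i^2 = O(s), are left to the caller):

    #include <stdint.h>
    #include <stdlib.h>

    #define FKS_EMPTY UINT64_MAX /* sentinel; assumes no key equals UINT64_MAX */

    struct fks_bucket {
        struct uhash h;   /* second-level hash, collision-free on this bucket */
        uint64_t *slot;   /* n_i^2 cells */
        uint64_t size;    /* n_i^2 */
    };

    struct fks {
        struct uhash h;        /* first-level hash into s buckets */
        struct fks_bucket *b;  /* one bucket per first-level cell */
        uint64_t s;
    };

    /* Store the n keys of one bucket in a table of size n^2, redrawing
     * the secondary hash until no two keys collide; since the expected
     * number of collisions is < 1/2, this takes O(1) tries on average. */
    static void
    fks_fill_bucket(struct fks_bucket *bk, const uint64_t *key, uint64_t n)
    {
        bk->size = n * n;
        bk->slot = bk->size ? malloc(bk->size * sizeof *bk->slot) : NULL;
        for (;;) {
            bk->h = uhash_random(bk->size ? bk->size : 1);
            for (uint64_t i = 0; i < bk->size; i++)
                bk->slot[i] = FKS_EMPTY;
            uint64_t i;
            for (i = 0; i < n; i++) {
                uint64_t j = uhash(&bk->h, key[i]);
                if (bk->slot[j] != FKS_EMPTY)
                    break;          /* collision: redraw and start over */
                bk->slot[j] = key[i];
            }
            if (i == n)
                return;             /* all n keys placed without collision */
        }
    }

    /* Worst-case O(1) search: one first-level probe, one second-level probe. */
    static int
    fks_contains(const struct fks *f, uint64_t x)
    {
        const struct fks_bucket *bk = &f->b[uhash(&f->h, x)];
        return bk->size != 0 && bk->slot[uhash(&bk->h, x)] == x;
    }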

4. Cuckoo hashing

Goal: get O(1) search time in a dynamic hash table at the cost of a messy insertion procedure. In fact, each search takes only two reads, which can be done in parallel; this is optimal by a lower bound of Pagh (probe.pdf), who also gives a matching upper bound for static dictionaries. Cuckoo hashing is an improved version of this result that allows for dynamic insertions.

We'll mostly be following the presentation in the original cuckoo hashing paper by Pagh and Rodler: cuckoo-jour.pdf.

4.1. Structure

We have two tables T1 and T2 of size r each, with separate, independent hash functions h1 and h2. These functions are assumed to be k-universal for some sufficiently large value k; as long as we never look at more than k values at once, this means we can treat them effectively as random functions. (In practice, using crummy hash functions seems to work just fine, a common property of hash tables.)

Every key x is stored either in T1[h1(x)] or T2[h2(x)]. So the search procedure just looks at both of these locations and returns whichever one contains x (or fails if neither contains x).
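
With the hypothetical uhash family from section 2, and an EMPTY sentinel standing in for "no key", the structure and the two-probe search might be sketched in C as follows:

    #include <stdint.h>

    #define CK_EMPTY UINT64_MAX /* sentinel; assumes no key equals UINT64_MAX */

    struct cuckoo {
        uint64_t *t1, *t2;   /* the two tables, each of size r */
        struct uhash h1, h2; /* separate, independent hash functions */
        uint64_t r;
    };

    /* Search reads exactly two cells, one per table; the probes are
     * independent and could be issued in parallel. */
    static int
    cuckoo_contains(const struct cuckoo *c, uint64_t x)
    {
        return c->t1[uhash(&c->h1, x)] == x || c->t2[uhash(&c->h2, x)] == x;
    }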

To insert a value x_1, we put it in T1[h1(x_1)] or T2[h2(x_1)]: if either of these locations is empty, we put it there. Otherwise we have to kick out some value that is in the way (this is the cuckoo part of cuckoo hashing, named after the bird that leaves its eggs in other birds' nests). So we let x_2 = T1[h1(x_1)], and insert x_1 in T1[h1(x_1)]. We now have a new "nestless" value x_2, which we swap with whatever is in T2[h2(x_2)]. If that location was empty, we are done; otherwise we get a new value x_3 that we have to put in T1, and so on. The procedure terminates when we find an empty spot, or when enough iterations have passed that we don't expect to find one, in which case we rehash the entire table. The code from the Pagh-Rodler paper looks like this:
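
(Rendered here as a C sketch over the struct cuckoo above, following the paper's pseudocode rather than reproducing it verbatim; MAX_LOOP plays the role of the paper's MaxLoop cutoff.)

    /* Rehash: draw fresh h1, h2 and reinsert every stored key (not shown). */
    static void cuckoo_rehash(struct cuckoo *c);

    /* Cutoff before we give up and rehash; the paper takes MaxLoop on
     * the order of 3 log_{1+eps} r.  A fixed constant here for brevity. */
    #define MAX_LOOP 64

    static void
    cuckoo_insert(struct cuckoo *c, uint64_t x)
    {
        if (cuckoo_contains(c, x))
            return;
        for (int i = 0; i < MAX_LOOP; i++) {
            /* claim x's nest in T1, kicking out whoever lives there */
            uint64_t y = c->t1[uhash(&c->h1, x)];
            c->t1[uhash(&c->h1, x)] = x;
            if (y == CK_EMPTY)
                return;
            /* the displaced key tries its nest in T2, and so on */
            x = y;
            y = c->t2[uhash(&c->h2, x)];
            c->t2[uhash(&c->h2, x)] = x;
            if (y == CK_EMPTY)
                return;
            x = y;
        }
        cuckoo_rehash(c);    /* too many displacements; rebuild... */
        cuckoo_insert(c, x); /* ...and try again with new hash functions */
    }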

A detail not included in the above code is that we always rehash (in theory) after r^2 insertions; this avoids potential problems with the hash functions used in the paper not being universal enough.

The main question is how long it takes the insertion procedure to terminate, assuming the table is not too full. Letting s = |S|, we will assume r/s ≥ 1+ε for some fixed ε.

First let's look at what happens during an insert if we have a lot of nestless values. We have a sequence of values x_1, x_2, ..., where each consecutive pair x_i, x_{i+1} collides in h1 or h2. Assuming we don't reach the MaxLoop limit, there are three main possibilities (the leaves of the case analysis below):

  1. Eventually we reach an empty position without seeing the same key twice.
  2. Eventually we see the same key twice; there is some i and j > i such that x_j = x_i. Since x_i was already moved once, when we reach it the second time we will try to move it back, displacing x_{i-1}. This process continues until we have restored x_2 to T1[h1(x_1)], displacing x_1 to T2[h2(x_1)] and possibly creating a new sequence of nestless values. Two outcomes are now possible:

    1. Some x_l is moved to an empty location. We win!

    2. Some x_l is moved to a location we've already looked at. We lose! In this case we are playing musical chairs with more players than chairs, and have to rehash.

Let's look at the probability that we get the last, closed-loop case. Following Pagh-Rodler, we let v be the number of distinct nestless keys in the loop. We can now count how many different ways such a loop can form: there are v^3 choices for i, j, and l; r^{v-1} choices of cells for the loop; and s^{v-1} choices for the non-x_1 elements of the loop. We also have 2v edges, each of which occurs with probability r^{-1}, giving a total probability of v^3 r^{v-1} s^{v-1} r^{-2v} = v^3 (s/r)^v / (sr). Summing this over all v gives (1/(sr)) ∑ v^3 (s/r)^v = O(1/(sr)) = O(1/r^2) (the series converges under the assumption that s/r < 1). Since the cost of hitting a closed loop is O(r) (we must rehash), this case contributes O(r) · O(1/r^2) = O(1/r) = O(1) to the expected insertion complexity.
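
In display form, the algebra above works out as:

    \Pr[\text{closed loop with } v \text{ distinct keys}]
        \le v^3 \, r^{v-1} s^{v-1} r^{-2v}
        = v^3 \, s^{v-1} r^{-(v+1)}
        = \frac{v^3}{sr} \left(\frac{s}{r}\right)^{v},
    \qquad
    \sum_{v \ge 1} \frac{v^3}{sr} \left(\frac{s}{r}\right)^{v}
        = O\!\left(\frac{1}{sr}\right) = O\!\left(\frac{1}{r^2}\right).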

Now we look at what happens if we don't get a closed loop. It's a little messy to analyze the behavior of keys that appear more than once in the sequence, so the trick used in the paper is to observe that for any sequence of nestless keys x_1 ... x_p, there is a subsequence of size at least p/3 with no repetitions that starts with x_1. Since there are only two subsequences that start with x_1 (we can't have the same key show up more than twice), this will either be x_1 ... x_{j-1} or x_1 = x_{i+j-1}, ..., x_p, and a case analysis shows that at least one of these will be big. We can then argue that the probability that we get a sequence of v distinct keys starting with x_1 in either T1 or T2 is at most 2(s/r)^{v-1} (since we have to hit a nonempty spot, with probability ≤ s/r, at each step, but there are two possible starting locations), which gives an expected insertion time bounded by ∑ 3v(s/r)^{v-1} = O(1).
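
To see that this series is O(1) for fixed ε, use s/r ≤ 1/(1+ε) and the identity ∑_{v≥1} v x^{v-1} = 1/(1-x)^2 for |x| < 1:

    \sum_{v \ge 1} 3v \left(\frac{s}{r}\right)^{v-1}
        \le 3 \sum_{v \ge 1} v \left(\frac{1}{1+\epsilon}\right)^{v-1}
        = \frac{3}{\left(1 - \frac{1}{1+\epsilon}\right)^2}
        = 3\left(\frac{1+\epsilon}{\epsilon}\right)^{2} = O(1).

This crude bound is 3(1 + 1/ε)^2.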

Using a slightly more detailed analysis (see the paper), it can be shown that the bound for non-constant ε is O(1+1/ε).

5. Bloom filters

See MitzenmacherUpfal §5.5.3 for basics and a principled analysis or Bloom filter for many variations and the collective wisdom of the unwashed masses.
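
For concreteness, here is a minimal sketch of the basic structure (an m-bit array with k hash functions, drawn here from the hypothetical uhash family of section 2, although the textbook analysis assumes fully random functions):

    #include <stdint.h>

    struct bloom {
        uint8_t *bit;     /* m bits, all initially 0 */
        uint64_t m;       /* number of bits */
        int k;            /* number of hash functions */
        struct uhash *h;  /* k functions with range [0, m) */
    };

    /* Insert: set the k bits that x hashes to. */
    static void
    bloom_add(struct bloom *b, uint64_t x)
    {
        for (int i = 0; i < b->k; i++) {
            uint64_t j = uhash(&b->h[i], x);
            b->bit[j / 8] |= (uint8_t)(1 << (j % 8));
        }
    }

    /* Query: no false negatives; a false positive occurs when all k
     * bits happen to be set, with probability about (1 - e^{-kn/m})^k
     * after n insertions. */
    static int
    bloom_query(const struct bloom *b, uint64_t x)
    {
        for (int i = 0; i < b->k; i++) {
            uint64_t j = uhash(&b->h[i], x);
            if (!(b->bit[j / 8] & (1 << (j % 8))))
                return 0; /* definitely not present */
        }
        return 1; /* probably present */
    }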


CategoryRandomizedAlgorithmsNotes

