
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. A card counting problem

Suppose we take a deck of 2n cards, of which n are black and n are red, and shuffle it so that all (2n)! permutations are equally likely. We now play the following game: for each card, before seeing it turned over, you have the option to say "take"; you then score +1 if the card is black and -1 if it is red. If you do not say "take," the card is turned over anyway, you observe whether it is red or black, and we move on to the next card.

  1. Suppose you are required to say "take" exactly once (that is, once you say "take" the game ends, and if you have not said "take" before reaching the last card, you must take the last card). What is your expected return if you play the game optimally?

  2. Suppose now that you can say "take" at most once, but you are not required to do so. Now what is your expected return with optimal play?

1.1. Solution

In each case, we can set up a recurrence. Let Ti(k,m) be the expected return for variant i of the game, starting from a state with k black cards out of m cards total. The expected value of the next card is k/m - (m-k)/m = 2k/m - 1.

In either variant, we can take this expected return now, or we can let the card be turned over and continue from whichever of Ti(k-1,m-1) (if the card was black, which happens with probability k/m) or Ti(k,m-1) (otherwise) results. The recurrence is thus Ti(k,m) = max(2k/m - 1, (k/m)Ti(k-1,m-1) + (1-k/m)Ti(k,m-1)), with boundary conditions T1(k,1) = 2k-1 (we must take the last card) and T2(k,1) = k (we take the last card only if it is black).
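
Before guessing closed forms, it helps to be able to tabulate the recurrence mechanically. The following minimal Python sketch is our own illustration (the function T and its parameter names are not part of the original notes); it memoizes the recurrence using exact rational arithmetic, so the maxima are decided without rounding error:

    from fractions import Fraction
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(i, k, m):
        """Expected return for variant i with k black cards among m remaining."""
        if m == 1:
            # Boundary: variant 1 must take the last card (value 2k-1);
            # variant 2 takes it only if it is black (value k, k in {0,1}).
            return Fraction(2 * k - 1) if i == 1 else Fraction(k)
        take_now = Fraction(2 * k, m) - 1   # expected value of taking the next card
        pass_value = Fraction(0)            # expected value of letting it be revealed
        if k > 0:
            pass_value += Fraction(k, m) * T(i, k - 1, m - 1)
        if k < m:
            pass_value += Fraction(m - k, m) * T(i, k, m - 1)
        return max(take_now, pass_value)

Printing T(1, k, m) and T(2, k, m) for small k and m reproduces the tables worked out by hand below.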

To solve the recurrence for T1, we look at some small cases:

k   m   T1(k,m)
0   1   -1
1   1   1
0   2   -1
1   2   max(0, (1/2)(-1) + (1/2)(1)) = 0
2   2   max(1, 1) = 1
0   3   -1
1   3   max(-1/3, (1/3)(-1) + (2/3)(0)) = -1/3
2   3   max(1/3, (2/3)(0) + (1/3)(1)) = 1/3
3   3   1

At this point we guess that the max is always a no-op and that T1(k,m) = 2k/m - 1 for all k and m. We can verify this by checking the boundary condition:

\begin{align*}
T_1(k,1) &= 2k-1 = 2k/m-1.
\end{align*}

and the recurrence, for m>1:

\begin{align*}
T_1(k,m)
&= \max(2k/m-1, (k/m)T_1(k-1,m-1) + (1-k/m)T_1(k,m-1))
\\
&= \max(2k/m-1, (k/m)(2(k-1)/(m-1) - 1) + (1-k/m)(2k/(m-1)-1))
\\
&= \max(2k/m-1, (2/(m-1))(k(k-1)/m + (m-k)k/m) - 1)
\\
&= \max(2k/m-1, 2k/m-1)
\\
&= 2k/m - 1.
\end{align*}
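
The induction can also be checked numerically. Here is a small self-contained sketch (again our own, encoding the same recurrence) that verifies T1(k,m) = 2k/m - 1 exactly for every deck with up to 30 cards:

    from fractions import Fraction
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T1(k, m):
        if m == 1:
            return Fraction(2 * k - 1)
        pass_value = (Fraction(k, m) * T1(k - 1, m - 1) if k > 0 else 0) \
                   + (Fraction(m - k, m) * T1(k, m - 1) if k < m else 0)
        return max(Fraction(2 * k, m) - 1, pass_value)

    # The max is always a no-op: the closed form holds exactly.
    assert all(T1(k, m) == Fraction(2 * k, m) - 1
               for m in range(1, 31) for k in range(m + 1))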

For T2, again compute small cases:

k   m   T2(k,m)
0   1   0
1   1   1
0   2   0
1   2   max(0, (1/2)(0) + (1/2)(1)) = 1/2
2   2   1
0   3   0
1   3   max(-1/3, (1/3)(0) + (2/3)(1/2)) = 1/3
2   3   max(1/3, (2/3)(1/2) + (1/3)(1)) = 2/3
3   3   1

Now the obvious guess is T2(k,m) = k/m. Again we verify:

\begin{align*}
T_2(k,1) &= k = k/m.
\\
T_2(k,m)
&= \max(2k/m-1, (k/m)T_2(k-1,m-1) + (1-k/m)T_2(k,m-1))
\\
&= \max(2k/m-1, (k/m)((k-1)/(m-1)) + (1-k/m)(k/(m-1)))
\\
&= \max(2k/m-1, (k(k-1) + (m-k)k)/(m(m-1)))
\\
&= \max(2k/m-1, k/m)
\\
&= \max(2k-m, k) / m
\\
&= k/m \qquad \text{(since $k \le m$, we have $k \ge 2k-m$)}.
\end{align*}
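
The same style of exact check (again our own sketch) confirms the guess for T2:

    from fractions import Fraction
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T2(k, m):
        if m == 1:
            return Fraction(k)
        pass_value = (Fraction(k, m) * T2(k - 1, m - 1) if k > 0 else 0) \
                   + (Fraction(m - k, m) * T2(k, m - 1) if k < m else 0)
        return max(Fraction(2 * k, m) - 1, pass_value)

    assert all(T2(k, m) == Fraction(k, m)
               for m in range(1, 31) for k in range(m + 1))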

2. Waiting times

Suppose we flip a biased coin that comes up heads with probability p until it comes up heads n times. Let X be the total number of times we flip the coin.

  1. Compute EX.
  2. Show that for any fixed p and fixed δ>0, Pr[X > (1+δ)EX] declines exponentially as a function of n. If it makes your life easier, you may assume that (1+δ)EX is an integer.

2.1. Solution

  1. Express X = ∑ Xi, where Xi, for i = 1..n, is the number of coin-flips after the (i-1)-th head, up to and including the i-th head. The Xi are independent and identically distributed, and conditioning on the outcome of the first flip gives EXi = 1 + (1-p)EXi, so EXi = 1/p and, by linearity of expectation, EX = n/p.

  2. This is tricky. The easiest way to get the bound is to observe that the event [X > (1+δ)n/p] occurs precisely when, among the first (1+δ)n/p coin-flips, there are fewer than n heads. The expected number of heads is μ = p·((1+δ)n/p) = (1+δ)n, so we are asking for the probability of fewer than (1/(1+δ))μ = (1 - δ/(1+δ))μ heads. Applying the Chernoff lower-tail bound exp(-μδ²/2) with δ ← δ/(1+δ) bounds this probability by exp(-μδ²/(2(1+δ)²)) = exp(-nδ²/(2(1+δ))), which is exponentially small as a function of n. (A simulation sketch checking both parts follows this list.)
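
Both parts are easy to sanity-check by simulation. The sketch below is our own illustration (the parameter values n, p, δ, and the trial count are arbitrary choices, not from the original notes); it estimates EX and compares the empirical tail frequency to the bound exp(-nδ²/(2(1+δ))):

    import math
    import random

    def flips_until_n_heads(n, p, rng):
        # Sample X: flips of a p-biased coin until the n-th head appears.
        flips = heads = 0
        while heads < n:
            flips += 1
            if rng.random() < p:
                heads += 1
        return flips

    n, p, delta, trials = 100, 0.5, 0.1, 100_000   # arbitrary illustrative values
    rng = random.Random(1)
    samples = [flips_until_n_heads(n, p, rng) for _ in range(trials)]

    print("empirical EX =", sum(samples) / trials, "vs n/p =", n / p)
    tail = sum(x > (1 + delta) * n / p for x in samples) / trials
    bound = math.exp(-n * delta ** 2 / (2 * (1 + delta)))
    print("empirical tail =", tail, "<= bound =", bound)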

3. Exposure

Suppose that we generate two strings x and y of length n over an alphabet of size s uniformly at random. A string z of length k is a subsequence of x if there is some sequence of indices f(1) < f(2) < ... < f(k) such that z_j = x_{f(j)} for j = 1..k. A longest common subsequence of x and y is a string z of maximum length that is a subsequence of both x and y. Let X be the length of a longest common subsequence of x and y.

Give the best bound you can on the probability that |X-EX| ≥ α.

3.1. Solution

Use the method of bounded differences, applied to the function f(x_1...x_n, y_1...y_n) that computes the length of a longest common subsequence of x and y. This function is Lipschitz with constant 1. If we change exactly one coordinate y_i, for example, the length of the LCS can drop by at most one: given any LCS of the original pair that uses y_i, deleting y_i leaves a common subsequence of the new pair that is shorter by exactly one. The length can also rise by at most one, since if changing y_i to y'_i increased the LCS by more than one, then changing y'_i back to y_i would decrease it by more than one, contradicting the previous observation.

The Azuma-Hoeffding inequality now gives Pr[|X-EX| ≥ λ√(2n)] ≤ 2 exp(-λ²/2), or equivalently Pr[|X-EX| ≥ α] ≤ 2 exp(-α²/(4n)).
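
Both the Lipschitz property and the final bound are easy to check numerically. The sketch below is our own illustration (lcs_length and the parameters n, s, and alpha are ours, not from the original notes); it verifies that changing any single coordinate of y moves the LCS length by at most one, and evaluates the Azuma-Hoeffding bound at a sample deviation:

    import math
    import random

    def lcs_length(x, y):
        # Standard dynamic program: prev[j] holds the LCS length of the
        # processed prefix of x against y[:j].
        prev = [0] * (len(y) + 1)
        for a in x:
            cur = [0]
            for j, b in enumerate(y, 1):
                cur.append(prev[j - 1] + 1 if a == b else max(cur[-1], prev[j]))
            prev = cur
        return prev[-1]

    rng = random.Random(1)
    n, s = 50, 4                      # arbitrary illustrative values
    x = [rng.randrange(s) for _ in range(n)]
    y = [rng.randrange(s) for _ in range(n)]
    base = lcs_length(x, y)

    # Lipschitz check: perturbing any one y_i changes the LCS length by at most 1.
    for i in range(n):
        for c in range(s):
            assert abs(lcs_length(x, y[:i] + [c] + y[i + 1:]) - base) <= 1

    alpha = 25
    print("2*exp(-alpha**2/(4*n)) =", 2 * math.exp(-alpha ** 2 / (4 * n)))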

