
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.

In IndistinguishabilityProofs, we saw a proof that we can't solve the coordinated attack problem. But maybe we want to solve it anyway. The solution is to change the problem.

1. Randomized coordinated attack

Like standard coordinated attack, but with less coordination. Specifically, we'll allow the processes to flip coins to decide what to do, and assume that the communication pattern (which messages get delivered in each round) is fixed and independent of the coin-flips. This corresponds to assuming an oblivious or perhaps content-oblivious adversary (see TypesOfAdversaries). We'll also relax the Agreement property to only hold with some high probability:

Randomized Agreement

For any adversary A, Pr[some process decides 0 and some other process decides 1] <= epsilon, where the probability is taken over the processes' coin-flips (the communication pattern being fixed by A).

Validity (all-0 inputs => all-0 outputs; all-1 inputs with no messages lost => all-1 outputs) and Termination (every process decides by the end of round r) are as before.
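
For concreteness, the fixed communication pattern can be thought of as a predicate on (round, sender, receiver) chosen before any coins are flipped. The little Python illustration below is my own representation (not from the notes or LynchBook); the simulation sketch in the next section takes such a predicate as a parameter.

    # A communication pattern is fixed in advance by the adversary:
    # delivered(t, i, j) == True means i's round-t message reaches j.
    def lossless(t, i, j):
        return True

    def silence_process_0(t, i, j):
        return j != 0    # nothing sent to process 0 is ever delivered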

2. An algorithm

Here's an algorithm that gives epsilon = 1/r (see LynchBook Section 5.2.2 for details). The simplifying assumption is that the network is complete, although a strongly-connected network with r >= diameter also works.
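
The algorithm itself isn't reproduced here, so the following is a minimal Python sketch of the level-based protocol as I read it from LynchBook Section 5.2.2 and the argument below: process 0 picks key uniformly from {1, ..., r}; every process relays its entire state each round; a process's own level is 1 + the minimum level it has recorded for any other process; after r rounds it decides 1 exactly when it knows the key, has seen all-1 inputs, and its level is at least the key. The function name run_rca, the delivered parameter, and the choice of process 0 as key-holder are my own, not the notes'.

    import random
    from itertools import product

    def run_rca(inputs, r, delivered, seed=None):
        """Sketch of r rounds of randomized coordinated attack on a complete
        network of n = len(inputs) >= 2 processes.  delivered(t, i, j) says
        whether i's round-t message reaches j (fixed in advance)."""
        n = len(inputs)
        rng = random.Random(seed)
        key = rng.randint(1, r)                      # process 0's random key
        known_inputs = [{i: inputs[i]} for i in range(n)]
        known_key = [key if i == 0 else None for i in range(n)]
        level = [[0 if k == i else -1 for k in range(n)] for i in range(n)]

        for t in range(1, r + 1):
            # Everyone sends a snapshot of its state from the end of the
            # previous round; the fixed pattern decides what is delivered.
            msgs = [(dict(known_inputs[i]), known_key[i], list(level[i]))
                    for i in range(n)]
            for i, j in product(range(n), repeat=2):
                if i != j and delivered(t, i, j):
                    inp, k, lvl = msgs[i]
                    known_inputs[j].update(inp)
                    if known_key[j] is None and k is not None:
                        known_key[j] = k
                    for m in range(n):
                        level[j][m] = max(level[j][m], lvl[m])
            # Own level = 1 + least level recorded for any other process.
            for i in range(n):
                level[i][i] = 1 + min(level[i][k] for k in range(n) if k != i)

        return [1 if (known_key[i] is not None
                      and len(known_inputs[i]) == n
                      and all(v == 1 for v in known_inputs[i].values())
                      and level[i][i] >= known_key[i])
                else 0
                for i in range(n)]

For example, run_rca([1, 1, 1], r=5, delivered=lambda t, i, j: True) returns [1, 1, 1] (the all-1, no-loss Validity case), while run_rca([0, 0, 0], r=5, delivered=lambda t, i, j: True) returns [0, 0, 0].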

2.1. Why it works

Termination
Immediate from the algorithm (every process decides at the end of round r).
Validity
  • If all inputs are 0, no process ever sees all-1 inputs, so nobody decides 1 (technically this requires an invariant that processes' non-null views are consistent with the actual inputs, but that's not hard to prove).
  • If all inputs are 1 and no messages are lost, then the information level of each process after k rounds is k (prove by induction), and all processes learn the key and all inputs (immediate from the first round). So after r rounds every process has level r >= key and decides 1.
Randomized Agreement
  • First prove a lemma. Define level_i^t[k] to be the value of level_i[k] after t rounds. Then for all i, j, k, t, (1) level_i^t[j] <= level_j^{t-1}[j] for i != j, and (2) |level_i^t[k] - level_j^t[k]| <= 1. As always, the proof is by induction on rounds. Part (1) is easy and boring so we'll skip it. For part (2), we have:

    • After 0 rounds, level_i^0[k] = level_j^0[k] = -1 if neither i nor j equals k; if one of them is k, say i = k, then level_k^0[k] = 0, which is still close enough.

    • After t rounds, consider level_i^t[k] - level_i^{t-1}[k] and similarly level_j^t[k] - level_j^{t-1}[k]. It's not hard to show that each can jump by at most 1. If both deltas are +1 or both are 0, there's no change in the difference in views and we win from the induction hypothesis. So the interesting case is when level_i[k] stays the same and level_j[k] increases, or vice versa.

    • There are two ways for level_j[k] to increase:

      • If j != k, then j received a message from some j' with level_{j'}^{t-1}[k] > level_j^{t-1}[k], and level_j^t[k] is the largest such value. From the induction hypothesis, level_{j'}^{t-1}[k] <= level_i^{t-1}[k] + 1 = level_i^t[k] + 1 (recall that level_i[k] does not change in this case). So we are happy.

      • If j = k, then j has level_j^t[j] = 1 + min_{k' != j} level_j^t[k'] <= 1 + level_j^t[i] <= 1 + level_i^t[i]. Again we are happy.

  • Note that the key value didn't figure in the preceding argument: the levels are determined entirely by the (fixed) communication pattern, so everybody's level at round r is independent of the key.
  • So now we have that level_i^r[i] is in { l, l+1 }, where l is some fixed value uncorrelated with key. The only way to get some process to decide 1 while others decide 0 is if l+1 >= key but l < key. (If l = 0, a process at this level doesn't know key, but it can still reason that 0 < key since key is in [1,r].) This can only occur if key = l+1, which occurs with probability at most 1/r since key was chosen uniformly. (This calculation is written out as a single chain below.)
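
Putting the last two bullets together (a restatement of the calculation above; l is fixed by the communication pattern alone, and key is uniform on {1, ..., r} independently of it):

    Pr[disagreement] <= Pr[l < key <= l+1] = Pr[key = l+1] <= 1/r.

Instrumenting the sketch in the previous section to also report the final levels is an easy way to check empirically that the values level_i^r[i] never differ by more than one across processes, whatever delivery pattern is supplied.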

3. Almost-matching lower bound

The lower bound: for any r-round protocol satisfying Validity and Termination, Pr[disagreement] = epsilon >= 1/(r+1).

Proof
  • Define level_i^t[k] as in the previous algorithm (whatever the real algorithm is doing).

  • Now show Pr[i decides 1] <= epsilon * (level_i^r[i] + 1) by induction on level_i^r[i]:

    • If level_i^r[i] = 0, the real execution is indistinguishable (to i) from an execution in which some other process j starts with 0 and receives no messages at all. In that execution, j must decide 0 or risk violating Validity. So i decides 1 with probability at most epsilon (from the disagreement bound).

    • If level_i^r[i] = k > 0, the real execution is indistinguishable (to i) from an execution in which some other process j only reaches level k-1 and thereafter receives no messages. From the induction hypothesis, Pr[j decides 1] <= epsilon*k in that pruned execution, and so Pr[i decides 1] <= epsilon*(k+1) in the pruned execution (see the chain written out after this proof). But by indistinguishability, Pr[i decides 1] <= epsilon*(k+1) in the original execution as well.

  • In the all-1 execution with no messages lost, level_i^r[i] = r and Pr[i decides 1] = 1 (by Validity). So 1 <= epsilon*(r+1), which gives epsilon >= 1/(r+1).
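
The induction step above, written out as a single chain (this is just the second bullet spelled out; the middle inequality is where the disagreement bound enters):

    Pr[i decides 1]  =  Pr[i decides 1 in the pruned execution]           (indistinguishable to i)
                    <=  Pr[j decides 1 in the pruned execution] + epsilon  (i decides 1 while j decides 0 with probability at most epsilon)
                    <=  epsilon*k + epsilon                                (induction hypothesis, since j is at level k-1)
                     =  epsilon*(k+1).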

4. What's really going on here

See CommonKnowledge.


CategoryDistributedComputingNotes

