

For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.

The session problem demonstrates some limitations on globally simulating a synchronous message-passing system with an asynchronous message-passing system. In a global simulation, our goal is to produce a simulation that looks synchronous "from the outside"; that is, that looks synchronous to an observer that can see the entire schedule. In contrast, a synchronizer (see Synchronizers) produces a simulation that looks synchronous "from the inside"—the resulting execution is indistinguishable from a synchronous execution to any of the processes, but an outside observer can see that different processes execute different rounds at different times.

In our description, we will mostly follow AttiyaWelch §6.2.2; see AttiyaWelch for references on the origin of the problem.

A solution to the session problem is an asynchronous protocol in which each process repeatedly executes some special action. Our goal is to guarantee that these special actions group into s sessions, where a session is an interval of time in which every process executes at least one special action. We also want the protocol to terminate: this means that in every execution, every process executes a finite number of special actions.
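
To make the definition concrete, here is a minimal Python sketch (the encoding of a schedule as a list of process ids, one entry per special action, is an assumption for illustration, not anything from the notes) that counts how many sessions can be carved out of a given sequence of special actions, closing each session greedily as soon as every process has acted:

    def count_sessions(trace, processes):
        # trace: list of process ids, one per special action, in schedule
        # order.  processes: the set of all process ids.  A session closes
        # as soon as every process has performed a special action since the
        # previous session closed; closing greedily maximizes the count.
        processes = set(processes)
        seen = set()
        sessions = 0
        for p in trace:
            seen.add(p)
            if seen == processes:
                sessions += 1
                seen = set()
        return sessions

    # Two processes alternating special actions: three full sessions.
    print(count_sessions([0, 1, 0, 1, 0, 1], {0, 1}))   # prints 3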

A synchronous system can solve this problem trivially in s rounds: each process executes one special action per round. For an asynchronous system, a lower bound of Attiya and Mavronicolas (based on an earlier bound of Arjomandi et al., who defined the problem in a slightly different communication model) shows that if the diameter of the network is D, there is no solution to the s-session problem that takes (s-1)D time or less in the worst case. The argument is based on reordering events in any such execution to produce fewer than s sessions, using the happens-before relation from LogicalClocks.
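
The synchronous upper bound is easy to see as code. In this sketch (same assumed list-of-process-ids encoding as above), each of the s rounds has every process perform one special action, so each round is a complete session and the protocol terminates:

    def synchronous_protocol(processes, s):
        # Each iteration of the outer loop is one synchronous round in
        # which every process performs one special action, so every round
        # forms a complete session; after s rounds the protocol stops.
        trace = []
        for _ in range(s):
            for p in processes:
                trace.append(p)
        return trace

    # 3 processes, s = 2 sessions: [0, 1, 2, 0, 1, 2]
    print(synchronous_protocol(range(3), 2))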

1. Outline of the proof

(See AttiyaWelch §6.2.2 for the real proof.)

Fix some algorithm A for solving the s-session problem, and suppose that its worst-case time complexity is (s-1)D or less. Consider some synchronous execution of A (that is, one where the adversary scheduler happens to arrange the schedule to be synchronous) that takes (s-1)D rounds or less. Divide this execution into two segments: an initial segment β that includes all rounds with special actions, and a suffix δ that includes any extra rounds where the algorithm is still floundering around. We will mostly ignore δ, but we have to leave it in to allow for the possibility that whatever is happening there is important for the algorithm to work (e.g. to detect termination).

We now want to perform a causal shuffle on β that leaves it with only s-1 sessions. The first step is to chop β into at most s-1 segments β1, β2, ... of at most D rounds each. Because the diameter of the network is D, there exist processes p0 and p1 at distance D, which means that no chain of messages starting at p0 at the beginning of some segment can reach p1 before the end of the segment. It follows that for any events e0 of p0 and e1 of p1 in the same segment βi, it is not the case that e0 ⇒βδ e1 (that is, e0 does not happen-before e1 in βδ). So there exists a causal shuffle of βi that puts all events of p0 after all events of p1. By a symmetrical argument, we can also put all events of p1 after all events of p0. In both cases the resulting schedule is indistinguishable by all processes from the original.
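
To see what such a shuffle does, here is a hedged sketch (with an assumed encoding: a segment is a list of (process, event) pairs, and we track only p0 and p1, as the proof does). Moving all of one process's events ahead of the other's is just a stable partition of the segment; it is a legal causal shuffle precisely because no happens-before edge runs between the two processes inside a single segment:

    def shuffle_segment(segment, goes_first):
        # Put every event of `goes_first` before every event of the other
        # process, preserving each process's internal order.  This never
        # reverses a happens-before edge, since within a segment of at
        # most D rounds no message chain connects the two processes.
        first = [e for e in segment if e[0] == goes_first]
        rest = [e for e in segment if e[0] != goes_first]
        return first + rest

    seg = [(0, 'a'), (1, 'b'), (0, 'c'), (1, 'd')]
    print(shuffle_segment(seg, 1))   # [(1,'b'), (1,'d'), (0,'a'), (0,'c')]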

So now we apply these shuffles to each of the segments βi in alternating order: p0 goes first in the even-numbered segments and p1 goes first in the odd-numbered segments, yielding a sequence of shuffled segments β'i. This has the effect of putting the p0 events together, as in this example with (s-1) = 4:

β'δ = β'1 β'2 β'3 β'4 δ = [p1 p0] [p0 p1] [p1 p0] [p0 p1] δ

(here each p0 or p1 stands in for a sequence of events of that process).

Now let's count sessions. We can't end a session until we reach a point where both processes have taken at least one step since the end of the last session. If we mark with a slash the earliest places where this can happen, we get a picture like this:

[p1 p0 /] [p0 p1 /] [p1 p0 /] [p0 p1 /] δ

Since no special actions occur in δ, the execution contains at most (s-1) sessions, contradicting the assumption that the algorithm always produces s sessions. This concludes the proof.
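
Putting the pieces together, the following sketch (again with the assumed list-of-process-ids encoding, and the greedy counter repeated from above so the fragment stands alone) builds the shuffled pattern for an arbitrary s and checks that it never contains more than s-1 sessions:

    def shuffled_schedule(s):
        # The trace of p0/p1 special-action blocks after the alternating
        # shuffle: odd-numbered segments run p1's block first,
        # even-numbered segments run p0's block first.
        trace = []
        for i in range(1, s):
            trace += [1, 0] if i % 2 == 1 else [0, 1]
        return trace

    def count_sessions(trace):
        # Greedy count: close a session as soon as both processes have
        # acted since the last session closed.
        seen, sessions = set(), 0
        for p in trace:
            seen.add(p)
            if seen == {0, 1}:
                sessions += 1
                seen = set()
        return sessions

    for s in range(2, 8):
        assert count_sessions(shuffled_schedule(s)) == s - 1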


CategoryDistributedComputingNotes

