Performance

The basic task of algorithm performance analysis is to compute the asymptotic worst-case running time of a given algorithm. The parts of this phrase mean:

Running time

We want to compute the number of primitive (usually this means ConstantTime) operations executed by the algorithm as a function of the size of the input.

Asymptotic

We care mostly about how this function grows for large inputs, ignoring constant factors (hence the use of AsymptoticNotation).

Worst-case

For a given input size, we consider the most expensive inputs; equivalently, we want an upper bound that applies to all inputs, including the bad ones.

Sometimes we will consider resources other than time, such as space, random bits, or gasoline---whatever is appropriate to the problem. But we will still usually look at worst-case asymptotic performance.
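To make these three terms concrete, here is a minimal sketch of linear search (written in Python for the sake of being runnable; the choice of example is illustrative, not taken from these notes):

    def linear_search(xs, target):
        """Return the first index of target in xs, or -1 if it is absent."""
        for i in range(len(xs)):     # at most n = len(xs) iterations
            if xs[i] == target:      # constant work per iteration
                return i
        return -1

Each iteration performs a constant number of primitive operations, so on an input of size n the loop does at most proportional-to-n work. The worst case is an absent target (or one in the last position), which forces all n comparisons; ignoring constant factors, the worst-case running time is $\Theta(n)$.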

Unlike AlgorithmDesign, which requires creativity, algorithm analysis is usually a mechanical process for well-structured algorithms. Such algorithms can generally be decomposed recursively by peeling off outer layers of one of two types:

Iterative

A loop that runs for m iterations in which the i-th iteration takes T(i) time. The total cost is then $\sum_{i=1}^{m} T(i)$. Once we know T(i) (which may require further analysis of the body of the loop), computing the total cost is simply an exercise in ComputingSums.
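For instance, here is a hypothetical quadratic loop (a Python sketch, not an algorithm from these notes) in which iteration i of the outer loop does work proportional to i:

    def prefix_sums_quadratic(xs):
        """Recompute each prefix sum of xs from scratch."""
        totals = []
        m = len(xs)
        for i in range(m):            # outer loop: m iterations
            prefix = 0
            for j in range(i + 1):    # iteration i costs T(i) = Theta(i + 1)
                prefix += xs[j]
            totals.append(prefix)
        return totals

Summing the per-iteration costs gives $\sum_{i=1}^{m} \Theta(i) = \Theta(m^2)$ total work.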

Recursive

A procedure that breaks up the original problem into smaller instances of the same problem, then calls itself recursively on those smaller instances. The running time of such an algorithm is given by a recurrence relation; to compute it we have to be good at SolvingRecurrences.
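A standard illustration (again a Python sketch; this is the classical merge sort, not an algorithm defined in these notes) splits the input in half, recurses on each half, and merges the results:

    def merge_sort(xs):
        """Sort xs; the cost obeys the recurrence T(n) = 2T(n/2) + Theta(n)."""
        if len(xs) <= 1:               # base case: constant time
            return xs
        mid = len(xs) // 2
        left = merge_sort(xs[:mid])    # smaller instance: T(n/2)
        right = merge_sort(xs[mid:])   # smaller instance: T(n/2)
        merged = []                    # merging the halves: Theta(n)
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

Solving the recurrence $T(n) = 2T(n/2) + \Theta(n)$ gives the familiar $\Theta(n \log n)$ bound.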

Other algorithms have an even simpler structure, where the cost of the outermost algorithm is just the sum of the costs of its subroutines. These we can analyze by adding.
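For example (a generic composition, not one from these notes), an algorithm that first sorts its n inputs in $\Theta(n \log n)$ time and then scans the sorted list in $\Theta(n)$ time costs $\Theta(n \log n) + \Theta(n) = \Theta(n \log n)$ in total.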

Not all algorithms have such a pleasant structure. For example, nobody knows how long this algorithm runs for arbitrarily large inputs, or even whether it terminates for all inputs:

def collatz(n):
    if n == 1:
        return 1                   # base case
    elif n % 2 == 0:
        return collatz(n // 2)     # halve an even n
    else:
        return collatz(3 * n + 1)  # 3n + 1 for an odd n

We will try to avoid such monstrosities in this course.

Correctness

Having a highly efficient algorithm is not very useful if it gives the wrong answer. So in addition to analyzing algorithms for performance, we can also analyze algorithms for correctness. Typically this involves constructing proofs that the algorithm works provided its subroutines work (a form of structural induction, which you may have seen in CS201 if Carsten was teaching it). For iterative algorithms, it involves an induction argument that says that the i-th iteration produces the correct intermediate result provided the previous iterations did. For recursive algorithms, it involves an induction argument ordered by some measure of the size or complexity of the subproblems being solved recursively (which should be smaller by this measure than the original problem).
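As a small illustration of the iterative case, here is a generic Python sketch with the invariant that the induction argument would establish recorded as comments:

    def total(xs):
        """Compute the sum of xs, with the loop invariant noted at each step."""
        s = 0
        for i in range(len(xs)):
            # Invariant: here s == xs[0] + ... + xs[i-1] (an empty sum when i = 0).
            s += xs[i]
            # The invariant now holds with i one larger: the inductive step.
        # At exit, the invariant with i = len(xs) says s is the sum of all of xs.
        return s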

We will not spend as much time on proving correctness as we do on worrying about performance. For simple algorithms, correctness is usually obvious (though intuitions can be deceptive); for more complex algorithms, correctness is usually too time-consuming to prove rigorously. But you should always be prepared to give at least an informal explanation of why any algorithm you design works.


CategoryAlgorithmNotes


2014-06-17 11:57