Basic problem-solving techniques.
1. Feynman's Algorithm
Supposedly used by the physicist [Richard_Feynman]:
- Write down the problem.
- Think very hard.
- Write down the solution.
For mere mortals (i.e., everybody except Richard Feynman), this often works about as well as the [Underpants_Gnomes]' business plan.
2. Polya's Principles
If you can't solve a problem, then there is an easier problem you can solve: find it. ---[George_Polya]
In his 1957 book How to Solve It (HowToSolveIt), George Polya described a set of heuristics for solving problems. A basic outline of his approach is pretty similar to Feynman's Algorithm:
- Make sure you understand the problem.
- Come up with a plan for solving the problem.
- Check that the plan works.
If the last step fails, try the second step again. If the last step fails in a particularly spectacular way, you may need to go back to the first step as well. You may need to go through many iterations of this process, especially if you are dealing with a research problem that has never been solved before.
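To make the last step concrete for algorithmic problems: one common (if not foolproof) way to check that a plan works is to test it against a slower but obviously correct solution on many small inputs. The sketch below illustrates this idea in Python on a made-up example (maximum subarray sum); the problem choice and all the names are assumptions for illustration only, not anything from Polya's book.

```python
import random

def max_subarray_fast(a):
    """The plan: Kadane's algorithm, O(n) time."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_slow(a):
    """Obviously correct but slow: try all O(n^2) nonempty intervals."""
    return max(sum(a[i:j]) for i in range(len(a)) for j in range(i + 1, len(a) + 1))

# Step 3: check that the plan works by comparing it with the slow version
# on many small random inputs.
for _ in range(1000):
    a = [random.randint(-5, 5) for _ in range(random.randint(1, 8))]
    assert max_subarray_fast(a) == max_subarray_slow(a), a
print("plan survived 1000 random checks")
```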
What makes Polya's method a little more tractable than Feynman's is that he expands a bit on step 2, suggesting a number of heuristics for finding solutions. Translated into ComputerScience terms, some of these are:
- Have you seen it before? Or have you seen the same problem in a slightly different form?
- Do you know a related problem? Do you know [an algorithm] that could be useful?
- Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible?
- Could you restate the problem? Could you restate it still differently? Go back to definitions.
- If you cannot solve the proposed problem try to solve first some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Could you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the [output] then determined, how can it vary? Could you derive something useful from the [input]? Could you think of other [input] appropriate to determine the [output]? Could you change the [input] or [output], or both if necessary, so that the new [input] and the new [output] are nearer to each other?
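As one concrete illustration of the last heuristic (solve a more accessible related problem, then change the input so the two problems meet): suppose the problem is to decide whether any two elements of a list sum to a target k. The same question is much easier when the input is already sorted, and you can move the original input closer to that easier problem by sorting it first. The example and names below are invented for this sketch, not taken from Polya.

```python
def has_pair_sum_sorted(a, k):
    """Easier related problem: the input is sorted, so two pointers suffice."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == k:
            return True
        elif s < k:
            lo += 1
        else:
            hi -= 1
    return False

def has_pair_sum(a, k):
    """Original problem: reduce it to the easier one by sorting the input."""
    return has_pair_sum_sorted(sorted(a), k)

print(has_pair_sum([3, 9, 1, 7], 10))  # True: 3 + 7
print(has_pair_sum([3, 9, 1, 7], 11))  # False
```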
Many of these heuristics are suspiciously similar to the AlgorithmDesignTechniques described in LevitinBook. One heuristic to consider in a classroom setting, especially if you (a) are stuck, (b) are worried that you don't understand the problem, or (c) think that the problem ought to be related to something you've seen before but aren't sure how, is: Have I talked to a TA about this problem?
If an algorithmic problem seems really difficult, you can also ask: Can I show that this problem has no efficient solution? The answer to this question is usually no (since nobody knows how to prove the non-existence of good algorithms except in some limited special cases, such as the Ω(n log n) lower bound for comparison-based sorting), but using techniques from ComputationalComplexityTheory you can sometimes come close.