
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. A destructive RSA key

Let p and q be distinct primes, and let e = 2(p-1)(q-1). Show that the set A = { x^e mod pq | x ∈ ℕ } has exactly four elements.

1.1. Solution

The 2 is a bit of a red herring: the solution would work without it (or with the 2 replaced by any positive integer).

Using the Chinese remainder theorem, there is a bijection ℤ_pq ↔ ℤ_p × ℤ_q such that if x maps to (x_1,x_2), then x^e maps to (x_1^e,x_2^e). Euler's theorem says that $x_1^e = x_1^{2(p-1)(q-1)} = \left(x_1^{p-1}\right)^{2(q-1)} = 1^{2(q-1)} = 1 \pmod{p}$ if x_1 is relatively prime to p; similarly, x_2^e = 1 (mod q) if x_2 is relatively prime to q. So for most values (x_1,x_2), we get (x_1^e,x_2^e) = (1,1).

The exception is when x_1 = 0 (mod p) or x_2 = 0 (mod q). In this case, because e = 2(p-1)(q-1) ≠ 0, we have 0^e = 0. This gives the additional possibilities (0,0), (0,1), and (1,0). Running the Chinese remainder theorem in the other direction, each of these four pairs gives exactly one element of ℤ_pq, for four elements in total.

2. Zero matrices

Recall that a zero matrix, usually just written as 0, is a matrix all of whose entries are zero.

  1. Prove or disprove: If A and B are square matrices of the same size, and AB = 0, then either A = 0 or B = 0.
  2. Prove or disprove: If A and B are square invertible matrices of the same size, then AB ≠ 0.

2.1. Solution

1. Disproof: Here are two nonzero 2-by-2 matrices whose product is zero:

\begin{displaymath}
\begin{bmatrix}
1 & 1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & -1\\
-1 & 1
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 \\
0 & 0
\end{bmatrix}
\end{displaymath}

2. Proof: If A and B are invertible, then (AB)^{-1} = B^{-1}A^{-1}, so AB is also invertible. Hence AB can't be 0, since 0 is not invertible.

3. Xorshift formulas

On a typical machine, a signed char value in C consists of 8 bits, which we can imagine as the entries of an 8-dimensional vector x over ℤ_2. Some operations that can be performed on signed chars include:

Left shift

Given a signed char x, return a new signed char y, where y_0 = 0, y_1 = x_0, y_2 = x_1, ..., y_7 = x_6. (This is written as y = x<<1.)

Right shift with sign extension

Given a signed char x, return a new signed char y, where y_7 = x_7, y_6 = x_7, y_5 = x_6, ..., y_0 = x_1. (This is written as y = x >> 1.)

XOR

Given two signed chars x and y, return a new signed char z, where z_i = x_i + y_i (mod 2). (This is written as z = x^y.)

Note: By convention, the bits in a character are numbered from 7 (most significant) to 0 (least significant), e.g. x = x_7x_6x_5x_4x_3x_2x_1x_0. This is backwards from the way we usually write vectors, and also ends at 0. For the purposes of this problem, let us meet this convention half-way: imagine that vectors and matrices are written in the usual order (indices go left-to-right and top-to-bottom), but that we start counting at 0.

  1. Suppose we represent x as a column vector. Show that there exist matrices L and R with entries in ℤ_2 such that Lx = x << 1 and Rx = x >> 1.

  2. Consider the following rules for building xorshift formulas in some variable x:

    • x is a formula.

    • If f is a formula, so is (f<<1) and (f>>1).

    • If f and g are formulas, so is (f^g).

    Show that for any formula f in this class, there exists a matrix A over ℤ_2 such that the result of evaluating f for a given x is the same as the result of computing Ax. (Hint: Use induction on the length of f.)

3.1. Solution

  1. Let L be the matrix with L_ij = 1 if j = i-1 and L_ij = 0 otherwise. Then (Lx)_i = ∑_j L_ij x_j = x_{i-1} (when i > 0), and (Lx)_0 = 0. Let R be the matrix with R_ij = 1 if j = i+1 or j = i = 7, and R_ij = 0 otherwise. Then (Rx)_7 = ∑_j R_7j x_j = x_7, and for i < 7, (Rx)_i = ∑_j R_ij x_j = x_{i+1}.

  2. Define the length l(f) of a formula f as the number of symbols needed to write it: 1 for x, 5+l(f) for (f<<1) and (f>>1), and 3+l(f)+l(g) for (f^g) (the exact definition doesn't matter, as long as the length of a formula is bigger than the length of its subformulas). We now proceed by induction on l(f). If l(f) = 1, then f = x, and f(x) = Ix. For a longer formula, if f is (g<<1), then by the induction hypothesis there exists some matrix B such that g(x) = Bx, and f(x) = LBx, where L is as defined above. Similarly, we can express (g>>1) as RBx. When f = (g^h), let g(x) = Ax and h(x) = Bx. Then f(x) = Ax + Bx = (A+B)x. In each case, we have expressed f as matrix multiplication, as claimed.

Another way to show part 2 is to show that f is linear, i.e. that f(ax) = af(x) (easy, since one need only consider the two cases a = 0 and a = 1) and that f(x+y) = f(x)+f(y) (harder, and essentially corresponds to the argument above). If f is linear, then there exists some matrix A such that f(x) = Ax, since we can obtain the columns of A by feeding basis vectors to f.


2014-06-17 11:57