The signal is binary: 2^k entries, each -1 or +1, with an equal number of each. The transform is the tensor product of k copies of the matrix

  | 1  1 |
  | 1 -1 |

When I sample the output, I get a distribution of row indices that matches the energy distribution (amplitude squared) of the transformed signal. Any row of the transform has to match (or anti-match; it doesn't matter, because it's the square of the amplitude) the signal on at least half of its entries, and the rows I get from sampling will do better than that. Each sample then makes a prediction about the sign of the signal at a given time.

Suppose I have s samples. Taken together, I can call the prediction about a given point "reliable" if (significantly?) more than sqrt(s) [the average distance from the origin in a random walk -- or should I use the median?] of them predict the same sign.

Where should I look to find out how to calculate the number of samples needed to get the "reliability" of the predictions above a certain threshold, and how to calculate the percentage of the predictions that are "reliable"? (If I use the median, it'll always be half; if I use some other function, it'll probably decrease with the number of samples.)

--
Mike Stay
Cryptographer / Programmer
AccessData Corp.
mailto:[EMAIL PROTECTED]
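For concreteness, here is a minimal sketch of the setup as I understand it (not from the original post; the parameter choices k = 8 and s = 200 and all variable names are my own, and the sqrt(s) cutoff is applied exactly as described rather than with any "significantly more" correction): build the k-fold tensor-product (Walsh-Hadamard) transform, sample row indices from the squared-amplitude distribution, let each sampled row vote on the sign of the signal at each position, and call a position's prediction "reliable" when the net vote exceeds sqrt(s).

```python
import numpy as np

rng = np.random.default_rng(0)

k = 8
n = 2 ** k

# Binary signal: equal numbers of +1 and -1, in random order.
x = np.array([1] * (n // 2) + [-1] * (n // 2))
rng.shuffle(x)

# Walsh-Hadamard transform: k-fold tensor product of [[1, 1], [1, -1]].
H2 = np.array([[1, 1], [1, -1]])
H = np.array([[1]])
for _ in range(k):
    H = np.kron(H, H2)

y = H @ x                        # transform coefficients
p = y.astype(float) ** 2
p /= p.sum()                     # energy distribution over row indices

s = 200                          # number of samples
rows = rng.choice(n, size=s, p=p)

# Row r agrees with the signal (up to a global sign) on (n + |y[r]|)/2
# positions, so sign(y[r]) * H[r, t] is its prediction for x[t].
votes = np.sign(y[rows])[:, None] * H[rows, :]
tally = votes.sum(axis=0)        # net vote at each time t

threshold = np.sqrt(s)           # random-walk scale for "reliable"
reliable = np.abs(tally) > threshold
correct = np.sign(tally) == x

print("fraction reliable:", reliable.mean())
print("accuracy on reliable points:",
      correct[reliable].mean() if reliable.any() else float("nan"))
```

One small check worth noting: for a sampled row r, the sum over t of sign(y[r]) * H[r, t] * x[t] equals |y[r]|, which is why each individual sample's prediction is right on strictly more than half the positions whenever y[r] is nonzero.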
