Great points, Brian.

>Sadly the statistical inferences that can be drawn indicate no
>evidence of any deviation from a theoretical "smooth" exponential
>decay curve. There is a message in the archive on this very point
>(search for "Noll island")

The important word here is "statistical" - as human beings, even "trained"
ones, we are pretty dismal at recognizing what is random and what isn't. A
sample of 37 exponents is statistically too small to deduce very much at
all; it may be enough to suggest that exponential decay is a reasonable
hypothesis, but not enough to deduce anything about deviations from it.
(Hence my caveat about the number of samples!)
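
As a toy illustration of how little power n = 37 carries (the 50/50 mixture
and the rates 0.5 and 2.0 are invented purely for the demo), one can feed a
deliberately "lumpy" exponential to a 5%-level test and watch how rarely it
objects:

    # How often does a 5% Kolmogorov-Smirnov test spot a real deviation
    # from exponential decay when only n = 37 samples are available?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, trials, rejections = 37, 2000, 0
    for _ in range(trials):
        # Genuine deviation: a 50/50 mixture of two different decay rates.
        lam = rng.choice([0.5, 2.0], size=n)
        sample = rng.exponential(1.0 / lam)
        # Test against a plain exponential with the sample's own mean.
        # (Fitting the scale from the data makes the test conservative -
        # good enough for a toy.)
        d, p = stats.kstest(sample, 'expon', args=(0, sample.mean()))
        rejections += (p < 0.05)
    print(f"rejection rate at n = {n}: {rejections / trials:.2f}")

Don't take the exact rate seriously; the point is only that modest
deviations can hide comfortably in 37 samples.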

>The point is that random events *do* tend to occur in clusters

Behavioral studies on human recognition of randomness are a lot simpler to
come by these days - just analyse lottery number-picking strategies...
Humans have a dire aversion, for instance, to picking numbers that are
sequential, even those who are "aware" that such a sequence is just as
likely as any other. Similarly, humans avoid any pick that contains even a
pair of consecutive numbers, which a bit of combinatorics proves is not
that rare at all.
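
Taking the common 6-from-49 format as an example (other lotteries differ
only in the numbers): there are C(44,6) ways to choose six numbers with no
two consecutive, out of C(49,6) tickets overall, so a consecutive pair
shows up in roughly half of all random draws:

    # Odds that a random 6-from-49 ticket contains a consecutive pair.
    # Choosing k pairwise non-consecutive numbers from 1..n can be done
    # in C(n-k+1, k) ways, hence C(44, 6) "pair-free" tickets.
    from math import comb

    p_no_pair = comb(44, 6) / comb(49, 6)                      # ~0.505
    print(f"P(some consecutive pair) = {1 - p_no_pair:.3f}")   # ~0.495
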
"time between" distribution. It's possible to have very large gaps (with
corresponding low probability) but a small gap is typical. In fact, two
consecutive short gaps is just as likely as a single, very long gap. Hence
some very human observations that "trouble/celebrity deaths/bus arrivals"
often occur "in threes" actually have probabilistic foundation!
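
A thirty-second simulation makes the clustering vivid (the unit rate and
the half-unit window are arbitrary choices of mine):

    # Simulate a rate-1 Poisson process and look for clusters of three.
    import numpy as np

    rng = np.random.default_rng(0)
    gaps = rng.exponential(1.0, size=100_000)  # memoryless inter-event times
    times = np.cumsum(gaps)

    # Memorylessness: two gaps each > 1 are as likely as one gap > 2.
    print(np.mean(gaps > 1.0) ** 2, np.mean(gaps > 2.0))  # both ~ e^-2

    # How often do three consecutive events squeeze into half a time unit,
    # when the *average* spacing is a full unit? Exactly
    # 1 - 1.5*exp(-0.5) ~ 0.09 - about one event in eleven starts a "three".
    span3 = times[2:] - times[:-2]
    print(np.mean(span3 < 0.5))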

>Similarly I can find no statistically convincing evidence, even at the
>5% level, that the "Noll islands" really do exist.

I wonder how many more "confirming instances" we'd have to find before we
could get even a result as statistically weak as 5%? My statistical
background is pretty awful, but I know that these sorts of analyses go as
order sqrt(n)... maybe we'll be having the same discussion in a few years'
time, after M(148) is discovered and the sample size is four times
larger... of course, M(148) will probably have 10^24 digits...
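
The sqrt(n) point is easy to see numerically (unit decay rate and the trial
count are arbitrary): quadrupling the sample from 37 to 148 only halves the
spread of the estimated rate:

    # The max-likelihood decay-rate estimate (1 / sample mean) from n
    # exponential observations has spread ~ 1/sqrt(n): quadruple the
    # data, halve the error bar.
    import numpy as np

    rng = np.random.default_rng(2)
    for n in (37, 148):
        rates = 1.0 / rng.exponential(1.0, size=(20_000, n)).mean(axis=1)
        print(f"n = {n:3d}: std of estimated rate = {rates.std():.3f}")
    # Prints roughly 0.17 and 0.08, matching 1/sqrt(37) and 1/sqrt(148).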

>(The rest of this reply is off-topic. Stop reading now if you object)
[8-bit hardware RNG]


Jean-Charles Meyrignac kindly informed me off-list that the machine may have
been the Commodore 64, which apparently used such a setup for its "white
noise" sound channel. As Brian points out, *provided* it's done properly
(correct statistical normalization of the output of each individual RNG),
such a technique is *far* superior to pseudo-random routines. It's only a
little off-topic; after all, one of the Pollard methods (Pollard rho) uses
a "traditional" pseudo-random number generator (x -> x^2 + c mod N) and
*expects* the output to eventually correlate, indicating a factor. When
probabilistic methods are in use, remember Knuth's caveat: a good algorithm
can be killed stone-dead by a poor random-number generator - and, worst of
all, a good random-number generator may still prove poor in a particular
application. That's worth remembering if you're choosing a parameter for a
factoring algorithm...
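
For the curious, a bare-bones sketch of that idea (Floyd's tortoise-and-hare
cycle finding, with the textbook choices c = 1 and x0 = 2; a serious
factoring code would batch the gcds and retry with a new c on failure):

    # Pollard rho: iterate x -> x^2 + c (mod N). Modulo a hidden factor p
    # the sequence must eventually cycle, and gcd(x - y, N) exposes p once
    # the tortoise and hare meet inside that cycle.
    from math import gcd

    def pollard_rho(N, c=1, x0=2):
        x = y = x0
        d = 1
        while d == 1:
            x = (x * x + c) % N          # tortoise: one step
            y = (y * y + c) % N          # hare: two steps
            y = (y * y + c) % N
            d = gcd(abs(x - y), N)
        return d if d < N else None      # d == N means this c failed

    print(pollard_rho(8051))             # 8051 = 83 * 97; finds 97

And Knuth's caveat bites even here: c = 0 and c = -2 are famously poor
choices for the iteration, precisely because x^2 and x^2 - 2 are too
structured to behave randomly.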

Chris Nash
Lexington KY
UNITED STATES

