Re: Mersenne: Factoring

2000-11-30 Thread Chris Nash

Hi folks,

Off-topic I know, but...

We could let prime95 decide the next election (grin).
Give everybody a different prime number. Multiply the primes for 
candidate A together, likewise for B.

And like in this election, unless we can completely factor the results, 
we wouldn't know who won, either.

:)
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: A new series of Mersenne-like Gaussian primes

2000-09-10 Thread Chris Nash

Hi Mike and Yann

 I don't have any Linux software. For searching for new primes of this
 series, the best program to use at present is PrimeForm/GW by Chris Nash.
 You could ask him ([EMAIL PROTECTED]) whether he has anything for Linux.

Work is currently being done on a direct recompilation of PrimeForm/GW 
for Linux. Since PFGW is a command-line program, this should not be too 
difficult a task - in fact, PFGW is actually developed on BeOS using 
the GNU development tools and the Windows version is already a port.

Peter Kosinar is currently looking at the rebuild, and in fact managed
to get the BeOS objects to link first time on Linux. There were a few
system-specific problems (for instance, none of the timing functions
worked), and Peter has made a few changes; however, the resulting
program still failed. It seems this might be because of differences
between the versions of gcc used: the BeOS version is 2.9, and I've yet
to get 2.95 to build successfully under BeOS. Peter now has all the
source and is doing a full rebuild. I don't foresee any major problems,
so a Linux version may be available very soon indeed.

Best Wishes, and thanks for the recommendation!

Chris
_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.exu.ilstu.edu/mersenne/faq-mers.txt



Re: Mersenne: pi, limits, and other things OT

2000-02-09 Thread Chris Nash

Hi folks

   I think my favorite counterexample to arguments like this is Gabriel's
 Horn.  Take the function 1/x, and revolve it around the x-axis.  You now
 have something that looks very similar to a trumpet's bell.  Now, find the
 volume of this from 0 to infinity.  It has a finite volume.  However, it
 has an infinite surface area.
   If someone happens to remember the exact way the integrals are written,
 that'd be a big help.  I'm going to try and find my old Calc text now, I'm
 sure it's in there somewhere.

If y=f(x), the volume of revolution is given by

(integral from 1 to infinity) pi.y^2 dx

Note the integral starts at 1, not zero (otherwise the volume is undefined)
for the Horn. The volume is in fact pi.

The surface of revolution is given by

(integral from 1 to infinity) 2.pi.y.sqrt(1+y'^2) dx

where y'=dy/dx= -1/x^2 in the case of the horn.

The integrand is 2.pi.sqrt(x^4+1)/x^3. If you recognise this, good for you
(apply a change of variable t=x^2 and you will get pi.sqrt(t^2+1)/t^2 under
the integral, which you might recognize. If you're still stuck, think about
arcsinh t).

Recognize it or not, it really doesn't matter. Note the integrand is
actually a little greater than

2.pi.y dx

which is the usual mistake first made with surfaces of revolution
(approximating the surface by 'delta-x height cylinders' instead of 'delta-x
height slices of cones'). However, this function is a lot easier to recognize
as the derivative of 2.pi.ln x, and so the integral as we approach infinity
is indeed unbounded.
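
If you'd rather see numbers than antiderivatives, here is a quick Python
sketch (my own illustration, names and step counts arbitrary) that integrates
both quantities out to a finite cutoff; the volume settles towards pi while
the surface area keeps growing like 2.pi.ln(cutoff):

import math

def horn_numbers(x_max, steps=200000):
    """Midpoint-rule integrals for Gabriel's Horn, y = 1/x on [1, x_max]."""
    h = (x_max - 1.0) / steps
    volume = 0.0
    surface = 0.0
    for i in range(steps):
        x = 1.0 + (i + 0.5) * h              # midpoint of the i-th slice
        y = 1.0 / x
        dy = -1.0 / (x * x)                  # y' = -1/x^2
        volume += math.pi * y * y * h                            # pi.y^2 dx
        surface += 2.0 * math.pi * y * math.sqrt(1.0 + dy * dy) * h
    return volume, surface

for cutoff in (10.0, 100.0, 1000.0):
    v, s = horn_numbers(cutoff)
    print(cutoff, round(v, 4), round(s, 4), round(2 * math.pi * math.log(cutoff), 4))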

 I have a little trouble conceptualizing what would happen if you fill
this
 horn with paint.  If you completely fill this horn with paint (a finite
 volume), the inner surface of the horn should be completely covered with
 paint, right?  But the inner surface of the horn has infinite area, so
 wouldn't it take an infinite amount of paint to paint it?  Where is my
 intuition going wrong?

It's a bit like the old gag "how many lawyers does it take to wallpaper a
room?" ("Depends how thinly you slice them"). A given amount of paint or
lawyers can cover an arbitrarily large surface provided you spread it thinly
enough. Not possible in the real world of course (not to mention the Horn's
neck is ultimately too narrow to squeeze a paint molecule down), but
mathematicians aren't limited by such physical constraints.

Single-celled organisms have known for eons that the best way to improve
their rate of nutrition is to stretch their volume into the largest possible
surface area. Fortunately physics intervenes and an infinitesimally thin
organism of infinite length but finite volume isn't a biological
possibility.

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #672

1999-12-20 Thread Chris Nash

   That's certainly the 'obvious' way of trying to construct such starting
 values.  Can anyone think of any other ways to come up with them?  It
seems
 'likely' that if there is no finite covering set of divisors there are an
 infinite number of primes... but my intuition is more likely a negative
 endorsement than a positive one ;-)

This is indeed a great unknown, but a familiar situation. Our heuristics and
our instincts tell us to expect an infinite number of Mersenne (and a finite
number of Fermat) primes. Neither statement is proven... and perhaps never
will be.

The important thing about constructing such a provable sequence of
composites is it shows that 'expecting an infinity' doesn't necessarily
'guarantee any'. The constructions may seem a little 'artificial', but
they're valid 'islands' of counterexamples in a sea of subjects where
heuristics is currently the best we have.

As for the finiteness of the covering set, again it's currently the limit of
our knowledge. Which is more likely, the existence of an infinite covering
set, or of a single prime?

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Fibonacci Series

1999-12-18 Thread Chris Nash

Hi folks

U(1) = 1786772701928802632268715130455793
U(2) = 1059683225053915111058165141686995
U(N+2) = U(N+1) + U(N)
I checked a few thousand terms, and they were all composite.

There is almost certainly a 'covering set' of divisors. In essence you need
to find a set of primes P and a modulus M, then prove that U(N) has a factor
in P specified by the value of N mod M.
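
If anyone fancies hunting for it, here is a rough Python sketch of the
bookkeeping involved (the prime set and modulus below are placeholders of my
own, *not* the actual covering set for this pair):

def check_cover(u1, u2, primes, M, terms=600):
    """Empirically check that every U(n), n = 1..terms, has a factor in
    `primes`, and record which residues n mod M each prime covers."""
    cover = {q: set() for q in primes}
    a, b = u1, u2
    for n in range(1, terms + 1):
        hit = [q for q in primes if a % q == 0]
        if not hit:
            print("term", n, "has no factor in the candidate set")
            return None
        for q in hit:
            cover[q].add(n % M)
        a, b = b, a + b                      # U(N+2) = U(N+1) + U(N)
    return cover

# Placeholder call -- the real covering set still has to be found.
U1 = 1786772701928802632268715130455793
U2 = 1059683225053915111058165141686995
print(check_cover(U1, U2, primes=[3, 7, 23, 43], M=12))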

Chris Nash
Lexington KY
UNITED STATES



_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: graphical interface for gimps

1999-09-29 Thread Chris Nash

 Instead of a boring status bar, how about a graphic of a
 caterpillar gnawing away on a leaf
 Neat Idea !!

I love the thinking behind this... after all, it seems fashionable these
days to offer downloadable 'skins' for your mp3 player or whatever - the
world is obsessed with multimedia :) Who knows, maybe someone out there
will write a Prime95 viewer that sits there watching George's window for
text output and shows it as a cute animation in the 'skin' of your choice...
in fact, with all the talk about the gui on here, I'm surprised someone
hasn't tried that already :)

Oh, it would have to take zero processor time, of course :)

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Graphical visualisation of computations

1999-09-25 Thread Chris Nash

 I think few of us fully realize the _enormous_ amount of (highly
 optimized) processing required each iteration to achieve something that
 looks so simple: a squaring, minus 2, mod (2^p-1).
 Whatever the correct figure, the program doesn't show you what it's doing.

This is an interesting point - not so much whether a gui would be nice or
not, but, what would it show? A progress bar, maybe... anything else? There
isn't really anything else to show. Intermediate results of the LL test
don't themselves have a lot of meaning (even the final result, if non-zero,
is devoid of much interpretation). There's not a lot you could plot - a
graph of the iteration time would only serve to show when you opened
Microsoft Word or something...

I must admit, I dallied (albeit very briefly) with the bovine rc5 client.
Were it not for their peculiar statistics ('if keys were drops of water, we
could flood the world in a week' etc) I'd have got bored of the effort a lot
sooner than I did. The SETI client looks pretty (more of a screen burner
than a screen saver though) but the display is at best meaningless and at
worst a waste of cycles.

I think we ought to count ourselves lucky that what we've got is bare-bones
and low-impact.

Chris Nash
Lexington KY
UNITED STATES



_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M(M(127)) and other M(M(p))

1999-09-21 Thread Chris Nash

Hi folks

 Wait, that might just be the reason to search!  Will only searched up to
 k=4 for M(M(6972593)), but if 2*k*M(p)+1 divides M(M(p)), then you've just
 beaten the world record!  Non-Mersenne's might once again grace the top
 10 list!

I really hope that neither Will Edgington (with M(M(6972593))) nor Chip
Kerchner (with M(M(1398269))) dedicated any computer time whatsoever to
search for factors 2*k*M(p)+1 up to k=4.

As Will's page

http://www.garlic.com/~wedgingt/MMPstats.txt

points out, since M(p)=1 mod 3, k cannot be 1 mod 3. Also, since M(p)=-1 mod
8 for odd p >= 3, k must be 0 or 1 mod 4 (otherwise 2 is not a quadratic
residue of this supposed factor, the 8x+-1 condition).

It follows then that the first possible factor of M(M(p)) has k=5.
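
A quick sanity check of that claim, in Python (just my own illustration):

# Candidate factors of M(M(p)) are q = 2*k*M(p) + 1.  The argument above
# rules out k = 1 (mod 3) (else 3 divides q) and requires k = 0 or 1 (mod 4)
# (else 2 is not a quadratic residue of q).  List the small k that survive:
allowed = [k for k in range(1, 30) if k % 3 != 1 and k % 4 in (0, 1)]
print(allowed)    # [5, 8, 9, 12, 17, 20, 21, 24, 29] -- the first is k = 5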

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Re: Factor of 2^(2^31-1)-1 found ($)

1999-09-19 Thread Chris Nash

Hi Lucas

 Yesterday (and the day before), I went to the Illinois number theory
conference.
 There (2nd talk of yesterday) J. P. Selfridge announced that he would
 give away $1000 US for any factor found of a number which ought to be
 prime (he provided a list).  On that list was 2^(2^31-1)-1.
 To guard against errors in transmission, the factor is 295257526626031

p=295257526626031
I took 2, and squared it 31 times mod p. And got the result

2^(2^31)=2 mod p

Congratulations Lucas, it is indeed a factor of 2^(2^31-1)-1 - well done!
Had it already been shown that M(M(p)) is not necessarily prime?
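
For anyone who wants to repeat the check, the 31 squarings are a short loop
in Python (my sketch; any bignum package will do just as well):

p = 295257526626031
x = 2
for _ in range(31):       # compute 2^(2^31) mod p by repeated squaring
    x = x * x % p
print(x)                  # prints 2, so 2^(2^31 - 1) = 1 (mod p),
                          # i.e. p divides 2^(2^31 - 1) - 1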

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: M(M(127)) and other M(M(p))

1999-09-19 Thread Chris Nash

Hi folks,

After Lucas Wiman's (re)discovery of the factor of M(M(31)), I made some
comment about M(M(p)), something which of course has been long known to not
always be a prime whenever M(p) is. (M(M(13)) is the first counterexample
and even has a factor found by Keller).

Of course, the sequence that still remains unknown is

2
M(2)=3
M(3)=7
M(7)=127
M(127)=170141183460469231731687303715884105727
M(170141183460469231731687303715884105727)=???

the first five of which are prime and the nature of the last still unknown
(hardly surprising!). I noticed Lucas' search found the factor of
M(M(31)) reasonably quickly, a factor which isn't that large a multiple of
M(31) itself.

I checked Chris Caldwell's pages on this, and Curt Noll has trial-factored
M(M(127)) to 5.10^50 - surprisingly low considering the size of M(127)
itself. I noticed many other M(M(p)) listed in
http://www.garlic.com/~wedgingt/MMPstats.txt have only been tested to very
low limits indeed.

I wondered why there wasn't more work done on these. I understand
it's very hard to motivate people when Guy's law of small numbers no doubt
applies, but everything M(M(61)) and above is currently unknown. It would be
nice to see a few more results there.

Chris


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M(M(127)) and other M(M(p))

1999-09-19 Thread Chris Nash

 even bigger prime.  If for example, 6*M(p)+1 divides M(M(p)), then it must
 be prime!

Before anybody gets overexcited at the last posting...

It is TRUE that if 2k.M(p)+1 divides M(M(p)), M(p) is prime, and k < 2M(p)+2,
then 2k.M(p)+1 is prime.

However, unless I'm mistaken, non-divisibility does not prove compositeness.
You could walk past a prime (in fact, you'd expect to walk past several) and
you'd never know...

Chris


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M(M(127)) and other M(M(p))

1999-09-19 Thread Chris Nash

Hi there Lucas...

 The reason is relativly clear: the work of checking *even one* factor of
 M(M(p)) is greater than the work required for an LL test on that number.
 This is because of the need for p squarings modulo some number greater
 than M(p).

Yes, however there is a rather curious combination of effects going on when
testing these numbers. To test if x is a factor of M(M(p)) requires p
squarings mod x, but p is a little less than log2(x). Admittedly, we're only
testing for factors, but there's a curious side effect of the test...

 prime, unless we find a factor.  Interestingly enough, when we find the
next
 Mersenne prime, searching for a factor of M(M(p)) might allow us to find
an
 even bigger prime.  If for example, 6*M(p)+1 divides M(M(p)), then it must
 be prime!

Oh, you can do much better than that... Let q=M(p), a prime. Now any factor
of M(q) is of the form 2kq+1. Provided 2kq+1 < (2q+1)(2q+1), ie k < 2q+2, if
2kq+1 divides M(q), then 2kq+1 is prime. In theory then this sort of factor
test can prove the primality of a number up to TWICE THE SQUARE of M(p), yet
the proof still requires only p squaring operations (admittedly, to a
slightly less pleasant modulus, but readily optimizable). Of course, the
downside is, one has no idea how far you'd need to search (or even if such a
number exists; M(M(p)) could be prime and you'd never know it) - the upside
is, you might get lucky very quickly...
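
As a concrete (if hopelessly optimistic) sketch of such a search, in Python -
the function name and bounds are mine, and a serious attempt would want far
better sieving:

def search_mm_factor(p, k_max):
    """Look for a factor 2*k*q + 1 of M(q), where q = 2^p - 1 is prime and
    p is an odd prime.  Any hit with k < 2*q + 2 is automatically prime."""
    q = (1 << p) - 1
    for k in range(1, k_max + 1):
        if k % 3 == 1 or k % 4 not in (0, 1):
            continue                         # 3 | candidate, or 2 not a QR of it
        cand = 2 * k * q + 1
        if pow(2, q, cand) == 1:             # cand divides M(q) = 2^q - 1
            return k, cand
    return None

print(search_mm_factor(13, 10**5))   # quick scan; M(M(13))'s known factor lies deeper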

 Wait, that might just be the reason to search!  Will only searched up to
 k=4 for M(M(6972593)), but if 2*k*M(p)+1 divides M(M(p)), then you've just
 beaten the world record!  Non-Mersenne's might once again grace the top
 10 list!

Steady on, there are a few non-Mersennes hanging on in there :) But this
form is very reminiscent of the Miller/Wheeler record (the first one of the
electronic age), 180.M(127)^2+1. I for one wouldn't object to dedicating a
few cycles here and there to find a factor of M(M(1257787)) and who knows,
find the largest known non-GIMPS prime...

Chris



_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1

1999-09-18 Thread Chris Nash

Hi folks


 Thanks for the "p-1 for dummies" e-mail.
 But as everyone on this list knows, any factor of a Mersenne
 number looks like 2*k*p+1 for N=2^p-1.  Plugging this into
 the above equation gives
 q=2*k*p+1
 q-1=2*k*p

 Doesn't this mean the lower bound on a "p-1" method search should
 be greater that the Mersenne exponent (the p in 2^p-1) to have the best
 chance of success?  Then the "upper bound" of the "p-1" search can
 be reserved for cracking a big factor in the "k" of a Mersenne factor.

This is a great point, one that I stopped a little way short of. Indeed the
'p-1' method constructs a large product of small primes that you hope
will be a multiple of q-1. Since we know q-1 is a multiple of 2p, it makes
sense for us to start the p-1 method not at some base a, but at a^(2p). And
as you point out, the upper bound then is in fact an upper bound on the
factors of k. Provided we search as far as the factors of the unknown k, we
will succeed.

Starting at a^(2p) is a relatively small calculation, and saves us having to
wait until the P-1 method includes p in the exponentiation (it's highly
unlikely we would discover a factor before then). Of course, it's possible
that even so, we may not find a factor, but it makes sense to include as much
as we know about the factors at the very start of the method. Alternatively,
we could specify p as a lower bound - something which is probably a good idea
anyway; the cost of a P-1 factoring attempt is then comparable to a primality
test.

Going to a P-1 limit of 'p' certainly covers all factors with k <= p, and many
others besides. The factors that aren't covered are those with a prime or
prime-power factor of k above the P-1 limit. That can help you make a good
guess as to how efficient your factoring attempt has been. Of course, do not
forget it is also possible to save the result of a P-1 attempt, return
later, and "exponentiate some more" to increase the upper bound.

Chris


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1

1999-09-17 Thread Chris Nash
e the bounds *A LOT* before your chances of success
improve significantly - ultimately they are related very closely, because
success depends on the factors of the unknown magic number x.

Hope this helps

Chris Nash
Lexington KY
UNITED STATES


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Telling people to FAQ Off.

1999-07-30 Thread Chris Nash
e a great idea, provided of course the know-it-alls could stand
it. Yes, I'm backing Paul up here. I'm annoyed at the way things periodically
go on here. I'm annoyed that, once in a while, someone comes out and shoots
down another for either 1) having a slower computer, 2) not having 20 years
of mathematical education, or 3) being interested enough to ask a sensible
question.

Here is a fictitious posting. If you recognize it, great. If anyone wants to
answer it, wonderful - I will find a suitable prize for the first person who
considers this "a waste of time"...

Dear Sirs,

I would like to tell you about a remarkable primality proof I have been
working on. There is little space here for me to include all the details,
but my proof does not require mechanical devices, merely a reasonable amount
of human labor. With the aid of several hundred grains of rice laid out on
the squares of chessboards, I am capable of calculations involving large
numbers. By this method, I plan to find the largest prime number of our
time.

Those who recognize it, please, just smile about it.

Chris Nash
Lexington KY
UNITED STATES
==
Please Note: my e-mail address is now [EMAIL PROTECTED]


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Proth's Test (was: Re: Mersenne: Evil, evil prize thread)

1999-07-25 Thread Chris Nash

 Proth's Test for n = k*2^m+1 says that there exists a such that
 a^((n-1)/2) + 1 is divisible by n.
 The other factor in evaluating this is that, in Proth's Test, a has
 got to be an odd prime - if you pick the wrong prime, you're wasting
 time, though fortunately this can usually be detected very early.

That bit is virtually free of charge. Any quadratic non-residue will do just
fine. If a^((n-1)/2) isn't -1, then the number isn't prime (by Euler's
quadratic residue criterion). The algorithm does take longer, sure, but it's
the targetability that makes the difference. If the first prime with 10^7+
digits turns out to have 20 million digits, each test will take longer than
testing a 10-million-digit Proth candidate.

I'm playing devil's advocate here. If prize money is anybody's motivation,
we'd all be better off selling our PC's and buying lottery tickets, or
hitting the stock market.

Chris




_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Proth's Test (was: Re: Mersenne: Evil, evil prize thread)

1999-07-25 Thread Chris Nash


 That bit is virtually free of charge. Any quadratic non-residue will do
just
 fine.
 But you don't easily know if a number is a QNR, do you?

Suppose the number you're testing is N. If we assume N is prime, then
quadratic reciprocity can be used to determine whether your base a is a
QNR. So pick such a base a and do your test: a result of -1 confirms the QNR
and hence proves primality (Proth's theorem basically states that the Euler
criterion for a QNR is necessary *and* sufficient to prove primality). If you
don't get what you expect from the quadratic residue symbol, then N is
composite. (Look up 'Kronecker symbol' - basically, an excuse to use
quadratic reciprocity whether or not you know N is prime.)
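
In code the whole strategy is only a few lines; here's a Python sketch of it,
with a naive base hunt standing in for a proper Kronecker-symbol computation:

def proth_test(k, m, bases=(3, 5, 7, 11, 13, 17)):
    """Proth test for N = k*2^m + 1 with k odd and k < 2^m.
    Returns 'prime', 'composite', or 'unknown' (no QNR found in `bases`)."""
    N = k * (1 << m) + 1
    for a in bases:
        r = pow(a, (N - 1) // 2, N)
        if r == N - 1:       # a^((N-1)/2) = -1: a is a QNR and N is prime
            return "prime"
        if r != 1:           # neither +1 nor -1: Euler's criterion fails
            return "composite"
        # r == 1: a happened to be a QR (or N is a liar); try another base
    return "unknown"

print(proth_test(3, 5))      # 3*2^5 + 1 = 97 is prime
print(proth_test(7, 5))      # 7*2^5 + 1 = 225 = 15^2 is composite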

The LL test implicitly does the same for Mersenne tests - they only make
sense if 3 is a QNR. The start value S_0=4 is really
(2+sqrt(3))+(2-sqrt(3))... square that, and you'll see where the -2 comes
from :)

S_n=(2+sqrt(3))^(2^n) + (2-sqrt(3))^(2^n)

If the test proves compositeness, it doesn't matter whether 3 was *really* a
QNR or not - you've proved what you wanted either way. The final bit of glue
is that all Mersennes of odd exponent > 1 are equal to 7 mod 12, and so 3 is
indeed a QNR for the prime ones.

Chris


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Development of Fermat's Method (was: RE: Mersenne: Hyperbola)

1999-07-14 Thread Chris Nash

Hi folks

 Suppose we take a number which is proving troublesome to factorize,
 e.g. 2^727-1. Factors of this number are going to be of the form
 1454k+1.
 Suppose we multiply 2^727-1 by 100!. The reason for doing this is
 that this just about guarantees that there is some combination of
 factors a, b such that (a-b) < sqrt(a)
 Now I'm not saying that the effort involved is trivial - I'm sure a
 run length of some hours, days or weeks would be necessary. Also,
 I'd be very surprised indeed if I'm the first to suggest a trick like
 this, but, is it worth while starting to code this method?

I can give a reasonably informed answer to this, having tried it myself. To
use Fermat's method directly on kN in order to factor N, unless you are very
lucky, is no improvement. In the 2^727-1 case, as pointed out, since you
know the form of the factors, it's sufficient to try a=727x+1, for all a
above sqrt(N), to check a^2-N is a perfect square. To preserve this, make
the factor you multiply by a product of factors of the form 1454k+1 - all
the factors of the product are still of this form.

Do we win, or do we lose? Let's suppose that our composite has precisely two
prime factors, and thus direct application of Fermat will uncover only one
non-trivial solution from which we can derive the factor values. If what we
multiply by has k distinct factors, then there are now 2^k useful
solutions - but the number of values of a that we need to try has increased
by more than that factor. In all likelihood, it would take longer...

But don't despair. Since all we have to check for is "a^2-N is a perfect
square", it's easy just to inspect this value modulo many small primes and
sieve out quadratic non-residues. With a few lookup tables, it's easy to get
through 10^9, even 10^12, values of a per CPU-second. Even so, for a
219-digit number like 2^727-1, and even assuming there's no factor below 40
or so digits, you'd still need 10^60 or more CPU-seconds to spare. (OK if
you've got a universe with nothing else to do).
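
Here's a toy Python sketch of that sieved Fermat search (lookup tables shrunk
to a couple of lines, names mine; fine for small numbers, hopeless for
2^727-1, as the estimate above shows):

from math import isqrt

def fermat_sieved(N, step=1, residue=0, limit=10**7):
    """Fermat's method: find a with a^2 - N a perfect square, so that
    N = (a-b)(a+b).  Small-modulus square tables reject most a cheaply;
    step/residue restrict a to a known arithmetic progression."""
    mods = [64, 63, 65, 11]
    squares = [{(i * i) % m for i in range(m)} for m in mods]
    a = isqrt(N) + 1
    a += (residue - a) % step if step > 1 else 0
    for _ in range(limit):
        t = a * a - N
        if all(t % m in s for m, s in zip(mods, squares)):     # survives the sieve
            b = isqrt(t)
            if b * b == t:
                return a - b, a + b
        a += step
    return None

# M(23) = 47 * 178481; both factors are 1 mod 46, so a = (p+q)/2 is 1 mod 23.
print(fermat_sieved(2**23 - 1, step=23, residue=1))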

Unsurprisingly, the method needs a bit of control to be realizable - which
is what the continued fraction method achieves. Each iteration finds a
(relatively) small square modulo N (at worst sqrt(N) in absolute value),
which, after striking out square factors, making sensible choices of which
to store, and judicious combinations, you hope to eventually find
non-trivial a^2=b^2 mod N. Still need a lot of luck, though, and the steps
are now non-trivial...

Chris Nash
Lexington KY
UNITED STATES
==
Still a co-discoverer of the largest known *non*-Mersenne prime



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Benford's law (was exp. representations)

1999-07-13 Thread Chris Nash

Steven Whitaker wrote:

 Maybe it's my imagination, but it seems to me that the factors of the
 prime exponent Mersenne numbers start with a 1 more often than a 2 or
 3 etc. Are they obeying Benford's law too?
 For instance, for the 10 primes from 5003 to 5081, there are 20 known
 factors. 10 of them start with a 1.

Something similar. The Benford's law distribution works because we 'expect'
natural, boundless data to have the decimal part of the logarithm "fairly
uniformly" distributed, and a quick look at a slide rule (younger readers,
ask your Dad!) shows that 30.103% of its length carries an initial digit of 1.

By Mertens' theorem, the probability *any* large number N has no factor
smaller than X is C/log X, where C is exp(-gamma) if I remember rightly.
(Strictly, X has to be much bigger than 1, and much smaller than sqrt(N), for
this to make any sense). By that sort of estimate, suppose N has a factor F
between 10^k and 10^(k+1).

Then the probability that F begins with a 1 is something like

[1/k-1/(k+0.30103)]/[1/k-1/(k+1)]

which tends to log10(2) as k tends to infinity - so the distribution does
approach Benford's law. In fact, if you plot the distribution above, it's
more generous to 1's for smaller factors. It seems a skewed distribution,
but remember it's based on our insistence on observing the world in base 10,
and believing that 2-1 is "just as important" as (M38)-(M38-1). The same
observation is repeatable in any base - of course, at the most ridiculous,
ALL factors begin with a 1 when written in base 2.
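
Plugging a few values of k into that expression (a two-minute Python check):

from math import log10

for k in (1, 2, 5, 10, 50):
    p = (1/k - 1/(k + log10(2))) / (1/k - 1/(k + 1))
    print(k, round(p, 4))
print("limit:", round(log10(2), 4))    # 0.301, Benford's leading-1 frequency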

Chris Nash
Lexington KY
UNITED STATES
=
Still a co-discoverer of the largest known *non*-Mersenne prime




Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Head's algorithm for multiplying mod n

1999-07-08 Thread Chris Nash

Hi folks

   In the book _Primes and Programming_ Head's method of multiplying
   two numbers mod n is mentioned.  Is this actually more efficient
   than simply multiplying the two numbers and taking the modulus?
  Look at it this way. Head's method is essentially binary long
  multiplication, with the "wrinkle" that you take the remainder modulo
  n after each shift & add operation.
 That is going to be a *lot* slower than FFT convolution, for numbers the
 size of the Mersenne numbers we're testing! FFT is O(n*log(n)) where n is
 the number of limbs in the numbers being multiplied.

Limbs? It is good to know that the world has many different literal meanings
in many languages for "bits" - variety is good for us all. (The French word
for "buffer" is also, I seem to remember, rather amusing).

But enough already. One thing I'm surprised nobody has mentioned yet - is it
even in Lucas' FAQ? - is that modular reduction in the Mersenne case is
sinfully inexpensive. Even with a pure integer, no FFT implementation,
reduction mod 2^p-1 is very quick - take the lower p bits, take the rest of
the number shifted down by p bits, add them together. The size of the
original number is usually around 2p bits, so one iteration of the
shift-and-add will get the result correct to within 1 extra subtraction at
most.
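
In Python the whole reduction is just this (a sketch of the idea only - the
big-integer shifts hide all the word-level work):

def mod_mersenne(x, p):
    """Reduce x modulo 2^p - 1 using only shifts, masks and adds."""
    N = (1 << p) - 1
    while x > N:
        x = (x & N) + (x >> p)     # low p bits, plus the rest shifted down
    return 0 if x == N else x      # 2^p - 1 itself represents zero

p = 31
a, b = 123456789, 987654321
assert mod_mersenne(a * b, p) == (a * b) % ((1 << p) - 1)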

As for the FFT implementation, modular reduction is less than inexpensive -
it is virtually free - in fact, it makes use of a fact that means the
algorithm can be a large factor faster than "arbitrary" arithmetic in the
Mersenne case. In a standard FFT multiplication, the numbers have to be
padded with a large "dead zone" of zeroes - this is due to the fact that
the FFT algorithm works with a periodic signal, so data bits "wrap around"
from either end unless you have a large enough zero-buffer. The Mersenne
method uses this to its advantage - the bits 2^p and above *should* wrap onto
the least-significant bits as part of the modular reduction. With a little
adjustment (using a non-rational arithmetic base so that 2^p is in fact a
power-of-2 power of the base), the modular reduction then becomes a desirable,
and free, side-effect of the multiply algorithm - not to mention, the FFT
buffer is half as large, and so calculations are implicitly faster.

Chris Nash
Lexington KY
UNITED STATES
=
Still a co-discoverer of the largest known *non*-Mersenne prime.



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Poaching (was Mersenne Digest V1 #573)

1999-06-11 Thread Chris Nash

 co-worker (30 years of computer repair experience) that you *never ever*
 cheap out on power supplies.  And crappy cases almost always come with
 crappy power supplies.  This is especially true for a team like that one
 that would have to have constant operation.
 Crappy power supplies can cause flaky performance in innumerable ways
 relating to CPU and motherboard operation, as well as drive operation.

I have to back Lucas up on this one, and can't stress it enough. In one form
or another over the past few years I've been involved in mathematical
computing 24 hours a day, seven days a week. Apart from one of the first
Gateway P-120 motherboards (which apparently had a known tendency to
overheat and Gateway were aware of it), an early US Robotics 28.8 modem
(which was lousy design) and a modem which was struck by lightning (forgot
to unplug the phone cord), the only hardware component to have failed has
been power supplies. I'm currently on my fourth power supply in 2 years on
my current machine.

When the power supply fails, I have been fortunate and not had any permanent
damage to other hardware components, mainly because voltage regulators tend
to be quite robust components and, even in a failure, don't let much more
than 3.3V or 5.0V hit the board. (Though exploding capacitors in the power
supply *will* blank the CMOS). However a power supply is notoriously full of
very poor, very cheap components. It is the 10c resistor, or 50c smoothing
capacitor that fails - not very comforting when you may have thousands of
dollars of hardware hanging off it. People who drive sports cars don't use
the cheapest gasoline...

It seems lately though that component quality is decreasing. AT power
supplies seemed pretty indestructible, but ATX power supplies are much
weaker. John Pierce is right, a $59 case isn't bad - I could have got the
latest power supply without a case for $48. Resist the temptation though to
go to a high street store and pick up a cheap power supply for $30... and
believe me, a spare power supply you have hanging around, or a reconditioned
one is *not* an option.

It's not worth spoiling the ship for a ha'porth of tar... if you're building
a decent system, don't try to save a few bucks on a power supply. A good
brain is useless if the heart stops. I'm reminded of the Russian guy in
"Armageddon" - "American components, Russian components... all made in
Taiwan!".

Chris Nash
Lexington KY
UNITED STATES
===
Co-discoverer of probably the 8th and 11th largest known primes.



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Inane Stuff (Was: Mersenne: M38, SETI, and other random stuff )

1999-06-11 Thread Chris Nash

Once again my apologies for lowering the tone, and many thanks for some
sensible and thought-provoking responses!

   Following conservative estimates of cpu power and number of participants
 doubling every two years, I'd guess that we will have our first billion
 digit prime in 2021, when we have 40 million participants and Pentium XV
 1000GHz processors.

10^12Hz... wow! Can you imagine the technical innovation needed to get a
machine where light only travels 0.3mm in a clock cycle? That's some densely
packed, erm, stuff... probably not silicon, the sort of thing we probably
can't conceive right now (electron obedience school?), but there's a good 22
years to go yet. Back in 1977, I seem to remember "VLSI" meant a digital
watch was $100, now they're free with Happy Meals. As for 40 million
participants, maybe they'll be giving away LL engines or Dubner crunchers
with Happy Meals by then, maybe every electronic device in my house will be
squaring and subtracting 2 in its idle time.

  I'm not counting on seeing either in my lifetime.
   Well, I still plan on seeing it in my lifetime. ;-)

I'm with Spike on this one. If E.T. does make the call, there are going to
be a lot of people dropping EVERYTHING. And even if we don't have "Microsoft
SpaceBender V1.1a SR3" (courtesy of Bob Burrowes, thanks Bob) to make
interstellar communication a possibility, there'll be a lot of people
running for the cryogenic suspension chambers...

Meanwhile, the potential M38 gets ever nearer... not long to wait now I
shouldn't think. I hope Ernst or whoever is verifying will at least give us
a yes or no answer, even if the EFF rules mean that the exponent won't be
released to the world until it is in print...

Chris Nash
Lexington KY
UNITED STATES
===
Co-discoverer of probably the 8th and 11th largest known primes.



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: Connection with Mandelbrot? (was: status of exponents)

1999-06-11 Thread Chris Nash

Me again, my wife's out of town so I'm surfing the net too much. Apologies
if you're sick of me.

 b) All (or all but finitely many with an upper bound or maximum quantity
 for the sporadics) Mersenne primes follow some pattern.
 Imagine there turned out to be a link between prime patterns (or Mersenne
 prime patterns) and the Mandelbrot set? It's not out of the question. That
 thing has interesting additive combinatorics, also doubling patterns,
 Fibonacci sequences, and the like hidden in it.

It's absolutely doubtless there is a link with the Mandelbrot set - and
hopefully I won't fall in to the usual trap of trendy mumbo-jumbo and black
magic that the subject usually yields. It might be doubtful that we could
ever *use* this fact, but who knows, stranger things have happened,
apparently the pretty colors can be generated by charging a metal plate in
the shape of the set itself...

As Paul says, the thing is oozing with combinatorics. Each of its components
has an associated cycle length, in fact, each component has a root point
which, if the iteration is applied n times, the point is mapped back to
itself. Components (or "sprouts") are connected at single "attachment
points", and the cycle length of a child component is a multiple of its
parent. A lot of analysis has been done on the location of these points.

But think of it like this. Suppose you wanted to know how many components
were of each cycle length n. You'd have to find the root points, ie solve
"n'th iterate of x=x", which is an equation of order (you guessed it)
2^(n-1). Factor out the root x=0 (the root of cycle length 1) and the
equation has a Mersenne number degree. Of course, some of these roots are
also roots for factors of n, but you can enumerate them - you'll also end up
running for the Cunningham tables. The strange thing is, since the number of
components of given cycle length grows exponentially, there are not enough
"attachment points" for them all. Hence the familiar "mini-Mandelbrot" sets
have to appear, seemingly in the middle of nowhere (though they are
connected via some very convoluted infinite sequences - the set is, I think,
proven to be a single connected piece).

<Hand-waving>
Prime cycle lengths are interesting, because of the connection points
problem. A prime cycle length *should* produce more "island molecules" than
a composite would. In theory then we could do a primality test; zoom very
closely on a *very* tiny area of the Mandelbrot set (which we could compute
by some iterative root-finding procedure) and have a look. If there's an
"island molecule" there, N is prime, if not, N is composite. Obviously,
mathematical accuracy - and rendering time - is quite an issue if you're
staring so closely at the thing, but it's an intriguing thought experiment.

Sounds radical? Not at all, we've all seen this process 38 times before...
just not in the complex plane:

The Mandelbrot set iteration z -> z^2+c in C
The Lucas test iteration x -> x^2-2 in Z(N).

It's just too much to be a coincidence. A Lucas test is nothing more than a
very thin slice of the Mandelbrot set. Island molecules have a lot in common
with Lucas pseudoprimes.
</Hand-waving>

Chris Nash
Lexington KY
UNITED STATES
===
Co-discoverer of probably the 8th and 11th largest known primes.



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Inane Stuff (Was: Mersenne: M38, SETI,and other random stuff )

1999-06-11 Thread Chris Nash

I'm on a roll tonight. Here's another one from me. Sorry guys, but this one
was just too plain freaky to be a coincidence.

 The technical term is "superconductor" and I can conceive it quite fine
:-)
 (This will probably provoke more cryonics postings.)

I can see it now, George will be asking all you hardware guys how to
optimize version 118 for the 65536-bit Intel superconductor architecture.
The hardware guys better start working on this stuff, software guys like
these sort of gifts :)

 Room temp superconductors... or we can pull a star trek and use the warp
drive
 to speed up the speed of light and thus the optical components inside the
CPU.
 (This is how the computers on the NCC-1701-D supposedly work...if you
don't
 believe me, read the technical manual, available at fine bookstores
everywhere
 and on multimedia CD-ROM.)

And darn fine it is too. And all true ya know. There's only one leap of
faith required for it all to be possible (subspace), ok, maybe the
transporter is something else, but E=mc^2 makes that already seem like fact.
I had to stop myself going into Star Trek mode on my last post. Paul saved
me the embarrassment because I would have made a mess of it.

 ...maybe every electronic device in my house will be squaring and
 subtracting 2 in its idle time.
 I'd rather compute a Mersenne LL test.

Erm, Paul it *is* an LL test. I forgot the mod N, but there's no charge
for that :)

 cycles to exploring the Mandelbrot set, not some Julia set. And the Julia
set
 in question here is the world's least interesting...just a line segment.

... and it's "*the*" LL test as well. Freaky, eh? But surely not the world's
least interesting? Let's go Star Trek with it, this uninteresting,
line-segment Julia set is folded in subspace (ok, modulo N space, it's
getting late and I really need a trek fix right now) into an LL test. I just
posted that... how strange. Great minds think alike, and all that.

 Square and add i and you get something a tad more interesting...
 like from a storm chaser's lucid dreams.

A very different LL test, but still an LL test. A lucid dream, most
definitely, but perhaps a catalog of all the Lucas pseudoprimes
discriminant -1 (so of the form 4n+3)?

 have "Microsoft SpaceBender V1.1a SR3"...
 Now that would give a whole new meaning to "Internet Exploder". Can you
spell
 "core breach"?

"Gimme an M, Gimme an I, Gimme a C.".

URGENT CORRECTION to the preceding Microsoft-related article.

I love the list server when the "preceding article" arrives afterwards. Some
relativistic effect, obviously.

"Core breach" wouldn't begin to describe it. It would probably leave one of
those subspace rifts that hangs around and swallows hapless ships long
after
the original disaster becomes last century's news...

Or maybe worse. One might board the hapless ship and find Bill Gates rather
than Montgomery Scott suspended in the transporter pattern buffer...

Enough already, I'm probably already in too many people's killfiles than is
healthy.

Chris Nash etc etc



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Repeating LL remainders

1999-05-18 Thread Chris Nash

Just a quick note to thank Peter for his wonderful illustration of the
"repeating LL test remainders" question, with what I hope was a somewhat
surprising result!

 That is, S(32341) == S(1) mod M23.
 This is far more than 23 LL steps before the sequence repeats.
 EXERCISE: Repeat this analysis modulo M11 = 23 * 89.
   Find the period modulo M29 = 233 * 1103 * 2089, after getting
   the individual periods for each prime factor.

*smiles* I will not pre-empt the result for anyone who wants to calculate
these. The important observation is to look at each factor in turn just as
Peter suggests. One can almost work in reverse as well. What we see in a
(Mersenne number) Lucas-Lehmer test really is a very small sample of the
sequence (2+sqrt(3))^n. If a cycle were detectable from this small sample,
then all the factors of the number under test must satisfy some very
stringent, perhaps intractable conditions. As Brian Beesley points out, I
was guilty of a lot of guesswork, and didn't go into details of the
calculation of the period of the L-sequence. Hopefully Peter's excellent
examples show that the chances of the L-sequence period being spotted in a
much briefer S-sequence are very small indeed.

Chris



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Back to the maths of primes for a sec....

1999-05-17 Thread Chris Nash

  In various places, I have read that if the generalized Riemann hypothesis is
  true, then there is a very simple test for primeness, namely if n is an
  a-SPRP for all integers a < 2(log n)^2, then n is prime. From a computational
  viewpoint, is this actually of any use, as it will show if numbers are
  composite and if it is quick, then primes could be checked using it, then
  double checked via another means, also giving the opportunity to disprove
  a major hypothesis of maths...

The theorem tells us that, since the SPRP test is polynomial in log n and
the expected number of tests required is also polynomial, then if GRH is
true we have a deterministic primality test in polynomial time. At the
moment, the best unconditional polynomial-time tests are merely "probable
prime". For an arbitrary number "probable prime" is "pretty good", but proof
is elusive. I'll define "pretty good" in a moment.
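
For concreteness, here's the conditional test as a Python sketch (names are
mine; the 'prime' verdict leans entirely on GRH):

from math import log

def is_sprp(n, a):
    """Strong probable prime test of n to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def miller_grh(n):
    """Deterministic if GRH holds: odd n > 2 is prime iff it is an a-SPRP
    for every a < 2*(ln n)^2."""
    if n < 3 or n % 2 == 0:
        return n == 2
    bound = min(n - 1, int(2 * log(n) ** 2))
    return all(is_sprp(n, a) for a in range(2, bound + 1))

print(miller_grh(2**61 - 1), miller_grh(2**67 - 1))    # True False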

 Most mathematical programs/packages (maple, gmp, ...)
 which are capable of dealing with large integers
 have a "probably_prime" function.
 The documentation usually does not explain exactly what the algorithm is.
 I could be wrong, but I think these are a-SPRP tests,
 possibly Miller's test itself. Does anyone know for sure?

The initial test in MAPLE was quite weak (in relative terms) and was a
probable primality test, perhaps only PRP. After it was used quite
extensively, its weakness was found - the function would return "prime"
for a large class of composites. Generally "probable prime" is pretty
tough - an n-digit probable prime is composite with probability around
10^(-n/10), due to research by Pomerance et al. Their results are somewhere
on Chris Caldwell's wonderful site. While this probability is minute beyond
human comprehension - less likely than hardware/software/programmer/user
error! - it isn't proof, because a whole class of composites are "probable
prime" to the test. For instance, all Mersennes of prime exponent are 2-PRP.

The test is now tougher. If MAPLE now identifies a probable prime, it tests
again, to another base, and if that also passes, it performs a test using a
Lucas sequence. Again these are probable prime tests, so it is possible (and
likely) composites exist that pass all three. I think Carl Pomerance himself
has offered a cash prize for anyone who could find a counterexample to the
"twice SPRP, one Lucas sequence" test - an indication that it is not an easy
task of mere computation.

 As to the idea of using it to try to disprove grh,
 the best thing would be for cryptography programs which routinely
 generate relatively big "probably prime" numbers to use Miller's test
 and keep an eye on possible failures. It would probably be hard to
 convince people to introduce that feature, though...

Is "probable prime" enough? Practically, perhaps yes. Mathematically,
obviously not!

Chris Nash
Lexington KY
UNITED STATES



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Repeating LL remainders

1999-05-17 Thread Chris Nash

 I think we should check the math first. I have a sneaky suspicion that
 looping won't occur in the relevant region (the first 2^n-3 iterations)
 unless n is composite - which may be interesting, but doesn't help us
 eliminate Mersenne numbers as candidate primes. But my math is inadequate
 to prove this 8-(
 I think that you mean n-2 iterations, but you may be right.  It's hard to
 say, without any evidence, or solid math.

Think of it like this.

Define L(n)=(2+sqrt(3))^n+(2-sqrt(3))^n. It turns out that what we usually
call *the* Lucas sequence is just

S(n)=L(2^n).

The whole point of the proof is to show that the set of elements a+b.sqrt(3)
(a, b mod N) that form a closed group under multiplication has the maximal
order, N^2-1, and thus N is prime. The Lucas sequence does this with a
little jiggery-pokery, because it is sufficient to show that L(N+1) is zero,
while L((N+1)/q) isn't for any prime factor q of N+1. For Mersenne numbers
the only factor to worry about is 2, so the test collapses into its briefer
form.
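
A small Python sketch (notation mine) makes the identification concrete,
comparing the usual S-sequence with (2+sqrt(3))^(2^n) + (2-sqrt(3))^(2^n)
computed in Z[sqrt(3)] modulo N:

def mul(u, v, N):
    """(a + b*sqrt(3)) * (c + d*sqrt(3)) modulo N, as pairs (a, b)."""
    a, b = u
    c, d = v
    return ((a * c + 3 * b * d) % N, (a * d + b * c) % N)

def L(n, N):
    """L(n) = (2+sqrt(3))^n + (2-sqrt(3))^n mod N, by binary powering."""
    result, base = (1, 0), (2, 1)
    while n:
        if n & 1:
            result = mul(result, base, N)
        base = mul(base, base, N)
        n >>= 1
    a, b = result
    return (2 * a) % N       # the conjugate power contributes a - b*sqrt(3)

N = (1 << 13) - 1            # M(13) = 8191
s = 4                        # S(0) = 4 = L(1)
for n in range(1, 12):
    s = (s * s - 2) % N      # the LL recurrence
    assert s == L(2 ** n, N)
print("S(n) and L(2^n) agree mod", N)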

So the question becomes one of finding when (2+sqrt(3))^n has zero surd part,
and in fact proving that the group does not have order N^2-1. The sequence L(n)
"will" recur in that case. However, it is not difficult to prove that the
"first" recurrence, ie the point where L(x)=L(1), will generally occur quite
late in the sequence unless N has all small factors - in which case, we
would have eliminated this N by trial factoring.

Remember too, this is late in the "L" sequence, not in the S sequence!
Suppose for instance it occurred between L(2^(n-3)) and L(2^(n-2)) - which,
even assuming the "first" recurrence is equally likely anywhere, would occur
with probability 50%. The S-sequence would not even see it.

In short, it doesn't matter how you buffer the S-sequence. Suppose you
recorded every residue; that's n-2 steps. There are only (n-2)^2 possible
differences, while in theory the recurrence length could be any of O(N)
lengths. The odds of success are small - we'll call them practically
non-existent. If we are testing M(10,000,000+), then we could at most get
10^14 differences, while the number itself is as big as 10^3,000,000.

The odds of a recurrence occurring make the odds of finding a prime seem
positively good!

Chris Nash
Lexington KY
UNITED STATES
===
Co-discoverer of the 7th and 10th largest known primes.



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Any statistics majors out there?

1999-05-08 Thread Chris Nash


 though.  So outside about 14 sigmas you should be able to say the
 probability is below 10e-40. The problem is that if there are small
 deviations from "Gaussian-ness" way out on the wings of your distribution,
 the REAL probability of a certain result is not well approximated by the
 Error Function result.

This is a real good point - if we are assuming a Gaussian distribution, then
we are assuming the best case. The worst case is given by Tchebycheff's
theorem, which states that, given a probability distribution where only the
mean and standard deviation are known, the probability that an
observation will fall more than x standard deviations from the mean is
bounded above *only* by 1/x^2. (It's a tight bound for one value of x, but
only for a very unlikely distribution). In other words, if you have no
guarantees about the distribution, "counting sigmas" is going to give you a
false sense of security, and if the distribution is even slightly deviant
from Gaussian, then the result can be very wrong indeed.
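
The gap between the two assumptions is easy to put numbers on (quick Python
check):

from math import erfc, sqrt

for x in (3, 5, 10, 14):
    gaussian_tail = erfc(x / sqrt(2))    # two-sided normal tail beyond x sigmas
    chebyshev_bound = 1 / x**2           # worst case for an arbitrary distribution
    print(x, "sigmas:", gaussian_tail, "vs", chebyshev_bound)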

However we do have a little comfort. George's function - maximum convolution
error from a single iteration - does have some distribution. It "appears"
bell-shaped and looks like a normal distribution, and the samples George has
made are also reasonably sized. Not only that, each observation is actually
the maximum of the deviations from integer over many channels in the FFT -
in other words, the observations are already coming from a "smoothed"
distribution. I'm sure the distribution of a maximum of several observations
is something for which there is a statistical test. (The resulting
distribution may not be normal, but may have an analogous test).

Thanks too Todd for backing up the heuristic calculation of "14 sigmas" that
I got the hard way :)

Chris



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Re: Mersenne Digest V1 #533

1999-03-19 Thread Chris Nash

Great points, Brian.

Sadly the statistical inferences that can be drawn indicate no
evidence of any deviation from a theoretical "smooth" exponenential
decay curve. There is a message in the archive on this very point
(search for "Noll island")

The important word here is "statistical" - as human beings, even "trained"
ones, we are pretty dismal at recognizing what is random and
what isn't. The sample of 37 exponents is statistically too small to deduce
very much at all; it may be enough to suggest that exponential decay is a
reasonable hypothesis - but not enough to deduce anything about the
deviation from it. (Hence my caveat about the number of samples!).

The point is that random events *do* tend to occur in clusters

Behavioral studies on human recognition of randomness are a lot simpler to
get these days - just analyse lottery number picking strategies... Humans
have a dire aversion for instance against picking numbers that are
sequential, even those who are "aware" that such a sequence is just as
likely as any other. Similarly, humans avoid any picks that even have a pair
of consecutive numbers, which a bit of combinatorics proves is not that rare
at all. Random events most definitely do cluster, because of the Poisson
"time between" distribution. It's possible to have very large gaps (with
corresponding low probability) but a small gap is typical. In fact, two
consecutive short gaps is just as likely as a single, very long gap. Hence
some very human observations that "trouble/celebrity deaths/bus arrivals"
often occur "in threes" actually have probabilistic foundation!

Similarly I can find no statistically convincing evidence, even at the
5% level, that the "Noll islands" really do exist.

I wonder how many more "confirming instances" we'd have to find before we
could even get a result as statistically weak as 5%? My statistical
background is pretty awful, but I know that these sorts of analyses converge
at a rate of order sqrt(n)... maybe we'll be having the same discussion in a
"few years" time, after the 148th Mersenne prime is discovered and the sample
size is four times larger... of course that prime will probably have 10^24
digits...

(The rest of this reply is off-topic. Stop reading now if you object)
[8-bit hardware RNG]


Jean-Charles Meyrignac kindly informed me off-list that the machine may have
been the Commodore 64, which apparently used such a setup for its "white
noise" sound channel. As Brian points out, *provided* it's done properly
(correct statistical normalization of the output of each individual RNG)
such a technique is *far* superior to pseudo-random routines. It's only a
little off-topic; after all, one of the Pollard methods (Pollard-rho?) uses
a "traditional" pseudo-random number generator (x -> x^2+c mod N) and
*expects* the output to eventually correlate to indicate a factor. When
probabilistic methods are in use, remember Knuth's caveat that a good
algorithm can be killed stone-dead by a poor random-number generator - and,
worst of all, a good random-number generator may prove poor in a
particular application. One worth remembering if you're looking for a
parameter in a factoring algorithm...
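
Since Pollard rho came up: the whole method is small enough to show here (a
textbook Python sketch, Floyd cycle-finding with the map x -> x^2 + c), which
is exactly why so much rides on that map behaving "randomly enough":

from math import gcd

def pollard_rho(N, c=1, x0=2, max_steps=10**6):
    """Pollard's rho with the pseudo-random map x -> x^2 + c (mod N)."""
    x = y = x0
    for _ in range(max_steps):
        x = (x * x + c) % N              # tortoise: one step
        y = (y * y + c) % N
        y = (y * y + c) % N              # hare: two steps
        g = gcd(abs(x - y), N)
        if g == N:
            return None                  # bad luck -- try another c or x0
        if g > 1:
            return g
    return None

print(pollard_rho(2**67 - 1))    # should pull out Cole's smaller factor, 193707721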

Chris Nash
Lexington KY
UNITED STATES



Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Re: Mersenne Digest V1 #533

1999-03-18 Thread Chris Nash

Hi folks

There is a technique for assessing the "naturalness" of economic data.
This technique, known as Benford's Law, demonstrates that the
first digits of naturally occurring phenomena do not occur with equal
frequency.  In fact, lower digits occur with greater frequency in
tabulated natural data than larger digits.

Great approach to this from Rodolfo... Benford's Law is visually familiar to
those of us old enough to remember such anachronisms as tables of logarithms
and slide rules! Statistically, it's because models of natural processes
(say, radioactive decay and general "time between" distributions) yield an
exponentially decaying Poisson distribution. From a computing point of view,
it's the same effect as what happens if you generate floating-point numbers
from "random" bits: the distribution is skewed (the probability a number is
between 1 and 2 is the same as it is between 2 and 4). Pick an (unbounded)
random number... how can you do it? You can't make all numbers to infinity
equally probable, and any smooth transform of the number picked should be
"equally random" and have the same distribution. The answer turns out to be
logarithmic, hence Benford's Law.

(It may be apocryphal, but apparently some 8-bit machine (perhaps Atari?)
had a means of generating "random" numbers because some memory location was
subject to "noise" - effectively some component acted as a radio antenna. It
may even have been by design... but of course results obtained by sampling
this location for random bits were awful. Being natural they were not only
non-uniform and non-independent but also subject to their surroundings. Can
anyone validate this?).

Anyway, such logarithmic behavior is certainly visible in the Mersenne data.
Heuristically, the probability N=2^n-1 is prime is going to be proportional
to 1/log N, ie proportional to 1/n. (we ignore constraints such as n needing
to be prime and the factors being of specific form, but this is a good
enough start). Hence theoretically we expect the number of Mersenne primes
of exponent less than L to be a partial sum of this, proportional to log L.
Hence we expect the n'th Mersenne prime to have an exponent increasing
exponentially, and, in reverse, the logarithms of the exponents should be
statistically regularly spaced. (What follows may be *very* sensitive to a
better model of the distribution, but this will do as a first estimate).

In an argument similar to Benford's Law, the fractional part of these
logarithms should, for a "random" phenomenon, be uniformly distributed on
[0,1). If the phenomenon is truly random, then this result should hold no
matter what base of logarithm we choose. However, consider plotting the
statistical deviation of such observations from randomness for different
bases of logarithm. Any marked deviation from statistical "noise" and
sampling error is a good indicator of non-random data for which the
logarithm base is some sort of controlling parameter. (In effect this is a
similar approach as curve fitting our expected distribution model to the
observed data).

I'd be interested to hear from anyone who constructs such a statistical
deviation vs logarithm base plot. We may expect such a statistical approach
to suggest a distribution where the overall scaling, and artifacts such as
Noll's islands, manifest themselves in the plot as large deviations from
randomness and spikes in the plot. This is one for the statisticians, to
create a suitable measure of the deviation of these fractional parts from a
uniform distribution on [0,1). Perhaps the sample variance will be a good
first measure, but with only 37 samples and a high degree of
non-independence, beware!
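
As a starting point, here's a Python sketch of the measurement (a crude
Kolmogorov-Smirnov-style deviation; the exponent list is the 37 known at the
time of writing, and the choice of trial bases is mine):

from math import log

EXPONENTS = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279,
             2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701,
             23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433,
             1257787, 1398269, 2976221, 3021377]

def ks_deviation(base):
    """Max gap between the empirical CDF of the fractional parts of
    log_base(exponent) and the uniform CDF on [0, 1)."""
    fracs = sorted(log(e, base) % 1.0 for e in EXPONENTS)
    n = len(fracs)
    return max(max(abs((i + 1) / n - f), abs(f - i / n))
               for i, f in enumerate(fracs))

# 1.476 is (roughly) the conjectured average ratio of successive exponents.
for b in (1.2, 1.476, 2.0, 2.718281828, 10.0):
    print(b, round(ks_deviation(b), 3))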

Chris Nash
Lexington KY
UNITED STATES




Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: On a lighter note...

1998-09-18 Thread Chris Nash


A bit off-topic, but in a recreational-math kind of way. Plus,
I think we all need to step back a few paces from this USWest
thing, take a deep breath, regain our senses of humor (assuming
we had such to start with :) ...

Ernst... thank you so much... yes we all need to take a break. Much
appreciated after I just downloaded the biggest daily dose of messages since
my time on the list. I giggled a lot :)

As for the new math problem:


 -  /\
 | | /  \ (imagine a circle here,
 | |/\ but that's tough to draw
 | | /  \ using ascii characters).
 -  --
Question: what equation does the above represent?

For one horrible moment, I thought all it could represent was an anagram of
the Electronic Arts logo.

What could be more obvious?


I am very very scared. It reminds me of my last year of 'junior school' in
England in 1982. It had already been decided where I was going to go for the
rest of my education, so I spent several months doing nothing in math but
playing with compasses and constructing involute and evolute curves (I
didn't know that was going on, and didn't really care, the patterns sure
looked pretty). Oddly enough I'd say that was where I got into number theory
as an escape from the mindless tedium.

I tried to think of my answer to the square, triangle, shape... in the same
vein as Ernst's suggestions. 4-3=1 is the best I could see! And we're
surprised to find the number of college math students is dwindling?

Chris