Mersenne Digest         Monday, July 19 1999         Volume 01 : Number 600




----------------------------------------------------------------------

Date: Sun, 18 Jul 1999 23:03:52 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Setup for Dual processor NT

> Is there any way to set the affinity under NT automatically when a service
> starts up?

Put "Affinity=n" into prime.ini (where n = 0, 1, 2, ...) This works 
for both NTPrime and Prime95/NT.

If this line isn't there, the processes are scheduled on the first 
available processor at each timeslice. This costs about 5% on an idle 
system and next to nothing on a system which is very busy (because 
idle priority processes don't see much CPU), but it is expensive on a 
system which has just a little load from other applications.

I think "Affinity=n" may work with linux also.


Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 18 Jul 1999 15:43:15 -0700
From: Colin Percival <[EMAIL PROTECTED]>
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

At 11:44 AM 18/07/99 -0700, George Woltman wrote:
>2)  Me.  Some would argue that I share some of the credit for any
>Mersenne discoveries.  Be aware that I will donate any share to charity.

  I think that giving some to George is a great idea, with one change:  Buy
him a new computer with some of the money.
  I distinctly remember him telling me that he couldn't look into Pentium
III optimizations because he didn't have a Pentium III.
  Not only would the most deserving person (IMHO) get some of the prize,
but it would be in such a way that it benefitted GIMPS as well.

>3)  Scott Kurowski.  He has real expenses in running the PrimeNet server
>that should be reimbursed and has been instrumental in GIMPS' growth.

  I don't think that this would be a good idea.  I can't speak for Scott (I
can barely speak for myself ;-) ), but it seems to me that Entropia
benefits from GIMPS mostly due to the publicity it gets.
  I'm sure the publicity would be far more favorable if he can say that
Entropia is _donating_ its services, rather than having to admit that he is
getting some of the money.

>7)  Anyone that makes a mathematical or algorithmic breakthrough that 
>speeds up the search process.  I'm talking about a doubling in search speed
>not a 1% speedup in assembly code.

  I think that this would be great -- but I seriously doubt that any
improvement will be found.  We can't get any better with FP FFTs, since we
don't have any zero-padding any more, and you've specifically disallowed
implementational improvements.  A switch to integer FFTs might be better
for huge FFT lengths, but integer arithmetic is currently very slow
compared to FP arithmetic, so I doubt that will end up helping.
  Really, the only hope I can see for a significant improvement is a really
fast implementation of Schönhage-Strassen multiplication, but
Schönhage-Strassen is almost infamously slow.


Colin Percival
PS.  As for the idea of buying computers just to search for primes, GIMPS
currently has maybe $5 million of computers working on it.  Whatever could
be bought would probably have less impact than a single article in a major
newspaper about GIMPS.  (For reference, when the NY Times wrote about my
computation of the 5 trillionth bit of Pi, I got over 200 new volunteers).

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 18 Jul 1999 23:07:29 +0000 (GMT)
From: Henrik Olsen <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Setup for Dual processor NT

On Sun, 18 Jul 1999, Brian J. Beesley wrote:
> > Is there any way to set the affinity under NT automatically when a service
> > starts up?
> 
> Put "Affinity=n" into prime.ini (where n = 0, 1, 2, ...) This works 
> for both NTPrime and Prime95/NT.
> 
> If this line isn't there, the processes are scheduled on the first 
> available processor at each timeslice. This costs about 5% on an idle 
> system and next to nothing on a system which is very busy (because 
> idle priority processes don't see much CPU), but it is expensive on a 
> system which has just a little load from other applications.
> 
> I think "Affinity=n" may work with linux also.
There's no support for explicit processor affinity in stock Linux, so I'd
doubt that would work.
On the other hand, the scheduler's tendency to keep a process on the same
processor is very strong, precisely to prevent the rescheduling performance
hit seen in NT, so I wouldn't expect it to be a big problem.

- -- 
Henrik Olsen,  Dawn Solutions I/S       URL=http://www.iaeste.dk/~henrik/
Darth Vader: Luke, come to the dark side.
Luke: No.
Darth Vader: Your goodness has redeemed me. Die, emperor scum.
                           Return of the Jedi, the Movie-A-Minute version


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 18 Jul 1999 20:40:08 -0400
From: Pierre Abbat <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Factoring & LL tests

On Sun, 18 Jul 1999, Geoffrey Faivre-Malloy wrote:
> I was reading Fermat's Enigma today and something occurred to me...would it
> be possible to work with a number quicker if we used a higher base?  I.E.
> Use base 32 instead of base 10?  Thus, you're doing less math.  Of course,
> this would have to be in software because the floats can't be in that base.

If the floats you're talking about are used in the FFT, they are integers to
start with: the digits, in some base, of the number being squared. After the
multiplication and inverse FFT they are rounded back to integers. The base is
chosen so that the FFT's roundoff error stays below 1/2 even though the
digits come out roughly squared in size, so the rounding is exact.
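A toy Python sketch of this scheme (illustration only: it splits into base-256 digits and uses a plain complex FFT, whereas real LL code works directly mod Mp with an irrational-base transform):

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 FFT over complex floats.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y, base=2 ** 8):
    # Split x and y into integer digits, convolve via FFT, round back.
    def digits(v):
        d = []
        while v:
            d.append(v % base)
            v //= base
        return d
    dx, dy = digits(x), digits(y)
    n = 1
    while n < len(dx) + len(dy):
        n *= 2
    fx = fft([complex(d) for d in dx + [0] * (n - len(dx))])
    fy = fft([complex(d) for d in dy + [0] * (n - len(dy))])
    prod = fft([a * b for a, b in zip(fx, fy)], invert=True)
    # Every coefficient is an integer in exact arithmetic; round() strips
    # the FFT's (small) floating-point noise, as described above.
    coeffs = [round(c.real / n) for c in prod]
    result, place = 0, 1
    for c in coeffs:
        result += c * place
        place *= base
    return result
```

The base is the knob: with base 2^8 the convolution coefficients stay far below the point where roundoff could reach 1/2, so the rounding is always exact.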

phma
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 03:57:20 +0200 (MET DST)
From: [EMAIL PROTECTED]
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

> >7)  Anyone that makes a mathematical or algorithmic breakthrough that
> >speeds up the search process.  I'm talking about a doubling in search speed
> >not a 1% speedup in assembly code.
>
  > I think that this would be great -- but I seriously doubt that any
> improvement will be found.  We can't get any better with FP FFTs, since we
> don't have any zero-padding any more, and you've specifically disallowed
> implementational improvements.  A switch to integer FFTs might be better
> for huge FFT lengths, but integer arithmetic is currently very slow
> compared to FP arithmetic, so I doubt that will end up helping.
  > Really, the only hope I can see for a significant improvement is a really
> fast implementation of Schönhage-Strassen multiplication, but
> Schönhage-Strassen is almost infamously slow.

      I can dream of some possibilities.  For example, somebody
might find a way to easily detect whether Mp has an odd or an even
number of prime factors, without revealing the factors themselves.
We could reject all Mp with an even number of prime factors.
If the test runs quickly, this could almost double the search speed.

      Other potential improvements are borderline.  For example,

       *)  Somebody finds how to parallelize the FFT using little
           communication.  The wall-clock time might be reduced 10-fold,
           but the CPU time increased 16-fold.  This could be
           great for verifying a new Mersenne prime, but
           does it qualify for the money?

       *)  Somebody finds a novel way to choose elliptic
           curves modulo Mp, using the information that
           its prime factors are == 1 (mod p) and that 2, -p
           are quadratic residues modulo any factor of Mp.
           This lets the trial division phase search 10 bits higher, 
           such as searching to 2^74 rather than 2^64.  
           Does its finder get any money?

       *)  Somebody finds a way to verify the output of the LL test
           without a complete rerun (cf. verification of digital signatures).
           If this eliminates the need for double-checks, does it qualify?

     Peter Montgomery



_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 18 Jul 1999 21:37:13 -0700 (PDT)
From: poke <[EMAIL PROTECTED]>
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

10% for George
10% for Scott
10% for Discoverer
10% Charity/Scholarship (The Mersenne Scholarship Fund)
60% divided evenly to everyone who contributed, based on CPU cycles
contributed after the last Mersenne prime found.

If someone manages to double the speed, it should be assumed that the CPU
cycles that are saved would be credited to that person.


- -Chuck


 --
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail: [EMAIL PROTECTED]         :
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: Ask Mike! Aviation's response to Dear :
: Abby. http://www.avstarair.com        : 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 18 Jul 1999 21:41:56 -0700 (PDT)
From: poke <[EMAIL PROTECTED]>
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

I hate the charity idea only because it seems to me that a "Mersenne
Scholarship Fund" would do much more for our project in many ways:

1. We could control where the money goes to a greater extent.
2. It would allow us to contribute a great deal more mathematics in
general.
3. More notoriety.

- -Chuck

On Sat, 17 Jul 1999, Eric Hahn wrote:

> George Woltman wrote:
> >Hi all,
> >
> >     At the risk of opening Pandora's box, I'd like to bring
> >up the possibility of splitting up the $100,000 award for a 10 million
> >digit prime.  I'm soliciting everyone's opinion before making a decision.
> >
> 
> 1/4 to George or charity (his choice)
> 1/4 to Scott or charity (his choice)
> 1/2 to the discoverer(s) or charity*
> 
> *The discoverer(s) get to choose only if there is orderly
> exploration of exponents. Otherwise, it goes to a 
> charity of their choice.
> 
> That would be changed to 20%, 20%, 40%, and 20%, with the
> last 20% going to the individual(s) responsible for
> increasing the search speed significantly, if such event
> occurs.
> 
> This promotes an orderly exploration of exponents, yet
> allows those who want to find a 10M digit prime just
> for fun (and unorderly) to have the opportunity without
> being completely penalized.  It also encourages the
> advancement and development of new algorithms.
> 
> This is all, of course, assuming a GIMPSer is the
> discoverer(s)...
> 
> 
> 

 --
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail: [EMAIL PROTECTED]         :
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
: Ask Mike! Aviation's response to Dear :
: Abby. http://www.avstarair.com        : 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 02:14:18 -0400 (EDT)
From: Lucas Wiman  <[EMAIL PROTECTED]>
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

> I hate the charity idea only because it seems to me that a "Mersenne
> Scholarship Fund" would do much more for our project in many ways:
>
> 1. We could control where the money goes to a greater extent.
> 2. It would allow us to contribute a great deal more mathematics in
> general.
> 3. More notoriety.

I doubt that we could actually get many scholarships out of the $100,000.
Maybe scholarships for 6 or 7 people; any more and we start sending them to
community colleges.

(though, if you want to send someone to college, I wouldn't mind it ;)   

I like the idea of using some of it to get George some computers, and
maybe pay his electric bill for a year :)

OK, that's maybe $10K at the outside; beyond that we run the risk of just
cluttering his house with computers.

I also like the idea of giving some of it to Scott. And though I think that 
Colin Percival is right when he says that Scott gets most of the reward
from the publicity, he would get this anyway.  Especially if newspaper
articles read "$15,000 was given to Scott Kurowski, who gives his 
server time to run the PrimeNet software..."

Different message, same author:
> 10% for George
> 10% for Scott
> 10% for Discoverer
> 10% Charity/Scholarship (The Mersenne Scholarship Fund)
> 60% divided among everyone who contributed, in proportion to CPU cycles
> contributed after the last Mersenne prime found.

~6,000 people in GIMPS.  $60,000/6,000 = $10 on average.  Hardly worth doing...

Though personally, I'm not too comfortable with the idea of splitting the money
away from the winner.  I doubt that the winner would actually do anything to
deserve the money, but I also think that many, many people would be discouraged
to read "if you are the lucky discoverer, you don't get to keep 90 grand that
the EFF would give to you were it not for the software you are about to 
download."  This would also entail a restrictive license agreement, which 
some lawyer would write, and we all know that a "volunteer project's" 
credibility goes straight down the tubes when lawyers get involved.

Hey, if you don't believe it, try reading your bank statement and your 
credit card bill's fine print.  Then tell me how much respect we would 
lose in the name of legality.

I think the only reasonable way to get any money out of the hands of 
the finder would be to ask them to give some of it away.  I should
think that there would be many people who would be willing to do this,
but I don't think it would make us look good to write and ask to give
it away.

- -Lucas Wiman
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 11:37:27 +0200
From: Alex Kruppa <[EMAIL PROTECTED]>
Subject: Mersenne: Worth it?

Hi,

from stuff I saw on the list earlier I put together this estimate:

The chance of Mp being prime, if it has no factor below 2^q, is
about 2q/p (if q is reasonably big, i.e. 2^q much bigger than 2p+1).

If we test the Mersennes in the 10M-digit range for factors
to, say, 2^80, the chance for, say, M33219281 to be prime
is 1/207620. So the average money you'll earn for a test in this
range is a little less than 50c. George has estimated that a test
will take about 1 year with current hardware.
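Alex's arithmetic is easy to check; a quick Python sanity pass over the 2q/p heuristic with the numbers from the paragraph above:

```python
# Heuristic from above: if M_p has no factor below 2^q, then
# P(M_p prime) is roughly 2q/p.  Plugging in Alex's example:
p, q = 33219281, 80
odds = p / (2 * q)                    # one chance in ~207,620
cents = 100 * 100_000 * (2 * q) / p   # expected award per LL test, cents
print(int(odds), round(cents, 1))     # 207620 48.2
```

So each one-year test in this range is "worth" a little under 50 cents of the $100,000, matching the figure in the message.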

We should try to increase this probability before starting any
full LL test, perhaps by using better factoring algorithms.
Will v19 have P-1 code that works with numbers of this size?
But even with P-1, we might not be able to factor that much
further. The time consuming part of P-1 are multiplications
mod Mp just as in the LL test, so the crossover point between
raising the factoring limit and starting the LL test will be a function
of p (the number of iterations in the LL test) and less than linear,
that is, we might not get very far.
Do we have estimates already what the optimal bound
values will be for trying P-1 on a given Mp, to minimize the
(time of P-1 test) + (time of LL test * chance of not finding any factor) ?

What would REALLY help here is a transform that allows multiplying
without loss of accuracy (mod Mp), even if you can't add/sub in the
transformed data. P-1 only exponentiates, so if you could exponentiate
every transformed element and then transform back... talk about
bound values! I don't know of any such transform; if you do, please
say.

Ciao,
  Alex.

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 08:34:08 -0400 (EDT)
From: Lucas Wiman  <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Worth it?

> We should try to increase this probability before starting any
> full LL test, perhaps by using better factoring algorithms.
> Will v19 have P-1 code that works with numbers of this size?
> But even with P-1, we might not be able to factor that much
> further. The time consuming part of P-1 are multiplications
> mod Mp just as in the LL test, so the crossover point between
> raising the factoring limit and starting the LL test will be a function
> of p (the number of iterations in the LL test) and less than linear,
> that is, we might not get very far.
> Do we have estimates already what the optimal bound
> values will be for trying P-1 on a given Mp, to minimize the
> (time of P-1 test) + (time of LL test * chance of not finding any factor) ?

As I understand the Pollard P-1 algorithm, the time required for 1 bit of
the exponent would be equal to 1 iteration of the LL test (one squaring
mod Mp), so values wouldn't be that hard to create.  The main problem is
that to get a bound of any reasonable size, and try multiple bases, you
are looking at a time comparable to the LL test.  Also, one of the keen
things about GIMPS is that in its search for primes, it came up with loads
of factoring data for Mersennes.  However, the P-1 algorithm would make any
factor distribution data a lot less usable (as it would not include P with
P-1 having a large prime factor).

On the other hand, the P-1 algorithm can take advantage of the special
form of Mersenne factors.  We know that P-1 must be divisible by the
Mersenne exponent, and that in 1/2 of the cases, it is divisible by 8.
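A minimal P-1 stage-1 sketch in Python showing how that known form folds in (the `known` parameter, base 3, and the naive prime loop are illustrative choices, not how any real client implements it):

```python
from math import gcd

def p_minus_1_stage1(N, B1, known=1):
    # Pollard P-1, stage 1 only: compute a = 3^E mod N, where E is `known`
    # times every maximal prime power q^k <= B1, then take gcd(a - 1, N).
    # A factor f of N is revealed whenever f - 1 divides E (sufficient,
    # not necessary).  For a Mersenne number M_p, every factor f satisfies
    # f == 1 (mod 2p), so passing known = 2*p folds that guaranteed part
    # of f - 1 in for free.
    a = pow(3, known, N)
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q is prime
            pk = q
            while pk * q <= B1:                              # largest q^k <= B1
                pk *= q
            a = pow(a, pk, N)
    return gcd(a - 1, N)
```

For example, M11 = 2047 = 23 * 89 has 23 - 1 = 2 * 11, so p_minus_1_stage1(2047, 2, known=22) finds 23 with almost no smoothness work at all. A gcd equal to N itself means every factor was caught at once; the usual remedy is to back off B1 and retry.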

- -Lucas

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 15:58:45 +0000
From: "David L. Nicol" <[EMAIL PROTECTED]>
Subject: Mersenne: A busy computer is a happy computer - SunWorld - July 1999

introductory article re: distributed computing, decent GIMPS paragraph.


http://www.sunworld.com/swol-07-1999/swol-07-silicon.html?0719i
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 20:48:17 +0100 (BST)
From: Chris Jefferson <[EMAIL PROTECTED]>
Subject: Mersenne: A slight divergence...

It is well known that n is prime if for all prime factors of n-1, a^(n-1)
= 1 mod n and a^((n-1)/q) is not 1 mod n.

For example, take a=2, and 2^p+1 (p is prime, yes, that IS a plus)

Then we need to check if 2^(2^p)=1 mod (2^p -1) and 2^((2^p)/2) is not 1.

Now, we know that 2^p = 1 mod (2^p - 1)

so, 2^n = 2^n*2^p mod (2^p - 1)

It is a simple extension to say that 2^n = 2^(n mod p) (mod 2^p - 1).

So 2^(2^p) = 2^(2^p mod p) mod (2^p - 1) (Still following?)

So to check primality of 2^p+1, we only need to check the values of 2^p and
2^(p-1) (which is obviously (2^p)/2).
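The exponent-reduction step itself is sound and easy to spot-check in Python (the replies that follow take up where the larger argument runs into trouble):

```python
# The reduction used above: since 2^p = 1 (mod 2^p - 1), exponents of 2
# can themselves be reduced mod p.  Spot-check a few (p, n) pairs:
for p in (5, 7, 13):
    M = (1 << p) - 1
    for n in (0, 3, p, p + 2, 10 * p + 4):
        assert pow(2, n, M) == pow(2, n % p, M)
print("2^n = 2^(n mod p) (mod 2^p - 1) holds for all sampled pairs")
```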

The main reason I am asking this is that I know that checking whether a
number n divides a Mersenne number involves computing 2^p - 1 mod n, and I
was wondering if anyone's factoring code would go that high and whether it
would be quicker to check this than the traditional 2^p - 1.

- ------------------------------------
Chris Jefferson, Girton College, Cambridge, [EMAIL PROTECTED]
- ------------------------------------
Someone may have beaten me to Fermat's Last Theorem, but I've done
Riemann's general equation. However it won't fit in my signature file...
- ------------------------------------

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 22:38:05 +0200 (MET DST)
From: [EMAIL PROTECTED]
Subject: Re:  Mersenne: A slight divergence...

Chris Jefferson <[EMAIL PROTECTED]> writes

> It is well known that n is prime if for all prime factors of n-1, a^(n-1)
> = 1 mod n and a^((n-1)/q) is not 1 mod n.

     What are a and q?  You want to say
`if for each prime factor q of n-1, there exists a base a such that ...' .

> For example, take a=2, and 2^p+1 (p is prime, yes, that IS a plus)
> 
> Then we need to check if 2^(2^p)=1 mod (2^p -1) and 2^((2^p)/2) is not 1.

     What is n here?  Your modulus is 2^p - 1, suggesting you intend
n = 2^p - 1.  But the exponent is 2^p, suggesting you intend n = 2^p + 1.

     If we try p = 3, then 2^p - 1 = 7 is prime.
But 2^(2^p) = 2^8 = 256 is not one modulo 7.
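The counterexample is quick to confirm directly in Python:

```python
# p = 3 gives the prime M_3 = 2^3 - 1 = 7, yet the proposed congruence
# fails: 2^(2^3) = 256 = 4 (mod 7), not 1.
p = 3
M = 2 ** p - 1
assert M == 7
r = pow(2, 2 ** p, M)
print(r)   # 4, so the condition fails even for a prime M_p
```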

    Peter Montgomery


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 18:00:42 -0400 (EDT)
From: Lucas Wiman  <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Worth it?

Sorry if this goes out twice; I wasn't sure that it got sent the
first time...

>> As I understand the Pollard P-1 algorithm, the time required for 1 bit of
>> the exponent would be equal to 1 iteration of the LL test (one squaring
>> mod Mp), so values wouldn't be that hard to create.
(timing values, for comparison with regular factoring)

>> The main problem is
>> that to get a bound of any reasonable size, and try multiple bases, you
>> are looking at a time comparable to the LL test.
(I suppose that on further investigation, with a bound value for factors of
the exponent at 10,000 we are looking at about 14,000 squarings mod Mp, which
is really not all that many)

>> Also, one of the keen
>> things about GIMPS is that in its search for primes, it came up with loads
>> of factoring data for Mersennes.  However, the P-1 algorithm would make any
>> factor distribution data a lot less usable (as it would not include P with
>> P-1 having a large prime factor).
(That is to say, even if the P-1 algorithm were used, it still couldn't find
all the factors <2^64, unless we are talking about incredibly high bound
values)

>> On the other hand, the P-1 algorithm can take advantage of the special
>> form of Mersenne factors.  We know that P-1 must be divisable by the
>> Mersenne exponent, and that in 1/2 of the cases, it is divisable by 8.
(we know that the factors must be of the form 2*k*p+1, hence the factor-1
must be divisible by 2*p, and of the form 8*n+/-1, hence the factor-1 must
be divisible by 2 or 8)

>         I'm not sure that I understand what you are saying, but the
> following are a few observations that I made when I looked at the
> problem a few years ago.  The major stumbling block for an efficient
> P-1 is the implementation of a time and memory efficient GCD.  If one
> has a good GCD algorithm then a stage 1 only implementation becomes
> cost effective for exponents around 3 million.
>         However, the cost savings is tiny, probably less than a
> percent overall. The major benefit is that one does not need to double
> check the exponent.

Yup, silly me.  You are quite correct, of course, that the GCD is the 
most time consuming part.  

I imagine that from what you are saying, it gets more cost-effective
the higher the exponent.  How does it look at around 33219281?

- -Lucas
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 23:34:32 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: The $100,000 award for 10,000,000 digit prime

On 19 Jul 99, at 3:57, [EMAIL PROTECTED] wrote:

>        *)  Somebody finds how to parallelize the FFT using little
>            communication.  The wall-clock time might be reduced 10-fold,
>            but the CPU time increased 16-fold.  This could be
>            great for verifying a new Mersenne prime, but
>            does it qualify for the money?

Surely that's been done - long since - for application on massively 
parallel processors (vector machines).
> 
>        *)  Somebody finds a novel way to choose elliptic
>            curves modulo Mp, using the information that
>            its prime factors are == 1 (mod p) and that 2, -p
>            are quadratic residues modulo any factor of Mp.
>            This lets the trial division phase search 10 bits higher, 
>            such as searching to 2^74 rather than 2^64.  
>            Does its finder get any money?

I think this would speed us up only a few percent at best. 
Nevertheless it's a novel approach & could have other useful 
ramifications.

I mentioned to George that I think the best way to decide which 
improvements are eligible for any share of an award would be to have 
a vote with the electorate consisting of previous Mersenne prime 
discoverers.

>        *)  Somebody finds a way to verify the output of the LL test
>            without a complete rerun (cf. verification of digital signatures).
>            If this eliminates the need for double-checks, does it qualify?

That sounds interesting! Of course, if we could find a _quick_ way of 
computing a few bits of the residual, we could use this as a filter 
which would remove a large proportion of the contenders - never mind 
about being a quick check on a result. I think this really _should_ 
qualify for an award!


Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 21:15:00 -0400
From: "Alex Healy" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: The $100,000 award for 10,000,000 digit prime

Now, I'm going to toss out an idea.  I thought about this a few minutes
after reading the previous message and I want to see if you all think it's
worthwhile or not, or whether it's even correct or not.
Here goes:
1) If we know the last n bits of a number x, then we can (easily) determine
the last n bits of x^2.
2) If we know the last n bits of x^2 we can easily determine the last n bits
of x^2 - 2.
3) It follows by repeating this (I hope) that we can (relatively easily)
determine the last n bits of the last number in the Lucas-Lehmer series.
Now the slightly sketchy part (provided that was all right) . . .
4) There are fewer than, or equal to, 2^n possible combinations of n
terminating bits when we examine the multiples of P = 2^p - 1, i.e. the
group {n_terminating_bits(P), n_terminating_bits(2P),
n_terminating_bits(3P), . . . }.  It is my hope that this group has fewer
(ideally significantly fewer) than 2^n elements.  That way if the last n
bits of the last term of the Lucas-Lehmer test is a combination of bits
which could not possibly be the final n bits of a product of P, then we know
that the residue cannot be 0.
Layman's example:
any number that ends in ...34587 cannot be divisible by 25.  Why? (apart
from the obvious).  Because any number that's a multiple of 25 ends in (now
I'm looking at the group {25, 50, 75, 100, 125, 150 . . .}), 00, 25, 50, or
75.  ...34587 doesn't end with any of these.

I hope I've explained myself enough that you can either take from this any
worth which it might have, or correct the mistakes I've made.  :)
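Steps 1-3 are indeed cheap: squaring and subtracting 2 never let high bits influence low bits, so the last n bits of the unreduced LL value can be tracked with pure n-bit arithmetic. A Python sketch (function name mine):

```python
def ll_low_bits(p, n):
    # Last n bits of the *unreduced* Lucas-Lehmer value s_(p-2), where
    # s_0 = 4 and s_(k+1) = s_k^2 - 2.  Because (s + 2^n * t)^2 - 2 is
    # congruent to s^2 - 2 mod 2^n, working mod 2^n is exact.
    mask = (1 << n) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) & mask
    return s
```

For p = 5 the unreduced sequence is 4, 14, 194, 37634, and ll_low_bits(5, 16) returns 37634 exactly since it still fits in 16 bits. Step 4 is the sketchy part, as the reply below shows.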

Alex Healy
[EMAIL PROTECTED]
http://www.alexhealy.net

<snip>
>        *)  Somebody finds a way to verify the output of the LL test
>            without a complete rerun (cf. verification of digital
signatures).
>            If this eliminates the need for double-checks, does it qualify?

That sounds interesting! Of course, if we could find a _quick_ way of
computing a few bits of the residual, we could use this as a filter
which would remove a large proportion of the contenders - never mind
about being a quick check on a result. I think this really _should_
qualify for an award!

Regards
Brian Beesley

</snip>

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 18:40:55 -0700
From: Todd Sauke <[EMAIL PROTECTED]>
Subject: RE: Mersenne: The $100,000 award for 10,000,000 digit prime

Alex,

The group you seek always has 2^n elements.  All bit combinations are
possible.
(P = 2^p-1 is "minus one" in n-bit words.  2*P is minus two, etc. up to
2^n*P which is 0.  All bit patterns occur.)
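The claim above is easy to confirm by brute force in Python:

```python
# P = 2^p - 1 is odd, hence invertible mod 2^n, so k*P mod 2^n runs
# through every n-bit pattern as k varies: no ending can be ruled out.
p, n = 13, 8
P = 2 ** p - 1
endings = {(k * P) % (1 << n) for k in range(1 << n)}
print(len(endings))   # 256 = 2^n: all bit patterns occur
```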

Todd Sauke


>Now, I'm going to toss out an idea.  I thought about this a few minutes
>after reading the previous message and I want to see if you all think its
>worthwhile or not, or whether its even correct or not.
>Here goes:
>1) If we know the last n bits of a number x, then we can (easily) determine
>the last n bits of x^2.
>2) If we know the last n bits of x^2 we can easily determine the last n bits
>of x^2 - 2.
>3) It follows by repeating this (I hope) that we can (realatively easily)
>determine the last n bits of the last number in the Lucas-Lehmer series.
>Now the slightly sketchy part (provided that was all right) . . .
>4) There are fewer than, or equal to, 2^n possible combinations of n
>terminating bits when we examine the multiples of P = 2^p - 1, i.e. the
>group {n_terminating_bits(P), n_terminating_bits(2P),
>n_terminating_bits(3P), . . . }.  It is my hope that this group has fewer
>(ideally significantly fewer) than 2^n elements.  That way if the last n
>bits of the last term of the Lucas-Lehmer test is a combination of bits
>which could not possibly be the final n bits of a product of P, then we know
>that the residue cannot be 0.
>Layman's example:
>any number that ends in ...34587 cannot be divisible by 25.  Why? (apart
>from the obvious).  Because any number that's a multiple of 25 ends in (now
>I'm looking at the group {25, 50, 75, 100, 125, 150 . . .}), 00, 25, 50, or
>75.  ...34587 doesn't end with any of these.
>
>I hope I've explained myself enough that you can either take from this any
>worth which it might have, or correct the mistakes I've made.  :)
>
>Alex Healy
>[EMAIL PROTECTED]
>http://www.alexhealy.net
>
><snip>
>>        *)  Somebody finds a way to verify the output of the LL test
>>            without a complete rerun (cf. verification of digital
>signatures).
>>            If this eliminates the need for double-checks, does it qualify?
>
>That sounds interesting! Of course, if we could find a _quick_ way of
>computing a few bits of the residual, we could use this as a filter
>which would remove a large proportion of the contenders - never mind
>about being a quick check on a result. I think this really _should_
>qualify for an award!
>
>Regards
>Brian Beesley
>
></snip>
>

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 22:21:03 -0400
From: David M Einstein <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Worth it?

Disclaimer: All my commentary is based on recollections of what I did a
few years ago. I believe that there are comments in an archive somewhere
by Peter Montgomery and George Woltman containing much more accurate
info.

Lucas Wiman wrote:
> 
> Sorry if this was sent out twice, I wasn't sure that it got sent the
> first time...
> 
        That was my fault. I cc'd my response to base.org and the DNS got
confused. 

> >> The main problem is
> >> that to get a bound of any reasonable size, and try multiple bases, you
> >> are looking at a time comparable to the LL test.
> (I suppose that on further investigation, with a bound value for factors of
> the exponent at 10,000 we are looking at about 14,000 squarings mod Mp, which
> is really not all that many)
> 
        You will want to do P-1 in addition to, and after, sieving. It would be
foolish to use P-1 to discover a 30-bit factor.  If you are doing stage
1 only, I believe that I had looked at a B1 of at least 100K, which would
translate to about 140K (I'm trusting your numbers) squarings. I think
that that is still less than 5% of the number of squarings for a current
LL test, and would reap factors for about 5% of the numbers, so would at
least break even.
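A stage-1 P-1 attempt on Mp can be sketched in a few lines. This is only an illustration, not Prime95's implementation (the naive prime loop stands in for a real prime sieve), but it shows the shape of the computation and why the exponent p goes into the stage-1 exponent for free: every factor q satisfies q = 2*k*p + 1.

```python
from math import gcd

def pm1_stage1(p, B1):
    """Stage-1 P-1 on M = 2^p - 1.  E is p times every prime power
    up to B1; a factor q is revealed whenever q - 1 divides E."""
    M = (1 << p) - 1
    E = p  # free factor: q - 1 is always divisible by p (indeed 2*p)
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q prime
            t = q
            while t * q <= B1:  # largest power of q not exceeding B1
                t *= q
            E *= t
    x = pow(3, E, M)      # one big modular powering
    return gcd(x - 1, M)  # the GCD step
```

For example, pm1_stage1(29, 8) recovers the factor 233 of M29 = 233 * 1103 * 2089, because 233 - 1 = 2^3 * 29 is fully covered by B1 = 8 plus the free factor 29.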


> >> Also, one of the keen
> >> things about GIMPS is that in its search for primes, it came up with loads
> >> of factoring data for Mersennes.  However, the P-1 algorithm would make any
> >> factor distribution data a lot less usable (as it would not include P with
> >> P-1 having a large prime factor).
> (That is to say, if the P-1 algorithm was used, it still couldn't find all the
> factors <2^64, unless we are talking about incredibly high bound values)
> 
> >> On the other hand, the P-1 algorithm can take advantage of the special
> >> form of Mersenne factors.  We know that P-1 must be divisible by the
> >> Mersenne exponent, and that in 1/2 of the cases, it is divisible by 8.
> (we know that the factors must be of the form 2*k*p+1, hence the factor-1
> must be divisible by 2*p, or in the form 8*n+/-1, hence the factor-1 must
> be divisible by 2 or 8)
> 
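The special form also drives the sieving step itself. A toy trial-factoring loop (my own sketch, far simpler than Prime95's sieved version) only has to test candidates of the form q = 2*k*p + 1 with q = +/-1 mod 8:

```python
def trial_factor(p, kmax):
    """Find a factor of 2^p - 1 by trial division, using the fact
    that any factor has the form q = 2*k*p + 1 with q = +/-1 (mod 8)."""
    for k in range(1, kmax + 1):
        q = 2 * k * p + 1
        # pow(2, p, q) == 1 means exactly that q divides 2^p - 1
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
    return None
```

trial_factor(11, 10) returns 23 (the smallest factor of 2047), and trial_factor(29, 10) returns 233.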
> >         I'm not sure that I understand what you are saying, but the
> > following are a few observations that I made when I looked at the
> > problem a few years ago.  The major stumbling block for an efficient
> > P-1 is the implementation of a time and memory efficient GCD.  If one
> > has a good GCD algorithm then a stage 1 only implementation becomes
> > cost effective for exponents around 3 million.
> >         However, the cost savings is tiny, probably less than a
> > percent overall. The major benefit is that one does not need to double
> > check the exponent.
> 
> Yup, silly me.  You are quite correct, of course, that the GCD is the
> most time consuming part.
> 
        Not exactly. With a reasonable B1 I doubt that it would be the most
time-consuming part, but a straight Euclidean or Lehmer algorithm would
be quite slow.  A divide-and-conquer algorithm would be much faster, but
extremely memory-hungry (violating GIMPS's unstated (but very positive)
invisibility constraint: no noticeable effect on the host system's
operation). I believe that one of the reasons that the next version of
prime95 will contain a straight FFT multiply is to support a
divide-and-conquer GCD, but that is supposition on my part.


> I imagine that from what you are saying, it gets more cost-effective
> the higher the exponent.  How does it look at around 33219281?

        I think that a P-1 would probably be necessary. There may be enough
room for ECM to have overcome P-1's free factor of p, but I don't think
so.
> 
> -Lucas
> _________________________________________________________________
> Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
> Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 19 Jul 1999 22:26:12 -0400
From: "Alex Healy" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: The $100,000 award for 10,000,000 digit prime

Thanks.  I was also considering a base other than base 2.  But I'm afraid
the same problem arises as long as the base is relatively prime to the
Mersenne number we are considering.  For example, if you look at 2047 (i.e.
2^11 - 1) in base 23 or base 89 you'll see the algorithm I outline below
actually works, but that's sort of ad hoc because I know that 23 and 89
are factors. :)  Unless someone else has some particularly brilliant
insight, I admit this method is no better than factoring (since we'd need a
base (radix) b such that gcd(b, 2^p - 1) != 1).
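The 2047 case is easy to check by brute force: since gcd(23, 2047) = 23 > 1, every multiple of 2047 ends in the base-23 digit 0, so the filter does reject residues, but only because 23 is already a known factor. A two-line Python check (variable names are my own):

```python
P = 2047  # 2^11 - 1 = 23 * 89
# last base-23 digit of the first thousand multiples of P:
last_digits = {(k * P) % 23 for k in range(1, 1001)}
# 23 divides P itself, so every multiple ends in the digit 0
```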

Thanks for the feedback,
Alex


Alex,

The group you seek always has 2^n elements.  All bit combinations are
possible.
(P = 2^p-1 is "minus one" in n-bit words, 2*P is minus two, etc., up to
2^n*P, which is 0 mod 2^n.  All bit patterns occur.)
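Todd's claim is quick to verify by brute force for small cases: since 2^p - 1 is odd, it is invertible mod 2^n, so multiplication by it permutes the n-bit residues and every pattern occurs. A small Python check (names below are my own):

```python
def low_bit_patterns(p, n):
    """Set of low-n-bit patterns over the multiples k*P, P = 2^p - 1,
    for k = 1 .. 2^n (the 'group' from the quoted message)."""
    P = (1 << p) - 1
    mask = (1 << n) - 1
    return {(k * P) & mask for k in range(1, (1 << n) + 1)}
```

len(low_bit_patterns(11, 8)) is 256 = 2^8: no bit pattern is ever excluded, so the proposed filter can never rule out a zero residue.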

Todd Sauke


>Now, I'm going to toss out an idea.  I thought about this a few minutes
>after reading the previous message and I want to see if you all think it's
>worthwhile or not, or whether it's even correct or not.
>Here goes:
>1) If we know the last n bits of a number x, then we can (easily) determine
>the last n bits of x^2.
>2) If we know the last n bits of x^2 we can easily determine the last n bits
>of x^2 - 2.
>3) It follows by repeating this (I hope) that we can (relatively easily)
>determine the last n bits of the last number in the Lucas-Lehmer series.
>Now the slightly sketchy part (provided that was all right) . . .
>4) There are fewer than, or equal to, 2^n possible combinations of n
>terminating bits when we examine the multiples of P = 2^p - 1, i.e. the
>group {n_terminating_bits(P), n_terminating_bits(2P),
>n_terminating_bits(3P), . . . }.  It is my hope that this group has fewer
>(ideally significantly fewer) than 2^n elements.  That way if the last n
>bits of the last term of the Lucas-Lehmer test is a combination of bits
>which could not possibly be the final n bits of a product of P, then we know
>that the residue cannot be 0.
>Layman's example:
>any number that ends in ...34587 cannot be divisible by 25.  Why? (apart
>from the obvious).  Because any number that's a multiple of 25 ends in (now
>I'm looking at the group {25, 50, 75, 100, 125, 150 . . .}), 00, 25, 50, or
>75.  ...34587 doesn't end with any of these.
>
>I hope I've explained myself enough that you can either take from this any
>worth which it might have, or correct the mistakes I've made.  :)
>
>Alex Healy
>[EMAIL PROTECTED]
>http://www.alexhealy.net
>
><snip>
>>        *)  Somebody finds a way to verify the output of the LL test
>>            without a complete rerun (cf. verification of digital signatures).
>>            If this eliminates the need for double-checks, does it qualify?
>
>That sounds interesting! Of course, if we could find a _quick_ way of
>computing a few bits of the residual, we could use this as a filter
>which would remove a large proportion of the contenders - never mind
>about being a quick check on a result. I think this really _should_
>qualify for an award!
>
>Regards
>Brian Beesley
>
></snip>
>

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers


------------------------------

End of Mersenne Digest V1 #600
******************************
