----- Original Message -----
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
To: "Daran" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, December 05, 2002 12:31 PM
Subject: Re: Mersenne: P-1 and non k-smooth factors

> There is obviously a tradeoff here between increasing B2 and simplifying E,
> and increasing E, compensating for the increased run time by lowering B2.
> However it does seem obvious that increasing E always has to be paid
> for in increased memory requirements.

It's the larger values of D, rather than E, which use the most memory.  The
client rarely uses all it's allowed, except in low-memory situations.
For example, it needs 46 temps to run at D=150, E=2.  If only 45 temps were
available, the client, as currently configured, would run at D=120, E=2
using 38 temps.  But it could afford to run at D=120, E=6 (42 temps) or even
D=120, E=8 (44 temps), although, for reasons given, I don't think the latter
would be a good idea.
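
For what it's worth, every temp count quoted here fits the pattern
phi(D) + E + 4.  That's an empirical fit on my part, not anything read out
of the source, so treat it as an assumption.  A quick Python sketch:

    from math import gcd

    def temps(D, E):
        # Empirical fit to the temp counts quoted in this thread:
        # phi(D) for the residues relatively prime to D, plus E,
        # plus roughly 4 overhead buffers.
        phi = sum(1 for j in range(1, D + 1) if gcd(j, D) == 1)
        return phi + E + 4

    for D, E in [(150, 2), (120, 2), (120, 6), (120, 8)]:
        print(D, E, temps(D, E))    # 46, 38, 42, 44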

> For exponents around 8M, this is not a particular issue. However there is
> a real, practical constraint so far as Prime95/mprime is concerned - the
> entire _virtual_ address space is limited to 4 GBytes by the 32 bit
> address bus, and the OS kernel claims some (usually half) of this, so that
> the total memory usable by a single process is limited to 2 GBytes. (There
> is a "big memory" variant of the linux kernel which expands this to 3
> GBytes, but the point still stands).

As mentioned by other list members, there's also a 64GB version, which,
apparently, doesn't work.  I expect they'll have it working by the time >4GB
systems become commonplace.

> Since, on my practical experience, a 17M exponent will quite happily use ~
> 800 MBytes in P-1 stage 2,...

At roughly 7MB per temp, that sounds like a plan of D=420, E=4 (104 temps).

> ...the 32 bit address bus may well be a limiting
> factor within the exponent range covered by current versions of
> Prime95/mprime.

Easily, even at 17M.  To run with D=2310, E=12 requires 496 temps.  It would
go higher if the memory were there.  D is capped at sqrt(B2-B1).
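
Back of an envelope: if each temp is one FFT-length array of 8-byte doubles,
and I assume a 896K FFT for a 17M exponent (that FFT size is my guess, the
client picks its own), then with the phi(D) + E + 4 fit from above:

    from math import gcd

    def stage2_mem_mb(D, E, fft_len):
        # Assumes temps = phi(D) + E + 4, each temp an fft_len array of
        # 8-byte doubles.  fft_len is a guess; the client chooses its own.
        phi = sum(1 for j in range(1, D + 1) if gcd(j, D) == 1)
        return (phi + E + 4) * fft_len * 8 / 2**20

    print(stage2_mem_mb(420, 4, 896 * 1024))     # ~728 MB
    print(stage2_mem_mb(2310, 12, 896 * 1024))   # ~3472 MB

which squares with the ~800 MBytes you saw, and puts D=2310 at 17M well past
what a 32 bit address space can offer.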

[...]

> > What I was thinking of doing was writing a program to read in the factor
> > database from, say, 1M up (to avoid any factors found by ECM), discard
> > any below the TF limit, then try to find the smallest B2 which would
> > yield the factor given E=4, 6, 8, 10, or 12.  This won't tell us the
> > absolute frequency of extended factors, but the relative frequencies
> > would test, and perhaps quantify, my conjecture about the relative
> > effectiveness of these values.
>
> Why not simply use a random sample of numbers of suitable size? Say
> around 2^41, you are looking for factors from 2^65 upwards with exponents
> around 2^23. P-1 is really about the factors of k in f=2kp+1 since the +1
> is implicit and the 2p comes out "in the wash".
>
> (Does the size of the numbers in the sample actually matter from
> the theoretical point of view?)

No, but if I use random numbers rather than genuine factors of Mersenne
numbers, then the suspicion will be there that there's something special
about the latter which invalidates the result.

But it would probably be sensible to try this first.
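
The plain-stage-2 baseline (no Suyama term at all) only needs a few lines;
something like the sketch below, which takes random k near 2^41, strips the
B1-smooth part, and reports the smallest B2 that would have been needed.
The E-dependent test (as I understand the extension, checking whether the
leftover part divides one of the (mD)^E - j^E products stage 2 accumulates)
still has to be bolted on top, and B1 = 40000 here is just an arbitrary
choice for illustration:

    import random

    def is_prime(n):
        # Deterministic Miller-Rabin; these bases are valid far beyond 2^41.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def min_plain_b2(k, B1):
        # Smallest B2 plain stage 2 would need: strip the B1-smooth part of
        # k; whatever is left must be a single prime, and B2 must reach it.
        # Returns B1 if stage 1 alone already suffices, None if the leftover
        # is not a single prime (plain stage 2 can't finish the job then).
        c = k
        for d in range(2, B1 + 1):
            if d * d > c:
                break
            while c % d == 0:
                c //= d
        if c <= B1:
            return B1
        return c if is_prime(c) else None

    # Random sample of k around 2^41, as suggested:
    B1 = 40000
    sample = [random.randrange(2**40, 2**41) for _ in range(1000)]
    hits = sorted(b for b in (min_plain_b2(k, B1) for k in sample) if b)
    print(len(hits), "of", len(sample), "findable for some B2;",
          "median B2 needed:", hits[len(hits) // 2])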

> Regards
> Brian Beesley

Daran G.

