Mersenne Digest Monday, November 1 1999 Volume 01 : Number 654
----------------------------------------------------------------------
Date: Thu, 28 Oct 1999 16:18:33 -0400 (EDT)
From: [EMAIL PROTECTED]
Subject: Re: Mersenne: LL and Pollard-rho in one?
> but doing the Pollard-Rho along a LL test would not
> be particularly efficient, anyways.
Or particularly successful. Remember Pollard-rho heuristically expects to find a
factor p in something along the lines of sqrt(p) iterations. Since we're doing, let's
call it, 10^7 iterations, you'd probably be better off trial-factoring to 10^14 - which
is already done and beyond with guarantees no factor is missed. You might get a lucky
factor that's larger, but experience (and the law of averages) tells you you aren't
going to be *that* lucky.
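As a concrete sketch of that sqrt(p) behavior, here is a minimal Floyd-style
Pollard-rho in Python (illustrative only, not anyone's production code; the test
number 2^29 - 1 = 233 * 1103 * 2089 is small enough that a factor turns up in far
fewer than 10^6 iterations):

```python
from math import gcd

def pollard_rho(N, a=1, x0=2, max_iter=10**6):
    """Floyd-cycle variant of the x -> x^2 + a iteration mod N.
    Heuristically finds a factor p of N in roughly sqrt(p) steps,
    which is why ~10^7 iterations only reach factors near 10^14."""
    x, y = x0, x0
    for _ in range(max_iter):
        x = (x * x + a) % N          # tortoise: one step
        y = (y * y + a) % N          # hare: two steps
        y = (y * y + a) % N
        g = gcd(abs(x - y), N)
        if 1 < g < N:
            return g
    return None

# Example: 2^29 - 1 = 233 * 1103 * 2089, a composite Mersenne number.
print(pollard_rho(2**29 - 1))
```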
>For every 2 LL iterations, you'd have to do another
>one for the cycle find algorithm and a multiply to
>store up any factor you find. Thats 9 transforms
>instead of 4
Brent's modification of Pollard-rho doesn't require storing the two parallel sequences
x_n and x_2n. Instead consider x_n-x_k, where k is the greatest power of 2 that's less
than n. At worst it could take twice as long to find a cycle, but it's at least twice
as fast.
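Brent's bookkeeping can be sketched like so: only one sequence is advanced, and
x_n is compared against a saved x_k, for k the largest power of two below n
(again just an illustrative sketch, with an assumed small composite test number):

```python
from math import gcd

def brent_rho(N, a=1, x0=2, max_iter=10**6):
    """Brent's cycle finding: compare x_n against a saved x_k, where
    k is the greatest power of 2 below n.  Only one sequence is
    advanced, so each pass costs a single squaring mod N."""
    saved = x = x0
    k = 1                     # next power-of-two checkpoint
    n = 0
    while n < max_iter:
        x = (x * x + a) % N
        n += 1
        g = gcd(abs(x - saved), N)
        if 1 < g < N:
            return g
        if n == k:            # reached a power of two: move the checkpoint
            saved = x
            k *= 2
    return None

print(brent_rho(2**29 - 1))   # 2^29 - 1 = 233 * 1103 * 2089
```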
> And Pollard-Rho is probably not very well suited for
> finding Mersenne factors as it cannot exploit the
> fact that these factor are 1 mod (2p) as P-1 can.
The extra exponentiation at the start of the P-1 algorithm is hardly a great
exploitation. Note that the 'rho' in Pollard-rho just means that your iteration
function should be (pseudo)random - you can create a pseudorandom iteration that does
exploit the form of the factors. Of course, that's no longer an LL iteration, though.
> I'm mostly asking for curiosity, whether the LL
> iteration really makes a proper Pollard-Rho
> iteration, especially with the -2.
The classic Pollard-rho iteration x -> x^2+a isn't particularly good with a=0 or a=-2.
The reason is the way the cycles degenerate. You want one of the cycles mod some
unknown prime factor to be short. What you don't want is all the cycles to collapse at
the same time... or never collapse at all. Suppose you applied the same Pollard-rho
iteration simultaneously to all (or at least many) possible initial points mod N (this
is a reportedly near-perfect parallelization according to a paper by Dick Crandall).
Crandall's parallelization works because it's inevitable that applying the
iteration reduces the number of distinct points on each pass, until eventually your
N initial points are folded down to quite short and detectable cycle lengths. However,
iterate with 0 and -2 and there are some obvious fixed points (solve the quadratic!)
and other, less obvious, short cycles. In effect, there are some points you can't
iterate away no matter how long you keep trying.
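Solving the quadratic as suggested: x = x^2 gives x = 0, 1, and x = x^2 - 2 gives
x = -1, 2. A quick brute-force check mod a small prime (illustrative only):

```python
# Why a = 0 and a = -2 are bad choices: the map x -> x^2 + a has
# "sticky" fixed points you can never iterate away from.
def fixed_points(a, p):
    """Fixed points of x -> x^2 + a mod a prime p (brute force)."""
    return [x for x in range(p) if (x * x + a) % p == x]

print(fixed_points(0, 101))    # [0, 1]   -- 0 and 1 are always fixed
print(fixed_points(-2, 101))   # [2, 100] -- i.e. 2 and -1 are always fixed
```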
There's a good visual indicator that 0 and -2 aren't particularly good. z -> z^2+c is
the Julia set iteration on the complex plane. Let's assume for the moment that somehow
the behavior of the Pollard-rho iteration mod N and the behavior of the same
z -> z^2+c iteration on C are equivalent - they are, but the mapping between them is hardly
trivial.
The Julia sets for c=0 and c=-2 are devastatingly boring, their iteration brings you
no surprises, no pretty pictures. In much the same way you're not going to get any
exciting factors with this iteration mod N, either.
In Lucas-Lehmer terms, what happens during the LL / Pollard-rho iteration is that all
the prime factors of the number have interlocked cycle lengths. That's great for
primality proving (because if you get the expected final residue, you know something
about *all* prime factors of your number - and hence conclude there can be only one).
But a failed LL test simply tells you that *all* prime factors of your number
failed identically. There's nothing in the LL recipe that distinguishes any one prime
factor from any other.
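For readers following along, the '-2' iteration under discussion is exactly the
Lucas-Lehmer recurrence; a minimal reference sketch:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2^p - 1, p an odd prime:
    s_0 = 4, s_{k+1} = s_k^2 - 2 mod M_p, and M_p is prime
    iff s_{p-2} == 0.  (This is the '-2' iteration above.)"""
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s == 0

# The small Mersenne prime exponents fall right out:
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```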
Chris Nash
Lexington KY
UNITED STATES
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers
------------------------------
Date: Thu, 28 Oct 1999 22:13:37 +0200
From: "Steinar H. Gunderson" <[EMAIL PROTECTED]>
Subject: Mersenne: Re: Quiet
On Thu, Oct 28, 1999 at 05:12:46PM +0200, Lars Lindley wrote:
>I think it's about time we find a new prime...
>The list is so quiet now.
I'm working on it! ;-)
I think a quiet list is better than a list in rage -- don't try
to start a poaching war again, please... The list isn't that
quiet either.
/* Steinar */
--
Homepage: http://members.xoom.com/sneeze/
------------------------------
Date: Thu, 28 Oct 1999 19:55:06 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: GrOGfest '99 reminder
Dear All:
Just a friendly reminder that the second not-quite annual Great Internet
Mersenne Prime Search OktoberGeekfest get-together-kind-of-deal is set
for tomorrow evening.
What: GrOGfest '99
Where: Tied House, Mountain View, California USA
When: Friday, 29 October, beginning at 6:30 pm PDT
Bring: Yourself, and anyone else you like
Dress: casual
I've only heard from a couple GIMPSers who say they'll attend, so I've
reserved a table for 10 on the patio (keyword: GIMPS). Luke Welsh,
Scott Kurowski and I will be there, and perhaps one or two local
Mersenne-type notables.
I wanted to invite Don Knuth, but Luke told me that last time they were
at a party together, DEK grabbed a half-empty minikeg of Löwenbräu
and was running around with it under his arm, spraying partygoers with
beer and shouting, "you're all wet!" I think we'll just send him a card and
photo.
(: .esruoc fo ,gniddik m`I)
Cheers,
-Ernst
------------------------------
Date: Thu, 28 Oct 1999 19:55:02 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: fast GCD
Alex Kruppa wrote:
> Can this be used to do a LL test and Pollard-rho factoring attempt at
> once?
Jason Papadopoulos wrote:
> Except that every once in a while you'd have to perform a GCD, and
> a GCD on million-digit size numbers takes about a day (no kidding!
> There seems to be no fast way to do one!)
A day seems somewhat of an overestimate (though maybe it takes that
long on a Pentium - I don't know.) On a decently fast Alpha, say a 500MHz
21164, I can do a million-digit (and note I mean base-10 digits, not bits)
GCD in under 5 minutes using a simple O(n^2) Lehmer-type implementation
with scalar multipliers of around 63 bits - one gets similar performance
on MIPS after adjusting for the typically lower clock rates. (Things should
be 2-3x faster on the Alpha 21264, due to its much better integer multiply.)
The code in question is my own (and will be publicly available when I
release my Mersenne p-1 code), but Torbjorn Granlund sent me timings that
indicate the GMP library's GCD is similarly fast on such hardware. The
Alpha implementation uses the UMULH instruction to get the upper 64 bits
of the 64x64==>128-bit unsigned integer products which arise when
multiplying the digits of a base-2^64 multiword integer by a scalar
of up to 64 bits in size, and the MIPS version uses DMULTU to get the
full 128-bit product. On CPUs supporting 64-bit integers and floats
but lacking such 128-bit multiply instructions, one can emulate UMULH via
a floating multiply, using scalars up to 52 bits in size (we need to save
one of the mantissa bits to effect error correction of the FMUL result.)
On the Intel IA-64, there is hardware support for a (now fully 64-bit)
UMULH-like instruction in the floating multiplier (also used to get the
low 64 bits of the product.) One can in theory do similar on the Pentium,
but for this to be efficient, one needs to be able to load a 64-bit int
directly into the 64-bit mantissa of a floating-point register - I don't
know enough about the x86 to say whether one can do such an operation
on current CPUs. I think George uses similar tricks to do the speedy
Prime95 sieve-based factoring on the Pentium, so it seems it should be
possible.
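For readers without UMULH or DMULTU, the standard half-word splitting trick looks
like this (sketched in Python with explicit 32-bit halves; in C these would all be
uint64 operations, and this is my illustration rather than Ernst's actual code):

```python
# Emulate UMULH -- the high 64 bits of a 64x64-bit unsigned product --
# using only operations whose results fit in 64 bits, by splitting
# each operand into 32-bit halves.
MASK32 = (1 << 32) - 1

def umulh(a, b):
    """High 64 bits of the 128-bit product a*b, for a, b < 2^64."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    lo = a_lo * b_lo                      # each partial product fits in 64 bits
    mid1 = a_hi * b_lo + (lo >> 32)
    mid2 = a_lo * b_hi + (mid1 & MASK32)
    return a_hi * b_hi + (mid1 >> 32) + (mid2 >> 32)

a, b = 0xFEDCBA9876543210, 0x123456789ABCDEF0
assert umulh(a, b) == (a * b) >> 64       # check against the exact bignum product
```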
Lastly, note that one doesn't need to do many GCDs if one is running, say,
a p-1 or ECM factoring algorithm - for large numbers, one could restrict
oneself to doing a single GCD at the end of stage 1 and perhaps one every
day or so in stage 2. This makes a "fast" (i.e. asymptotically faster
than O(n^2)) GCD less pressing.
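A toy illustration of that "one GCD at the end of stage 1" structure (my own
sketch, not GIMPS code; the bound B1 = 30 and the test number are assumptions
chosen so the example runs instantly):

```python
# Toy P-1 stage 1: do the whole smooth-exponent powering first, then
# take a single gcd at the end -- the GCD is off the critical path.
from math import gcd

def pminus1_stage1(N, B1, base=3):
    """x = base^(product of prime powers <= B1) mod N, then one gcd."""
    x = base
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q is prime
            qk = q
            while qk * q <= B1:       # raise q to its largest power <= B1
                qk *= q
            x = pow(x, qk, N)
    return gcd(x - 1, N)              # the only GCD in the whole stage

# 233 - 1 = 2^3 * 29 is 30-smooth, so stage 1 with B1 = 30 pulls the
# factor 233 out of N = 233 * 104729 (104728 has the factor 53 > B1,
# so the cofactor normally survives; the result is almost always 233).
N = 233 * 104729
print(pminus1_stage1(N, B1=30))
```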
Alex wrote:
>Thats true (btw, I keep hearing of a n*log(n) gcd - how long would that
>take then? Where can I read up on it - Knuth vol.2 ?)
Richard Crandall claims the fastest he's heard of is O(n*log^2(n)), i.e.
a factor of log(n) slower order than a length-n FFT, though for n >> 1,
this should still be much faster than O(n^2). The problem is that it's
likely highly nontrivial to code - using a well-tested library (e.g. GMP)
to effect the basic operations would probably be one's best bet if one
doesn't want to spend months coding it all oneself.
Peter Montgomery's PhD thesis mentions a fast GCD due to Moenck, but
I don't know what the precise work estimate is for that algorithm - Peter?
Other GCD-related musings I've been mulling over:
1) Is there a fast (e.g. O(n*log(n)) or such) iterative method to generate
a modular inverse modulo a Mersenne or Fermat number? Of course one can
use an extended GCD algorithm to get the inverse, but I had in mind
something more elegant, which ideally would allow fast DWT-based FFT
arithmetic to be used, i.e. which avoids having to convert an n-bit
mixed-radix DWT floating-point factoring residue to a fixed-radix integer,
do the EGCD, then convert the result back to floating form in preparation
for more DWT-style arithmetic. This question was inspired by the fact that
one can use a modular analog of the well-known Newton iterative scheme
for multiplicative inverses to get inverses modulo some desired power of 2.
For example, if x is some integer and y_n is the multiplicative inverse of
x modulo, say, 2^16, then one obtains the mod-2^32 inverse via
y_{n+1} = y_n * (2 - x*y_n),
where all intermediate arithmetic is done modulo 2^32. Alas, this scheme
doesn't work except for moduli which are powers of 2.
2) If a fast iteration as described in (1) did exist, could one use it
to obtain a GCD in similar time?
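Point (1)'s Newton iteration is easy to sketch: each pass doubles the number of
correct low-order bits, so an inverse mod 2^k costs only O(log k) multiplies
(illustrative code, using the fact that any odd x is its own inverse mod 8):

```python
def inverse_mod_2k(x, k):
    """Multiplicative inverse of odd x modulo 2^k via the Newton
    step y <- y * (2 - x*y), exactly the iteration given above."""
    assert x & 1, "x must be odd to be invertible mod a power of 2"
    y = x                         # odd x satisfies x*x == 1 (mod 8),
    bits = 3                      # so x is its own inverse mod 2^3
    while bits < k:
        bits *= 2                 # precision doubles every iteration
        y = (y * (2 - x * y)) % (1 << bits)
    return y % (1 << k)

x = 0xDEADBEEF
y = inverse_mod_2k(x, 32)
assert (x * y) % (1 << 32) == 1   # y really is the mod-2^32 inverse
```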
Cheers,
-Ernst
------------------------------
Date: Thu, 28 Oct 1999 21:27:05 -0400 (EDT)
From: Jason Stratos Papadopoulos <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: fast GCD
On Thu, 28 Oct 1999 [EMAIL PROTECTED] wrote:
> A day seems somewhat of an overestimate (though maybe it takes that
> long on a Pentium - I don't know.) On a decently fast Alpha, say a 500MHz
> 21164, I can do a million-digit (and note I mean base-10 digits, not bits)
> GCD in under 5 minutes using a simple O(n^2) Lehmer-type implementation
I suppose my memory was faulty...maybe the O(n^2) implementation took a
day and the fancier recursive version takes an hour. I also don't remember
where I got the timings to begin with.
> On the Intel IA-64, there is hardware support for a (now fully 64-bit)
> UMULH-like instruction in the floating multiplier (also used to get the
> low 64 bits of the product.) One can in theory do similar on the Pentium,
> but for this to be efficient, one needs to be able to load a 64-bit int
> directly into the 64-bit mantissa of a floating-point register - I don't
> know enough about the x86 to say whether one can do such an operation
> on current CPUs.
You can load a 64-bit integer into an FPU register on x86 machines, but
I believe it has to be a signed load (to load a 32-bit unsigned int you
first have to zero extend to 64 bits and load *that*).
I've heard lots of complaints that IA64 has no fully-integer multiply
(i.e. you have to load your numbers into an FPU register and pull the
result back). Actually, there have been a lot of gripes about IA64;
Intel's developers' guide is great reading but quite difficult at times.
jasonp
------------------------------
Date: Thu, 28 Oct 1999 20:58:02 -0700
From: Spike Jones <[EMAIL PROTECTED]>
Subject: Re: Mersenne: GrOGfest '99 reminder
But if I were to be sprayed with beer by Knuth I would consider it
the greatest day of my life! If I were to see him I fear I would fall
prostrate upon the earth, crying "Im unwooooorthy! I suuuuck!"
{8^D spike
(: .oot gniddik m'I)
[EMAIL PROTECTED] wrote:
> I wanted to invite Don Knuth, but Luke told me that last time they were
> at a party together, DEK grabbed a half-empty minikeg of Löwenbräu
> and was running around with it under his arm, spraying partygoers with
> beer and shouting, "you're all wet!" I think we'll just send him a card and
> photo.
>
> (: .esruoc fo ,gniddik m`I)
------------------------------
Date: Fri, 29 Oct 1999 00:59:58 -0400
From: Glenn <[EMAIL PROTECTED]>
Subject: Mersenne: PrimeDC.exe (was Crazy thought of the day...)
"Blosser, Jeremy" wrote:
> Seeing as how the Sega Dreamcast was officially released here in the U.S.
> today... and the list has been REALLY quiet (due to school starting?), I
> looked at the specs... and sure 'nuff, apparently, the Hitachi SH4 can do 2
> FPU ops per cycle (single or double precision) and has the good ole SIMD
> architecture that the PIII and 3DNow chips seem to support, with 128-bit
> ops, but again, only for 32-bit single precision floating point numbers...
> so bummer there... but the superscalar FPU is nice...
Check out http://www.canadawired.com/~gvink/Sega/Tech/tech_cpu3.html
for a good overview of this mighty CPU as used in the Dreamcast (1400
MFLOPS peak, 900 MFLOPS sustained with external memory).
and http://www.canadawired.com/~gvink/Sega/Tech/technical_index.html
for a good overview of the Dreamcast's technical specs.
> Anyway, since the Dreamcast runs WinCE, a port to it would be pretty easy...
> the only problem I see is storing intermediate files, since the "memory
> packs" are only 128k or something right now... I suppose you could store it
> somewhere on the 'net by using its built-in 56k modem or whatever...
See http://msdn.microsoft.com/cetools/platform/factsheet.asp for info on
WinCE for the Dreamcast.
There are now 4 Mb "memory packs" available, so storing intermediate files
most probably wouldn't be too much of a problem. The Dreamcast has 16 Mb
main memory - would this be enough?
PrimeDC.exe? Someone should make it happen!
Glenn
------------------------------
Date: Thu, 28 Oct 1999 22:32:42 -0700
From: Will Edgington <[EMAIL PROTECTED]>
Subject: Mersenne: Updated web pages; new factor counts & percentages
I've updated my web pages Yet Again, including adding some quick links
at the top of mersenne.html to the other Mersenne related files on my
web site.
The 1stfacnt.txt file is gone; I've split it into facntHU.txt (for
incompletely factored Mersennes) and facntD.txt (for completely
factored Mersennes) and the counts are no longer only for first
factors, but for second factors, third factors, etc., as well. The
files also provide counts for all exponents, for only prime exponents,
and percentages with at least one, at least two, etc., factors out of
the exponents for which I have any data at all, rather than just
counts of factors.
There's a significant drop in the percentages of prime exponent
Mersennes with known factors for exponents above 10 million, but that
may simply be because GIMPS and Primenet factoring haven't done much
for exponents that large yet. Comments?
All Mersenne numbers with prime exponents less than 80 million have
been trial factored to at least 2^41. All Mersenne numbers with
exponents under 24687 have had primality checks of their cofactors.
All SPRP cofactors with less than 836 decimal digits have primality
proofs (with a few exceptions; the binary version of ECPP that I have
crashes for some numbers).
Will
http://www.garlic.com/~wedgingt/mersenne.html
------------------------------
Date: Fri, 29 Oct 1999 03:27:22 -0400
From: George Woltman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: fast GCD
Hi,
At 07:55 PM 10/28/99 EDT, [EMAIL PROTECTED] wrote:
>Jason Papadopoulos wrote:
>
>> Except that every once in a while you'd have to perform a GCD, and
>> a GCD on million-digit size numbers takes about a day (no kidding!
>> There seems to be no fast way to do one!)
>
>A day seems somewhat of an overestimate (though maybe it takes that
>long on a Pentium - I don't know.) On a decently fast Alpha, say a 500MHz
>21164, I can do a million-digit (and note I mean base-10 digits, not bits)
>GCD in under 5 minutes
When I was making a P-1 attempt on F24 I reported the GCD on my PII-400
was taking a day.
Since F24 is 5 million digits, your computer would take 5 minutes * 5^2,
or about 2 hours. Given your machine is faster, my code is 32-bit, my code
does an extended GCD, the Alpha is a better architecture, and your code
is probably better - that accounts for the factor of 12 difference.
Now the good news. Crandall's giants code has an implementation of
the O(n * log n * log n) GCD in it. It was painfully slow. After two
days of optimizing, I will be able to do that 5 million digit
GCD in an estimated 30 minutes.
BTW, I know of no other publicly available implementation of the fast GCD.
It was written by J. P. Buhler.
Regards,
George
------------------------------
Date: Fri, 29 Oct 1999 23:07:52 -0700
From: Spike Jones <[EMAIL PROTECTED]>
Subject: Mersenne: geekfest at tiedhouse
The SF Bay area GIMPSers got together this evening for
a pleasant time of fellowship and dinner at the Tied House.
We all had the prime rib...
{8^D spike
------------------------------
Date: Mon, 1 Nov 1999 12:59:04 +0100
From: "Shot" <[EMAIL PROTECTED]>
Subject: Mersenne: PrimeNet's Top Producers
Hi.
On 30th October my computer completed another LL test and returned the result
to PrimeNet.
Since then my Individual Account Report seems to be working OK (it has the
current date and proper info), but my rank seems to be somehow corrupted.
When I look at <http://entropia.com/cgi-bin/primenet_user.pl?UserID=Shot> the
server returns an empty file, and I can't find myself in the PrimeNet's Top
Producers List (I should be listed somewhere around 0.747 P90 CPU years).
Could anybody verify this for me, please?
Thanks for your time,
-- Shot
__
c"? Shot - [EMAIL PROTECTED] hobbies: Star Wars, Pterry, GIMPS, ASCII
`-' [EMAIL PROTECTED] join the GIMPS @ http://www.mersenne.org
Science Explained (by Kids): Water freezes at 32 degrees and
boils at 212 degrees. There are 180 degrees between freezing and
boiling because there are 180 degrees between north and south.
------------------------------
End of Mersenne Digest V1 #654
******************************