Mersenne Digest Wednesday, May 5 1999 Volume 01 : Number 552
----------------------------------------------------------------------
Date: Mon, 3 May 1999 14:31:52 -0400
From: "Ernst W. Mayer" <[EMAIL PROTECTED]>
Subject: Mersenne: IA-64
I got an announcement in today's mail advertising "Intel's Merced
and IA-64: Technology and Market Forecast," by Linley Gwennap, editor
of Microprocessor Report, and to be published by MicroDesign Resources
(www.mdronline.com). MDR claims the above to be
"The authoritative analysis of Intel's new microprocessor
architecture from the industry's most trusted source."
They also drop tantalizing hints about
"...the fledgling chip's strengths and its critical weaknesses."
From the IA-64 register set figure in the advert, one weakness appears to me
to be the sheer amount of silicon: Intel is going from just 8 FP registers
in all the Pentium incarnations to a whopping 128, each still having the
x86's full 80 bits, in the IA-64. There are also 128 65-bit general-purpose
(64-bit integer plus carry bit) registers. This raises several questions:
1) How does Intel expect to be able to maintain anything approaching their
current price advantage over other (already existing) high-end chips with
this much silicon in the IA-64?
2) What use are so many registers without a major increase in the number
of functional units? (I believe IA-64 still has 1 FP adder, 1 FP multiplier,
perhaps 2 64-bit integer units, the latter having been a longstanding feature
of e.g. Alpha and MIPS, both of which have excellent integer functionality
and neither of which has 128 integer registers.)
Sure, lots of registers is nice for out-of-order execution (OOE), but even
an OOE monster like the Alpha 21264 has "just" 80 FP registers, each of just
64 bits - IA-64 will have double the silicon here!
The aforementioned book is pricey: $995 before 31 May, but additional
copies are "only" $295. Microprocessor Report subscribers get $100 off
either of these figures. At a potential $195 a copy, perhaps if one of
the readers of this list is a subscriber, there might be enough interest
for a volume order?
Or, perhaps there's a better/cheaper place to obtain the relevant technical
information?
- -Ernst
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
------------------------------
Date: Mon, 3 May 1999 13:58:44 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: IA-64
> The aforementioned book is pricey: $995 before 31 May, but additional
> copies are "only" $295. Microprocessor Report subscribers get $100 off
> either of these figures. At a potential $195 a copy, perhaps if one of
> the readers of this list is a subscriber, there might be enough interest
> for a volume order?
>
> Or, perhaps there's a better/cheaper place to obtain the relevant
> technical
> information?
In the past, I've been blessed to see some Intel demos, but I had to sign an
NDA, so I'm not always sure what I can and cannot say.
But if you can get yourself into an Intel demo or seminar, they pass out all
sorts of literature. I'll be going to another one in August sometime, and
Intel will be there. I should be able to get a lot more detail on IA-64 and
Merced then.
But from what I can recall, there are multiple FPU pipelines to take
advantage of all those yummy registers...that and the fact that the core is
really a RISC design now anyways...register renaming, etc.
------------------------------
Date: Mon, 3 May 1999 15:04:25 -0500
From: "Griffith, Shaun" <[EMAIL PROTECTED]>
Subject: Mersenne: Java Prime and lotsa questions
Since the heavyweights here seem to be involved with finals and such, let me
get you started on your answers...
a) [skipping, I don't know]
b1) Using "properly sized" FFT: Yes, I believe using properly sized FFT's
saves some time. How much depends on the programmer, the data, the
language/compiler, and the hardware. Non-power-of-2 FFT's can even be
useful, but it seems that coding up all (or most) of the possible sizes may
be more trouble than it's worth.
b2) Recurring pattern in "N=N^2-2 % (2^P-1)": I don't recall anyone here
mentioning any interesting repeats, even for composite P. I'm sure many have
looked, and were disappointed.
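To make the sequence in b2) concrete, here is a minimal C sketch of the
Lucas-Lehmer recurrence N = N^2 - 2 (mod 2^P - 1) for tiny P, using plain
64-bit arithmetic (so P <= 31 here). The function name is mine, not from
any of the programs discussed on this list:

```c
#include <assert.h>
#include <stdint.h>

/* Lucas-Lehmer test for tiny exponents p: starting from s = 4, iterate
   s -> s^2 - 2 (mod 2^p - 1) exactly p - 2 times; 2^p - 1 is prime
   if and only if the final residue is 0. */
static int lucas_lehmer_small(unsigned p) {
    uint64_t m = ((uint64_t)1 << p) - 1;   /* the Mersenne number 2^p - 1 */
    uint64_t s = 4;
    for (unsigned i = 0; i < p - 2; i++)
        s = (s * s + (m - 2)) % m;         /* s^2 - 2 mod m, underflow-safe */
    return s == 0;
}
```

For p = 5, 7, 13 (2^p - 1 prime) this returns 1; for p = 11
(2047 = 23 * 89) it returns 0.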
c1) DB Lookup: If you are in the 2^8,000,000 range, that is 8 million bits,
or ~1MB per value. A database of all possible results of "N=N^2-2 % (2^P-1)"
would take 1MB * 2^8,000,000 ~= 10^2,400,000 MB. This dwarfs the number of
particles in the universe (~10^80) by a factor of roughly 10^2,400,000. If
instead you meant to look up all of the "N=N^2-2" values, and compute the
"%(2^P-1)" at runtime, then P~=8,000,000 gives the last number in the
sequence ~= 2^(2^8,000,000), which again runs up against the universal limit.
c2) The LL test does not adapt well to parallel efforts. I think it will
always be the case that interprocess communication will cause the
calculations to run slower than if just one processor crunched on the whole
thing. [Array or vector processor machines are another matter!] But if the
number of significant bits required were orders of magnitude larger than the
hardware word size, then it might make sense to parallelize the squarings.
We're not there yet.
d) NTT's (Number Theoretic Transforms): There has been some talk of that
here before. However, until the ratio of integer to floating point
throughput improves, NTT's will not be very popular. [Note that the FFT
"automagically" computes the %2^P-1, using the Crandall-Fagin procedure, so
NTT's have a lot of ground to make up first.]
e) & f) Don't know
If I have made any misstatements here, someone correct me please.
- -Shaun
Shaun Griffith, [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>
Quantum Mechanics: The dreams stuff is made of
------------------------------
Date: Mon, 3 May 1999 17:06:50 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: Mersenne: Strange Prime95 behaviour...
I just got some new machines in to play with.
They're only P166's, but fine for double-checking.
Did the requirements for double-checking change? I had the machines set to
get the best type of work, which for a 166 should be double-checks right
now...but they got factoring assignments instead.
Even when I set one to specifically do double-checks, it *still* got
factoring assignments.
Plus, bear in mind these are all identically configured machines, yet one
machine refuses to connect with HTTP and gives Primenet Error 13 messages,
but it works just fine with RPC. Doh!
And yet, another machine, identically configured (and I do mean
identical...I used Ghost to clone them) is using HTTP fine.
How peculiar. The non-HTTP working machine is fine in all other
respects...I can use the web browser no problem, logged onto my domain, etc.
Just HTTP gives error 13's no matter what.
Any thoughts? And is Primenet just not handing out Double-check assignments
right now for some reason?
Aaron
------------------------------
Date: Mon, 03 May 1999 19:38:42 -0500
From: Amy and Shane Sanford <[EMAIL PROTECTED]>
Subject: Mersenne: Iteration time?
Does anybody have an idea of what the iteration time for a PII 450a running
Prime95 under Win 98 should be (LL test in the 683 range)?
Shane
------------------------------
Date: Mon, 3 May 1999 20:59:07 -0400 (EDT)
From: "Vincent J. Mooney Jr." <[EMAIL PROTECTED]>
Subject: Re: Mersenne: FFt????????
Just a few from last year; I hope they are still active.
FFT Links
"http://theory.lcs.mit.edu/~fftw/fft-links.html" FFT Introduction (many
FFT links)
"http://www.spd.eee.strath.ac.uk/~interact/FFT/" Fourier Transform in
Signal Processing
"http://cfatab.harvard.edu/nr/bookcpdf.html" Numerical Recipes in C (See
parts on FFT)
At 08:13 PM 5/3/99 +0200, you wrote:
>At First, Hi all
>
>Well, could anybody explain me the FFT-Algorithm. I'd be very pleased about
>it, because I've not found any usefull stuff in the NET
------------------------------
Date: Mon, 03 May 1999 22:43:18 -0400
From: Peter Doherty <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Iteration time?
At 19:38 05/03/1999 -0500, you wrote:
>Does anybody have an idea of what the iteration time for a PII 450a running
>Prime95 under Win 98 should be (LL test in the 683 range)?
>
>
>Shane
Should be something like .200 seconds. I'm working on something in the 735
range and it's doing .218 sec on my P2 450. BTW, you say PII
450a...implying a Celeron 300A overclocked to 450...in which case it's a
Celeron 450A, not a PII...but the Celeron and P2 are about the same speed
for Prime95.
------------------------------
Date: Mon, 03 May 1999 23:02:05 -0400
From: George Woltman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Strange Prime95 behaviour...
Hi Aaron and List Members,
At 05:06 PM 5/3/99 -0600, Aaron Blosser wrote:
>Did the requirements for double-checking change?
No. The server is out of double-check assignments to hand out.
Until the problem is fixed it will hand out factoring assignments
instead.
The root of the problem is that the Primenet Server upgrade a couple
of months back has broken our ability to do a database merge. The
database of 2 months ago only had double-check assignments up to
3021377. My latest database has exponents up to 5,000,000 available,
but until we can do a merge the server will hand out factoring work.
Sorry for the inconvenience,
George
P.S. For those that don't know what a database merge is, Scott and I
maintain separate databases of results. Every once in a while we merge or
sync up.
------------------------------
Date: Mon, 03 May 1999 23:07:40 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Mersenne: [Mersenne] Celeron 400
I just got a 400 MHz Celeron, and it is working on 7,376,xxx. It is taking
0.250 seconds per iteration. How does this compare to a PII-400 and a
PIII-400? Is this about right for a Celeron-400?
+------------------------------------------+
| Jud McCranie [EMAIL PROTECTED] |
+------------------------------------------+
------------------------------
Date: Mon, 3 May 1999 19:22:28 -0600
From: "Blosser, Jeremy" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Iteration time?
I'm getting .244 on my PII-412 (OC'd)...
Oddly, on my Xeon processors (400MHz) I get ~.4. Weirdness...
>-----Original Message-----
>From: Amy and Shane Sanford [mailto:[EMAIL PROTECTED]]
>Sent: Monday, May 03, 1999 7:39 PM
>To: [EMAIL PROTECTED]
>Subject: Mersenne: Iteration time?
>
>
>Does anybody have an idea of what the iteration time for a PII 450a running
>Prime95 under Win 98 should be (LL test in the 683 range)?
>
>
>Shane
>
------------------------------
Date: Mon, 3 May 1999 21:17:49 -0700
From: Will Edgington <[EMAIL PROTECTED]>
Subject: Mersenne: JavaLucas (again) Large FFT results and I need some help oh great
prime gurus...
Blosser, Jeremy writes:
So I ran JavaLucas with an FFT of 256k. It's getting 1.75
sec/iteration. So... not too shabby really...
Not bad at all.
1) What's the algorithm for finding the best FFT size? I was doing:
[...]
So I looked at MacLucasUNIX, and it has:
while (n < ((BIG_DOUBLE)q)/ROUNDED_HALF_MANTISSA + 4.0)
n <<= 1;
size = n + ((n & MASK) >> SHIFT);
What I can't find is where the ROUNDED_HALF_MANTISSA came from, and
why the mask and shift operation?
ROUNDED_HALF_MANTISSA in MacLucasUNIX is mis-named, really; as I
recall (and I didn't write the original use of it), it's just a
constant based indirectly on the number of bits in the mantissa of the
doubles being used. You'll find the declaration in setup.h, based on
the define of BIG_DOUBLE.
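As a rough illustration of what that loop is doing (with an assumed
constant, not the actual MacLucasUNIX value), the run length is the
smallest power of two such that each word carries no more than about half
of a double's 53-bit mantissa:

```c
#include <assert.h>

/* Illustrative version of the run-length loop quoted above: double n
   until each of the n words holds at most ~RHM bits of the q-bit number.
   RHM = 26 (roughly half the 53-bit double mantissa) is an assumption
   for illustration; the gap padding via MASK/SHIFT is omitted here. */
static unsigned long fft_length(unsigned long q) {
    const double RHM = 26.0;
    unsigned long n = 1;
    while ((double)n < (double)q / RHM + 4.0)
        n <<= 1;
    return n;
}
```

With RHM = 26, an exponent like 3021377 lands on a 128K run length
(131072 words), consistent with the sizes people quote on this list.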
The mask and shift is a "trick" (that I believe George Woltman passed
on to John Sweeney) to leave gaps in the arrays to improve cache
performance. See the file "Wisdom" which John Sweeney wrote and I
pass on as part of the mers distribution. I have no idea whether it
will help or hurt Java performance, but there are some hooks in the
mers package code to use it (MacLucasUNIX) or turn it off (mersenne1)
in some of the functions that use the primary array.
2) I'm looking at the code in Lucas.c, and it looks like:
a) create the scrambled array scrambled[j] = n[permute[j-1]] *
two_to_phi[permute[j-1]]
(not quite sure why)
b) it converts the scrambled array to fft, squares it, does the
inverse fft...
c) multiplies the scrambled array * two_to_minusphi
(Is this the mod here???)
d) calls addsignal
(this one has me lost)
e) calls patch
(carry?)
I believe this is simpler in both mersenne1.c and MacLucasUNIX.c,
though it's been a while since I've looked at any of them this closely
and I myself don't fully understand FFTs.
So, my question is, where is the subtraction?
n = (n*n-2) % (2**p-1)
I see the square, I see (I think) the mod... wheres the -2?
MacLucasUNIX.c does it in normalize(). Mersenne1.c does it in main().
3) I was thinking in the shower this morning... (Odd, ain't it?)
For all n with n*n-2 < 2**p-1, the answer is just n*n-2, right? Also, if n <
sqrt(MAXLONG), then we can do it in place without the FFT...
Just looking at some data sets, the odds of n being less than sqrt(MAXLONG)
are actually not too bad... (especially in Java, which has 64-bit longs)...
So where does this short cut fit in? (with a big FFT, this can be a big
difference (I think))...
I'm not sure, but I'd guess that you've goofed somewhere. For
reasonably large p (in this case, anything over about 100), the chance
of n being less than 33 bits should be minuscule, except, of course,
during the first several iterations. But perhaps I'm misunderstanding
what you're saying.
And even when I tried the "obviously-it-will-work" change to start at
194 or 14 (rather than 4) in mersenne1 and MacLucasUNIX, there were
problems on some computers and compilers.
Will
http://www.garlic.com/~wedgingt/mers.tar.gz
mers.tgz
mers.zip
mersenne.html
------------------------------
Date: Mon, 3 May 1999 22:57:16 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Strange Prime95 behaviour...
> The root of the problem is that the Primenet Server upgrade a couple
> of months back has broken our ability to do a database merge. The
> database of 2 months ago only had double-check assignments up to
> 3021377. My latest database has exponents up to 5,000,000 available,
> but until we can do a merge the server will hand out factoring work.
>
> Sorry for the inconvenience,
> George
>
> P.S. For those that don't know what a database merge is, Scott and I
> maintain separate databases of results. Every once in a while we merge or
> sync up.
I had noticed that no database merge has taken place since Jan. 29 of this
year, and I suspected it was partly due to the Primenet upgrade, not to
mention that you were busy on version 18.
Don't sweat it...factoring is fine for my P166's. With any luck, a couple
quad PII-450's will land on my desk Thursday...that'll make me happy for
doing full LL tests.
------------------------------
Date: Tue, 4 May 1999 11:02:19 GMT
From: "Brian J Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Java Prime and lotsa questions
> b1) Using "properly sized" FFT: Yes, I believe using properly sized FFT's
> saves some time. How much depends on the programmer, the data, the
> language/compiler, and the hardware. Non-power-of-2 FFT's can even be
> useful, but it seems that coding up all (or most) of the possible sizes may
> be more trouble than it's worth.
Well, George went to the trouble of adding 3*2^n, 5*2^n and 7*2^n
run length code into Prime95 - even though it's less efficient than
plain powers of two, it's still well worthwhile for the exponents that
can benefit from it.
I'd be _delighted_ if someone took it on themselves to add code for
intermediate run lengths to MacLucas/MacLucasUNIX - even just
3*2^n would be very worthwhile. If I had time, I'd do it myself!
>
> b2) Recurring pattern in "N=N^2-2 % (2^P-1)": I don't recall anyone here
> mentioning any interesting repeats, even for composite P. I'm sure many have
> looked, and were disappointed.
If you get to residual 0, then you do get a most interesting repeat -
the next iteration is -2 and all the subsequent ones are +2.
This proves that 0 cannot occur as an "interim" residual (any
iteration prior to the (p-2)th) if 2^p-1 is going to turn out to be prime.
Although testing for "early zero" would be very quick, it's
nevertheless probably not worth doing. We see no res64=2 values
in the completed results for composite exponents.
Since there are "only" 2^p-1 available values that the residual can
take, if you iterate long enough, the residual is _certain_ to go into
a repeating loop eventually. However, the chance of this
happening in the first p-2 iterations would seem to be
vanishingly small. If it _does_ start to repeat "early" then 2^p-1
_must_ be composite - because it can't get to zero without sticking
at 2, see above - but is it really worth adding the extra code to
check?
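The "sticking at 2" behaviour is easy to check directly: from residual 0
the next step gives -2 (that is, m-2), and every step after that gives 2.
A tiny C sketch (names are mine, not from any program discussed here):

```c
#include <assert.h>
#include <stdint.h>

/* One step of s -> s^2 - 2 (mod m), written to avoid unsigned underflow.
   Algebraically: step(0) = m - 2, step(m - 2) = 2, and step(2) = 2,
   so once a zero residual appears it can never reappear. */
static uint64_t ll_step(uint64_t s, uint64_t m) {
    return (s * s + (m - 2)) % m;
}
```

Running it with m = 8191 (= 2^13 - 1) shows the chain
0 -> 8189 -> 2 -> 2 -> ...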
>
> c2) The LL test does not adapt well to parallel efforts. I think it will
> always be the case that interprocess communication will cause the
> calculations to run slower than if just one processor crunched on the whole
> thing. [Array or vector processor machines are another matter!] But if the
> number of significant bits required were orders of magnitude larger than the
> hardware, then it might make sense to parallel the squarings. We're not
> there yet.
This seems self-evident; however, it is certainly true that there is an
efficient way to code an FFT for a vector processor.
>
> d) NTT's (Number Theoretic Transforms): There has been some talk of that
> here before. However, until the ratio of integer to floating point
> throughput improves, NTT's will not be very popular. [Note that the FFT
> "automagically" computes the %2^P-1, using the Crandall-Fagin procedure, so
> NTT's have a lot of ground to make up first.]
>
If you have a processor like StrongArm, which has efficient integer
hardware but no floating-point hardware at all - forcing you to resort
to software emulation for floating-point - then NTT starts to make a
lot of sense. You get _some_ ground made up automatically
because you have no rounding errors, so you can use a smaller
transform size than experience with real/complex FFT would
suggest. I'm still "rather hazy" (read, totally baffled) on exactly how
the Crandall-Fagin procedure works, so I don't know if you could
adapt NTT to do the mod 2^p-1 automatically. But my gut feeling is
that it ought to be possible.
Regards
Brian Beesley
------------------------------
Date: Tue, 04 May 1999 14:34:51 EDT
From: "Foghorn Leghorn" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Update on using Prime95 as a windows shell
>Don't think I didn't try this (and a number of similar ideas). The
>problem was that the forked copy of Prime95 doesn't seem to start
>executing until ReCache completes. Either that or ReCache
>suspends itself until Prime95 completes - *most* unsatisfactory to
>hog all that memory long-term...
That's interesting; I haven't been getting that problem. When pass 2b
starts, Prime95 launches, and ReCache continues until the pass finishes, and
then it terminates, leaving Prime95 running at peak performance. I'm running
Windows 98, and I compiled your code with Borland C++ Builder 4. Could one
of these factors make a difference?
(If you're interested in trying out my compiled executable, I'd be glad to
send it to you.)
>As you've found out, running ReCache to completion then starting
>Prime95 does have a good effect (it forces DLLs etc. which aren't
>currently active to be dumped, for a start). However, I certainly get
>more consistent results using the manual start procedure.
The automatic method has been completely consistent for me. Strange...
_______________________________________________________________
Get Free Email and Do More On The Web. Visit http://www.msn.com
------------------------------
Date: Tue, 4 May 1999 13:05:33 -0600
From: "Blosser, Jeremy" <[EMAIL PROTECTED]>
Subject: Mersenne: RE: JavaPrime update... (Lotsa Questions)
Okay, I actually remembered to benchmark JavaPrime on the RS6000 we have
here...
That RS6000 can rock!
FFT = 256k
.4874s/iteration!!!
And that was with my old JavaLucas version... Gotta try the new one which
does the radix-8 ops... but it's currently being revamped, so I suppose when
I'm done...
I guess the PPC 603s (or are they 604s... hmmm, it's a RS/6000 63p) can kick
some butt w/ their Java FP libs... Makes me almost want to try it on the
AS/400; I can try it later today... problem is, the thing is so friggin'
weird...
Does anyone have benchmarks for MacLucasUnix on the RS6K that I can compare
against? I'd look at the benchmark page, but for some reason my company's
firewall is blocking the site "because it contains sex related content"....
Odd...
Anyway, I was thinking yet again (I know, you gotta hate when I do that...)
The FFT algorithm inherently lends itself towards a recursive
implementation, but for speed reasons we do it iteratively. Or in more
modern implementations, semi-iteratively...
However, for a multiple processor machine... wouldn't it be efficient to
recurse deep enough to spawn X (X being the # of CPUs) threads from there? I
don't see how we run into any memory/access problems in a multithreaded
scenario here... Another thought is to have each CPU perform a portion of
the FFT and use the Chinese Remainder Theorem to combine the results... (I
think; I need to read up on the CRT some more)...
------------------------------
Date: Tue, 4 May 99 21:37:16 CES
From: "Cornelius Caesar" <[EMAIL PROTECTED]>
Subject: Mersenne: RE: JavaPrime update... (Lotsa Questions)
>I actually remembered to benchmark JavaPrime on the RS6000 we have here...
>FFT = 256k
>.4874s/iteration!!!
...
>Does anyone have benchmarks for MacLucasUnix on the RS6K that I can compare
>against?
Well, my version of MacLucasUNIX (v4.30) runs doublechecks with FFT=256K
and currently uses about .168 sec/iter. This is a multi-processor RS/6000 F50
with 332 MHz PPC 604e's. Every processor gets one copy of the program.
A 43P with 233 MHz 604e needs the same time for a 128 K FFT.
I believe due to smaller caches on this machine (compared to the F50)
it needs more than double that time on a 256 K FFT.
Your JavaPrime is not so bad...
Cornelius Caesar
------------------------------
Date: Tue, 4 May 1999 15:10:12 -0600
From: "Blosser, Jeremy" <[EMAIL PROTECTED]>
Subject: Mersenne: RE: JavaPrime update... (Lotsa Questions)
>
>Okay, I actually remembered to benchmark JavaPrime on the
>RS6000 we have
>here...
>That RS6000 can rock!
>
>FFT = 256k
>.4874s/iteration!!!
>
>And that was with my old JavaLucas version... Gotta try the
>new one which
>does the radix 8 ops... but its currently being revamped so I
>suppose when
>I'm done...
>I guess the PPC 603s (or are they 604s... hmmm its a RS/6000
>63p) can kick
>some butt w/ their Java FP libs... Makes me almost want to try
>it on the
>AS/400, I can try it later today... problem is the thing is so friggin'
>weird...
>
BAH! The AS/400s JVM seems to be out of data... :P Of course, IBM has to
make it incredibly difficult to update anything on the AS/400. I can't even
get the PTFs to update it, I gotta have some IT guy do it... REALLY
annoying...
Oh well...
Anyway, the benchmarks were:
256k FFT
2.573s/iteration
The way IBM does things on the AS/400 is just so danged weird... For
example, before "creating the Java program" (via CRTJVAPGM) and upping the
optimization on it, I was getting some 8s/iter ... What the heck the
CRTJVAPGM command does is beyond me... (You can set the optimization level
via the JAVA command on the AS/400 anyway)...
Add to that the fact that the file system on the AS/400 is so wacky, it
sucks...
Oh well, back to real work I suppose. Perhaps I'll get the newer version
done later tonight...
------------------------------
Date: Tue, 4 May 1999 15:55:57 -0600
From: "Blosser, Jeremy" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: RE: JavaPrime update... (Lotsa Questions)
>BAH! The AS/400s JVM seems to be out of data... :P Of course,
data == date ... doh!
------------------------------
Date: Tue, 4 May 1999 10:19:43 +0000
From: "Steinar H . Gunderson" <[EMAIL PROTECTED]>
Subject: Mersenne: Re: Mersenne Digest V1 #551
On Mon, May 03, 1999 at 11:22:02AM -0700, Mersenne Digest wrote:
>Also, I'm wondering if any pattern begins to occur in the N=N^2-2 % 2^P-1
>sequence... Do you think N ever repeats itself? Has anyone checked this?
I checked it some years ago. It doesn't seem to. (But Mandelbrot fractals
do, that's where I got the idea from originally.) But don't take my word for
it, I was merely playing with Java's BigInteger class, and I don't think I
tested it extensively.
- ---snip---
>Well, could anybody explain the FFT algorithm to me? I'd be very pleased
>about it, because I've not found any useful stuff on the Net
If you read George's source, he gives you excellent advice: Go buy a book :-)
FFTs are (I guess) really serious maths, and you will need both a good
background _and_ a good textbook. Honestly, the Net doesn't contain very much
advanced material like this.
/* Steinar */
------------------------------
Date: Wed, 5 May 1999 16:47:27 +0100 (BST)
From: Nick Craig-Wood <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Java Prime and lotsa questions
In <URL:news:local.mersenne> on Tue 04 May, Brian J Beesley wrote:
> I'm still "rather hazy" (read, totally baffled) on exactly how the
> Crandall-Fagin procedure works, so I don't know if you could adapt NTT to
> do the mod 2^p-1 automatically. But my gut feeling is that it ought to be
> possible.
It is possible. That is how my StrongARM code works. The maths for the
irrational base transform all works out provided the relevant fractional
powers of 2 exist in the field you are using (i.e. for a transform of length n
there must exist a number x such that x^n = 2). Whether this is the complex
field or some other Galois field doesn't matter. This puts some
constraints on which modulus you can use... My code uses GF(2^64-2^32+1),
which works for DWTs up to 2^26 bits (67 million). The 'interesting' binary
representation of this number also makes for lots of short cuts when taking
the modulus.
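The arithmetic convenience of Nick's modulus comes from two identities:
with p = 2^64 - 2^32 + 1, we have 2^64 = 2^32 - 1 (mod p) and 2^96 = -1
(mod p), so a 128-bit product reduces with shifts, adds and subtracts.
A small C sketch verifying the identities by brute-force modular
exponentiation (it leans on GCC/Clang's unsigned __int128 extension just
for the check; real NTT code would exploit the identities instead):

```c
#include <assert.h>
#include <stdint.h>

/* p = 2^64 - 2^32 + 1, the modulus mentioned above. */
static const uint64_t P = 0xFFFFFFFF00000001ULL;

/* 64x64 -> 64 modular multiply via a 128-bit intermediate (GCC/Clang). */
static uint64_t mulmod(uint64_t a, uint64_t b) {
    return (uint64_t)(((unsigned __int128)a * b) % P);
}

/* Square-and-multiply modular exponentiation. */
static uint64_t powmod(uint64_t b, uint64_t e) {
    uint64_t r = 1;
    while (e) {
        if (e & 1) r = mulmod(r, b);
        b = mulmod(b, b);
        e >>= 1;
    }
    return r;
}
```

powmod(2, 64) comes out as 2^32 - 1, powmod(2, 96) as P - 1 (i.e. -1),
and powmod(2, 192) as 1, confirming the shortcuts.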
- --
Nick Craig-Wood
[EMAIL PROTECTED]
http://www.axis.demon.co.uk/
------------------------------
Date: Wed, 5 May 1999 11:50:56 -0400
From: Joth Tupper <[EMAIL PROTECTED]>
Subject: Mersenne: RE: JavaPrime update... (Lotsa Questions)
Message text written by "Blosser, Jeremy"
>Chinese Remainder Theorem<
You may want a different way to combine the results from many processors.
CRT: if you have pairwise relatively prime (positive) integers M1, M2,
..., Mn and residues R1, ..., Rn, then you can find an integer A such
that for all i, 1 <= i <= n, A = Ri (mod Mi) [here "=" means congruent].
The pairwise relative primality is important. That is, for all i != j,
we have gcd(Mi, Mj) = 1. In fact, the CRT does not in general work
without this.
The residues can be any integers.
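For tiny moduli the statement is easy to check by brute force; here is a
throwaway C sketch of the classic 3/5/7 example (nothing to do with any of
the programs discussed here, and real code would use Garner's algorithm or
the extended gcd rather than a scan):

```c
#include <assert.h>

/* Brute-force CRT for three pairwise coprime moduli: scan 0..m1*m2*m3-1
   for the unique A with A = ri (mod mi) for each i. */
static long crt3(long r1, long m1, long r2, long m2, long r3, long m3) {
    long M = m1 * m2 * m3;
    for (long a = 0; a < M; a++)
        if (a % m1 == r1 && a % m2 == r2 && a % m3 == r3)
            return a;
    return -1;   /* unreachable when the moduli are pairwise coprime */
}
```

Sun Tzu's example: the number congruent to 2 mod 3, 3 mod 5 and 2 mod 7
is 23 (unique mod 105).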
Joth Tupper
------------------------------
Date: Wed, 05 May 1999 20:51:27 EDT
From: "Foghorn Leghorn" <[EMAIL PROTECTED]>
Subject: Mersenne: ECM question
At Paul Zimmermann's ECM page,
http://www.loria.fr/~zimmerma/records/ecmnet.html
the optimal B1 value listed for finding 50-digit factors is 43000000, but
George's ECM factoring page uses 44000000 for the same purpose. Is one of
them wrong, or is there a reason for the difference?
------------------------------
End of Mersenne Digest V1 #552
******************************