Mersenne Digest        Sunday, September 26 1999        Volume 01 : Number 633

----------------------------------------------------------------------

Date: Sat, 25 Sep 1999 10:11:02 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: and, from WAY out in left field...

On 25 Sep 99, at 1:23, [EMAIL PROTECTED] wrote:

> As with all distributed efforts, we all contribute what we can. It's not
> a contest, except perhaps amongst us programmers, in terms of pushing each
> other to continually improve our codes.

There are other ways of contributing as well as simply contributing CPU cycles - valuable though that contribution is!

> > Sorry, I am bitter and can't afford a 550 Pentium.....
>
> Hopefully soon - here prices have dropped below 1000 $US for 500MHz systems -
> you can get a 300-400MHz PII virtually for free if you're willing to pay
> $20/month for 36 months of unlimited Internet access. You should try buying
> online, e.g. at www.valueamerica.com or some such store.

UK "shop window" retail prices _look_ awful - but they insist on selling you a huge monitor, "high-end" multimedia hardware (possibly including an awful scanner and a worse digital camera), a 56K modem (built-in WinModem, useless unless you're running Windoze) and a huge raft of "free" software (mostly obsolete and/or rubbish they can't offload any other way, except as landfill) as well as the bits you actually need.

You can buy basic systems at quite reasonable prices here, if you look in the right place, e.g. £440 (ex VAT) for a complete PII-400 system. PIII-500 systems are a couple of hundred pounds more expensive, but that's Intel pricing policy. In terms of "bangs per buck", the systems with the fastest processors are relatively poor value for money (especially as the performance of "real" programs is not linearly related to the CPU clock speed!)
There is an _enormous_ improvement in speed between a Cyrix 333 and a PII-400, _much_ less between a PII-400 and a PIII-600 (or whatever the fastest chip Intel do at the moment is - I find it hard even to keep track of where the "bleeding edge" is, so fast is the turnover of products!)

Bear in mind the transport costs, the complications with customs duty & the possible risks involved in dealing in foreign exchange (you don't know the exchange rate until the credit card company processes the transaction - & they can, and do, manipulate this to your disadvantage). I'm not sure that there's any real benefit in dealing direct with US suppliers for single unit purchases. Bulk purchasing may be an entirely different matter.

Here in the UK, component prices seem to be reasonably competitive, especially when purchased through the many outlets which include free delivery. This is a major saving on a small order.

> > How do I know that the exponent I am testing has not already run to
> > conclusion?

If someone else has run your exponent to completion (unlikely) you will find out when you check in your completion dates. The PrimeNet server will mutter at you.

Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 16:25:43 +0300
From: Jukka Santala <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Front-end design

Pierre Abbat wrote:

> I suggest a couple of named pipes for control (the front end writes to one and
> reads from the other, and mprime vice versa). Since writing to a pipe whose
> reader is stuck can get you blocked when the pipe is full, and writing to a
> pipe with nothing at the other end results in a SIGPIPE, mprime should not
> output anything to the pipe unless told to do so, and should trap SIGPIPE and
> turn off output.
The current .ini structure is, I feel, sufficient for controlling the operation of the client, and preferable. However, some signal should exist to force the client to re-read these settings after a change; at least in *nixes this should be easy (just handle HUP).

As for data out of the client, I like to think that the existing save-state files would be enough, or could be made to be enough. I'm not sure how many cycles generating them frequently would burn, though, which could be a problem. Also the timing data is not easily accessible this way, but I like the approach used in the recently posted *nix GUI. And I concur with the ideas that the "extended GUI" should, if possible, be separate from the main client so as not to run a risk of messing up the results.

-Donwulff
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 09:36:01 -0400 (EDT)
From: Jason Stratos Papadopoulos <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: FFTW for GIMPS?

On Sat, 25 Sep 1999, Guillermo Ballester Valor wrote:

> Yes, certainly I've be able to adapt lucdwt and McLucasUNIX in four
> days. On the other hand, my goal only was to know if working with FFTW
> is a good idea, and timings obtained make me think it could be.

For really big FFTs you can get major gains by using FFTW as a building block in a bigger program, rather than have it do your entire FFT with a single function call. As Ernst mentioned, the building block approach lets you fold some of the forward and inverse FFT computations together, and this saves loads of time in cache misses avoided. On the UltraSPARC, using FFTW for the pieces of your computation rather than the whole thing is somewhere between 2 and 10 times faster than FFTW alone.
> If your comparison were ported to intel machines, which is wrong, your
> code will run nearly as fast as mprime!!. You say your code is twice
> faster than FFTW, sure it is, *BUT* in my pentium-166 the short code I
> wrote do an iteration of my actual exponent 3975659 in 0.901 seconds
> while mprime take only 0.359. This makes a RPI=40%. Then, your code will
> reach nearly 90% !and without lots of assembler code!.

On the Pentium, assembly-coded small FFTs run more than twice as fast as FFTW. Even from C, you can do better on the Pentium (do a web search for djbfft, a free Pentium-optimized FFT package). For a recursive split-radix, you need about 200 lines of assembly; surely this is worth twice the speed!

jasonp
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 10:55:56 -0500
From: Herb Savage <[EMAIL PROTECTED]>
Subject: Re: Decamega Tests (was: RE: Mersenne: GIMPS client output)

John R Pierce wrote:
> > There's another trick to this, primarily useful in assembler not C
> > programming...

You can do it in C also. It's called Duff's Device. It's ugly but legal C code. Here's a URL with info on it:

http://wwwwbs.cs.tu-berlin.de/~jutta/c/duffs-device.html

Regards,
Herb Savage

> > Say you unroll something 16 times.... Take the actual iteration count
> > modulo 16, and JMP into the loop at that offset to start with the repeat
> > counter set to the count/16. i.e. if you need to do, say, 60 iterations of
> > the inner code, that's 48 + 12, so you set the loop count to 3, and jump to
> > the 4th block of the 16 (16-12 = 4)
> >
> > Anyways, prime95 is HEAVILY unrolled, using assembler macros to generate the
> > inline linear inner 'loops'.
> > -jrp _________________________________________________________________ Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers ------------------------------ Date: Sat, 25 Sep 1999 19:31:59 +0200 From: "Steinar H. Gunderson" <[EMAIL PROTECTED]> Subject: Re: Re: Decamega Tests (was: RE: Mersenne: GIMPS client output) On Sat, Sep 25, 1999 at 10:55:56AM -0500, Herb Savage wrote: >Here a URL with info on it: > > http://wwwwbs.cs.tu-berlin.de/~jutta/c/duffs-device.html A side note on the text in that URL: Duff uses `n%8', which a not-so-smart compiler would actually implement as mod 8. A smarter one might do `&7' to mask out the lower 3 bits. /* Steinar */ - -- Homepage: http://members.xoom.com/sneeze/ _________________________________________________________________ Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers ------------------------------ Date: Sat, 25 Sep 1999 13:31:43 EDT From: [EMAIL PROTECTED] Subject: Mersenne: Re: Mlucas 2.7x for SPARC I wrote <> Whoops, that says the opposite of what I wanted to say, which is I think it DOES run on any SPARC (f90 installed or not). - -Ernst _________________________________________________________________ Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers ------------------------------ Date: Sat, 25 Sep 1999 13:31:41 EDT From: [EMAIL PROTECTED] Subject: Mersenne: Re: Mlucas 2.7 for x86? Guillermo Ballester Valor ([EMAIL PROTECTED]) writes: << You say your code is twice faster than FFTW, sure it is, *BUT* in my pentium-166 the short code I wrote do an iteration of my actual exponent 3975659 in 0.901 seconds while mprime take only 0.359. This makes a RPI=40%. Then, your code will reach nearly 90% !and without lots of assembler code! 
>> Hi, Guillermo: Alas, I don't think it works that way, for several reasons. On RISC Microprocessors, instruction scheduling (while nontrivial) is much easier than on CISCs, meaning that a good compiler can usually generate code that runs nearly as fast as hand-tuned assembly code (ASM) written by a skilled programmer. By "nearly" I mean 50-90%, depending on the CPU, the compiler and the ASM programmer. Non-RISCs like the Intel x86 are notoriously difficult to generate good code for, either automatically or by hand - I believe George spent 2 years getting the ASM for the early versions of Prime95 put together, and he (George - correct me if I'm wrong) was an x86 ASM expert to begin with. I'm not familiar enough with the details of FFTW to say for sure (Steven Johnson or Jason Papadopoulos could answer this), but I'm pretty sure Frigo/Johnson have done some machine-specific ASM of critical portions of the code, at least for some popular architectures - if so, the x86 would be one of the first candidates, for the above reason and the fact that it's so numerous. That may explain why FFTW performs so well on the x86. Whenever I've tried my code (not including v2.7 - that should run at least 2x faster on x86 than 2.6, in my estimation) on x86, it's been 5-10x slower than Prime95. (That's partly due to the fact that my f90 compiler for x86 is awful, but only partly.) >Is there any linux or window Mlucas 2.7 executable for intel machines? No, but anyone with an f90 compiler for x86 is free to download and compile the source. By far the best compiler I know of for x86 is the Digital/Compaq/Microsoft Visual f90 for Windows - I don't have that one to play with, unfortunately, but I'd be interested to hear from someone who does what timings they get. 
Regards,
Ernst
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 21:49:26 +0200
From: Guillermo Ballester Valor <[EMAIL PROTECTED]>
Subject: Mersenne: Re: Mlucas 2.7 for x86?

[EMAIL PROTECTED] wrote:
>
> I'm not familiar enough with the details of FFTW to say for sure (Steven
> Johnson or Jason Papadopoulos could answer this), but I'm pretty sure
> Frigo/Johnson have done some machine-specific ASM of critical portions
> of the code, at least for some popular architectures - if so, the x86
> would be one of the first candidates, for the above reason and the fact
> that it's so numerous. That may explain why FFTW performs so well on
> the x86.

I've skimmed the code and I've not seen any line of assembler code. Perhaps the C code has been tuned with the x86 in mind.

> > Is there any linux or window Mlucas 2.7 executable for intel machines?
>
> No, but anyone with an f90 compiler for x86 is free to download and
> compile the source. By far the best compiler I know of for x86 is the
> Digital/Compaq/Microsoft Visual f90 for Windows - I don't have that
> one to play with, unfortunately, but I'd be interested to hear from
> someone who does what timings they get.

I've found a Microsoft f90 compiler. I tried to compile it this afternoon, but it gives me some errors, which I think are easy to solve. The compiler requires interfaces for most of the fft routines because the first parameter 'a' is a target !? :-( I don't have much time, but I'll keep working on it.

Yours sincerely,
| Guillermo Ballester Valor |
| [EMAIL PROTECTED] |
| c/ cordoba, 19 |
| 18151-Ogijares (Spain) |
| (Linux registered user 1171811) |
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 22:08:51 +0200
From: Guillermo Ballester Valor <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: FFTW for GIMPS?

Jason Stratos Papadopoulos wrote:

> For really big FFTs you can get major gains by using FFTW as a building
> block in a bigger program, rather than have it do your entire FFT with
> a single function call. As Ernst mentioned, the building block approach
> lets you fold some of the forward and inverse FFT computations together,
> and this saves loads of time in cache misses avoided. On the UltraSPARC,
> using FFTW for the pieces of your computation rather than the whole thing
> is somewhere between 2 and 10 times faster than FFTW alone.

That could be terrific! I'll look into it.

> On the Pentium, assembly-coded small FFTs run more than twice as fast
> as FFTW. Even from C, you can do better on the Pentium (do a web search
> for djbfft, a free Pentium-optimized FFT package). For a recursive
> split-radix, you need about 200 lines of assembly; surely this is worth
> twice the speed!

I would like to write some C code for general purposes. For tuned assembler we have Woltman's fantastic prime95/mprime code.

Thank you very much for your comments. They will help me a lot. Have a nice weekend.
| Guillermo Ballester Valor |
| [EMAIL PROTECTED] |
| c/ cordoba, 19 |
| 18151-Ogijares (Spain) |
| (Linux registered user 1171811) |
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 20:36:09 -0400
From: "Olivier Langlois" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: FFTW for GIMPS?

I've played a little bit with FFTW and I've noticed that its performance can vary a lot depending on how good your compiler is at optimization. For instance, compiled FFTW code is far from optimal with MSVC++ 6. This compiler doesn't fully use the FPU stack like an ASM programmer could, and I don't know why, since I'm sure that writing a compiler that would make good use of the FPU registers is far from impossible.

So, the compiler you use to compile FFTW is a major factor in the performance you'll get from it.

I don't know if anyone has had similar experiences, or if there is a better compiler than MSVC for intensive FP code.

Olivier Langlois - [EMAIL PROTECTED] - http://www3.sympatico.ca/olanglois
Nortel 515-818-1010 x46659
Montreal, Canada
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sat, 25 Sep 1999 22:14:26 -0400 (EDT)
From: Jason Stratos Papadopoulos <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: FFTW for GIMPS?

On Sat, 25 Sep 1999, Olivier Langlois wrote:

> I've played a little bit with FFTW and I've noticed that its performance can
> vary a lot depending on how good your compiler is at optimization.

Absolutely. Compile FFTW with gcc on a Sun and you'll get half the speed of FFTW using Sun cc.

> For instance, compiled FFTW code is far from optimal with MSVC++ 6.
This > compiler doesn't fully use the FPU stack like an ASM programmer could do and > I don't know why since I'm sure that writing a compiler that would make a > good usage of the FPU registers is far from impossible. > > So, the compiler you use to compile FFTW is a major factor for the > performance you'll get from it. > > I don't know if someone have done similar experiences and if there is a > better compiler than MSVC for intensive FP code. In my (admittedly limited) experience, MSVC produces pretty suboptimal FPU code. gcc for DOS (the djgpp compiler) runs some of my FPU intensive programs twice as fast as VC++ (for the *same* program). djgpp is nice because if you write your floating point C a certain way, the compiler will produce almost identical assembly code. This can be put to good use if you can write asm yourself. Another gripe I have about VC++ is that it's pretty good at using integer registers so that stalls are avoided (for e.g. the Pentium), but if you try to do anything even remotely complicated it hops to a function call and destroys performance. For example, when working with 64-bit integers gcc will recognize a shift by 32 and inline simple code for it, where VC++ will jump to a function that will use variable shifts (the call and shifts together take about 20x the time). Actually, gcc's long long primitives are *way* faster than __int64 stuff on Microsoft's compiler. 
jasonp
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 26 Sep 1999 04:39:06 +0200
From: "Robert van der Peijl" <[EMAIL PROTECTED]>
Subject: Mersenne: Graphical visualisation of computations

Jukka Santala <[EMAIL PROTECTED]> wrote on Friday 24 September 1999 at 5:25 PM:

> Though you ask this, I find the topic rather appropriate for the list,
> especially given the angle of "HOW can we visualize the process of
> mathematical operations".

That brings me to the following observation: I think few of us fully realize the _enormous_ amount of (highly optimized) processing required each iteration to achieve something that looks so simple: a squaring, minus 2, mod (2^p-1).

Remember, you see nothing happening! Prime just tells you: I used so-and-so-many BILLION clock cycles on the previous iteration. When a computer is ray-tracing (preferably on some fancy high-end workstation) you are immediately dazzled by the picture-perfect pixels appearing on the screen in brilliant colours. But your result is not for eternity. In GIMPS it is. M7xxxxxx tested. Composite. Or: M10yyyyyy trial-factored to 2^65. No factors below 2^65. And you know that YOU discovered that mathematical fact.

That little 64-bit residue or checksum is precious. If you had a grain of rice for the number of clock cycles needed to produce just one of those bits, you could feed the entire (current) world population for a whole year! Is that about right? Whatever the correct figure, the program doesn't show you what it's doing.

Actually, it might be interesting to be able to see, say, just the last 64 bits (in hex) of the current number L[n] in the Lucas sequence. The program displays those bits for L[p-1], right? So why not during the sequence?
The program should IMHO at least -optionally- display a progress bar showing graphically what it now only shows in digits: progress on current exponent [95.2% completed]. The bar would probably grow one pixel longer every couple of hours.

Any comments on the issue of graphical visualisation?

Cheers,
Robert van der Peijl
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 26 Sep 1999 00:03:40 -0400
From: "Chris Nash" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Graphical visualisation of computations

> I think few of us fully realize the _enormous_ amount of (highly
> optimized) processing required each iteration to achieve something that
> looks so simple: a squaring, minus 2, mod (2^p-1).
> Whatever the correct figure, the program doesn't show you what it's doing.

This is an interesting point - not so much whether a gui would be nice or not, but, what would it show? A progress bar, maybe... anything else? There isn't really anything else to show. Intermediate results of the LL test don't themselves have a lot of meaning (even the final result, if non-zero, is devoid of much interpretation). There's not a lot you could plot - a graph of the iteration time would only serve to show when you opened Microsoft Word or something...

I must admit, I dallied (albeit very briefly) with the bovine rc5 client. Were it not for their peculiar statistics ('if keys were drops of water, we could flood the world in a week' etc) I'd have got bored of the effort a lot sooner than I did. The SETI client looks pretty (more of a screen burner than a screen saver though) but the display is meaningless at best, and a waste of cycles at worst. I think we ought to count ourselves lucky that what we've got is bare-bones and low-impact.
Chris Nash Lexington KY UNITED STATES _________________________________________________________________ Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers ------------------------------ Date: Sat, 25 Sep 1999 21:59:57 -0700 From: Will Edgington <[EMAIL PROTECTED]> Subject: Mersenne: Mersdata update & web server "fix" I have, just today (Sat., 25 Sept.), updated the mersdata file on my web site; the lowM.txt file now contains all of the data that I have for all Mersenne numbers with exponents thru 1 million, pushing the size of the mersdata file (gzip'd or zip'd) to about 6.5 MB. That pushes my usage to a bit over 15 MB of the 35 that I'm now allowed, so lowM.txt may get larger still. Or, if there's interest, I'll try to figure out a way to "rotate thru" all of my data somehow. As for the web server "fix", every now and then over the last year or so, someone complained that they couldn't download one or more of the .gz, .tgz, or .zip files from my ISP's web server; since I never had a problem and they were always, eventually, able to fix the problem by some tweaks in their web browser, I always just pointed them to their browser as being at fault. This past week, Henrik Olsen insisted that the problem is in my ISP's web server, and gave me enough info that, after trying something out, I now agree with him. While my ISP does not have the problem fixed yet, I have now notified them and added a file to my web disk space that should cure the problem for anyone downloading from there. So, be sure to let me know if you have any trouble downloading anything from my web pages. 
Will

http://www.garlic.com/~wedgingt/mersenne.html
mersdata.tgz mersdata.tar.gz mersdata.zip
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 26 Sep 1999 01:42:47 -0400 (EDT)
From: Lucas Wiman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Graphical visualisation of computations

> not, but, what would it show? A progress bar, maybe... anything else? There
> isn't really anything else to show. Intermediate results of the LL test
> don't themselves have a lot of meaning (even the final result, if non-zero,
> is devoid of much interpretation). There's not a lot you could plot - a
> graph of the iteration time would only serve to show when you opened
> Microsoft Word or something...

Yes, but the point is that they may be pretty. (?) This probably wouldn't affect anyone who is already at GIMPS, but it might attract new members. This feature (based on George's postings about his interests in the project) would be badly maintained, and a support headache.

> I must admit, I dallied (albeit very briefly) with the bovine rc5 client.

Gack! About the only manufactured challenge comes from ID software.

> Were it not for their peculiar statistics ('if keys were drops of water, we
> could flood the world in a week' etc) I'd have got bored of the effort a lot
> sooner than I did. The SETI client looks pretty (more of a screen burner
> than a screen saver though) but the display is meaningless at best, and a
> waste of cycles at worst.

Ha! SETI has gained so many cycles due to that "waste". I know at least three people who "use SETI 'cause it looks cool", and only 1 person (discounting y'all, and myself) who uses GIMPS. Though she let me put it on her computer only after many assurances that it wouldn't do anything weird to her computer. A good reason why...
> I think we ought to count ourselves lucky that what we've got is bare-bones
> and low-impact.

Despite the extra cycles that it might get us, I couldn't agree more!

- -Lucas
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 26 Sep 1999 02:09:08 -0400 (EDT)
From: Lucas Wiman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Graphical visualisation of computations

> > I must admit, I dallied (albeit very briefly) with the bovine rc5 client.
>
> Gack! About the only manufactured challenge comes from ID software.

Should read "About the only more manufactured challenge comes from ID software." :)

- -Lucas
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 19 Sep 1999 09:37:33 GMT
From: [EMAIL PROTECTED] (Michael Oates)
Subject: Re: Mersenne: Graphical visualisation of computations

Hi,

I think you should be very careful about how much graphical information is to be represented. I have just joined GIMPS recently after using SETI@home. That program has a nice graphical interface, but many people, myself included, spent much time and effort to either disable the display, or use a text-only version and various tweaking to speed the machine up. The reason for this is that it takes a great many CPU cycles to perform the display, cycles that would be better spent doing the testing. In the case of SETI@home a speed increase of around 300% was obtained by not having the display!! Don't get me wrong, I would like to see some more information about what is going on, but not at the expense of losing time.
Regards,

>Jukka Santala <[EMAIL PROTECTED]> wrote on Friday 24 September 1999 at 5:25
>PM:
>[...]

Mike,
- --
ATLAS CELESTE - Bevis Star Atlas - & "The CD-ROM"
A very rare atlas found at the Godlee Observatory
http://www.u-net.com/ph/mas/bevis/
Astronomy in the UK http://www.ph.u-net.com
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Sun, 26 Sep 1999 11:46:43 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Factors Everywhere

Hi everyone,

A few points about publishing known factors.

1) I don't see any particular need to make available files in human-readable format, provided that we provide a specification for the file, and also binary executables for common platforms for converting a machine-readable file to human-readable format. On the other hand, it could be that "zipping" a human-readable format file provides near-optimal compression.

2) Since factors are of the form 2kp+1, it is only necessary to list p & k. This saves a significant amount of storage (compared with listing p and 2kp+1) and the proper factor is easily recovered.

3) If the data was held in variable-length records in a format similar to p,n,k1,k2,k3... where n is the number of known factors and k1, k2, k3,... are the multipliers for each of the known factors in turn, this would save much duplication compared with the current format (of factor.txt).

4) For values of p which fit into a (integer) CPU register and "small" values of k (up to approx. 64K), trial factoring is so fast that I doubt it would be any faster to search a file than to repeat the calculation - even if the file is already available locally.
Therefore I suggest that, in order to reduce substantially the size of the frequently-updated publicly-available known factors file, "small" values of k are omitted, and a program be provided instead to generate the data (with binary executables for common platforms). Perhaps the file should contain a "flag" denoting that factors are known, but have been omitted for this reason. We should still have a large master file somewhere, but this need not be uploaded frequently. 5) Following on from (4), it seems to make sense to record the highest value of k tested by trial factoring, rather than the maximum number of bits. 6) PrimeNet trial factoring has moved well into the 10 millions, however George's factor.zip file contains data only up to 10 million. I know there is a problem with uploading the large files concerned; hopefully, the suggestions above will help to reduce the size of the file, sufficiently to make it feasible to expand the exponent range to cover (at least) the active Primenet assignments for long enough for cheap, high-speed Internet connectivity to become widespread. 7) I have several hundred megabytes of disk space available on my ftp server, which has reasonable Internet access - at least 8 Mbps connectivity to the Internet core - and would be happy to provide means for anyone interested to upload factoring data (or anything else, strictly relevant to Mersenne or closely-related numbers) for the purpose of making it publicly available. Regards Brian Beesley _________________________________________________________________ Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers ------------------------------ Date: Sun, 26 Sep 1999 07:02:38 -0400 From: "St. Dee" <[EMAIL PROTECTED]> Subject: Re: Mersenne: Graphical visualisation of computations At 01:42 AM 9/26/1999 -0400, Lucas Wiman wrote: >> not, but, what would it show? A progress bar, maybe... anything else? 
>> There isn't really anything else to show. Intermediate results of the LL
>> test don't themselves have a lot of meaning (even the final result, if
>> non-zero, is devoid of much interpretation). There's not a lot you could
>> plot - a graph of the iteration time would only serve to show when you
>> opened Microsoft Word or something...
>
>Yes, but the point is that they may be pretty. (?) This probably wouldn't
>affect anyone who is already at GIMPS, but it might attract new members.
>This feature (based on George's postings about his interests in the
>project) would be badly maintained, and a support headache.

I think it should be emphasized, though, that the pretty-looking SETI
interface is a pig. On Win* machines, it often more than doubles the time
required to complete a block of data. All of the people I know who run SETI
went "Ooooh, ahhhhh!" when they saw the GUI, then promptly turned it off
because it hogged so many cycles.

The real attraction of the SETI project is not the interface but the chance
to be the person who helps to discover the existence of E.T. Face it, that
captures the public's imagination much more, generally, than does hunting
for large prime numbers.

Just my $.02,
Kel

------------------------------

Date: Sun, 26 Sep 1999 10:30:44 -0700
From: Will Edgington <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Factors Everywhere

Brian J. Beesley writes:

   1) I don't see any particular need to make available files in
   human-readable format, provided that we provide a specification for
   the file, and also binary executables for common platforms for
   converting a machine-readable file to human-readable format.
I, personally, have no way of producing executables except for Intel CPUs,
and presently only for Linux (I just yesterday got an old P100 machine up
running Win98 (and Prime95, of course :)). I have thought about a binary
data format off and on for a while now, but mostly in terms of making
updates easier by designing a format that can be updated in place (at least
most of the time), which is a different problem.

   On the other hand, it could be that "zipping" a human-readable
   format file provides near-optimal compression.

Most likely, and this avoids having to provide executables to read it:
they're already out there.

   2) Since factors are of the form 2kp+1, it is only necessary to
   list p & k. This saves a significant amount of storage (compared
   with listing p and 2kp+1) and the proper factor is easily
   recovered.

Unfortunately, it's only this simple when p is odd. For even p, factors can
be one more than any multiple of p, not just one more than even multiples of
p. And I'd really rather not have a different format for even exponents ...

   3) If the data were held in variable-length records in a format
   similar to p,n,k1,k2,k3..., where n is the number of known factors
   and k1, k2, k3,... are the multipliers for each of the known
   factors in turn, this would save much duplication compared with the
   current format (of factor.txt).

This is also a good idea, but a few of the lines would be _very_ long, which
some text editors and other programs will have problems with. I'd also
suggest dropping n; including it doesn't really add anything and it can be
computed quickly from the other data. Perhaps something like:

   n
   p,k1,k2,k3

Note that M(n) has no known factors.

   4) For values of p which fit into an (integer) CPU register and
   "small" values of k (up to approx. 64K), trial factoring is so fast
   that I doubt it would be any faster to search a file than to repeat
   the calculation - even if the file is already available locally.
Yes, but then there's no way to be sure whether the factor is already known
or not. There was a bug in an early post-GIMPS-start factorer of mine that
caused it to miss _exactly_ one factor, of M(47). If I hadn't had an
explicit list of factors to compare with, I would never have found the bug.

   Perhaps the file should contain a "flag" denoting that factors are
   known, but have been omitted for this reason.

This might be enough, though. But note that this can already be done using
the DATABASE format, if you're willing to omit all factors. Hm; so perhaps a
DATABASE file with all the exponents and trial factoring extents, plus a
second, human-readable file, as above, is a possibility.

And I've long thought about a small program (& function) to do fast binary
lookups in large Mersenne data files like these (human-readable or binary),
akin to the common UNIX "look" program for dictionary files, but, of course,
doing a numeric search rather than a dictionary search.

   We should still have a large master file somewhere, but this need
   not be uploaded frequently.

I update my local copy about twice a week, usually; it takes only a couple
of hours on my new PIII/450MHz machine and was well under a day even on my
old P233MMX system. These times include automatically downloading the data
out there every few days to a month, as appropriate for the particular file.

   5) Following on from (4), it seems to make sense to record the
   highest value of k tested by trial factoring, rather than the
   maximum number of bits.

This is lacking in the DATABASE format, but my format implies this info.
And, for a year or more, I've actually been treating the "distance" in k's
to get to the next power of two as a "gap" in the trial factoring data, and
thus most of it is just above powers of two. Note also that exponents with
known factors can be worked on by Prime95 again, after mersfacgmp or some
other factorer pushes the extent past the next power of two above a factor.
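[A minimal sketch of the "look"-style numeric lookup described above -
hypothetical Python, not Will's actual program or any GIMPS tool. It binary
searches by seeking within a file whose lines are sorted ascending by a
leading integer field, instead of reading the whole file; the demo file uses
the "p,k1,k2,..." record layout discussed earlier (a bare p means no known
factors).]

```python
import io
import tempfile

def look_numeric(path, target):
    """Binary-search a text file whose lines are sorted ascending by a
    leading integer field; return the matching line (sans newline) or None."""
    with open(path, "rb") as f:
        f.seek(0, io.SEEK_END)
        lo, hi = 0, f.tell()
        # Invariant: if the target's line exists, it starts in [lo, hi).
        while hi - lo > 1:
            mid = (lo + hi) // 2          # always >= 1
            f.seek(mid - 1)
            f.readline()                  # advance to the next line start
            line = f.readline()
            if line and int(line.split(b",")[0]) <= target:
                lo = mid
            else:
                hi = mid
        # If present, the target's line starts exactly at offset lo.
        f.seek(max(lo - 1, 0))
        if lo:
            f.readline()
        line = f.readline()
        if line and int(line.split(b",")[0]) == target:
            return line.decode().rstrip("\n")
        return None

# Demo: a tiny sorted file in the "p,k1,k2,..." style (bare p = no factors).
demo = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
demo.write("11,1,4\n23,1\n29,4\n31\n")
demo.close()
print(look_numeric(demo.name, 23))   # -> 23,1
print(look_numeric(demo.name, 12))   # -> None
```

[Each probe seeks to a byte offset, skips the partial line, and reads the
next full line, so lookup cost is O(log(filesize)) line reads - the same
trick "look" uses for dictionaries, but comparing numerically.]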
   6) PrimeNet trial factoring has moved well into the 10 millions;
   however, George's factor.zip file contains data only up to 10
   million.

Yup. This also means that my factors and his for exponents above 10 million
have not been compared for some time.

   I know there is a problem with uploading the large files concerned;
   hopefully the suggestions above will help to reduce the size of the
   file sufficiently to make it feasible to expand the exponent range
   to cover (at least) the active PrimeNet assignments, for long
   enough for cheap, high-speed Internet connectivity to become
   widespread.

Or perhaps George should simply shift to the factors of the exponents of
primary interest to GIMPS - say, from the smallest not-yet-double-checked
exponent up - rather than the smallest exponents.

   7) I have several hundred megabytes of disk space available on my
   ftp server, which has reasonable Internet access - at least 8 Mbps
   connectivity to the Internet core - and would be happy to provide
   means for anyone interested to upload factoring data (or anything
   else strictly relevant to Mersenne or closely-related numbers) for
   the purpose of making it publicly available.

Thanks; that's plenty of space, for all the prime exponent data at least, in
whatever format we decide on.

Will

------------------------------

End of Mersenne Digest V1 #633
******************************
