Mersenne Digest        Wednesday, May 24 2000        Volume 01 : Number 739

----------------------------------------------------------------------

Date: Mon, 22 May 2000 09:45:32 -0400
From: Sandy Harris <[EMAIL PROTECTED]>
Subject: Mersenne: [Fwd: P1363: Primality prover]

Is this of use here?

- -------- Original Message --------
Subject: P1363: Primality prover
Date: Mon, 22 May 2000 06:27:24 +0200
From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

- ---------------------------------------------------------------------
This is a stds-p1363 broadcast. See the IEEE P1363 web page
(http://grouper.ieee.org/groups/1363/) for more information.
For list info, see http://grouper.ieee.org/groups/1363/maillist.html
- ---------------------------------------------------------------------

A beta release of CERTIFIX (a primality proving program I am writing)
is available. It is based on the Goldwasser, Kilian & Atkin algorithm.

CERTIFIX is an executable for Win95, Win98, and NT (Intel-compatible
hardware). It is freeware. Currently, it can certify a 1024-bit
integer in less than 10 minutes (AMD K6-2/450 processor).

Download link: http://www.znz.freesurf.fr/files/certifix.zip (300 Kb)

The package contains the following 5 files:

  certifix.exe
  certifix.hlp
  readme.txt
  todo.txt
  changes.txt

There is no "install" program. Just copy the files into the folder you
want and click on CERTIFIX.EXE.

Marcel Martin

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Mon, 22 May 2000 13:43:02 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: reliability of double-checking

Stefan Struiker wrote (re. the reliability of double-checking):

> How many DDs does it take to "reasonably establish" primality, given the
> apparent slight error in the L-L test? Is there an extra-precision
> version of the test to nail things down?
A few years ago, you would likely have heard quite a few people say
that such a result could only be acceptable if confirmed via an
all-integer test, i.e. one devoid of roundoff error. Now that the
reliability of carefully written floating-point-based codes for
primality testing has been established to most everyone's
satisfaction, the currently accepted standard is two independent tests
(floating-point or not) run on separate hardware, preferably using
different programs.

George has pretty much obviated the latter criterion by using a
binary-shifted start value for the LL test. This means that even
running the same code on the same exponent and the same hardware twice
in a row will involve completely different FFT vectors (though ones
whose digits have similar magnitudes) p-1 out of p times. In case one
really needs to convince the remaining critics (e.g. if the first test
indicates primality), one can simply make sure the double-check is
done using an independent program.

Nathan Russell wrote:

> IIRC, there is no real difference in the double-check and the
> first-time test.

On average, 1 in p times there is not, for the reasons I cited above.
Note that this also applies to cases where Prime95 was used for one
run and a different code for the other.

An interesting sidelight to the all-integer vs. floating-point debate:
all-integer is generally thought to be more reliable because it
suffers no roundoff (RO) errors, but it has become quite clear that RO
errors are not the major culprit in erroneous runs, and moreover can
be detected quite reliably. It's the other system components
(especially on PC-class hardware, where profit margins are thin to
nonexistent and corners must be cut) that more often cause trouble:
non-ECC memory, CPU overheating, unexpected (well, perhaps not under
Windows :) interactions between various applications, actual flaws in
the CPU or motherboard, etc. All of these can ruin a calculation,
whether floating-point or all-integer.
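[Moderator's note: a minimal Python sketch of the shifted-start idea
described above -- this is illustrative, not George's actual Prime95
code, and the function name `ll_test` is made up here. If the LL
iteration is run on s*2^k instead of s, each squaring doubles the
shift count mod p and the "subtract 2" step must be shifted to match;
the final residue is zero exactly when the unshifted one is, so the
verdict is unchanged while every intermediate value differs.]

```python
def ll_test(p, shift=0):
    """Lucas-Lehmer test of M_p = 2**p - 1, starting from the usual
    s0 = 4 premultiplied by 2**shift (mod M_p).  The shift scrambles
    every intermediate residue but cannot change the final verdict,
    since 2**k is invertible mod the odd number M_p."""
    M = (1 << p) - 1
    k = shift % p
    s = (4 << k) % M              # shifted start value: 4 * 2^k mod M_p
    for _ in range(p - 2):
        # (s*2^k)^2 = s^2 * 2^(2k), and 2^(2k) == 2^(2k mod p) mod M_p,
        # so double the shift first, then subtract the shifted 2:
        k = (2 * k) % p
        s = (s * s - (2 << k)) % M
    return s == 0                 # s*2^k == 0 (mod M_p)  iff  s == 0


# Any two shifts agree on the verdict, e.g. for the prime M_13
# and the composite M_11:
same_prime = ll_test(13, 0) and ll_test(13, 7)
same_composite = (not ll_test(11, 0)) and (not ll_test(11, 4))
```

Two runs with different shifts thus act as semi-independent checks of
each other even on identical hardware and code, which is the point
Ernst makes above.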
The other thing that is often overlooked is that even an all-integer
run can suffer a non-hardware-related kind of error, namely of the
overflow variety. Unlike RO error, this can be difficult (or at the
very least expensive) to detect, especially for things like a modern
Mersenne-testing algorithm, which never explicitly forms the unmodded
square of the input number, i.e. does not permit a
casting-out-nines-style checksum. Some will say, "why not just enable
overflow trapping?", but most performance-oriented CPUs have no
hardware support for such trapping, because it is inimical to speed.
Instead, one must trap in software, using something like the following
(C-style) sequence:

    unsigned x, y, z;    /* x, y, z are register-length ints */
    int of_trap;
    z = x + y;           /* the sum may wrap around...       */
    of_trap = (z < x);   /* ...in which case we detect it via
                            an unsigned compare of z with x (or y) */

...which of course slows things down considerably.

Cheers,
- -Ernst

------------------------------

Date: Wed, 24 May 2000 12:10:37 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Mersenne: The recent popularity of Single-Checking

Hi all,

There is a user, "sd70045", who has almost 100 single-checking
assignments out on a single machine ID. These would take a
state-of-the-art box well over two years to finish. Additionally,
these assignments have almost identical figures for time to complete,
etc. The first exponent in this group is 8936071; the others are
directly below it.
Puzzled,
Nathan

________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com

------------------------------

End of Mersenne Digest V1 #739
******************************