Mersenne Digest      Wednesday, October 20 1999      Volume 01 : Number 648




----------------------------------------------------------------------

Date: Wed, 20 Oct 1999 00:00:07 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: splitting up 10m digit primes

On 19 Oct 99, at 15:05, Ken Kriesel wrote:

> Surely there are other programs.  There must be various factoring efforts
> running on non-Intel processors under non-Microsoft operating systems.

Indeed there are. There are several factoring programs in the mers 
suite, as well as Ernst Mayer's Mfactor program.

Actually there is a _big_ performance gain in using 64-bit processors 
(instead of the IA32 architecture) to trial factor up to 2^63 (or 
perhaps 2^64), particularly as the Alpha, Sun Ultra etc. have integer 
divide instructions which are much more efficient than Intel's rather 
weak effort (39 clocks ...).
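
For the curious, here is a minimal C sketch of such a trial-factoring 
loop, using a tiny demo exponent (M(11) = 23 * 89). Candidate factors 
of M(p) have the form q = 2kp+1 with q == +/-1 mod 8. Plain C limits 
this sketch to q < 2^32, since the squaring in the powering routine 
would otherwise overflow 64 bits - removing that limit cheaply is 
exactly where 64-bit integer multiply/divide hardware earns its keep. 
(Real programs also sieve the candidates and push q toward 2^60 or 
more; none of the names below come from any of the programs mentioned.)

#include <stdio.h>

/* Return 2^p mod q by left-to-right binary powering.  Keeping
   q < 2^32 ensures the squaring below cannot overflow 64 bits. */
static unsigned long long pow2_mod(unsigned long p, unsigned long long q)
{
    unsigned long long r = 1;
    int i;
    for (i = 31; i >= 0; i--) {
        r = (r * r) % q;             /* square ...            */
        if ((p >> i) & 1)
            r = (r * 2) % q;         /* ... and multiply by 2 */
    }
    return r;
}

int main(void)
{
    unsigned long p = 11;            /* demo exponent: M(11) = 23 * 89 */
    unsigned long long k, q;

    for (k = 1; k < 50; k++) {
        q = 2 * k * p + 1;           /* factors of M(p) have this form */
        if ((q & 7) != 1 && (q & 7) != 7)
            continue;                /* ... and are == +/-1 mod 8      */
        if (pow2_mod(p, q) == 1)     /* q divides 2^p - 1              */
            printf("M(%lu) has factor %llu\n", p, q);
    }
    return 0;
}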

My guess is that someone really needs to make these existing 
factoring programs easier to use in order to make them more popular.

Factoring programs need savefiles rather less than LL testers; 
perhaps they need their own savefile format, rather than trying to 
"overload" the LL savefile format with complications it doesn't 
really need.

> I took the 4-byte number present in your first recent post to be the
> version information of the program that generated the file, not the 
> file standard version.  After rereading Brian's comments and your reply
> to him, I think I get it; Brian's modification adds the program version
> levels, and the original had the standard's version.

That's exactly what I meant.

> (Feeling fuzzy from the flu, please pardon the increased noise level.)

Commiserations. Did you try the cure involving the hat and the bottle 
of whiskey? (Place the hat at the bottom of your bed & get into the 
bed. Drink from the bottle until a second hat appears. By the time 
you can get up again, the flu should have flown away...)

I think the conventional cure involving hot lemon drinks & 
paracetamol is probably more reliable, but a small shot of whiskey in 
the hot lemon drink seems to do me no harm 8*)


Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 00:00:07 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Milestone!

On 18 Oct 99, at 18:17, George Woltman wrote:

> Historically, poaching has been confined to the smallest *untested*
> exponents.  At present, there seems to be little to no poaching going
> on.

I had a double-check on exponent 1745789 "poached" in early September 
by a user called "rick" who appears to have submitted a lot of 
results in this range using systems described as PIIs. My assignment 
was a "legal" PrimeNet assignment running on an old Sun Sparc 2, it 
was about halfway through a 10-week run when it was "poached". The 
PrimeNet entry for this exponent at the time it was poached would 
have shown that it was over 80 days from expiring.

Since the system is too slow to do anything else, I let it finish, 
which it did earlier this month. It's now retired from GIMPS/PrimeNet 
-- it would have retired after one more assignment anyway, as the 
system would need more fixing than is justified to keep it running 
into Y2K.

Thanks, Rick. Perhaps you would care to get assignments from PrimeNet 
in future, rather than poaching other people's?

Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Tue, 19 Oct 1999 20:17:48 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: platform-independent savefile format (Was: ...)

Nick Craig-Wood <[EMAIL PROTECTED]> wrote:

>Some time ago I proposed a standard format for save files.  It also
>had the advantage that it only saved the actual bits used so would
>make the save files 1MByte rather than 1.8 MByte for an 8,000,000
>exponent in prime95.

Richard Crandall has defined such a standard format for Fermat number
residue files - this helped immensely with the Pepin test of F24. For
example, during the roughly 7 months of the test I maintained an archive
of Pepin residues for every iteration == 0 mod 4096, i.e. we wound up
with a total of around 2^24/2^12 = 2^12 such files, each about 2MB
in size. When Jason Papadopoulos' floating-point-based test finished
(a few days after mine) he just grabbed my final squaring file and diff-ed
it with the one produced by his code to verify that our final residues
were the same.

Similarly, at the same time our two machines were doing their respective
floating-point tests, Richard was overseeing a separate all-integer
check. Since the all-integer algorithms tend to run 5-10X slower on
similar hardware than the floating-FFT-based ones, the only way to
effect an all-integer verification was to parallelize it: a floating-
point run serves as a "wavefront" and deposits residues periodically,
while a number of "drone" machines running integer-based convolutions
each grab a pair of residue files - say drone Y gets files for squarings
x1 and x2, with x1 < x2. Drone Y then starts with the x1 residue, does
(x2-x1) modular all-integer squarings, then compares that result with
the contents of the floating-point-generated x2 residue file. In this
way, one can use multiple slow machines to check the results of one
fast machine in about the same time the fast machine needs for the
calculation.

Having a standard, platform-independent savefile format means that one
can write a high-level (e.g. C) code for the verification, compile it
on any needed platform, and not have to worry about converting savefile
formats. Crandall's Fermat-number format does a bytewise write of the
residue, i.e. one doesn't need to worry about whether a given machine
uses a big- or little-endian byte ordering for its integer data.
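
To make the wavefront/drone division of labor concrete, here is a
self-contained C toy using the small modulus M(31), so that a "residue"
is just a 64-bit number. (In the real setup the residues are
multi-megabyte files and the drones run an independent all-integer
implementation; here both sides share one squaring routine, so the
check is purely illustrative.)

#include <stdio.h>

#define P  31
#define MP ((1ULL << P) - 1)   /* toy modulus M(31) = 2^31 - 1 */

/* One modular squaring mod 2^P-1; the shift-and-add reduction uses
   the identity 2^P == 1, which is what makes Mersenne moduli cheap. */
static unsigned long long sqr_mod(unsigned long long x)
{
    unsigned long long y = x * x;       /* x < 2^31, so no overflow */
    y = (y & MP) + (y >> P);
    y = (y & MP) + (y >> P);
    if (y == MP) y = 0;
    return y;
}

int main(void)
{
    enum { STEPS = 4096, SAVE = 512 };
    unsigned long long res[STEPS / SAVE + 1];
    unsigned long long x = 3;
    int i, k;

    /* "wavefront": run the full squaring chain, depositing a residue
       every SAVE iterations (on disk in real life; in memory here) */
    res[0] = x;
    for (i = 1; i <= STEPS; i++) {
        x = sqr_mod(x);
        if (i % SAVE == 0)
            res[i / SAVE] = x;
    }

    /* "drones": each independently recomputes one segment and checks
       its endpoint against the wavefront's deposited residue */
    for (k = 0; k < STEPS / SAVE; k++) {
        unsigned long long y = res[k];
        for (i = 0; i < SAVE; i++)
            y = sqr_mod(y);
        printf("segment %d: %s\n", k, y == res[k + 1] ? "ok" : "MISMATCH");
    }
    return 0;
}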

>At the moment I think both Prime95 and the unix programs write the floating
>point values of the FFT array to a file which is inherently non-portable
>across machine architectures.  It also makes the save files quite a bit
>larger than necessary (an important consideration if we ever want to hold a
>central repository of these files).

Not true - all of the fastest codes (Prime95, Mlucas and MacLucasUNIX) write
residue digits in 4-byte integer form to one or more savefiles. Only MLU
writes a floating-point version of the residue, in addition to the integer.

The two easiest ways to make a platform-independent savefile format are
(i) bytewise, as above, or (ii) in 4-byte integer form, with a preset
endian convention. The latter might be faster to read and write, if
e.g. a machine that uses a big-endian register storage convention can
still read/write little-endian integers from/to disk as fast as it can
big-endian ones. Or one can do bytewise I/O (which always works in C -
my f90-based Fermat number code used a small C module for this) and
simply buffer the reads and writes into chunks of the same size used by
the target platform for disk I/O (4096-byte blocks are common, but one
could define the platform-dependent blocksize as a parameter).
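
A minimal sketch of option (i) in C (the function name and signature
are mine, not taken from any of the programs mentioned). Note that
putc operates on stdio's user-space buffer, so the byte-at-a-time loop
does not imply one system call per byte:

#include <stdio.h>

/* Write an array of 4-byte residue digits least-significant byte
   first, regardless of the host's native byte order. */
int write_residue_bytewise(FILE *fp, const unsigned int *digit, long n)
{
    long i;
    for (i = 0; i < n; i++) {
        unsigned int x = digit[i];
        if (putc( x        & 0xFF, fp) == EOF) return -1;
        if (putc((x >>  8) & 0xFF, fp) == EOF) return -1;
        if (putc((x >> 16) & 0xFF, fp) == EOF) return -1;
        if (putc((x >> 24) & 0xFF, fp) == EOF) return -1;
    }
    return 0;
}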

I discussed the idea of a platform-independent savefile format with George
and Scott several times during the past year, in conjunction with the idea
of central (e.g. PrimeNet) storage of user residues at periodic intervals;
Scott's major objection was that he didn't think the amount of work lost
due to hard disk problems, accidental savefile deletion or user dropout
justifies such an effort. On the other hand, he admitted he didn't really
have a good estimate of the incidence of the above events (except possibly
the latter item). My main point was that as exponents get larger, the
likelihood of the above occurrences will increase, but one still
needs a reasonable estimate of how frequently they occur now. The best
prospect for doing this would seem to be some kind of e-mail poll of the
current users, and that needs a decent response rate to be significant.
(And human memories are notoriously unreliable.)

A less intrusive, logistically simpler method might be to simply look at
the percentage of exponents that are reassigned by PrimeNet after their
time limit expires. One might assume that these were on average
20% completed, so if 5% of exponents get reassigned, one estimates that
1% of work could be saved if the savefiles for said exponents could be
reassigned, as well.

Interestingly, Scott seemed to think the bandwidth needed for storing
and managing user savefiles was less of a problem than justifying the
need for such a strategy in the first place. (Scott, correct me if I'm wrong
in my recollection about this.) Obviously not all users
will have sufficiently fast Internet connections to allow them to
participate in such a scheme, but it's not something that should be
mandatory in any event.

-Ernst

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Tue, 19 Oct 1999 20:17:49 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Mlucas for PowerPC?

Do any of the Mac users out there have Absoft Pro Fortran for PowerPC?
(www.absoft.com/pro.mac.html.) If so, would you be willing to try compiling
the Mlucas v2.7 source (ftp://209.133.33.208/pub/mayer/Mlucas_2.7y.f90.gz)
and doing some timings on the PPC? If so, please e-mail me and I can advise
you as to what compile flags to try and how to also set up MacLucasUNIX.c
to make comparative timings simple.

Cheers,
Ernst

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Tue, 19 Oct 1999 20:17:45 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: My prime is bigger than yours (Was: something else)

Lucas Wiman <[EMAIL PROTECTED]> wrote:

>Ok, it seems doubtful that we will ever be able to find a test more efficient
>than the LL test.  The LL test involves p multiplications mod Mp, which is
>pretty impressive.

I think we need to look beyond today's technology before attempting to
predict such things. For example, consider quantum computation (QC).
When this becomes a reality (note I don't say "if" - that's also a
conjecture), it will revolutionize certain algorithms (e.g. factoring
and list searching) which involve correlating nonlocal data. With
current "classical" computational models such tasks take longer as the
number to be factored (or list to be searched) gets longer, because
classical computers can perform only a limited (and generally quite
small) number of correlations (a.k.a. machine operations) per clock
cycle - increasing the clock frequency helps reduce the runtime, but
does not reduce the operation count. To reduce the latter we need
improved algorithms or (as with quantum computing) a fundamentally
different way of manipulating the data in question.

The seemingly magical aspect of QC is due to a fundamental property of
quantum-mechanical wavefunctions, namely nonlocality - the wavefunction
amplitude of, say, an electron or nuclear spin state may decrease
exponentially with distance from the "particle" in question, but it
only truly reaches zero at infinity. This is responsible for the
quantum "spookiness" apparent in famous experiments such as the
double-slit or two-photon "teleportation" experiments. Nonlocality in
itself is spooky - not so much in the sense of something pervading all
of space as in the sense that even though the amplitude of something
becomes arbitrarily small, its influence need not: two photons prepared
in a correlated state remain correlated until the wavefunction of one
of them is "measured," at which time we can say with complete certainty
what the wavefunction of the other was, no matter how far they have
become separated in the interim. But if one accepts nonlocality as a
fact (and an ever-increasing number of experiments support just that),
then one begins to understand the computational possibilities. For
example, if some algorithm involves correlating N bits in a given
fashion, a quantum computer can do this simultaneously for all the bits
in question, because all of their wavefunctions overlap.

One is limited only by (i) the number of quantum bits (qubits) one can
reliably manipulate in the desired way, and (ii) the "clock cycle,"
i.e. the length of time it takes for an individual qubit to flip from
one state to another. The first of these is currently the more daunting
due to the phenomenon of decoherence, which is what happens when a
qubit's wavefunction interacts with the noisy real world, but
researchers are coming up with all sorts of schemes to deal with this
problem, which only a few years ago seemed intractable. Whenever
something is shown to be physically possible and promises such a huge
gain, human ingenuity seems to find a way to make it a reality.

The Mersenne-related point is that QC might someday make nonfactorial
primality testing (e.g. Lucas-Lehmer for Mersennes) obsolete, because
it would be quicker to directly search for all factors <= sqrt(number).

>However, I agree with your prediction because there
>are much more targeted types of numbers with similar running times (e.g.
>Proth numbers).  As Chris Nash pointed out, a Proth test would have taken
>1/4th the time of the LL test on the Mersenne found in June on a number of
>precisely 1,000,000 digits.

Not true - that assumes one can test a non-Mersenne and a Mersenne
number of similar size in about equal time, whereas in reality a
Lucas-Lehmer test of a p-bit Mersenne number (or Pepin test of a p-bit
Fermat number) will run several times faster than a test of a p-bit
number of any other form (Proth numbers included), due to the fact
that the former admit of a discrete-weighted-transform (DWT) based
arithmetic, which eliminates the need both for zero-padding of the
vector to be squared and for explicit modding at the end of each
squaring step. In practice this appears to allow a number of
given size to be modularly squared about 3-5 times faster than sans DWT.
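
To see why the mod comes for free, here is a toy all-integer version:
a length-4 cyclic convolution of base-256 digit vectors computes a
product mod 2^32-1 directly, with no zero-padding and no division.
(2^32-1 is not a Mersenne prime, but the identity B^N == 1 doing the
work here is the same one the DWT exploits - via floating-point FFTs
and irrational-base weights - for M(p).)

#include <stdio.h>

#define N 4              /* number of digits       */
#define B 256ULL         /* digit base, 2^8        */
#define M 0xFFFFFFFFULL  /* modulus B^N-1 = 2^32-1 */

int main(void)
{
    unsigned long long x = 0xDEADBEEFULL, y = 0x12345678ULL;
    unsigned long long a[N], b[N], c[N] = {0}, carry = 0, r = 0;
    int i, j;

    /* split x and y into base-B digit vectors */
    for (i = 0; i < N; i++) {
        a[i] = (x >> (8 * i)) & 0xFF;
        b[i] = (y >> (8 * i)) & 0xFF;
    }

    /* CYCLIC convolution: the index wraps mod N, which silently
       applies B^N == 1, i.e. the reduction mod B^N - 1 is free.
       A general modulus would need a length-2N (zero-padded)
       product followed by an explicit division. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            c[(i + j) % N] += a[i] * b[j];

    /* carry propagation; the carry out of the top digit wraps to 0 */
    do {
        c[0] += carry;
        carry = 0;
        for (i = 0; i < N; i++) {
            c[i] += carry;
            carry = c[i] / B;
            c[i] %= B;
        }
    } while (carry);

    for (i = N - 1; i >= 0; i--)
        r = r * B + c[i];

    printf("convolution: %llu\n", r);
    printf("direct:      %llu\n", (x * y) % M);
    return 0;
}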

There are two further reasons why non-DWT classes of numbers will be less
likely to yield a world-record-sized prime:

(1) volunteers for distributed computing efforts will always tend to find
    the prospect of finding a world-record-size something more interesting
    than something arcane like 'the largest known irregular Huffergnu"ten
    prime octuplet,' i.e. GIMPS will attract more compute cycles because
    it holds the current prime record, and by a whopping margin; and

(2) among the Proths, the fact that there are so many possible kinds of
    candidates dilutes the effort, i.e. makes it vastly more time-consuming
    to test all the candidates up to any reasonably-sized threshold.

For these reasons (and for the fact that among the only other class of
numbers known to admit DWT arithmetic, the Fermats, there are vastly fewer
primality candidates than among the Mersennes), I predict that a Mersenne
prime will hold the record for a long time to come.

Cheers,
-Ernst

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Tue, 19 Oct 1999 20:19:30 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Islands of... Noll?

<<If Noll's Island Theorem/Theory/Hypothesis is true then there are two
Mersenne primes in this region.
Therefore, it is M40 below (???!!!???) or M41 (even more ! & ?) if
M(6896873) is the `upper' (north?) part of the island.
Let's find them all -:) !!!!>>

I've never heard of a rigorous definition of the Noll Island Hypothesis.  Not 
only that, explanations of the "islands" invariably include several 
exceptions, too many for my taste.  Personally, I don't think that Mersenne 
primes obey any special "clumping" principle, other than a statistical 
distribution that leads humans to see clumps.

S.T.L.
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Tue, 19 Oct 1999 20:46:22 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Milestone!

> I had a double-check on exponent 1745789 "poached" in early September
> by a user called "rick" who appears to have submitted a lot of
> results in this range using systems described as PIIs. My assignment
> was a "legal" PrimeNet assignment running on an old Sun Sparc 2, it
> was about halfway through a 10-week run when it was "poached". The
> PrimeNet entry for this exponent at the time it was poached would
> have shown that it was over 80 days from expiring.
<...>
> Thanks, Rick. Perhaps you would care to get assignments from PrimeNet
> in future, rather than poaching other people's?

Whew!  Thank goodness that wasn't one I did!  Honestly, I feel responsible
for starting this poaching nonsense, but hey, I only took exponents that
still had 150+ days on average to go.  Some that would have taken another
year or even 2 years!  On the other hand, if it looked like it had checked
in recently, I didn't bother.

In fact, I do believe that all of those sorts of exponents are "worked out"
of Primenet now, at least they were last I checked.  Seems that Scott has it
set now to expire exponents if they haven't checked in for 60 days,
regardless of when the due date was.

There were actually very few exponents that had been checked out prior to
this rule that had extremely long completion times, so my poaching days are
over. :-) (the crowd applauds)

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 00:39:47 -0400 (EDT)
From: Lucas Wiman  <[EMAIL PROTECTED]>
Subject: Re: Mersenne: My prime is bigger than yours (Was: something else)

> The Mersenne-related point is that QC might someday make nonfactorial
> primality testing (e.g. LUcas-Lehmer for Mersennes) obsolete because it
> would be quicker to directly search for all factors <= sqrt(number).

After a certain point, the LL test would become profitable again, if I
understand correctly.  That crossover point is 2^31-1 for Mersenne
numbers (it would certainly be possible to prove the next Mersenne
prime by trial division, though the LL test would be *much* faster).
Quantum computers might raise this crossover point into the billions
of digits, but I believe that trial division would still be O(sqrt(n))
(with a very small constant, but it still increases with the sqrt),
whereas the LL test is O(lg(n)) (it would also be sped up by quantum
computing).  Unless I'm missing something about quantum computing (and
I probably am).

(Please forgive any typos; this telnet app doesn't allow backspace
characters on my shell account.  Does anyone know of a good telnet app
for Windows?  I would think that it wouldn't be hard to make one, but
apparently it is.)

> of each squaring step. In practice this appears to allow a number of
> given size to be modularly squared about 3-5 times faster than sans DWT.

True.  That would mean the Mersenne exponent would have to be sqrt(3)
to sqrt(5) times bigger than the Proth exponent.  This is one more
reason why the prize money is not a great idea.  It just encourages
more of the same type of searches, rather than innovation in math as
was intended.  Anyone who would invent a method of prime production
probably would have done it anyway (regardless of the EFF prize), and
I think that the power of distributed computing has been demonstrated.

Lucas
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 10:53:56 +0200
From: Preda Mihailescu <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: The Mysterious Ways of Mersenne primes

> 
> On Tue, Oct 19, 1999 at 01:55:32AM +0300, Jukka Tapani Santala wrote:
> >you can compare this to something like the value of Pi, relation of the
> >radius to the circumference, which human mind tries to instantly
> >rationalize as a "real" number, demonstrated clearly by the actual attempt
> >to _legalize_ Pi as 3. In fact, ask anyone what Pi is, and majority of
> >them will instantly reply to you "3.14".
> 
> I'd say 3.1415926535897932, just to confuse people :-) (Yes, I do remember
> those decimals. Why? For fun.)
> 
> A friend of mine, being really tired at a LAN-party, decided we should get
> away with our 10-system, and start using a phi-system instead. Sounds good
> in theory, but then you suddenly can't get to know exactly how many fingers
> you've got either?
> 
> >The square root of two is another good
> >example of how "irrational" and counter-intuitive mathematics really is.
> 

I would dare say that restricting one's intuition to what some superstition
tells you to accept as intuitive is much more counter-intuitive than
\sqrt{2}. Let me put it another way. The large Mersenne primes which are
tested or proved on GIMPS, with over 1000000 digits: just how intuitive are
they? No one could call out their full decimal name in a lifetime, although
it is finite in length. You have to allow for formulae and symbols, and then
you can call their name based upon ``intuitive'' (i.e. integer, and
_reasonably small_ :-) numbers: you name them 2^p-1. What else is \sqrt{2}
than x^2 - 2 = 0 and x > 0? Isn't that intuitive enough? What else is pi
than 1+1/3+1/5+1/7+1/9 and so on ...? One should not be afraid of not being
able to understand what in fact one can! I have to wash my glasses every
couple of days.


Friendly,

Preda
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 19:24:10 +1000
From: Simon Burge <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: The Mysterious Ways of Mersenne primes 

"Steinar H. Gunderson" wrote:

> On Tue, Oct 19, 1999 at 01:55:32AM +0300, Jukka Tapani Santala wrote:
> >you can compare this to something like the value of Pi, relation of the
> >radius to the circumference, which human mind tries to instantly
> >rationalize as a "real" number, demonstrated clearly by the actual attempt
> >to _legalize_ Pi as 3. In fact, ask anyone what Pi is, and majority of
> >them will instantly reply to you "3.14".
> 
> I'd say 3.1415926535897932, just to confuse people :-) (Yes, I do remember
> those decimals. Why? For fun.)

Just to continue the off-topic discussion a little, I'd guess that a lot
of people would say "22 over 7" as to what Pi is.

I've always been impressed with the approximation of 355/113 - accurate
to six decimal places and easy to remember (two 3's, two 5's and two
1's)...

Simon.
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 12:19:06 +0000
From: "Steinar H . Gunderson" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: The Mysterious Ways of Mersenne primes

On Wed, Oct 20, 1999 at 10:53:56AM +0200, Preda Mihailescu wrote:
>What else is pi than
>1+1/3+1/5+1/7+1/9 and so on ...?

Ermmm. Isn't that value closer to 2, actually? :-) 

I thought (without checking) that the value of pi was something like:

4/1+4/3-4/5+4/7-4/9+4/11-4/13...

At least it involved 4 in some way, and alternate pluses and minuses.
Anybody with a little more experience, please step in and clarify :-)

/* Steinar */
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 12:09:22 +0100
From: "Ian L McLoughlin" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: The Mysterious Ways of Mersenne primes 

The value of pi is given by the infinite series:

pi = 4 (1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 + 1/17 ...)
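
For anyone who wants to watch it converge (very slowly), a few lines
of C:

#include <stdio.h>

/* Sum 4*(1 - 1/3 + 1/5 - 1/7 + ...), the Gregory-Leibniz series.
   Convergence is painful: ~n terms give only ~log10(n) good digits. */
int main(void)
{
    double sum = 0.0, sign = 1.0;
    long k, n = 1000000;
    for (k = 0; k < n; k++) {
        sum += sign / (2 * k + 1);
        sign = -sign;
    }
    printf("pi ~= %.10f after %ld terms\n", 4.0 * sum, n);
    return 0;
}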

Hope this helps..

Regards,
Ian McLoughlin, Chematek U.K.

Tel/Fax : +44(0)1904 679906
Mobile   : +44(0)7801 823421
Website: www.chematekuk.co.uk
----- Original Message -----
From: Simon Burge <[EMAIL PROTECTED]>
To: Steinar H. Gunderson <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: 20 October 1999 10:24
Subject: Re: Mersenne: Re: The Mysterious Ways of Mersenne primes


> "Steinar H. Gunderson" wrote:
>
> > On Tue, Oct 19, 1999 at 01:55:32AM +0300, Jukka Tapani Santala wrote:
> > >you can compare this to something like the value of Pi, relation of the
> > >radius to the circumference, which human mind tries to instantly
> > >rationalize as a "real" number, demonstrated clearly by the actual attempt
> > >to _legalize_ Pi as 3. In fact, ask anyone what Pi is, and majority of
> > >them will instantly reply to you "3.14".
> >
> > I'd say 3.1415926535897932, just to confuse people :-) (Yes, I do remember
> > those decimals. Why? For fun.)
>
> Just to continue the off-topic discussion a little, I'd guess that a lot
> of people would say "22 over 7" as to what Pi is.
>
> I've always been impressed with the approximation of 355/113 - accurate
> to six decimal places and easy to remember (two 3's, two 5's and two
> 1's)...
>
> Simon.
> _________________________________________________________________
> Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
> Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
>

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 08:21:33 PDT
From: "david campeau" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Milestone!

>I had a double-check on exponent 1745789 "poached" in early September
>by a user called "rick" who appears to have submitted a lot of
>results in this range using systems described as PIIs. My assignment
>was a "legal" PrimeNet assignment running on an old Sun Sparc 2, it
>was about halfway through a 10-week run when it was "poached". The
>PrimeNet entry for this exponent at the time it was poached would
>have shown that it was over 80 days from expiring.
><...>
>Thanks, Rick. Perhaps you would care to get assignments from PrimeNet
>in future, rather than poaching other people's?
>

Perhaps there should be a warning not to poach exponents at the top of the 
"Test status, assignment report". I feel I am to blame, because it must look 
to the poachers like all those small exponents are reassigned to the same 
guy. If they look closely at my assignments they will see that 99% of them 
were assigned at ~6:03 PST - that's when PrimeNet releases overdue 
exponents back into the available pool.
And with the 90-day rule there is no reason to poach.

What, you're not up at that hour? Well, here is my secret (only usable on 
an NT system with 24-hour access to the Net).

Make two CMD files:

part1.cmd :
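rem Set the live worktodo.ini aside as worktodo.tmp, then install the
rem prepared start.ini in its place so Prime95 will request fresh work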
d:
cd Mersenne
ren worktodo.ini *.tmp
copy start.ini worktodo.ini

part2.cmd :
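rem Append the new worktodo.ini (the stub plus any freshly assigned
rem exponents) to the parked list, then restore the merged file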
d:
cd Mersenne
type worktodo.ini >> worktodo.tmp
del worktodo.ini
ren worktodo.tmp *.ini

Modify your prime.ini (look in the documentation for the Time variable, 
because it's quite complex):
because it's quite complex):
DaysOfWork=20
Time=1-7/0:00-2:00,1-7/2:04-2:15,1-7/2:20-24:00

Start part1.cmd with the AT command at 6:03 PST.
Start part2.cmd with the AT command at 6:18 PST.

And now the hard and really important part:

Monitor your Account report EVERY DAY. (I can't stress this enough, because 
EVERY DAY 4-5 new exponents are going to be assigned to you; if you are only 
able to monitor your account on the weekend, modify the part* schedule and 
Time setting to fit your needs.) Keep ONLY 60 days of work for your 
computer, use the release form on PrimeNet to release high exponents that 
you can't complete in 60 days, and reorder your assignments so that the 
smallest exponents are finished first.

I've been doing this for about 6 months, and let me tell you that it's 
quite tedious - there is NEVER a day off :)

regards

David Campeau a.k.a "DiamondDave"

P.S.: This can be used for any type of assignment! There is no reason why it 
should be used ONLY for double-checking (modify your AT command and 
prime.ini accordingly).

P.P.S.: If you plan on doing this, DON'T quit in the middle without removing 
your part1 & part2 from the scheduled tasks.

______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 14:20:27 -0400 (EDT)
From: Darxus <[EMAIL PROTECTED]>
Subject: Mersenne: graphing P of 2^P-1

Okay,
http://www.utm.edu/research/primes/notes/faq/NextMersenne.html 
says that "log2(log2(Mn)) versus n" is a line.  

I just loaded Excel, put P for the first 37 Mersenne primes in column 1,
and in column 2 I put the formula "=LOG(A2, 2)" (where the 2 in A2 is
the row number), which gives the log base 2 of the cell in column 1 on
the same row.  I graphed it and got a line.

I then thought "wait, it's supposed to be log2(log2(Mn)).. I guess
that'll be a straighter line", so I changed the formula in column 2 to
"=LOG(LOG(A2, 2), 2)" -- log base 2 of log base 2.  That gave me a
curve.

Did I misunderstand?  Is Mn on that page actually referring to the
Mersenne prime number n, and not P for the Mersenne prime number n?
And that log base 2 of the exponent of the Mersenne prime number n
should be a line?

__________________________________________________________________
PGP fingerprint = 03 5B 9B A0 16 33 91 2F  A5 77 BC EE 43 71 98 D4
            [EMAIL PROTECTED] / http://www.op.net/~darxus
          Join the Great Internet Mersenne Prime Search
                http://www.mersenne.org/prime.htm


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 16:13:52 -0400
From: Jud McCranie <[EMAIL PROTECTED]>
Subject: Re: Mersenne: graphing P of 2^P-1

At 02:20 PM 10/20/99 -0400, Darxus wrote:

>Did I misunderstand?  Is Mn on that page actually referring to the
>Mersenne prime number n, and not P for the Mersenne prime number n?

Yes.  Mn is the prime number, not the exponent.  Take the log of the 
exponent or the log of the log of the actual number - since log2(Mn) = 
log2(2^P - 1) is essentially P, the two come to the same thing.

>And that log base 2 of the exponent of the Mersenne prime number n
>should be a line?

Any base of logarithm will give a straight line (approximately), since 
changing the base only rescales it by a constant factor.
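
If anyone wants to reproduce this outside Excel, here is a short C
program (the exponent list is from the published tables); the last
column, log2(P), is the quantity that should fall roughly on a
straight line against n:

#include <stdio.h>
#include <math.h>

/* Exponents P of the first 37 known Mersenne primes (as of 1999).
   Since log2(M(P)) is essentially P, printing log2(P) against n
   reproduces the "log2(log2(Mn)) versus n" plot. */
int main(void)
{
    static const double p[] = {
        2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607,
        1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213,
        19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091,
        756839, 859433, 1257787, 1398269, 2976221, 3021377 };
    int n;
    for (n = 0; n < 37; n++)
        printf("%2d  %8.0f  %6.3f\n", n + 1, p[n], log(p[n]) / log(2.0));
    return 0;
}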


+---------------------------------------------------------+
|     Jud McCranie                                        |
|                                                         |
| Programming Achieved with Structure, Clarity, And Logic |
+---------------------------------------------------------+


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 16:24:10 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: Small exponents available

George Woltman wrote: 

>   I went through my log files and have requeued as a first-time LL
>test any exponent that had "serious" errors on the first LL test.  A serious
>error is a "SUM(INPUTS) != SUM(OUTPUTS)" or "ROUNDOFF > 0.4".  I'd guess
>that the first test has very roughly a 50% chance of being correct.

One can actually arrive at a quantitative estimate of the probability of
the result being incorrect - such a method is described in the manuscript
www.perfsci.com/free/techpapers/F24.ps. Briefly, if one models the rounding
errors in a floating-point convolution (DWT-based or not) using random-walk
statistics, one can easily derive a relation that predicts the maximum
allowable number of bits per residue digit (there is an undetermined
constant which can be fixed by using empirical error data, but it's
around 1 if IEEE 64-bit floating-point arithmetic is used). In addition,
the model allows one to answer such questions as "if the average error
(fractional part, defined as frac(x) = |x-nint(x)| ) in an output digit
is E << 1, what is the probability that there was an error >= 0.5?"

Note that the above estimate requires knowledge of the AVERAGE error, not
the maximum fractional part (call it fracmax), the latter being the "> 0.4"
George mentions. But it's easy to add a few lines to one's carry propagation
routine to calculate E. The other nice property of the average error is that
(unlike fracmax) for a Mersenne number M(p), E levels off after only about
log2(p) iterations and varies very little after that. In statspeak, measuring
E corresponds to sampling the fat part of the normal distribution, whereas
measuring fracmax corresponds to sampling the tails, which is inherently
less reliable in terms of, e.g. fixing FFT length exponent breakpoints.
(Although one should still calculate fracmax every iteration during an
actual run to be safe.)
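
For concreteness, here is roughly what those "few lines" might look
like inside the rounding/carry pass - a sketch with invented names,
not code from any of the actual programs:

#include <math.h>

/* After the inverse FFT, a[0..n-1] holds the convolution outputs,
   which should be near-integers.  Track both the average fractional
   error E and the maximum, fracmax, while rounding. */
void round_and_measure(double *a, long n, double *E, double *fracmax)
{
    double esum = 0.0, fmax = 0.0, r, frac;
    long i;
    for (i = 0; i < n; i++) {
        r    = rint(a[i]);            /* nearest integer, nint(a[i])  */
        frac = fabs(a[i] - r);        /* fractional part of the error */
        esum += frac;
        if (frac > fmax) fmax = frac;
        a[i] = r;                     /* carry propagation follows    */
    }
    *E = esum / n;
    *fracmax = fmax;
}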

The one caveat of using a normal-distribution model is that the asymptotic
estimate of the probability of a fatal error used in the aforementioned
paper requires E << 1; if E is, say, greater than around 0.05, one might
need to include more terms in the asymptotic series for the complementary
error function. Since rounding errors can take only discrete values (and
the number of these discrete values becomes very small for fractional
errors near 0.5), the tails of such distributions may no longer be well-
approximated by a Gaussian. But it's still better than mere handwaving.

Cheers,
-Ernst

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 20 Oct 1999 21:37:38 +0100
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: platform-independent savefile format (Was: ...)

On 19 Oct 99, at 20:17, [EMAIL PROTECTED] wrote:

> [... snip ...]
> Since the all-integer algorithms tend to run 5-10X
> slower on similar hardware as the floating-FFT-based ones, the only way
> to effect an all-integer verification was to parallelize it: a floating-
> point run serves as a "wavefront" and deposits residues periodically, and a
> number of "drone" machines running integer-based convolutions, with each
> grabbing a pair of residue files, say drone Y gets files for squarings x1
> and x2, with x1 < x2. Drone Y then starts with the x1 residue, does (x2-x1)
> modular all-integer squarings, then compares that result with the contents
> of the floating-point generated x2 residue file. In this way, one can use
> multiple slow machines to check the results of one fast machine in about
> the same time the fast machine needs for the calculation.

This is an excellent justification for the idea of keeping savefiles. 
Perhaps there's still useful work for slow machines to do?

> Having a standard,
> platform-independent savefile format means that one can write a high-level
> (e.g. C) code for the verification, compile it on any needed platform, and
> not have to worry about converting savefile formats.

The burden of converting from least-significant-byte-first to most-
significant-byte-first format is hardly significant, given that it 
needs to be done rarely - only when a savefile written by a system 
with one convention is picked up by a system using the other.

> Crandall's Fermat-number
> format does a bytewise write of the residue, i.e. one doesn't need to worry
> about whether a given machine uses a big or little-endian scheme for storing
> individual bytes of integer data.

Having made an idiot of myself on the subject of integer divide 
instructions (thanks to Peter Montgomery for correcting me!), I hate 
to sound dogmatic about this. But I can write an entire array of any 
size with one call from a high-level language, whereas writing it 
byte by byte takes as many calls as there are bytes in the array. 
Given that each system call is likely to involve a processor context 
save & restore and a switch from user to kernel privilege & back 
again, I rather suspect that the single-call method may be more 
efficient. The problem with the single call is that the data will be 
written in the processor's native format.

It's true that the physical I/O performance will probably govern the 
time needed to read or write a savefile, irrespective of any extra 
processing. The savefile write (for security) is probably going to be 
the commonest operation. We don't necessarily need to wait for the 
savefile write to complete; we could make a copy of the memory 
buffer & carry on crunching numbers using the spare CPU cycles 
available whilst the I/O subsystem is busy writing.

Unless I'm completely up the garden path (again) I think it's going 
to be worthwhile putting up with the "complication" of having a byte 
order flag.
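
For what it's worth, here is a sketch of the single-call approach with
such a flag (it assumes 4-byte unsigned ints; the magic value and file
layout are merely illustrative):

#include <stdio.h>

#define MAGIC 0x01020304U   /* byte-order flag word */

/* Write the whole residue array in native byte order with one call,
   preceded by the magic word. */
int write_native(FILE *fp, const unsigned int *digit, long n)
{
    unsigned int magic = MAGIC;
    if (fwrite(&magic, sizeof magic, 1, fp) != 1) return -1;
    if (fwrite(digit, sizeof *digit, (size_t)n, fp) != (size_t)n) return -1;
    return 0;
}

/* Read it back on any machine; if the magic arrives byte-reversed,
   the writer used the opposite convention, so swap each word. */
int read_any(FILE *fp, unsigned int *digit, long n)
{
    unsigned int magic;
    long i;
    if (fread(&magic, sizeof magic, 1, fp) != 1) return -1;
    if (fread(digit, sizeof *digit, (size_t)n, fp) != (size_t)n) return -1;
    if (magic == MAGIC) return 0;
    if (magic != 0x04030201U) return -1;  /* corrupt or wrong format */
    for (i = 0; i < n; i++) {
        unsigned int x = digit[i];
        digit[i] = (x >> 24) | ((x >> 8) & 0x0000FF00U)
                 | ((x << 8) & 0x00FF0000U) | (x << 24);
    }
    return 0;
}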
> 
> Not true - all of the fastest codes (Prime95, Mlucas and MacLucasUNIX) write
> residue digits in 4-byte integer form to one or more savefiles. Only MLU
> writes a floating-point version of the residue, in addition to the integer.

I think MacLucas just writes the residue as it stores it in memory, 
i.e. as an array of 2^n double-precision floating-point values. This 
is wasteful, since there's a whole bundle of trailing zeroes in the 
mantissae once rounding to the nearest integer has been carried out.
> 
> I discussed the idea of a platform-independent savefile format with George
> and Scott several times during the past year, in conjunction with the idea
> of central (e.g. PrimeNet) storage of user residues at periodic intervals;
> Scott's major objection was that he didn't think the amount of work lost
> due to hard disk problems, accidental savefile deletion or user dropout
> justifies such an effort. On the other hand, he admitted he didn't really
> have a good estimate of the incidence of the above events (except possibly
> the latter item). My main point was that as exponents get larger, the
> likelihood of the above occurrences is likely to increase, but one still
> needs a reasonable estimate of how frequently it occurs now. The best
> prospect for doing this would seem to be some kind of e-mail poll of the
> current users, and that needs a decent response rate to be significant.
> (And human memories are notoriously unreliable.)

The responses would be biased by the fact that those replying would 
tend to be the more conscientious users.
> 
> A less intrusive, logistically simpler method might be to simply look at
> the percentage of exponents that are reassigned by PrimeNet after their
> time limit expires. One might assume that that these were on average
> 20% completed, so if 5% of exponents get reassigned, one estimates that
> 1% of work could be saved if the savefiles for said exponents could be
> reassigned, as well.
> 
> Interestingly, Scott seemed to think the bandwidth needed for storing
> and managing user savefiles was less of a problem than justifying the
> need for such a strategy in the first place. (Scott, correct me if I'm wrong
> in my recollection about this.) Obviously not all users
> will have sufficiently fast Internet connections to allow them to
> participate in such a scheme, but it's not something that should be
> mandatory in any event.

If we could get double-checks done by banks of slower machines 
(perhaps 486's or low-end Pentia, like those often still found in 
secondary school laboratories) we might be able to encourage more 
people to take an interest. This would, in itself, be a 
justification.

I certainly agree that we shouldn't try to make the scheme mandatory!
But, if only 5% of the systems participated, that would still 
generate plenty of work for a large number of slower systems - which 
might well have reasonable network connectivity.

Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

End of Mersenne Digest V1 #648
******************************
