Mersenne: PrimeNet Stats Updated

1999-06-28 Thread Scott Kurowski

PrimeNet's updated CPU stats chart is at
http://entropia.com/ips/stats.html

The previous chart update was in mid-February.  There are a handful of days
that reached 800-gigaflop rates, and a nice view of the rate recovery from the
v17 bug.  A linear trend is no longer the best fit.

Regards,
scott



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



RE: Mersenne: A few questions

1999-06-28 Thread Eric Hahn

>How large will the exponent be for a 10,000,000 digit prime number?

To be a 10,000,000-digit prime number, the exponent must be at least
33,219,281 (which also happens to be a Mersenne candidate).
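
A quick way to reproduce this figure; a minimal Python sketch (standard
library only) that finds the smallest prime exponent whose Mersenne number
reaches 10,000,000 digits:

  from math import log10

  def digits(p):
      # 2^p is never a power of 10, so 2^p - 1 and 2^p have the
      # same number of decimal digits: floor(p*log10(2)) + 1
      return int(p * log10(2)) + 1

  def is_prime(n):
      # plain trial division is plenty fast for n around 33 million
      if n < 2:
          return False
      if n % 2 == 0:
          return n == 2
      f = 3
      while f * f <= n:
          if n % f == 0:
              return False
          f += 2
      return True

  p = int(9_999_999 / log10(2))   # start just below the threshold
  while not (digits(p) >= 10_000_000 and is_prime(p)):
      p += 1
  print(p)   # -> 33219281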

>Has the prime number that was found a week ago been announced on this
>list?
>I.E.  What number was it?

It hasn't been announced yet... but from what little information
is available, i.e. The Oregonian newspaper article, the
exponent must be =at least= 6,643,859.

Eric


Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



RE: Mersenne: A few questions

1999-06-28 Thread Eric Hahn

CORRECTION TO LAST MESSAGE!

The exponent 33,219,281 happens to be a Mersenne prime candidate!



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: LL & Factoring DE Crediting

1999-06-28 Thread David A. Miller

Brian J. Beesley writes:

>thereafter. I think George's idea was to encourage people to do the 
>appropriate amount of trial factoring - if you didn't do enough, and 
>proceed to run the LL test, you lose the credit because you should 

This doesn't fit with my understanding of how Prime95 works. I thought that
the program automatically chose the factoring depth using a formula that
weighs the chance of finding a factor against the time that would be
required for the LL test. If there is a way for the user to control the
amount of factoring, then it is news to me.
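
For reference, the trade-off described above can be sketched as follows; this
is an illustrative sketch only, with made-up costs and a textbook 1/b
factor-density heuristic, not Prime95's actual code:

  def worth_factoring_deeper(b, ll_cost, tf_cost_next_bit):
      # chance of a factor between 2^b and 2^(b+1) is roughly 1/b;
      # a found factor saves two LL tests (first run + double-check)
      expected_saving = (1.0 / b) * 2 * ll_cost
      return tf_cost_next_bit < expected_saving

  # e.g. with an LL test costing 300 hours and the next bit level of
  # trial factoring costing 6 hours (made-up numbers):
  print(worth_factoring_deeper(60, 300.0, 6.0))   # True -> keep factoring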


David A. Miller
Rumors of my existence have been greatly exaggerated.


Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Status estimate

1999-06-28 Thread Peter Doherty

At 19:03 06/28/1999 -0400, you wrote:
>The Status estimate of the chance that the number you are testing is prime
>seems to be off by a factor of e, based on Wagstaff's estimate.
>
>+----------------------------------------------+
>| Jud "program first and think later" McCranie |
>+----------------------------------------------+


Yeah, I was wondering about that thing... if we don't even know whether there
are finitely or infinitely many Mersenne primes, how does it generate that
number?  Mine says my odds are about 1 in 60,000 or so... I'm doing an LL test
on a number in the 719 range, and back when I was testing numbers in the 400
range the odds were much better... around 1 in 45,000.
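
For what it's worth, the thread doesn't quote Prime95's actual formula; a
common Wagstaff-style back-of-envelope version (an assumption here, not the
program's code) puts the chance at about e^gamma * b / p for an exponent p
trial factored to b bits:

  from math import exp

  GAMMA = 0.5772156649015329            # Euler-Mascheroni constant

  def odds_one_in(p, bits_factored):
      # heuristic chance that 2^p - 1 is prime, given that trial
      # factoring found nothing below 2^bits_factored
      chance = exp(GAMMA) * bits_factored / p
      return round(1.0 / chance)

  # taking "the 719 range" to mean an exponent near 7,190,000 and a
  # factoring depth of 64 bits (both assumptions):
  print(odds_one_in(7_190_000, 64))     # roughly 1 in 63,000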


--Peter


Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: LL & Factoring DE Crediting

1999-06-28 Thread George Woltman

Hi all,

At 09:59 PM 6/28/99 +0100, Gordon Spence wrote:
>if a factor is later found for a Mersenne number or the
>Lucas-Lehmer result is found to be incorrect, then you will "lose credit"
>for the time spent running the test."
>
>It has always struck me as odd, I must admit.
>At the time the test was done it obviously WAS required, so why the
>subsequent penalty when we make advances and can factor much deeper?

The explanation is really simple.  It saves me from having to keep track
of too much data (the bad results and the ones for which a factor was later
found).

However, since the v17 bug, I've had to modify my procedures anyway and
give credit for those incorrect results.  I suppose it wouldn't be too
hard to do the same for factors found and LL results proven incorrect.

Regards,
George




Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: LL & Factoring DE Crediting

1999-06-28 Thread Jud McCranie

At 09:59 PM 6/28/99 +0100, Gordon Spence wrote:
>The GIMPS home page explains the following
>
>"Finally, if a factor is later found for a Mersenne number or the
>Lucas-Lehmer result is found to be incorrect, then you will "lose credit"
>for the time spent running the test."
>
>It has always struck me as odd, I must admit.

It isn't very clear, but the purpose may have been to discourage bogus positive
results, because they will eventually be double-checked.

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: Status estimate

1999-06-28 Thread Jud McCranie

The Status estimate of the chance that the number you are testing is prime
seems to be off by a factor of e, based on Wagstaff's estimate.

+----------------------------------------------+
| Jud "program first and think later" McCranie |
+----------------------------------------------+



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: LL & Factoring DE Crediting

1999-06-28 Thread Brian J. Beesley

On 28 Jun 99, at 21:59, Gordon Spence wrote:

> "Finally, if a factor is later found for a Mersenne number or the
> Lucas-Lehmer result is found to be incorrect, then you will "lose credit"
> for the time spent running the test."

I obviously can't answer for George. PrimeNet works a different way:
Scott credits any work submitted in good faith, whatever happens
thereafter. I think George's idea was to encourage people to do the
appropriate amount of trial factoring - if you didn't do enough and
proceeded to run the LL test, you lose the credit because you should
have done more factoring to start off with. Otherwise you maximize
your credit by simply running LL tests & ignoring factoring
altogether.

The problem with this approach is that the goalposts shift. Different
processor types have different LL test:factoring performance ratios;
in particular, "plain Pentium" processors (Intel Pentium "classic" and
Socket 7 MMX parts) are rather poor at factoring compared with most
other processor types, and processors with high core:bus speed ratios
are (relatively) better at factoring because of the lower memory bus
load. Consequently the appropriate factoring depth varies to some
extent depending on the mix of processors in use.
> 
> It has always struck me as odd, I must admit.
> At the time the test was done it obviously WAS required, so why the
> subsequent penalty when we make advances and can factor much deeper?
> 
I agree. However, to take the point to a ridiculous extreme, finding
a factor saves running an LL test - so why can't I have credit for
finding 54,522 (small) factors in the range 33.2 million to 36
million, thus saving (very approximately) 54,522 * 8 P90 CPU years of
LL testing? The job ran in an hour on a PII-350!

Somewhere in here there is a compromise. I'd suggest:

(a) you should lose _double_ credit for an LL test if the result is
proved incorrect, or if a factor is found in a range which you claim
to have checked;
(b) a table should be published of the suggested factoring depths for
various exponent ranges. If you submit an LL test result and someone
then finds a factor below the table depth as of the date you submitted
the result, you lose the credit; otherwise, you keep the credit;
(c) however deep you factor, you only get credit for factoring to the 
table depth in the current table.

BTW I don't really understand why PrimeNet doesn't credit you for 
results obtained when the manual testing pages are used instead of 
the automatic system. I suppose Scott is trying to promote the 
automatic system, and I'm in agreement with that sentiment, but it 
still seems a bit strange to me.

Regards
Brian Beesley

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: A few questions

1999-06-28 Thread Steinar H. Gunderson

On Sun, Jun 27, 1999 at 09:08:15PM -0400, George Woltman wrote:
>Assuming they respond that the claim
>is all in order, then we should be able to announce shortly thereafter.

I'm keeping my fingers, toes and hairs crossed :-) Just too bad nobody
else has participated in my guess-contest... That means I will be the
sole winner! Hooray!

Seriously, though, perhaps Slashdot will post the news when the exponent
is official. I've already submitted it three times...

/* Steinar */

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: RE: Mersenne Digest V1 #584

1999-06-28 Thread Griffith, Shaun

[On the off chance that no one else has replied to this since it was
received...]

On Sat, 19 Jun 1999 12:55:46, Jeff Woods wrote:



Starting at about M13, you see that there are indeed islands:



Given the strong linearity of "log(exponents of Mersenne primes)", it is not
surprising that the averages of consecutive pairs or triples will also be
linear. Indeed, taking the "centers" of the islands as data points will
actually produce a stronger correlation than the original data.

If I go back and average consecutive exponents (well, the log of the
exponents, anyway), the correlation with a straight line improves from .9925
to .9935 (compared to the original data). If I get to throw away the ones
that don't "fit" according to some arbitrary criterion, it may improve
marginally. [Note: using your choices actually reduces the correlation to
.9849!]
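
The fit is straightforward to reproduce; a minimal sketch (the exponent list
is the mid-1999 snapshot of known Mersenne prime exponents, though Shaun's
exact data set and fitting choices may differ):

  from math import log

  # the 37 Mersenne prime exponents known by mid-1999
  exps = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607,
          1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213,
          19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091,
          756839, 859433, 1257787, 1398269, 2976221, 3021377]
  ys = [log(p) for p in exps]
  xs = list(range(1, len(ys) + 1))
  n = len(xs)
  mx, my = sum(xs) / n, sum(ys) / n
  sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
  sxx = sum((x - mx) ** 2 for x in xs)
  syy = sum((y - my) ** 2 for y in ys)
  print(sxy / (sxx * syy) ** 0.5)   # Pearson r, close to the ~0.99 quoted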

So perhaps we should remember Mark Twain: "There are three kinds of lies:
lies, damn lies, and statistics."

-Shaun
Quantum Mechanics: The dreams stuff is made of

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: LL & Factoring DE Crediting

1999-06-28 Thread Gordon Spence

The GIMPS home page explains the following

"Finally, if a factor is later found for a Mersenne number or the
Lucas-Lehmer result is found to be incorrect, then you will "lose credit"
for the time spent running the test."

It has always struck me as odd, I must admit.
At the time the test was done it obviously WAS required, so why the
subsequent penalty when we make advances and can factor much deeper?

regards

Gordon



Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: A few questions

1999-06-28 Thread Jeff Woods

At 07:08 AM 6/28/99 +0100, you wrote:

> > I *suspect* that in light of GIMPS' success, he is likely looking much
> > higher than we are now (and has been for some time, again as a
> > guess).   He knows our current program cannot top P > 20,000,000, so I
> > suspect he's above that range, perhaps by a good margin.  It may be
> > that he breaks GIMPS record(s) again someday.
>
>Are you sure about this?

By no stretch of anyone's imagination.   As I said, I don't know David from 
Adam, and have never talked to him, even by email.   My speculation was a 
SWAG, and could be 180 degrees from the truth.   Your own information about 
his ability to get CPU time does tend to contradict my guesses, which were 
based on the hearsay I noted below.

>George told me that he'd asked David S. to
>verify the newly-discovered prime, but that David was having
>difficulty getting access to sufficient CPU time & would only be able
>to run in the background, taking approx. 4 weeks. Several others of
>us would be able to do an independent double-check run on a 6 million
>range exponent in that time, e.g. I could use MacLucasUNIX on my 533
>MHz Alpha 21164 & complete in 4 weeks (ish). Which is why I contacted
>George, to offer my services.
>
> >  I do know that the last time George expanded
> > Prime95 to hunt up to 20MM (up from 3MM), that George sent several
> > residues in the upper ranges to David for confirmation, and that David
> > was able to confirm SOME of them rather quickly (faster than a Cray
> > could do the calculations on the spot, so they were already tested).
> > This to me indicates that David is searching the numbers far above our
> > top potential range, especially since a Cray can test such numbers in
> > about a week or two, as a guess.
>
>Well - for testing the algorithm you don't necessarily have to run
>to completion - I've also run quite a number of exponents between 10
>and 20 million to 400 iterations for the purpose of cross-checking
>results. And I'll be doing a lot more when I get sufficient memory.

While not CERTAIN, I'm fairly sure that the test values David confirmed 
were "last iteration" residues.

>Verification of 3021377 took DS 8 hours on "his" Cray. A "square law"
>seems reasonable (run time = #iterations * time per iteration, time
>per iteration is O(n log n) where n is the FFT size) so, 20 million
>being approx. sqrt(50) times 3 million, run times of 50 * 8 hours =
>2.5 weeks would be expected - assuming unlimited access to the whole
>system. For exponents >33.22 million (10 million digit numbers) he'd
>be well into months per test.

YOWCH!   How long on a P-III/450, or an Athlon (K7), then?
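
The square law quoted above makes a handy estimator. A minimal sketch
restating that arithmetic (the 8-hour Cray figure is from Brian's message;
everything else follows from it):

  def scaled_hours(base_hours, base_p, new_p):
      # run time ~ iterations x per-iteration cost, both roughly
      # proportional to p, hence the square law (log factor ignored)
      return base_hours * (new_p / base_p) ** 2

  # M3021377 verified in about 8 hours on the Cray:
  print(scaled_hours(8, 3_021_377, 20_000_000) / 24)   # ~14.6 days
  print(scaled_hours(8, 3_021_377, 33_219_281) / 24)   # ~40 days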

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: M617 Factored

1999-06-28 Thread Conrad Curry



   M617 Factored
   -------------

M617 has been completely factored by the Special Number
Field Sieve (SNFS).  It was previously known that

  M617 = 59233 *
 68954123297 *
 c171

The p5 was found by Riesel in 1957 and the p11 found by
Brent and Suyama in 1981.  The c171 is a 171 digit composite
factor given by

  c171 = 133162933696720252644109076239739315294641129598\
 571214674268232878869403201703608966454713865163\
 367575359986404237817149731676559992850220509804\
 718874844608767326097407871

On June 26, 1999, it was found that c171 = prp51 * prp120,
where

  prp51  = 1577519781152253854956475324210064781272294056\
   44601

  prp120 = 8441284558692203904329647194935146184026400044\
   2557390949549814809580375481579281829664306776\
   0548829906291314807571121271
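
Since all of the digits appear above, the factorization is easy to check
mechanically; a minimal sketch, assuming the digit strings were transcribed
intact:

  p5, p11 = 59233, 68954123297
  prp51 = int("1577519781152253854956475324210064781272294056"
              "44601")
  prp120 = int("8441284558692203904329647194935146184026400044"
               "2557390949549814809580375481579281829664306776"
               "0548829906291314807571121271")
  # exact big-integer arithmetic: the product must equal 2^617 - 1
  print(p5 * p11 * prp51 * prp120 == 2**617 - 1)   # True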

The factorization of M617 was 'Most' wanted by the
Cunningham project[1], and M617 was the smallest number of the
form 2^n-1 whose complete factorization was not known.  That
distinction now goes to M619.

The sieving was done by a group of 19 volunteers starting
May 7, 1999 and finishing on June 16, 1999.  A total of
15986534 relations was collected, requiring about 1.2 Gbytes
of memory in uncompressed ASCII format.  The resulting matrix
was 1563691 x 1566436.  The linear algebra and square-root
phases were done at Centrum voor Wiskunde en Informatica (CWI)
by Peter Montgomery.

Acknowledgments are due to the volunteer sievers

  Pierre Abbat              Ricardo Aguilera
  Brian Briggs              Gary Clayton
  David Crandell            Conrad Curry
  Kelly Hall                Philip Heede
  Jim Howell                Skip Key
  Alex Kruppa               Samuli Larvala
  Don Leclair               Ernst Mayer
  Thomas Noekleby           Henrik Oluf Olsen
  Marcio de Moraes Palmeira Guillermo Ballester Valor
  Paulo Vargas

Special thanks to Bob Silverman, Peter Montgomery, Alex
Kruppa, Don Leclair and Ernst Mayer.  Also to CWI, the Department
of Computer Sciences at the Technical University Munich and the
School of Mathematical Sciences at the University of Southern
Mississippi for the use of their computers.

M619 is currently being sieved by SNFS.  It is on the Cunningham
project's 'More' wanted list.  If you would like to donate some
of your CPU time, visit [2] and download the siever for DOS or
Linux.  Source code is available on request to compile on other
platforms.  You will need at least 21 Mbytes of free physical
memory.

[1] http://www.cs.purdue.edu/homes/ssw/cun/index.html
[2] ftp://ftp.netdoor.com/users/acurry/nfs


Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Mersenne: Re: 10,000,000 digit prime

1999-06-28 Thread Jonathan Zylstra

The 10,000,000 digit prime would have an exponent of over
33,219,280.95, or at least 33,219,281,
which is found by taking (10,000,000 / log 2).  Multiplying by log 2
instead gives 3,010,300, which is the digit count of a number with
exponent 10,000,000, not the exponent of a 10,000,000-digit number.

J. Zylstra


Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm



Re: Mersenne: Linux VFAT question

1999-06-28 Thread Robin Stevens

On Sun, 27 Jun 1999 11:45:37 -0400, Pierre Abbat <[EMAIL PROTECTED]> wrote:
> > But the fact is that the performance of my mounted hdd win98 (vfat)
> > partition is very low. All my normal linux (ext2) partitions work with
> > normal performance. 
> 
> cd to various directories, then type vdir|wc -l in each. This tells you
> how many files the directory has. The more files, the longer it takes to
> sort them.  vdir -U lists the directory without sorting it, which is a
> little faster.
 
I too find that performance on fat/vfat partitions is low compared to ext2
partitions, but I don't think that's too surprising: the kernel is
understandably optimised for ext2, and FAT partitions are hardly the most
efficient of filesystems in the first place.  To be honest, I don't see a
lot of difference in performance on fat partitions whether or not mprime
is running.

> mprime just writes once every 30 minutes, so whether it's running in the
> Windows drive or the Linux drive is inconsequential. As to the swapping,
> the swap daemon runs at 12 naughty, so I don't see why mprime at 20 nice
> would bother it at all.

On one of my machines, disk performance appears significantly worse with
mprime running than when it's switched off.  On my P200MMX, performance on
a Quantum Fireball ST4.3A drops from about 8.25MB/s to 4.9MB/s (measured
using hdparm).  However the performance of the Seagate drive on the same
machine and the Fujitsu drive on my PII-266 is barely affected by mprime.
Still, I'd rather lose a little on disk performance than waste over 96% of
my workstation's CPU cycles :-)

One write every 30 minutes is hardly going to trouble things much.  I'd far
rather have the machine doing regular disk writes than risk losing several
weeks' (your uptime may vary) work if mprime is not terminated neatly.

-- 
 Robin Stevens <[EMAIL PROTECTED]>  
Merton College, Oxford OX1 4JD, UK   http://www-astro.physics.ox.ac.uk/~rejs/ 
(+44) (0)1865: 726796 (home) 273337 (work) 273390 (fax)Pager: 0839 629682

Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm