Mersenne Digest         Friday, April 21 2000         Volume 01 : Number 722




----------------------------------------------------------------------

Date: Wed, 19 Apr 2000 15:47:24 -0500
From: Jeremy Blosser <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Just curious

Make that "drives" not "drivers"...

And to think, it's only Wed.

- -Jeremy
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 14:11:52 PDT
From: "Alan Simpson" <[EMAIL PROTECTED]>
Subject: Mersenne: status page confusion

hi,

On http://www.mersenne.org/status.htm, it says
"All exponents below 5,083,600 have been tested at least once."

However, on http://www.entropia.com/primenet/status.shtml, it says that 
there is one exponent between 4500000 and 4599999 still out (this at 19 Apr 
2000 21:00 (19 Apr 2000 14:00 Pacific)).

There are then no more exponents out until 5500000 to 5599999.

Can someone explain to me how I am misreading these pages?

The reports for the double-checking seem consistent from my reading.

thanks,

Alan Simpson
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 17:18:03 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Just curious

>Also, how long would it take prime95 to save all files, and would running
>prime for 10 mins actually do anything useful?

To my knowledge, saving a file takes finishing the current iteration (at 
most 1-2 seconds for PrimeNet work on a slow Pentium) and writing the file 
(at most 5 seconds).  This is not much longer than the time needed to power 
up a monitor that has turned itself off.

Regards,
Nathan
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 19:06:56 -0500
From: [EMAIL PROTECTED]
Subject: Mersenne: Prime 95 & Base 3

How hard would it be to modify the source code of Prime 95 to calculate a
base 3 number mod a prime number and then square and mod the result?

Are there any programs along this line?
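(For what it's worth, the core operation Dan describes -- take base 3, then repeatedly square and reduce modulo N -- is only a few lines in any bignum-capable language. A minimal pure-Python sketch, far slower than Prime95's FFT arithmetic; the modulus and iteration count below are purely illustrative:)

```python
# Start from base 3, then square and reduce mod n, repeatedly.
# Iterating p times computes 3**(2**p) mod n -- the core step of a
# base-3 Fermat-style probable-prime test.
def square_and_mod(n, iterations, base=3):
    x = base % n
    for _ in range(iterations):
        x = (x * x) % n
    return x

# 3^(2^3) = 3^8 = 6561, and 6561 mod 23 = 6
print(square_and_mod(23, 3))  # -> 6
```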

Dan

                                              

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 21:00:57 -0400
From: George Woltman <[EMAIL PROTECTED]>
Subject: Mersenne: Facelift (round 3)


Hi again,

I looked at the MS site with its dropdown menus.  It was far more javascript
than I want to wade through.  So I've come up with a simpler header that
incorporates a menu.  Let me know if you like this (especially as compared
to the more traditional menu on the side).  It does get us more horizontal
real-estate to display the status and benchmark tables.

The latest incarnation (with non-operational menus) can be viewed at:

http://www.mersenne.org/newhtml2/prime.htm

as opposed to the previous version at:

http://www.mersenne.org/newhtml/prime.htm

More comments are of course welcome!

Thanks again,
George

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 22:38:30 -0400
From: George Woltman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: status page confusion

Hi,

At 02:11 PM 4/19/00 -0700, Alan Simpson wrote:
>On http://www.mersenne.org/status.htm, it says
>"All exponents below 5,083,600 have been tested at least once."

Always believe the numbers on the status.htm page.  These numbers
come from the master database consisting of all PrimeNet results and
results from those testing manually.

>However, on http://www.entropia.com/primenet/status.shtml, it says that 
>there is one exponent between 4500000 and 4599999 still out (this at 19 
>Apr 2000 21:00 (19 Apr 2000 14:00 Pacific)).
>
>There are then no more exponents out until 5500000 to 5599999.
>
>Can someone explain to me how I am misreading these pages?

My guess is that this one exponent was handed out by the server and
the user never returned a result.  In the meantime, a manual tester did
return a result.  When the exponent expired, the server handed it back
out as a first time test.  The server does not know this exponent has
been tested because the last database synchronization was on Feb. 8.

BTW, the manual tester may not have been a poacher!  I have handed
out a handful of exponents for double-checking once the primenet user
has gotten a good head start.  There are a few people using version 14
and slower PowerMacs that can only test exponents below 5.26M and 4.8M.

There are no exponents between 5.0M and 5.6M listed by PrimeNet for
first-time checking because I've told the server to make all exponents
from 5.0M to 5.6M available for double-checking.  Due to a bug in the
server this re-lists any active first time checks that are outstanding in
that range as double-checks.  The end-user is unaffected because he
is still the only one that should be testing that exponent.

This is what happens when you manage two databases from 3,000
miles apart on a spare-time basis!

Hope that helps,
George

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 23:21:54 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Old HDs

<< Drives have a "landing zone" or parking area where the heads will move to
 when it's powered down.  There's no data on that part of the track, so if
 your heads do get stuck there when it's turned off, there's something you
 can try... >>

Anyone remember the really old drives that needed to be told to park their 
heads?  I seem to remember having to do that to my 286's 20MB HD.

Stephan "20MB?  We'll never run out of 20MB...." Lavavej
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 00:38:53 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: status page confusion

It certainly did help me!  So, why are only certain ranges listed on the 
PrimeNet status page?  For example, from time to time, their front page has 
failed to list the 'cutting edge' of PrimeNet factoring assignments.

As a somewhat related question, when an apparent double-checking milestone 
is present in the assignments cleared/out report, how can we users know 
whether the milestone has actually been reached or whether the residues have 
failed to match?

Finally, someone mentioned on this list several months ago a time each day 
when PrimeNet had much smaller exponents available.  Is this still the case? 
  I recall some talk of it being fixed, but I don't think I heard for sure 
that it was implemented.

Thanks,
Nathan
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 06:54:30 +0200
From: Martijn Kruithof <[EMAIL PROTECTED]>
Subject: Re: Mersenne: factoring

<snipped a lot> Note that Mfactor (as currently configured)
> doesn't support putting multiple exponents into a to-do
> file, so the best way to trial factor lots of exponents
> is probably just to paste the one-line inputs needed for
> the exponents, one after another, into the same window -
> when the program finishes the current exponent, it'll
> read the next input line from the buffer. Of course,
> this doesn't permit one to log out, so is best done on
> a machine which one owns (then one can at least lock the
> display when one needs to leave.)

We are talking (some flavour of) unix here aren't we?

You could make an input file with the inputs (as you must enter them,
including any stuff you must enter before you enter the first one)
and then run mfactor in the background:

nohup mfactor < input.file > output.file &

then you can log out.

Kind Regards, Martijn


- -- 
http://jkf.penguinpowered.com
Linux distributions for only
Fl 10 per CD, shipping included!
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 19 Apr 2000 23:28:26 -0600
From: "Aaron Blosser" <[EMAIL PROTECTED]>
Subject: RE: Mersenne: Just curious

>>Jeremy and I used to do that a lot on those first generation IDE drives
>>(which seemed to have this problem much more often) back when we were
>>computer techs...  It sounds funny, I know, but it worked great most of the
>>time.
>
>If I remember correctly, it seemed to happen with certain batches of
>drives. Like a batch of WD drives and then later maybe Maxtor or whatever.
>And there was *always* the same problem in that it would spin up and then
>right back down again at boot (usually the first boot). So I think it was
>more of a shipping or drive problem or something more than anything. But, as
>a last ditch effort it works...

That was a different problem.  That's where WD threatened to sue me and my
company when I discovered a LARGE batch of their drives were having failure
rates of nearly 80%.  When I got on the newsgroups to see if others had the
same problem, I found some other cases with those same drives, so I mentioned
my problem of 80% and up failure rates...

Well, WD apparently monitors the newsgroups and they contacted me and
threatened me and the company I worked for with a libel suit.  Geez.

Of course once they got our bad drives back and examined them, sure enough,
that's when they discovered a problem in their manufacturing that was leaving
silica deposits all over inside the sealed case...the flakes would be whizzing
around inside the drive like little asteroids, tearing the heads and platters
to shreds basically.

So in the end, they did find out that it was a manufacturing problem and I
never heard an apology from them for threatening me like that. :(  Hmph...I've
never bought a WD drive since and always tell people to avoid them. :)  They
did fix the problems with their drives, but in their initial period of denial,
they refused to swap out the drives until they actually failed and I had about
30 screaming programmers I was supporting who didn't understand that WD was
refusing to proactively replace the drives that hadn't yet failed.  They
didn't understand why we had to wait for them to lose all their data before we
could replace them, and frankly, they were worried because they'd seen all the
other drives of the other programmers that *had* already died and who lost all
their data.  So, thanks a lot WD! :-P

>In reality tho, I think that what is *more* common is the actual controller
>or PCB on the drive starting to flake out before an actual problem w/ the
>platters etc.  If it is a platter/head problem, it's usually due to
>abuse (such as dropping something heavy on your drive while it's
>reading/writing).

At the risk of going too off topic, I do recall that in some cases we were
able to recover data by taking the controller board from another drive of the
same make/model and mounting it on the failed drive.  If we were lucky, it
*was* just the old board that had flaked out and we could still recover the
data.

>I'll never forget the times when we would do data recovery of a bad drive by
>putting it in the freezer for 30 mins, which we *theorized* shrank the PCB
>on the HD thus fixing some stress fracture or whatever temporarily (long
>enough to get the data off the drive before it heated back up again).

You know bro, we were the data recovery pros there! :)

Well, it all goes to show you that you're FAR more likely to have
something besides the CPU die first, so you can just run Prime95 to your
heart's content... just make sure you back up the files because chances are,
your drive will go first. :)

Aaron

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 07:47:57 -0400
From: "David Campeau" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)

Hi all,

Seeing that a stage 1 GCD will not result in a found factor every time,
would we not be better off waiting until the end of stage 2? That way, we
could factor deeper.

Perhaps this could be yet another option?

some preliminary data (on my machine at home):
Total P-1 tests = 91
Stage 1 factors = 2
Stage 2 factors = 1

So on my machine the stage 1 GCD saved me 2 stage 2 runs, about 2 hours of
CPU, but at the cost of about 90 * 230 sec of stage 1 GCD = 20700 sec, or
5:45 hours. Seems to me that we could save a little bit by forgoing the
stage 1 GCD.

regards,

David Campeau
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 14:33:51 -0000
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)

On 20 Apr 00, at 7:47, David Campeau wrote:

> Seeing that not every time a stage 1 gcd will result in a factor found,
> are we not better to wait until the end of stage 2? This way, we could
> factor deeper.
> 
> Perhaps this could be yet another option?

Maybe I'm guilty here; the very first pre-pre-release didn't run the 
GCD after Stage 1 unless an "undocumented" option was set in 
prime.ini (Stage1GCD=1). I commented on this to George and he changed 
it. My reasoning was that early timings on exponents typical of 
"real" PrimeNet assignments seemed to suggest that it made sense to 
run the Stage1 GCD automatically.

There's also a slight edge in clarity from this approach. If you run 
a GCD at the end of each stage then you know where you are. If you 
run GCD only at the end of Stage 2 but then don't run Stage 2 because 
of memory constraints (which is effectively the default option, given 
DayMemory = NightMemory = 8 MBytes) you might as well not bother 
running Stage 1 either, since you won't find any factors that Stage 1 
uncovered until you eventually run the GCD.

There's another consideration, too, in so far that running Stage 2 is 
expensive in terms of memory resources; avoiding this complication 
tends to push the balance in favour of running the extra GCD.

However, there may be a case for changing the strategy - either just 
running one GCD when you're not going to go any further, or to 
(optionally) omit the Stage 1 GCD. This becomes more relevant as 
exponent sizes increase, as the stage run time appears to be (more or 
less) linearly related to both exponent size and B limit, whereas the 
run time of the GCD is independent of the B limit but increases very 
non-linearly with exponent size.

The cost/benefit of running the extra Stage 1 GCD is zero if GCD run 
time is equal to Stage 2 run time * probability of finding a factor 
in Stage 1. If the GCD runs more quickly than this, it's definitely 
worth running. The point is that running Stage 2 is a waste of time 
if you already have a factor in Stage 1 but you just haven't bothered 
to unearth it. If you're working with small exponents, it may be 
that, although the extra GCD is relatively cheap, it's still not 
worth running it because e.g. heavy trial factoring has made the 
chance of actually finding a factor in Stage 1 very low - i.e. in 
this case we're running Stage 1 only as a means of getting to Stage 
2.
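(Brian's break-even rule above reduces to a one-line comparison. A sketch with made-up timings -- none of these numbers are measured Prime95 values:)

```python
def stage1_gcd_worthwhile(gcd_secs, stage2_secs, p_stage1_factor):
    """Run the Stage 1 GCD iff its cost is below the expected Stage 2
    time it saves (Stage 2 run time * probability that a factor was
    already found in Stage 1)."""
    return gcd_secs < stage2_secs * p_stage1_factor

# Illustrative: a 230 s GCD against a one-hour Stage 2 and a 2% chance
print(stage1_gcd_worthwhile(230, 3600, 0.02))   # -> False (skip it)
# A faster GCD against a four-hour Stage 2 tips the other way
print(stage1_gcd_worthwhile(60, 14400, 0.02))   # -> True  (run it)
```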

It _may_ be sensible & cost effective to automate this decision (i.e. 
"guess" the probability of finding a factor in Stage 1 from the B1 
limit and the trial factoring depth, and estimate the GCD & Stage 2 
run times) & use that as the default option. In which case we should 
probably have manual override capability in _both_ directions.


Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 08:39:46 -0600
From: "Alan Vidmar" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Facelift (round 3)

Based upon what I have seen so far, I want to add a vote for the 
addition of a side frame.  Since the Header/Menu contains so many 
links, it's really the only nice way of handling it.


All in favor of a side frame please speak up.


Alan


On 19 Apr 2000, at 21:00, George Woltman wrote:

> Hi again,
>
> I looked at the MS site with its dropdown menus.  It was far more javascript
> than I want to wade through.  So I've come up with a simpler header that
> incorporates a menu.  Let me know if you like this (especially as compared
> to the more traditional menu on the side).  It does get us more horizontal
> real-estate to display the status and benchmark tables.
>
> The latest incarnation (with non-operational menus) can be viewed at:
>
> http://www.mersenne.org/newhtml2/prime.htm
>
> as opposed to the previous version at:
>
> http://www.mersenne.org/newhtml/prime.htm
>
> More comments are of course welcome!
>
> Thanks again,
> George

"A programmer is a person who turns coffee into software."
Alan R. Vidmar                   Assistant Director of IT
Office of Financial Aid            University of Colorado
[EMAIL PROTECTED]                    (303)492-3598
*** This message printed with 100% recycled electrons ***
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 11:43:46 -0400
From: George Woltman <[EMAIL PROTECTED]>
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)

Hi,

At 07:47 AM 4/20/00 -0400, David Campeau wrote:
>Seeing that not every time a stage 1 gcd will result in a factor found, are
>we not better to wait until the end of stage 2? This way, we could factor
>deeper.

Brian Beesley's timings showed that running the Stage 1 GCD will
save time in the long run for exponents above 4 or 5 million.

>Perhaps this could be yet another option?

It is an option.  Set Stage1GCD=0 in prime.ini.

>some preliminary data (on my machine at home):
>Total P-1 test = 91
>Stage 1 factor = 2
>Stage 2 factor = 1
>
>So on my machine the stage 1 gcd saved me 2 stage 2, so about 2 hour of cpu,
>but at the cost of about 90 * 230 sec of stage 1 gcd = 20700sec or 5:45
>hours. Seems to me that we could save a little bit by forgoing stage 1 gcd.

I know you are working on the smallest double-checks (about 3 million
or so).  Thus, it is not surprising you would be better off not running
the stage 1 GCD.  First-time testers will be better off running the stage 1
GCD, and most double-checkers will be neutral to slightly better off.

The GCD cost grows at an N (log N)^2 rate.  The stage 2 cost grows at
N log N (the cost of an FFT multiply) times something (the stage 2
bounds grow as N increases).  I don't know if that something is
O(N) or worse.  It doesn't matter.  It does show that at
some point you are better off doing the stage 1 GCD for a 2%
chance of saving the cost of stage 2.
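(George's growth rates can be compared directly. In this sketch the constants are omitted, so only ratios are meaningful:)

```python
import math

def gcd_cost(n):           # ~ N (log N)^2, per George
    return n * math.log(n) ** 2

def fft_multiply_cost(n):  # ~ N log N, one FFT multiply
    return n * math.log(n)

# Their ratio is just log N: the GCD costs the equivalent of roughly
# log N multiplies, growing only slowly, while the number of Stage 2
# multiplies grows with the bounds as N increases -- hence a crossover
# above which the Stage 1 GCD pays for itself.
ratio = gcd_cost(5_000_000) / fft_multiply_cost(5_000_000)
print(ratio)  # ~15.4, i.e. log(5e6)
```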

Regards,
George

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 13:28:17 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)

>From: George Woltman <[EMAIL PROTECTED]>
>To: "David Campeau" <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
>Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)
>Date: Thu, 20 Apr 2000 11:43:46 -0400
>
>Hi,
>
>At 07:47 AM 4/20/00 -0400, David Campeau wrote:
>>Seeing that not every time a stage 1 gcd will result in a factor found, 
>>are
>>we not better to wait until the end of stage 2? This way, we could factor
>>deeper.
>
>Brian Beesley's timings showed that running the Stage 1 GCD will
>save time in the long run for exponents above 4 or 5 million.

This will, of course, be almost every exponent within a month or so of v20's 
release to the public - as it stands, most of the lower exponents are being 
taken by those of us who connect immediately after they expire.

>>some preliminary data (on my machine at home):
>>Total P-1 test = 91
>>Stage 1 factor = 2
>>Stage 2 factor = 1
>>
>>So on my machine the stage 1 gcd saved me 2 stage 2, so about 2 hour of 
>>cpu,
>>but at the cost of about 90 * 230 sec of stage 1 gcd = 20700sec or 5:45
>>hours. Seems to me that we could save a little bit by forgoing stage 1 
>>gcd.

Of course, some users will be inconvenienced by the memory usage in Stage 2 
and may want to take that into consideration.  Since I myself have 128 megs 
of memory, and rarely run anything except Netscape, AIM, MS Office and 
shareware games, P-1 for PrimeNet is not a major problem for me, aside from 
the slight delay in my assignments when I get new work.

>
>I know you are working on the smallest double-checks (about 3 million
>or so).  Thus, it is not surprising you would be better off not running
>the stage 1 GCD.  First-time testers will be better off running the stage 1
>GCD, and most double-checkers will be neutral to slightly better off.

And, of course, double-checkers tend to be people who have more of an 
interest in the project, and may be running multiple clients.  They are, 
therefore, more likely to take the time to analyze the documentation.  Not 
to stereotype anyone; that's just a general pattern that my common sense 
leads me to expect.

>
>The GCD cost grows at an N (log N)^2 rate.  The stage 2 cost grows at
>N log N (the cost of an FFT multiply) times something (the stage 2
>bounds grow as N increases).  I don't know if that something is
>O(N) or worse.  It doesn't matter.  It does show that at
>some point you are better off doing the stage 1 GCD for a 2%
>chance of saving the cost of stage 2.
>
>Regards,
>George
>
>_________________________________________________________________
>Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
>Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 13:35:54 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)

>From: George Woltman <[EMAIL PROTECTED]>
>To: "David Campeau" <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
>Subject: Re: Mersenne: V20 beta #4 (will the beta never end?)
>Date: Thu, 20 Apr 2000 11:43:46 -0400
>
>Hi,
>
>At 07:47 AM 4/20/00 -0400, David Campeau wrote:
>>Seeing that not every time a stage 1 gcd will result in a factor found, 
>>are
>>we not better to wait until the end of stage 2? This way, we could factor
>>deeper.
>
>Brian Beesley's timings showed that running the Stage 1 GCD will
>save time in the long run for exponents above 4 or 5 million.

Given this, it does make sense to run it.  Also, of course, there may be 
some users who, while able to devote memory to stage 2, have to cancel 
scheduled defrags and so forth to do so; they may not mind losing an 
average of, say, half an hour per exponent to do so.


>>some preliminary data (on my machine at home):
>>Total P-1 test = 91
>>Stage 1 factor = 2
>>Stage 2 factor = 1
>>
>>So on my machine the stage 1 gcd saved me 2 stage 2, so about 2 hour of 
>>cpu,
>>but at the cost of about 90 * 230 sec of stage 1 gcd = 20700sec or 5:45
>>hours. Seems to me that we could save a little bit by forgoing stage 1 
>>gcd.
>
>I know you are working on the smallest double-checks (about 3 million
>or so).  Thus, it is not surprising you would be better off not running
>the stage 1 GCD.  First-time testers will be better off running the stage 1
>GCD, and most double-checkers will be neutral to slightly better off.

Also, of course, David's data involves only three found factors; this 
sample may be too small to accurately estimate the chance of finding a 
factor.

The smallest double-checks are being run by those of us who deliberately 
time our connections to claim them (I have three of my own waiting at the 
bottom of the list for my current QA work to finish).  Therefore, the casual 
users who may not read all the documentation files will be better off doing 
the stage 1 GCD.

>
>The GCD cost grows at an N (log N)^2 rate.  The stage 2 cost grows at
>N log N (the cost of an FFT multiply) times something (the stage 2
>bounds grow as N increases).  I don't know if that something is
>O(N) or worse.  It doesn't matter.  It does show that at
>some point you are better off doing the stage 1 GCD for a 2%
>chance of saving the cost of stage 2.

And, if I follow you correctly, that point is well before most of the work 
presently being done.

Regards,
Nathan Russell
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 14:00:35 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Mersenne: Sorry for the double post

Well, it wasn't strictly a double post - my browser crashed just as I was 
sending the second, and I wasn't sure whether it'd been sent, so I rewrote 
it.

Nathan
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 15:07:00 EDT
From: [EMAIL PROTECTED]
Subject: Mersenne: Re: factoring

Brian Beesley writes (re. Mfactor):

> I found that Mfactor on a 21164 was significantly
> faster than Prime95 on a system which runs LL tests at 
> about the same rate. However the operational
> inefficiencies caused by driving Mfactor manually
> were outweighing this even before an improved LL
> tester for the Alpha platform came along (Thanks!)

Re. the manual aspect, Martijn Kruithof suggests

> We are talking unix here aren't we?
>
> You could make an input file with the factors (As you
> must enter them, including any stuff you must enter
> before you enter the first factor) and then run
> mfactor in the background
>
> nohup mfactor < input.file > output.file &
>
> then you can log out.

...which significantly eases the manual aspect, but not
the automatic reporting one.

I wrote:

> > I think the suggestion to have one machine (whether
> > that be a PC running Prime95 or a MIPS or Alpha
> > running Mfactor) do all the factoring needed to keep
> > multiple non-PC machines well-fed with exponents is
> > a good one, since otherwise, juggling factoring and
> > LL work becomes a pain.

to which Brian replies:

> Yes. Despite the inefficiency I find the best thing to
> do is to use an Intel PC running mprime/linux or
> Prime95/windoze to do the factoring. The point is one
> can simply take the worktodo.ini lines off PrimeNet
> (manual testing), change "Test" to "Factor" & stuff
> the file into George's program - without even
> bothering to check the factoring limits. Those that
> are already factored deep enough just get thrown out
> straight away.
>
> This is even more the case since v20 with P-1 factoring
> capability came out. Now one changes
>
> Test=<exponent>,<depth>
>
> to
>
> Pfactor=<exponent>,<depth>,0
>
> or
>
> DoubleCheck=<exponent>,<depth>
>
> to
>
> Pfactor=<exponent>,<depth>,1
>
> & waits for any neccessary trial factoring plus the
> P-1 factoring to be run.
>
> A P100 running (trial & P-1) factoring will easily
> keep a couple of PII-400s (or equivalent) busy running
> nothing but LL tests. The other advantage of doing it
> this way is that the the factoring system will report
> results to PrimeNet for you, automatically if you so
> choose.
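(The worktodo.ini edit Brian describes is mechanical enough to script. A sketch, assuming the v20 line formats quoted above:)

```python
# Turn manual-testing assignments into P-1 factoring assignments:
#   Test=<exponent>,<depth>        -> Pfactor=<exponent>,<depth>,0
#   DoubleCheck=<exponent>,<depth> -> Pfactor=<exponent>,<depth>,1
def to_pfactor(line):
    line = line.strip()
    if line.startswith("Test="):
        return "Pfactor=" + line[len("Test="):] + ",0"
    if line.startswith("DoubleCheck="):
        return "Pfactor=" + line[len("DoubleCheck="):] + ",1"
    return line  # leave any other lines untouched

print(to_pfactor("Test=5083600,64"))         # -> Pfactor=5083600,64,0
print(to_pfactor("DoubleCheck=3100000,64"))  # -> Pfactor=3100000,64,1
```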

Sounds like the right way to go to me, unless one has a
fast Alpha and really wants a thrills-a-minute kind of
ride. Note that the 21064 and 21164 are not as fast at
factoring as the 21264, but it seems a shame to have a
big fat L2 cache (the 21264s typically have 4 or 8MB)
and not use it (Mfactor gains very little in speed from
a large L2, since on the 21264, the 64KB L1 cache is
already big enough to hold a decently large small-primes
sieving table).

Come to think of it, factoring would be an excellent
application for a 21264 without an L2 cache, but I don't
know if they come that way (except perhaps in a
massively parallel setting, where the problem of
maintaining cache coherency in a multilevel cache
hierarchy often is nearly intractable). Then again, if
one turned a MP machine to factoring, GIMPS might soon
run out of factoring assignments, which would not please
those with 486s and other slower CPUs whose only useful
contribution to GIMPS is factoring.

Cheers,
- -Ernst


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 20 Apr 2000 15:40:23 EDT
From: "Nathan Russell" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: 65 bits (was Re: factoring)

(much snippage dealing with factoring on behalf of non-PC machines)

>Then again, if
>one turned a MP machine to factoring, GIMPS might soon
>run out of factoring assignments, which would not please
>those with 486s and other slower CPUs whose only useful
>contribution to GIMPS is factoring.
>
>Cheers,
>-Ernst

It will only get worse for the 486 users when PrimeNet begins handing out 
exponents that will need factoring to 65 bits.  Of course, they can use 
FactorOverride, but that's not helpful to the network as a whole.

For those who don't know, due to CPU design reasons that I don't claim to 
understand, factoring to 65 bits takes easily four or five times as long as 
factoring to 64 bits.  In version 19, it was set to take place for exponents 
13.38M and up.  I believe PrimeNet will reach this point in a few months.

Currently, PrimeNet is set to award half credit for factoring.  Should that 
be adjusted once the LL testers begin to catch up with the factorers, and it 
becomes increasingly difficult for non-PC users to find fully factored 
assignments?

Nathan
______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com


------------------------------

Date: Thu, 20 Apr 2000 19:41:50 +0100
From: "McMac" <[EMAIL PROTECTED]>
Subject: Mersenne: Another distributed project

OK, so I know this might not be that warmly welcomed here, but a
few people have expressed concerns that while finding new Mersennes
is interesting, it has very limited practical value.

A while ago I heard about another distributed project called Casino-21
that would use people's PCs to run climate models and use the results
of these thousands of models to find out what is most likely to happen
over the next 50 years. The run time would be approx. 6 months minimum,
with high memory and disk space requirements as well. I said that I
was interested in the program and then forgot about it over the next
few months.

They have just sent me (well, not just me) an e-mail saying things are
still moving along, and they have 20,000-odd e-mail addresses showing
interest in the program.

It's certainly an interesting idea, and one with some very real
consequences for science, politics and our general well-being. If
you're interested the site is at http://www.climate-dynamics.rl.ac.uk

Clients won't be available for some time (and probably only for Linux
and Windows this year) but I think I might move to their project when
they do arrive.

McMac
When everything's coming your way, you're in the wrong lane.


------------------------------

Date: Thu, 20 Apr 2000 22:51:26 +0100
From: "Siegmar Szlavik" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Old HDs

On Wed, 19 Apr 2000 23:21:54 EDT, [EMAIL PROTECTED] wrote:

>Anyone remember the really old drives that needed to be told to park their 
>heads?  I seem to remember having to do that to my 286's 20MB HD.
>
"remember" ? I still *use* one... in an old Amstrad-1640 PC (8MHz 8086 CPU)
Its 20MB HD is so slow, the maximum transfer rate is 72 KB/s (yes, 72 and
yes KB/s), which means it would take almost 2 minutes to write an 8 MB file
to disk. :-)

uuh... It's getting very OT now... back to Mersenne stuff:

I have a question regarding the PrimeNet server: Is it possible to 'force'
the server to assign a specific free(!) exponent to me? I added some free
exponents for factoring to the worktodo file, did a manual communication
with the server afterwards, and got an error message saying that the
exponents aren't assigned to me... any ideas?

Siegmar



------------------------------

Date: Thu, 20 Apr 2000 16:12:12 -0700
From: Eric Hahn <[EMAIL PROTECTED]>
Subject: Re: Mersenne: 65 bits (was Re: factoring)

Nathan Russell wrote:
>It will only get worse for the 486 users when PrimeNet begins
>handing out exponents that will need factoring to 65 bits.  Of
>course, they can use FactorOverride, but that's not helpful to
>the network as a whole.

However, the factoroverride switch is not designed for use with
the PrimeNet server.  For reasons unknown, the server does not
release the exponent upon report of factoring being done when
using the switch.  That's why there was a complaint a while back
on the list that somebody had several thousand factoring
assignments out.

>For those who don't know, due to CPU design reasons that I
>don't claim to understand, factoring to 65 bits takes easily
>four or five times as long as factoring to 64 bits.  In
>version 19, it was set to take place for exponents 13.38M and
>up.  I believe PrimeNet will reach this point in a few months.
 
While I had calculated it at 4.5 times as long (2x for the same
percentage completion, and 2.25x for the iteration time), I've
been getting ~4.2x as long.  For some reason, IPS assigned me
a couple dozen exponents just above 13.38M, and what would
normally have taken me 38 hrs. to 2^64 has been taking
~160 hrs. to 2^65...

For comparison's sake, I'll note this:  I've had my system set
up to output once every 60 secs (for 63- and 64-bit factors).
At this point it was getting a % complete of 0.132% every
~59.98 seconds for 2^64 (64-bit factors), and a % complete of
0.066% every ~136.20 seconds for 2^65 (65-bit factors).
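
Eric's progress-rate figures can be sanity-checked against his 2 x
2.25 = 4.5x estimate by scaling each "% complete per interval" up to
a full pass. The comparison below is my arithmetic on the numbers he
quotes, not anything from his setup:

```python
# Extrapolate seconds-per-full-pass from the quoted progress rates:
# 0.132% per ~59.98 s for the 64-bit pass, 0.066% per ~136.20 s for
# the 65-bit pass.
t64 = 59.98 / 0.00132    # seconds to finish the 64-bit pass at that rate
t65 = 136.20 / 0.00066   # seconds to finish the 65-bit pass at that rate
print(round(t65 / t64, 2))   # -> 4.54, close to the theoretical 4.5x
```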


Eric 



------------------------------

Date: Thu, 20 Apr 2000 23:24:09 -0000
From: "Brian J. Beesley" <[EMAIL PROTECTED]>
Subject: Re: Mersenne: Re: factoring

On 20 Apr 00, at 15:07, [EMAIL PROTECTED] wrote:

> Come to think of it, factoring would be an excellent
> application for a 21264 without an L2 cache, but I don't
> know if they come that way (except perhaps in a
> massively parallel setting, where the problem of
> maintaining cache coherency in a multilevel cache
> hierarchy often is nearly intractable).

Either the L2 cache is on the chip, a la Celeron/PIIIE, or it's on a 
board - either in a cartridge, a la PII, or on the mainboard, a la 
Pentium Classic. If the L2 cache is off-chip, then clearly systems 
without any L2 cache can be manufactured. If it's on-chip, 
presumably you can still turn it off - though you're then 
stuck with the manufacturing cost.

BTW the linux SMP kernel seems to do a pretty good job of managing 
"n" processors with individual caches, by weighting the process 
queues in a manner which allows for the cost of resuming a process on 
a different processor, thereby losing the cache contents. 

> Then again, if
> one turned a MP machine to factoring, GIMPS might soon
> run out of factoring assignments,

Who said we could ever run out? There's certainly an infinite supply 
of wholly untouched Mersenne numbers out there - whether or not the 
number of Mersenne primes is finite!

Since "advanced" factoring techniques like P-1 and ECM have greater 
memory demands than trial factoring (in fact, stage 2 of P-1 seems to 
stress the memory subsystem even more than LL testing the same 
exponent), this type of work would surely also be appropriate to 
large cache systems.

BTW I'm wondering whether the introduction of P-1 means that the 
trial factoring depth should be _reduced_. The last bit in depth 
costs half the total time involved in trial factoring yet gains only 
a small percentage of extra factors, and some of these would in any 
case be found by P-1. 

I suppose it's probably better to leave things as they are, at any 
rate until P-1 following completion of trial factoring becomes 
"compulsory".

As an alternative to the last bit depth of trial factoring, we could 
also consider implementing Pollard's rho method. This would seem to 
be reasonably efficient at finding factors round about that size and 
yet does not have the heavy memory overheads associated with P-1.
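
For reference, the rho method Brian mentions is tiny to implement. This
is a generic textbook sketch (Floyd cycle-finding), not GIMPS code, and
a serious version would at least use Brent's variant and batch the gcds:

```python
# Pollard's rho with Floyd cycle-finding: walk x (tortoise) and y
# (hare) through the pseudo-random map v -> v^2 + c mod n until the
# gcd of their difference with n reveals a factor.  Memory use is a
# handful of integers, in contrast to P-1 stage 2.
from math import gcd

def pollard_rho(n, c=1):
    """Return a nontrivial factor of composite n, or None on failure."""
    x = y = 2
    d = 1
    f = lambda v: (v * v + c) % n
    while d == 1:
        x = f(x)          # tortoise: one step
        y = f(f(y))       # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None   # d == n means retry with another c

print(pollard_rho(8051))  # the classic example: prints 97 (8051 = 83 * 97)
```

On failure (d == n) one simply restarts with a different constant c.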


Regards
Brian Beesley

------------------------------

End of Mersenne Digest V1 #722
******************************
