Re: Mersenne: double-check mismatches

2004-01-16 Thread Daran
On Thu, Jan 15, 2004 at 07:15:46PM +0000, Brian J. Beesley wrote:

 ...matching 
 residuals mean that the chance of an error getting into the database as a 
 result of a computational error is of the order of 1 in 10^20.

That's per exponent, isn't it?  The chance that one of the roughly quarter
million status-doublechecked exponents is in error is about five orders of
magnitude higher (roughly 2.5*10^5 * 10^-20 ~= 2.5*10^-15).

Still acceptable, or at least a minor concern in comparison to the other
security issues.

 Regards
 Brian Beesley

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Howdy

2003-08-19 Thread Daran
On Tue, Aug 19, 2003 at 09:25:30PM -0700, Aaron wrote:

 Guess who *finally* got his computer equipment back from the EffBeeEye?
 Yuppers... Nearly 5 years later and I'm now the proud re-owner of some
 vintage 1998 equipment.  Well, the DLT 35/70 drive is still useful... Glad
 I got that back.

The Feds seized your computer?  Gosh, I had no idea they took exponent
poaching so seriously.  ;-)


Humour aside, I'm intrigued.  Why was your kit taken?

Regards

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: 40th Mersenne Prime Found (probably)!

2003-06-02 Thread Daran
- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, June 01, 2003 5:06 PM
Subject: Mersenne: 40th Mersenne Prime Found (probably)!


 Hello all,

 This morning a new Mersenne Prime was reported to the primenet server.
 It will be the new largest known prime but is less than 10 million digits.

 For those not familiar with the process regarding new prime discoveries,
 here is what will happen.  The discovery will be verified using a
 different program and different hardware.

I presume you're doing an unofficial DC on it right now.  Can you give us
some idea when this will be complete?

 Enjoy,
 George

Daran G




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Old files in my mprime dirs

2003-03-30 Thread Daran
On Sun, Mar 30, 2003 at 01:04:10PM -0800, Chris Marble wrote:

 I've been running mprime on Linux for a bit less than 2 years now.
 I was looking at the directory on one box and found some ancient files:
 mE013037.001
 mF320219.001
 mF550789.001
 mF614243.001
 mF687599.001
 
 These are from a dual CPU box.  The leading m suggests they're from
 factoring but file sizes are 3.5 to 3.9 MB...

They'll be P-1 save files.

 There's been a database
 sync so I don't have results for 14013037 (assuming E=14, F=15) on my page.
 Where would I check to see that all these exponents had been completed so
 I could delete these old files?

They've probably had a first time test, but are unlikely to have been DCed. 
The file ftp://mersenne.org/lucas_v.zip will tell you, but I don't have one which
is up-to-date, and it's a multimegabyte download.

I do have the latest pminus1.txt file, though, and these exponents
aren't listed.

 What happened that these files were left around a year ago?

They shouldn't have an extension, so I'm guessing.  Backup and restore?
Recovered from a damaged filesystem?

My recommendation would be to complete the P-1 as though
they were DCs.  Just remove the extensions and put Pfactor=exponent,66,1
into your P-1 factory's worktodo file.
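
Concretely (assuming the E=14/F=15 reading of the filenames above is right),
the added lines would look something like:

Pfactor=14013037,66,1
Pfactor=15320219,66,1
Pfactor=15550789,66,1
Pfactor=15614243,66,1
Pfactor=15687599,66,1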

 -- 
   [EMAIL PROTECTED] - HMC UNIX Systems Manager
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Old files in my mprime dirs

2003-03-30 Thread Daran
On Sun, Mar 30, 2003 at 09:58:15PM -0800, Eric Hahn wrote:

 Daran wrote:
 Chris Marble wrote:

 Actually the .001 extension would be expected... especially
 if running MPrime on a dual CPU box... with one instance using
 the -A1 switch...  Using the -A switch will put an extension
 with the instance after it... No -A switch... no extension...

OK.  I've never used a Dual CPU Box.

 However... to complete the P-1s... assuming they were initiated
 by a L-L test... adding Pfactor=exponent,66 lines to the
 WORK0001.INI would do it... no ,1 is needed... since it would
 indicate the P-1 is complete...

Not for Pfactor assignments it doesn't.  Here's what undoc.txt says

#You can do P-1 factoring by adding lines to worktodo.ini:
#   Pfactor=exponent,how_far_factored,has_been_LL_tested_once
#For example, Pfactor=1157,64,0

I have verified that this is correct.  A '1' in the last place causes it to
choose DC limits.

One question:  If Chris is going to finish them on a different, single-CPU
machine, would he have to remove the extension?

 Eric

Daran
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: P-1 on PIII or P4?

2003-03-10 Thread Daran
On Mon, Mar 10, 2003 at 09:05:41PM +0000, Brian J. Beesley wrote:

 On Monday 10 March 2003 07:49, Daran wrote:

 I just tried Test=8907359,64,0 on two systems - an Athlon XP 1700+ and a 
 P4-2533, both running mprime v23.2 with 384 MB memory configured (out of 512 
 MB total in the system). These were fresh installations, I did nothing apart 
 from adding SelfTest448Passed=1 to local.ini to save running the selftest.
 
 The Athlon system picked B1=105000, B2=1995000 whilst the P4 picked 
 B1=105000, B2=2126250. So it seems that P4 is picking a significantly but not 
 grossly higher B2 value.

My Duron 800 picks values identical to your Athlon with 384MB allowed.

No change at 400MB

At 420MB B2 goes up to 2021250, still lower than your B2 value.

At 504MB B2 remains at 2021250.

I don't think George's '1 or 2 extra temporaries' theory stands up.

 Regards
 Brian Beesley

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Mersenne: Optimal choice of E in P-1 computations

2003-03-09 Thread Daran
Some time ago I raised the question on this list, whether the client's
choice of the E parameter was optimal in P-1 calculations.  I gave a
somewhat handwavy argument in support of the claim that IF it is worthwhile
choosing E=4 over E=2, i.e., if the benefits in additional factors found
outweigh the cost in extra processing time[1], THEN it should also be
worthwhile choosing E=6, and maybe even E=8 or E=12.  I also argued on
empirical grounds that, for D=420 at least, E=4 and E=12 would need to yield
roughly 2% and 10% respectively more stage 2 factors than E=2 for this to be
worthwhile.[2]

From a theoretical point of view, Peter Montgomery's thesis, as suggested by
Alex Kruppa, is clearly relevant.  (I don't have the URL to hand, but
someone is sure to post it.) Unfortunately it's somewhat beyond my
mathematical ability to grasp.  Therefore, I have concentrated on attempting
to collect empirical evidence, as follows.

I have done a great many P-1s of doublecheck assignments with E=4.  Out of
the 35 stage 2 factors found[3], 1 was 'extended', i.e., would not have been
found using E=2.

In the hope of more quickly collecting data, I have also redone, to 'first
time test' limits, every entry in pminus1.txt which had previously been done to
B1=B2=1000, 2000, and 3000.  For these exponents, all in the 1M-3M range,
the client was able to choose a plan with E=12.  Unfortunately, I found far
fewer factors in either stage 1 or stage 2 than I would expect, which
suggests to me that exponents in this range have had additional factoring
work (possibly ECM) not recorded in the file.  Of particular concern is the
possibility that in addition to reducing the number of factors available for
me to find, it may have upset the balance between 'normal' and 'extended'
P-1 factors - the very ratio I am trying to measure.  Consequently I am
inclined to exclude these results, though I report them for completeness:

Of the 10 stage 2 factors found, 2 were extended.  They are:-

P-1 found a factor in stage #2, B1=20000, B2=395000.
UID: daran/1, M1231753 has a factor: 591108149622595096537
591108149622595096537-1 = 2^3*3*11*743*2689*909829*1231753

P-1 found a factor in stage #2, B1=30000, B2=547500.
UID: daran/1, M2008553 has a factor: 9050052090266148529
9050052090266148529-1 = 2^4*3^2*7*71*79*796933*2008553

Finally, with George's permission, I have done a small number of P-1s of
doublechecking assignments with a client modified to use D=420, E=12 - a
plan not available with the standard clients.  So far, I have found only one
stage 2 factor, which was not extended.  I will continue to search for more.

Of particular interest with E=12 extended factors is whether they would
have been found with a lower value of E.  E=12 will find all factors that
E=4 and E=6 would have found, and some not found by any lower E.  My
handwavy argument predicted that E=6 should yield on average twice as many
extended factors as E=4.  I'm hoping that someone (Alex Kruppa?) might
have a tool to analyse extended factors to determine their minimal E.  If
not, I will write one.

In conclusion, the evidence I have been able to gather, though statistically
insignificant, does not tend to exclude the hypothesis that a higher E would
be worthwhile.

[1]There is also a memory cost, but this is low in comparison with the costs
associated with the D parameter.  For example, for an exponent in the
7779000-9071000 range, in which I am working, D=420, E=4 consumes 446MB, and
because of the client's conservative programming, 465MB must be 'allowed'
before it will choose this plan.  The next level down is D=210, E=4 which
requires 299MB.  Using the modified client with E=12 adds an extra 37MB to
these requirements, which is memory available and going spare if the amount
allowed is between about 350MB and 465MB.

Another way to look at this is to say that there is no memory cost
associated with increasing E for a given value of D.  The memory is either
available, or it is not.

[2]Assuming that the current algorithm for determining optimal B1 and B2
values is accurate, and that this routine would be modified to make it
aware of the costs and benefits of differing values of E.

[3]This total includes both prime components of a composite factor found in
a single P-1 run, since neither was extended.

Regards

Daran
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: P-1 on PIII or P4?

2003-03-09 Thread Daran
On Fri, Mar 07, 2003 at 09:51:33PM -0800, Chris Marble wrote:

 Daran wrote:
  
  On Thu, Mar 06, 2003 at 08:12:31PM -0800, Chris Marble wrote:
  
   Daran wrote:

 I like my stats but I could certainly devote 1 machine out of 20 to this.

If you're going to use one machine to feed the others, then it won't harm
your stats at all.  Quite the contrary.

 Assume I've got 1GB of RAM.  Do the higher B2s mean I should use a P4 rather
 than a P3 for this task?

I don't know, because I don't know why it gets a higher B2.

B1 and B2 are supposed to be chosen by the client so that the cost/benefit
ratio is optimal.  Does this mean that P4s choose B2 values which are too
high?  Or does everything else choose values too low?  Or is there some
reason I can't think of, why higher values might be appropriate for a P4?

In fact, I'm not even sure it does get a higher B2 - the apparent difference
could be, as Brian suggested, due to differences between versions.  I don't
have access to a P4, so I can't do any testing, but I'd appreciate it if you
or someone else could try starting a P-1 on the same exponent (not in one of
the ranges where it would get a different FFT length) on two different
machines, with the same memory allowed.  You would not need to complete the
runs.  You could abort the tests as soon as they've reported their chosen
limits.

 Would I unreserve all the exponents that are already P-1 complete?
 If I don't change the DoubleCheck into Pfactor then couldn't I just let
 the exponent run and then sometime after P-1 is done move the entry and
 the 2 tmp files over to another machine to finish it off?

If you're going to feed your other machines from this one, then obviously
you won't need to unreserve the exponents they need.  But there's an easier
way to do this.  Put SequentialWorkToDo=0 in prime.ini, then, so long as it
never runs out of P-1 work to do, it will never start a first-time or
doublecheck LL, and there will be no temporary files to move.  I also
suggest putting SkipTrialFactoring=1 in prime.ini.
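
For example, the relevant prime.ini fragment (just these two lines; everything
else left at its defaults) would be:

SequentialWorkToDo=0
SkipTrialFactoring=1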

 That sounds like more work than I care to do...

I agree that with 20 boxes, the work would be onerous.

 ...I can see having 1 machine
 do P-1 on lots of double-checks.

That would be well worth it.  Since one box will *easily* feed the other
twenty or so, you will have to decide whether to unreserve the exponents you
P-1 beyond your needs, or occasionally let that box test (or start testing)
one.

You may find a better match between your rate of production of P-1 complete
exponents, and your rate of consumption, if you do first-time testing.

[...]

 As an mprime user I edit the local.ini file all the time.  Per your notes
 I upped *Memory to 466.

That will certainly help exponents below 9071000 on a P3, or 8908000 on a P4. 
The current DC level is now over 9170000, so I doubt this will help much
(though of course, it won't harm, either).  I haven't tried.  I'm still
getting enough sub 9071000 expiries.

 -- 
   [EMAIL PROTECTED] - HMC UNIX Systems Manager

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: P-1 on PIII or P4?

2003-03-07 Thread Daran
On Thu, Mar 06, 2003 at 08:12:31PM -0800, Chris Marble wrote:

 Daran wrote:
  
  Whichever machine you choose for P-1, always give it absolutely as much
  memory as you can without thrashing.  There is an upper limit to how much it
  will use, but this is probably in the gigabytes for exponents in even the
  current DC range.
 
 So I should use the PIII with 1 3/4 GB of RAM to do nothing but P-1.

This depends upon what it is you want to maximise.  If it's your effective
contribution to the project, then yes.  Absolutely!  This is what I do on my
Duron 800 with a 'mere' 1/2GB.  The idea is that the deep and efficient P-1
you do replaces the probably much less effective effort that the final
recipient of the exponent would otherwise have made (or not made, in the
case of a few very old clients that might still be running).  I've not done
any testing, but I'm pretty sure that it would be worthwhile to put any
machine with more than about 250MB available to exclusive P-1 use.

On the other hand, you will do your ranking in the producer tables no
favours if you go down this route.

 It's an
 older Xeon with 2MB cache.  Will that help too?

You'll have to ask George if there is a codepath optimised for this
processor.  But whether there is or there isn't, should affect only the
absolute speed, not the trade-off between P-1 and LL testing.

I can't see how a 2MB cache can do any harm, though.

 How would I do this?  I see the following in undoc.txt:
 
 You can do P-1 factoring by adding lines to worktodo.ini:
 Pfactor=exponent,how_far_factored,has_been_LL_tested_once
 For example, Pfactor=1157,64,0

Unfortunately, pure P-1 work is not supported by the Primenet server, so
requires a lot of ongoing administration by the user.  First you need to
decide which range of exponents is optimal for your system(s).  (I'll
discuss this below).  Then you need to obtain *a lot* of exponents in that
range to test.  I do roughly eighty P-1s on DC exponents in the ten days or
so it would take me to do a single LL.

The easiest way to get your exponents is probably to email George and tell
him what you want.  Alternatively, if the server is currently making
assignments in your desired range, then you could obtain them by setting
'Always have this many days of work queued up' to 90 days - manual
communication to get some exponents - cut and past from worktodo.ini to a
worktodo.saved file - manual communication to get some more - cut and past -
etc.  This is what I do.

The result of this will be a worktodo.saved file with a lot of entries that
look like this

DoubleCheck=8744819,64,0
DoubleCheck=8774009,64,1
...

(or 'Test=...' etc.)  Now copy some of these back to your worktodo.ini file,
delete every entry ending in a 1 (These ones are already P-1 complete),
change 'DoubleCheck=' or 'Test=' into 'Pfactor=', and change the 0 at the
end to a 1 if the assignment was a 'DoubleCheck'.
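
So with the two sample entries above, the edited worktodo.ini would end up
containing just:

Pfactor=8744819,64,1

i.e. the DoubleCheck=8744819,64,0 line becomes a Pfactor line with the
trailing 0 changed to a 1, and the DoubleCheck=8774009,64,1 line is dropped
because its P-1 has already been done.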

When you next contact the server, any completed work will be reported, but
the assignments will not be unreserved, unless you act to make this happen. 
The easiest way to do this is to set 'Always have this many days of work
queued up' to 1 day, and copy your completed exponents from your
worktodo.saved file back to your worktodo.ini (not forgetting any that were
complete when you got them).  You do not need to unreserve exponents
obtained directly from George.

Like I said, it's *a lot* of user administration.  It's not nearly as
complicated as it sounds, once you get into the routine, but it's definitely
not something you can set up, then forget about.

If you're willing to do all this, then there's another optimisation you
might consider.  Since it's only stage 2 that requires the memory, you could
devote your best machine(s) to this task, using your other boxes to feed
them by doing stage 1.  This is assuming that they're networked together. 
Moving multimegabyte data files via Floppy Disk Transfer Protocol is not
recommended.

[...]

  If you are testing an exponent which is greater than an entry in the fifth
  column, but less than the corresponding entry in the third column, then
  avoid using a P4.  This applies to all types of work.

Actually it's worse than this.  The limits are soft, so if you are testing
an exponent *slightly* less than an entry in column 5, or *slightly*
greater than one in column 3, then you should avoid a P4.


Choice of exponent range


Stage two's memory requirements are not continuous.  This remark is probably
best illustrated with an example: on my system, when stage 2-ing an exponent
in the range 7770000 through 9071000, the program uses 448MB.  If that much
memory isn't available, then it uses 241MB.  If that's out of range, then
the next level down is 199MB, and so on.  There are certainly usage levels
higher than I can give it.

The benefits of using the higher memory levels are threefold.

1.  The algorithm runs faster.
2.  The program responds by deepening the search

Re: Mersenne: P-1 on PIII or P4?

2003-03-06 Thread Daran
- Original Message -
From: Chris Marble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, March 04, 2003 4:00 PM
Subject: Mersenne: P-1 on PIII or P4?

 I've got a couple of P4s that I can use on weekends.  I've been using them
 to finish off exponents that my PIIIs were working on.  Is that the right
 order?  P-1 on the PIII and then the rest on the P4.  I want to maximize
 my output.

Hmmm.  That's an intriguing question.

Based upon what I know of the algorithms involved, it *ought* to be the case
that you should do any P-1 work on the machine which can give it the most
memory, irrespective of processor type.

However, some time ago, I was given some information on the actual P-1
bounds chosen for exponents of various sizes, running on systems of various
processor/memory configurations.  It turns out that P4s choose *much deeper*
P-1 bounds than do other processors.  For example:

8233409,63,0,Robreid,done,,40000,450000,,Athlon,1.0/1.3,90
8234243,63,0,Robreid,done,,40000,450000,,Celeron,540,80
8234257,63,0,Robreid,done,,45000,742500,,P4,1.4,100

The last figure is the amount of available memory.  The differences between
80MB and 100MB, and between 8233409 and 8234257 are too small to account for
the near doubling in the B2 bound in the case of a P4.

Since I do not understand why this should be the case, I don't know for
certain, but it looks like a P4 is better for P-1.

Whichever machine you choose for P-1, always give it absolutely as much
memory as you can without thrashing.  There is an upper limit to how much it
will use, but this is probably in the gigabytes for exponents in even the
current DC range.  Memory is not relevant for factorisation, the actual LL
test, or stage 1 of the P-1.

It used to be the case that TF should be avoided on a P4, but that part of
this processor's code has been improved in recent versions, so I don't know
if this is still the case.  If you ever get an exponent that requires both
P-1 and extra TF, do the P-1 before the last bit of TF.  This doesn't alter
the likelihood of finding a factor, but if you do find one, on average you
will find it earlier, and for less work.

There are a number of ranges of exponent sizes where it is better to avoid
using P4s.  George posted the following table some time ago (Best viewed
with a fixed width font.)

FFT      v21       v22.8     v21 SSE2  v22.8 SSE2
262144    5255000   5255000   5185000   5158000
327680    6520000   6545000   6465000   6421000
393216    7760000   7779000   7690000   7651000
458752    9040000   9071000   8970000   8908000
524288   10330000  10380000  10240000  10180000
655360   12830000  12890000  12720000  12650000
786432   15300000  15340000  15160000  15070000
917504   17850000  17890000  17660000  17550000
1048576  20400000  20460000  20180000  20050000
1310720  25350000  25390000  25090000  24930000
1572864  30150000  30190000  29920000  29690000
1835008  35100000  35200000  34860000  34560000
2097152  40250000  40300000  39780000  39500000
2621440  50000000  50020000  49350000  49100000
3145728  59400000  59510000  58920000  58520000
3670016  69100000  69360000  68650000  68130000
4194304  79300000  79300000  78360000  77910000

If you are testing an exponent which is greater than an entry in the fifth
column, but less than the corresponding entry in the third column, then
avoid using a P4.  This applies to all types of work.
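
If you want to script the check, something along these lines would do (only
the first few FFT sizes shown, taken from the v22.8 and v22.8 SSE2 columns of
the table; treat the exact crossover values as approximate):

#include <stdio.h>

/* v22.8 (third column) and v22.8 SSE2 (fifth column) exponent limits. */
static const long limit_v228[]      = { 5255000, 6545000, 7779000, 9071000, 10380000 };
static const long limit_v228_sse2[] = { 5158000, 6421000, 7651000, 8908000, 10180000 };
#define NLIMITS (sizeof(limit_v228) / sizeof(limit_v228[0]))

/* Returns 1 if the exponent needs a larger FFT on a P4 (SSE2) than on a
   non-SSE2 machine, i.e. the exponent is one to keep away from a P4. */
int avoid_p4(long exponent)
{
    for (size_t i = 0; i < NLIMITS; i++)
        if (exponent > limit_v228_sse2[i] && exponent < limit_v228[i])
            return 1;
    return 0;
}

int main(void)
{
    printf("%d %d\n", avoid_p4(9000000), avoid_p4(8000000));   /* prints 1 0 */
    return 0;
}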

Where the considerations discussed above conflict, I don't know what the
balance is between them.

HTH

 --
   [EMAIL PROTECTED] - HMC UNIX Systems Manager

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


Re: Mersenne: Re: Mersenne Digest V1 #1038

2003-01-26 Thread Daran
On Sun, Jan 26, 2003 at 10:01:26PM +0000, Gordon Spence wrote:

 1. Personally, I don't see any harm in poaching per se, I have had it 
 happen to me. That's life and the way it goes. As I stated earlier, when 
 information is discovered humanity as a whole gains. Period. Look at the 
 big picture.

I don't quite see what 'humanity as a whole gains' when I return a negative
result, or even a new factor, to the server.  The biggest picture in which
what we do has any significance at all, is the GIMPS project as a whole.

And poaching harms the project by

a.  Making it less fun, and
b.  Possibly by putting off participants.

[...]

 4. Get it into perspective. The number of times this actually happens is 
 miniscule. Out of the millions we have checked what are the poached 
 items? Dozens, a few hundred??

It's a thorn in the flesh.  Objectively, the effect of poaching is
insignificant, but its irritation value is out of proportion - witness the
way the issue comes up time and time again in this list.
 
 5. It has correctly been pointed out that life doesn't end if a milestone 
 slips. Well guess what? That is a double-edged sword - life doesn't end if 
 an exponent gets poached either.

But a participant's contribution might.
 
 6. Now go back and read the license.txt file again, and this time actually 
 take the time to read and understand it. It specifically excludes liability 
 in the event of poaching. It does *NOT* say the you mustn't do it. The only 
 rules that you agree to be bound by are those in deciding how the 
 cash-prize is split up.

Isn't that what we're discussing?  Changes to the rules and procedures.
 
 Gordon

Daran
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #1038

2003-01-26 Thread Daran
On Sun, Jan 26, 2003 at 03:03:53PM -0800, spike66 wrote:

 spike
 
 (Please, fellow math lovers, do let us get
 things in the proper perspective here.  GIMPS
 is just for fun.)

Isn't that the point?  Poaching spoils the fun for the poachee.

Regards

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Just landed a whopper.

2003-01-19 Thread Daran
In an attempt to obtain more empirical data to support/contradict my
contention that we should be using higher values of E where possible in P-1
testing, I have been re-factorising large numbers of small Mersennes which
have had inadequate P-1s to date - so far, just those that had previously
been tested to no higher than B1=B2=3000.  I will report my findings in due
course.

Imagine my delight at finding the following enormous factor in my
results.txt file:-

[Sat Jan 18 20:21:28 2003]
P-1 found a factor in stage #2, B1=30000, B2=547500.
UID: daran/1, M2010413 has a factor: 18636782044954323215047758147550989193

At 38 digits, this dwarfs my previous largest find at 26 digits.

As I suspected, it turned out to be composite:

26270256567937408841 * 709425200959031873

Neither is non-smooth to the specified limits, so I will count this as two
negative results in my final tally.
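
(For anyone who wants to check this kind of thing themselves, here is a rough
sketch of the test: factor k = (f-1)/2p for each prime component f and see
whether every prime factor is <= B1, with at most one in (B1, B2].  It assumes
GCC's unsigned __int128, since the larger component doesn't fit in 64 bits,
and takes the reported factors on trust.)

#include <stdio.h>

/* Trial-divide k and print its prime factorisation; smoothness to B1/B2 can
   then be read off by eye. */
static void factor_k(unsigned long long k)
{
    for (unsigned long long d = 2; d * d <= k; d++)
        while (k % d == 0) { printf(" %llu", d); k /= d; }
    if (k > 1) printf(" %llu", k);
    printf("\n");
}

int main(void)
{
    const unsigned long long p = 2010413;
    /* The two prime components above; the first is just over 64 bits. */
    unsigned __int128 f1 = (unsigned __int128)26270256567ULL * 1000000000ULL
                           + 937408841ULL;
    unsigned __int128 f2 = 709425200959031873ULL;

    printf("k1 ="); factor_k((unsigned long long)((f1 - 1) / (2 * (unsigned __int128)p)));
    printf("k2 ="); factor_k((unsigned long long)((f2 - 1) / (2 * (unsigned __int128)p)));
    return 0;
}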

The question is, how are multiple factors handled in the database?

Regards

Daran G.
_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-12-16 Thread Daran
- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, December 11, 2002 12:11 PM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 Daran wrote:

 As far as I can see it is a regular PostScript. I don't know if the
 extra l indicates a variant of the format, but the file behaves
 normally in ghostscript/when printing.

OK I can read it now.  Understanding it is another matter entirely.

   However, primitive factors of x^n-y^n must be ==1 (mod n) and so the
   factors do not behave particularly randomly at all...
   
   
This doesn't make sense.  Any primitive factor f of x^n-y^n is a
primitive factor of x^(kn)-y^(kn) for any k, so the above would imply
that f==1 (mod kn) for any k.
   
What am I missing?

 A prime factor of x^n-y^n is called primitive if it divides no x^m-y^m
 with 0 < m < n. And thus, with x and y distinct (mod p) and neither a
 multiple of p,

Right.  I was confusing 'primitive' with 'irreducible'.

BTW, I don't think I've thanked you enough for your earlier replies on this
subject, which have contributed enormously to my understanding of the
problem.

[snip lengthy analysis which I will need to study]

   This causes clumping of primes included
   by Suyama's powers (some primes appearing often, others not at all)
   and reduces their efficiency...
   
   
Could we use this somehow?  Would it be worth skipping primes in the
B1-B2 range that were highly likely to crop up in a Suyama power?

 Yes, or compute which ones do actually appear and eliminate them from
 the stage 2 primes...

I was under the impression that this might be too expensive.

 ...Another idea is to exclude from stage 2 those primes
 that appear as factors of (x+y). For D=2310 and B2/B1=100, primes
 between 13 and 97 may divide (x+y) so that the cofactor is a prime in
 the stage 2 interval. That requires a bitfield of all the stage 2 primes
 that must be processed top down which makes getting the prime pairing
 right more difficult. I tried that once in my own P-1 factorer but it
 had a bug that I didn't find yet. The saving was iirc ~10% of the
 multiplies in stage 2.

In my experience the program chooses B2 10-20 times the size of B1.

 All this can be generalised. The goal is to choose a subset of NxN
 (pairs of positive integers) so that the efficiency is maximal for the
 factor we hope to find. Of course we won't know the precise factor we
 want to find, but we can integrate over all remaining candidate factor
 weighted with their respective probability of dividing the input number.

I love this idea!

 A simple algorithm would be to take all the pairs (x,y) with x=mD for a
 highly composite D and y < D, gcd(y,D)=1 (as in the current stage 2, but
 not only pairs where x-y is a prime), factoring all the x^n-y^n over
 primes > B1 (and below some other bound)...

Call this other bound C.

 ...and making a bipartite graph of
 (x,y) pairs connected with their factors. A greedy algorithm would
 choose the best (x,y) pair (i.e. where the sum of reciprocals of the
 connected prime factors is maximal) and remove those prime factors and
 their edges from the graph. Repeat until the best remaining (x,y) pair
 is worse than some chosen limit.

Call this limit epsilon.

This assumes that each (x,y) pair costs the same, but in fact, there is a
cost associated with increasing x.

 I doubt that the greedy algo is optimal, but it should be pretty fast,
 except for the factoring part maybe.

I'm certain it wouldn't be.  For a start, if you don't take this extra cost
into account, and you choose C too high, then your largest x may be much
higher than is desirable.  If you do take the extra cost into account, by
attaching it to each x larger than any previous x, then you'll tend to fill
up your bit array from the bottom, and lose much of the benefit of the
smaller factors of the larger candidates.

Better would be to use a two-pass greedy algorithm.  The first using a very
large C >> 1/epsilon, but including the extra cost of large x.  Use the
result of this to reset C to be just large enough to accommodate the largest
x, then redo the algorithm ignoring that extra cost.

You might be able to improve it further by choosing a slightly larger C for
the second pass, than the one suggested by the first, or you could implement
a multipass algorithm as follows:-  Choose limits C >> 1/epsilon and B2
initially equal to B1.  Then for each pass, factor in the extra cost only
for those x (larger than any previous x) which are also larger than B2.  Use
the result of each pass to reset B2 to accommodate the largest x.  The
result should be independent of C, provided the latter is large enough.

I doubt that this would be optimal, even if iterated to equilibrium.  The
problem bears the hallmarks of computational intractability.
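
Just to make the greedy step concrete, here is a toy sketch (parameters
invented; D, E, B1, C and epsilon are placeholders, and the pairing below is
the naive one - nothing to do with Prime95's real stage 2 code):

#include <stdio.h>

#define D        30        /* highly composite, toy value */
#define E        6         /* Suyama power, toy value */
#define B1       100
#define XMAX     1050      /* largest x = mD considered */
#define CLIM     100000    /* 'C': largest prime tracked */
#define EPS      1e-4      /* 'epsilon': stop threshold */
#define MAXQ     16        /* max distinct tracked primes per pair */
#define MAXPAIRS 512

typedef struct { long x, y; long long q[MAXQ]; int nq, used; } Pair;

static Pair pairs[MAXPAIRS];
static int npairs;
static long long covered[MAXPAIRS * MAXQ];
static int ncovered;

static int is_covered(long long p)
{
    for (int i = 0; i < ncovered; i++) if (covered[i] == p) return 1;
    return 0;
}

static long gcd(long a, long b) { return b ? gcd(b, a % b) : a; }

int main(void)
{
    /* Build the pairs (x,y): x a multiple of D, y < D with gcd(y,D)=1, and
       record the prime factors of x^E - y^E lying in (B1, CLIM]. */
    for (long x = D * (B1 / D + 1); x <= XMAX; x += D)
        for (long y = 1; y < D; y++) {
            if (gcd(y, D) != 1) continue;
            Pair *pr = &pairs[npairs++];
            pr->x = x; pr->y = y; pr->nq = 0; pr->used = 0;
            long long v = 1, w = 1;
            for (int i = 0; i < E; i++) { v *= x; w *= y; }
            v -= w;                                   /* v = x^E - y^E */
            for (long long d = 2; d <= CLIM && d * d <= v; d++)
                while (v % d == 0) {
                    if (d > B1 && pr->nq < MAXQ &&
                        (pr->nq == 0 || pr->q[pr->nq - 1] != d))
                        pr->q[pr->nq++] = d;
                    v /= d;
                }
            if (v > B1 && v <= CLIM && pr->nq < MAXQ) pr->q[pr->nq++] = v;
        }

    /* Greedy: repeatedly take the pair whose not-yet-covered primes have the
       largest sum of reciprocals, until no pair is worth more than EPS. */
    for (;;) {
        double best = 0.0; int bi = -1;
        for (int i = 0; i < npairs; i++) {
            if (pairs[i].used) continue;
            double s = 0.0;
            for (int j = 0; j < pairs[i].nq; j++)
                if (!is_covered(pairs[i].q[j])) s += 1.0 / pairs[i].q[j];
            if (s > best) { best = s; bi = i; }
        }
        if (bi < 0 || best < EPS) break;
        pairs[bi].used = 1;
        for (int j = 0; j < pairs[bi].nq; j++)
            if (!is_covered(pairs[bi].q[j]))
                covered[ncovered++] = pairs[bi].q[j];
        printf("pick x=%ld y=%ld  (score %.6f)\n", pairs[bi].x, pairs[bi].y, best);
    }
    return 0;
}

A real version would of course also have to charge for increasing x, which is
exactly the wrinkle discussed above.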

 The so created plan can be stored and distributed to factoring clients.

You mean the bitmap of (x,y) pairs?  You'd need

Re: Mersenne: P-1 and non k-smooth factors

2002-12-06 Thread Daran
- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: George Woltman [EMAIL PROTECTED]
Cc: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, December 05, 2002 12:18 PM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 George Woltman wrote:
  At 10:31 PM 12/3/2002 +0000, Daran wrote:
 

  The analysis is more complex than this.  It really depends on the prime
  [...]
  I'd be greatly interested in such a study.

 Peter Montgomery's dissertation, An FFT Extension to the Elliptic Curve
 Method of Factorization,
 ftp://ftp.cwi.nl/pub/pmontgom/ucladissertation.psl.gz ,

What do I need to read a psl file?

 contains in chapter 5 an analysis of Suyama's powers and Dickson
 polynomials for the Brent-Suyama extension to stage 2.

 The number of algebraic factors in the Suyama/Dickson polynomial is the
 interesting part, i.e.

That was the conclusion I came to.

 (x^6-y^6) = (x-y)*(x+y)*(x^2+x*y+y^2)*(x^2-y*x+y^2)

 where (x-y) and (x+y) usually are both <= B2, so primes > B2 have two
 opportunities of appearing as prime factors of algebraic factors of the
 Suyama polynomial. If we assume that the values taken on by the
 algebraic factors of the poly behave like integers chosen uniformly at
 random, we could conclude that a prime pB2 appears in (x^6-y^6) with
 probability 1-(1-1/p)^2 ~= 2/p.

In addition to the remarks you go on to make, this will only be true for
primes < the algebraic factor; intuitively I feel it ought only to be true
for primes < sqrt(factor) ~= x (in the case of x^6-y^6).  All these will be
<= B2.

 However, primitive factors of x^n-y^n must be ==1 (mod n) and so the
 factors do not behave particularly randomly at all...

This doesn't make sense.  Any primitive factor f of x^n-y^n is a primitive
factor of x^(kn)-y^(kn) for any k, so the above would imply that f==1 (mod
kn) for any k.

What am I missing?

 Primes p where p-1
 has many factors in common with n have a better chance of appearing as
 factors, while those with p-1 coprime to n can only appear as factor of
 (x+y)(x-y) and are thus p <= B2...

Hence E should be chosen to be highly composite, which conclusion I had
already come to, from consideration of the number of algebraic factors.

 This causes clumping of primes included
 by Suyama's powers (some primes appearing often, others not at all) and
 reduces their efficiency...

Could we use this somehow?  Would it be worth skipping primes in the B1-B2
range that were highly likely to crop up in a Suyama power?

 ...Apparently Dickson polynomials do better, but
 I don't really know much about them.

 Montgomery's dissertation is probably a very good starting point if
 someone would like to investigate the optimal choice for Suyama's powers
 (or Dickson polynomials) for Prime95.

As soon as I can figure out how to read it.

 Alex

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-12-06 Thread Daran
- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, December 05, 2002 12:31 PM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 There is obviously a tradeoff here between increasing B2 and simplifying E
 and increasing E compensating for increased run time by lowering B2.
 However it does seem to be obvious that increasing E always has to be paid
 for in increased memory requirements.

It's the larger values of D, rather than E which use the most memory.  The
client rarely uses all it's allowed, except in the low memory situations.
For example, it needs 46 temps to run at D=150, E=2.  If only 45 temps were
available, the client, as currently configured, would run at D=120, E=2
using 38 temps.  But it could afford to run at D=120, E=6 (42 temps) or even
D=120, E=8 (44 temps), although, for reasons given, I don't think the latter
would be a good idea.
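
(Those temp counts are just phi(D)+E+4, the residue count from
choose_pminus1_plan() that Alex quoted; a quick check:)

#include <stdio.h>

/* Euler's totient by trial division. */
static int phi(int n)
{
    int r = n;
    for (int p = 2; p * p <= n; p++)
        if (n % p == 0) { r -= r / p; while (n % p == 0) n /= p; }
    if (n > 1) r -= r / n;
    return r;
}

int main(void)
{
    int plan[][2] = { {150, 2}, {120, 2}, {120, 6}, {120, 8} };
    for (int i = 0; i < 4; i++)                    /* prints 46, 38, 42, 44 */
        printf("D=%d E=%d -> %d temps\n",
               plan[i][0], plan[i][1], phi(plan[i][0]) + plan[i][1] + 4);
    return 0;
}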

 For exponents around 8M, this is not a particular issue. However there is
 a real, practical constraint so far as Prime95/mprime is concerned - the
 entire _virtual_ address space is limited to 4 GBytes by the 32 bit
 address bus, and the OS kernel claims some (usually half) of this, so that
 the total memory usable by a single process is limited to 2 GBytes. (There
 is a big memory variant of the linux kernel which expands this to 3
 GBytes, but the point still stands).

As mentioned by other list members, there's also a 64GB version, which,
apparently, doesn't work.  I expect they'll have it working by the time 4GB
systems become commonplace.

 Since, on my practical experience, a 17M exponent will quite happily use ~
 800 MBytes in P-1 stage 2,...

At 7186KB per temp, that sounds like a plan of D=420, E=4 (104 temps)

 ...the 32 bit address bus may well be a limiting
 factor within the exponent range covered by current versions of
 Prime95/mprime.

Easily, even at 17M.  To run with D=2310, E=12 requires 496 temps.  It would
go higher, if the memory was there.  D is capped at sqrt(B2-B1).

[...]

  What I was thinking of doing was writing a program to read in the factor
  database from, say, 1M up (to avoid any factors found by ECM), discard
  any below the TF limit, then try to find the smallest B2 which would
  yield the factor given E=4, 6, 8, 10, or 12.  This won't tell us the
  absolute frequency of extended factors, but the relative frequencies would test,
  and perhaps quantify, my conjecture about the relative effectiveness of
  these values.

 Why not simply use a random sample of numbers of suitable size? Say
 around 2^41, you are looking for factors from 2^65 upwards with exponents
 around 2^23. P-1 is really about the factors of k in f=2kp+1 since the +1
 is implicit and the 2p comes out in the wash.

 (Does the size of the numbers in the sample actually matter from
 the theoretical point of view?)

No, but if I use random numbers rather than genuine factors of Mersenne
numbers, then the suspicion will be there that there's something special
about the latter which invalidates the result.

But it would probably be sensible to try this first.

 Regards
 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-12-06 Thread Daran
- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, December 05, 2002 2:35 AM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 The analysis is more complex than this...

I never doubted that.  :-)

[...]

 Why not take your example:
  on an 8.1M exponent, B1=45000,
  B2=731250 and varying values for D and E, I get the following results
  D E Stage 2 transforms
  420 2 93946
  420 4 98618 (the default for my memory settings)

Done one more:

420 6 103746

It's looking very linear.

 and write a program that emulates stage 2's selection of (x^E - d^E), does
 a prime factorization of the value, and keeps track of which factors above
 B2 get included.  It should be possible to calculate your increased chance
 of finding a factor (someone please fill in the formula here).

That's a rather more extensive programming job than the one I had in mind.
It would also be expensive at runtime, with a prime factorisation in every
cycle.

What I was thinking of, is to take the k of known Mersenne factors, or at
Brian's suggestion, random integers of an appropriate size, factor them to
obtain the second largest and largest factor, a and b, say, then emulate the
stage 2 selection of (x^E - d^E) from B1=a through to B2=b or until I find
one divisible by b, whichever comes first.
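
Something like the following rough sketch, say (assumptions: D and E fixed by
hand, each stage 2 prime paired with the nearest multiple of D, GCC's unsigned
__int128 for the modular arithmetic, and made-up example values for a and b):

#include <stdio.h>

typedef unsigned long long u64;

static u64 mulmod(u64 x, u64 y, u64 m) { return (unsigned __int128)x * y % m; }

static u64 powmod(u64 x, u64 e, u64 m)
{
    u64 r = 1 % m;
    for (x %= m; e; e >>= 1, x = mulmod(x, x, m))
        if (e & 1) r = mulmod(r, x, m);
    return r;
}

static int is_prime(u64 n)
{
    if (n < 2) return 0;
    for (u64 d = 2; d * d <= n; d++) if (n % d == 0) return 0;
    return 1;
}

/* First stage 2 prime q in [a,b] for which b divides x^E - d^E, where q is
   paired with the nearest multiple x of D and d = |x - q|.  Since E is even,
   q = b itself always qualifies, so a return value < b means 'extended'. */
static u64 first_hit(u64 a, u64 b, u64 D, u64 E)
{
    for (u64 q = a; q <= b; q++) {
        if (!is_prime(q)) continue;
        u64 x = ((q + D / 2) / D) * D;
        u64 d = x > q ? x - q : q - x;
        if (powmod(x, E, b) == powmod(d, E, b)) return q;
    }
    return 0;
}

int main(void)
{
    /* Made-up illustration: a = 2689, b = 909829 (a prime), D = 420, E = 12. */
    printf("b first included at stage 2 prime %llu\n",
           first_hit(2689, 909829, 420, 12));
    return 0;
}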

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-12-04 Thread Daran
- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, December 04, 2002 2:55 PM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 Let's see if I get this right.

 Overwhelmingly, the factors produced by P-1 factoring come out because
 they are smooth to the limits selected. The fraction that comes out
 because of the extension is < 10%. To double that fraction (i.e. to
 increase the total number of factors found by < 10%) we have to double
 the stage 2 run time?

That's not what I meant.

 Doesn't sound that great to me, even without worrying about memory
 considerations.

 If we're talking about the _extra_ computation time in stage 2 then
 obviously the suggestion makes a lot more sense -

Yes.  If it takes time t to run a stage 2 with E=2, and t+d to run it with
E=4, then we should be willing to spend t+2d to run it with E=6, and t+5d to
run it with E=12, which, as far as I can see, is approximately how the cost
increases anyway.

I've only had time to run a few tests, but on an 8.1M exponent, B1=45000,
B2=731250 and varying values for D and E, I get the following results

D    E   Stage 2 transforms
420   2   93946
420   4   98618   (the default for my memory settings)
420  12  121520
210   4  104272
150   4  111962
120   4  116464

I did one with D=420, E=6, but I don't seem to have made a note of it.  Nor
did I record running times, because I was using the computer to do other
things.  Transforms ought to be reasonably proportionate.

The first three tests are consistent with the cost increasing linearly with
E, while the latter show the penalty of lowering the memory.  (Note that an
unmodified client would drop to E=2 below D=210, but I wanted to see just
the effect of lowering D)

With D=420, E=12 is over 23% dearer than the E=4 which is currently used.
However, this does *not* mean that it needs to find 23% more factors to
be worthwhile.  If guess_pminus1_bounds is altered to be aware of D and E,
then it compensates for the extra cost by reducing B2, trading 'normal'
factors for extended ones.  My (limited) tests suggest, as I said, that 2%
more factors with E=6 or 10% more factors with E=12 would be a worthwhile
trade.  (This assumes that my formula for estimating the number of
squarings is accurate, something I have not checked.)

 ...though I think there needs to be a
 careful analysis as to what the extra computation time for actual E values
 might be...

I agree.  My tests have been limited to exponents in the 8.1M range, for no
particular reason than those are the ones I am doing.

 (as opposed to a rather simplistic linear model, which fails to take
 into account that some of the temporaries needed for small E probably
 drop out pretty well for free).

I don't understand.

 If we _don't_ have to worry about memory, at some point it becomes
 cost-effective to run ECM with small limits instead of P-1 with much
 larger limits. And ECM can easily dig out some factors which are more
 or less inaccessible with P-1.

I was under the impression that ECM was only practical for small exponents
well below the current DC range.

 [... snip ... I don't disagree but the basic argument is the same as
 above]
 
  In 2 out of the 29 stage 2 factors I have found so far using E=4, k has
  not been smooth to B2.  This suggests that increasing E from 4 to 12
  could yield about 20% more factors.  I've done a few tests with a
   modified and recompiled client, which suggests that it would be worth it
  even if E=12 yielded as few as 10% more factors, though I need to
  investigate this further.

 That's a very small sample.

It's the only sample I have.  I'm trying to increase it by doing some P-1s
on exponents in the 1.2M range which have only been tested to B1=B2=1000.

 Some time ago I found a considerable number of first factors for exponents
 in the range 100,000-150,000 using P-1 with limits up to B1=10^6,
 B2=25x10^6.  The results.txt file doesn't record the E value used; though
 I did have tons of memory available (in relation to the exponent size) and
 seem to remember something about wondering what E=12 meant in the
 console output.  Or maybe I'm confusing this with recollections about
 running ECM?

That's possible as E is also used in the ECM code.  But E=12 output from a
P-1 means what it says.  If the value of E is not output, then it means E=2
(or perhaps E=1, but I've never tried using that little memory.)  E (> 2) is
recorded in the results.txt file by recent clients, so either the version of
the client you were using then didn't record it, or your memory setting
didn't allow E > 2.

 My records show 67 factors found; I mailed George on one occasion because
 P-1 found a factor which surprised me, but I don't think it happened
 twice.

How many of these were found during stage 2?  (If half your factors were
found during

Re: Mersenne: P-1 and non k-smooth factors

2002-12-03 Thread Daran
- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, November 28, 2002 2:18 AM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 At 06:05 PM 11/27/2002 +0000, Daran wrote:
   if (D <= 180) E = 2;
   else if (D <= 420) E = 4;
   else if (D <= 2310) E = 12;
   else if (D <= 6930) E = 30;
   else E = 48;
  
 
 I understand why it chooses the values of D that it does, but why these
 values of E?  I understand why E must be even, and that higher powers
 ought to be highly composite, but wouldn't 6 be a good intermediate
 value?  24?  60 for the top end?

 No good reason.  As far as I know, there haven't been any studies on the
 best values to choose for E.

For clarity, let's write mD as x, so that for a Suyama power E, the exponent
(x^E - d^E) is thrown into the mix when either x-d or x+d is prime in
[B1...B2], (and only once if both are prime).  This works because (provided E
is even) x^E - d^E = (x-d)*(x+d)*C where C is a sum of higher order terms.
The benefit of prime-pairing arises when E=2.  The cost of higher E is
AFAICS linear in multiplications.  The benefit of higher E comes from any
additional factors thrown into the mix by C.  This benefit is greatest if C
has factors slightly > B2.

For E=4, C = (x^2 + d^2)
For E=6, C = (x^4 + x^2d^2 + d^4) = (x^2 + xd + d^2)*(x^2 - xd + d^2)
For E=8, C = (x^2 + d^2)*(x^4 + d^4)

I can't think of any reason why either of the two algebraic factors of C
when E is 6 should be any better or worse than the single irreducible factor
when E=4.  And there are two of them.  This suggests to me that E=6 should
be about twice as effective as E=4 in providing additional factors, at about
twice the cost (over and above the 'cost' of E=2).  If this is correct, then
it will always be worth going to E=6, whenever it is worth going to E=4,
(provided there is sufficient memory to do so).

Turning to E=8 it is tempting to make a similar argument, that (x^4 + d^4)
should be about as good as (x^4 + x^2d^2 + d^4).  If so then E=8 would be
three times as good as E=4 at about three times the cost.  However this
ignores the fact that (x^4 + d^4) is irreducible, and therefore likely to
have one very large factor, (which would not be much use to us) and fewer
smaller ones.  OTOH an irreducible factor of order four will be better than
any single factor of order two.  I therefore (tentatively) conclude that E=8
will be better than twice as good as E=4, but not three times as good.

With E=10, the situation is even worse.
C = (x^8 + x^6d^2 + x^4d^4 + x^2d^6 + d^8), which (I think) is irreducible.
It is possible that E=10 may be less effective than E=8 or even E=6.

Things improve with E=12.
C = (x^2 + d^2)*(x^2 + xd + d^2)*(x^2 - xd + d^2)*(x^4 - x^2d^2 + d^4)  i.e.
we get every extra factor produced by E=4 and E=6 and then some more,
suggesting that E=12 is between four and five times as effective as E=4 at
about five times the cost.

In 2 out of the 29 stage 2 factors I have found so far using E=4, k has not
been smooth to B2.  This suggests that increasing E from 4 to 12 could yield
about 20% more factors.  I've done a few tests with a modified and
recompiled client, which suggests that it would be worth it even if E=12
yielded as few as 10% more factors, though I need to investigate this
further.

I freely admit this analysis is simplistic and lacks rigour.  Perhaps some
of the number theorists on the list could improve upon it.

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Mersenne.org server news?

2002-12-02 Thread Daran
- Original Message -
From: Chris Marble [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 02, 2002 4:54 AM
Subject: Mersenne: Mersenne.org server news?


 Just like the outage over Labor Day weekend we seem to have an outage that
 started around Wednesday before Thanksgiving.  Most things were functional
 then but the status updates had last run at 8am.  Now we've got an
 unresponsive server.  Anyone heard anything?

Is it my imagination, or has the server gotten far less reliable in recent
months?

 --
   [EMAIL PROTECTED] - HMC UNIX Systems Manager

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-11-27 Thread Daran
- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, September 25, 2002 12:03 AM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 This is the Brent-Suyama extension, aka Suyama's powers. In short, if
 you choose a Suyama's power E, a factor f will be found if the largest
 factor of f-1 divides some (mD)^E - d^E, where D is an integer chosen
 according to available memory, m and d are integers so that B1 < mD-d <= B2,
 1 <= d < D and mD-d is prime.

[...]

 ...the choice of D and E depends on the
 implementation. Prime95 chooses D so that phi(D)+E+4 (phi() is Euler's
 totient function) residues fit into memory, and chooses E like this:

  if (D <= 180) E = 2;
  else if (D <= 420) E = 4;
  else if (D <= 2310) E = 12;
  else if (D <= 6930) E = 30;
  else E = 48;

 (see ecm.c, choose_pminus1_plan() )

I understand why it chooses the values of D that it does, but why these
values of E?  I understand why E must be even, and that higher powers ought
to be highly composite, but wouldn't 6 be a good intermediate value?  24?
60 for the top end?

 Alex

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Drifting UP(!) in Top Producers ranking?

2002-11-27 Thread Daran
- Original Message -
From: Torben Schlüntz [EMAIL PROTECTED]
To: Brian J. Beesley [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, November 26, 2002 10:42 PM
Subject: SV: Mersenne: Drifting UP(!) in Top Producers ranking?


 So now we stick with the 79.3 million limit until somebody finds a 10M
 digit prime? Is a prize available for a 100M digit prime?...

Yes, and a 1G digit prime.

 ...And will Gimps
 raise the level once more to make this possible to achieve?...

I've no doubt it will, in due course.

 In that case
 it looks like the new candidates are above M332.192.000. Well maybe we
 have to live forever or we may believe in P5 with SSE7 :-)...

Moore's Law is good for a few years yet.

 ...It's kind of
 easy to see you are right about the 20M going to 79.3M and it is kind of
 easy to see the desperate programmer calculating that the letters of
 the alphabet X, Y and Z will keep us going into at least M35.999.999 for
 the save files. What will happen when Z is used up? I very well know we
 have a couple of years to rethink but the day is going to come...

There's no need to restrict filenames to 8 characters with modern OSs

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: SV: SV: Mersenne: Drifting UP(!) in Top Producers ranking?

2002-11-25 Thread Daran
- Original Message -
From: Torben Schlüntz [EMAIL PROTECTED]
To: Brian J. Beesley [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, November 25, 2002 10:04 PM
Subject: SV: SV: SV: Mersenne: Drifting UP(!) in Top Producers ranking?

 I'd rather not like the penalty/ punishment. A reward equal to
 the full effort of doing the TF would be much better - and under those
 circumstances no one would try to cheat because a factor found at eg. 63
 bits would reward very well.

That would allow another cheat.  Current Factoring assignments are
prefactored to 2^57, and are intended to be factored to 2^67.  If someone
were instead just to factor to 2^58, they would have about a 1/58 chance of
getting a full credit for less than 1/500th of the effort.  If not, then the
exponent could be abandoned.  This would also have the advantage (from the
cheat's POV) of 'poisoning' potential competitors' factoring efforts.

IMO people should expect (in the mathematical sense of the word) to get the
same amount of credit irrespective of what type of work they do.  Also
credit should be given for work (honestly) done irrespective of whether the
search was successful.  The first criterion (only) could be met by crediting
only found factors, and giving a higher credit for larger ones, up to the TF
limit.  I do not think there is any way to allocate credit that meets both
criteria, which wouldn't reward cheating in some way.

Brian's suggestion is a good one, but I would add that perhaps each user
could get an allowance, proportionate to the number of TF assignments
returned, that would be deemed to be 'honest' errors, and not penalised.

P-1 (which I do almost exclusively) seems to be woefully ill-rewarded.

 tsc

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Drifting UP(!) in Top Producers ranking?

2002-11-20 Thread Daran
- Original Message -
From: Mary K. Conner [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 21, 2002 12:31 AM
Subject: Re: Mersenne: Drifting UP(!) in Top Producers ranking?


 At 11:27 PM 11/20/02 +0000, Russel Brooks wrote:
 I have 3 pcs doing factoring.  I have been checking my position
 on the Primenet Top Producers Factoring list.  I have noticed my
 position drifting up in the standings while I haven't found any
 factors.  How does that happen?

 You get credit for your work doing factoring even if you're not finding
 factors.

You also get a small factoring credit for any P-1 you do.

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Here we go again.... v22.12 now available.

2002-11-09 Thread Daran
- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, November 09, 2002 4:29 AM
Subject: Mersenne: Here we go again v22.12 now available.


 This bug did not cause all factors to be missed, in fact it could cause
 some factors to be found that ordinarily would not be found.  Specifically,
 in this low-memory situation prime95 steps through the range between B1 and
 B2 in multiples of 210.  When processing a prime p, the code actually
 included the prime mirrored around the previous multiple of 210.  For
 example, if 210000+1 is prime, then 210000-1 was improperly included.  If
 210000+3 is prime, then 210000-3 was included.

How could this lead to factors being missed?

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Bug in version 22.10

2002-11-05 Thread Daran
- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, November 05, 2002 5:19 AM
Subject: Mersenne: Bug in version 22.10

 Hi all,

 Sigh   If you downloaded a version 22.10 dated September 28 or later,
 then please upgrade to version 22.11 at
 http://www.mersenne.org/freesoft.htm

Does this only affect version 22.10?  Or versions before that?

 The bug causes factors to be missed in the final stage of P-1 and ECM
 factoring.  While this isn't the end of the world, it will cause you to
 run an unnecessary LL test.

 Sorry for the trouble,

That's what beta testers are for.

 George

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Bug in version 22.10

2002-11-05 Thread Daran
- Original Message -
From: Daran [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, November 05, 2002 6:51 PM
Subject: Re: Mersenne: Bug in version 22.10

 - Original Message -
 From: George Woltman [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, November 05, 2002 5:19 AM
 Subject: Mersenne: Bug in version 22.10

  Hi all,
 
   Sigh   If you downloaded a version 22.10 dated September 28 or later,
   then please upgrade to version 22.11 at
  http://www.mersenne.org/freesoft.htm

  Does this only affect version 22.10?  Or versions before that?

If I'd properly read what George had said, I would know the answer.  Please
disregard.

 Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Bug in version 22.10

2002-11-05 Thread Daran
- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: George Woltman [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, November 05, 2002 7:19 PM
Subject: Re: Mersenne: Bug in version 22.10

 Hi,

 One thing you might consider - when you change to v22.11, check out your
 results file. If you have a P-1 run logged on an exponent you haven't yet
 started LL/DC testing, make it run the P-1 again (change the ,1 at the end
 of the assignment line in worktodo.ini to ,0).

George said "The bug causes factors to be missed in the final stage of P-1
and ECM factoring."

The way I read that is that stage 1 is OK, but *some* factors may be missed
in stage 2.

If that's correct, then any user whose system does not do stage 2 at all
(because they've allocated no extra memory) should not rerun their stage 1.
Even if it does, it's probably not worth rerunning the entire P-1 just to
get a (possibly reduced) chance of finding a factor in stage 2.

 Regards
 Brian Beesley

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Modularising Prime95/mprime - a path to broader development.

2002-11-01 Thread Daran
- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Gareth Randall [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, October 31, 2002 10:10 PM
Subject: Re: Mersenne: Modularising Prime95/mprime - a path to broader
development.

 In the final analysis, the best deterrent to anyone who is deliberately
 submitting concocted results is the knowledge that they will
 (eventually) be caught out through the double-checking mechanism.

Attacks upon factoring, and individual tests, whether first-time or
doublecheck, while certainly possible, only delay the project.  They can not
result in a corrupt database.  However, the project is also vulnerable to
attacks upon the doublechecking *system*, for example, an attacker could
connive to submit a concocted result twice for the same exponent, both as a
first-time test and as a doublecheck.  From previous discussions, I know
that this is not trivial, but it remains feasible.

 Regards
 Brian Beesley

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: On v18 factoring

2002-10-25 Thread Daran
- Original Message -
From: Steve Harris [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, October 24, 2002 8:49 PM
Subject: Re: Mersenne: On v18 factoring

 Daran - you ask why highest and not lowest? The discussion started
 regarding old machines running v18 which are no longer in the care,
 custody  control of an active GIMPS participant, AND which are
 asking for factoring assignments which they cannot handle. Whatever
 assignment is given to them, there is no telling how long until (or even
 if) they will finish it...

That's true of any client, not just runaway v18s.

 ...We would not want to give them something that would hold up a
 milestone a year or so down the road...

If it got to the point where a milestone was being blocked, then someone
else would poach it.  I'd rather that happen to a forgotten client than to a
slow but active participant.  Also any server code-change is going to take
the path of least resistance.  The server is programmed to hand out the
smallest exponent available.  To start handing out larger exponents would
involve more work than just changing the assignment type, and would probably
introduce more bugs.

[...]

 You make a good point about P-1 completed assignments, but on further
 reflection I don't think that is necessary. There aren't that many
 available and certainly not at the higher end of the current range. They
 will more than likely be P-1 tested when double-checked.

That's not true.  Most *new* DC assignments (currently around 8.5M) are not
P-1 complete as they were originally LLed by v18 or earlier clients.  Many
recycled DC assignments (mostly 7.0M-8.5M) were also never P-1ed by
the client that let them expire.  I specialise in P-1ing these 'neglected
children'.

However, my above remarks about 'path of least resistance' applies.  There
are probably more important server changes pending.

I've been wondering if it would be possible to compile a list of P-1
incomplete exponents currently assigned to v18 or earlier clients.  If so,
then I would consider giving these a P-1.  This, it could be argued, would
be a form of poaching, in so far as if I were successfully to factorise,
then the 'owner' would get a 'exponent already complete' error, which might
cause some upset.  OTOH, the project gains a factor that wouldn't otherwise
have been found, and people still using v18 aren't likely to be particularly
attentive.

I'd like the views of list members concerning the ethics of this.

 Steve

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: On v18 factoring

2002-10-24 Thread Daran
- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 23, 2002 7:52 AM
Subject: Re: Mersenne: On v18 factoring

 Given that the server can tell the difference between a v18 client and a
 later one, would it not make most sense to have the server assign a LL
 test on the _highest_ unallocated exponent which v18 can handle if a
 v18 client asks for a factoring assignment and none suitable are
 available. This action would effectively remove the client from the
 loop for a while (probably a few months, given that most v18 clients
 will be running on slowish systems), thereby alleviating the load on
 the server, and buying time to contact the system administrator - when
 this is still relevant, of course. And some useful work may still be
 completed, eventually!

Why highest?  Why not give it the lowest?  There's a case for only giving
version 18 and below clients DCs regardless of the work requested.  (I'm
assuming that this is possible.)

The only other point I'd add, which isn't particularly relevant to this
question, is that these clients should always be given P-1 complete
assignments if available.

 Regards
 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Dissed again

2002-10-23 Thread Daran
- Original Message -
From: E. Weddington [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 8:19 PM
Subject: Re: Mersenne: Dissed again


 On 22 Oct 2002 at 14:40, Jeff Woods wrote:

  Either they were really great activists in signing people up, or GIMPS
  has SOMETHING about it that won't get people to participate.   We
  either need to step up our profile, be more active at recruiting, or
  do SOMETHING to get us off the static bubble we've been on, at about
  18,000 members, 31,000 computers, and falling.   Our output goes up
  ONLY because most of our members upgrade as CPUs get better and
  cheaper.

GIMPS's problems are:-
1.  Very long work units.  People - especially newbies - want to see results
fast.  They don't want to have to wait a week before they find themselves on
the chart.
2.  No obvious benefit to mankind.
3.  'Unsexy' subject matter.

 What do you have in mind, a volunteer marketing effort?

'Volunteer' goes without saying.  Certainly we need a marketing effort.
Does anyone have any ideas?

 Eric

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Dissed again

2002-10-23 Thread Daran
- Original Message -
From: E. Weddington [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, October 22, 2002 4:09 PM
Subject: Mersenne: Dissed again

 Folding@Home's success:
 http://www.sciencedaily.com/releases/2002/10/021022070813.htm

 Again, they mention SETI@home. As if that were the only other
 distributed project out there. *sigh*

Indeed they're not.

And neither are we.  Once upon a time GIMPS was the only show in town, so it
got all the kudos.  Now there are dozens of distributed computing projects.
We're going to have to come to terms with the fact that the world
doesn't 'owe' us a mention.

 Eric Weddington

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 and non k-smooth factors

2002-09-25 Thread Daran

- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, September 25, 2002 1:03 AM
Subject: Re: Mersenne: P-1 and non k-smooth factors

 This is the Brent-Suyama extension, aka Suyama's powers. In short, if
 you choose a Suyama's power E, a factor f will be found if the largest
 factor of f-1 divides some (mD)^E - d^E, where D is an integer chosen
 according to available memory, m and d are integers so that B1 < mD-d <=
 B2, 1 <= d < D and mD-d is prime.

Right.  And (mD)^E - d^E has mD-d as a factor, which yields all the usual
stage 2 factors, while the cofactor ((mD)^E - d^E) / (mD-d) possibly
introduces new ones.
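
(A quick numerical sanity check of that point, in Python.  The values of D,
m, d and E below are purely illustrative, not anything the client would
actually choose.)

# mD - d always divides (mD)^E - d^E, so the Suyama power still catches
# every ordinary stage 2 prime q = mD - d; the cofactor may catch more.
D, m, d, E = 210, 5, 11, 6     # illustrative only; mD - d = 1039, a prime
a = m * D
assert (a**E - d**E) % (a - d) == 0
print(a - d, (a**E - d**E) // (a - d))   # the stage 2 prime, and the cofactor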

Thanks to everyone for their replies.

 Alex

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Composite factors.

2002-09-24 Thread Daran

P-1, like any other GCD-based factorisation method, will yield a composite
result in the event that there are two (or more) prime factors within its
search space.  It seems unlikely that this would happen in practice because
unless both were > ~64 bits, one of them would most likely have been found
earlier during TF.  However given that some factors found have been > 130
bits, then the possibility is there.

I was wondering if returned factors are checked for primality.
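
(For what it's worth, such a check is cheap.  A Python sketch - the 'factor'
fed in below is just 233*1103, the product of two known prime divisors of
2^29-1, standing in for a composite P-1 result.)

# Miller-Rabin with a fixed base set; deterministic for anything below
# about 3.3e24, i.e. comfortably beyond 80-bit factors.
def is_prime(n):
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_prime(233 * 1103))            # False - a composite 'factor' of 2^29-1
print(is_prime(233), is_prime(1103))   # True True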

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: TF - an easy way to cheat

2002-09-22 Thread Daran

- Original Message -
From: Torben Schlüntz [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, September 21, 2002 11:17 PM
Subject: SV: Mersenne: TF - an easy way to cheat

 But my name is Torben...

My apologies.

With an unusual spelling to my own name, I usually try to avoid making this
mistake with others.  But on this occasion I slipped up.

 What is POV?

Sorry.  Point Of View.

Regards

Daran G.



_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Hyper-threading

2002-09-22 Thread Daran

- Original Message -
From: Michael Vang [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, September 22, 2002 12:19 AM
Subject: Re: Mersenne: Hyper-threading


 Look towards the end of this thread for benchmarks with LL and factoring on
 one processor via hyperthreading...

 http://www.teamprimerib.com/gimps/viewtopic.php?t=8

Thanks

 Mike (Xyzzy)

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: TF - an easy way to cheat

2002-09-21 Thread Daran

 Anyone receiving a TF task could edit the worktodo.ini from
 Factor=20.abc.def,59
 to
 Factor=20.abc.def,65
 He would receive approx. twice the credit the effort is worth.

In fact the probability of finding a factor would be less than one sixth.
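
(Back-of-envelope, using the usual rule of thumb that a Mersenne number has
a factor between 2^k and 2^(k+1) with probability roughly 1/k - not the
exact figures the client uses.)

# Rough chance of any factor turning up between 2^59 and 2^65.
print(sum(1.0 / k for k in range(59, 65)))   # about 0.098 - well under 1/6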

[...]

 Would this cheat be trapped later by P-1 or does P-1 trust earlier work
 so factors below say 67-bits are not considered?

P-1 doesn't 'trust' the amount of TF, in the sense of failing to find (or
ignoring) small factors.  However the calculation of the probability of
success - and hence of the optimal bounds - assumes that none will be found.

P-1 and TF search different but overlapping factor spaces.  My understanding
is that P-1 has about a 1 in 3 chance of finding a factor in the TF range.

However I don't think the server logs who finds a factor, or how it was
found, or even if the exponent was assigned to the finder.

 The above questions are _not_ asked because I intend to use the method.
 :-/ I think it would miscredit GIMPS as we trust the results of GIMPS.

Factorisation is merely a tool for quickly eliminating candidates.  We do
not double check negative results as a proof that there are no factors in
the checked range (though found factors are checked by the server itself).

 And I would be disappointed if I learned that an LL I did could have
 been solved far earlier - and using less effort.

One way to avoid this disappointment personally would be to focus solely
upon TF or
P-1.

 br tsc

Daran G.




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: TF - an easy way to cheat

2002-09-21 Thread Daran

- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Torben Schlüntz [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, September 21, 2002 8:51 AM
Subject: Re: Mersenne: TF - an easy way to cheat


 On Friday 20 September 2002 22:42, Torben Schlüntz wrote:
  Anyone receiving a TF task could edit the worktodo.ini from
  Factor=20.abc.def,59
  to
  Factor=20.abc.def,65
  He would receive approx. twice the credit the effort is worth.

 Not quite - even allowing for the 1/2^6 effort involved in TF through 59 bits
 ... through 64 bits the algorithm runs much faster than it does for 65 bits
 and above. The factor is around 1.6 rather than 2.

Good point, and one which I didn't consider in my reply.  But the ratio must
be different for the P4, which uses SSE2 code for factorisation over 64
bits.

 Suggestion: TF should report completion of each bit to PrimeNet, not just
 the one on completion to the target depth. I don't see how this would require
 changes to the server, though there would be a (relatively small) increase
 in load.

According to undoc.txt:- "You can limit how far the program tries to factor
a number.  This feature should not be used with the Primenet server.", which
implies that something bad will happen if you do.

 Suggestion: the TF savefile should be modified to contain an internal
 consistency check (say the MD5 checksum of the decimal expansion of the
 current factoring position) so that cheating by editing the savefile,
 causing jumping past a large range of possible factors, would be made a
 great deal more difficult.

Easily cracked.  Why not just encrypt it?

 Regards
 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



RE: Mersenne: TF - an easy way to cheat

2002-09-21 Thread Daran

- Original Message -
From: Jacek Kolonko [EMAIL PROTECTED]
To: GIMPS [EMAIL PROTECTED]
Sent: Saturday, September 21, 2002 5:38 PM
Subject: RE: Mersenne: TF - an easy way to cheat

  One way to avoid this disappointment personally would be to focus
 solely
  upon TF or
  P-1.
 
  To the best of my knowledge, there is no way in my version (22.8.1) to
  set the client to do only P-1.
 

 Read undoc.txt in your Prime95 directory:

 You can force prime95 to skip the trial factoring step prior to
 running a Lucas-Lehmer test.  In prime.ini add this line:
 SkipTrialFactoring=1

That's not what Nathan meant, and it would have the opposite effect of what
Torben wants, which is to guarantee that he never LLs an exponent which has
not had a full factorisation effort.

Nathan is correct that the client can't be set only to do P-1.  I achieve
this effect by setting SequentialWorkToDo=0, and by ensuring that it never
runs out of exponents to P-1.  This requires continual hands-on management,
collecting new exponents, and unreserving completed ones.

One advantage of this, from Torben's POV, is that his chance of finding a
factor through P-1 is *increased* if it has not been properly TFed.

Jacek Kolonko

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Hyper-threading

2002-09-21 Thread Daran

Could this feature of forthcoming Intel processors be used to do trial
factorisation without adversely impacting upon a simultaneous LL?  Could
this be easily implemented?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Order of TF and P-1

2002-09-11 Thread Daran

I'm taking the liberty of replying to this on-list, since other people here
might have some input.

- Original Message (Off-list)
From: Anurag Garg [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Sent: Tuesday, September 10, 2002 11:11 PM
Subject: Re: Cherub faced individuals with shoes that need tying.

  Also, If you have exponents of your own
  needing both P-1 and TF, it should have the P-1 done before the last TF
bit.
  Brian Beesley has done extensive testing to verify this.

I don't know how much memory he was using, but the more you have available,
the earlier you should do it.

 Are you absolutely sure about this. Do let me know what the reason for
 that might be.

Absolutely sure.  I pointed this out in the list some time ago, and had a
fairly lengthy off-list discussion with him and George about it.

The reason is simple.  If you do Trial Factorisation first, and it finds a
factor, then you save yourself the cost of the P-1.  On the other hand, if
you do the P-1 first, and it finds a factor, then you save yourself the cost
of the TF.  Since the probability of finding a factor with the P-1 is about
twice that of the last bit of factoring, and the cost is much lower, the
expected saving is much higher.

The optimal point during TF at which to do the P-1 is when
cost(TF)*prob(P-1) = cost(P-1)*prob(TF)
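
(To make that concrete, a toy Python comparison of the two orderings - the
costs and probabilities below are made-up round numbers, not measured ones.)

# Expected work for one exponent, ignoring the overlap in factor spaces.
cost_tf, prob_tf = 10.0, 1.0 / 65   # last bit of trial factoring (illustrative)
cost_p1, prob_p1 = 5.0, 1.0 / 25    # P-1 to its chosen bounds (illustrative)

tf_first = cost_tf + (1 - prob_tf) * cost_p1
p1_first = cost_p1 + (1 - prob_p1) * cost_tf
print(tf_first, p1_first)           # P-1 first wins with these numbers

# The difference is prob_p1*cost_tf - prob_tf*cost_p1, which is exactly
# the break-even condition quoted above.
print(prob_p1 * cost_tf, prob_tf * cost_p1)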

This analysis is complicated by the fact that P-1 and TF search overlapping
factor spaces, and thus affect each other's conditional probabilities.
Currently the client assumes that no further TF will be done when it
calculates optimal P-1 limits.  In other words it assumes that a successful
P-1 will save the cost of 1.03 or 2.03 LLs depending upon whether the
assignment is a DC.  It does not consider the possibility that a P-1 may
only save a small amount of subsequent TF factoring, which would be the case
if that factoring also would have found a factor.  (Bear in mind that the
conditional probability of this is *increased* because the P-1 was
successful.)  Consequently, if you do a P-1 before you finish TFing, and you
set the factored bits to the amount you have already done, the client will
choose limits which are too high, while the limits will be (slightly) low if
you set the factored bit to the amount you are going to do, but they will be
much closer to optimal.

When the TF limits were originally decided, it was assumed that a successful
TF would save 1.03 or 2.03 LLs.  I can't remember whether George has ever
said whether they have been lowered to take the P-1 step into account.
Perhaps he or Brian could remind me.

Additional complications arise when you consider that P-1 and TF might be
being done on different machines with different Integer:FP perfomance
ratios.  I have never been able to get my head around this.  :-)

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Switch from v.21.4 to v.22.8

2002-09-01 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, September 01, 2002 5:07 PM
Subject: Mersenne: Switch from v.21.4 to v.22.8

 I just switched from v.21.4 to v. 22.8 because of the
 error 29 message, and my ending date for the number it's
 working on jumped from 9-11 to 9-5, is v.22.8 that much
 more efficient than 21.4?
 This is on my Athalon processor. I have a pentiumIII I'm
 going to upgrade also, will it speed it up too?

You mean Pentium IV, surely?  A Pentium III is a downgrade from an Athlon.

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 266 vs 333 ddr on Athlon

2002-08-27 Thread Daran

- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Marc Honey [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, August 27, 2002 7:27 AM
Subject: Re: Mersenne: 266 vs 333 ddr on Athlon

 On Tuesday 27 August 2002 02:08, Marc Honey wrote:
  Anyone else notice that a kt333 Athlon board using an Athlon XP gets
better
  performance at 266 than at 333?  I was amazed at the difference, and yes
I
  tweaked out the bios under both memory speeds.  AMD really needs a fsb
  speed update!

 Weird. Possibly your 266 MHz DDRAM is CL2 but your 333 MHz DDRAM is CL3.

Or perhaps it's the asynchronous operation of the memory, which may increase
latency.  The increased memory speed per se does not help, because bandwidth
is limited by the 266MHz FSB.  I would suggest that the memory controller is
the issue here.  What chipset?

 Also, I have two near-identical systems using 1.2GHz T'bird Athlons in
 Abit KT7A mobos, with CL2 PC133 memory. The only difference is that one
 of the CPUs is 200 MHz FSB the other is 266 MHz. Both are running the
 memory at 133 MHz (the BIOS on the KT7A lets you do this). The system
 speeds are within 1% of each other.

Here it is the SDR memory bandwidth which is the limiting factor.  The 1%
improvement probably comes again from synchronous operation.

 Regards
 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: GIMPS error rates

2002-08-26 Thread Daran

- Original Message -
From: Nick Glover [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, August 27, 2002 5:14 AM
Subject: Re: Mersenne: GIMPS error rates

 Some interesting stats regarding reported errors ( the last 4 digits of
 the error field ):

 Error field = 0000: 2.02% error rate out of 213369 results
 Error field <> 0000: 22.24% error rate out of 5765 results

This suggests to me that the behaviour of the program should be changed upon
detection of an error.

< 22% complete - abandon run
> 22% complete - restart from last saved file

(Or 25% given the statistic quoted below.)

Perhaps also the program should immediately revert to trial factorisation,
until the operator resets it (presumably after fixing the problem).  If it
has no TF assignments it could overfactor the LL assignments it already has
for a while until it gets one.

[...]

 The above stats apply to all exponents, but below are some stats that
 apply to only exponents from 1,345,000 to 5,255,000.  This leaves some
 of the lower exponents out which weren't necessarily using George's
 program and also are so small that errors are extremely unlikely.  It also
 leaves out exponents above the current limit of what has been fully
 verified since a disproportionate amount of these exponents will be good
 results.  This is because it requires only 2 LL tests to produce two good
 results, while it takes 3 or more on the same number to produce bad
 results ( basically, a disproportionate amount of the bad results have not
 been uncovered yet ).

You could scavenge hrf3.txt for non-matching duplicate entries, which would
indicate that one of the results is bad, even though you don't know which
one.

[...]

 Error field = 0000: 2.19% out of 99045 results
 Error field <> 0000: 25.52% out of 1834 results

 Does anyone want anything else out of this data?  I've gotten to the point
 where I can get most calculations out of it fairly quickly.

Can you do a moving average of the error rates within the 1,345,000 to
5,255,000 range to see if they increase with the exponent size?

 Nick Glover
 [EMAIL PROTECTED]

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Changing Prime95's name

2002-08-25 Thread Daran

- Original Message -
From: Jeff Woods [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, August 25, 2002 11:13 PM
Subject: Re: Mersenne: Re: Changing Prime95's name

 At 02:34 PM 8/25/02 -0700, you wrote:

 PPS - I sure hope to talk about this again in 93 years.

 By then, 95 will stand for exaflops in your quantum computer, and we'll
 be working with FFT's in the gigabyte size range.   ;-)

When we get a GigaQbit QC, we won't be bothering with FFTs.  We'll just test
every exponent simultaneously by trial division.

Regards

Daran



_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Status

2002-08-19 Thread Daran

http://www.mersenne.org/status.htm

All exponents below 10,040,400 have been tested at least once.

So how come I've just been given this assignment:

Test=9620617,64,1

Even if it had two non-matching LLs then it should still be reassigned
as a doublecheck, AIUI.

Since it's P-1 complete, I'll be releasing it back to the server within the
next few days, unless I hear that I shouldn't.

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Status

2002-08-19 Thread Daran

- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, August 19, 2002 3:28 PM
Subject: Re: Mersenne: Status

 The first test of this number had 5 roundoff > 0.40 errors.  Since the
 result is highly suspect, it was re-released as a first time test.

Thanks for the explanation.

Regards

Daran




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Crypto scientists crack prime problem, not Redmond journalists

2002-08-17 Thread Daran

- Original Message -
From: Halliday, Ian [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, August 17, 2002 8:22 AM
Subject: Mersenne: Crypto scientists crack prime problem, not Redmond
journalists


 In http://msn.com.com/2100-1104-949170.html?type=pt we read

 Crypto scientists crack prime problem

 To create encryption keys, RSA uses two huge prime numbers and
 multiplies them together to produce an even bigger prime.

So take any Mersenne prime, and multiply by a Fermat prime to obtain a new
larger Mersenne prime.  Rinse and repeat.

Why didn't we think of that?

 Regards,

 Ian

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Roundoff errors

2002-07-24 Thread Daran

- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, July 24, 2002 3:01 AM
Subject: Mersenne: Re: Roundoff errors

 Daran mentioned that he was only doing P-1 factoring,
 leaving the LL test for others.

That was correct.  I have just started a LL test on your advice with ro
checking on.  If it completes, I will return to doing P-1 only.

I'm satisfied that there is no evidence of any problem not attributable to
overheating.

Regards

A very cool running Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Roundoff errors

2002-07-22 Thread Daran

- Original Message -
From: Christian Goetz [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, July 20, 2002 8:39 PM
Subject: Re: Mersenne: Roundoff errors

 On Saturday, 20 July 2002 19:01, Daran wrote:

 The temp is normal, it shouldn't be over 65 degrees. Try to get a new fan,
 or try to clean your old fan. Dust should be reponsible for the slowdown.

Thank you and everyone else, both on- and off-list, for your helpful
suggestions.  I took the cover off and had a look.  The HSF looked like the
inside of an old vacuum cleaner, so I used a new one on it.  :-)  The fan
speed is now back up to 4600, and the processor temperature has dropped by
10 degrees.

While this is probably what tipped the system into instability, I'm not
convinced it is the sole cause of the problem, if, as you say, 50 degrees is
not excessive.  My preferred contribution to the project for the past several
months has been to do exclusively P-1 testing, but I have taken George's
advice and started an LL run with roundoff checking on - actually a DC, so
that I can check for its appearance in lucas_v.zip.

 to test your processor, i recommend www.memtest86.com
 it is a memtest in first place, of course, but it also tests the processor

 You can test your system with madonions 3DMark 2001 SE. This  program will
 heat up your ram, cpu and grafik card.

Thanks, I will look at these.

 Greetings,

 Mohk

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: P-1 limits

2002-07-14 Thread Daran

- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Richard Woods [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, July 13, 2002 10:15 PM
Subject: Re: Mersenne: Re: P-1 limits

 On Saturday 13 July 2002 10:04, Richard Woods wrote:

  V22 non-P4 FFTs are 384K in length for exponents 6545000 through
  7779000, then 448K for exponents 7779000-9071000.  The two exponents you
  list are on opposite sides of that division, so the higher exponent will
  require substantially longer for each FFT operation.  That cost
  difference affects the outcome of the algorithm that selects P-1 limits.

 Shouldn't make much difference - P-1 computation rate is directly linked to
 LL testing computation rate, since the same crossover points are used and
 the main cost of both is FFT multiplication.

I agree.  I did wonder about the following (from the guess_pminus1_bounds
function in file commonc.c):-

/* Pass 2 FFT multiplications seem to be at least 20% slower than */
/* the squarings in pass 1.  This is probably due to several factors. */
/* These include: better L2 cache usage and no calls to the faster */
/* gwsquare routine. */

 pass2_squarings *= 1.2;

However, the adjustment is linear, so should have no effect upon the
relative costs across the FFT boundary.

 I think the most likely cause is that the P-1 limits depend on the relative
 speed of P-1/LL computation speed and trial factoring speed. If
 RollingAverage was slightly different, the relative computation rate might
 be guessed to be different enough for the P-1 limits to vary slightly.

I don't understand this.  Why should the relative speeds of P-1/LL vs TF
vary?  And what difference would it make if they did?  The depth - hence the
cost - of TF is fixed (at 63 bits for these exponents).  The only trade-off
is between the cost of the P-1 vs the expected cost of the LL computation.

 Regards
 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: P-1 limits

2002-07-10 Thread Daran

Prime95 version 22.3.1

P-1 factoring with 459MB available memory, the program chose the following
limits for these two exponents.

7786567    B1=40000, B2=640000
7714127    B1=40000, B2=650000

Why did the smaller exponent get the higher B2 limit?

Regards

Daran G.




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: DC Assignment range?

2002-06-14 Thread Daran

I read a while back that exponents up to 8M had been made available for DC
assignments.  Currently the server is up to 7970000.  Have any more been
made available?  Is this information available anywhere on the webpages?

Regard

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Synchronization

2002-06-14 Thread Daran

How often does it take place? And are Trial Factoring assignments
synchronised separately?  The reason I ask the second question is that the
"Exponents Cleared since last Synchronization" section of my report lists
factorisations from August last year, while the earliest test (actually a
DC) is dated April this year.

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Puzzle

2002-06-12 Thread Daran

- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]; Anurag Garg
[EMAIL PROTECTED]
Sent: Tuesday, June 11, 2002 8:23 PM
Subject: Re: Mersenne: P-1 Puzzle

 On Tuesday 11 June 2002 06:13, Daran wrote:

 [... snip ... interesting but non-contentious]

  Very noticeable is the proportion of exponents - in all three ranges -
  which are not getting a stage two effort at all.  26 out of the 85
  exponents between 7950000 and 7960000, 24 out of 54 between 15500000
  and 15505000, 35 out of 57 between 33219000 and 33223000.  I do not
  believe that large numbers of P4 systems are being shipped with just
  8MB of RAM!

 This is true. However the philosophy of the project, correctly in my view,
 is that the software should not cause noticeable deterioration in the
 performance of a system when it is being run in the background to normal
 work.

I agree.  My remarks were intended to make the case for spinning P-1 off
into a separate work type, (yeah, I know, it's difficult to change the
server code), and to encourage other readers of this list to consider
focussing on P-1 work.

[...]

 The default has to be safe;...

Again, I agree.  While there will be some people who have made a deliberate
decision not to allocate extra memory, in many cases people will simply have
accepted the default, which means that some machines which could allocate
more memory to stage 2 without adversely affecting the user won't be
configured to do this.  However that same tendency to accept defaults puts
GIMPS programmers under an obligation to set those defaults conservatively.

 ...IMO the current default memory allowance of 8MB
 is entirely reasonable, even though it causes P-1 to run stage 1 only for
 any realistic assignment, and even though _new_ systems are usually
delivered
 with at least 256 MB RAM.

Against that is the observation that the basic memory footprint has barely
changed in the over three years I've been with the project, while typical
system memory has increased by a factor of four or more.  A default set to
10% of available memory would allow a 256MB system to perform a modest stage
2 on low assignments in the test range, and on DC assignments, while still
using proportionately less memory than three years ago.  The effect of this
could be further mitigated if the memory dialog included radio buttons to
limit further the memory usage to 'minimum' (default), with other options
being 'reasonable' and 'desirable' (as described in the helpfile), as well
as 'maximum', and 'Do not run'.

Thus the default would be to run a minimal stage 2 provided it could be done
in 10% of system memory or less.  I would consider this to be reasonable and
conservative.

 Running P-1 on a 10 million digit exponent requires in excess of 64 MB
 memory to be allocated in order to run stage 2 at all. That's a lot to ask
 as a default!

It is.  OTOH if the box has 1GB installed, then it's not so much.

 Regards
 Brian Beesley

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Puzzle

2002-06-11 Thread Daran

- Original Message -
From: Daran [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, June 10, 2002 6:54 AM
Subject: Re: Mersenne: P-1 Puzzle

 This appears to have happened to me at least once.  I'll spend some time
 later today cross-referencing my WTD file against pminus1.txt to see if
 there are any more I don't need to do.

Yesterday morning, shortly after 6 UTC, I reserved 42 DC exponents between
7.08M and 7.74M, i.e. all were recently expired exponents.  Of these 27 had
the P-1 bit set, all of which were P-1 complete, according to the
pminus1.txt file dated June 9.  Of the 15 with P-1 bit clear, one was in
fact P-1 complete.  I also reserved 32 test exponents between 12.7 and
13.8M.  Of these, 27 had the P-1 bit set, 5 clear.  None of these disagreed
with pminus1.txt.  Finally I checked 28 DC exponents which I had previously
reserved, between 7.23M and 7.74M.  All had the P-1 bit clear.  (I routinely
unreserve any exponents I get with P-1 bit set.)   Of these, two were P-1
complete.

From this, admittedly small and non-random sample, it appears that about
one-third of *reassignments* of expired DC exponents have the P-1 bit clear,
and that about one in fifteen of these are incorrect, leading to a redundant
P-1 rerun by the new owner.  I don't know what proportion of DC assignments
are reassignments, but given that P-1 testing takes about 2% of the time to
run a DC test, I would suggest that the impact of this problem upon the
project as a whole is negligible.

However, for anyone processing a lot of exponents, particularly those
specialising in collecting reassignments, it might be worth putting together
a script to identify these anomalies.
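
(Something along these lines would do it - a Python sketch.  I'm assuming
worktodo.ini lines of the form Test=exponent,bits,p-1done or DoubleCheck=...,
and that pminus1.txt carries the exponent as its first comma-separated field;
adjust the parsing to whatever the files actually look like.)

# Flag worktodo entries whose P-1-done flag is 0 even though the exponent
# already appears in pminus1.txt (file formats assumed - see above).
done = set()
with open("pminus1.txt") as f:
    for line in f:
        first = line.split(",")[0].strip()
        if first.isdigit():
            done.add(int(first))

with open("worktodo.ini") as f:
    for line in f:
        line = line.strip()
        if "=" not in line:
            continue
        worktype, args = line.split("=", 1)
        if worktype not in ("Test", "DoubleCheck"):
            continue
        fields = args.split(",")
        exponent = int(fields[0])
        p1_done = len(fields) > 2 and fields[2].strip() == "1"
        if exponent in done and not p1_done:
            print("P-1 already done for", exponent, ":", line)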

With 512MB I probably have considerably more available memory
 than the average machine doing DCs now.  That will be less true for
 machines doing DCs in the future, and probably not true at all for
 machines doing first time tests.

Inspecting pminus1.txt confirms this.  I would do a 7960000 DC exponent with
B1=45000,B2=675000.  Ignoring those which were obviously P-1ed as first-time
test exponents, or where the tester has chosen to go well beyond the optimal
limits, there are very few in this range which have been factored deeper
than this, and many which have had stage 1 only or a very limited stage 2.
I would do a 15500000 test exponent with B1=190000, B2=4227500, which is
better than average, but not to the same extent as with the DC range.  And I
can P-1 many DC assignments in the time I'd take to do a single test
assignment.  I'd do a 10M digit exponent with B1=375000,B2=8156250, still
better than average, but there are quite a few that go much deeper.

Very noticeable is the proportion of exponents - in all three ranges - which
are not getting a stage two effort at all.  26 out of the 85 exponents between
7950000 and 7960000, 24 out of 54 between 15500000 and 15505000, 35 out of 57
between 33219000 and 33223000.  I do not believe that large numbers of P4
systems are being shipped with just 8MB of RAM!

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Puzzle

2002-06-09 Thread Daran

- Original Message -
From: Brian J. Beesley [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, June 09, 2002 2:18 PM
Subject: Re: Mersenne: P-1 Puzzle

 6972001 and 7076669 are starred although the fact bits column seems to
 indicate that both trial factoring to 2^63 and P-1 have been run. This is
 _definitely_ true for P-1 on 7076669, the fact is recorded on my system in
 both results.txt  prime.log.

It also should be recorded in the worktodo.ini file for that exponent.  Now
if you were to unreserve that exponent, then contact the server sufficiently
late in the day so that it is immediately reassigned back to you (rather
than giving you a smaller one), then check the WTD file, then you'll find that
the P-1 bit will have been unset.  (This is how I discovered the problem in
the first place.)  There is a small risk of losing the assignment, if
someone checks it out seconds after you unreserve it.

 So far as 6972001 is concerned, the database
 (dated 2nd June) indicates P-1 has been run to a reasonable depth but
 trial factoring has only been done through 2^62. My system definitely
 won't have done any more trial factoring yet, let alone reported anything,
 since that system is set up with v22 defaults i.e. defer factoring on new
 assignments until they reach the head of the queue.

I'll bet its worktodo.ini entry also has the P-1 bit unset, meaning that you
will do a redundant P-1 unless you manually fix it.  What does it give for
the factored bits?  62, or 63?

 7009609, 7068857  7099163 are not starred although the fact bits
 column is one short. The nofactor  Pminus1 databases (dated 2nd July)
 give these all trial factored through 2^62  Pminus1 checked to B1=35000,
 B2=297500 (or higher). The P-1 limits seem sensible for DC assignments,
 but shouldn't these have been trial factored through 2^63 like most of the
 other exponents in this range?

Again, what does your WTD file say?

[...]

 I wonder what happens if you're working like Daran and someone returns a
 P-1 result independently (either working outside PrimeNet assignments,
 or perhaps letting an assigned exponent expire but then reporting
results);

Or if someone is working like me who hasn't identified the problem.  If they
unreserve an exponent whose P-1 hasn't been recognised by the server, then
the next owner will do another one, with possibly different limits.

This appears to have happened to me at least once.  I'll spend some time
later today cross-referencing my WTD file against pminus1.txt to see if
there are any more I don't need to do.

 This is not trivial; e.g. if you get no factors, B1=100000, B2=1000000
 and no factors, B1=200000, B2=200000 there might still be a factor
 which would be found if you ran with B1=200000, B2=1000000. Also, if
 the database says that P-1 stage 1 only has been run (probably due to
 memory constraints on the system it ran on), at what point is it
worthwhile
 running P-1 for the possible benefit of finding a factor in stage 2?

That question generalises.  If the database shows that a shallow P-1 has
been run, at what point is it worth running a deeper one, and with what
limits?

Suppose the a priori optimal bounds for an exponent are, for example
B1=4,B2=60, but it turns out that only stage 1 has been run to
4.  Assuming that it's worth re-running the P-1 at all, it might be
better to drop the B1 limit - to 35000, say, and increase the B2 limit.
This would reduce the factor-space overlap with the first test.

What's missing in all this is any kind of quantitative analysis.  In any
case, as long as there are exponents which haven't had a P-1 at all, I
prefer to stick to them.

[...]

 Daran, my advice would be to concentrate on exponents above the current DC
 assignment range which have already been LL tested but not P-1 checked, or
 on exponents above the current LL assignment range which have been trial
 factored but not P-1 checked, according to the official database (updated
 weekly - you will need the pminus1, nofactor  hrf3 database files, plus
 the decomp utility to unscramble nofactor).

I have these files, but following your advice would undermine my rationale
for doing this.  With 512MB I probably have considerably more available memory
than the average machine doing DCs now.  That will be less true for machines
doing DCs in the future, and probably not true at all for machines doing
first time tests.

I can work around the problem while staying within the current DC
range.  It's just an irritation.

 Regards
 Brian Beesley

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: Robin Stevens [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 10:28 PM
Subject: Re: Mersenne: Work being wasted

 Nah, the candle is being burned from both ends. The point is that
 the small ones _are_ being poached. If you work in the same order
 as the poacher, work on lots of exponents will be replicated. If you
 work in the opposite order, only one exponent will be accidentally
 triple-checked.

A solution, then, is not to do small exponents at all.

 Regards
 Brian Beesley

Daran G.




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread Daran

- Original Message -
From: Maciej Hrynczyszyn [EMAIL PROTECTED]
To: Mersenne List [EMAIL PROTECTED]
Sent: Sunday, February 03, 2002 10:48 PM
Subject: Re: Mersenne: Work being wasted

 ...what should I do in
 order not to waste my (my A1200) time working on numbers which may be
 taken over by someone else in some uncertain future? Currently I'm
 finishing my first exponent (that's M7505207 double-check)...

Don't disclose your exponents to a list read by Aaron Blosser.  :-)

 M,
 --
  Maciej Hrynczyszyn banshee*uoo.univ.szczecin.pl _ __   __ _  __| | __ _

Daran G


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Primenet Bug

2002-02-07 Thread Daran

I want to do P-1 testing specifically, so I've been unreserving DC exponents
after completing this stage, but before starting the test.  A feature of the
Primenet server is that it hands out the lowest available exponent of the
type requested, which is liable to be the one just unreserved.  The
workaround is obvious and easy, but the feature has revealed a problem.
About half the time when they come back to me, it's not been recorded that a
P-1 has been done.

This seems to be a persistent problem with the exponents it affects, i.e.
subsequent unreservations of the same exponent continue to come back with
the P-1 bit clear, so it's not just some race condition.  I've been working
around this by first ensuring that the exponent I want to return is lower
than those currently being served, so that I'm very likely to get it back.
Then I can check that the P-1 bit is set, before returning it permanently.

I now have 90 days of DC work queued, which I have to do, or lose the work
I've done.

Does anyone know about this?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread Daran

- Original Message -
From: Bruce Leenstra [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 8:46 PM
Subject: Re: Mersenne: Work being wasted

 So I would urge George and Scott to at least change it so that -- if an
 exponent has been assigned to more than one person -- it stores the last
 two names until a valid doublecheck proves the exponent composite.

Better would be to store every name until a valid doublecheck proves the
exponent composite.  The extra storage would be negligible.

 Bruce Leenstra mailto:[EMAIL PROTECTED]

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread Daran

- Original Message -
From: Aaron Blosser [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 5:18 PM
Subject: RE: Mersenne: Work being wasted

 it just bothers me when someone decides to do Google
 searches in some sort of attempt to find information on someone in
 particular.  That's the sort of thing that borders on stalking, and it
 wouldn't be the first time someone's done a Google search and then
 rejoiced that they found such a juicy tidbit of information from my
 past. :)

I don't see anybody 'rejoicing'.  Quite a few people here seem to be
bothered about poaching.  You don't seem to be bothered about that, so I
wonder why anyone should be bothered about what bothers you.

 That's okay, because in so doing they've just proved
 themselves to be a kook and there's no longer any doubt as to whether I
 should ignore them. :)

No change there, then.

Personally I can think of at least two characteristics of kookdom exhibited
by you - a total disregard for the feelings of others, and making absurd
stalking allegations against them.

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Work being wasted

2002-02-07 Thread Daran

- Original Message -
From: Steve Elias [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 1:43 PM
Subject: Re: Mersenne: Work being wasted

 no way!  i understand that statement is nonsense.  morality  ethics
 are not defined by people's own ideas, they are defined *absolutely*
 and NOT by any human.

 just because today's politically-correct NewSpeak people have
 redefined morality to mean whatever an individual thinks is moral,
 that doesn't mean that morality has actually been redefined.  moral
 relativism is *bullshit* anywhere you find it, including here on this
 project.

Could we please keep off-topic philosophical points off the list?  People
who feel strongly otherwise from you might be tempted to reply.  The result
could be that a heated discussion is inflicted upon those who have no
interest in it.

And that would be immoral,

 DOWN WITH EXPONENT-POACHERS as well as all WEAKLING MORAL RELATIVISTS,

Death to the extremists!

 /eli

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: just some comments to improve our site

2002-01-15 Thread Daran

- Original Message -
From: Torben Schlüntz [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, January 15, 2002 10:26 PM
Subject: Mersenne: just some comments to improve our site

 Why is P-1 factoring not a single schedule task? like the LL and the
 Trial Factoring?

 Why is P1 factoring hooked upon the LL test?
 Why does P1 not have it's own life like the TF and the LL?

This has been discussed at length.  The problem is that it is difficult to
get the server code changed.

[...]

 Some people might have plenty of mem - outdoing my best a 512M - but
 some of the machines I (or maybe Primenet) have selected for P-1 have
 nothing more than 64 or 128M.

If you have several machines, with different amounts of available RAM, then
you could perhaps give P-1 work of the other machines to the one with the
greatest.  That would involve some manual labour.

 Also the newsletter should more often be sent. We make progress every
 day so why don't we tell GIMPS'ers what is happening? Even a small
 progress is a giant leap, like now all numbers below 4.900.000 has been
 doublechecked or All numbers below 16.000.000 has been trial
 factorized.

Have we sent a newsletter since finding M#39?

 Just my few words
 happy hunting
 tsc

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Optimizing P4 usage

2002-01-14 Thread Daran

- Original Message -
From: Gerry Snyder [EMAIL PROTECTED]
To: mer [EMAIL PROTECTED]
Sent: Monday, January 14, 2002 5:37 AM
Subject: Mersenne: Optimizing P4 usage

At home I have 2 PC's, both of which devote their spare cycles to GIMPS.
One is a 1.3 GHz P4 running W98, and the other a dual 1 GHz P3 running
linux. One of the P3's is doing LL testing, and the other is doing
mostly ECM, with a little trial factoring thrown in.

If I'm not mistaken, you'd be better to have one of your dual processors
doing just TF.

[...]

 My question is whether it is worth the trouble to shift the trial and
 P-1 factoring of the next one to one of the P3 processors (the non-LL
 one). It might lead to one extra LL test in two years.

Definitely move the TF to your P3.   A P4 doing integer work is wasted.

 The worktodo file for the P4 has:

 Test=current,68,0

Why hasn't your current assignment had a P-1?  You should suspend testing to
do this.  My system with 459MB available memory would choose B1=375000,
B2=8156250 (5.17% chance of success) for a 33M first-time assignment (i.e. a
successful P-1 saves two LL tests) and B1=180000, B2=2970000 (3.78%) for a
33M doublecheck assignment.  Your system - with a different amount of
available memory - might choose different limits, but whatever they are, you
should choose limits somewhere between them, closer to the first-time, or
doublecheck values according to how far through the first time check you
are.

 Test=next,60,0

 It should be noted that the W95 machine does not have enough RAM for
 phase 2 of the P-1 factoring, but the linux machine does.

Do P-1 on whichever machine can give it the most memory, and always give it
as much as you can.  340MB or thereabouts is optimal for a 6-7M exponent.
GOK how much a 33M exponent would take.

 Any suggestions?

If you've got less than 512MB on your linux box, email the exponent(s) to
me (after you've TFed it) and I'll P-1 it for you.  I can do a 6-7M P-1 in
about 3 hours, so I'd guess it'd take a couple of days.

 TIA,   Gerry

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Error: Primenet error 12031

2002-01-09 Thread Daran

?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: CNET coverage of M39: Very good

2001-12-15 Thread Daran

- Original Message -
From: Bruce Leenstra [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, December 15, 2001 1:21 AM
Subject: Re: Mersenne: CNET coverage of M39: Very good

 Perhaps the 200 years of work done each day refers to the completed units
 turned in that day. But 73,000 is more than a third of the total ...
 If it represents all the CPU time, that means that 2/3's of the time
 Prime95 has lost control of the cpu. Maybe Prime95 is too polite, it
 shouldn't defer to all those silly business apps.

I'm not sure many outside this list would agree with your last point.  :-)
In my experience prime95/mprime gets 95% of the processor even when I'm
making heavy use of the machine.  That goes up to 99.9% when I'm not using
it.  Also I would guess that a state of the art PC runs at 20 or more times
the speed of a P90, while an entry level machine would be about half
that.  So I'm lead to some combination of the following conclusions:-

1.  Most GIMPS machines are obsolete, i.e less than entry level.
2.  Most GIMPS machines are on for only a few minutes each day on average.
3.  One or both of the two figures published is be incorrect.
4.  The figure for the number of machines includes many that are not really
active.

 Bruce

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: [Fwd: Mersenne: P-1 Stage 2]

2001-12-15 Thread Daran

- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, December 15, 2001 2:05 AM
Subject: Re: [Fwd: Mersenne: P-1 Stage 2]

   However, there's another generalisation that occurs to me that would be
   quite cheap to implement.  The purpose of stage two is to catch those
   factors which are 'just out of reach' of stage 1.  However another way a
   factor could be 'just out of reach' would be if the power of exactly one of
   the primes <= B1 was exactly 1 more than stage 1 would find.  Stage 2 would
   find these factors *as well* if the algorithm were run for primes in [2,B2]
   instead of [B1,B2]

Actually that really ought to be [2,~B2/B1] and then [B1,B2].  Another power
on top of a prime in [~B2/B1,B1] will take it above B2.  Since B2 is
typically only an order of magnitude greater than B1 this is not going to
make a lot of difference.

  Has this been investigated?

 Good point. The implementation I described finds a factor f iff the
 factorization of ord(a) mod f has only primes and prime powers <= B1 and
 at most one prime <= B2.
 It would be more reasonable if it could have primes and prime powers <=
 B1 and at most one prime _or prime power_ <= B2. That prime power has,
 after all, a probability >= 1/B2 of dividing a random integer.

This is a generalisation of the above.

 However, that is awful to implement. In stage 1, you exponentiate each
 prime p by floor(log_p(B1)). In stage 2, you'd have to include small
 primes p <= sqrt(B2), exponentiated by floor(log_p(B2)) -
 floor(log_p(B1)).

Except that for primes above ~B2/B1, this will be zero, so we could ignore
them.

[...]

 If you want to make really sure that all powers of primes <= B2 are
 included, you can modify stage 1 to do so, i.e. exponentiate each prime
 p <= B1 by floor(log_p(B2)). This won't take a lot of extra time and is
 *much* easier to implement.

I agree.  The question is, which algorithm gives us the greatest factors to
work ratio?
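
(For comparison, the per-prime exponents under the two schemes, for toy
bounds - B1 and B2 below are just small illustrative values.)

# floor(log_p(B1)) is what stage 1 uses now; floor(log_p(B2)) is the
# modified version suggested above.
from math import floor, log
B1, B2 = 15, 150
for p in (2, 3, 5, 7, 11, 13):
    print(p, floor(log(B1, p)), "->", floor(log(B2, p)))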

 Alex

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: CNET coverage of M39: Very good

2001-12-13 Thread Daran

- Original Message -
From: Jeff Woods [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, December 13, 2001 6:44 PM
Subject: Mersenne: CNET coverage of M39: Very good

 ...His
 system was part of a 210,000-machine quasi-supercomputer stretched across
 the globe.

[...]

 The Mersenne prime search is moving in that direction. Each day, its
 network of computers does work that would take a single 90MHz Pentium
 computer 200 years to accomplish...

200 X 365 = 73000 p90 days/day

So what are the other 137,000 faster than P90 machines doing?
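
Put another way (trivial Python; the per-machine speed estimate is the same
rough one discussed elsewhere in this thread):

    machines         = 210000
    p90_days_per_day = 200 * 365            # "200 years of work each day"
    print(p90_days_per_day)                 # 73000
    print(p90_days_per_day / machines)      # ~0.35 P90-equivalents per machine,
                                            # when a current PC should manage
                                            # ten to twenty P90-equivalents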

Regards

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Teams and AOL

2001-12-13 Thread Daran

- Original Message -
From: Russel Brooks [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, December 13, 2001 4:27 AM
Subject: Re: Mersenne: Teams and AOL

 My primary objective is to get their pcs running Prime95; stat
 credit is a minor secondary concern.

Or any other distributed computing project.  An idling computer is like a
running tap.

 Cheers... Russ

Daran


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1 Stage 2

2001-12-11 Thread Daran

- Original Message -
From: Alexander Kruppa [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, December 10, 2001 12:16 AM
Subject: Re: Mersenne: P-1 Stage 2

 P-1 stage 1 computes x = a^E (mod N), where E is the product of all
 primes and prime powers = B1.

Right.  The math page says 3^(2*E*P) and neglects to mention the (mod N).
It also doesn't make it clear that E is a product of prime *powers*, and not
just a primorial.  I didn't understand how that could work, but this makes
rather more sense.

So if we were using P-1 to factorise 1000 using B1=15 we would calculate

E = 2^9 * 3^6 * 5^4 * 7^3 * 11^2 * 13^2

Or did you mean

E = 2^3 * 3^2 * 5 * 7 * 11 * 13  ?
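
For concreteness, the second reading - each prime raised to the largest power
not exceeding B1 - looks like this in throwaway Python (names my own):

    # Second reading: each prime raised to the largest power not exceeding B1.
    B1 = 15
    E = 1
    for p in (2, 3, 5, 7, 11, 13):          # the primes up to B1
        pk = p
        while pk * p <= B1:                  # 2 -> 8, 3 -> 9, the rest stay at p^1
            pk *= p
        E *= pk
    print(E)                                 # 8 * 9 * 5 * 7 * 11 * 13 = 360360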

[...]

 For each x^p_i we compute this way, we multiply this x^p_i-1 to an
 accumulator.

(Mod N) ?

This generalises surely.  You could have a third bound B3, and allow one
prime between B1  B2, and a second prime between B1  B3.  And then a
fourth bound B4 and so on.  (I'm not suggesting that it would be useful to
implement this.)
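
For reference, the stage 2 loop described above, stripped of all the clever
pairing and precomputation a real implementation would use, is just this (toy
Python; the 2537 = 43 * 59 example is mine, chosen so that 43 - 1 = 2 * 3 * 7
is smooth apart from the single prime 7 in (B1, B2]):

    from math import gcd

    def p_minus_1_stage2(x, N, B1, B2):
        # x is the stage 1 residue a^E mod N.  For each prime q in (B1, B2]
        # multiply (x^q - 1) into an accumulator, all mod N, then take a gcd.
        acc = 1
        for q in range(B1 + 1, B2 + 1):
            if all(q % d for d in range(2, int(q ** 0.5) + 1)):   # q is prime
                acc = (acc * (pow(x, q, N) - 1)) % N
        return gcd(acc, N)

    x = pow(3, 60, 2537)                     # toy stage 1: B1 = 5, E = 2^2 * 3 * 5
    print(p_minus_1_stage2(x, 2537, 5, 20))  # prints 43, which stage 1 alone misses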

Thanks for a full and thought-provoking reply.

 Alex

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Why is the default page...

2001-12-09 Thread Daran

... of the GIMPS website http://www.mersenne.org/ a list of other
distributed computing projects?  By all means link to and support these
projects, but wouldn't prime.htm be a more sensible starting point?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: P-1 Stage 2

2001-12-09 Thread Daran

The math page http://www.mersenne.org/math.htm explains how to do a stage 1
P-1 test with given bound B1, but it omits the explanation of the
enhancement that uses B2 in stage 2.  Anyone care to explain
how this is done?

Cheers

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: More on M#39

2001-12-06 Thread Daran

- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, December 06, 2001 4:50 PM
Subject: Mersenne: More on M#39


 I presume Scott will now try get the story really rolling in the press.
 Let's wish him luck.

Can you let us know when we can start doing our bit to promote this?  I'm
itching to get this into my email/newsgroup sig, but I don't want to
preempt the official release by as much as one second.

Also what's the preferred URL?

 Congratulations to all,
 George

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-04 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, December 03, 2001 11:22 PM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

 On 3 Dec 2001, at 20:45, Daran wrote:

  Shouldn't that be 1.015 for double-checking assignments?

 I think 1.03...

Yes, of course.

 ...However you do have a point. P-1 limits do depend on
 the trial factoring depth,...

Now I'm puzzled.  Obviously P-1 and trial factoring limits both depend
upon the exponent, so will correlate, but I don't see a direct dependency
between them.

 and are much smaller for DC assignments
 than for first tests, so there is already something built in.

Right.  I'd noticed that my P-1 probabilities for DC assignments were about
half that for LLs, but I'd attributed that to the differences between the
magnitudes of the exponents.  This makes more sense.

  Also does the cost part of the calculation recognise the increased cost
of
  trial-factorisation after 2^64?

 Yes. The trial factoring depth is constant at 64 bits from ~8.5
 million to ~13 million. Don't forget that the number of candidates
 which need to be checked is _inversely_ proportional to the
 exponent.

Because any factor must be of form 2kp+1.
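
In other words (a toy Python line, nothing to do with Prime95's actual sieve):

    def tf_candidates(p, bits):
        # Candidate factors of 2^p - 1 below 2^bits have the form q = 2*k*p + 1,
        # so there are roughly 2^bits / (2*p) of them before any sieving.
        # (Prime95 further thins these using q = +/-1 mod 8 and small primes.)
        return (2 ** bits) // (2 * p)

    # Roughly half as many 64-bit candidates for an exponent twice the size:
    print(tf_candidates(8500000, 64), tf_candidates(17000000, 64))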

[...]

  Finally if P-1 factorisation were to be spun off into a separate work
unit,
  then the optimal arangement would be to trial factor while
  cost_of_trial_factoring * chance_of_P-1_factoring is less than
  cost_of_P-1_factoring * chance_of_trial_factoring.  Then P-1 factorise.
  Then complete trial factorisation according to the above formula.

 Interesting - but I think the effect would be small.

Perhaps, but the effort to implement (without spinning off P-1 into a
separate work type) would also be small.  The existing formula
(cost_of_trial_factoring < chance_of_trial_factoring * cost_of_LL_test *
2.03) ignores the effect of P-1.  A better formula would be:-
cost_of_trial_factoring < chance_of_trial_factoring * (cost_of_P-1_factoring
+ (1-chance_of_P-1_factoring)*(cost_of_LL_test * 2.03)).

This change on its own would surely be trivial to implement.
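
As code, the change is just this (Python; the costs and probabilities would
still have to come from Prime95's own estimates, so the names and values here
are illustrative only):

    def worth_another_tf_pass(cost_tf, p_tf, cost_pm1, p_pm1, cost_ll, ll_runs=2.03):
        # Old rule:  cost_tf < p_tf * cost_ll * ll_runs
        # New rule:  cost_tf < p_tf * (cost_pm1 + (1 - p_pm1) * cost_ll * ll_runs)
        # i.e. a factor found by TF also saves the P-1 attempt, but only saves
        # the LL/DC work in the cases where that P-1 would have failed anyway.
        return cost_tf < p_tf * (cost_pm1 + (1 - p_pm1) * cost_ll * ll_runs)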

But then, if the P-1 fails, you might as well carry on and do a little more
TF, which brings us back to the simpler formula I gave earlier.  You might
also want to lower your P-1 bounds a shade to take into account the fact
that a successful factorisation may only save a little extra TF effort.

The end result is a *tiny* reduction in the probability of finding a factor
(because of the slight reduction in P-1 bounds).  But you'll do less work,
and if you do find one, you'll probably find it earlier.  How much earlier?
I've just completed P-1 on M656677; it took 8787 seconds, which is about 1.5%
of the DC, itself estimated to take 6 days 23 hours 43 minutes (603780 seconds),
compared with a 2.6% chance of success.  My guess is that this would put it
optimally just before the last round of TF, which is (or should be) close to
the break-even point.

I'd need an estimate for the time of the last round of TF to be any more
precise.  The calculation would also need to be repeated for a first-time LL
assignment.
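
Spelling the arithmetic out (Python; the 1.03 LL-equivalents-per-DC figure is
the one from the exchange at the top of this message):

    pm1_cost = 8787                            # seconds spent on the P-1
    dc_cost  = 6*86400 + 23*3600 + 43*60       # 6 days 23 hours 43 minutes
    print(dc_cost)                             # 603780 seconds
    print(pm1_cost / dc_cost)                  # ~0.0146, i.e. about 1.5% of the DC
    print(0.026 * 1.03 * dc_cost)              # expected saving, ~16170 seconds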

My final remark on the subject of early P-1 is the observation that TF and
P-1 search overlapping factor spaces.  Are there any gains to be had in TF
from sieving out some of the smooth k's that an early P-1 failed to find?
Could you structure the factor candidate generation code so that the unsieved
list doesn't contain any smooth k's in the first place?  Even if it's not
possible or cost effective to do either of these, the fact that there are
smooth k's among the candidates should lower the estimate for the
probability of success.

Another reason to spin off P-1 into a separate work type is to allow
machines with heaps of memory to concentrate on this kind of work.  A little
experimentation shows that the probability of a successful P-1 with 400MB is
roughly double that of one with a minimal configuration, and some machines
will be doing LLs/DCs without the ability to P-1 at all.  What I envisage
is that a virgin exponent would first go out to P-1 (which would also do the
first few rounds of TF, according to my earlier formula), then as a
factoring assignment (to completion), then finally to LL and DC.

 What about factoring to a fractional depth? With a roughly
 logarithmic distribution of factors, surely about half the factors
 between 2^n and 2^(n+1) would be smaller than 2^(n+0.5), whilst
 searching to 2^(n+0.5) would take only about 41% of the time
 taken to search the whole interval.

This had occurred to me.  One could calculate the exact break-even point and
TF thus far and no further.  However the gains would be minimal for many
exponents in the middle of the depth 'bands', which are already being
near-optimally factored.  Even those at the edges of the depth bands can't
be very far from the break-even point.  Also the server would need to record
more information to enable TF to be taken further at a later date.

Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-04 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 04, 2001 2:38 AM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

  and one that is decades (at least) behind GIMPS. The only reason we
  do any factoring at all is to reduce the time spent on LL testing.

 But if factoring is not really part of GIMPS's purpose (and I agree it
 isn't), how can a separate factoring effort be behind GIMPS at all?
 Aren't they measuring their progress in a different, non-comparable,
 dimension?

Say rather that there are various criteria by which the two projects can be
compared.  It's probably true that it would take them decades or more to
completely factor a set of prime exponents comparable to those which GIMPS
has verified composite.  (and that's ignoring the effort to factorise the
cofactors of Mersennes with composite exponents).  They're probably not that
far behind GIMPS in terms of total computing done.  How far will depend upon
how good they are at mobilising support.

[...]

 In reply to a statement of mine about the extra benefit of finding a
 specific factor,
 Daran G. wrote:
 I can see no way of objectively quantifying this benefit.

 Well -- if there's no objective quantification of the extra benefit of
 finding a specific factor, then it seems to me that there's no
 objectively quantifiable justification for saying that it's not
 valuable to do a certain amount of extra P-1 factoring on a
 once-LL'd Mnumber.  :)

Can't argue with that.

[...]

 Daran G. wrote:
 It seems to be implicitely acknowledged in the way the trial
 factoring
 depths are determined.

 But don't forget that this refers to a depth determined by Prime95
 _in the context of calculating the factoring tradeoff point for
 maximizing GIMPS throughput for the Test= worktype_, NOT the context
 of a Factor= or Pminus1= worktype where the user has explicitly
 specified factoring limits, possibly for reasons beyond the ken of
 Prime95.

That seemed to be the question you were asking.

[...]

 I'm not saying that Prime95 should incorporate into its calculations a
 nonzero value for the benefit of finding a specific factor.

 I'm saying that it is rational for someone to decide to factor past
 the Prime95-calculated tradeoff points, and that it is unjustified to
 criticize extra factoring on the grounds that going past the
 Prime95-calculated tradeoff points is wasted effort.

I agree.  But you can look at this in terms of an even greater project than
GIMPS - The Great Distributed Computing Project that encompasses all such
efforts, and aims to use spare computing cycles to increase the knowledge
base of humankind generally.  What contribution one chooses to make depends
upon one's own personal preferences.

 Richard B. Woods

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-04 Thread Daran

- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, December 04, 2001 3:41 AM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

 I think more of the discussion has centered around stats and the
 formula for picking how far to trial factor, rather than whether factoring
 is of some mathematical value...

That's true, at least for my own recent contributions to this discussion.
However, what I've been trying to do is float a few ideas for increasing the
effectiveness of the factorising effort.  If P-1 was done early, as I
suggested, with bounds unchanged, then exactly the same Mersennes will get
factored sooner, on average, and with less work.  Ditto with the smooth k
sieving idea.  And if the cost of factoring is reduced, the optimal depth
increases.

The benefits of such an exercise are even greater if factors are given an
intrinsic value of their own.

Of course, the real scarce resource is not computer cycles, nor even ideas
for improvement, but in programming effort to implement.  Calculating that
particular trade-off I leave to others.

 -- George

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: New exponents

2001-12-04 Thread Daran

- Original Message -
From: Keith Garland [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 03, 2001 2:53 PM
Subject: Mersenne: New exponents

 Hi from Ireland - I have a few questions about how M39 will affect the
 allocation of current/new exponents.  Once the result has been announced I
 suppose we will recommence searching above M39?

No.  We are looking for /all/ Mersenne primes, not just one larger than the
biggest one we know.  We don't know whether M#39 really is M39 or not, and
the only way to find out is to test all smaller prime exponents.

 I got a couple of new
 exponents over the weekend - I assume that they are not necessarily  M39
in
 order to maintain secrecy.  If so then, after the announcement, should
we
 dump our existing tests or finish off factoring things that are half
 finished?

Please finish them.

 Does prime95 handle this scenario automatically - i.e. will new
 exponents be automatically sent if a manual update is done?

 Thanks,

 Keith

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Mersenne Digest V1 #912

2001-12-04 Thread Daran

- Original Message -
From: Nathan Russell [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, December 03, 2001 8:29 AM
Subject: Re: Mersenne: Re: Mersenne Digest V1 #912

 At 07:57 PM 12/2/2001 +, Gordon Spence wrote:
 A particularly sore point. If we maintained a top savers list whereby
 for every factor found you were credited with the time an LL test would
 have taken, then I and the other Lone Mersenne Hunters would pulverise
 these big university teams.

 Should George get credit for eliminating all those composite exponents
when
 he made his initial list in the mid-1990's?

 He probably found over a dozen times as many composites in (at a guess)
 under two minutes than we'll EVER find, at least until the project is
 extended past 80M.

Don't forget all the integers that were eliminated because they weren't
Mersenne numbers, the non-integral rationals, the uncountable infinity of
non-rational real numbers, complex numbers, quaternions...

The rest of our efforts have been trivial in comparison.  :-)

 Nathan

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M#39 news!

2001-12-03 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, December 01, 2001 8:02 PM
Subject: Re: Mersenne: M#39 news!

 ...I would have expected
 George's initial announcement to say over 4 million digits rather
 than well over 3.5 million digits (which would nevertheless be
 true!).

Or he might have said nearly 4 million digits which would also have been
true.

 Of course I could simply be misreading George's mind -
 possibly if he had said over 4 million it would have narrowed the
 field sufficiently to make identification easy.

I fear that this may be the case, given what various people have been able
to dig out of what has been said.

[...]

 I would strongly suggest that procedures are changed so that the
 next time a Mersenne prime is discovered, no information at all is
 released except to prior discoverers of Mersenne primes...

As a matter of interest, why should prior discoverers be so privileged?

[...]

 Irritated

Yes, I rather feel the same.  It's like enjoying the build-up to Christmas
only to find that someone's opened your present five days ahead of time.

 Brian Beesley

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Re: Factoring benefit/cost ratio

2001-12-03 Thread Daran

- Original Message -
From: George Woltman [EMAIL PROTECTED]
To: Gerry Snyder [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, December 01, 2001 10:39 PM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio


 I prefer a factor to a double-check.  But it is hard to quantify prefer
in a
 mathematical formula for computing trial factoring limits.  Prime95 uses
 the formula:   cost_of_factoring must be less than
chance_of_finding_a_factor
 times 2.03 * the cost_of_an_LL_test.

Shouldn't that be 1.015 for double-checking assignments?

Also does the cost part of the calculation recognise the increased cost of
trial-factorisation after 2^64?

I've noticed on occasion that I've had to do an extra round of trial
factoring before proceeding with a doublecheck.  This indicates that
previous factorisation has been non-optimal, or have the estimates for the
relative costs of factoring vs. LL testing changed with the introduction of
new hardware?

Finally if P-1 factorisation were to be spun off into a separate work unit,
then the optimal arrangement would be to trial factor while
cost_of_trial_factoring * chance_of_P-1_factoring is less than
cost_of_P-1_factoring * chance_of_trial_factoring.  Then P-1 factorise.
Then complete trial factorisation according to the above formula.
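
Or, as a sketch in Python (my own names; the costs and probabilities are
inputs, not things this snippet can estimate):

    def next_step(cost_tf_pass, p_tf_pass, cost_pm1, p_pm1):
        # Keep trial factoring while an extra TF pass buys probability more
        # cheaply than the P-1 run would; otherwise do the P-1 now.
        if cost_tf_pass * p_pm1 < cost_pm1 * p_tf_pass:
            return "another TF pass"
        return "P-1 now"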

 -- George

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Factoring benefit/cost ratio (was: Mersenne: Fw: The Mersenne Newsletter, issue #18)

2001-11-29 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, November 24, 2001 6:49 PM
Subject: Factoring benefit/cost ratio (was: Mersenne: Fw: The Mersenne
Newsletter, issue #18)

 But ones factoring benefit calculation might [should would be in
 line with the popular theme of prescribing what's best for other
 GIMPS participants :)] include not only the time savings of
 eliminating the need for one or two L-L tests, but also the extra
 benefit of finding a specific factor.

I can see no way of objectively quantifying this benefit.

 In the GIMPS Search Status table at www.mersenne.org/status.htm the
 march of progress is from Status Unknown to Composite - One LL to
 Composite - Two LL to ... Composite - Factored.

More desirable - whether or not recorded on that page - would be
Composite - Least (or greatest) factor known.  Most desirable (other than
Prime) would be Composite - Completely factored.

 This reflects the view (with which I agree) that it is more valuable
 to know a specific factor of a Mnumber than to know that a Mnumber is
 composite but not to know any specific factor of that Mnumber.

 So a Factored status is better for GIMPS than a Two LL status, but
 calculations of factoring benefit that consider only the savings of
 L-L test elimination are neglecting the difference between those two
 statuses.  If one consciously wants to neglect that difference ...
 well, okay ... but I prefer to see that explicitly acknowledged.

It seems to be implicitly acknowledged in the way the trial factoring
depths are determined.  If one places a non-zero value on a known factor,
then the utility of extra factoring work on untested, once tested, and
verified composites would be increased.  It would have to be set very high
indeed to make it worthwhile returning to verified composite Mersennes.

 Richard Woods

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: hi everyone

2001-11-19 Thread Daran


- Original Message -
From: Nathan Russell [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, November 18, 2001 5:24 PM
Subject: Re: Mersenne: hi everyone

 I cannot help wondering if this is the sequence M(2), M(M(2)), M(M(M(2))),
 etc, which is indeed prime for all terms even vaguely capable of testing
 at the present time.

 The problem is that 90 digits in the exponent seems high for M(M(127)),
 the next term in the sequence.

My understanding is that, in the absence of any number-theoretical reason to
believe this sequence always prime (or otherwise to believe M(M(127))
prime), it's generally expected to be composite.

 Nathan

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Fw: The Mersenne Newsletter, issue #18

2001-11-18 Thread Daran

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, November 15, 2001 5:38 AM
Subject: Mersenne: Fw: The Mersenne Newsletter, issue #18

 Brian Beesley wrote:

  Eh? Doesn't it make more sense to concentrate on factoring
  Mnumbers that haven't yet been L-L tested? That way success in
  finding a factor reduces the number of LL tests, as well as
  (eventually) the number of double checks.

Not necessarily.  The marginal benefit/cost ratio of doing factoring work on
exponent x awaiting a first time test is twice what it would be if exponent
x were awaiting a DC.  This does not mean that it is greater than that of doing
factoring work on exponent y which is awaiting a DC.

 Surely you don't mean to suggest that someone who receives a PrimeNet
 double-checking assignment that starts with some trial or P-1
 factoring
 should stop, return that DC assignment to PrimeNet, then specifically
 request an assignment of factoring a Mnumber that hasn't yet been L-L
 tested, because that would make more sense than doing the second round
 of factoring on the once-L-Led Mnumber, do you?  :-)

Ideally, the program 'should' only do factoring work up to the point where
the benefit/cost ratio equals 1, which means that it 'should' factor to a
lower level for DCs than for comparable 1st time tests.  The amount of
factoring that actually has been done might be lower still.

Whether the program actually works like this is another matter entirely.

 Another way of looking at what those to whom I referred are doing is
 that we (I'm there)...

As am I.

 ...are performing the extra-factoring portions of
 potential future DC assignments.  If we don't do that, whoever gets
 the future DC assignment will do it,

Perhaps they won't.  What I'm doing at the moment is collecting heaps of DC
exponents, P-1 factorising them, then returning them un-DCed to the server.
My reasoning is that many of the machines doing DCs will be older computers
with relatively small amounts of memory.  These will not be able to do stage
2 P-1 factorisation as effectively as I can with 512MB RAM, if they can do
it at all.  Therefore I am doing work which might not otherwise get done.

[...]

 As an old punchline goes: If you don't, someone else will!  Ahem.

Or not.

 Richard B. Woods

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: hi everyone

2001-11-18 Thread Daran

- Original Message -

From: jowy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, November 15, 2001 12:42 PM
Subject: Mersenne: hi everyone

 Hi

 I subscribed yesterday to the Mersenne mailing list. I'm a french student,
 and I'm very interested in mathematics and arithmetic. I worked with
friends
 on a sequence of number which we suppose to give only prime numbers.
 It works for the 6 first numbers of the sequence, but I grows very fast.
 The 7th would be a 90 digits exponent.

What do you mean by a 90-digit exponent?  That your number has 10^90
digits?  That it has about 10^90 bits?
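
(For scale - trivial Python - an exponent p gives a Mersenne number of p bits
and about p*log10(2) decimal digits:)

    from math import log10

    def mersenne_digits(p):
        # 2**p - 1 has p bits and int(p * log10(2)) + 1 decimal digits.
        return int(p * log10(2)) + 1

    print(mersenne_digits(127))                   # 39 digits for M(127)
    print(len(str(mersenne_digits(10 ** 89))))    # 89, i.e. ~3*10^88 digits for
                                                  # a 90-digit exponent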

 Is there a way to test this number? With Lucas Lehmer? Gauss?

Given that the largest verified prime has about 10^6 digits, and that we are
currently testing numbers up to about 10^7 digits, I would be surprised if
there was a practical way to prove your number prime.  There may, however,
be a simple way to prove it composite, or otherwise to prove your sequence
not always prime, using algebraic or number-theoretical techniques.

In general, number theorists do not give much credence to conjectures of
primality based solely upon the values of the first few elements of a
sequence.

Regards

Daran G.


_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Fw: The Mersenne Newsletter, issue #18

2001-11-11 Thread Daran


- Original Message -
From: primenetnewsletter [EMAIL PROTECTED]
To: Mersenne Newsletter [EMAIL PROTECTED]
Sent: Wednesday, November 07, 2001 8:49 PM
Subject: The Mersenne Newsletter, issue #18


 * Over 70,000 double-checks completed.
 * Over 100,000 exponents tested for the first time!

Does this mean that we're not doing enough double-checking?

Regards

Daran G.




_
Unsubscribe  list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: crash

2001-09-26 Thread Daran

Version 20.6.1 running under Win98 on a Duron 800 O/C to 943MHz with round
off checking enabled.

Might have been communicating with the server at the time through a
congested dial-up.  I was not interacting with the program at the time.

PRIME95 caused an exception 6baH in module RPCRT4.DLL at 0177:7fbd90c9.
Registers:
EAX=06ba CS=0177 EIP=7fbd90c9 EFLGS=0246
EBX=0002 SS=017f ESP=0099efc8 EBP=0099f0f0
ECX= DS=017f ESI=0099f020 FS=2417
EDX=0054001c ES=017f EDI=00640e4c GS=
Bytes at CS:EIP:
e9 4a 46 00 00 55 8b ec 83 ec 08 53 56 57 55 fc
Stack dump:
7fbde7a7 06ba 0099f124 10001327 0099f020 005c 00640e4c 
0099f55c 0002  00640e4c 00640e4c 7fc12c78 005c 000b

Regards

Daran


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M727 factored!

2001-09-06 Thread Daran

Eric Hahn wrote:

  M727 - 94.3716% probability - 2 factors
  M727 - 52.8693% probability - 3 factors
  M727 -  6.0014% probability - 4+ factors
  M727 - 91.1834% probability - 313-bit min. factor size
  M727 - 93.0447% probability - 428-bit max. factor size

It's impossible for these both to be true, since the product of a 313-bit
factor and a 428-bit factor would have at least 740 bits, whereas M727
itself has only 727 bits.

  M727 - 21.7336% probability - highly composite factors

What does this mean?

  M751 - 83.8467% probability - 2 factors
  M751 - 74.2974% probability - 3 factors
  M751 - 19.5801% probability - 4+ factors
  M751 - 87.2999% probability - 281-bit min. factor size
  M751 - 81.0003% probability - 526-bit max. factor size

Ditto: 281 + 526 = 807 bits, against M751's 751.

  M751 - 30.1716% probability - highly composite factors

Is M751 now the smallest unfactorised composite Mersenne?  What is the
smallest Mersenne not completely factorised?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M727 factored!

2001-09-04 Thread Daran

-Original Message-
From: Alexander Kruppa [EMAIL PROTECTED]
To: Frank Solensky [EMAIL PROTECTED]; [EMAIL PROTECTED]
[EMAIL PROTECTED]
Date: 04 September 2001 17:48
Subject: Re: Mersenne: M727 factored!

Frank Solensky wrote:

  I have what's probably a silly question.
 
  You wrote this:
M727 - 94.3716% probability - 2 factors
M727 - 52.8693% probability - 3 factors
M727 -  6.0014% probability - 4+ factors

 I assumed it was that they aren't mutually exclusive -- 2 factors
should
 have been 2+ factors instead.

Thats what I first thought, but then 2+ factors should have been 100% as
we knew that M727 is not prime.

All of the probabilities are, in reality, either 0% or 100%, so I would guess
that these are probabilities calculated on some heuristic basis which did not
take the
result of the LL test into account.

Presumably we can multiply each of the above by 100/94.3716 to incorporate
this additional information.
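
That renormalisation, spelled out (Python):

    p_composite = 0.943716     # heuristic P(at least two factors)
    for label, p in (("3+ factors", 0.528693), ("4+ factors", 0.060014)):
        # Condition on what the LL test already told us: M727 is composite.
        print(label, p / p_composite)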

Alex

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: What is done during a LL and what are the timing

2001-07-31 Thread Daran

-Original Message-
From: George Woltman [EMAIL PROTECTED]
To: Jeroen [EMAIL PROTECTED]; [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 31 July 2001 04:01
Subject: Re: Mersenne: What is done during a LL and what are the timing

   1% of time  -  Do something with the result to multiply it with itself

Oh, you are a tease.  :-)

What is it that you do?

Regards,
George

Daran


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: M(2) - composite?!

2001-07-31 Thread Daran

-Original Message-
From: Nathan Russell [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 31 July 2001 19:51
Subject: Mersenne: M(2) - composite?!

Hi all,

I'm a bit puzzled.  The other day, I donated blood and kept my mind
busy by doing LL tests on a few exponents mentally.  I kept getting
the result that the LL test for M(2) ends up resulting in a repeating
value of -1, and certainly cannot ever become zero.  Am I missing
something really obvious?  I confirmed it on paper later to make sure
I didn't make a mistake in the mental math.

From http://mersenne.org/math.htm (brackets and exponentiation added for
the sake of clarity)

The Lucas-Lehmer primality test is remarkably simple. It states that 2**P-1
is prime if and only if S(p-2) is zero in this sequence: S(0) = 4, S(N) =
S(N-1)**2 - 2 mod 2**P-1.

If P=2 then S(P-2) is S(0), which is 4 and clearly not 0.  It's not even 0 mod
3 (that is, mod 2**2 - 1).

However http://www.utm.edu/research/primes/mersenne.shtml#test does state
the P must be odd.  So I guess the mersenne.org page is wrong.
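
For anyone who wants to repeat the experiment without donating blood, the test
as stated (odd prime P only) is a few lines of Python:

    def lucas_lehmer(p):
        # 2**P-1 is prime iff S(P-2) == 0, with S(0) = 4 and
        # S(N) = S(N-1)**2 - 2 mod 2**P-1.  Valid for odd prime P only.
        m = 2 ** p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
    # [3, 5, 7, 13, 17, 19] -- M(11) correctly drops out, M(2) isn't covered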

Nathan

Daran


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: Proof for P = 2^170141183460469231731687303715884105727 - 1 is Prime Number

2001-07-28 Thread Daran

-Original Message-
From: leo [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 29 July 2001 00:44
Subject: Mersenne: Proof for P = 2^170141183460469231731687303715884105727 -
1 is Prime Number

[...]

Prime Number of Binary Digits All Equal to 1
DIVIDED BY
Even Number of Binary Digits All Equal to 1
HAS A REMAINDER

SO

any prime numbers q less than 170141183460469231731687303715884105727
is NOT a factor of P = 2^170141183460469231731687303715884105727 - 1

Surely all you've proved is that some /multiple/ of any prime number less
than 170...727 is not a factor of P.

Regards,
Leo de Velez

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Resurrecting an lapsed machine?

2001-07-24 Thread Daran

I've just noticed in my personal account report that I've only got the one
machine listed.  I should have two (this one, and my mother's).  The most
likely explanation is that her machine is configured to communicate manually,
and she hasn't been doing this.

I visited her a couple of weeks ago, and it was still happily hacking
through a DC.  I won't be visiting her for some time, and I can't ask her to
do any more than trivial admin.  Would it be sufficient just to tell her how
to set it to automatic communication?  Would this upset the primenet server?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1

2001-07-24 Thread Daran

-Original Message-
From: George Woltman [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 24 July 2001 21:17
Subject: Re: Mersenne: P-1


I'll see if I can work out a solution with Scott.  A minimum solution would
have
the server return whether or not the exponent has already had P-1 factoring
done.

Better, perhaps (but more work to implement) would be to have the server
return the B1 and B2 values used.  The client could compare these to the
values it would use if it were going to do this, based upon its available
memory, and calculate whether a rerun would be economical.

A better solution would include giving CPU credit for P-1
factoring and making it a separate work type.


That might be a little complicated, since a machine which has high memory
availability only at night might be better off getting two different types
of work to do.

I'll keep y'all posted.

Best regards,
George


Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: P-1

2001-07-23 Thread Daran

-Original Message-
From: CARLETON GARRISON [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]; [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 24 July 2001 00:18
Subject: Re: Mersenne: P-1

Daran,

Again I'm not the most qualified person to reply but...

The name of the game is validate - by duplication.  You cannot make a
case without duplicating the result.  This is to safeguard against the many
gremlins that can occur - faulty overclocked CPUs, etc.  If someone finds a
factor, someone ofer a PrimeNet can do a single division case and confirm
the result.

Yes.  Verifying a factor is trivial.

If not, then it is my understanding a first time LL will also
do factoring.

As far as I can tell, both first time and DC LLs do P-1 factoring, but not
trial factoring.  (I suspect they will trial factor first if the exponent
has not already been trial factored far enough, but I don't recall
encountering this.)  However, it is likely that most of the exponents within
the current DC range were first time LL checked before P-1 factoring was
introduced.  It is possible that once exponents are reached that were P-1
checked the first time round, they might not be checked again.

Whether the first time comes back prime or composite, then
PrimeNet knows that two factorings confirm no factors to a certain
boundary - say 2^60.

P-1 factoring uses boundaries in a different way, i.e. it searches a different
space (factors f for which f-1 is smooth) from trial factoring.

It's a question of probabilities.  If P-1 factoring takes - say - 2% of the
time to do an LL, and a success eliminates the need for first time /and/
double check LLs, then it's worth doing one if it has a sucess rate of more
than 1%.  For the P-1 tests I've done, the estimates for the probability of
success ranged from 2-5%, depending upon how much memory I allowed it.
(I've not done enough to confirm the accuracy of these estimates, but I've
been told they're good).

However, if the historical error rate for P-1s is the same as for LLs
(which is a reasonable assumption, given that they use the same FFT code*),
then the probability of a second P-1 (using the same bounds) finding a
factor that a first misses will be about 1% of this, i.e. 0.02-0.05%, and
only a single DC would be saved.  Clearly not economical.
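
In round numbers (Python, using the figures above, all of them rough):

    ll_cost  = 1.0       # one LL test as the unit of work
    pm1_cost = 0.02      # a P-1 run costs ~2% of an LL
    p_factor = 0.03      # ~2-5% chance of a factor from the first P-1
    error    = 0.01      # assumed historical error rate for P-1 runs

    # First P-1 on an untested exponent: saves a first-time LL *and* a DC.
    print(p_factor * 2 * ll_cost > pm1_cost)           # True  -> worth doing
    # Repeating it before the DC with the same bounds: only an earlier error
    # could have hidden a factor, and only the DC would be saved.
    print(p_factor * error * 1 * ll_cost > pm1_cost)   # False -> not worth it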

[...]

Saying all that doesn't mean we couldn't break out work into more work
unit types, it would just mean we'd also have to have a factoring DC work
unit type if it was to be removed from the LL runs.

No, for the reason given above.

The question boils down
to whether enough people would like to do factoring DCs.

It boils down to whether people with lots of memory (which is likely to be
the newer faster machines) would be willing to do some P-1 factoring on its
own, and whether the gains would be worth the programming effort.

This computer is a Duron running at 943MHz - hardly cutting edge speed -
with 512MB RAM, which is ample for stage 2 P-1s.  There must be many out
there - particularly those doing DCs - which don't have enough RAM, or are
running older clients.  I'd be happy to do nothing but P-1s on these
exponents that would otherwise never be tested in this way.

Carleton

Daran

*A counterargument is that P-1 can use huge amounts of RAM - hundreds of
megabytes - and so might encounter memory errors that LLs don't.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: P-1

2001-07-22 Thread Daran

I assume that p-1 factorisation is only done once, i.e., it is only done
prior to a DC if it wasn't done at the time of the first-time test.

Isn't there a case for splitting off p-1 into an entirely separate work
unit?  That way machines with insufficient memory either to run stage 2 at
all, or to do so with a good chance of success, could be given pretested
exponents for first-time/DC LLs.

Regards

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: 1000 barrier

2001-07-21 Thread Daran

-Original Message-
From: Russel Brooks [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 20 July 2001 23:42
Subject: Mersenne: 1000 barrier

I've been with GIMPS for about two years and yesterday achieved
a personal milestone; I finally broke thru the 1000 barrier and
made it to 996 on the top producers list. Each step forward is
getting smaller and smaller though; I think I'm approaching the
knee of the curve and will eventually start going backwards.

I've been going backwards for so long I stopped looking.  :-)

ID: rlbrooks
866MHz P3  450MHz P2 ( 133MHz P1 doing factoring)

I recently built a new Duron 800 system (OCed to 943MHz) to replace my old
K5/133, so hopefully I'm going forward now.

My most recent personal milestone was to pass 1 P90 year on DCs - not much,
I know.  I was annoyed that I fell just a few percent short when I turned in
my previous result, and would have had to wait five months to turn in the
next if I hadn't upgraded.  The new machine runs fifteen to twenty times as
fast, so I should double that in a couple of months or so.

Cheers... Russ

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Mersenne: Factorisation

2001-07-21 Thread Daran

What is the probability of a successful trial factorisation in the range of
exponents currently being given out (16-17M)?

Are the estimates given by the program for p-1 factorisation accurate?

Regards

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers



Re: Mersenne: GIMPS accelerator?

2001-05-16 Thread Daran

-Original Message-
From: Gareth Randall [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 15 May 2001 23:36
Subject: Re: Mersenne: GIMPS accelerator?

Daran,

This is an interesting piece of lateral thinking that deserves to go further
than I think it actually does.

Thank you for taking me seriously.

Essentially, I'm not sure how the operations that a graphics card can
provide, such as line drawing, texture overlaying, raytraced light effects
etc, could be made to implement a LL test or FFT etc which would require
things like bit tests, conditioning branches and loops etc.

What you've listed are the functions of a graphics card, each of which will
have been implemented through the application of one or more primitive
operations.  For example, the function of mapping a set of co-ordinates in
3-space onto screen pixels will be implemented by a linear transformation,
which will itself be implemented through a number of scalar multiplications.

I'm wondering if it might be possible to access any of the available primitive
operations without having to invoke a specific card function.

AFAICS the problem requires affirmative answers to all of the following
questions.

1.  Can the hardware theoretically do work useful to GIMPS?
2.  Could this be done efficiently enough to be worthwhile?
3.  Is it possible to program the hardware to do this work?
4.  Would it be possible to read the results back from the card?
5.  Is the available technical documentation sufficient for a programmer to
be able to implement this?
6.  Would the implementation be acceptable to the user?
7.  Are the prospective gains to the project worth the programming effort?

I suspect the answer to 1 is yes, given how simple the requirements are for a
set of primitive operations to be able to universally compute - the Turing
machine and Conway's life spring to mind.  But we wouldn't waste time
programming a hardware Turing machine to do LL tests, even if we had one.

An example of a user issue would be if the only way to program the card is to
'flash upgrade' the GPU's on-card firmware.  I wouldn't be willing to do that,
although I might consider installing a special GIMPS driver, so long as I
could uninstall again.

Conceivably additions could be done by superimposing textures and reading
back the resulting frame buffer, but these wouldn't be 64-bit precision
additions!

That's all you get with CPU integer arithmetic, but you can build longer ones
out of the shorter.
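
For example, chaining narrow adds with carry propagation - a toy Python
sketch, pretending each limb is a 16-bit register:

    MASK = 0xFFFF                    # pretend we only have 16-bit adds

    def wide_add(a_limbs, b_limbs):
        # Add two multi-limb numbers, least-significant limb first,
        # feeding the carry out of each 16-bit add into the next one.
        out, carry = [], 0
        for a, b in zip(a_limbs, b_limbs):
            s = a + b + carry
            out.append(s & MASK)
            carry = s >> 16
        out.append(carry)
        return out

    # 0x1234F00D + 0x00012345, split into 16-bit limbs, low limb first:
    print([hex(w) for w in wide_add([0xF00D, 0x1234], [0x2345, 0x0001])])
    # ['0x1352', '0x1236', '0x0']  i.e. 0x12361352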

Maybe some form of matrix multiplication could be done by rotating textures
before superimposing? However, I think the resulting calculation efficiency
would be very poor, and may never achieve useful precision.

Could you not build an FFT out of Discrete Cosine Transforms?  Or build a
multiplication from DCTs in some other way?  Some cards have hardware support
for this.

Also, any code would be very hardware specific, and may only work if the
display was not displaying, say, a desktop.

Which would hit the 'prospective gains' question hard, since it would not then be
useful on Windows machines.

However, if someone could implement it, it could provide the *ultimate* in
Mersenne related screen savers! What you'd see on the screen would be the
actual calculations themselves taking place before your eyes, and with no
overheads for displaying it either!

That I did not think of.

Yours,

=== Gareth Randall ===

Daran G.


_
Unsubscribe  list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ  -- http://www.tasam.com/~lrwiman/FAQ-mers


