On Sun, 17 Feb 2002 [EMAIL PROTECTED] wrote:
> Ah, but, factoring is actually moving ahead anyway in terms of 
> CPU years required for LL to catch up - because larger exponents 
> take more time to LL test. My impression is that the gap between 
> the head of the factoring assignment queue and the head of the LL 
> testing assignment queue has remained fairly steady at 4 to 5 
> million for at least three years.

And every time we move up a bit level, the time required to factor an
exponent goes up as well.  And the gap is only 3.7M right now, and
PrimeNet seems to be having that problem with large numbers of factoring
assignments not being handed out correctly.  I don't mean to be Chicken
Little here; I don't think there is any need for a "call to arms" on
factoring (since the balance of machines on each task can be somewhat
manipulated centrally).  I just wanted to correct the misperception that
factoring was going faster than LL.
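For what it's worth, the doubling per bit level is easy to see from the
form of the candidates: any factor of 2^p - 1 has the form q = 2kp + 1,
so factoring to depth 2^b means testing roughly 2^b / (2p) values of k
(ignoring sieving).  A quick sketch of the arithmetic (my own
illustration, not Prime95 code):

```python
# Candidate factors of 2^p - 1 have the form q = 2*k*p + 1, so trial
# factoring to depth 2^b means testing roughly 2^b / (2*p) values of k
# (before any sieving).  Each extra bit roughly doubles the work.

def candidates_to_depth(p, bits):
    """Approximate count of candidate factors q = 2kp+1 below 2^bits."""
    return (2 ** bits) // (2 * p)

p = 18_000_000  # an exponent near the 65/66-bit changeover
for b in (64, 65, 66):
    extra = candidates_to_depth(p, b) - candidates_to_depth(p, b - 1)
    print(f"bit level {b}: ~{extra:,} new candidates")
```

So the jump from 65 to 66 bits alone costs about as much as all the bit
levels below it combined.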

> Another point, surely, is that factoring assignments "suddenly 
> slowed down" when we reached the changeover point between 65 
> bit and 66 bit trial factoring depth in the low 18 millions.

The cutoff is at 17.85M if I remember correctly.  I really only look at
"net assignments checked out" because that is an easy number to
eyeball.  That rate won't be affected by a jump to a new factoring depth
right away, but that will cause some slowdown later on as machines given
the bigger assignments are slower to return them and check out new
assignments.  The biggest change lately has been that LL assignments have
slowed down somewhat (and 33M assignments may have picked up, but those
are a lot more variable, and it's harder to eyeball a trend unless a
record is kept).  Factoring assignments have picked up somewhat recently, but I
don't think this is a permanent trend.  I suspect it is largely if not
entirely due to the challenge that one team is embarking upon, and that is
going to last only a month.
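To make the breakpoint idea concrete, here is a toy sketch of how a
client might map an exponent to a default trial-factoring depth.  The
17.85M figure is the one I quoted above; the other thresholds and the
fallback depth are placeholders, not Prime95's actual table:

```python
# Illustrative only: map an exponent to a default trial-factoring depth.
# The 17.85M breakpoint is from the discussion above; the 33M threshold
# and the depth-67 fallback are invented placeholders.
TF_BREAKPOINTS = [
    (17_850_000, 65),   # below ~17.85M: factor to 2^65
    (33_000_000, 66),   # up to the next (assumed) breakpoint: 2^66
]

def default_tf_depth(exponent):
    for limit, bits in TF_BREAKPOINTS:
        if exponent < limit:
            return bits
    return 67  # assumed depth beyond the table

print(default_tf_depth(17_000_000))  # 65
print(default_tf_depth(18_000_000))  # 66
```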

> Eh? The only PrimeNet problem I'm aware of with in this respect is 
> that it can't cope with having LL and DC assignments for the same 
> exponent out at the same time.

It may have been fixed, but in the archives I saw references to PrimeNet
not properly expiring first-time LL assignments, and then handing them
out again, if they were in a range that had been overrun by DCs.  If
there is no PrimeNet problem, then there is no problem with the DC range
overrunning the LL range at all.

> Well, some of us try - personally I have several systems which 
> would be running LL assignments if left to "makes most sense" 
> running DCs instead. 

Me as well.  From what I understand the breakpoints to be, my K6-2 400
would be put on first time LL's by default.  I tried it on DC's, but the
FPU is so poor that it is really only suited to factoring.  My 466 Celeron
could do LL's very slowly, but given the relative balance between LL's and
DC's right now, I chose to put it on DC's.  Unsexy work that is highly
unlikely to find a prime, yet still needs to be done.

> The difficulty here (it's purely a matter of perception, not a technical 
> problem) is that, by pushing up the number of systems that get DC 
> assignments by default, we risk alienating new users who find that 
> they're not being given "exciting" assignments, and those with 
> slower systems who find that even DC assignments are taking well 
> over a month.

I don't think there is really anything wrong with the DC breakpoint as it
is right now.  The spread between leading and trailing edge is not
excessive, and the leading edge is not that far behind the trailing edge
of LL.  It's pretty well balanced.

And the point about "exciting" assignments is well taken.  There was a
surge of new users after M39 was discovered, and of those who did not
stick around (and let their assignments expire), the vast majority were
doing first time LL.  Most likely disappointed in how long it takes to
complete one and not willing to do less sexy but less time consuming
work.  But in spite of those who dropped out, it is evident that GIMPS
picked up quite a few users who have stuck it out, so the expiries from
those dropping out are just the cost of recruiting new users.

There's also the issue of all the myriad things that can happen between
the time an assignment is started and the time it ends.  Today I went to
help my father set up his new 800MHz Duron, and I put Prime95 on
it.  Despite the fact that it is quite capable of doing LL's, I put it on
factoring.  His history of screwing up his machine (though I still love
him, bless his heart) doesn't bode well for anything but the shortest of
assignments.  Longer assignments just don't survive those who tend to blow
away their boxes every now and then.

Which leads me to something else that might benefit factoring.  Factoring
doesn't have huge save files like LL does.  They're only 32 bytes.  I'd
suggest that any time a box updates its expected completion dates, it
also send along that saved information.  If the assignment later
expires, the savefile is sent to the new box, so that it does not have to
completely redo everything the previous box did.  Alternatively, at the
very least, Prime95 could send a "No factor to 2^XX" result for the
farthest bit level it has gotten to, and PrimeNet could save that, and
have the next assignee not redo all the previous bit levels.
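To sketch what I mean (the record layout here is invented for
illustration; the real savefile format is surely different, and the
field names are my own):

```python
# A tiny fixed-size factoring checkpoint that PrimeNet could relay to
# the next assignee so completed bit levels are not redone.  Layout is
# hypothetical: exponent (u64), completed bit depth (u32), current k
# offset (u64), padded out to a 32-byte record.
import struct

RECORD = struct.Struct("<QIQ12x")
assert RECORD.size == 32

def pack_checkpoint(exponent, bits_done, k_offset):
    return RECORD.pack(exponent, bits_done, k_offset)

def unpack_checkpoint(blob):
    exponent, bits_done, k_offset = RECORD.unpack(blob)
    return exponent, bits_done, k_offset

# A new box receiving this record would resume at bits_done + 1 instead
# of restarting from the lowest bit level.
blob = pack_checkpoint(18_234_567, 65, 123_456_789)
print(unpack_checkpoint(blob))  # (18234567, 65, 123456789)
```

Even just the "No factor to 2^XX" variant would save the next assignee
all of the completed bit levels, which as noted above are the cheap ones
anyway.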


_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
