Re: Mersenne: WinXP SP1 slows prime95

2002-09-11 Thread Brian J. Beesley

On Tuesday 10 September 2002 19:09, Jud McCranie wrote:
 Yesterday I went from Windows XP home to service pack 1.  The speed of
 prime95 went down by over 2%.  Has anyone else seen this?  Any ideas on
 what caused it or how it can be fixed?

No, I haven't seen this. I don't even have a copy of Win XP.

2% is the sort of change which can occur when a program is stopped &
restarted without changing anything else. Probably the cause is a change in
the page table mapping (of physical to virtual memory addresses). It's also
common to find Prime95/mprime speeding up a little when an assignment
finishes and the next one starts, compared with the speed measured when the
program is freshly started. This seems to happen on (at least) Win 95, Win
98, Win NT4, Win 2000 and Linux with both 2.2 and 2.4 kernels, with multiple
versions of Prime95 & mprime.

From what I've heard & read about XP SP1, I don't think there's anything in
it which should affect Prime95 running speed to any significant degree. 
Personally I would not agree to the modified EULA which comes with SP1, as it 
appears to allow M$ to take complete administrative control of your system. 
However, that's irrelevant to the speed problem; in any case, not applying 
the critical patches contained in SP1 is in itself a security risk.

Regards
Brian Beesley




Mersenne: Order of TF and P-1

2002-09-11 Thread Daran

I'm taking the liberty of replying to this on-list, since other people here
might have some input.

- Original Message (Off-list)
From: Anurag Garg [EMAIL PROTECTED]
To: Daran [EMAIL PROTECTED]
Sent: Tuesday, September 10, 2002 11:11 PM
Subject: Re: Cherub faced individuals with shoes that need tying.

  Also, if you have exponents of your own needing both P-1 and TF, it
  should have the P-1 done before the last TF bit.
  Brian Beesley has done extensive testing to verify this.

I don't know how much memory he was using, but the more you have available,
the earlier you should do it.

 Are you absolutely sure about this? Do let me know what the reason for
 that might be.

Absolutely sure.  I pointed this out in the list some time ago, and had a
fairly lengthy off-list discussion with him and George about it.

The reason is simple.  If you do Trial Factorisation first, and it finds a
factor, then you save yourself the cost of the P-1.  On the other hand, if
you do the P-1 first, and it finds a factor, then you save yourself the cost
of the TF.  Since the probability of finding a factor with the P-1 is about
twice that of the last bit of factoring, and the cost is much lower, the
expected saving is much higher.

The optimal point during TF at which to do the P-1 is when
cost(TF)*prob(P-1) = cost(P-1)*prob(TF)
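
For concreteness, here is a toy expected-cost comparison (just a sketch --
the costs and probabilities below are invented placeholders, not figures
taken from the client):

    # Toy model only: costs are in arbitrary units and the probabilities are
    # invented placeholders, not values taken from Prime95/mprime.

    def expected_cost(first, second, ll_cost):
        """Expected total cost when `first` is run before `second`.

        Each stage is a (cost, probability_of_finding_a_factor) pair.  If
        the first stage finds a factor, the second stage and the LL test(s)
        are skipped; if only the second finds one, just the LL test(s) are
        skipped.
        """
        (c1, p1), (c2, p2) = first, second
        return c1 + (1 - p1) * (c2 + (1 - p2) * ll_cost)

    tf_last_bit = (1.0, 0.01)   # last bit of TF: cost, prob of finding a factor
    p_minus_1   = (0.5, 0.02)   # P-1 assumed cheaper and about twice as likely to succeed
    ll          = 100.0         # cost of the LL test(s) a factor would save

    print(expected_cost(tf_last_bit, p_minus_1, ll))  # TF first, then P-1
    print(expected_cost(p_minus_1, tf_last_bit, ll))  # P-1 first, then TF
    # Expanding the two expressions shows that P-1-first wins exactly when
    # cost(TF)*prob(P-1) > cost(P-1)*prob(TF), i.e. the break-even above.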

This analysis is complicated by the fact that P-1 and TF search overlapping
factor spaces, and thus affect each other's conditional probabilities.
Currently the client assumes that no further TF will be done when it
calculates optimal P-1 limits.  In other words it assumes that a successful
P-1 will save the cost of 1.03 or 2.03 LLs depending upon whether the
assignment is a DC.  It does not consider the possibility that a P-1 may
only save a small amount of subsequent TF, which would be the case if that
TF would also have found the factor.  (Bear in mind that the conditional
probability of this is *increased* because the P-1 was successful.)
Consequently, if you do a P-1 before you finish TFing and you set the
factored bits to the level you have already done, the client will choose
limits which are too high; the limits will be (slightly) low if you instead
set the factored bits to the level you are going to do, but they will be
much closer to optimal.
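
To put a rough number on the effect (again a sketch with invented figures,
not the client's actual probability model):

    # Sketch only: how much a successful P-1 is worth if one more TF bit
    # level will be run regardless.  `q` is an invented stand-in for the
    # conditional probability that the factor P-1 found lies in the range
    # the remaining TF bit covers; as noted above, a successful P-1
    # *increases* this probability.

    ll_saving   = 2.03   # LL tests the client assumes a factor saves (first-time test)
    tf_bit_cost = 0.02   # cost of the remaining TF bit, in units of one LL test (made up)
    q           = 0.3    # invented conditional probability described above

    saving_no_more_tf   = ll_saving                              # no TF to follow
    saving_with_more_tf = (1 - q) * ll_saving + q * tf_bit_cost  # one TF bit to follow

    print(saving_no_more_tf, saving_with_more_tf)
    # Because the client always assumes the full saving, reporting only the
    # bits already done makes it overvalue the P-1 and pick limits that are
    # too high; reporting the final TF depth is slightly pessimistic but
    # much closer to optimal.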

When the TF limits were originally decided, it was assumed that a successful
TF would save 1.03 or 2.03 LLs.  I can't remember whether George has ever
said if they have been lowered to take the P-1 step into account.
Perhaps he or Brian could remind me.

Additional complications arise when you consider that P-1 and TF might be
done on different machines with different Integer:FP performance ratios.
I have never been able to get my head around this.  :-)

Regards

Daran G.





Re: Mersenne: Order of TF and P-1

2002-09-11 Thread Steve Harris

I don't think the TF limits were ever lowered; it seems they may have been
raised, as I have gotten several 8.3M DC exponents which first had to be
factored from 63 to 64 and THEN the P-1. It occurred to me that it might be
more efficient to do it the other way around, but factoring from 63 to 64
goes relatively quickly. If it were a question of factoring from 65 to 66
versus P-1 first, then I think the P-1 wins easily.
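
As a back-of-the-envelope illustration (arbitrary units and the usual 1/n
rule of thumb for the chance of a factor between 2^n and 2^(n+1), not the
client's actual model), the cost per unit of probability for TF roughly
doubles with every extra bit level, which is why the balance tips towards
P-1-first at the higher levels:

    # Back-of-the-envelope only: TF work roughly doubles per bit level,
    # while the rule-of-thumb chance of a factor between 2^n and 2^(n+1)
    # is about 1/n.
    base_cost = 1.0                       # arbitrary units for the 63->64 bit level
    for bit in range(63, 67):
        cost = base_cost * 2.0 ** (bit - 63)
        prob = 1.0 / bit                  # heuristic probability of a factor in this range
        print(f"{bit}->{bit + 1}: cost {cost:5.1f}  prob ~{prob:.4f}  cost/prob {cost / prob:7.1f}")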

Steve

-Original Message-
From: Daran [EMAIL PROTECTED]

snip
When the TF limits were originally decided, it was assumed that a successful
TF would save 1.03 or 2.03 LLs.  I can't remember whether George has ever
said if they have been lowered to take the P-1 step into account.

Daran G.





Mersenne: Re: Question on repeated squarings in the frequency domain

2002-09-11 Thread Colin Percival

At Tue, 10 Sep 2002 15:33:17 EDT, [EMAIL PROTECTED] wrote:
In that sense, floating-point FFT and NTT are similar -
in both cases you need enough bits to accommodate the full convolution
output digits. The only advantage NTT has here is that you don't sacrifice
any bits of your computer words to roundoff error, nor to the exponent field
floats are required to carry around.

   With regard to performing FFTs using approximate arithmetic, it's worth
noting that floating-point arithmetic results in an asymptotically better
worst-case error (O(n log n)) than any fixed-point arithmetic (O(n^2)).
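
For a concrete feel for the floating-point side, here is a minimal NumPy
sketch (nothing to do with Prime95's weighted transform): it squares a
number held as 16-bit digits with a double-precision FFT and measures how
far the raw convolution outputs drift from the exact integers; the drift
must stay below 0.5 for rounding to recover the true values.

    # Minimal demonstration, not Prime95's code: square a "big number" held
    # as base-2^16 digits using a double-precision FFT, then measure the
    # worst-case deviation of the outputs from the exact integer values.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096                                         # transform length
    digits = rng.integers(0, 1 << 16, size=n // 2)   # upper half left zero for the product

    x = np.zeros(n)
    x[:n // 2] = digits
    X = np.fft.rfft(x)
    y = np.fft.irfft(X * X, n)                       # squaring, before carry propagation

    exact = np.convolve(digits, digits)              # exact integer convolution
    err = np.max(np.abs(y[:exact.size] - exact))     # worst-case roundoff error
    print(f"max roundoff error = {err:.3e}")         # needs to stay well below 0.5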

Colin Percival





Re: Mersenne: WinXP SP1 slows prime95

2002-09-11 Thread Jud McCranie

At 09:31 PM 9/10/2002 +, Brian J. Beesley wrote:

 2% is the sort of change which can occur when a program is stopped &
 restarted without changing anything else.
I check the iteration times pretty often, and when the machine is idle
the times are pretty consistent, varying by less than that 2% difference.
Well, it's no big deal; I thought that maybe SP1 had added something that
was eating up some CPU time.



+---------------------------------------------------------+
| Jud McCranie                                             |
| Programming Achieved with Structure, Clarity, And Logic |
+---------------------------------------------------------+




Re: Mersenne: WinXP SP1 slows prime95

2002-09-11 Thread Mikus Grinbergs

On Tue, 10 Sep 2002 21:31:12 + Brian J. Beesley [EMAIL PROTECTED] wrote:

 2% is the sort of change which can occur when a program is stopped &
 restarted without changing anything else. Probably the cause is a change in
 the page table mapping (of physical to virtual memory addresses).

Oddball phenomenon -  On successive LL runs, I'm seeing the following
pattern: first a run with a given iteration time, then a run with an
iteration time 0.58 percent faster, then a run at the original iteration
time, then one at the 0.58% faster time - it keeps on alternating.  This
is with the Linux client, but I suspect such a pattern could appear no
matter which client.

mikus
