Re: Mersenne: A missunderstandment

1998-11-02 Thread Bojan Antonovic

 On Fri, 30 Oct 1998, Bojan Antonovic wrote:
 
 SNIP
  
  Of course 128-bit integers are better for the LL test than 64-bit
  integers, and 64-bit floating point numbers are used very frequently in
  scientific computation. But concerning 128-bit FPU registers, I don't
  know where the special use is, except for global economics.
  
  Bojan
  
  
 
   Bah!  128-bit floats are quite useful.  Think about precision.
 There are segments within the 64-bit range where you get terrible decimal
 precision (such as with any large number).  This becomes a problem when
 scaling large numbers to small ones, doing the math, and then rescaling
 them back to large numbers.  I have seen this occur with many programs.
 128-bit floats would be very nice for large numbers with decimals.

True and not true. To avoid needing very high precision, there are rules for
how to compute the results of functions. For example, if you want to add a
series of numbers, begin with the smallest and work up to the largest.

A small example, rounding to two decimal places after the point:

1.00E0 + 1.00E-2 = 1.01E0

but if you set

x := 1.00E0;

and then do 100,000 times

x := x + 1.00E-5;

you will get 1.00 as the result, although the correct one is 2.00, because
each tiny addend is rounded away before it can accumulate.
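A minimal sketch in C (my addition, not part of the original post) of the same
effect in IEEE single precision: each 1.0 added to 1.0e8 is below half a unit
in the last place and is rounded away, while summing the small terms first and
adding the large one last keeps them.

#include <stdio.h>

/* Illustration of the ordering rule above, using IEEE single precision
 * instead of 2-decimal rounding. */
int main(void)
{
    int i;

    float naive = 1.0e8f;            /* start with the large term */
    for (i = 0; i < 1000000; i++)
        naive += 1.0f;               /* rounded away on every add */

    float small = 0.0f;              /* sum the small terms first... */
    for (i = 0; i < 1000000; i++)
        small += 1.0f;               /* exact: integers below 2^24 */
    float sorted = 1.0e8f + small;   /* ...then add the large term */

    printf("naive  = %.0f\n", naive);   /* 100000000 */
    printf("sorted = %.0f\n", sorted);  /* 101000000 */
    return 0;
}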

So it's a myth that higher precision will give a better result.

Bojan



Celestial bodies (nothing to do with primes really...sorry) (was RE: Mersenne: Re: Is 128 bit instruction code needed ?)

1998-11-02 Thread Aaron Blosser

 Some cosmologists prefer the term "gnaB giB" to "Big Crunch".

Hehe...good one.  Who says they have no sense of humor? :-)

 The missing mass is not neccessarily black holes. It could be
 "dark matter" in the form of dust  gas between galaxies, or
 neutrinos (if they have a mass), instead.

Over billions of years, wouldn't most free-floating dust have been attracted
to some heavy object by now?  I know there are still a lot of large bodies
such as comets, meteorites, asteroids (though I doubt the existence of the
hypothetical "Oort Cloud").  I don't know, it just seems that after so much
time, most small bodies, especially dust and what not, would have
accumulated into larger objects, just as the stars and planets did.

Of course, colliding celestial objects would produce a lot of dust and what
not, but still...makes me wonder.

 Most cosmologists seem to think that the universe expansion
 should be critical i.e. the mass should be just sufficient to brake
 the expansion to zero after infinite time. In that case, the ratio of
 missing mass to visible mass is about 8 to 1. If the missing mass
 ratio is more than twice this amount, the gnaB giB should already
 have occurred!

Interesting.  I can see that if there were indeed much more mass, then the
universe would be likely to be compressing rather than expanding by now.  As
for the missing mass, if I recall, the approximate age of the universe and
the rate of expansion would seem to indicate that even given large amounts
of dark matter, there will not be any "gnaB giB".

 How's the Federal Bureau of Idiocy?

Still got all my stuff.  Sigh...I did get a temporary computer, and my work
loaned me a laptop, so maybe I can start working on primes again.

I mentioned to my current boss that we might as well have our backup servers
crunching prime numbers, and he actually seemed amenable to that idea.  Of
course, THIS TIME, I'll be sure to ask EVERYONE!!! :-)



Re: Mersenne: A missunderstandment

1998-11-02 Thread Jud McCranie

At 12:26 PM 11/2/98 +0100, Bojan Antonovic wrote:

True and not true. To avoid needing very high precision, there are rules for
how to compute the results of functions. For example, if you want to add a
series of numbers, begin with the smallest and work up to the largest.

A small example, rounding to two decimal places after the point:

1.00E0 + 1.00E-2 = 1.01E0

but if you set

x := 1.00E0;

and then do 100,000 times

x := x + 1.00E-5;

you will get 1.00 as the result, although the correct one is 2.00.

So it's a myth that higher precision will give a better result.

I think you just disproved yourself.  On the second example, rounding after 2
decimal digits gives the wrong answer, but round after 10 digits and you get
the correct answer, or very close to it.  So this shows how having more
precision helps.
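A small sketch in C (my addition, not from the original message) of exactly
this point: the analogous loop in binary, run with a 24-bit float mantissa,
loses every addend, while a 53-bit double keeps them all.

#include <stdio.h>

/* A 24-bit (float) and a 53-bit (double) mantissa run the same loop.
 * The addend 1.0e-8 is below float's rounding threshold near 1.0, so
 * the float sum never moves; double keeps every addend and reaches ~2.0. */
int main(void)
{
    long i;
    float  xf = 1.0f;
    double xd = 1.0;

    for (i = 0; i < 100000000L; i++) {   /* 10^8 additions of 10^-8 */
        xf += 1.0e-8f;                   /* rounded away every time */
        xd += 1.0e-8;                    /* far above double's ulp  */
    }
    printf("float : %f\n", xf);          /* prints 1.000000 */
    printf("double: %f\n", xd);          /* prints ~2.000000 */
    return 0;
}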


+--+
| Jud McCranie [EMAIL PROTECTED] |
+--+



Mersenne: sciam

1998-11-02 Thread Chuck W.


Just an FYI: Scientific American talked about us again. Look at the last
article in the magazine. I don't have it with me and I don't remember the
title or page number. I do remember that they (YET AGAIN) forgot (or maybe
it was intentional) to print the website URL. I must say, though, they did
an AWESOME job explaining some complex mathematical concepts in layman's
terms. Such a thing can really broaden our audience...

 ~~~
: WWW: http://www.silverlink.net/poke   :
: E-Mail:  [EMAIL PROTECTED]:
 ~~~



Re: Mersenne: Question about double checking

1998-11-02 Thread George Woltman

Hi,

At 03:24 PM 10/31/98 +0100, [EMAIL PROTECTED] wrote:
I reserved one machine for double checking using GIMPS'
manual reservation form. I was surprised to see that
Prime95 started factoring and was even more surprised
that it found several factors. Why the factoring if they
have been LL-tested before? 

The optimal breakeven point for factoring vs. LL testing
changes every time the factoring code or LL code changes.
In the old days, the breakeven point was 2^56; now it's 2^57.
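For readers wondering what the factoring step actually does, here is a
hypothetical sketch in C of trial factoring a Mersenne number. The exponent
(67), the search bound, and the helper name pow2_mod are my own choices for
illustration; Prime95's real code sieves candidates and factors far deeper,
up to the breakeven depth mentioned above.

#include <stdio.h>

/* Trial factoring M_p = 2^p - 1.  Any factor q must have the form
 * q = 2*k*p + 1 with q = 1 or 7 (mod 8), and q divides M_p exactly
 * when 2^p = 1 (mod q). */

/* compute 2^p mod q (q < 2^28, so all products fit in 64 bits) */
static unsigned long long pow2_mod(unsigned long long p, unsigned long long q)
{
    unsigned long long result = 1, base = 2;
    while (p) {
        if (p & 1) result = result * base % q;
        base = base * base % q;
        p >>= 1;
    }
    return result;
}

int main(void)
{
    const unsigned long long p = 67;              /* M_67 (Cole's factorization) */
    for (unsigned long long k = 1; k <= 2000000; k++) {
        unsigned long long q = 2 * k * p + 1;
        if (q % 8 != 1 && q % 8 != 7) continue;   /* impossible residue classes */
        if (pow2_mod(p, q) == 1) {
            printf("M_%llu has factor %llu (k = %llu)\n", p, q, k);
            break;                                /* finds 193707721 */
        }
    }
    return 0;
}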

Another question is how
I can be sure the program is double-checking and not
doing the original LL test over again, which I am afraid
happens when I paste the form's output into worktodo.ini.

You must be running version 17, otherwise you are just
repeating the previous test.  When you're using the automatic
method the server won't assign double-checks to a version 16
client, but when using the web pages the server just trusts you.

Regards,
George



Re: Mersenne: A missunderstandment

1998-11-02 Thread Peter-Lawrence . Montgomery

Bojan Antonovic writes:

   (Somebody else wrote)
 
  Bah!  128-bit floats are quite useful.  Think about precision.
  There are segments within the 64-bit range where you get terrible decimal
  precision (such as with any large number).  This becomes a problem when
  scaling large numbers to small ones, doing the math, and then rescaling
  them back to large numbers.  I have seen this occur with many programs.
  128-bit floats would be very nice for large numbers with decimals.

 True and not true. To avoid needing very high precision, there are rules for
 how to compute the results of functions. For example, if you want to add a
 series of numbers, begin with the smallest and work up to the largest.

 Suppose we regard the original inputs to a computation
as being exact, and a long (floating point) computation is
run with d significant bits in each mantissa, with no exponent
underflow or overflow.

As d varies, the number of significant output bits is
typically d - c, where the constant c depends upon the depth of
the computation (and how carefully the code avoids round-off errors).
For example, a computation x^16 = (((x^2)^2)^2)^2 has four squarings.
The first squaring loses about 0.5 bit of significance.
The second squaring doubles the old error (to 1.0 bit)
and adds some round-off error of its own,
for a net error between 0.5 and 1.5 bits (average 1 bit).
Two more squarings raise the average error to 4 bits.
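A rough sketch in C (an illustration I have added, not Montgomery's) that
measures this error growth by running the four squarings in single precision
against a double precision reference; the starting value is arbitrary, and the
printed error will vary with it, but it roughly doubles per squaring.

#include <stdio.h>
#include <math.h>

/* Carry out x^16 = (((x^2)^2)^2)^2 with a 24-bit float mantissa and
 * compare each intermediate result against a double precision reference.
 * The error is printed in multiples of 2^-24, i.e. roughly one
 * single-precision ulp. */
int main(void)
{
    float  xf = 1.2345678f;      /* arbitrary starting value */
    double xd = (double)xf;      /* identical start, higher precision */

    for (int k = 1; k <= 4; k++) {
        xf *= xf;                /* squaring, rounded to 24 bits */
        xd *= xd;                /* reference, effectively exact here */
        double rel = fabs((double)xf - xd) / xd;
        printf("after %d squaring(s): relative error ~ %.2f ulps\n",
               k, rel * 16777216.0);          /* 2^24 ulps per unit */
    }
    return 0;
}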

  How does this relate to Mersenne?
Let's assume that the FFT computation loses 12 bits
of significance for FFT lengths in the range of interest.
Now consider an LL test on Mp where p ~= 5 million:

    Mantissa   Output    Radix    FFT
      bits      bits              length

        53        41     2^11     524288  (Alpha)
        64        52     2^16     327680  (Pentium)
       110        98     2^40     131072  (Proposed)

The last two columns give a possible FFT radix and length.
The radix is 2^r where

    2r + log_2(FFT length) = (output bits)
    r * (FFT length) = p ~= 5 million
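As a sanity check on these numbers, here is a small C sketch (my own, under
the stated assumptions of 12 lost bits and p ~= 5 million) that searches for
the largest radix satisfying the two relations above; the raw lengths it
prints match the table entries once rounded up to convenient FFT sizes.

#include <stdio.h>
#include <math.h>

/* For each mantissa width d, assume 12 bits are lost in the FFT and find
 * the largest radix r (bits per FFT word) with
 *     2*r + log2(N) <= d - 12,   where N = p / r words. */
int main(void)
{
    const double p = 5.0e6;                 /* exponent ~ 5 million */
    const int mantissa[] = { 53, 64, 110 };

    for (int i = 0; i < 3; i++) {
        int out = mantissa[i] - 12;         /* usable output bits */
        int r;
        for (r = 1; 2 * (r + 1) + log2(p / (r + 1)) <= out; r++)
            ;                               /* grow r while it still fits */
        printf("%3d mantissa bits: radix 2^%d, FFT length ~ %.0f\n",
               mantissa[i], r, ceil(p / r));
    }
    return 0;
}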

Going from 53 mantissa bits (in a 64-bit word)
to 110 mantissa bits (in a 128-bit word), we have reduced
the required FFT length by a factor of 4.
That means approximately 25% as many floating point operations
are needed for the FFT.  If our hardware can do 128-bit
floating point operations in twice the time needed for similar 64-bit
operations, then our overall time improves by a factor of 2.




RE: Mersenne: Questions

1998-11-02 Thread William Stuart

GIMPS is a processor intensive operation.  Unless you're critically low on
memory and constantly swapping to disk, it probably won't help.  Adding cache
memory does help, though I will defer to my colleagues to say how much is
optimal.

Unless you are running Linux or NT, dual processors don't matter (this is
assuming you are on a PC).  If you are on an NT machine, it will tell you
the number of processors while booting, on the blue screen.

Under Linux... uhh... I'm not sure, but if you open the cover and see two
processors... :-)
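For the Linux case, a hypothetical sketch in C (not something from the
original message): count the "processor" entries in /proc/cpuinfo. The path
and format are Linux-specific; other Unixes have their own facilities.

#include <stdio.h>
#include <string.h>

/* Count the "processor" lines in /proc/cpuinfo to see how many CPUs
 * the Linux kernel knows about. */
int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];
    int cpus = 0;

    if (f == NULL) {
        perror("/proc/cpuinfo");
        return 1;
    }
    while (fgets(line, sizeof line, f) != NULL)
        if (strncmp(line, "processor", 9) == 0)
            cpus++;
    fclose(f);

    printf("%d processor(s) found\n", cpus);
    return 0;
}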

William

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Lorenzo Fortunato
 Sent: Monday, November 02, 1998 11:38 AM
 To: [EMAIL PROTECTED]
 Subject: Mersenne: Questions


 Hi all,
 I have two questions, please reply!!

 1) Will increasing my RAM from 32 to 64 MB help GIMPS??

 2) What do I have to do to discover if my computer is dual or not??

 These could seem very stupid, but I don't know the answer!

 Thank you in advance

 Lorenzo


 __
 Get Your Private, Free Email at http://www.hotmail.com




Re: Mersenne: Questions

1998-11-02 Thread John R Pierce

Lorenzo Fortunato [EMAIL PROTECTED] asks
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: Monday, November 02, 1998 2:04 PM
Subject: Mersenne: Questions


1) Will increasing my RAM from 32 to 64 MB help GIMPS??


It will probably help everything else, and prevent GIMPS from triggering
excess swapping.  GIMPS only uses about a 4MB 'working set', but if your machine
doesn't have enough memory for whatever ELSE you are using the computer for,
GIMPS will cause a lot of swapping.

2) What do I have to do to discover if my computer is dual or not??


If you had to ask this, it's probably 'not'.  Dual processors are almost
exclusively used for servers.  Windows 95/98 won't even USE a 2nd processor;
you would have to be running Windows NT or a multiprocessor version of Unix.

-jrp