<<This brings up a very interesting question: what is the average speed of a
machine participating in GIMPS? Does PrimeNet have more data than just the
CPU type? I would assume so. Can we get a breakdown of the average CPU speed
of a GIMPS producer and track it over time to see if we are keeping up with
Moore's Law?>>

Well, not only are our machines getting faster as people see the light and
upgrade for the good of the project (more likely because their copy of Duke
Quakem 64 won't run on old computers, hee hee), but people may also be
leaving them on for longer or shorter stretches, and we always have new
people joining. So GIMPS has the potential to outpace Moore's Law, until a
copy of GIMPS is on every computer in the world! *cackle*

<<Can the average machine speed be tracked from previous data? Account for the
growth in the number of participating machines and come up with an average
machine speed from year to year?>>

One would need access to some sort of logs to do that calculation, but I'll 
crunch some numbers I see on entropia.com right now.

<<Mersenne PrimeNet Server 4.0 (Build 4.0.017)
Status Summary Report 29 Jul 1999 18:00 (29 Jul 1999 11:00 Pacific)

              ------- Aggregate CPU Statistics, P90 Units* -------
 
                  Last 7 Days Average           Cumulative Today
                 from 23 Jul 1999 06h         from 29 Jul 1999 06h
 
  Test Type     CPU yr/day    GFLOP/s        CPU years    CPU yr/day
  ------------  ----------  ----------      ----------    ----------
  Lucas-Lehmer     59.773     719.524          27.914        55.912
  Factoring         2.286      27.515           0.858         1.718
                ----------  ----------      ----------    ----------
  TOTALS           62.058     747.039          28.772        57.630
 
                ------- Internet CPU and Server Resources -------
 
Machines Applied on  14033 Accounts    Server Synchronization 07 May 1999 08:42
 
  Intel Pentium PII/Pro :  10727       Total exponents merged       :   211640
  Intel Pentium         :   6029       Updated only                 :    98047
  AMD K6                :   1701       Added for testing            :   109629
  Intel 486             :    345       Retained in IPS cleared list :     4112
  Cyrix                 :    495       Cleared IPS tests removed    :    20384
  Unspecified type      :   3955       GIMPS tests removed / purged :      427
  ---------------------- -------
                  TOTAL :  23252       Total Cleared by IPS to date :   104385>>

So, the important figures I'll be working with are:
Total machines: 23252. P90 CPU years/day (7-day average): 62.058.

However, years/day is an awkward unit to work with, so I'll convert it to P90
days/day (i.e. how many P90s we'd need running at full tilt [constantly on]
to produce the same work). The conversion is just multiplying by the number
of days in a year (roughly 365.25), which gives 22,666.2 P90s. Let's be
stupid and multiply that by 90: 2,039,958 P1s (what has Intel been smoking?).
And now let's bring in the actual number of machines we're running:
23252 * P_MegaHertz = 2,039,958 P1.
Solving, we see that the average GIMPSter contributes the equivalent of a
machine that is never used for anything else and is left on 24 hours a day
with a P87.73 processor. (Obviously, by the machine tabulation above, the
average GIMPSter runs a significantly faster computer, but doesn't leave it
on 24 hours a day.) Eh? This is different from the old figure I remember.
However, I calculated that BEFORE I knew how many computers GIMPS had. So,
I'll redo the calculation based on accounts.
14033 (accounts) * P_MegaHertz = 2,039,958 P1.
Solving here, we see that the average account is equivalent to a single 
machine running at full tilt with a P145.37 processor. That's more like it.
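
For anyone who wants to redo the arithmetic themselves, here's a quick
back-of-the-envelope sketch in Python (not any official GIMPS code; the
figures come from the status report above, and the 365.25 days/year and the
"1 P90 = 90 P1" conversion are my own assumptions):

  # Rough check of the averages above, using figures from the
  # 29 Jul 1999 PrimeNet status report.
  P90_CPU_YEARS_PER_DAY = 62.058   # 7-day average throughput, all test types
  MACHINES = 23252                 # machines applied on PrimeNet
  ACCOUNTS = 14033                 # accounts

  DAYS_PER_YEAR = 365.25           # assumed; close enough for this estimate

  # How many P90s running flat out, 24 hours a day, would match that output?
  p90_equivalents = P90_CPU_YEARS_PER_DAY * DAYS_PER_YEAR

  # The same throughput in "P1" units (1 MHz Pentium-equivalents).
  p1_equivalents = p90_equivalents * 90

  print("P90 equivalents:     %10.1f" % p90_equivalents)
  print("P1 equivalents:      %10.0f" % p1_equivalents)
  print("Average per machine: P%.2f" % (p1_equivalents / MACHINES))
  print("Average per account: P%.2f" % (p1_equivalents / ACCOUNTS))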

If I've had a major brain drain in my calculations, feel free to correct my 
error on the list.

S. "I want 2 million P1 processors!" L.
