On 1 Oct 2009 at 11:54, Richard wrote:
> Paul D. Buck wrote
>
> > The problem is that Eric's script will mean that over a 4 year life of
> > a computer the CS it earns in year four will take more processing time
> > to obtain. That is the flaw ... as the performance average increases
> > the award goes down. So the CS I earned when I started BOINC 5-6
> > years ago, whatever it was, took less processing power over time than
> > the same CS today (all other things being equal).
>
> I'm not 100% sure, but I *think* that is a mis-reading of the purpose and
> behaviour of Eric's script.
>
> The script compares the credit awarded to the current cohort of machines
> under that project's active credit-scoring scheme (flop counting, in the
> case of SETI), with the credit that would be awarded to the *same* cohort of
> machines under the benchmark*time 'cobblestone' scheme. Then adjusts
> accordingly, with smoothing and median-taking to reduce the effect of
> outliers.
>
> So, Eric's script will *not* alter the value of the cobblestone over time
> merely because machines get, on average, faster: in principle, both
> flopcounts and benchmarks should increase in proportion.
>
> In practice, architectural changes and more efficient processor designs will
> mean that flopcounts increase more rapidly than benchmarks with
> technological advances (already evident in the different flop / benchmark
> ratios of Intel and AMD processors). So Paul's predicted behaviour is real,
> but it's a second-order effect and *not* an automatic, deliberate, design
> intention of the script.
>
> The other problem with the script is that it will be thrown into total
> confusion if GPU processing ever approaches the median. At the time Eric
> wrote it, neither elapsed time nor GPU speed was available in the result
> table, so a _G_PU result would be compared against _C_PU time and _C_PU
> benchmark! Not pretty. I don't know if Eric has had time to consider how to
> include the (very new) recorded GPU metrics in the script, but it needs to
> be on the to-do list if the script is going to be considered for wider
> long-term use.
As usual, Richard's observations are on target. I'll add a few of
my own hoping to do as well.
The script method is clearly intended to keep the overall project
credit in line with what the same set of hosts would have produced
under the original method using benchmarks and time. Before
introduction of the script, Eric was making similar adjustments
manually from time to time, both to allow for better hardware and to
allow for software updates which increased efficiency by using the
hardware more effectively.
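One way to picture that adjustment (a minimal sketch, not Eric's actual
script; the field names, smoothing constant, and structure here are my
own assumptions about the general approach described above):

```python
# Hypothetical sketch of a credit-normalization pass: compare the credit
# each host was granted under flop counting with what the same host would
# have claimed under the old benchmark*time ("cobblestone") formula, take
# the median of the per-host ratios to suppress outliers, and smooth the
# resulting scale factor across successive runs.
from statistics import median

SECONDS_PER_DAY = 86400.0
COBBLESTONES_PER_DAY = 100.0   # daily credit of the hypothetical reference host

def benchmark_claim(whetstone_mips, dhrystone_mips, cpu_seconds):
    """Classic benchmark*time claim: average of the two benchmarks,
    scaled so a 1000 + 1000 MIPS reference host earns 100 per day."""
    avg_mips = (whetstone_mips + dhrystone_mips) / 2.0
    return avg_mips / 1000.0 * COBBLESTONES_PER_DAY * cpu_seconds / SECONDS_PER_DAY

def new_scale(results, old_scale, smoothing=0.9):
    """results: list of (flops_credit, whetstone, dhrystone, cpu_seconds)
    for the current cohort.  Returns a smoothed multiplier to apply to
    flop-counted credit so project totals track the benchmark scheme."""
    ratios = [
        benchmark_claim(w, d, t) / flops_credit
        for flops_credit, w, d, t in results
        if flops_credit > 0
    ]
    target = median(ratios)     # median damps the effect of outlier hosts
    return smoothing * old_scale + (1 - smoothing) * target
```

Note that nothing in such a scheme reacts to hosts getting faster per
se; it only reacts when the flop/benchmark ratio of the cohort drifts,
which is exactly the second-order effect Richard describes.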
The original method was never Cobblestones by strict definition. For
instance, my 200 MHz Pentium MMX overdrive host[1] would be expected
to get more than 20 Cobblestones per day (total benchmarks 417 MIPS
vs. 2000 for the hypothetical reference), but even in 2005 grants
were about 17 per day and gradually declining. That was because the
spot average embodied in granting the middle claim out of three had
the same deflationary aspect as Eric's script. The host with the
least additional capability over what the benchmarks measure made
the highest claim, the host with the most capability finished much
earlier than the benchmarks alone could predict and made the lowest
claim. The median host of the three established what was granted.
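The arithmetic behind that expectation, and the median-of-three
granting, can be worked through as follows (the per-host claim figures
in the second part are illustrative inventions, not from my records):

```python
# Expected daily claim for the 417-MIPS host under the strict
# cobblestone definition: the reference host totals 2000 MIPS across
# the two benchmarks and earns 100 Cobblestones per day, so credit
# scales with the benchmark ratio.
host_total_mips = 417.0
reference_total_mips = 2000.0
expected_per_day = host_total_mips / reference_total_mips * 100.0
print(round(expected_per_day, 2))   # 20.85 -> "more than 20 per day"

# Granting took the middle of the three hosts' claims on each result,
# which is why actual grants ran below the slowest host's
# benchmark-based claim.
claims = [20.85, 17.2, 15.6]        # slow, middling, fast host (illustrative)
granted = sorted(claims)[1]
print(granted)                      # 17.2 -> near the ~17/day observed
```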
The benchmarks are effectively locked into the technology of the 1970s,
when they were originally designed. As such they provide a basic
measurement which indicates the minimum capability of the host. So long
as they're used with that in mind, such as for an initial estimate of
how much work to fetch, they serve admirably.
--
Joe
[1]http://setiathome.berkeley.edu/show_host_detail.php?hostid=1033899
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.