It seems to me that projects which award credit based on FLOPs are, to
some extent, already there.

... and at least for SETI, the "reference machine" is a moving average 
of 100 median machines.

Credit is calculated from benchmark * time and from FLOPs, and the
multiplier is adjusted.
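For illustration, that scheme might look something like the sketch below. All the names and the damping constant are made up for this email; the only details taken from elsewhere are the 100-host window mentioned above and BOINC's nominal Cobblestone scale (100 credits per day of work on a 1 GFLOPS machine).

```python
from collections import deque

class ReferenceMachine:
    """Moving average of throughput over the last N 'median' hosts
    (N = 100, mirroring the figure above)."""
    def __init__(self, window=100):
        self._flops = deque(maxlen=window)   # oldest samples fall out

    def record(self, flops):
        self._flops.append(flops)

    def average_flops(self):
        return sum(self._flops) / len(self._flops)

def claimed_credit(benchmark_flops, cpu_seconds,
                   cobblestones_per_flop=100 / 8.64e13):
    """Classic benchmark * time claim on the Cobblestone scale:
    100 credits per 86400 s on a 1e9 FLOPS machine."""
    return benchmark_flops * cpu_seconds * cobblestones_per_flop

def update_multiplier(multiplier, avg_claimed, avg_flop_credit, damping=0.1):
    """Nudge the credit multiplier so that benchmark * time claims
    converge toward the FLOP-counted credit on the reference machines."""
    return multiplier * (1 - damping) + damping * (avg_flop_credit / avg_claimed)
```

Under that scale, a 1 GFLOPS host running for a day claims `claimed_credit(1e9, 86400) == 100`, and the multiplier drifts toward whatever ratio the reference-machine FLOP counts imply.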

john.mcl...@sybase.com wrote:
> I would still keep the current benchmarks though.  However, if they are not
> tied to Credit, the frequency for running the benchmarks could be reduced
> to those times where there is some change to the hardware or OS that might
> affect the benchmarks.  In this case, the benchmarks would be purely for
> the use of the CPU scheduler and work fetch algorithms.  The credit
> modifier would come from a comparison with the reference machine somehow.
> 
> jm7
> 
> <boinc_dev-boun...@ssl.berkeley.edu> wrote on 09/29/2009 09:24:49 AM:
> 
>> On 09/29/2009 09:24 AM, Martin <m_boinc...@ml1.co.uk> wrote to the
>> BOINC Developers Mailing List <boinc_dev@ssl.berkeley.edu>:
>> Subject: Re: [boinc_dev] [boinc_alpha] Card Gflops in BOINC 6.10
>>
>> john.mcl...@sybase.com wrote:
>>> I have mostly not been hearing that live work would be the reference.
>>> What I have mostly been hearing is that we should do reference tasks
>>> where the result is known, and the FLOP count can be known as well.
>>> Running these frequently is what I was objecting to, as it wastes
>>> large amounts of otherwise useful processor time.
>> The main aim is to eliminate the need for adding extra code into the
>> clients just for the sake of adding up FLOPS or IOPS or whatever.
>> Further, the aim is also to eliminate the need for adding /any/
>> 'performance instrumentation' into the clients. (Optimised
>> applications are inherently allowed for, too.)
>>
>> Referencing back to known hardware looks to be a lot easier, and also
>> offers the chance to be "scientific" about the credits. We can then do
>> meaningful performance comparisons for hardware, algorithms, or
>> whatever... Who knows what new Computer Science can then be uncovered.
>>
>> The credit freaks then also get a 'currency' that is stable and
>> referenced back to known hardware/performance.
>>
>> Better still, all the calibration can be coordinated purely server-side.
>>
>> Just eliminating all the arguments and extended discussions has got to
>> be a good plus point! :-)
>>
>>
>>> I agree that having a reference machine doing work would eliminate
>>> the need for gold-plated benchmarks.  However, it does not entirely
>>> eliminate the need for basic benchmarks, as there is a very wide
>>> range in computation speeds for the computers that are in use on
>>> BOINC projects.  We still need the basic benchmarks (the 5-minute
>>> variety that we have now) to give us a starting point for the CPU
>>> scheduler and work fetch algorithms.
>> For unknown 'new' hosts, indeed so.
>>
>> Alternatively, new hosts, on their first attach to a project, could
>> deliberately download a (live, but redundant) WU to characterise their
>> performance. The scheduler would just need to permit only one WU to be
>> downloaded initially, so as to see how long it takes. A sort of NNT
>> ('no new tasks') state for a new project until the first task is
>> returned...
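The "one calibration WU first" policy quoted above could be sketched server-side roughly as follows. Everything here is hypothetical illustration of the policy, not BOINC's actual scheduler code.

```python
class Host:
    """Server-side view of a newly attached host."""
    def __init__(self):
        self.tasks_sent = 0
        self.measured_flops = None   # unknown until the first result returns

def allow_work_fetch(host):
    """Normal work fetch only once the host is characterised; an unknown
    host may hold at most one (calibration) workunit."""
    if host.measured_flops is not None:
        return True                  # performance known: normal scheduling
    return host.tasks_sent == 0      # unknown host: one WU, then NNT

def send_task(host):
    host.tasks_sent += 1

def record_result(host, task_flops, elapsed_seconds):
    """The first returned result characterises the host's real throughput."""
    host.measured_flops = task_flops / elapsed_seconds
```

Once `record_result` has run for the first task, `allow_work_fetch` opens up and the host behaves like any other.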
>>
>>
>> I guess running the whetstone/dhrystone benchmark once per new host is
>> still good for conjuring up the "supercomputer" benchmark for gaining
>> further grants!
>>
>> Regards,
>> Martin
>>
>>
>>> <boinc_dev-boun...@ssl.berkeley.edu> wrote on 09/28/2009 04:58:05 PM:
>>>
>>>> On 09/28/2009 04:58 PM, Martin <m_boinc...@ml1.co.uk> wrote to the
>>>> BOINC Developers Mailing List <boinc_dev@ssl.berkeley.edu>:
>>>> Subject: Re: [boinc_dev] [boinc_alpha] Card Gflops in BOINC 6.10
>>>>
>>>> john.mcl...@sybase.com wrote:
>>>>> I was trying to state something similar.  There are computers doing
>>>>> useful work for projects, and increasing the burden of time spent on
>>>>> benchmarks will reduce the availability of those resources to the
>>>>> project.
>>>> There's no burden of benchmarks when the live work itself is in
>>>> effect its own benchmark, referenced back to the performance of a
>>>> known piece of hardware.
>>>>
>>>> You can then waste as much benchmarking time as you like to
>>>> characterise your reference machine. Meanwhile, the rest of the
>>>> BOINC world continues with useful work undisturbed.
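One way to read "live work as its own benchmark": fix each workunit class's credit from the reference machine's measured run, so every valid result earns the same amount and a host's speed falls out of its own runtimes, with no client-side instrumentation. A hedged sketch (the Cobblestone rate is the nominal BOINC scale; the function names are invented here):

```python
def per_task_credit(reference_runtime_s, reference_flops,
                    cobblestones_per_flop=100 / 8.64e13):
    """Credit for a workunit class, fixed by the reference machine's run:
    runtime on the reference * reference throughput = FLOPs for the task,
    converted at 100 credits per 1-GFLOPS-day."""
    return reference_runtime_s * reference_flops * cobblestones_per_flop

def relative_speed(host_runtime_s, reference_runtime_s):
    """A host's speed from its live-work runtimes: > 1.0 means faster
    than the reference machine."""
    return reference_runtime_s / host_runtime_s
```

All calibration then lives server-side, exactly as argued above: the clients just run the work.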
>> --
>> --------------------
>> Martin Lomas
>> m_boincdev ml1 co uk.ddSPAM.dd
>> --------------------
>> _______________________________________________
>> boinc_dev mailing list
>> boinc_dev@ssl.berkeley.edu
>> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>> To unsubscribe, visit the above URL and
>> (near bottom of page) enter your email address.
>>
> 