My educated guess is that once a system is in place, the current
benchmark could be scrapped as a good try that never made it.

Most of the purposes of the benchmark can be served either by changing
the way we do business (policy changes, as we discussed elsewhere) or
by the, ahem, "Gold Plated" one I have proposed.

Not only do we get an accurate characterization of the system, but we
also know from task one whether the system is capable of doing the
work ... or not.


On Oct 1, 2009, at 2:55 AM, Raistmer wrote:

> Agreed.
> It means the current benchmark can be simplified and carried out not on
> a schedule basis (once per week or so) but on an event basis (attach to
> a new project, BOINC notices changes in hardware).
> Correct? Maybe it's worth implementing this then?
>
> ----- Original Message -----
> From: "Lynn W. Taylor" <l...@buscom.net>
> To: "BOINC dev" <boinc_dev@ssl.berkeley.edu>
> Sent: Thursday, October 01, 2009 8:17 AM
> Subject: Re: [boinc_dev] [boinc_alpha] Card Gflops in BOINC 6.10
>
>
>> I would be perfectly happy with:
>>
>>  time_t t;
>>  time(&t);
>>  unsigned long i;
>>  for (i = 0; i < 4294967295UL; i++);
>>  speed = 1.0 / (time(NULL) - t);
>>
>> ... or a trivially refined version of that to prevent excessive time
>> on slow versions, and prevent very fast machines from completing the
>> loop too quickly to measure.
>>
>> Sometimes "good enough" really is good enough.  For the scheduler,
>> all we need is a rough guestimate of how much work the machine can do
>> in a week.
>>
>> It can be off either way by a factor of at least two and still be
>> good enough for that first work request.
>>
>> We don't need to spend hours characterizing the machine.  Get a quick
>> guestimate and get to work!
>>
>> Credit is a different discussion: our unit of credit calls for
>> Whetstones and Dhrystones.
>>
>> Paul D. Buck wrote:
>>>
>>> On Sep 30, 2009, at 2:30 AM, Lynn W. Taylor wrote:
>>>
>>>> Two ways to do this:
>>>>
>>>> 1) Implement some unique, non-portable method for each architecture.
>>>>
>>>> 2) Use an architecture-independent, OS-independent benchmark.
>>>>
>>>> The first requires some unique code to recognize each CPU, and may
>>>> need to be customized.  The SPARC code must not crash if run on a
>>>> 68000-family machine.
>>>>
>>>> The second is written once.
>>>
>>> Which is why using the actual project applications and known tasks
>>> is so perfect as a benchmark ... it IS only written once ...
>>>
>> _______________________________________________
>> boinc_dev mailing list
>> boinc_dev@ssl.berkeley.edu
>> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>> To unsubscribe, visit the above URL and
>> (near bottom of page) enter your email address.
>>
>
