zombie67 wrote:
> On Jul 21, 2009, at 1:19 PM, Lynn W. Taylor wrote:
>> I don't see a way out, short of doing exactly what Eric Korpela's  
>> script
>> tries to do -- normalize FLOPS credit to that predicted by the
>> (imperfect) benchmarks.
> 
> Solution:  Give up on cross-project credit parity.  It's an impossible  
> goal.  QCN anyone?  If cross-project comparison is needed, do it via  
> rank.  As long as the credits within each project are stable, that  
> works fine.

The present BOINC benchmarks are too unrepresentative across 
different hardware to work well even within a single project. Hence the 
dire inaccuracies when credit is awarded on the basis of run time.
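To make the run-time problem concrete, here is a minimal sketch of the benchmark-normalization idea mentioned above (run time scaled by the host's benchmark scores). The function name and the reference rate of 100 credits per GFLOPS-day are illustrative assumptions for this sketch, not BOINC's actual constants or code:

```python
# Sketch of benchmark-normalized credit: scale run time by the host's
# benchmark scores so a fast and a slow machine claim similar credit
# for the same work. Constants and names are illustrative assumptions.

SECONDS_PER_DAY = 86400.0
CREDIT_PER_GOPS_DAY = 100.0  # assumed reference rate, for illustration

def claimed_credit(cpu_seconds: float, whetstone_mflops: float,
                   dhrystone_mips: float) -> float:
    """Credit claim proportional to run time times averaged benchmark speed."""
    avg_gops = (whetstone_mflops + dhrystone_mips) / 2.0 / 1000.0
    return CREDIT_PER_GOPS_DAY * (cpu_seconds / SECONDS_PER_DAY) * avg_gops

# A host benchmarking twice as fast and finishing in half the time
# claims the same credit as the slower host:
slow = claimed_credit(7200.0, 1000.0, 1000.0)
fast = claimed_credit(3600.0, 2000.0, 2000.0)
```

The weakness Martin points at is precisely that the benchmark scores in this formula do not track real application throughput across different hardware, so the normalization is only as good as the benchmarks.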

Maintaining credit parity across projects is, in my opinion, a very good 
idea: it encourages the free mobility of user resources between projects, 
and if nothing else it avoids a lot of confusion. It also acts as a 
very good test of whether the credit measurements are fundamentally 'right'.

Regards,
Martin

-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
