My first reaction, on reading this, was that resource share should be based 
on work done, measured directly, rather than by using credit as a surrogate. 
But I've read the design document, and you've covered that point - and also 
taken into account the immediate consequence: projects that over-inflate 
their credits are automatically penalised with a reduced effective resource 
share, and hence a lower work rate. I'm happy with that.
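
For what it's worth, the feedback loop I understand the design document to 
describe can be sketched in a few lines of Python (my own illustration, not 
BOINC code; 'rec' stands for some recency-weighted average of granted 
credit, and all the names are mine):

def sched_priority(projects):
    # Rank projects by how far behind their resource share they are.
    # Illustration only; field names are my own.
    total_share = sum(p["share"] for p in projects)
    total_rec = sum(p["rec"] for p in projects) or 1.0
    for p in projects:
        share_frac = p["share"] / total_share
        rec_frac = p["rec"] / total_rec
        # A project that grants inflated credit accumulates rec faster,
        # so its priority falls and it is asked for less work.
        p["priority"] = share_frac - rec_frac
    return sorted(projects, key=lambda p: p["priority"], reverse=True)

projects = [
    {"name": "honest",   "share": 50, "rec": 1000},
    {"name": "inflated", "share": 50, "rec": 5000},  # over-grants credit
]
for p in sched_priority(projects):
    print(p["name"], round(p["priority"], 3))
# honest 0.333, inflated -0.333: the inflating project has already
# banked more than its fair share of nominal credit, so it drops
# down the queue.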

But I'm worried by any proposal which uses credit as an input to drive any 
functional aspect of BOINC. As a trivial example, just an hour ago someone 
on a message board celebrated that, according to 
http://aqua.dwavesys.com/server_status.php, AQUA has now "surpassed 200,000 
GFlops of compute power". BOINCstats is currently displaying 169,077.8 
GigaFLOPS. I suspect the difference may be due to AQUA's current 
transitioner troubles, which result in a 'lumpy' allocation of credit, and 
hence a lumpy RAC. I suspect that the poster's conclusion - that "This would 
place a...@h at Rank #22 in the Top500 list" 
(http://www.top500.org/list/2010/06/100) - would be treated with a certain 
amount of scepticism by the developers of LINPACK.
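
The 'lumpiness' is easy to reproduce. RAC is, as far as I know, a 
recency-weighted average of granted credit with a one-week half-life, so a 
stalled transitioner followed by a burst of back-dated grants produces 
exactly this sort of spike. A rough simulation (my own sketch, shaped like 
BOINC's update_average() but not copied from it):

import math

HALF_LIFE = 7 * 86400.0  # one week, in seconds

def update_rac(rac, new_credit, dt_seconds):
    # Decay the credit-per-day average over dt_seconds, then fold in
    # the newly granted credit as a rate. Illustration only.
    weight = math.exp(-dt_seconds * math.log(2) / HALF_LIFE)
    return rac * weight + (1 - weight) * (new_credit / (dt_seconds / 86400.0))

rac = 0.0
for day in range(28):                       # steady 1000 credits/day
    rac = update_rac(rac, 1000.0, 86400.0)
print("steady:", round(rac))                # ~938, approaching 1000

rac = update_rac(rac, 0.0, 14 * 86400.0)    # two-week outage: decay only
rac = update_rac(rac, 14000.0, 86400.0)     # a fortnight's credit in one lump
print("after lump:", round(rac))            # ~1532 - well above the true rate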

If credit is to be used as an input, it will, and should, be subject to 
greater scrutiny with regard to derivation, methodology, and, dare I say, 
accuracy.

There has only been one large-scale trial (so far as I know) of the 
third-generation "Credit New" scoring algorithm, and that is the s...@home 
project - where it has been running, with some interim fine-tuning, since 
early June 2010. We are fortunate that SETI, prior to this roll-out, had one 
of the more mature flops-based credit algorithms - though one fatally flawed 
by the flop-count discrepancy between CPU and CUDA applications processing 
identical work for the project.
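
For reference, a flops-based claim is just a linear scaling of the task's 
counted flops - 200 credits per GFLOPS-day, if I have the cobblestone scale 
right - so any discrepancy between the CPU and CUDA flop counters feeds 
straight through to the claim:

COBBLESTONE_SCALE = 200.0 / (86400 * 1e9)   # credits per flop (assumed)

def claimed_credit(counted_flops):
    # Flops-based claim: linear in the application's own flop count.
    return counted_flops * COBBLESTONE_SCALE

# If the CUDA app counted, say, 30% fewer flops than the CPU app for an
# identical task (figure purely illustrative), its claim is 30% lower:
print(round(claimed_credit(3.0e13), 1))        # 69.4
print(round(claimed_credit(3.0e13 * 0.7), 1))  # 48.6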

But, with care, one can still obtain the 'claimed' credit (flop-count based) 
for pre-validation SETI tasks, and compare them with the granted credit for 
the same tasks post-validation.
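
The core of the comparison is nothing more than this (the CSV layout here 
is hypothetical, for illustration, but it is the shape of the thing):

import csv
import statistics

# Hypothetical input: one row per validated task, with the flop-count
# based claim captured pre-validation and the grant recorded afterwards:
# taskid,hostid,claimed,granted
with open("seti_tasks.csv") as f:
    rows = list(csv.DictReader(f))

ratios = [float(r["granted"]) / float(r["claimed"]) for r in rows]
print("tasks:", len(ratios))
print("mean granted/claimed:", round(statistics.mean(ratios), 3))
print("stdev:", round(statistics.stdev(ratios), 3))
# A mean near 1.0 with a large spread is the pattern in the plot below:
# reasonable on average, non-deterministic task by task.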

I performed this analysis over the weekend of 27/28 August 2010, and plotted 
the results for 980 tasks processed - exclusively by CUDA applications - on 
four of my hosts (I can supply raw data, and details of the methodology, for 
anyone interested). I had planned to repeat the analysis before publishing 
the results, but SETI's server problems have scuppered that plan. Here is 
the single-weekend outcome:

http://img231.imageshack.us/img231/1146/seticlaimedvsgrantedcre.png

The legend shows SETI hostIDs:

4292666 - GTX 470 (Fermi, Windows XP)
2901600 - 9800GTX+ (Windows 7)
3751792 - 9800GT (Windows XP)
3755243 - 9800GT (Windows XP)

I am worried by the non-deterministic value of the individual credit awards, 
even though the average figures seem reasonable. I think we should 
double-check that all is going to plan with Credit New before we start 
building greater edifices on what may prove to be foundations of sand.

----- Original Message ----- 
From: "David Anderson" <[email protected]>
To: "BOINC Developers Mailing List" <[email protected]>
Sent: Tuesday, October 26, 2010 10:13 PM
Subject: [boinc_dev] proposed scheduling policy changes


> Experiments with the client simulator using Richard's scenario
> made it clear that the current scheduling framework
> (based on STD and LTD for separate processor types) is fatally flawed:
> it may divide resources among projects in a way that makes no sense
> and doesn't respect resource shares.
>
> In particular, resource shares, as some have already pointed out,
> should apply to total work (as measured by credit)
> rather than to individual processor types.
> If two projects have equal resource shares,
> they should ideally have equal RAC,
> even if that means that one of them gets 100% of a particular processor 
> type.
>
> I think it's possible to do this,
> although there are difficulties due to delayed credit granting.
> I wrote up a design for this:
> http://boinc.berkeley.edu/trac/wiki/ClientSchedOctTen
> Comments are welcome.
>
> BTW, the new mechanisms would be significantly simpler than the old ones.
> This is always a good sign.
>
> -- David

