[email protected] wrote:
> From your list, I would remove a few items.
>
> Turn Around Time would penalize dialup users and multi project users, and
> does not affect the final outcome at all. Of course, if work is returned
> after the deadline, it counts for 0, so there is a penalty already there.
> A low Success Rate has its own penalty - no credit for the time spent on
> the failures.
Indeed so, and rightly so. Directly rewarding turn-around time is a much
finer control than a rather harsh, non-linear deadline failure. Projects
for which turn-around time is of no consequence would simply offer no
reward for a faster turn-around of WUs. For projects that depend on a
prompt turn-around, directly rewarding the resource they actually require
- that is, turn-around time - allows optimisations to be made directly on
that measure.

> Please define Availability, but this should probably be struck as well.

The proportion of time a host is available to a particular project. So:
if you set a project as a "backup project", you get no additional reward.
Whereas if you dedicate your system to be 100% available to just one
favoured project, you gain additional reward for your system always being
available for that project (and/or for showing such favouritism).

> Dialup users and multi project users would be penalized with no affect on
> the final outcome.

Only if they connect so infrequently that they delay the turn-around
time. Such users are already penalised in that other users with higher
bandwidth have a wider choice of projects and credits available to them,
as is the case now.

> Please define Data Quality. Is this the same as Success Rate?

You can describe it as that. It should also include whether a user's
system can return more useful results in some way because its hardware
supports a deeper analysis or more thorough simulations for its WUs than
other hardware can. This may well become important as the range of
hardware capability becomes ever wider. We already have that with the
GPUs that can only support single precision floats vs those supporting
double precision.

Anyhow, this should now go into another thread. Can the trac pages be
updated to note the distinction between the 'quick fix' now:

http://boinc.berkeley.edu/trac/wiki/AutoFlops

and a more thorough fix:

http://boinc.berkeley.edu/trac/wiki/CreditProposal

soon?
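To make the idea above concrete, here is a minimal sketch of a per-result
credit multiplier that rewards turn-around time and availability directly,
while keeping the existing hard deadline cut-off. This is purely
illustrative Python, not BOINC server code: the function name, the weight
parameters, and their default values are all hypothetical policy knobs
that each project would choose for itself (a project that does not care
about promptness simply sets the weights to zero).

```python
# Hypothetical sketch - not part of BOINC. A project scales base credit
# by how promptly a result came back and by what fraction of the time
# the host was available to that project.

def credit_multiplier(turnaround_hours, deadline_hours,
                      availability_fraction,
                      w_turnaround=0.2, w_availability=0.1):
    """Return a multiplier applied to a result's base credit.

    turnaround_hours      -- time from dispatch to return of the result
    deadline_hours        -- the WU's deadline
    availability_fraction -- 0..1, share of time the host is available
                             to this project (a "backup project" ~ 0)
    w_*                   -- per-project weights; zero disables a reward
    """
    if turnaround_hours > deadline_hours:
        return 0.0  # past deadline: no credit, exactly as today
    # promptness is 1.0 for an instant return, 0.0 at the deadline,
    # so the multiplier degrades smoothly instead of only at the cliff.
    promptness = 1.0 - turnaround_hours / deadline_hours
    bonus = w_turnaround * promptness + w_availability * availability_fraction
    return 1.0 + bonus

# e.g. a result returned halfway to the deadline from a fully dedicated
# host: credit_multiplier(24, 48, 1.0) -> roughly 1.2
```

Dial-up and multi-project hosts still earn full base credit; they only
forgo the optional bonus, which matches the point above that the real
penalty should fall only on work that misses the deadline.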
Regards,
Martin

> <[email protected]> wrote on 09/24/2009 12:15:21 PM:
>
>> Martin <[email protected]>
>> Lynn W. Taylor wrote:
>>> Paul D. Buck wrote:
>>>> The best benchmark for a system is the actual running application. This
>>>> truism has been long known as established fact. The only reason
>>>> synthetic benchmarks are used is that they are a lot easier to port to
>>>> other systems.
>>> The moment you pick one work unit as your "reference" work unit, you
>>> have a synthetic benchmark -- and a circular argument.
>> A vital point is to decide _what_ we are referencing against.
>>
>> Is it:
>>
>> s...@home;
>> A mythical cobblestone of cpu cached integer rate and cpu cached
>> floating point rate;
>> A golden reference PC of a certain architecture;
>> A mythical "statistically averaged" 'computer' bistromathematically
>> determined from the current s...@home participants;
>> Or?
>>
>> Sooo...
>>
>> First question is: what are we referencing against?
>>
>> At the moment we seem to be doing an ad-hoc mix of all of the above
>> with various 'fiddle-factors'.
>>
>> From:
>>
>> http://boinc.berkeley.edu/trac/wiki/AutoFlops
>>
>> the premise/basis of the whole idea of credits is that they:
>>
>> "should be roughly proportional to *FLOPs* performed".
>>
>> Can we get the idea of *ALL resources* to be included in there right
>> from the outset?
>>
>> I'll start with a list of:
>>
>> network usage;
>> RAM max usage;
>> disk storage;
>> turn-around time;
>> success rate;
>> availability;
>> data quality;
>> single float calculations;
>> double float calculations;
>> single integer calculations;
>> double integer calculations;
>> single logic operations;
>> double logic operations;
>> others...
>>
>> and then have a weighted summation of all those to give a cobblestones+
>> value that in some proportion reflects the resources utilised.
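[The quoted proposal above - a weighted summation over all resources,
giving a "cobblestones+" value - might be sketched as below. This is an
illustrative Python sketch only; the metric names, the weights, and the
function are all hypothetical, and real measurement/verification of each
metric is a separate problem.]

```python
# Hypothetical sketch of the "weighted summation of all resources" idea:
# each project publishes weights for the resources it values, and the
# credit grant is a weighted sum of measured usage. Nothing here is
# actual BOINC API; names and numbers are illustrative.

def cobblestones_plus(usage, weights):
    """Weighted sum over whatever resource metrics a project rewards.

    usage   -- dict: metric name -> measured amount for this result
    weights -- dict: metric name -> project-chosen credit per unit
    Metrics absent from either dict contribute nothing, so a primes
    project can keep a pure-FLOPs weighting while a project like CPDN
    also rewards RAM, disk, and long-run availability.
    """
    return sum(w * usage.get(metric, 0.0)
               for metric, w in weights.items())

# Example: a compute-bound project that also rewards prompt turn-around.
weights = {"dp_flops": 1.0e-12, "turnaround_score": 5.0}
usage = {"dp_flops": 3.0e13,       # double-precision FLOPs performed
         "turnaround_score": 0.8,  # 0..1 promptness measure
         "disk_bytes": 1e9}        # ignored: this project gives it no weight
# -> roughly 30 credit from FLOPs plus 4 from promptness
```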
>> Hence, such as quake-catcher or other monitoring projects can reward for
>> the availability and network usage of their more dedicated participants.
>> Long compute projects such as CPDN can additionally reward for a
>> participant keeping with the same one or two year simulation for a
>> completed run, and also additionally reward for the disk space and RAM
>> space utilised. For the various primes projects, the old cobblestone
>> perhaps is fine.
>>
>> "How" to do that is for another thread ;-)
>>
>> (Sorry, if this has already been discussed, then add a reference to that
>> trac page please.)

--
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and (near bottom of page) enter your
email address.
