That's pretty much what it does.  Have you looked at the code?
-- David

[email protected] wrote:
> Adaptive replication should track a machine's validation and error history.
> Machines that have high error rates (and the machine you are describing has
> a high error rate) will have a very low chance of running without
> validation.  On the other hand, machines that never have validation errors
> will have a very high chance of running solo.
> 
> The way I would do it is to store a success fraction per computer (1 -
> (errors + aborts + invalid)/total tasks).  The calculation of whether to
> actually issue another task after this one would be (R - (N + 1))*F*C,
> where R is the replication level requested by the project (one-based),
> N is the replication number of this replica (zero-based), F is the
> success fraction for this project on this computer, and C is some constant
> to prevent computers that have regular errors from ever running solo.
> Since (R - (N + 1)) is 0 for the last requested replica, no others will be
> issued unless there is an error or a late task.  If C is 10, then only
> computers with better than a 90% success rate will EVER run solo in a
> 2-replica system.  C could be a project setting, but it should never be
> allowed to be set to less than 1.  Arguably, 10 is about right.
>
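For concreteness, here is a minimal C++ sketch of the decision described in the
quoted paragraph.  The names (HostHistory, issue_another_replica) are
hypothetical, not BOINC's actual scheduler API, and the test multiplies by the
failure fraction (1 - F) so that the "C = 10, better than 90% success rate"
example above works out as stated.  Treat it as one possible reading of the
proposal, not the real implementation.

#include <algorithm>

// Per-host history, as suggested above.  Hypothetical names, not BOINC code.
struct HostHistory {
    int total_tasks = 0;
    int errors = 0;
    int aborts = 0;
    int invalid = 0;

    // F = 1 - (errors + aborts + invalid) / total tasks
    double success_fraction() const {
        if (total_tasks == 0) return 0.0;  // no history yet: treat as unreliable
        double bad = errors + aborts + invalid;
        return 1.0 - bad / total_tasks;
    }
};

// Decide whether to issue another replica after replica N (zero-based) of a
// workunit whose project requested replication level R (one-based).
// C is the project-wide caution constant, clamped to >= 1 as suggested above.
bool issue_another_replica(const HostHistory& host, int R, int N, double C) {
    C = std::max(C, 1.0);
    double F = host.success_fraction();
    double remaining = R - (N + 1);           // 0 for the last requested replica
    return remaining * (1.0 - F) * C >= 1.0;  // assumed decision threshold
}

With R = 2, N = 0 and C = 10, a host with a success fraction of 0.95 gets no
second replica (it runs solo), while one at 0.85 does.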