I have been thinking about it, and what we really want to capture is the
typical longest time between the time that a task is first reported as 100%
complete and the time that the task is reported to the project.  It really
does not matter why the task takes that long to report.  There are several
non-zero-time events that may occur during this period.

1)  Time spent processing after the task is first reported as 100% complete.
2)  Time spent uploading or waiting to upload because of server or network
difficulties.
3)  Time spent waiting to report because of network difficulties.
4)  Time spent reporting (not quite 0 time).

However, these can be neatly packaged as the time between the first notice
of 100% complete and the receipt of confirmation of the report (which is
either the reply from the server acknowledging the report, or the reply
from the server noting that the report had already been made some time
previously).

We can either use a decaying formula, 0.99 * saved value + 0.01 * current
value, or we can use mean + 3 * standard deviation.  If we use the decay,
we can reset the value every time the user changes the setting for
min_queue (Connect Every X).
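To make the two alternatives concrete, here is a minimal sketch of both
estimators.  The class and function names are illustrative only, not actual
BOINC code, and the reset hook for the min_queue change is just where such
a call would go under the proposal above.

```python
import statistics

class ReportDelayDecay:
    """Decaying average of report delays: new = 0.99*old + 0.01*sample."""

    def __init__(self):
        self.value = 0.0

    def update(self, sample):
        # Fold the latest observed delay (seconds) into the running value.
        self.value = 0.99 * self.value + 0.01 * sample
        return self.value

    def reset(self):
        # Would be called when the user changes min_queue (Connect Every X).
        self.value = 0.0

def mean_plus_3_sigma(samples):
    """Alternative estimator: mean + 3 * standard deviation of delays."""
    if len(samples) < 2:
        return samples[0] if samples else 0.0
    return statistics.mean(samples) + 3 * statistics.stdev(samples)
```

Note that the decaying average reacts slowly (each new sample contributes
only 1%), which is why resetting it on a preference change matters, while
mean + 3 sigma needs a stored history of samples but adapts immediately.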

jm7

_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev