If uploads are working, any order will do.  No reason to sort.

The only time you need to sort is if you're going to slow down the 
retries (so you don't keep hammering the servers to death).

Stop hammering servers, reduce the load, and things will go much more 
smoothly.  Recovery after an outage will be faster.

But, if you do slow the client way down, you really don't want to get 
lucky, hit that one-in-fifty successful upload, and have it upload work 
that isn't due for weeks, when you've got something that will expire in 
a few hours.

I do see your point.  This would work:

Keep a count of failed uploads.  A successful retry resets the count to 
zero; a failure increments it.

If the counter reaches two or three, re-sort the work units by due-date, 
and reset the timers so that they run out in deadline-order.

I agree that a saturated link has a much lower data rate, but instead of 
slowing down after you get a connection, how about slowing down the 
connections so the link isn't saturated to start with?
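By "slowing down the connections" I mean something like this pacer: 
cap how many uploads run at once and space out connection starts.  
(The knobs here -- max_concurrent, min_interval -- are illustrative, 
not real BOINC preferences.)

```python
import time


class ConnectionPacer:
    """Space out new upload connections so the link is never saturated
    in the first place, rather than throttling mid-transfer."""

    def __init__(self, max_concurrent=2, min_interval=5.0,
                 clock=time.monotonic):
        self.max_concurrent = max_concurrent
        self.min_interval = min_interval  # seconds between connection starts
        self.clock = clock
        self.active = 0
        self.last_start = float("-inf")

    def may_start(self) -> bool:
        """Return True and claim a slot if a new connection may begin."""
        now = self.clock()
        if self.active >= self.max_concurrent:
            return False
        if now - self.last_start < self.min_interval:
            return False
        self.active += 1
        self.last_start = now
        return True

    def finished(self):
        """Release a slot when an upload completes or is abandoned."""
        self.active -= 1
```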

-- Lynn

Martin wrote:
> David Anderson wrote:
>> If you have a build environment, check out the trunk and build.
>> Otherwise we'll have to wait for Rom to backport this to either 6.6 or 6.8.
>>
>> -- David
>>
>> Lynn W. Taylor wrote:
> [---]
>>>>> So, when any retry timer runs out, instead of retrying that WU, retry 
>>>>> the one with the earliest deadline -- the one at the highest risk.
> 
> Beware the case where an upload fails because of some db or storage 
> problem with a single WU... One problem WU upload shouldn't cause all 
> others to fail.
> 
> I suggest using round-robin ordering when uploads are failing (after 
> all, no WUs are getting through in any case) and switching to earliest 
> deadline first order only once an upload has succeeded.
> 
> 
> A second idea is to have the Boinc *client* dynamically reduce its 
> upload data rate in real time if it detects lost data packets 
> (detects TCP resends). The upload is abandoned for a backoff period 
> if the upload rate drops too low. This is a more sensitive way to 
> avoid a DDoS on the project servers than just using an arbitrary 
> backoff mechanism.
> 
> Note that a saturated link has a much LOWER effective bandwidth than a 
> maximally utilised link.
> 
> Regards,
> Martin
> 
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.