That is one of the options - move EVERYTHING to the other end of that
100 Mbit stretch. The validator needs heavy communication with the
database, as does everything else.
Other options include making the back end not enforce deadlines for some
time after the packet loss stops, making the client not retry each
upload separately, and upgrading the last mile of the connection.
It is still probably a good idea to modify the client so that it has a
project-wide backoff for uploads, and it is also probably a good idea for
there to be a flag to suspend deadline enforcement for 24 hours. The
latter would be useful when a project comes back online after an outage,
so that uploads and reports can occur during that period without creating
and sending out new tasks to replace tasks that expired during the outage.
This is in addition to recovering from an overload.
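
A minimal sketch of what those two mechanisms might look like, assuming a
per-project backoff on the client and a grace-period check on the server.
All names here (upload_backoff, next_upload_time, deadline_grace_until)
are illustrative, not actual BOINC fields:

// Hypothetical sketch: one exponential backoff per project, shared by
// all of that project's file uploads, instead of one backoff per file.
#include <algorithm>
#include <cstdlib>   // drand48() (POSIX)

struct PROJECT {
    double upload_backoff;     // current backoff interval (seconds)
    double next_upload_time;   // earliest time any upload may be retried
};

const double MIN_BACKOFF = 60;         // 1 minute
const double MAX_BACKOFF = 4*3600;     // 4 hours

// Call when any upload to this project fails.
void upload_failed(PROJECT& p, double now) {
    p.upload_backoff = std::min(
        std::max(2*p.upload_backoff, MIN_BACKOFF), MAX_BACKOFF);
    // Randomize so all clients don't retry in lock step.
    p.next_upload_time = now + p.upload_backoff*(0.5 + drand48());
}

// Call when any upload to this project succeeds.
void upload_succeeded(PROJECT& p) {
    p.upload_backoff = 0;
    p.next_upload_time = 0;
}

// Every upload for the project consults the shared backoff.
bool may_upload(const PROJECT& p, double now) {
    return now >= p.next_upload_time;
}

// Hypothetical server-side counterpart: a grace period during which
// nothing is treated as past deadline (e.g. 24 hours after an outage),
// so results reported during recovery aren't expired and reissued.
bool result_expired(double report_deadline, double now,
                    double deadline_grace_until) {
    if (now < deadline_grace_until) return false;  // grace: nothing expires
    return now > report_deadline;
}

Sharing one backoff per project means a single failed upload throttles all
pending uploads to that project, so a recovering server isn't hammered by
independent per-file retries.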
jm7
From: Carl Christensen <[email protected]>
Sent by: [email protected]
Date: 07/14/2009 10:59 AM
Subject: Re: [boinc_dev] Optimizing uploads.....
If the data won't go to the validator, the validator must go to the data!
That's how we did it -- have a couple of procs on the upload servers for
validation, have them write a simple file with validator info, and a
"crawler" proc with DB access can read these files via HTTP etc. There
are plenty of sensible ways to handle this.
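
A rough sketch of the pattern described above, assuming the validator-info
files are plain "result_id outcome" lines served over HTTP; the URLs, file
format, and function names are illustrative, and the DB update is a stub:

// Hypothetical "crawler": fetch validator-info files written by procs on
// the upload servers, then apply the outcomes to the database.
#include <curl/curl.h>
#include <sstream>
#include <string>

// libcurl write callback: accumulate the response body into a string.
static size_t collect(char* buf, size_t size, size_t n, void* out) {
    ((std::string*)out)->append(buf, size*n);
    return size*n;
}

// Fetch one validator-info file from an upload server over HTTP.
bool fetch_validation_file(const std::string& url, std::string& body) {
    CURL* c = curl_easy_init();
    if (!c) return false;
    curl_easy_setopt(c, CURLOPT_URL, url.c_str());
    curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(c, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(c);
    curl_easy_cleanup(c);
    return rc == CURLE_OK;
}

// Parse "result_id outcome" lines and update the DB (stubbed here).
void apply_validation(const std::string& body) {
    std::istringstream in(body);
    long result_id; int outcome;
    while (in >> result_id >> outcome) {
        // e.g. UPDATE result SET validate_state=? WHERE id=?
    }
}

The point of the split is that only the crawler needs a DB connection; the
validation work itself runs next to the data, so result files never cross
the slow link.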
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.