Lynn W. Taylor wrote:
> I think the first step is to get the ability to tune clients, and then 
> experimentally determine how to optimally tune clients.

That's fine for controlling the uploads.

You also need to fix the servers to avoid saturating the link with downloads.

Note also that there is no "broadcast" available. Boinc is strictly 
"pull" technology; there is no "push" with which to 'broadcast'.

You could include something in the response to the clients. However, 
minimum system change seems to be the modus operandi...

> That's where the BFI method comes in.  I know from long experience 
> running mail servers that you can often speed up throughput by reducing 
> sessions.

BFI? What's that in this context?


Note that, with a retry protocol such as TCP, link bandwidth is 
effectively reduced once you hit the link's maximum data rate. The 
retries then DoS the link with repeated data, so you get a critical 
sudden cut-off into ever more degraded performance. Depending on the 
proportion of overload and the failed connections' timeouts, you can 
bring a link down to near zero effective bandwidth.

Reducing the number of simultaneous connections accepted will of course 
help with that, and will in effect increase the available bandwidth 'seen'.

The real problem is still that you randomly lose a proportion of the 
data on the overloaded link, and then suffer the fallout from that.

To avoid data loss (and so avoid the expensive and wasteful retries, 
whether at the TCP level or at the higher application level), you must 
contrive the data rates so that you never exceed the router buffer space 
at the slowest point on the network.
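
One standard way to contrive those rates is to pace each sender with a 
token bucket whose burst size stays below the smallest router buffer 
along the path. A minimal sketch (the rate and burst figures are 
arbitrary examples, not s...@h values):

    import time

    class TokenBucket:
        """Pace outgoing data to a fixed rate with a bounded burst size."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes       # keep this below the smallest
            self.tokens = burst_bytes         # router buffer on the path
            self.stamp = time.monotonic()

        def wait_for(self, nbytes):
            """Block until nbytes may be sent (nbytes <= burst_bytes)."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.stamp) * self.rate)
                self.stamp = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Example: pace one sender at 10 Mbit/s with 64 KiB bursts, calling
    # bucket.wait_for(len(chunk)) before each send.
    bucket = TokenBucket(rate_bytes_per_s=10e6 / 8, burst_bytes=64 * 1024)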

Hence, for "bursty" data, you can never achieve 100% utilisation. The 
cricket graphs for s...@h show a max of about 90Mb/s. So about 90% of that 
(about 81Mb/s, or rounded down to 80% of the 100Mb/s link) should give 
loss-free and reliable operation while maintaining maximum data transfer 
rates.

Hit 81% (or wherever the critical point is at which the router buffers 
overflow and packets get dropped), and the retries will bomb you down to 
something less than 81%. The retries from the other connections 
subsequently affected then continue the cascade into a disgraceful 
degradation with an ever increasing proportion of retries. The 
proportions are limited only by the connection retry limits/timeouts.

You still lose an awful lot of bandwidth. Far, far more than if you were 
to "waste" a few percent to deliberately keep the link unsaturated.

Hence, averaged over a rolling window no longer than the minimum router 
buffer time, you must keep:

connections * connection_bandwidth < link_max_bandwidth

Or... You suddenly hit data loss and a disgraceful degrade.
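
Plugging the 80 Mbit/s target from above into that inequality shows how 
quickly it bites. The per-connection rates here are illustrative 
guesses, not s...@h measurements:

    # Solve the inequality for the number of connections:
    # connections < link_max_bandwidth / connection_bandwidth
    LINK_TARGET_MBPS = 80.0               # ~90% of the observed 90 Mb/s peak

    for conn_mbps in (0.25, 0.5, 1.0, 4.0):
        print(f"{conn_mbps:4.2f} Mb/s per connection -> "
              f"max {int(LINK_TARGET_MBPS / conn_mbps)} simultaneous connections")

Even at a modest half a Mbit/s per connection you can afford only 160 of 
them; at the 4 Mbit/s a fast host can pull, just 20. That is the same 
order as the limits I suggest below.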


> Your "one fast upload" scenario has one saving grace: it's fast, it's 
> going fast, it won't last long -- the faster it goes, the sooner it is 

It won't last long only if it can get through unimpeded.

If it bungs up the pipe, it and everyone else descends into a flood of 
retries, and then nothing gets through, ever, until everyone hits their 
TCP retry limits and gives up.

Other new connections then jump in again.


> gone.  As long as we're operating at a reasonable load, that is -- 
> different story when we're overloaded by an order of magnitude.

Do not confuse a DDoS of *new* connections with what I think is 
happening for s...@h: simply a data amplification overload that saturates 
and effectively degrades the link capacity.


My present view is that a *combination* of restricting the number of 
connections serviced AND restricting the permitted data rates is needed.


The number of simultaneous connections that can be serviced without any 
data rate limiting is really surprisingly low compared with the "website 
mentality" of serving thousands of simultaneous connections.

As an experiment, can s...@h limit the max number of simultaneous 
upload/download connections to see what happens to the data rates?

I suggest a 'first try' maximum of 150 simultaneous connections for 
uploads and 20 for downloads. Adjust as necessary to keep the link at an 
average of no more than *80* Mbit/s.
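
In code, the experiment is little more than a semaphore capping the 
accepted connections plus per-connection pacing, so the aggregate stays 
under the 80 Mbit/s target. A rough sketch (the port, the chunk size, 
and the crude sleep-based pacing are placeholders, not the actual s...@h 
server code):

    import socket, threading, time

    MAX_UPLOAD_CONNS = 150                    # the 'first try' cap above
    LINK_TARGET_BPS = 80e6 / 8                # 80 Mbit/s in bytes/s
    PER_CONN_BPS = LINK_TARGET_BPS / MAX_UPLOAD_CONNS   # ~66 kB/s each
    CHUNK = 8192

    slots = threading.BoundedSemaphore(MAX_UPLOAD_CONNS)

    def handle(conn):
        """Read one upload, paced so it never exceeds its share of the link."""
        try:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                # a real server would write the data out here
                time.sleep(len(data) / PER_CONN_BPS)    # crude pacing
        finally:
            conn.close()
            slots.release()

    def serve(port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", port))
        srv.listen(16)
        while True:
            conn, _addr = srv.accept()
            if slots.acquire(blocking=False):
                threading.Thread(target=handle, args=(conn,), daemon=True).start()
            else:
                conn.close()                  # over the cap: refuse now and
                                              # let the client retry later

Refusing the connection outright is deliberate: a quick refusal costs 
almost nothing, whereas accepting it and letting it starve feeds the 
retry cascade described above.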


Aside: For example, I've got my uploads limited to just 200kb/s so that 
my upload link is never DoSed by Boinc.
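
If I remember the preference names correctly, that is just the standard 
client bandwidth preference, set in bytes per second in 
global_prefs_override.xml (25000 bytes/s assumes my "200kb/s" is read as 
kilobits):

    <global_preferences>
       <max_bytes_sec_up>25000</max_bytes_sec_up>
    </global_preferences>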

Plausible?

Regards,
Martin

-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------