Lynn W. Taylor wrote:
[...]
> BFI (in any context): Brute Force and Ignorance.  I believe it's usually 
> referred to as BFBI (Brute Force and Bloody Ignorance) in the U.K.

Shouldn't be needed in this instance. There should be enough information 
for Berkeley to do the calculations.

[...]
> You might be able to filter just some IP blocks in the router (and block 
> everything else) so that the server only sees a subset, and doesn't even 
> see the SYN packets for filtered use.
> 
> But I'm not convinced.

Shouldn't be needed.

If the requests for new connections exceed the number that can be 
serviced, then they get a reset or even just nothing (the SYN is simply 
dropped).
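A minimal sketch (not the BOINC server's actual code) of the mechanism being described: `listen()`'s backlog caps the queue of un-accepted connections, and once that queue is full the kernel drops (or resets) further SYNs, so the client sees no response and backs off to retry. The address and backlog value here are purely illustrative.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(5)                # small backlog: only ~5 un-accepted connections queue;
                             # further SYNs are dropped/reset by the kernel
addr = srv.getsockname()
print("listening on", addr)
srv.close()
```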

But I don't think that the link is being brought down merely by a flood 
of TCP requests for new connections.

(Perhaps server requests should be via UDP, but would that not be 
blocked by silly overly paranoid firewalls?)


> Also, you're thinking router buffer space, and I'm thinking server TCP 
> Control Blocks -- but I don't really care what we're optimizing, we're 
> optimizing the system.  We've got a "drain" at the server side, and a 
> set of taps at the client side.  If we turn on too many taps, the floor 
> gets wet, and I don't care if it's the pipe between the clients and the 
> server, the basin, or the drain.

Indeed so, except that the network links and the (smallest) router 
buffer space are a critical part of the system.

If the flow of data were constant, then there would be no need for any 
buffers. In reality for s...@h, they have a 1Gb/s link that can dump 
data onto a router that has a 100Mb/s outlet.


So for your analogy, you have the basin with its drain wide open 
(100Mb/s) and you are dumping a jugful (a batch of WUs) of water at a 
time into the basin in one big splosh (1000Mb/s). You need to ensure a 
long enough pause between jugfuls and a small enough jug to avoid the 
basin overflowing (and losing water/data onto the floor).
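Putting rough numbers on that analogy (the router buffer size below is an assumed figure, purely for illustration; the link rates are the ones mentioned above):

```python
# Back-of-envelope: a buffer fed at 1000 Mbit/s but drained at 100 Mbit/s
# fills at the 900 Mbit/s difference.

IN_RATE_MBIT = 1000     # inbound link, Mbit/s
OUT_RATE_MBIT = 100     # outbound link, Mbit/s
BUFFER_MBYTE = 64       # assumed router buffer, MByte (illustrative only)

fill_rate_mbit = IN_RATE_MBIT - OUT_RATE_MBIT   # net fill rate, Mbit/s
buffer_mbit = BUFFER_MBYTE * 8                  # buffer size in Mbit
seconds_to_overflow = buffer_mbit / fill_rate_mbit

print(f"Buffer overflows after {seconds_to_overflow:.2f} s of full-rate burst")
# With these assumed numbers: 64 MByte * 8 / 900 Mbit/s ≈ 0.57 s
```

So even a generous buffer gives well under a second of grace against a full-rate burst, which is why the size of the "jug" and the pause between pours matter.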


> Martin wrote:
[...]
>> Do not confuse a DDOS of *new* connections with what I think is 
>> happening for s...@h of simply a data amplification overload that then 
>> saturates and effectively degrades the link capacity.
>>
>>
>> My present view is that a *combination* of restricting the number of 
>> connections serviced, AND restricting the permitted data rates is needed.
>>
>>
>> The number of simultaneous connections that can be permitted to be 
>> serviced without any data rates limiting is really surprisingly low 
>> compared with "website mentality" that is to serve thousands of 
>> simultaneous connections.
>>
>> As an experiment, can s...@h limit the max number of simultaneous 
>> upload/download connections to see what happens to the data rates?
>>
>> I suggest a 'first try' max simultaneous connections of 150 for 
>> uploads and 20 for downloads. Adjust as necessary to keep the link at 
>> an average that is no more than just *80* Mbit/s.
>>
>>
>> Aside: For example I've got uploads limited to just 200kb/s so that my 
>> upload link is never DOSed by Boinc.
>>
>> Plausible?
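The connection cap suggested above can be sketched with counting semaphores; the 150/20 figures are the trial values from the suggestion, and everything else (function names, the "retry-later" signal) is hypothetical illustration, not the BOINC scheduler's real API:

```python
import threading

UPLOAD_LIMIT = 150      # trial cap on simultaneous uploads (from above)
DOWNLOAD_LIMIT = 20     # trial cap on simultaneous downloads (from above)

upload_slots = threading.BoundedSemaphore(UPLOAD_LIMIT)

def try_service_upload(service_fn):
    """Service an upload only if a slot is free; otherwise tell the
    client to back off and retry later (hypothetical protocol signal)."""
    if not upload_slots.acquire(blocking=False):
        return "retry-later"
    try:
        return service_fn()
    finally:
        upload_slots.release()

# usage: a free slot means the upload is serviced
result = try_service_upload(lambda: "serviced")
print(result)
```

The point of the non-blocking acquire is that excess clients are refused immediately rather than queued, so the serviced connections keep their full share of the link.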

Regards,
Martin


-- 
--------------------
Martin Lomas
m_boincdev ml1 co uk.ddSPAM.dd
--------------------
_______________________________________________
boinc_dev mailing list
[email protected]
http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
To unsubscribe, visit the above URL and
(near bottom of page) enter your email address.
