On Thu, Aug 07, 2003 at 11:45:29AM -0700, Ian Clarke wrote:
> The current bandwidth limiting mechanism seems to be causing serious
> problems.  It works by limiting the speed of transmission of data on a
> byte-per-byte basis.  Unfortunately this creates a situation where
> transfers of data occur more slowly, which means they take longer, which
> means that we have more concurrent transfers overall, which slows them
> down even further - and the cycle continues - with the entire Freenet 
> network getting bogged down in a web of extremely slow transfers.
> 
> The alternative is for a node to try to maximize the per-transfer 
> connection speed by rejecting new requests when the upstream connection 
> speed is maxed out.  Some claim that this is a terrible idea and will 
> screw up routing because it will be impossible to get a node to accept a 
> datarequest, but I disagree.  
> 
> Imagine you go to McDonalds and ask a server for some food, they take 
> your order.  Now, you didn't know, but that server is actually serving 
> 20 other people at the same time and consequently it takes you ages to 
> get your food.  Wouldn't it be better if that server said "Sorry Sir, 
> I'm really busy - please try another server".
> 
> In short, by making a node try to service its existing transfers as 
> quickly as it can, it gets them out of the way faster and can thus serve 
> just as many requests as a node which accepts all requests but takes 
> ages to serve each individual one.

So how do we measure whether a node is "maxed out"? Two simple possibilities:
1. We reject requests once we have N connections open.
Attack: a simple client-side snail attack. Request N large files that you
know the node has, and read them as slowly as possible. You don't need
any special knowledge of the node, since you can insert those files
yourself - in fact, the attack would probably work even better as simply
inserting N files simultaneously, very slowly.
2. We reject requests once transfers are using more than some proportion
of our bandwidth limit (outbound, for example). The attack is a little
harder: the attacker needs bandwidth greater than or equal to that
proportion of the victim's bandwidth limit. Insert a single huge file at
HTL 0. When it finishes, do it again. Repeat indefinitely.
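To make the two policies concrete, here is a minimal sketch of each
admission check. This is illustrative only, not actual Freenet code; all
names (maxConnections, rejectFraction, etc.) are assumptions, and a real
node would measure the outbound rate over a smoothing window rather than
take an instantaneous figure.

```java
// Hypothetical sketch of the two admission-control policies discussed above.
public class AdmissionControl {

    // Policy 1: reject new requests once N transfers are already open.
    static boolean acceptByConnectionCount(int openTransfers, int maxConnections) {
        return openTransfers < maxConnections;
    }

    // Policy 2: reject new requests once the measured outbound transfer rate
    // exceeds some fraction (e.g. 0.9) of the configured bandwidth limit.
    static boolean acceptByBandwidth(double outboundBytesPerSec,
                                     double bwLimitBytesPerSec,
                                     double rejectFraction) {
        return outboundBytesPerSec < rejectFraction * bwLimitBytesPerSec;
    }
}
```

The snail attack above exploits policy 1 by pinning openTransfers at N
with near-zero actual bandwidth use; the flooding attack exploits policy 2
by keeping the measured rate above the threshold with a single sustained
insert.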

The former is IMHO not acceptable. The latter we can perhaps live with,
since there are many ways to disable a node anyway if you have more
bandwidth than it does and you know where it is - especially in the
absence of NIOv2. Either attack will prevent the node from serving any
useful traffic to the rest of the network.

Thoughts?
> 
> Thoughts?
> 
> Ian.
> 
> -- 
> Ian Clarke                                                [EMAIL PROTECTED]
> Coordinator, The Freenet Project            http://freenetproject.org/
> Weblog                                     http://slashdot.org/~sanity/journal



-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
