Our current load limiting scheme is based on an analogy with TCP/IP. We
have a window, which is adjusted up and down according to an
additive-increase/multiplicative-decrease (AIMD) formula: it grows on
successes (no timeout, no RejectedOverload) and shrinks on failures
(timeout or RejectedOverload). The delay between requests is then the
typical request time divided by the window.
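
To make that concrete, here is a minimal sketch of the scheme in Java.
The class and constant names are illustrative only, not Freenet's actual
identifiers, and the constants (+1 per window's worth of successes,
halve on failure) are the textbook AIMD values rather than whatever the
node actually uses:

// Sketch of the AIMD throttle described above. Hypothetical names.
public class AIMDThrottle {
    private double window = 1.0; // current window size
    private static final double ADDITIVE_STEP = 1.0;         // growth per window of successes
    private static final double MULTIPLICATIVE_FACTOR = 0.5; // cut factor on failure
    private static final double MIN_WINDOW = 1.0;

    // Success: no timeout, no RejectedOverload. Grow additively.
    public synchronized void onSuccess() {
        window += ADDITIVE_STEP / window;
    }

    // Failure: timeout or RejectedOverload. Shrink multiplicatively.
    public synchronized void onFailure() {
        window = Math.max(MIN_WINDOW, window * MULTIPLICATIVE_FACTOR);
    }

    // Delay between requests = typical request time divided by the window.
    public synchronized long interRequestDelayMillis(long typicalRequestTimeMillis) {
        return (long) (typicalRequestTimeMillis / window);
    }
}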

Typically failures are caused by pre-emptive request rejection (since
actual timeouts are a very bad thing), usually as a result of bandwidth
pre-allocation, but sometimes for other reasons. Rejections are
propagated back to the original request source. Inserts visit more nodes
than requests, so they are far more likely to be rejected somewhere
along the chain. We have a single common window for both requests and
inserts; since insert failures shrink that shared window, the net effect
is to slow down requests in order to speed up inserts.

How does this differ from TCP? How might it be sub-optimal?

1. Modelling the entire network as one connection. TCP models a single
route across a large network, whereas we are modelling a large number of
requests which go to nodes all over the network.
2. Propagation, and inserts vs requests: Failures are propagated back
along the chain. At each node that an insert or request visits, there is
an opportunity for it to be pre-emptively rejected. This is reasonably
close to TCP: at each router, a packet may be dropped. However, dropping
a packet in TCP/IP is quite rare, and almost entirely results from there
simply being too many packets on the pipe. Is that the dominant cause of
failure on Freenet? I believe so (rejection due to bandwidth
pre-allocation), though I'm not certain; some failures are caused by
nodes which are seriously overloaded on e.g. CPU. Also, in TCP packets
are actually dropped, whereas in Freenet they are usually rerouted. If
they were not rerouted, inserts would never get anywhere. The only way
to "bridge the gap" between inserts and requests further would be to
throttle at a local level rather than at the sender; but then load is
not propagated back to the sender, which is a basic requirement for
managing load. We can have the best of both worlds with token passing
(see the sketch after this list)...
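
To make "token passing" concrete, here is one possible shape for it in
Java. This is a sketch of the general idea under my own assumptions, not
a worked-out protocol, and all names are hypothetical: each node grants
its peers a budget of tokens sized to its spare capacity, a peer must
spend a token to route a request through us, and the token is returned
when the request completes. Acceptance is decided locally, yet pressure
still propagates back to the sender, because an overloaded node simply
grants fewer tokens.

import java.util.HashMap;
import java.util.Map;

// Sketch of a per-peer token grant table. Hypothetical names throughout.
public class TokenGrantTable {
    private final Map<String, Integer> tokensByPeer = new HashMap<>();

    // Called periodically: grant tokens according to our spare capacity.
    public synchronized void grant(String peer, int tokens) {
        tokensByPeer.merge(peer, tokens, Integer::sum);
    }

    // A peer wants to route a request through us: spend a token or reject.
    public synchronized boolean tryAccept(String peer) {
        int available = tokensByPeer.getOrDefault(peer, 0);
        if (available <= 0)
            return false; // same effect as sending RejectedOverload
        tokensByPeer.put(peer, available - 1);
        return true;
    }

    // Request finished (success or failure): hand the token back.
    public synchronized void release(String peer) {
        tokensByPeer.merge(peer, 1, Integer::sum);
    }
}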

Anyway, what we can do:
- Multiply the request throttle window by the number of connected peers!

The number of connected peers is the number of routes that a request
can take, at least at the requester's end. Since TCP's window models a
single route, scaling the window by the number of available routes is a
natural way to model all of them, so this seems to make sense.
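
In terms of the AIMD sketch above, this is roughly a one-line change
(connectedPeers being whatever the node currently reports; hypothetical
names, as before):

// Delay between requests, scaled by the number of routes available.
public synchronized long interRequestDelayMillis(long typicalRequestTimeMillis,
                                                 int connectedPeers) {
    double effectiveWindow = window * Math.max(1, connectedPeers);
    return (long) (typicalRequestTimeMillis / effectiveWindow);
}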

Maybe we should try this. (Probably after xmas).
