-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

> The rationale was to treat the network collectively as a single 
> receiver.  Do you see reasons why that approach won't work?

The path to each receiver has a different round-trip time and capacity,
so I'm not sure it would be meaningful to try to calculate things like
the bandwidth-delay product or RTT variance, both of which TCP relies
on. I'm not saying it definitely won't work, but it's so far outside
what TCP was designed for that I don't think it can be considered a
well-tested system.
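To make the RTT-variance point concrete, here's an illustrative sketch
(not Freenet or kernel code) of TCP's standard smoothed-RTT estimator
(the Jacobson/Karels formulas, as specified in RFC 6298) fed with ack
samples from a single path versus samples interleaved from several
paths with very different RTTs. The sample values are made up for
illustration:

```python
def estimate(samples, alpha=0.125, beta=0.25):
    """Feed RTT samples (seconds) through TCP's SRTT/RTTVAR estimator
    and return the final (srtt, rttvar)."""
    srtt, rttvar = samples[0], samples[0] / 2
    for r in samples[1:]:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
        srtt = (1 - alpha) * srtt + alpha * r
    return srtt, rttvar

# One receiver, one path: samples cluster around 100 ms, so the
# variance settles down and the retransmit timeout is meaningful.
one_path = [0.1, 0.11, 0.09, 0.1] * 10

# Acks from three receivers over 20 ms, 100 ms and 500 ms paths,
# interleaved as a single connection would see them: the "smoothed"
# RTT describes no actual path, and the variance stays huge, so the
# derived timeout is useless.
mixed = [0.02, 0.1, 0.5] * 13

print(estimate(one_path))
print(estimate(mixed))
```

The same problem hits the bandwidth-delay product: there is no single
delay (or bottleneck bandwidth) to multiply, so the window size TCP
would compute doesn't correspond to any real path's capacity.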

>>> * TCP's congestion control also assumes the sender is well behaved - a
>>> badly behaved sender can cause all other flows to back off, for selfish
>>> or malicious reasons
> 
> 
> True, but is this even a solvable problem, noting that it hasn't been 
> solved with the Internet?

I believe it may be solvable for networks where the endpoints trust one
another but don't necessarily trust intermediate nodes; I'm not sure
it's solvable if the endpoints must remain anonymous to one another.
(Sorry for the hand-waving - I've got some specific ideas in this area,
but the paper's under anonymous review for a conference. In any case, it
isn't applicable to anonymous communication.)

>>> * Despite being well designed and widely tested, TCP is not suitable for
>>> end-to-end congestion control in Freenet
> 
> 
> So you say, but you are assuming the point that is being debated - why
> isn't it suitable, and what is the better alternative?

I think Matthew's right about pushing load back to the sender - the
question is how to do this over multiple hops in a way that doesn't
reveal the identity of the sender and gives the sender an incentive to
slow down (rather than a polite request to do so).

>>> * "Route as greedily as possible, given the available capacity"
> 
> 
> The problem here, and it is one we have faced before, is that this 
> degrades routing, which means requests take longer, which further 
> increases the load on the network, isn't this the last thing we want to
> do in this situation - or perhaps we don't have a choice?

You're right, redirecting load over longer paths is a bad idea. The
current explicit congestion notification approach is better, but the
question is how to encourage senders to respond correctly to
RejectedOverloads...

Perhaps each well behaved node could throttle its neighbours based on
the number of RejectedOverloads returned to them (directly or
indirectly)? That way there'd be no need for the creator of the
RejectedOverload to know the identity of the sender, but the throttling
would propagate back to the right place...
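A rough sketch of what I mean, purely hypothetical (the class and
method names are invented for illustration, not Freenet's actual code):
each node keeps a decaying count of RejectedOverloads attributable to
each neighbour and halves the rate at which it accepts that neighbour's
requests for each recent rejection.

```python
class NeighbourThrottle:
    """Throttle each neighbour based on RejectedOverloads returned for
    requests it sent us, whether we generated them or forwarded them."""

    def __init__(self, base_rate=10.0, decay=0.9):
        self.base_rate = base_rate  # requests/sec with no rejections
        self.decay = decay          # how quickly old rejections are forgotten
        self.rejections = {}        # neighbour id -> smoothed rejection count

    def record_rejection(self, neighbour):
        """A RejectedOverload came back for one of this neighbour's requests."""
        self.rejections[neighbour] = self.rejections.get(neighbour, 0.0) + 1.0

    def tick(self):
        """Called periodically: decay the counts so throttling is temporary."""
        for n in self.rejections:
            self.rejections[n] *= self.decay

    def allowance(self, neighbour):
        """Requests/sec we'll accept from this neighbour: halved for each
        recent RejectedOverload attributable to it."""
        return self.base_rate / (2 ** self.rejections.get(neighbour, 0.0))
```

Because each node only ever throttles its direct neighbours, the
slowdown propagates hop by hop back toward the original sender without
the node that generated the RejectedOverload needing to know who that
sender is.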

Cheers,
Michael

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)

iD8DBQFEPWeJyua14OQlJ3sRAk8wAJ9ApjAy1NqMJ8ubHq01XcaX37CTkQCgioYP
iE/vHgBnvzZm6tW2YDJ5xl4=
=d6GR
-----END PGP SIGNATURE-----
