You seem to state some things as fact here that are still rather  
debatable:

On 12 Apr 2006, at 10:25, Michael Rogers wrote:
> * TCP's congestion control is designed for use between a single sender
> and a single receiver - it doesn't work for a single sender and  
> multiple
> receivers (see reliable multicast)

The rationale was to treat the network collectively as a single  
receiver.  Do you see reasons why that approach won't work?

> * TCP's congestion control also assumes the sender is well behaved - a
> badly behaved sender can cause all other flows to back off, for  
> selfish
> or malicious reasons

True, but is this even a solvable problem, given that it hasn't been
solved on the Internet either?

> * Despite being well designed and widely tested, TCP is not  
> suitable for
> end-to-end congestion control in Freenet

So you say, but you are assuming the point that is being debated -  
why isn't it suitable, and what is the better alternative?

> * However, TCP-like congestion control between neighbours would make
> sense (see DCCP for TCP-friendly congestion control algorithms)

That is one way we are using it already (although it appears that our
implementation was flawed; Matthew is now fixing that).  We were
basically using it at two different layers of Freenet: the
node-to-node layer, and the insert and request layer.
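
To make the node-to-node idea concrete, here is a minimal sketch of a
per-neighbour AIMD (additive-increase, multiplicative-decrease) window,
TCP-style.  The class name, parameters, and methods are illustrative
assumptions, not Freenet's actual implementation:

```python
class LinkCongestionWindow:
    """Per-neighbour AIMD congestion window (hypothetical sketch;
    not Freenet's real code)."""

    def __init__(self, initial=1.0, max_window=64.0):
        self.cwnd = initial        # packets allowed in flight on this link
        self.in_flight = 0         # packets sent but not yet acked
        self.max_window = max_window

    def can_send(self):
        # A neighbour is "ready" when its window has room.
        return self.in_flight < self.cwnd

    def on_send(self):
        self.in_flight += 1

    def on_ack(self):
        self.in_flight -= 1
        # Additive increase: roughly +1 packet per round trip.
        self.cwnd = min(self.max_window, self.cwnd + 1.0 / self.cwnd)

    def on_loss(self):
        self.in_flight = max(0, self.in_flight - 1)
        # Multiplicative decrease, as in TCP.
        self.cwnd = max(1.0, self.cwnd / 2.0)
```

The same object could serve both layers: the node-to-node layer consults
can_send() before transmitting, and the insert/request layer could use
the aggregate of all windows as a load signal.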

> * I suggest replacing the backoff and ping mechanisms:
>     * TCP-like congestion control between neighbours
>     * Forward each packet to the neighbour that:
>         * Is online
>         * Is ready to receive more packets
>         * Is closest to the target
>
> * Rather than having an explicit backoff timer, congested  
> neighbours are
> temporarily avoided until they are ready to receive more packets
> (according to the local TCP-like congestion control)
>
> * "Route as greedily as possible, given the available capacity"

The problem here, and it is one we have faced before, is that this
degrades routing: requests take longer, which further increases the
load on the network.  Isn't that the last thing we want to do in this
situation - or perhaps we don't have a choice?
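
For reference, the selection rule being debated ("route as greedily as
possible, given the available capacity") can be sketched like this.  The
field names and the flat neighbour records are assumptions for
illustration, not Freenet's real API; locations are points on the
circular keyspace [0, 1):

```python
def choose_neighbour(neighbours, target_location):
    """Pick the neighbour closest to the target among those that are
    online and ready to receive more packets (illustrative sketch of
    the proposal under discussion)."""
    candidates = [n for n in neighbours
                  if n["online"] and n["can_send"]]
    if not candidates:
        return None  # every useful neighbour is congested

    def distance(n):
        # Circular distance in the keyspace [0, 1).
        d = abs(n["location"] - target_location)
        return min(d, 1.0 - d)

    return min(candidates, key=distance)
```

The degradation worry above shows up in the filter step: whenever the
closest neighbour is congested, the request is routed to a
second-choice node, lengthening the path.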

> 2) Restrict the amount of load a node can place on the network

I need to give these further thought.

Ian.