On Apr 8, 2009, at 6:21 AM, Bruce Lowekamp wrote:

One of the (if not THE) fundamental decisions that makes the Internet
work is that reliability is end-to-end, not hop-by-hop.  Hops are
allowed to drop traffic due to congestion, and that is taken as
implicit feedback that there is congestion in the network.

You may want to re-read the end-to-end argument paper. You do need end-to-end reliability to ensure reliability (since nodes in the middle can do bad things to packets), but performance often dictates "hop-by-hop", particularly if the "hop" is essentially the whole Internet. Since you like to use the notion of a link layer in P2P: this is the reason why 802.11 has link-layer reliability. By your argument, 802.11 should dispense with that.
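
To make that concrete, a hop with bounded retransmission looks roughly like the minimal Python sketch below. The retry limit, timeout, and function name are invented for illustration, not taken from 802.11 or any draft; the point is only that a bounded per-hop retry raises delivery probability without claiming to replace end-to-end reliability.

    import socket

    MAX_RETRIES = 4      # cf. 802.11's retry limit; the value here is invented
    ACK_TIMEOUT = 0.05   # seconds; invented

    def send_on_hop(sock, frame, peer):
        """Try to deliver one frame across one hop, retrying a bounded
        number of times. This improves the hop's delivery probability,
        but an upper layer still needs end-to-end reliability for
        correctness."""
        sock.settimeout(ACK_TIMEOUT)
        for _ in range(1 + MAX_RETRIES):
            sock.sendto(frame, peer)
            try:
                ack, _ = sock.recvfrom(64)
                if ack == b"ACK":
                    return True   # delivered on this hop
            except socket.timeout:
                pass              # lost or late; retransmit
        return False              # give up; end-to-end recovery takes over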

The proposal I sent out drops traffic in two places.  First, each peer
maintains a limited queue for fragments and drops excess fragments
(the literal analog of drops due to congestion in routers).  Second,
the overlay link protocol is only semi-reliable (which I debated
having at all), with the assumption that on the Internet, loss is due
to congestion.  So we have two congestion signals to the peer, queue
length and link protocol drops.
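
For concreteness, the first of those drop points (the bounded fragment queue) might be sketched as below. This is my own minimal Python illustration; the class name, queue limit, and drop counter are invented, not taken from the proposal. Queue depth here is the first of the two congestion signals, and the drops mirror router drop-tail behavior.

    from collections import deque

    class FragmentQueue:
        """Drop-tail queue for overlay fragments. Queue depth is one
        congestion signal; drops here are the literal analog of router
        drops under congestion."""

        def __init__(self, limit=64):   # limit is invented
            self.limit = limit
            self.frags = deque()
            self.drops = 0

        def enqueue(self, frag):
            if len(self.frags) >= self.limit:
                self.drops += 1         # shed load rather than buffer it
                return False
            self.frags.append(frag)
            return True

        def depth(self):
            return len(self.frags)      # congestion signal: queue length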

Unfortunately, that assumption is only somewhat true, particularly with wireless links, where losses are often caused by corruption or interference rather than congestion.

Congestion needs to cause load shedding in a network (overlay or not)
or else it will collapse.  Arguing that we should do extra work for
hop-by-hop reliability makes no sense to me.
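
For illustration, the kind of load shedding meant here is roughly an AIMD reaction to the two congestion signals. The sketch below is mine, in Python, with invented names and constants; nothing in it comes from the draft.

    class AimdSender:
        """AIMD-style reaction to the two congestion signals above
        (queue length and link-protocol drops)."""

        def __init__(self, rate=100.0):
            self.rate = rate                 # fragments per second; invented

        def on_ack(self):
            self.rate += 1.0                 # additive increase while clear

        def on_congestion_signal(self):
            self.rate = max(1.0, self.rate / 2.0)   # multiplicative decrease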

Please consult the literature; this topic has received more than its share of real measurement work, albeit ten or twenty years ago, when it was of greater practical interest.

Bruce

