On Apr 8, 2009, at 10:49 PM, Bruce Lowekamp wrote:

> 802.11 is a bad example.  It needs reliability because its losses
> aren't well-correlated with congestion.

I admit that I can't follow this logic. The reason for reliability in 802.11 has nothing to do with congestion; it is there because wireless frame losses are far too frequent to expose to the layers above. If 802.11 did not have reliability built in, end-to-end throughput at the network layer would be horrible. The same principle applies here as well.
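To put rough numbers behind that, here's a back-of-the-envelope sketch (the loss rate, hop count, and retry limit are my own illustrative assumptions, not measurements): with per-hop frame loss p, up to k local retransmissions turn a per-hop success of 1 - p into 1 - p^(k+1).

    # Illustrative sketch: how link-layer retransmission (802.11-style ARQ)
    # changes end-to-end delivery probability.  All numbers are assumed.

    def e2e_delivery(p_loss: float, hops: int, retries: int) -> float:
        """Probability a packet survives every hop, given up to
        `retries` local retransmissions per hop."""
        per_hop_success = 1.0 - p_loss ** (retries + 1)
        return per_hop_success ** hops

    p, h = 0.20, 5                        # assumed loss rate and path length
    print(e2e_delivery(p, h, retries=0))  # no link ARQ:     ~0.33
    print(e2e_delivery(p, h, retries=3))  # 3 local retries: ~0.99

Even a handful of local retries moves end-to-end delivery from roughly a coin flip to near-certainty, which is exactly why the link layer retransmits.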




> I'm not terribly concerned about TCP having reliability when used in
> this context, although the stream-oriented nature of TCP is
> unfortunate.  I don't want to go to a lot of effort adding reliability
> to a UDP-based protocol, because I don't think it will add much (and
> could hurt if done wrong).

Can you support your arguments with references to the literature, or at least avoid contradicting the material of a standard introductory networking course?



> Even if the overlay link protocol does have reliability, you still
> need to shed load with finite-length queues (or push back) to avoid
> congestion collapse/infinite queues.  I'm curious what literature
> you're thinking of that argues otherwise.

Yes, you do need (and will, by nature, have) finite buffers at intermediate hops, but they will drop P2P-layer messages very rarely, as long as each buffer can hold at least one full message's worth of data.
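A minimal sketch of what I mean, in Python (the class and sizes are hypothetical, purely to illustrate dropping at message granularity rather than byte granularity):

    from collections import deque

    class MessageQueue:
        """Finite per-hop buffer that admits or drops whole messages."""

        def __init__(self, capacity_bytes: int):
            self.capacity = capacity_bytes    # finite, by nature
            self.used = 0
            self.queue = deque()

        def enqueue(self, message: bytes) -> bool:
            # Shed load at message granularity: either the whole
            # message fits, or it is dropped in its entirety.
            if self.used + len(message) > self.capacity:
                return False
            self.queue.append(message)
            self.used += len(message)
            return True

        def dequeue(self) -> bytes:
            message = self.queue.popleft()
            self.used -= len(message)
            return message

As long as capacity_bytes is at least one maximum-size message, nothing is ever partially buffered, and drops occur only under sustained overload, which is the load-shedding you describe.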




> Bruce

_______________________________________________
P2PSIP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/p2psip
