On Thu, Apr 9, 2009 at 8:20 AM, Henning Schulzrinne <[email protected]> wrote:
>
> On Apr 8, 2009, at 10:49 PM, Bruce Lowekamp wrote:
>
>> 802.11 is a bad example. It needs reliability because its losses
>> aren't well-correlated with congestion.
>
> I admit that I can't follow this logic. The reason for reliability in
> 802.11 has nothing to do with congestion. If 802.11 did not have
> reliability built-in, end-to-end throughput at the network layer would
> be horrible. The same principle applies here as well.
I'm not sure how we are talking past each other here. The basic
principle of how TCP works is that loss == congestion. 802.11 needs a
reliability protocol because it doesn't have this correlation, i.e.,
loss can be caused by signal problems that are not related to
congestion.

The principle I was trying to apply a few messages ago was that if
loss in the overlay link layer is caused by congestion, and end-to-end
protocols react to loss by backing off, we have the same property as
TCP. Your suggestion that 802.11 needing reliability implies an
overlay link protocol needs it too only makes sense if you believe
that the overlay link protocol experiences losses that are not due to
congestion. I don't believe that is likely to be true, so 802.11 is a
bad example.

>> I'm not terribly concerned about TCP having reliability when used in
>> this context, although the stream-oriented nature of TCP is
>> unfortunate. I don't want to go to a lot of effort adding reliability
>> to a UDP-based protocol, because I don't think it will add much (and
>> could hurt if done wrong).
>
> Can you support your arguments with references to the literature or at
> least not contradict standard intro to networking course materials?

I have no idea what you're referring to here. But since you have
claimed I am misapplying/unaware of the end-to-end argument, etc., I
will quote what I think is the most relevant part of Saltzer's paper
here:

-----------------
Clearly, some effort at the lower levels to improve network
reliability can have a significant effect on application performance.
But the key idea here is that the lower levels need not provide
"perfect" reliability. Thus the amount of effort to put into
reliability measures within the data communication system is seen to
be an engineering tradeoff based on performance, rather than a
requirement for correctness. Note that performance has several aspects
here.
-----------------

Obviously the paper goes on to discuss more aspects of the corner
cases and performance tradeoffs, but my initial suggestion that what
we should build is a semi-reliable protocol is entirely in line with
this, as far as I can tell.
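To make "semi-reliable" concrete, here is a rough sketch in Python of
the kind of behavior I have in mind. The names and parameters
(MAX_TRIES, BASE_RTO, the 4-byte ack format) are made up for
illustration, not taken from any draft:

  import socket

  MAX_TRIES = 4    # bounded effort: retransmit a few times, then give up
  BASE_RTO = 0.5   # initial retransmission timeout, in seconds

  def send_semi_reliable(sock, dest, seq, payload):
      """Send one message with limited retransmission over UDP."""
      rto = BASE_RTO
      for attempt in range(MAX_TRIES):
          sock.sendto(seq.to_bytes(4, "big") + payload, dest)
          sock.settimeout(rto)
          try:
              ack, _ = sock.recvfrom(4)
              if int.from_bytes(ack, "big") == seq:
                  return True        # acked; delivered as far as we know
          except socket.timeout:
              pass                   # treat the loss as a congestion signal
          rto *= 2                   # back off, TCP-style, before retrying
      return False                   # give up; the layer above can recover

The point of the bounded retry count and the backoff is exactly
Saltzer's engineering tradeoff: enough link-level effort to keep
end-to-end throughput reasonable, without promising perfect
reliability, and without fighting the end-to-end protocols that are
also reacting to loss.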
Bruce