Bruce Lowekamp wrote:
> 802.11 is a bad example.  It needs reliability because its losses
> aren't well-correlated with congestion.

I do not expect network congestion to be the main cause of packet loss
in distributed overlays either. Factors such as peer failures and link
instability in wireless/ad-hoc networks will probably have a greater
impact, no?

> Even if the overlay link protocol does have reliability, you still
> need to shed load with finite-length queues (or push back) to avoid
> congestion collapse/infinite queues.  I'm curious what literature
> you're thinking of that argues otherwise.

The fact that hop-by-hop reliability cannot replace end-to-end
reliability -- and I don't think that's what Henning was arguing for --
doesn't mean the reverse always holds either. End-to-end alone happens
to be a good approximation in some scenarios, e.g. when the hops are
very stable hardware routers and the links are wires, but in the
general case it is an oversimplified assumption.
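To make the two points above concrete -- per-hop retransmission for
non-congestion losses, combined with a finite queue that sheds load --
here is a toy sketch. All class names, parameters, and defaults are my
own illustration, not anything from a draft or implementation:

```python
from collections import deque

class OverlayHop:
    """Toy model of one overlay hop: bounded per-hop retransmission
    recovers from non-congestion losses, while a finite queue sheds
    load instead of growing without bound. Illustrative only."""

    def __init__(self, max_queue=8, max_retries=3):
        self.queue = deque()
        self.max_queue = max_queue
        self.max_retries = max_retries
        self.dropped = 0

    def enqueue(self, packet):
        # Finite queue: refuse new packets rather than queueing
        # indefinitely -- the "shed load" behaviour Bruce describes.
        if len(self.queue) >= self.max_queue:
            self.dropped += 1
            return False
        self.queue.append(packet)
        return True

    def forward(self, send):
        # Per-hop reliability: retry the link a bounded number of
        # times, then give up, so a dead peer cannot stall the queue
        # forever. End-to-end recovery must still exist above this.
        delivered = []
        while self.queue:
            packet = self.queue.popleft()
            for _ in range(self.max_retries):
                if send(packet):
                    delivered.append(packet)
                    break
        return delivered
```

The point of the sketch is that the two mechanisms are orthogonal: the
retry loop masks losses that are uncorrelated with congestion, while
the queue bound is what prevents congestion collapse.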

-- 
Ciao,
Enrico

_______________________________________________
P2PSIP mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/p2psip