End-to-end queueing delay (the aggregate of delays in all queues except those 
in the endpoints themselves) should essentially never exceed 200 msec in the 
worst case ("never" here meaning in at least 99.9% of any hour-long period), 
and if at all possible should never exceed 100 msec, in networks capable of 
carrying more than 1 Mbit/sec to and from endpoints (what I would call 
high-bitrate nets, the stage up from "dialup" networks).
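To make the arithmetic behind those budgets concrete, here is a minimal sketch (the 1 Mbit/sec rate is just the threshold mentioned above, and the function name and parameters are my own, not anything specified here):

```python
# Illustrative only: converts a queueing-delay budget into the number of
# bytes a single bottleneck queue can hold without exceeding that budget,
# since delay = queued_bytes * 8 / link_rate.

def max_queue_bytes(delay_budget_msec: int, rate_bps: int = 1_000_000) -> float:
    """Bytes of queued data compatible with the given delay budget."""
    return delay_budget_msec * rate_bps / 8000

print(max_queue_bytes(100))  # 100 msec budget at 1 Mbit/sec -> 12500.0 bytes
print(max_queue_bytes(200))  # 200 msec budget at 1 Mbit/sec -> 25000.0 bytes
```

At higher link rates the byte budget grows proportionally, but the delay targets themselves stay fixed.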
 
There are two reasons for this:
 
1) Round-trip "RPC" response times for interactive applications become 
unreasonable above 100 msec.
 
2) Flow control at the source that stanches the entry of data into the network 
(whether by switching media codecs or just by pushing back on the application's 
rate, and whether it is driven by the receiver or the sender, both of which are 
common) must respond quickly, lest more packets be dumped into the network and 
sustain the congestion.
 
Fairness is a different axis, but I strongly suggest that there are other ways 
to achieve approximate fairness of any desired type without building up queues 
in routers.  It's perfectly reasonable to remember (in all the memory that 
*would otherwise have caused trouble by holding packets rather than discarding 
them*) the source/dest information and sizes of recently processed (forwarded 
or discarded) packets.  This information takes less space than the packets 
themselves, of course!  It can even be further compressed by "coding or 
hashing" techniques.  Such live data about *recent behavior* is all you need 
for fairness in balancing signaling back to the source.
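As a rough illustration of the kind of bookkeeping that paragraph describes (every name, parameter, and constant below is my own assumption for the sketch, not anything specified in this thread), a router could keep a small, exponentially decaying table of recently processed packet sizes, keyed by a hash of the source/destination pair, and consult it when deciding which flow to signal first:

```python
import hashlib

class RecentFlowMemory:
    """Sketch: per-flow memory of *recent behavior*, kept in the RAM that
    would otherwise buffer packets. Entries decay exponentially, so only
    recent traffic matters. The half-life and hash width are illustrative."""

    def __init__(self, half_life_sec: float = 0.1):
        self.half_life = half_life_sec
        # key -> (decayed byte count, timestamp of last update)
        self.table: dict[int, tuple[float, float]] = {}

    def _key(self, src: str, dst: str) -> int:
        # Compressed source/dest identity via hashing, as the text suggests.
        digest = hashlib.blake2b(f"{src}->{dst}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big")

    def record(self, src: str, dst: str, size_bytes: int, now: float) -> None:
        """Note a forwarded *or discarded* packet of this flow."""
        k = self._key(src, dst)
        acc, last = self.table.get(k, (0.0, now))
        acc *= 0.5 ** ((now - last) / self.half_life)  # age out old history
        self.table[k] = (acc + size_bytes, now)

    def heaviest(self, now: float):
        """Flow key with the most recent traffic: a natural first candidate
        for a congestion signal, for approximate fairness."""
        def decayed(item):
            acc, last = item[1]
            return acc * 0.5 ** ((now - last) / self.half_life)
        return max(self.table.items(), key=decayed)[0] if self.table else None
```

The table costs a handful of bytes per active flow rather than a packet's worth of buffer, which is the point: the memory holds summaries of recent behavior, not the packets themselves.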
 
If all of the brainpower on this list cannot take that previous paragraph and 
expand it to implement the solution I am talking about, I would be happy (at my 
consulting rates, which are quite high) to write the code for you.  But I have 
a day job that involves low-level scheduling and queueing work in a different 
domain of application.
 
Can we please get rid of the nonsense that implies that the only information 
one can have at a router/switch is the set of packets that are clogging its 
outbound queues?  Study some computer algorithms that provide memory of recent 
history....  and please, please, please stop insisting that intra-network 
queues should build up for any reason whatsoever other than instantaneous 
transient burstiness of convergent traffic.  They should persist as briefly as 
possible, and not be sustained for some kind of "optimum" throughput that can 
be gained by reframing the problem.
 
On Thursday, January 2, 2014 1:31am, "Fred Baker (fred)" <[email protected]> said:



> On Dec 15, 2013, at 10:56 AM, Curtis Villamizar <[email protected]> wrote:
> 
> > So briefly, my answer is: as a WG, I don't think we want to go there.
> > If we do go there at all, then we should define "good AQM" in terms of
> > achieving a "good" tradeoff between fairness, bulk transfer goodput,
> > and bounded delay.  IMHO sometimes vague is better.
> 
> As you may have worked out from my previous comments in these threads, I agree
> with you. I don't think this can be nailed down in a universal sense. What can
> be described is the result in the network, in that delays build up that
> persist, as opposed to coming and going, and as a result applications don't
> work as well as they might - and at that point, it is appropriate for the
> network to inform the transport.
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
