On Sat, 8 Aug 2009, Lars Eggert wrote:
> If we assume that FIFO queuing will be used, yes, the buffer size has a
> large, direct impact on the observable behavior. Not so for various AQM
> schemes (RED, etc.). Since my personal preference would be to recommend
> something smarter than FIFO, I don't see buffer size as very critical
> then. (Now, how to correctly parameterize an AQM scheme for a given
> link, *that* would be critical...)
Hi, in some discussions that came up during the IETF I mentioned the scheme
I use at home, so just to give an example of a heuristic that I have
personally found to work well in Cisco MQC (I'm not proposing anything
concrete, just giving an example):
class-map match-all small-packets
  match packet length max 200
class-map match-all prio-data
  match access-group name outgoing-prio-data

policy-map out-child
  class prio-data
    bandwidth 300
  class small-packets
    bandwidth 300
  class class-default
    fair-queue 1024
    random-detect
    random-detect ecn

policy-map out-parent
  class class-default
    shape average 700000
    service-policy out-child

ip access-list extended outgoing-prio-data
  permit tcp any any eq 22
  permit udp any any eq domain
  permit icmp any any
  permit ospf any any
  permit tcp any any eq telnet
  permit tcp any any eq www
  permit 41 any any
So basically, I have 1 Mbit/s upstream on my cable modem, and since I have
no influence on the buffering there (and the shaping I'm doing is on an
ipip tunnel), I make sure to shape to well below my upstream bandwidth. I
then guarantee a minimum bandwidth to outgoing packets for a few
services/ports on the Internet (ssh/dns/telnet/http), and a certain amount
of bandwidth to smaller packets (to catch the typical ACKs and the small
packets seen in interactive applications such as VoIP). Everything else
goes into fair-queue with WRED and ECN (yes, I actually use ECN because I
wanted to test it :P).
This queuing scheme is actually very helpful even on faster links. If you
have a T3 link (45 Mbit/s) and do FIFO on it with good TCP windows on the
end hosts, a single TCP flow can easily push the buffer to 100+ ms, so you
want fair-queue to do something intelligent with the different flows and
keep your SSH traffic from sitting behind 100 ms of file transfer. Thus I
think any proposal should recommend/inform about this even for 100 Mbit/s
uplinks and higher: as soon as you have big buffers you want some
intelligent queuing, whereas the higher the speed and the smaller the
buffers, the less intelligence is needed.
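(To put a rough number on the T3 example: 100 ms of standing queue at 45
Mbit/s is over half a megabyte of buffered data, which is why one bulk flow
can so easily dominate a FIFO. Back of the envelope:

```shell
# Queue depth in bytes = rate (bit/s) * delay (s) / 8
# 45 Mbit/s with 100 ms of queuing delay:
echo $(( 45000000 / 8 / 10 ))   # 562500 bytes, roughly 550 KB
```

so every other flow's packets wait behind ~550 KB of someone else's data.)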
I tried replicating this with "tc" on Linux a few years back but gave up
after a few hours because I just couldn't make it work, so if anyone is a
master at that, I'd be very grateful for help translating this into a "tc"
config :)
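For what it's worth, a very rough and untested sketch of what the policy
might look like with HTB + SFQ in "tc". The interface name, handles, and
rates here are assumptions, not a working config, and two pieces are
missing: the "packet length max 200" class (u32 can match the IP
total-length field with mask tricks, but there is no clean equivalent) and
WRED+ECN on the default class:

```shell
# Untested sketch approximating the MQC policy above with HTB + SFQ.
# The interface (eth0), handles, and rates are assumptions.
DEV=eth0

# Parent shaper: ~700 kbit/s total; unclassified traffic goes to 1:30.
tc qdisc add dev $DEV root handle 1: htb default 30
tc class add dev $DEV parent 1: classid 1:1 htb rate 700kbit ceil 700kbit

# prio-data: guaranteed 300 kbit/s, may borrow up to the full 700 kbit/s.
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 300kbit ceil 700kbit
# class-default: gets the rest, fair-queued with SFQ.
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 100kbit ceil 700kbit
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10

# Classify the prio-data ports/protocols (same idea as the ACL above).
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dport 22 0xffff flowid 1:10
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dport 53 0xffff flowid 1:10
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dport 23 0xffff flowid 1:10
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dport 80 0xffff flowid 1:10
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip protocol 1 0xff flowid 1:10   # icmp
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip protocol 89 0xff flowid 1:10  # ospf
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip protocol 41 0xff flowid 1:10  # ipv6-in-ipv4
```

tc's "red ... ecn" exists, but stacking RED under SFQ to get per-flow
fairness plus ECN marking in one leaf is not straightforward, so the sketch
settles for plain SFQ on the default class.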
--
Mikael Abrahamsson email: [email protected]