Oh, thank you! One of the things that irked me about my htb+netem delay/loss setup was that the need to also feed ARP and ND through it resulted in major startup transients that wouldn't exist in the real world. So now I understand how to feed those protocols through a separate, unshaped htb queue. Thank you, thank you, thank you!
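Something like the following is what I have in mind for next time. This is an untested sketch adapting Eric's filter below: eth4 and the 1: handle stand in for whatever the shaping script actually uses, and rather than matching the individual ND message types I'm just punting all of ICMPv6 around the shaper, which seems fine for a test rig:

# ARP skips the shaped classes entirely via htb's direct queue:
tc filter add dev eth4 parent 1: protocol arp u32 \
   match u32 0 0 flowid 1:0

# IPv6 ND is ICMPv6 (next-header 58); send all of it to the direct queue:
tc filter add dev eth4 parent 1: protocol ipv6 u32 \
   match ip6 protocol 58 0xff flowid 1:0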
On Sun, Jun 16, 2013 at 8:54 PM, Eric Dumazet <[email protected]> wrote:
> This 'direct' queue is only used if one of your filters explicitly
> targets the '0' classid of the htb qdisc, as in the script below.
>
> (Note the 'direct_packets_stat 1145408' in the tc qdisc output.)
>
> lpq83:~# cat fq_codel0.sh
> DEV=eth4
> CEIL="ceil 10000Mbit"
> EST="est 1sec 4sec"
>
> tc qdisc del dev $DEV root 2>/dev/null
>
> tc qdisc add dev $DEV root handle 1: est 1sec 8sec htb default 1
> tc class add dev $DEV parent 1: classid 1:1 $EST htb rate 1Gbit $CEIL mtu 1500 quantum 1514
> tc class add dev $DEV parent 1: classid 1:2 $EST htb rate 1Gbit $CEIL mtu 1500 quantum 1514
>
> # access to the direct queue: the following traffic doesn't need the
> # overhead of an htb class, as we do not want it to be shaped at all.
> tc filter add dev $DEV parent 1: protocol ip u32 \
>     match ip dst 7.7.7.84 flowid 1:0
>
> lpq83:~# ./netperf -H 7.7.7.84; tc -s qd sh dev eth4
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.7.84 () port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.00    18390.46
> qdisc htb 1: root refcnt 257 r2q 10 default 1 direct_packets_stat 1145408
>  Sent 67913929786 bytes 46865570 pkt (dropped 0, overlimits 0 requeues 11)
>  rate 13754Mbit 1186190pps backlog 0b 0p requeues 11
>
> On Sun, Jun 16, 2013 at 2:32 PM, Dave Taht <[email protected]> wrote:
>> In looking over the htb mess, I noticed that htb has an internal pfifo
>> queue, which is controllable via new stuff in net-next.
>>
>> http://snapon.lab.bufferbloat.net/~cero2/htb_mess/11256-htb-add-HTB_DIRECT_QLEN-attribute.patch
>>
>> I have no idea what, if anything, it affects. I was under the
>> impression, until now, that htb buffered up one, and only one, packet
>> no matter how many classes you have. I still retain that impression...
>>
>> Ah well... fiddling with setting txqueuelen to 2 on the device on
>> 3.8.13-7 before running the aqm script might have an interesting
>> result with simple.qos, if anyone wants to try it.
>>
>> Such a backlog of stuff to sort through!
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt:
>> http://www.teklibre.com/cerowrt/subscribe.html

--
Dave Täht

Fixing bufferbloat with cerowrt:
http://www.teklibre.com/cerowrt/subscribe.html
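P.S. If I'm reading that HTB_DIRECT_QLEN patch right, the depth of the direct queue (which today defaults to the device's txqueuelen) becomes settable per qdisc. An untested sketch, assuming a kernel and iproute2 new enough to carry the new attribute:

tc qdisc add dev eth4 root handle 1: htb default 1 direct_qlen 2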
