Re: [LARTC] htb overrate with 2.6.16
Yanko Kaneti wrote:
> On Sun, 2006-04-16 at 03:03 +0100, Andy Furniss wrote:
> > Yanko Kaneti wrote:
> > > Setting mtu 16500 for the class fixed it. But I wonder where these
> > > giants came from in the first place? The mtu of the interface is
> > > and was 1500, or so ifconfig and ip link tell me. Or is this some
> > > other mtu we are talking about?
> >
> > Hmm, I didn't expect that - maybe there is some problem with the nic
> > drivers not obeying the kernel - is there any tso offload etc. at
> > work here?
>
> Yes, and it's on by default. The interface mtu still says 1500. I've
> tried deleting and attaching the qdisc+class (without an explicit
> large mtu) with both tso on (ethtool -K eth0 tso on) and tso off; it
> doesn't seem to matter - giants appear in both cases. With a large mtu
> for the class there are no giants, with tso both on and off.
>
> Yanko

I think you need to ask the Fedora or Intel driver maintainer about
this. AIUI tso is not in vanilla kernels and the patches are quite
invasive.

Andy.

___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
Re: [LARTC] htb overrate with 2.6.16
On Sun, 2006-04-16 at 19:40 +0100, Andy Furniss wrote:
> I think you need to ask the Fedora or Intel driver maintainer about
> this. AIUI tso is not in vanilla kernels and the patches are quite
> invasive.

Well, as far as google tells me, TSO has been in the kernel and enabled
since 2.5.33, and e1000 was the first driver to support it. The FC4
2.6.16 kernel doesn't have any tso-related patches, as can be seen
here: http://cvs.fedora.redhat.com/viewcvs/rpms/kernel/FC-4/

Since my immediate problem was solved with the mtu param, I plan on
forgetting about htb and traffic control in general for the time
being :)

Thanks again.

Cheers
Yanko
Re: [LARTC] htb overrate with 2.6.16
Yanko Kaneti wrote:
> Well, as far as google tells me, TSO has been in the kernel and
> enabled since 2.5.33, and e1000 was the first driver to support it.
> The FC4 2.6.16 kernel doesn't have any tso-related patches, as can be
> seen here: http://cvs.fedora.redhat.com/viewcvs/rpms/kernel/FC-4/

Ahh, OK, I must have been thinking of another card.

Andy.
[LARTC] how to do probabilistic packet loss in kernel?
Hi,

I am using iproute2 to set up forwarding, adding routes like:

ip route add 192.168.1.3 via 192.168.1.2

I was wondering where in the kernel I can insert probabilistic packet
loss only for forwarded packets, so that for instance I can drop 5% of
all forwarded packets. I don't need help with the actual code, just
with finding where to insert it :)

Thanks!
George
Re: [LARTC] how to do probabilistic packet loss in kernel?
Greetings George,

: I am using iproute2 to set up forwarding, adding routes like:
: ip route add 192.168.1.3 via 192.168.1.2
:
: I was wondering where in the kernel I can insert probabilistic
: packet loss only for forwarded packets? So that for instance I
: can drop 5% of all forwarded packets?
:
: I don't need help with the actual code, just need help finding
: where to insert this code :)

I believe you are looking for the netem qdisc [0]. Here's just a
snippet from Stephen Hemminger's wiki page to help you imagine how you
could use netem to introduce probabilistic packet loss:

# tc qdisc add dev eth0 parent 1:3 handle 30: netem \
      delay 200ms 10ms distribution normal

Good luck,

-Martin

[0] http://linux-net.osdl.org/index.php/Netem

-- 
Martin A. Brown --- http://linux-ip.net/ --- [EMAIL PROTECTED]
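[Editor's note: Martin's snippet demonstrates netem's delay parameters;
for the 5% drop rate George asked about, netem's loss parameter is the
relevant one. A minimal sketch follows - the device name is
illustrative, and note that a qdisc attached at the root affects all
egress traffic on that interface, not only forwarded packets.]

```shell
# Drop roughly 5% of packets leaving eth0. This applies to everything
# egressing the interface, including locally generated traffic; to
# limit it to forwarded packets you would need a classful setup with
# filters steering only forwarded flows into the netem class.
tc qdisc add dev eth0 root netem loss 5%
```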
Re: [LARTC] how to do probabilistic packet loss in kernel?
Thanks Martin and Roman. I will definitely look into this.

I have a question for you though... in terms of adding loss like this,
this will not interact with the hardware-layer rate control of wireless
cards, right? For instance, dropping from 54Mbit to 11Mbit on an
802.11g card when a certain amount of loss begins occurring.

Martin A. Brown wrote:
> I believe you are looking for the netem qdisc [0]. Here's just a
> snippet from Stephen Hemminger's wiki page to help you imagine how
> you could use netem to introduce probabilistic packet loss:
>
> # tc qdisc add dev eth0 parent 1:3 handle 30: netem \
>       delay 200ms 10ms distribution normal
>
> [0] http://linux-net.osdl.org/index.php/Netem
Re: [LARTC] how to do probabilistic packet loss in kernel?
Hello again,

: I have a question for you though... in terms of adding loss like
: this, this will not interact with hardware layer rate control of
: wireless cards right?
:
: For instance... dropping from 54Mbit to 11Mbit on an 802.11g card
: when a certain amount of loss begins occurring

Outgoing packets pass through the traffic control system (the netem
qdisc, in this case) just before being dequeued to the driver. The
actual behaviour of the kernel here depends on a sanely coded driver. I
assume a sanely coded driver, in which case this is what you should see
when the hardware cannot accept a packet for transmission:

  0. netem (or any other qdisc, for that matter) will operate as
     configured (inducing loss, delaying, reordering or prioritizing
     your outgoing packets)
  1. eventually qdisc_restart() will call the hardware driver
  2A. [if success] the packet is transmitted
  2B. [if failure] the hardware driver cannot handle the packet for
      some reason (TX ring full, link failure or other problem); it
      will propagate an error condition to the higher layer
  3. qdisc_restart(), receiving such an error, will cause the packet
     to be requeued [0]
  4. goto step 1

My source for this answer documents kernel 2.4, although the code in
the networking stack seems to be fundamentally the same in this case.
See the DataTAG report entitled "A Map of the Networking Code in Linux
Kernel 2.4.20" [1]. On page 19, Section 4.3.1, the authors refer to
the file net/sched/sch_generic.c, which contains qdisc_restart().

So, strictly speaking, there should be no interaction between your use
of the netem qdisc and lower-layer rate control (lossy transmissions
and any compensatory mechanisms between radios).

Note! Both of the sources for my answer are old documentation (plus, of
course, ongoing general knowledge of the traffic control system). I
believe that the kernel still operates in this fashion, but would
absolutely welcome any corrections from those who are more intimately
familiar with the kernel and hardware perspective.

Good luck, George,

-Martin

[0] http://qos.ittc.ku.edu/howto/node11.html
    http://qos.ittc.ku.edu/howto/
[1] http://datatag.web.cern.ch/datatag/papers/tr-datatag-2004-1.pdf

-- 
Martin A. Brown --- http://linux-ip.net/ --- [EMAIL PROTECTED]
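[Editor's note: the driver-requeue behaviour described in the steps
above can be observed from userspace without reading kernel source; tc
reports per-qdisc counters, including a requeues field. The device name
below is illustrative.]

```shell
# Show per-qdisc statistics for an interface. The "requeues" counter
# reflects packets the driver handed back to the queueing layer (step
# 3 above); "dropped" includes packets netem discarded deliberately.
tc -s qdisc show dev eth0
```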
Re: [LARTC] how to do probabilistic packet loss in kernel?
Okay, I need a little help. I have a qdisc module, q_xcp.so, that was
compiled for tc version 2.4.7, so I need version 2.4.7 for q_xcp.so to
work properly. I tried using q_xcp.so with the newest version of
iproute/tc, but it seems that since version 2.4.7 the qdisc_util
structure has changed somewhat, and I keep getting segfaults trying to
use q_xcp.

For instance, in the newer versions of tc, I see qdisc structures
defined like:

struct qdisc_util netem_qdisc_util = { };

and in the same newer versions, in tc.c's get_qdisc_kind(), before it
calls q = dlsym(dlh, buf); it sets buf to "%s_qdisc_util". However, in
the old version of tc that q_xcp was made for, it sets buf to
"%s_util".

So I guess my main question is: do you know of a version of iproute/tc
that uses this old structure and has netem support? I tried copying the
q_netem.c code from the newer version of iproute and compiling it for
an old version of iproute, but it's just not working out nicely at all.
I'd greatly appreciate any help.

Thanks!
George

Martin A. Brown wrote:
> Outgoing packets pass through the traffic control system (the netem
> qdisc, in this case) just before being dequeued to the driver. [...]
> So, strictly speaking, there should be no interaction between your
> use of the netem qdisc and lower-layer rate control.
Re: [LARTC] how to do probabilistic packet loss in kernel?
Hey Tim,

Thanks for your constant help. I managed to port q_xcp.c to work in
modern versions of tc. I am using the 2.4.32 kernel and compiled netem
support into the kernel. I then compiled iproute2-2.6.11, including the
tc that came with it, which installed q_xcp.so and q_netem.so to
/usr/lib/tc.

I then try:

tc qdisc change dev ath0 root netem loss .1%

and get:

RTNETLINK answers: Invalid argument

I also tried:

tc qdisc change dev eth0 root netem duplicate 1%

and get the same exact "Invalid argument" response. Any ideas here?

Thanks!
George

Tim Shepard wrote:
> It sounds like you should simply port your research q_xcp.c to work
> in the more modern versions of tc. That port should be
> straightforward once you understand how the netem_qdisc_util works.
> To make debugging easier, you can always link q_xcp.o into the binary
> at build time by adding the appropriate thing to the makefile, and
> compile the whole thing with -g so that it is easier to debug.
>
> -Tim Shepard
> [EMAIL PROTECTED]
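[Editor's note: the thread does not record a resolution. One possible
cause of the "Invalid argument" error above - an assumption, not a
confirmed diagnosis - is that tc qdisc change only modifies a qdisc
that already exists, so if no netem qdisc has yet been attached at the
root it must first be created with add. A mismatch between the netlink
message format expected by a backported 2.4 netem and that sent by a
2.6-era iproute2 could also produce EINVAL. A sketch, using the device
name from the message:]

```shell
# "change" fails if nothing is attached at the root yet; create the
# netem qdisc first.
tc qdisc add dev ath0 root netem loss 0.1%

# Once it exists, parameters can be adjusted in place.
tc qdisc change dev ath0 root netem loss 5% duplicate 1%
```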