On 2/27/16 9:53 AM, Polina Goltsman wrote:
> On 02/27/2016 06:21 PM, Dave Täht wrote:
>> On 2/26/16 6:17 AM, Rasool Al-Saadi wrote:
>>> Dear all,
>>>
>>> I would like to announce that we (myself and Grenville Armitage)
>>> released Dummynet AQM v0.1, which is an independent implementation of
>>> CoDel and FQ-CoDel for FreeBSD's ipfw/dummynet framework, based on
>>> the IETF CoDel [1] and FQ-CoDel [2] Internet-Drafts.
>>> We prepared patches for FreeBSD 11-CURRENT-r295345 and FreeBSD
>>> 10.x-RELEASE (10.0, 10.1, 10.2), and a technical report on our
>>> implementation.
>>>
>>> Patches and documentation can be found at:
>>> http://caia.swin.edu.au/freebsd/aqm
>>>
>>> Technical report:
>>> http://caia.swin.edu.au/reports/160226A/CAIA-TR-160226A.pdf
>>
>> In browsing this, it appears that only shaped rates were tested (?). I am
>> curious what native performance (10, 100, 1Gbit) looked like. I think
>> FreeBSD lacks a BQL-like mechanism to control the driver queues; on the
>> other hand, FreeBSD did not go as nuts with offloads as Linux did. Is
>> this code generally applicable (to things like pfSense)?
>>
>> Aside from that, it looks pretty good. I am also curious what caused
>> the offset difference in the sawtooth pattern between the Linux and BSD
>> implementations (as in fig 2). Different initcwnd? ssthresh? Those don't
>> seem to be it - Linux Reno vs. BSD Reno?
>
> I can't distinguish between the maximum congestion windows, but Linux's RTT
> (especially the minimum one) is smaller in all three figures (2, 3, and 4),
> and packet losses for FreeBSD are a little less frequent. So, assuming that
> the AQM behaves the same, there is another queue somewhere.
Two other variables worth tweaking on a Linux server and client (besides
the kernel version) are sch_fq and tcp_limit_output_bytes. On bare metal
the latter was originally set to 4k or so, which proved too small to drive
Linux wifi's existing drivers on certain tests, so it got bumped to 128k;
then the Xen folk noticed that regressed their throughput by about 30%, so
it got bumped again to a 256k default. I liked it at 4k. A lot. :)

It is plausible that tweaking tcp_limit_output_bytes will change your
results.

> What is strange is that Linux's goodput is higher in figures 2 and 3
> despite lower queueing delay. On the other hand, for CUBIC both delay and
> goodput are slightly smaller. Is that advanced loss recovery in Linux?
>
> Also, is it normal that NewReno resets its congestion window (to initcwnd)
> after each packet loss, i.e., why does it look like Tahoe? Does CoDel go
> into dropping more than one packet per RTT if the RTT is much smaller than
> an interval?
>
>>> [1] "Controlled Delay Active Queue Management",
>>> https://tools.ietf.org/html/draft-ietf-aqm-codel-02
>>> [2] "FlowQueue-Codel",
>>> https://tools.ietf.org/html/draft-ietf-aqm-fq-codel-04
>>>
>>> Regards,
>>> Rasool Al-Saadi
>>>
>>> _______________________________________________
>>> aqm mailing list
>>> [email protected]
>>> https://www.ietf.org/mailman/listinfo/aqm
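For anyone wanting to reproduce this, the two Linux tunables discussed above can be adjusted roughly as follows (a sketch only: the 262144 value is the 256k default mentioned above, not a recommendation, and eth0 is a placeholder interface name):

```shell
# Cap the per-socket TCP small-queues budget (bytes).
# Defaults have moved 4k -> 128k -> 256k across kernel versions.
sysctl -w net.ipv4.tcp_limit_output_bytes=262144

# Make sch_fq the default qdisc for new interfaces...
sysctl -w net.core.default_qdisc=fq

# ...or attach it directly to a specific interface (eth0 is a placeholder).
tc qdisc replace dev eth0 root fq
```

Both sysctl settings can also be made persistent via /etc/sysctl.conf; remember they affect the sending host, so they need to be set on the server and client, not the middlebox.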

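On the CoDel question above: per the referenced draft, once CoDel enters dropping state it schedules the next drop at interval/sqrt(count) after the previous one, so the drop spacing shrinks as the count of consecutive drops grows - and when the RTT is much smaller than the interval, that can indeed mean more than one drop per RTT. A minimal sketch of that control law (variable names are mine, not from any implementation):

```python
import math

INTERVAL = 0.100  # seconds; CoDel's default interval from the draft


def next_drop_time(now, count, interval=INTERVAL):
    """Control law from the CoDel draft: after the count-th consecutive
    drop, the next drop is scheduled interval/sqrt(count) later, so the
    drop rate rises while delay stays above target."""
    return now + interval / math.sqrt(count)


# Illustration: spacing between successive drops shrinks as count grows.
spacings = [INTERVAL / math.sqrt(c) for c in range(1, 5)]
# count=1 gives the full interval; count=4 gives half of it.
```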