On Mon, Feb 24, 2014 at 10:51 AM, Dave Taht <[email protected]> wrote:
> On Mon, Feb 24, 2014 at 9:36 AM, Rich Brown <[email protected]> wrote:
>>
>> CeroWrt 3.10.28-14 is doing a good job of keeping latency low. But... it has
>> two other effects:
>>
>> - I don't get the full "7 mbps down, 768 kbps up" as touted by my DSL
>> provider (Fairpoint). In fact, CeroWrt struggles to get above 6.0/0.6 mbps.
>
> 0) try the tcp_upload or tcp_download or tcp_bidir tests to get
> results closer to what your provider claims.
>
> Since your plots are pretty sane, you can get cleaner ones by using
> the 'totals' plot type and/or comparing multiple runs to get a cdf:
>
> -p totals or -p icmp (there are a few different ones; see --list-plots)
>
> -i somerun.json.gz -i somerun2.json.gz
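To make that concrete, the replotting invocations look something like the
following. The filenames are placeholders for your saved runs, and the exact
cdf plot name can vary by test; --list-plots rrul prints what's available.

    # cleaner single-run view of an existing result
    netperf-wrapper -i rrul-sqm-on.json.gz -p totals -o sqm-on-totals.png

    # overlay two runs as a cdf to compare their latency distributions
    netperf-wrapper -i rrul-sqm-on.json.gz -i rrul-sqm-off.json.gz \
        -p ping_cdf -o sqm-compare.png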
> 1) http://richb-hanover.com/wp-content/uploads/2014/02/6854-777-dflt-sqm-disabled1.png
>
> is your baseline without SQM?
>
> If so, why do you compare the provider's stated rate...
>
> with the measured rate with/without SQM?
>
> These are two measures of the truth - one with and one without a change.
>
> Versus a provider's claim for link rate that doesn't account for real
> packet dynamics.

I awoke mildly grumpy this morning, sorry. The sqm-disabled link above shows
you getting less than a mbit down under the provider's default settings. So
rather than saying you lose 10% of link bandwidth relative to the stated ISP
specification, I prefer to think you are getting 6x more usable bandwidth from
using SQM, and somewhere around 1/25th the latency, or less. Making tcp's
congestion avoidance work rapidly and avoiding bursty packet loss leads to
more usable bandwidth.

> 2) the netperf reporting interval is too high to get good measurements
> below a few mbit, so you kind of have to give up on the upload chart
> at these rates. (the totals chart is clearer)
>
> Note that the tcp acks are invisible - you are getting >6mbit down,
> and sending back approximately 150kbit in acks, which we can't easily
> measure. The overhead in the measurement streams is relative to the
> RTT as well.
>
> I'd really like to get to a test that emulated tcp and got a fully
> correct measurement.
>
> 3) Generally, using a larger fq_codel target will give you better
> upload throughput and better utilization at these rates. Try target
> 40ms as a start. We've embedded a version of the calculation in the
> latest cero build attempts (but other stuff is broke).
>
> nfq_codel also seems to give a better balance between uploads and
> downloads at low rates, also with a larger target.
>
> it looks like overhead 44 is about right, and your first set of charts
> looks about right.

so if you could repeat your first set of tests, changing the target to at
least 40ms on the upload, and trying both nfq_codel and fq_codel, you'll be
getting somewhere. nfq_codel behaves more like SFQ, and is probably closer to
what most people want at these speeds.

>> - When I adjust the SQM parameters to get close to those numbers, I get
>> increasing levels of packet loss (5-8%) during a concurrent ping test.
>
> Shows the pings are now accruing delay.
>
>> So my question to the group is whether this behavior makes sense: that we
>> can have low latency while losing ~10% of the link capacity, or that getting
>> close to the link capacity should induce large packet loss...
>
> You never had the 10% in the first place.
>
>> Experimental setup:
>>
>> I'm using a Comtrend 583-U DSL modem, that has a sync rate of 7616 kbps
>> down, 864 kbps up. Theoretically, I should be able to tell SQM to use
>> numbers a bit lower than those values, with an ATM link layer plus header
>> overhead at default settings.
>>
>> I have posted the results of my netperf-wrapper trials at
>> http://richb-hanover.com - There are a number of RRUL charts, taken with
>> different link rates configured, and with different link layers.
>>
>> I welcome people's thoughts for other tests/adjustments/etc.
>>
>> Rich Brown
>> Hanover, NH USA
>>
>> PS I did try the 3.10.28-16, but ran into troubles with wifi and ethernet
>> connectivity. I must have screwed up my local configuration - I was doing it
>> quickly - so I rolled back to 3.10.28.14.
>
> manually adjust the target.
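On -14, adjusting it by hand on the router would look something like the
sketch below. This is illustrative rather than what simple.qos literally
builds: the WAN interface name (ge00), the shaped rates, and the class
handles are assumptions - check tc qdisc show for the real layout first.

    # egress: shape below the 864kbit up sync rate, account for ATM cell
    # framing plus 44 bytes of per-packet overhead, 40ms fq_codel target
    tc qdisc replace dev ge00 root handle 1: stab overhead 44 linklayer atm \
        htb default 11
    tc class add dev ge00 parent 1: classid 1:11 htb rate 800kbit
    tc qdisc add dev ge00 parent 1:11 fq_codel target 40ms

    # to compare nfq_codel, swap the last line for:
    #   tc qdisc add dev ge00 parent 1:11 nfq_codel target 40ms

    # ingress is the same idea on the ifb that SQM creates (ifb4ge00 here),
    # shaped below the 7616kbit down sync rate

If SQM is left running, the less invasive route is a "tc qdisc change ...
fq_codel target 40ms" against each fq_codel instance it already set up.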
--
Dave Täht

Fixing bufferbloat with cerowrt:
http://www.teklibre.com/cerowrt/subscribe.html

_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel
