On 29/07/15 18:42, Dave Taht wrote:
On Wed, Jul 29, 2015 at 7:07 PM, David Lang <[email protected]> wrote:
On Wed, 29 Jul 2015, Alan Jenkins wrote:
On 29/07/15 12:24, Alan Jenkins wrote:
On 29/07/15 05:32, Rosen Penev wrote:
Anyone know what the situation is with kirkwood and BQL? I found a
patch for it but have no idea if there are any issues.
I have such a system but have no idea how to ascertain the efficacy of
BQL.
To the latter:
BQL works for transmissions that reach the full line rate (e.g. 1000Mbit/s Ethernet). It limits the queue that builds in the driver/device to the minimum it needs. Queueing then happens mostly in the generic networking stack, where it can be managed effectively, e.g. by fq_codel.
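A quick way to see whether a driver has BQL at all is to look for the byte_queue_limits directory in sysfs. A minimal sketch (the interface name eth0 is an assumption; substitute your own):

```shell
# BQL exposes per-TX-queue state under sysfs; these files only exist
# if the driver supports BQL.
BQL=/sys/class/net/eth0/queues/tx-0/byte_queue_limits
cat $BQL/limit     # current dynamic byte limit
cat $BQL/inflight  # bytes currently queued in the driver/device
```

If the directory is missing, the driver has no BQL support compiled in.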
So a simple efficacy test is to run a transmission at full speed, and
monitor latency (ping) at the same time. Just make sure the device qdisc is
set to fq_codel. fq_codel effectively prioritizes ping, so the difference
will be very easy to see.
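The test described above might look like this in practice (interface name, peer address, and durations are assumptions; run the latency probe and the bulk transfer against the same peer):

```shell
# Switch the device qdisc to fq_codel, then saturate the link while
# watching ping latency.
tc qdisc replace dev eth0 root fq_codel
ping 192.168.1.1 &                           # latency probe in the background
netperf -H 192.168.1.1 -t TCP_STREAM -l 30   # 30-second full-rate transfer
kill %1                                      # stop the ping probe
```

With BQL working, ping times should stay low during the transfer; without it, they balloon with the driver queue.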
I don't know if there are any corner cases that want testing as well.
BQL adjusts the number of packets that can be queued based on their size, so
you can have far more 64-byte packets queued than you can have 1500-byte
packets.
Do a ping flood of your network with different packet sizes and look at the
queue lengths that are allowed; the queue length should be much higher with
small packets.
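A rough sketch of that comparison (needs root; interface name and target address are assumptions):

```shell
# Flood with small packets, note BQL's byte limit, then repeat with
# large packets and compare. limit is in bytes, so divide by packet
# size to estimate how many packets fit.
BQL=/sys/class/net/eth0/queues/tx-0/byte_queue_limits
ping -f -s 56 192.168.1.1 &     # small (~84-byte) frames
sleep 5; cat $BQL/limit         # record the limit, then kill and
kill %1                         #   rerun with e.g. -s 1472
```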
BQL can be disabled at runtime for comparison testing:
http://lists.openwall.net/netdev/2011/12/01/112
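Per that post, the usual trick is to pin the limit high rather than flip a switch. A sketch, assuming eth0 and a value large enough to exceed what the device could ever queue (needs root):

```shell
# Writing a large value to limit_min pins BQL's limit there, so it
# never throttles the driver queue -- effectively disabling it.
BQL=/sys/class/net/eth0/queues/tx-0/byte_queue_limits
echo 1000000 > $BQL/limit_min   # pin the limit at ~1 MB
# To restore normal dynamic behaviour:
echo 0 > $BQL/limit_min
```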
There's a BQL tool to see it working graphically (using readouts from the
same sysfs directory):
https://github.com/ffainelli/bqlmon
My Kirkwood setup at home is weak; I basically never reach full link
speed. So this might be somewhat academic unless you set the link speed to
100 or 10 using the ethtool command. (It seems like a good idea to test
those speeds even if you can do better, though.) You probably also want to
start with offloads (TSO, GSO, GRO) disabled using ethtool, because they
aggregate packets.
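The ethtool incantations for that would be roughly (eth0 assumed; needs root):

```shell
# Force a lower link speed so the test can actually saturate it,
# and turn off the aggregating offloads.
ethtool -s eth0 speed 100 duplex full
ethtool -K eth0 tso off gso off gro off
ethtool eth0 | grep -i speed    # verify the negotiated speed
```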
A quick test with a 100M setting, connected to a gigabit switch, running flent
tcp_download, shows ping under load increasing to about 8 ms. Conclusion: the
Debian kirkwood kernel probably isn't doing BQL for me :).
Wrong way I think. Try tcp_upload.
"flent tcp_download" running on the connected x86 laptop. So I didn't
have to use Flent on the Kirkwood device, only netperf's netserver.
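For reference, the upload variant run the same way might look like this (the hostname is an assumption):

```shell
# On the Kirkwood device: just the netperf server.
netserver

# On the laptop: drive the test toward the Kirkwood box, so the
# Kirkwood side is the one uploading.
flent tcp_upload -l 30 -H kirkwood.lan -t "bql-test"
```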
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat