Hi Eric,

This compares Cubic with BBR on 4.13-rc2 (which has a few new commits
to bbr.c):

wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode
DORMANT group default qlen 1000

$ cat  /proc/sys/net/ipv4/tcp_congestion_control
cubic
$ iperf3 -c
Connecting to host , port 5201
[  4] local  port 35242 connected to  port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.34 MBytes  19.6 Mbits/sec    0   26.9 KBytes
[  4]   1.00-2.00   sec  2.45 MBytes  20.6 Mbits/sec    0   26.9 KBytes
[  4]   2.00-3.00   sec  2.36 MBytes  19.8 Mbits/sec    0   26.9 KBytes
[  4]   3.00-4.00   sec  2.43 MBytes  20.4 Mbits/sec    0   26.9 KBytes
[  4]   4.00-5.00   sec  2.42 MBytes  20.3 Mbits/sec    0   26.9 KBytes
[  4]   5.00-6.00   sec  2.33 MBytes  19.5 Mbits/sec    0   29.7 KBytes
[  4]   6.00-7.00   sec  2.48 MBytes  20.8 Mbits/sec    0   29.7 KBytes
[  4]   7.00-8.00   sec  2.27 MBytes  19.1 Mbits/sec    0   29.7 KBytes
[  4]   8.00-9.00   sec  2.45 MBytes  20.6 Mbits/sec    0   29.7 KBytes
[  4]   9.00-10.00  sec  2.43 MBytes  20.4 Mbits/sec    0   29.7 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  24.0 MBytes  20.1 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  23.9 MBytes  20.1 Mbits/sec                  receiver
$ echo bbr > /proc/sys/net/ipv4/tcp_congestion_control
$ iperf3 -c
Connecting to host , port 5201
[  4] local  port 35246 connected to  port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.69 MBytes  14.2 Mbits/sec    0   8.48 KBytes
[  4]   1.00-2.00   sec  1.59 MBytes  13.4 Mbits/sec    0   8.48 KBytes
[  4]   2.00-3.00   sec  1.43 MBytes  12.0 Mbits/sec    0   8.48 KBytes
[  4]   3.00-4.00   sec  1.63 MBytes  13.6 Mbits/sec    0   8.48 KBytes
[  4]   4.00-5.00   sec  1.59 MBytes  13.4 Mbits/sec    0   8.48 KBytes
[  4]   5.00-6.00   sec  1.50 MBytes  12.6 Mbits/sec    0   8.48 KBytes
[  4]   6.00-7.00   sec  1.59 MBytes  13.3 Mbits/sec    0   8.48 KBytes
[  4]   7.00-8.00   sec  1.59 MBytes  13.3 Mbits/sec    0   8.48 KBytes
[  4]   8.00-9.00   sec  1.60 MBytes  13.4 Mbits/sec    0   8.48 KBytes
[  4]   9.00-10.00  sec  1.63 MBytes  13.6 Mbits/sec    0   8.48 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  15.8 MBytes  13.3 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  15.8 MBytes  13.2 Mbits/sec                  receiver
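
As an aside, the runs above switch the congestion control system-wide
via /proc. If it's useful for further A/B runs, iperf3 on Linux also
takes a per-test -C/--congestion option, which leaves the system
default untouched (server address elided here as above):

```shell
# Algorithms the running kernel can offer to sockets:
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# Per-test selection, leaving the system default untouched
# (substitute the elided iperf3 server address):
#   iperf3 -c <server> -C cubic
#   iperf3 -c <server> -C bbr
```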

Hope that helps.

Taking a step back, the original issue for me here is that a MacBook
Air at the same location, associated to the same BSSID, gets about
100 Mbits/sec of TCP throughput. Of course it's a different phy and
stack, but we should be able to get much better throughput from the
Atheros phy, driver and TCP stack in this scenario without very much
tuning.

Patching tcp_output.c as above did make a significant difference
(though not quite closing the full gap), but it looks like using BBR
on the client doesn't.
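
One thing that might be worth ruling out before reading too much into
the BBR numbers: BBR depends on packet pacing, and around this kernel
version the upstream guidance was to pair it with the fq qdisc where
internal TCP pacing isn't available; the wlan0 line above shows mq.
A quick check (assuming the interface is wlan0 as above):

```shell
# Show which qdisc is attached to the wireless interface; bbr's
# upstream notes suggest fq for pacing on kernels without
# internal TCP pacing.
tc qdisc show dev wlan0
```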

Are you thinking the solution lies down the path of using BBR? Could
you say what results you would expect to see from BBR versus the other
algorithms in this scenario?

Thanks! I appreciate your input on this!

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1670041

Title:
  Poor performance of Atheros QCA6174 802.11ac (rev 32) (Killer Wireless
  1535)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1670041/+subscriptions
