Forgot to mention: be careful about the perspective of the measurement, i.e. TX or RX. Here's a run where the window is big enough that the writes won't block, so the TX offered load scales to the requested rate. For this to happen, the TX-side OS is dropping the packets ahead of the driver.
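(To confirm that the drops really are happening in the sender's OS rather than over the air, a few standard Linux counters can be watched while the test runs. This is just a sketch; wlan0 below is a placeholder for whatever egress interface actually carries the traffic, and which counter increments depends on where in the stack the drop occurs:

    netstat -su                  # UDP "send buffer errors"
    tc -s qdisc show dev wlan0   # qdisc "dropped" counter
    ip -s link show wlan0        # interface TX "dropped" counter

If these climb at roughly the offered rate while the receiver only sees ~8K pps, the packets never made it onto the air.)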
[root@localhost iperf2-code]# iperf -c 192.168.1.4 -u -S 0xC0 -b 40m -e -i 1 -t 10 -l 20 -w 4M
------------------------------------------------------------
Client connecting to 192.168.1.4, UDP port 5001 with pid 16506
Sending 20 byte datagrams, IPG target: 4.00 us (kalman adjust)
UDP buffer size: 8.00 MByte (WARNING: requested 4.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.1 port 45023 connected with 192.168.1.4 port 5001
[ ID] Interval            Transfer     Bandwidth       Write/Err  PPS
[ 3] 0.0000-1.0000 sec   4.77 MBytes  40.0 Mbits/sec  249999/0   249998 pps
[ 3] 1.0000-2.0000 sec   4.77 MBytes  40.0 Mbits/sec  250000/0   250000 pps
[ 3] 2.0000-3.0000 sec   4.77 MBytes  40.0 Mbits/sec  249999/0   250000 pps
[ 3] 3.0000-4.0000 sec   4.77 MBytes  40.0 Mbits/sec  250001/0   250000 pps
[ 3] 4.0000-5.0000 sec   4.77 MBytes  40.0 Mbits/sec  249999/0   250000 pps
[ 3] 5.0000-6.0000 sec   4.77 MBytes  40.0 Mbits/sec  250001/0   250000 pps
[ 3] 6.0000-7.0000 sec   4.77 MBytes  40.0 Mbits/sec  250000/0   250000 pps
[ 3] 7.0000-8.0000 sec   4.77 MBytes  40.0 Mbits/sec  249999/0   250000 pps
[ 3] 8.0000-9.0000 sec   4.77 MBytes  40.0 Mbits/sec  250000/0   250000 pps
[ 3] 9.0000-10.0000 sec  4.77 MBytes  40.0 Mbits/sec  250001/0   250000 pps
[ 3] 0.0000-10.0000 sec  47.7 MBytes  40.0 Mbits/sec  2500003/0  249999 pps
[ 3] Sent 2500003 datagrams
[ 3] Server Report:
[ 3] 0.0000-10.3171 sec  1.58 MBytes  1.29 Mbits/sec  0.048 ms 2417137/2500002 (97%) 308.326/ 0.725/321.018/ 0.134 ms 242315 pps 0.52
[ 3] 0.0000-10.3171 sec  1 datagrams received out-of-order

The server side, meanwhile, shows the actual pps transferred across the network:

[root@localhost iperf2-code]# iperf -s -u -e -i 1
------------------------------------------------------------
Server listening on UDP port 5001 with pid 6207
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.4 port 5001 connected with 192.168.1.1 port 45023
[ ID] Interval            Transfer    Bandwidth       Jitter    Lost/Total          Latency avg/min/max/stdev          PPS   NetPwr
[ 3] 0.0000-1.0000 sec   157 KBytes  1.28 Mbits/sec  0.033 ms 162740/170766 (95%) 266.751/ 0.725/321.018/91.674 ms 8025 pps 0.60
[ 3] 1.0000-2.0000 sec   157 KBytes  1.29 Mbits/sec  0.045 ms 241842/249878 (97%) 318.332/317.715/320.499/ 0.271 ms 8037 pps 0.50
[ 3] 2.0000-3.0000 sec   157 KBytes  1.29 Mbits/sec  0.033 ms 242044/250082 (97%) 318.375/317.601/319.184/ 0.251 ms 8037 pps 0.50
[ 3] 3.0000-4.0000 sec   157 KBytes  1.29 Mbits/sec  0.036 ms 241956/249990 (97%) 318.412/317.706/319.697/ 0.313 ms 8035 pps 0.50
[ 3] 4.0000-5.0000 sec   157 KBytes  1.29 Mbits/sec  0.046 ms 241951/249986 (97%) 318.404/317.836/319.689/ 0.283 ms 8034 pps 0.50
[ 3] 5.0000-6.0000 sec   157 KBytes  1.29 Mbits/sec  0.035 ms 241913/249949 (97%) 318.341/317.664/319.369/ 0.305 ms 8036 pps 0.50
[ 3] 6.0000-7.0000 sec   157 KBytes  1.29 Mbits/sec  0.038 ms 242209/250246 (97%) 318.627/317.529/319.630/ 0.416 ms 8036 pps 0.50
[ 3] 7.0000-8.0000 sec   157 KBytes  1.29 Mbits/sec  0.033 ms 241992/250024 (97%) 318.536/317.791/320.060/ 0.292 ms 8032 pps 0.50
[ 3] 8.0000-9.0000 sec   156 KBytes  1.28 Mbits/sec  0.038 ms 241975/249983 (97%) 266.109/ 0.869/320.053/91.344 ms 8009 pps 0.60
[ 3] 9.0000-10.0000 sec  157 KBytes  1.29 Mbits/sec  0.044 ms 241961/249998 (97%) 318.339/317.693/319.559/ 0.261 ms 8036 pps 0.50
[ 3] 0.0000-10.3171 sec  1.58 MBytes  1.29 Mbits/sec  0.049 ms 2417137/2500002 (97%) 308.326/ 0.725/321.018/45.186 ms 8031 pps 0.52
[ 3] 0.0000-10.3171 sec  1 datagrams received out-of-order
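(On the NetPwr column: this is the "network power" metric discussed further down in the thread, i.e. average throughput divided by delay. As a rough back-of-the-envelope check against the totals above, with the scale factor inferred from the output:

    1.29 Mbits/sec ~= 0.161 MBytes/sec
    0.161 MBytes/sec / 0.308 sec avg latency ~= 0.52, matching the reported NetPwr of 0.52)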
Bob

On Tue, Nov 20, 2018 at 11:59 AM Bob McMahon <bob.mcma...@broadcom.com> wrote:

> Just an FYI, comparing a VO with 1 mpdu per ampdu vs 32 and small packets
> (minimize airtime.) If VO can break 10K pps it's most likely aggregating.
>
> [root@localhost iperf2-code]# wl -i ap0 ampdu_mpdu 1
> [root@localhost iperf2-code]# iperf -c 192.168.1.4 -u -S 0xC0 -b 40m -e -i 1 -t 10 -l 20
> ------------------------------------------------------------
> Client connecting to 192.168.1.4, UDP port 5001 with pid 16463
> Sending 20 byte datagrams, IPG target: 4.00 us (kalman adjust)
> UDP buffer size: 208 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.1.1 port 41010 connected with 192.168.1.4 port 5001
> [ ID] Interval            Transfer    Bandwidth       Write/Err  PPS
> [ 3] 0.0000-1.0000 sec   161 KBytes  1.32 Mbits/sec  8243/0     8167 pps
> [ 3] 1.0000-2.0000 sec   158 KBytes  1.29 Mbits/sec  8085/0     8035 pps
> [ 3] 2.0000-3.0000 sec   155 KBytes  1.27 Mbits/sec  7928/0     8040 pps
> [ 3] 3.0000-4.0000 sec   158 KBytes  1.29 Mbits/sec  8075/0     8035 pps
> [ 3] 4.0000-5.0000 sec   158 KBytes  1.30 Mbits/sec  8094/0     8019 pps
> [ 3] 5.0000-6.0000 sec   155 KBytes  1.27 Mbits/sec  7954/0     8032 pps
> [ 3] 6.0000-7.0000 sec   158 KBytes  1.30 Mbits/sec  8097/0     8040 pps
> [ 3] 7.0000-8.0000 sec   155 KBytes  1.27 Mbits/sec  7930/0     8034 pps
> [ 3] 8.0000-9.0000 sec   158 KBytes  1.30 Mbits/sec  8095/0     8034 pps
> [ 3] 0.0000-10.0153 sec  1.54 MBytes  1.29 Mbits/sec  80596/0    8047 pps
> [ 3] Sent 80596 datagrams
> [ 3] Server Report:
> [ 3] 0.0000-10.0293 sec  1.54 MBytes  1.29 Mbits/sec  1.179 ms 0/80596 (0%) 26.031/ 0.709/36.593/ 0.197 ms 8036 pps 6.17
>
> [root@localhost iperf2-code]# wl -i ap0 ampdu_mpdu 32
> [root@localhost iperf2-code]# iperf -c 192.168.1.4 -u -S 0xC0 -b 40m -e -i 1 -t 10 -l 20
> ------------------------------------------------------------
> Client connecting to 192.168.1.4, UDP port 5001 with pid 16467
> Sending 20 byte datagrams, IPG target: 4.00 us (kalman adjust)
> UDP buffer size: 208 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.1.1 port 58139 connected with 192.168.1.4 port 5001
> [ ID] Interval            Transfer    Bandwidth       Write/Err  PPS
> [ 3] 0.0000-1.0000 sec   797 KBytes  6.53 Mbits/sec  40826/0    40801 pps
> [ 3] 1.0000-2.0000 sec   796 KBytes  6.52 Mbits/sec  40737/0    40727 pps
> [ 3] 2.0000-3.0000 sec   794 KBytes  6.50 Mbits/sec  40652/0    40685 pps
> [ 3] 3.0000-4.0000 sec   797 KBytes  6.53 Mbits/sec  40815/0    40708 pps
> [ 3] 4.0000-5.0000 sec   793 KBytes  6.50 Mbits/sec  40613/0    40656 pps
> [ 3] 5.0000-6.0000 sec   794 KBytes  6.51 Mbits/sec  40663/0    40692 pps
> [ 3] 6.0000-7.0000 sec   796 KBytes  6.52 Mbits/sec  40779/0    40722 pps
> [ 3] 7.0000-8.0000 sec   793 KBytes  6.50 Mbits/sec  40624/0    40704 pps
> [ 3] 8.0000-9.0000 sec   797 KBytes  6.53 Mbits/sec  40813/0    40713 pps
> [ 3] 0.0000-10.0018 sec  7.77 MBytes  6.51 Mbits/sec  407178/0   40710 pps
> [ 3] Sent 407178 datagrams
> [ 3] Server Report:
> [ 3] 0.0000-10.0022 sec  7.77 MBytes  6.51 Mbits/sec  0.276 ms 0/407178 (0%) 5.159/ 1.293/ 8.469/ 0.091 ms 40708 pps 157.82
>
> Bob
>
> On Tue, Nov 20, 2018 at 11:26 AM Bob McMahon <bob.mcma...@broadcom.com> wrote:
>
>> VO AC may not use ampdu aggregation so double check that. The reason is
>> that VO are small packet payloads (~200 bytes) and have low relative
>> frequency of transmits. A wireless driver waiting around to fill an AMPDU
>> from a VO queue will cause excessive latency and impact the voice session.
>> Many drivers won't aggregate them.
>> Since VO isn't likely using ampdus, and since it has the most preferred
>> WME access settings, a 40 Mb/s iperf flow with VO is going to consume
>> almost all of the TXOPs. The other access classes won't get many TXOPs
>> at all. (The whole point of WiFi aggregation technologies is to amortize
>> the media access cost, which is expensive due to collision avoidance. A
>> rough theoretical max estimate for TXOPs is 10,000 per second.) Maybe
>> measure the packets per second of just VO to get an empirical number.
>> It's probably better to reduce this to something more typical of, let's
>> say, 10 voice calls (or 20 call legs.)
>>
>> You might also want to consider measuring "network power" which, while a
>> bit of a misnomer, is defined as average throughput over delay. Also,
>> maybe consider changing AIFS too. This can lead to interesting things.
>> Not sure of your exact goals, but these are just some other things to
>> consider.
>>
>> Bob
>>
>> On Tue, Nov 20, 2018 at 9:08 AM Kasper Biegun <kas...@alert.pl> wrote:
>>
>>> Hi Bruce,
>>>
>>> I am working on a thesis project in which I have to measure the influence
>>> of A-MPDU and TXOP limit on 802.11ac network efficiency, and I have to
>>> obtain plots of the BK, BE, VI and VO throughputs when they are running
>>> at the same time. I have to measure several scenarios with different
>>> A-MPDU sizes and with the TXOP limit enabled and disabled.
>>>
>>> Thank you very much for your help; thanks to you I was able to run the
>>> tests, but I am still having some trouble.
>>>
>>> Today I tried running the tests again, but with a lower iperf bandwidth
>>> (and window parameter), and it worked better: I was able to run all four
>>> tests at the same time. But I had trouble with the highest priority
>>> traffic, VO - it "takes" the whole bandwidth. I mean that when I set e.g.
>>> the bandwidth (-b) to 40 Mbit/s, the VO traffic takes 40 Mbit/s (the
>>> remaining 20 Mbit/s is divided among the other traffic classes), which is
>>> undesirable for my measurements, because it is more of a manual
>>> limitation than a network limitation.
>>>
>>> Example command:
>>>
>>> #iperf3 -c 10.10.0.1 -p 5025 -b 40M -u -S 0xc0 -i 1 -w 8192B --bind
>>> 10.10.0.2 --cport 5023
>>>
>>> Kasper
>>>
>>> On 2018-11-20 at 00:24, Bruce A. Mah wrote:
>>> > If memory serves me right, Kasper Biegun wrote:
>>> >
>>> >> to connect server and client I am using WiFi 5GHz (802.11ac), band:
>>> >> 20MHz with MCS (7 or 8) so I can get 65 to 78 Mbit/s. When I am testing
>>> >> now, I get around 60 Mbit/s for the first test, and after that the
>>> >> second test starts and reaches around 60 Mbit/s. But according to my
>>> >> assumptions they should run at the same time and divide the available
>>> >> bandwidth (I set the values of TXOP limit according to the 802.11ac
>>> >> standard).
>>> > Hi Kasper--
>>> >
>>> > I'm not sure why the tests are serialized. The only thing I can think
>>> > of is that the first test has completely blocked the wireless link so
>>> > that the second test can't even start.
>>> >
>>> > Wait. You're testing over wireless? Yet you're doing 1000Mbps = 1Gbps
>>> > tests. I don't think there's any wireless link that can handle two
>>> > 1Gbps tests, or even one. I think you really are saturating the link.
>>> > Because the tests are using UDP they'll just blast out packets at
>>> > whatever rate you specify regardless of whether the path is actually
>>> > capable of supporting it. You need to turn down the bitrate on both
>>> > tests (especially the first). What scenario are you trying to simulate
>>> > anyway?
>>> >
>>> > Bruce.
>>> >
>>> >> On 2018-11-19 at 21:50, Bruce A. Mah wrote:
>>> >>
>>> >>> If memory serves me right, Kasper Biegun wrote:
>>> >>>
>>> >>>> currently I'm working on a project connected with testing UDP
>>> >>>> throughput for different QoS values. To get the needed results I have
>>> >>>> to use Best Effort (0x70), Background (0x28), Voice (0xc0) and Video
>>> >>>> (0xb8) traffic at the same time. I have trouble running them, because
>>> >>>> one stream was waiting for another to finish, or if they did start,
>>> >>>> only one was working (the rest had 0 bandwidth). Then I updated iperf
>>> >>>> to the newest version available on GitHub and installed it. Now, for
>>> >>>> testing purposes, I am running only Voice and Video at the same time,
>>> >>>> and I am still getting the same issue - one transfer is waiting for
>>> >>>> the second one to finish.
>>> >>>>
>>> >>>> I am using these commands:
>>> >>>>
>>> >>>> - on server: #iperf3 -s -p 5015 and #iperf -s -p 5020
>>> >>>>
>>> >>>> - on transmitter: #iperf3 -c 10.10.0.1 -p 5015 -S 192 --bind
>>> >>>> 10.10.0.2 --cport 5013 -u -b 1000m and
>>> >>>>
>>> >>>> #iperf3 -c 10.10.0.1 -p 5020 -S 184 --bind 10.10.0.2 --cport
>>> >>>> 5018 -u -b 1000m
>>> >>>>
>>> >>>> The server is running on Ubuntu 16.04 and the transmitter is a
>>> >>>> Raspberry Pi 3.
>>> >>>>
>>> >>>> Could you please help me with that?
>>> >>> Hi Kasper--
>>> >>>
>>> >>> Is it possible that one test is starving the other? What is the
>>> >>> bandwidth on the path between your server and client?
>>> >>>
>>> >>> (For what it's worth, you're setting up the tests correctly by using
>>> >>> two independent sets of client and server.)
>>> >>>
>>> >>> Bruce.
>>>
>>> _______________________________________________
>>> Iperf-users mailing list
>>> Iperf-users@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/iperf-users