Re: [Iperf-users] UDP and QoS tests in Iperf3

2018-11-20, Bob McMahon via Iperf-users
Just an FYI, here's a comparison of VO with 1 MPDU per AMPDU vs. 32, using small
packets (to minimize airtime).  If VO can break 10K pps it's most likely aggregating.

[root@localhost iperf2-code]# wl -i ap0 ampdu_mpdu 1
[root@localhost iperf2-code]# iperf -c 192.168.1.4 -u -S 0xC0 -b 40m -e -i 1
-t 10 -l 20

Client connecting to 192.168.1.4, UDP port 5001 with pid 16463
Sending 20 byte datagrams, IPG target: 4.00 us (kalman adjust)
UDP buffer size:  208 KByte (default)

[  3] local 192.168.1.1 port 41010 connected with 192.168.1.4 port 5001
[ ID] Interval        Transfer     Bandwidth       Write/Err   PPS
[  3] 0.0-1.0 sec     161 KBytes  1.32 Mbits/sec   8243/0      8167 pps
[  3] 1.0-2.0 sec     158 KBytes  1.29 Mbits/sec   8085/0      8035 pps
[  3] 2.0-3.0 sec     155 KBytes  1.27 Mbits/sec   7928/0      8040 pps
[  3] 3.0-4.0 sec     158 KBytes  1.29 Mbits/sec   8075/0      8035 pps
[  3] 4.0-5.0 sec     158 KBytes  1.30 Mbits/sec   8094/0      8019 pps
[  3] 5.0-6.0 sec     155 KBytes  1.27 Mbits/sec   7954/0      8032 pps
[  3] 6.0-7.0 sec     158 KBytes  1.30 Mbits/sec   8097/0      8040 pps
[  3] 7.0-8.0 sec     155 KBytes  1.27 Mbits/sec   7930/0      8034 pps
[  3] 8.0-9.0 sec     158 KBytes  1.30 Mbits/sec   8095/0      8034 pps
[  3] 0.0-10.0153 sec  1.54 MBytes  1.29 Mbits/sec  80596/0    8047 pps
[  3] Sent 80596 datagrams
[  3] Server Report:
[  3] 0.0-10.0293 sec  1.54 MBytes  1.29 Mbits/sec   1.179 ms   0/80596 (0%)  26.031/ 0.709/36.593/ 0.197 ms  8036 pps  6.17

[root@localhost iperf2-code]# wl -i ap0 ampdu_mpdu 32
[root@localhost iperf2-code]# iperf -c 192.168.1.4 -u -S 0xC0 -b 40m -e -i
1 -t 10 -l 20

Client connecting to 192.168.1.4, UDP port 5001 with pid 16467
Sending 20 byte datagrams, IPG target: 4.00 us (kalman adjust)
UDP buffer size:  208 KByte (default)

[  3] local 192.168.1.1 port 58139 connected with 192.168.1.4 port 5001
[ ID] Interval        Transfer     Bandwidth       Write/Err   PPS
[  3] 0.0-1.0 sec     797 KBytes  6.53 Mbits/sec   40826/0     40801 pps
[  3] 1.0-2.0 sec     796 KBytes  6.52 Mbits/sec   40737/0     40727 pps
[  3] 2.0-3.0 sec     794 KBytes  6.50 Mbits/sec   40652/0     40685 pps
[  3] 3.0-4.0 sec     797 KBytes  6.53 Mbits/sec   40815/0     40708 pps
[  3] 4.0-5.0 sec     793 KBytes  6.50 Mbits/sec   40613/0     40656 pps
[  3] 5.0-6.0 sec     794 KBytes  6.51 Mbits/sec   40663/0     40692 pps
[  3] 6.0-7.0 sec     796 KBytes  6.52 Mbits/sec   40779/0     40722 pps
[  3] 7.0-8.0 sec     793 KBytes  6.50 Mbits/sec   40624/0     40704 pps
[  3] 8.0-9.0 sec     797 KBytes  6.53 Mbits/sec   40813/0     40713 pps
[  3] 0.0-10.0018 sec  7.77 MBytes  6.51 Mbits/sec  407178/0   40710 pps
[  3] Sent 407178 datagrams
[  3] Server Report:
[  3] 0.0-10.0022 sec  7.77 MBytes  6.51 Mbits/sec   0.276 ms   0/407178 (0%)  5.159/ 1.293/ 8.469/ 0.091 ms 40708 pps  157.82
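
As an aside on reading the enhanced (-e) output: the trailing value in each server
report above appears to be iperf's "network power" figure, i.e. throughput divided
by the measured one-way delay (scaled).  A rough check with the numbers shown,
assuming the scaling is bytes/sec over seconds times 1e-6:
(6.51 Mbit/s / 8) / 5.159 ms x 1e-6 = ~157.7, matching the 157.82 of the aggregated
run, and (1.29 Mbit/s / 8) / 26.031 ms x 1e-6 = ~6.2, matching the 6.17 of the
1-MPDU run.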

Bob

On Tue, Nov 20, 2018 at 11:26 AM Bob McMahon wrote:

> VO AC may not use AMPDU aggregation, so double-check that.  The reason is
> that VO payloads are small (~200 bytes) and transmitted relatively
> infrequently.  A wireless driver waiting around to fill an AMPDU from a VO
> queue would add excessive latency and hurt the voice session, so many
> drivers won't aggregate them.  Since VO isn't likely using AMPDUs, and
> since it has the most preferred WME access settings, a 40 Mb/s iperf flow
> with VO is going to consume nearly all the TXOPs.  The other access
> classes won't get many TXOPs at all.  (The whole point of WiFi aggregation
> technologies is to amortize the media access cost, which is expensive due
> to collision avoidance.  A rough theoretical maximum is 10,000 TXOPs per
> second.)  Maybe measure the packets per second of just VO to get an
> empirical number.  It's probably better to reduce this to something more
> typical of, let's say, 10 voice calls (or 20 call legs).
>
> You might also want to consider measuring "network power" which, while a
> bit of a misnomer, is defined as average throughput divided by delay.  Also,
> maybe consider changing AIFS; this can lead to interesting things.  I'm not
> sure of your exact goals, but these are just some other things to consider.
>
> Bob
>
> On Tue, Nov 20, 2018 at 9:08 AM Kasper Biegun  wrote:
>
>> Hi Bruce,
>>
>> I am working on a thesis project in which I have to measure the influence
>> of A-MPDU and TXOP limit on 802.11ac network efficiency, and I have to
>> obtain a plot of BK, BE, VI and VO throughputs when they are running at
>> the same time.  I have to measure several scenarios with different AMPDU
>> sizes and with TXOP limit enabled and disabled.
>>
>> Thank you very much for your help; thanks to you I was able to run the
>> tests, but I am still having some trouble.

Re: [Iperf-users] UDP and QoS tests in Iperf3

2018-11-20, Bob McMahon via Iperf-users
VO AC may not use AMPDU aggregation, so double-check that.  The reason is
that VO payloads are small (~200 bytes) and transmitted relatively
infrequently.  A wireless driver waiting around to fill an AMPDU from a VO
queue would add excessive latency and hurt the voice session, so many
drivers won't aggregate them.  Since VO isn't likely using AMPDUs, and since
it has the most preferred WME access settings, a 40 Mb/s iperf flow with VO
is going to consume nearly all the TXOPs.  The other access classes won't
get many TXOPs at all.  (The whole point of WiFi aggregation technologies
is to amortize the media access cost, which is expensive due to collision
avoidance.  A rough theoretical maximum is 10,000 TXOPs per second.)
Maybe measure the packets per second of just VO to get an empirical number.
It's probably better to reduce this to something more typical of, let's
say, 10 voice calls (or 20 call legs).
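
For example, a minimal sketch with iperf 2 (the address, rate, and duration
here are illustrative, and assume G.711-style legs at 20 ms packetization,
i.e. roughly 50 pps of 160-byte payloads per leg, so 20 legs is about 1000
pps at roughly 1.28 Mb/s):

#iperf -c 192.168.1.4 -u -S 0xC0 -e -i 1 -t 60 -l 160 -b 1280k

That keeps the VO queue busy at voice-like rates without monopolizing the
TXOPs.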

You might also want to consider measuring "network power" which, while a
bit of a misnomer, is defined as average throughput divided by delay.  Also,
maybe consider changing AIFS; this can lead to interesting things.  I'm not
sure of your exact goals, but these are just some other things to consider.

Bob

On Tue, Nov 20, 2018 at 9:08 AM Kasper Biegun  wrote:

> Hi Bruce,
>
> I am working on a thesis project in which I have to measure the influence
> of A-MPDU and TXOP limit on 802.11ac network efficiency, and I have to
> obtain a plot of BK, BE, VI and VO throughputs when they are running at
> the same time.  I have to measure several scenarios with different AMPDU
> sizes and with TXOP limit enabled and disabled.
>
> Thank you very much for your help; thanks to you I was able to run the
> tests, but I am still having some trouble.
>
> Today I tried running the tests again, but with a lower iperf bandwidth
> (and window parameter), and it worked better: I was able to run all four
> tests at the same time.  But I had trouble with the highest-priority
> traffic, VO - it "takes" the whole bandwidth.  I mean that when I set the
> bandwidth (-b) to e.g. 40 Mbit/s, VO traffic takes 40 Mbit/s (the remaining
> 20 Mbit/s is divided among the other traffic classes), and that is
> undesirable for my measurements, because it is more of a manual limitation
> than a network limitation.
>
>
> Example command:
>
> #iperf3 -c 10.10.0.1 -p 5025 -b 40M -u -S 0xc0 -i 1 -w 8192B --bind
> 10.10.0.2 --cport 5023
>
>
> Kasper
>
>
> > On 2018-11-20 at 00:24, Bruce A. Mah wrote:
> > If memory serves me right, Kasper Biegun wrote:
> >
> >> to connect the server and client I am using WiFi 5 GHz (802.11ac),
> >> channel width 20 MHz with MCS 7 or 8, so I can get 65 to 78 Mbit/s.
> >> When I test now, I get around 60 Mbit/s for the first test, and after
> >> that the second test starts and reaches around 60 Mbit/s.  But according
> >> to my assumptions they should run at the same time and share the
> >> available bandwidth (I set the TXOP limit values according to the
> >> 802.11ac standard).
> > Hi Kasper--
> >
> > I'm not sure why the tests are serialized.  The only thing I can think
> > of is that the first test has completely blocked the wireless link so
> > that the second test can't even start.
> >
> > Wait.  You're testing over wireless?  Yet you're doing 1000Mbps = 1Gbps
> > tests.  I don't think there's any wireless link that can handle two
> > 1Gbps tests, or even one.  I think you really are saturating the link.
> > Because the tests are using UDP they'll just blast out packets at
> > whatever rate you specify regardless of whether the path is actually
> > capable of supporting it.  You need to turn down the bitrate on both
> > tests (especially the first).  What scenario are you trying to simulate
> > anyway?
> >
> > Bruce.
> >
> >> On 2018-11-19 at 21:50, Bruce A. Mah wrote:
> >>
> >>> If memory serves me right, Kasper Biegun wrote:
> >>>
>  currently I'm working on a project connected with testing UDP throughput
>  for different QoS values.  To get the needed results I have to use Best
>  Effort (0x70), Background (0x28), Voice (0xc0) and Video (0xb8) traffic
>  at the same time.  I had trouble running them, because one stream was
>  waiting for another to finish, or if they did start, only one was working
>  (the rest had 0 bandwidth).  Then I updated iperf to the newest version
>  available on GitHub and installed it.  Now, for testing purposes, I am
>  running only Voice and Video at the same time, and I am still getting the
>  same issue - one transfer waits for the other to finish.
> 
>  I am using these commands:
>
>    - on the server: #iperf3 -s -p 5015 and #iperf3 -s -p 5020
>
>    - on the transmitter: #iperf3 -c 10.10.0.1 -p 5015 -S 192 --bind
>      10.10.0.2 --cport 5013 -u -b 1000m and
>
>      #iperf3 -c 10.10.0.1 -p 5020 -S 184 --bind 10.10.0.2 --cport
>      5018 -u -b 1000m
>
>  The server is running Ubuntu 16.04 and the transmitter is a Raspberry Pi 3.
> 
> 

Re: [Iperf-users] UDP and QoS tests in Iperf3

2018-11-20, Kasper Biegun

Hi Bruce,

I am working on a thesis project in which I have to measure the influence
of A-MPDU and TXOP limit on 802.11ac network efficiency, and I have to
obtain a plot of BK, BE, VI and VO throughputs when they are running at the
same time.  I have to measure several scenarios with different AMPDU sizes
and with TXOP limit enabled and disabled.


Thank you very much for your help; thanks to you I was able to run the
tests, but I am still having some trouble.


Today I tried running the tests again, but with a lower iperf bandwidth
(and window parameter), and it worked better: I was able to run all four
tests at the same time.  But I had trouble with the highest-priority
traffic, VO - it "takes" the whole bandwidth.  I mean that when I set the
bandwidth (-b) to e.g. 40 Mbit/s, VO traffic takes 40 Mbit/s (the remaining
20 Mbit/s is divided among the other traffic classes), and that is
undesirable for my measurements, because it is more of a manual limitation
than a network limitation.



Example command:

#iperf3 -c 10.10.0.1 -p 5025 -b 40M -u -S 0xc0 -i 1 -w 8192B --bind 
10.10.0.2 --cport 5023
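
For reference, running all four classes concurrently means one iperf3 server
per port plus four clients started together; a minimal sketch (the address,
ports, rates, and duration are illustrative placeholders, not the exact
values used in these tests):

#iperf3 -s -p 5021 &    # and likewise servers on 5022, 5023, 5024

#iperf3 -c 10.10.0.1 -p 5021 -u -b 15M -S 0x28 -i 1 -t 30 &   # BK
#iperf3 -c 10.10.0.1 -p 5022 -u -b 15M -S 0x70 -i 1 -t 30 &   # BE
#iperf3 -c 10.10.0.1 -p 5023 -u -b 15M -S 0xb8 -i 1 -t 30 &   # VI
#iperf3 -c 10.10.0.1 -p 5024 -u -b 2M -S 0xc0 -i 1 -t 30 &    # VO, capped low so it cannot starve the rest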



Kasper


On 2018-11-20 at 00:24, Bruce A. Mah wrote:

If memory serves me right, Kasper Biegun wrote:


to connect the server and client I am using WiFi 5 GHz (802.11ac), channel
width 20 MHz with MCS 7 or 8, so I can get 65 to 78 Mbit/s.  When I test
now, I get around 60 Mbit/s for the first test, and after that the second
test starts and reaches around 60 Mbit/s.  But according to my assumptions
they should run at the same time and share the available bandwidth (I set
the TXOP limit values according to the 802.11ac standard).

Hi Kasper--

I'm not sure why the tests are serialized.  The only thing I can think
of is that the first test has completely blocked the wireless link so
that the second test can't even start.

Wait.  You're testing over wireless?  Yet you're doing 1000Mbps = 1Gbps
tests.  I don't think there's any wireless link that can handle two
1Gbps tests, or even one.  I think you really are saturating the link.
Because the tests are using UDP they'll just blast out packets at
whatever rate you specify regardless of whether the path is actually
capable of supporting it.  You need to turn down the bitrate on both
tests (especially the first).  What scenario are you trying to simulate
anyway?

Bruce.


On 2018-11-19 at 21:50, Bruce A. Mah wrote:


If memory serves me right, Kasper Biegun wrote:


currently I'm working on a project connected with testing UDP throughput
for different QoS values.  To get the needed results I have to use Best
Effort (0x70), Background (0x28), Voice (0xc0) and Video (0xb8) traffic
at the same time.  I had trouble running them, because one stream was
waiting for another to finish, or if they did start, only one was working
(the rest had 0 bandwidth).  Then I updated iperf to the newest version
available on GitHub and installed it.  Now, for testing purposes, I am
running only Voice and Video at the same time, and I am still getting the
same issue - one transfer waits for the other to finish.

I am using these commands:

  - on the server: #iperf3 -s -p 5015 and #iperf3 -s -p 5020

  - on the transmitter: #iperf3 -c 10.10.0.1 -p 5015 -S 192 --bind
    10.10.0.2 --cport 5013 -u -b 1000m and

    #iperf3 -c 10.10.0.1 -p 5020 -S 184 --bind 10.10.0.2 --cport
    5018 -u -b 1000m

The server is running Ubuntu 16.04 and the transmitter is a Raspberry Pi 3.


Could you please help me with that?

Hi Kasper--

Is it possible that one test is starving the other?  What is the
bandwidth on the path between your server and client?

(For what it's worth you're setting up the tests correctly by using two
independent sets of client and server.)

Bruce.






