For completeness, here is a concurrent "working load" example:

[root@ryzen3950 iperf2-code]# iperf -c 192.168.1.58%enp4s0 -i 1 -e --bounceback --working-load=up,4 -t 3
------------------------------------------------------------
Client connecting to 192.168.1.58, TCP port 5001 with pid 3125575 via enp4s0 (1 flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 2] local 192.168.1.69%enp4s0 port 49268 connected with 192.168.1.58 port 5001 (bb w/quickack len/hold=100/0) (sock=7) (icwnd/mss/irtt=14/1448/243) (ct=0.29 ms) on 2023-03-12 14:18:25.658 (PDT)
[ 5] local 192.168.1.69%enp4s0 port 49244 connected with 192.168.1.58 port 5001 (prefetch=16384) (sock=3) (qack) (icwnd/mss/irtt=14/1448/260) (ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[ 4] local 192.168.1.69%enp4s0 port 49254 connected with 192.168.1.58 port 5001 (prefetch=16384) (sock=4) (qack) (icwnd/mss/irtt=14/1448/295) (ct=0.35 ms) on 2023-03-12 14:18:25.658 (PDT)
[ 1] local 192.168.1.69%enp4s0 port 49256 connected with 192.168.1.58 port 5001 (prefetch=16384) (sock=6) (qack) (icwnd/mss/irtt=14/1448/270) (ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[ 3] local 192.168.1.69%enp4s0 port 49252 connected with 192.168.1.58 port 5001 (prefetch=16384) (sock=5) (qack) (icwnd/mss/irtt=14/1448/263) (ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT(var)        NetPwr
[ 5] 0.00-1.00 sec  41.8 MBytes   351 Mbits/sec  438252/0     3       73K/53(3) us  826892
[ 1] 0.00-1.00 sec  39.3 MBytes   330 Mbits/sec  412404/0    24       39K/45(3) us  916455
[ ID] Interval        Transfer    Bandwidth         BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[ 2] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.323/0.093/2.147/0.641 ms    0   14K/119 us    3098 rps
[ 4] 0.00-1.00 sec  34.2 MBytes   287 Mbits/sec  358210/0    15       55K/53(3) us  675869
[ 3] 0.00-1.00 sec  33.4 MBytes   280 Mbits/sec  349927/0    11      127K/53(4) us  660241
[SUM] 0.00-1.00 sec   109 MBytes   917 Mbits/sec  1146389/0        29
[ 5] 1.00-2.00 sec  42.1 MBytes   353 Mbits/sec  441376/0     1       73K/55(9) us  802502
[ 1] 1.00-2.00 sec  39.6 MBytes   333 Mbits/sec  415644/0     0       39K/51(6) us  814988
[ 2] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.079/0.056/0.127/0.019 ms    0   14K/67 us    12658 rps
[ 4] 1.00-2.00 sec  33.8 MBytes   283 Mbits/sec  354150/0     0       55K/58(7) us  610603
[ 3] 1.00-2.00 sec  33.7 MBytes   283 Mbits/sec  353392/0     2      127K/53(6) us  666777
[SUM] 1.00-2.00 sec   110 MBytes   919 Mbits/sec  1148918/0         3
[ 5] 2.00-3.00 sec  42.2 MBytes   354 Mbits/sec  442685/0     0       73K/50(8) us  885370
[ 1] 2.00-3.00 sec  36.9 MBytes   310 Mbits/sec  387381/0     0       39K/48(4) us  807044
[ 2] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.073/0.058/0.093/0.012 ms    0   14K/60 us    13774 rps
[ 4] 2.00-3.00 sec  33.9 MBytes   284 Mbits/sec  355533/0     0       55K/52(4) us  683717
[ 3] 2.00-3.00 sec  29.4 MBytes   247 Mbits/sec  308725/0     1      127K/54(4) us  571713
[SUM] 2.00-3.00 sec   106 MBytes   886 Mbits/sec  1106943/0         1
[ 5] 0.00-3.00 sec   126 MBytes   353 Mbits/sec  1322314/0     4       73K/57(18) us  773072
[ 2] 0.00-3.00 sec  7.81 KBytes  21.3 Kbits/sec    40=0.134/0.053/2.147/0.328 ms    0   14K/58 us    7489 rps
[ 2] 0.00-3.00 sec BB8(f)-PDF: bin(w=100us):cnt(40)=1:31,2:8,22:1 (5.00/95.00/99.7%=1/2/22,Outliers=1,obl/obu=0/0)
[ 3] 0.00-3.00 sec  96.5 MBytes   270 Mbits/sec  1012045/0    14      127K/57(6) us  591693
[ 1] 0.00-3.00 sec   116 MBytes   324 Mbits/sec  1215431/0    24       39K/51(5) us  794234
[ 4] 0.00-3.00 sec   102 MBytes   285 Mbits/sec  1067895/0    15       55K/55(9) us  647061
[SUM] 0.00-3.00 sec   324 MBytes   907 Mbits/sec  3402254/0        33
[ CT] final connect times (min/avg/max/stdev) = 0.292/0.316/0.352/22.075 ms (tot/err) = 5/0

iperf 2 reports responses per second (RPS) and also provides the
bounce-back times as well as one-way delays.
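
For a serialized request/response loop like the bounce-back flow, RPS
is just the reciprocal of the mean bounce-back time. A minimal sketch
of that arithmetic (my own check, not iperf source), using the
[ 2] 0.00-1.00 sec numbers above:

# RPS for a one-outstanding-request bounce-back flow is the reciprocal
# of the mean round trip. Numbers from the [ 2] 0.00-1.00 sec line above.
mean_bb_ms = 0.323               # avg of 10=0.323/0.093/2.147/0.641 ms
rps = 1000.0 / mean_bb_ms        # requests are serialized, one at a time
print(f"{rps:.0f} rps")          # ~3096, matching the reported 3098 rps
                                 # (difference is rounding of the printed avg)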

The hypothesis is that network engineers have to fix KPI issues,
including latency, ahead of shipping products.

Asking companies to act on consumer complaints is far too late, and
it's extremely costly. Those running Amazon customer service can
explain how these consumer calls about their devices lead to device
returns (as that's all call support can offer). That wastes energy
physically shipping things back, turns a stack of working items into
e-waste, etc.

It's really on network operators, suppliers, and device manufacturers
to get ahead of this years before consumers get their stuff.

As a side note, many devices select their WiFi chanspec (AP channel
plus bandwidth) based on the strongest RSSI. Network path selection
should instead be based on KPIs like low latency; a strong signal may
just mean an AP is yelling too loudly and interfering with the
neighbors. Pick the optimal AP chanspec with 10 dB of separation per
spatial dimension and the whole apartment complex would be better for
it.
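
To make that policy concrete, here is a purely hypothetical sketch
(probe_latency_ms is a stand-in, not a real driver API) that ranks
candidate APs by a measured latency KPI instead of by strongest RSSI:

# Hypothetical selection policy: pick the AP/chanspec with the best
# measured latency KPI rather than the loudest signal. probe_latency_ms
# is a placeholder for a short bounce-back-style probe per candidate.
def pick_ap(candidates, probe_latency_ms):
    """candidates: iterable of (bssid, chanspec, rssi_dbm) tuples."""
    best = min(candidates,
               key=lambda c: probe_latency_ms(c[0], c[1]))  # latency, not RSSI
    return best[0], best[1]   # (bssid, chanspec) of the lowest-latency path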

We're so focused on bufferbloat that we're ignoring everything else
where incremental engineering has led to poor products and offerings.

[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e --bounceback --trip-times
------------------------------------------------------------
Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 14:01:24.820 (PDT)
[ ID] Interval        Transfer    Bandwidth         BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.311/0.209/0.755/0.159 ms    0   14K/202 us    3220 rps
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.254/0.180/0.335/0.051 ms    0   14K/210 us    3934 rps
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.266/0.168/0.468/0.088 ms    0   14K/210 us    3754 rps
[  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.294/0.184/0.442/0.078 ms    0   14K/233 us    3396 rps
[  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.263/0.150/0.427/0.077 ms    0   14K/215 us    3802 rps
[  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.325/0.237/0.409/0.056 ms    0   14K/258 us    3077 rps
[  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.259/0.165/0.410/0.077 ms    0   14K/219 us    3857 rps
[  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.277/0.193/0.415/0.068 ms    0   14K/224 us    3608 rps
[  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.292/0.206/0.465/0.072 ms    0   14K/231 us    3420 rps
[  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec   10=0.256/0.157/0.439/0.082 ms    0   14K/211 us    3908 rps
[  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec   100=0.280/0.150/0.755/0.085 ms    0   14K/1033 us    3573 rps
[  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.049    3573 rps
[  1] 0.00-10.01 sec BB8(f)-PDF: bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
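
For reference, my reading of the BB8(f)-PDF line (an assumption about
the format, not taken from iperf source): bins are 100 us wide and
"k:n" means n samples at or below k x 100 us, so the percentile markers
can be recovered by walking the cumulative counts. (The OWD line also
checks out: mean asymmetry 0.065 ms is about To - From = 0.169 - 0.105.)

# Recover percentile bins from the BB8(f)-PDF histogram above, assuming
# "k:n" means n samples with latency <= k*100 us (my reading of the format).
bins = {2: 14, 3: 57, 4: 20, 5: 8, 8: 1}         # cnt(100) from the final line
def pct_bin(bins, pct, width_us=100):
    need = pct / 100.0 * sum(bins.values())
    cum = 0
    for k in sorted(bins):
        cum += bins[k]
        if cum >= need:
            return k, k * width_us               # (bin index, upper bound us)
print(pct_bin(bins, 5), pct_bin(bins, 95), pct_bin(bins, 99.7))
# -> (2, 200) (5, 500) (8, 800), matching the reported 5/95/99.7% = 2/5/8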


Bob


Dave,

Your presentation was awesome; I fully agree with you ;). I very much
liked your practical funnel demonstration, which was boiled down to the
bare minimum (I did partly ask myself whether the liquid would spill
into your laptop's keyboard, and if so whether it is water-proof, but
you had clearly rehearsed/tried that before).
BTW, I always have to think of this
h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
live from the marina ;)


I am still not through watching all of the presentations and panels,
but I can already say that team L4S continues to over-promise and
under-deliver. Koen's presentation itself was done well, though, and
might (sadly) convince people to buy into L4(S) = 2L2L = too little,
too late.

Stuart's RPM presentation was great, making a convincing point.
(Except for pitching L4S and LLD as "solutions"; I will accept them as
a step in the right direction, but why not go all the way and embrace
proper scheduling?)

In detail, though, I am not fully convinced by the decision to take
the inverse of the delay increase as the singular measure here, as I
consider that a bit of a squandered opportunity for public
outreach/education: comparing idle and working RPM is non-intuitive,
while idle and working RTT can be subtracted directly to see the
extent of the queueing damage in actionable terms.

Try the same with RPM values:

123-1234567:~ user$ networkQuality -v
==== SUMMARY ====

Upload capacity: 22.208 Mbps
Download capacity: 88.054 Mbps
Upload flows: 12
Download flows: 12
Responsiveness: High (2622 RPM)
Base RTT: 18
Start: 3/12/23, 21:00:58
End: 3/12/23, 21:01:08
OS Version: Version 12.6.3 (Build 21G419)

Here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
to get 60000/2622 = 22.88 ms of loaded delay, then subtract the base
RTT of 18 ms: 60000/2622 - 18 = 4.88, so roughly 5 ms of added delay,
which is a useful quantity when managing a delay budget (this test was
performed over wired ethernet with competent AQM and traffic shaping
on the link, so no surprise about the outcome there). Let's look at
the reverse and convert the base RTT into a base RPM score instead:
60000/18 = 3333 RPM. What exactly does the delta RPM of 3333 - 2622 =
711 RPM now tell us about the difference between idle and working
conditions? [Well, since the conversion is not witchcraft, I will be
fine, as will others interested in the actual evoked delay, but we
could have gotten a better measure*]
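
The same arithmetic as a tiny script (plain unit conversion, using the
numbers from the run above):

# Convert between RPM and milliseconds for the networkQuality run above.
working_rpm, base_rtt_ms = 2622, 18.0
loaded_delay_ms = 60_000 / working_rpm          # 60000 ms per minute / RPM
added_delay_ms = loaded_delay_ms - base_rtt_ms  # ~22.88 - 18 = ~4.88 ms
base_rpm = 60_000 / base_rtt_ms                 # ~3333 RPM idle-equivalent
delta_rpm = base_rpm - working_rpm              # ~711 RPM... of what, exactly?
print(f"loaded {loaded_delay_ms:.2f} ms, added {added_delay_ms:.2f} ms, "
      f"delta {delta_rpm:.0f} RPM")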

And all that for the somewhat unhelpful car analogy... (it is not as
if, for internal combustion engines, higher RPM is necessarily better,
whether for torque or for fuel efficiency).

I guess that ship has sailed, though, and RPM it is.

*) Stuart notes that milliseconds and Hertz sound too science-y, but
they could simply have given the delay increase in milliseconds a
fancier name to solve that specific problem...


On Mar 12, 2023, at 20:31, Dave Taht via Rpm <r...@lists.bufferbloat.net> wrote:

https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3

--
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC
