I use virtual machines from Linode (which was bought by Akamai).

You may want to use --permit-key (or -t on the server side) to protect against unauthorized use.

--permit-key [=<value>]
Set a key value that must match for the server to accept traffic on a connection. If the option is given without a value on the server, a key value will be autogenerated and displayed in its initial settings report. The lifetime of the key is set using --permit-key-timeout and defaults to twenty seconds. The value is required on clients. The value will also be used as part of the transfer id in reports. Setting the option on the client but not the server will also cause the server to reject the client's traffic. TCP only, no UDP support.

--permit-key-timeout <value>
Set the lifetime of the permit key in seconds. Defaults to 20 seconds if not set. A value of zero will disable the timer.

-t, --time n
time in seconds to listen for new traffic connections, receive traffic or send traffic
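
For example, a minimal sketch of combining these (the address, timeout,
and key placeholder below are illustrative, not from a real setup):

Server, autogenerating a key valid for 60 seconds and listening for
120 seconds:

    iperf -s --permit-key --permit-key-timeout 60 -t 120

Client, presenting the key value the server reported:

    iperf -c 192.168.1.231 --permit-key=<key-from-server> --bounceback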

Bob

Hi Bob,


Funny, that is a feature we wanted recently for cake-autorate (not for
the controller, but for hypothesis testing of what funny things might
happen over LTE). Our "poor man's" version was ICMP echo requests
against 8.8.8.8, as Google accepts large echo requests but only sends
"truncated" replies...

Having a real tool like iperf2 allow requesting the size per direction
directly is much better (well, it still leaves the challenge of getting
one's own iperf2 server up somewhere accessible on the internet).

Regards
        Sebastian


On May 12, 2023, at 17:46, rjmcmahon via Rpm <r...@lists.bufferbloat.net> wrote:

Hi All,

I received a recent diff for iperf 2 to support independent request and reply sizes for the bounceback test. It's nice to get diffs that can be patched in!

[root@ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback --bounceback-reply 512K
------------------------------------------------------------
Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 flows)
Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold req=0 usecs & tcp_quickack)
Bursting request 10 times every 1.00 second(s)
TCP congestion control using reno
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.15%enp2s0 port 42800 connected with 192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3) (icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163 (PDT)
[ ID] Interval        Transfer     Bandwidth       BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT     RPS(avg)
[ 1] 0.00-1.00 sec    5.00 MBytes  42.0 Mbits/sec  10=10.924/7.497/27.463/5.971 ms    0   14K/3992 us   92 rps
[ 1] 1.00-2.00 sec    5.00 MBytes  42.0 Mbits/sec  10=10.068/7.274/21.120/3.963 ms    0   14K/4307 us   99 rps
[ 1] 2.00-3.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.674/8.148/17.413/2.798 ms     0   14K/4243 us  103 rps
[ 1] 3.00-4.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.858/7.587/20.889/3.961 ms     0   14K/4474 us  101 rps
[ 1] 4.00-5.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.872/7.558/17.720/2.842 ms     0   14K/4692 us  101 rps
[ 1] 5.00-6.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.649/6.844/18.537/3.205 ms     0   14K/4301 us  104 rps
[ 1] 6.00-7.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.502/7.083/19.839/3.697 ms     0   14K/4153 us  105 rps
[ 1] 7.00-8.00 sec    5.00 MBytes  42.0 Mbits/sec  10=9.965/7.747/22.194/4.350 ms     0   14K/4357 us  100 rps
[ 1] 8.00-9.00 sec    5.00 MBytes  42.0 Mbits/sec  10=10.072/7.936/20.307/3.730 ms    0   14K/4442 us   99 rps
[ 1] 9.00-10.00 sec   5.00 MBytes  42.0 Mbits/sec  10=10.031/8.109/19.907/3.551 ms    0   14K/4086 us  100 rps
[ 1] 0.00-10.02 sec   50.0 MBytes  41.9 Mbits/sec  100=9.962/6.844/27.463/3.740 ms    0   14K/4152 us  100 rps
[ 1] 0.00-10.02 sec BB8(f)-PDF: bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1 (5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)

Bob
