>>> [ 4] 3.00-4.00 sec 1.15 GBytes 9.88 Gbits/sec 0 1.08 MBytes
>>> [ 4] 4.00-5.00 sec 1.15 GBytes 9.88 Gbits/sec 0 1.08 MBytes
>>> [ 4] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.21 MBytes
>>> [ 4] 6.00-7.00 sec 1.15 GBytes 9.89 Gbit
> To: ceph-users <ceph-users@ceph.io>; doustar <dous...@rayanexon.ir>;
> kdhall <kdh...@binghamton.edu>; sstkadu <sstk...@gmail.com>
> Subject: [ceph-users] Re: [External Email] Re: Ceph Nautius not
> working after setting MTU 9000
> > From: Marc Roos
> > Sent: Wednesday, 27 May 2020 12:01
> > To: chris.palmer; paul.emmerich <paul.emmer...@croit.io>
> > Cc: amudhan83; anthony.datri <anthony.da...@gmail.com>; ceph-users;
> > doustar <dous...@rayanexon.ir>
Interesting table. I get this on a production 10 Gbit cluster at a
datacenter (which is obviously not doing that much).
[@]# iperf3 -c 10.0.0.13 -P 1 -M 9000
Connecting
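Since the thread is about things breaking after setting MTU 9000, it is worth confirming the path actually carries jumbo frames before reading too much into iperf3 numbers. A minimal sketch of the usual check (the 10.0.0.13 host is just the address from the iperf3 run above, used illustratively): the largest ICMP echo payload that fits in one 9000-byte frame is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header.

```shell
# Largest ICMP echo payload that fits in a single 9000-byte MTU frame:
# MTU minus the 20-byte IPv4 header and the 8-byte ICMP header.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"

# With fragmentation forbidden (-M do), a ping of this size only
# succeeds if every hop on the path supports the jumbo MTU:
#   ping -M do -s "$payload" 10.0.0.13
```

If that ping fails while a default-size ping works, some hop (switch port, bond, or VM bridge) is still at MTU 1500, which is the classic cause of a Ceph cluster misbehaving after a jumbo-frame change.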