I'm no expert, but another test might be to run iperf and watch your CPU
utilization while doing it.
You can set iperf to run between a couple of monitors and OSD servers.
Try setting the MTU to 1500 (or your switch's stock MTU) first,
then put the servers at 9000 and the switch at 9128 (to allow for packet overhead),
then run iperf between the servers for both MTU settings,
then do the same again with more parallel streams so as to saturate the network.
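The steps above might look roughly like this (the interface name eth0, the host IP 10.0.0.2, and the exact switch MTU are placeholders for your environment; flags are iperf2 syntax):

```shell
# Baseline: stock MTU on both servers (switch left at its default).
sudo ip link set dev eth0 mtu 1500

# On the receiving server:
iperf -s

# On the sending server: one stream, then 10 parallel streams (-P)
# to try to saturate the link.
iperf -c 10.0.0.2 -t 30
iperf -c 10.0.0.2 -t 30 -P 10

# Jumbo frames: servers at 9000, switch ports a bit higher (e.g. 9128)
# to allow for packet overhead, then repeat the same iperf runs.
sudo ip link set dev eth0 mtu 9000
iperf -c 10.0.0.2 -t 30
iperf -c 10.0.0.2 -t 30 -P 10
```

Watch CPU utilization (e.g. with top) on both ends while each run is going.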
iperf will give you your network throughput, which is usually ~90% of the rated
network speed.
Jumbo frames also reduce CPU/network cycles, since more data is pushed out per
Ethernet frame.
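To put rough numbers on that, here's a back-of-the-envelope sketch (the 38-byte wire overhead and 40-byte IP+TCP headers are standard Ethernet/IPv4/TCP figures; the function itself is illustrative, not something from our tests, and it treats the final partial frame as full, which is close enough for an estimate):

```python
import math

# Ethernet per-frame wire overhead: 7B preamble + 1B SFD + 14B header
# + 4B FCS + 12B interframe gap = 38 bytes per frame.
WIRE_OVERHEAD = 38
IP_TCP_HEADERS = 40  # IPv4 (20B) + TCP (20B), no options

def frames_and_efficiency(payload_bytes, mtu):
    """Frames needed to move a payload, and wire efficiency, at a given MTU."""
    per_frame_payload = mtu - IP_TCP_HEADERS
    frames = math.ceil(payload_bytes / per_frame_payload)
    # Approximation: count every frame as a full MTU on the wire.
    wire_bytes = frames * (mtu + WIRE_OVERHEAD)
    return frames, payload_bytes / wire_bytes

# Moving 1 GiB at each MTU:
payload = 1 << 30
for mtu in (1500, 9000):
    frames, eff = frames_and_efficiency(payload, mtu)
    print(f"MTU {mtu}: {frames} frames, {eff:.1%} payload efficiency")
```

At MTU 9000 the same payload takes roughly a sixth as many frames, which is fewer interrupts and header-processing cycles for the CPU.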
For our jumbo frame configuration we saw 26 Gb/s with 1 stream and 37 Gb/s with
10 streams.
We didn't record numbers for our stock MTU settings.
Thanks, Joe

>>> Sameer Tiwari <stiw...@salesforce.com> 8/11/2017 11:21 AM >>>

We ran a test with 1500 MTU and 9000 MTU on a small Ceph test cluster (3 mons + 10
hosts with 2 SSDs each, one for journal and one for data) and found only a minimal
(~10%) performance improvement.

We tested with FIO for 4K, 8K and 64K block sizes, using RBD directly.

Anyone else have any experience with this?


ceph-users mailing list
