Hi ceph users,

We have two Ceph clusters with the same hardware/CPU (ARM)/OS (Debian 12)/network, both deployed with cephadm (default configuration, 2 OSDs per NVMe device), except that one is on the Reef release and the second is on Squid.
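For context, a 2-OSDs-per-NVMe layout is usually expressed through a cephadm OSD service spec like the sketch below (not our exact spec; the service_id and placement are illustrative, osds_per_device is the relevant setting):

# Sketch of a cephadm OSD service spec putting 2 OSDs on each
# non-rotational (NVMe/SSD) data device on every host.
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: nvme-two-per-device
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
  osds_per_device: 2
EOF
ceph orch apply -i osd-spec.yaml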

We noticed a big difference in our benchmarks (the Reef cluster performs far better than the Squid cluster).


Does this mean that Ceph Reef is more performant than Ceph Squid?


Squid RandRW bench =>

fio -ioengine=rbd -name=test-2 -direct=1 -bs=4k -iodepth=1 -rw=randrw -rwmixwrite=100 -runtime=10 -pool=nova -rbdname=myimage
test-2: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2488KiB/s][w=622 IOPS][eta 00m:00s]
test-2: (groupid=0, jobs=1): err= 0: pid=331890: Fri Jun 20 11:35:14 2025
  write: IOPS=627, BW=2511KiB/s (2571kB/s)(24.5MiB/10002msec); 0 zone resets
    slat (nsec): min=4400, max=51722, avg=15476.42, stdev=4695.32
    clat (usec): min=670, max=7704, avg=1574.43, stdev=341.38
     lat (usec): min=676, max=7730, avg=1589.91, stdev=341.83
    clat percentiles (usec):
     |  1.00th=[ 1074],  5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1434],
     | 30.00th=[ 1483], 40.00th=[ 1516], 50.00th=[ 1549], 60.00th=[ 1582],
     | 70.00th=[ 1614], 80.00th=[ 1647], 90.00th=[ 1696], 95.00th=[ 1778],
     | 99.00th=[ 2769], 99.50th=[ 3458], 99.90th=[ 5800], 99.95th=[ 5932],
     | 99.99th=[ 7701]
   bw (  KiB/s): min= 2000, max= 2608, per=100.00%, avg=2512.42, stdev=131.03, samples=19
   iops        : min=  500, max=  652, avg=628.11, stdev=32.76, samples=19
  lat (usec)   : 750=0.14%, 1000=0.61%
  lat (msec)   : 2=96.21%, 4=2.64%, 10=0.40%
  cpu          : usr=1.34%, sys=0.93%, ctx=6280, majf=0, minf=5
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,6278,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1


Reef RandRW bench =>

fio -ioengine=rbd -name=test -direct=1 -bs=4K -iodepth=1 -rw=randrw -rwmixwrite=100 -runtime=10 -pool=nova -rbdname=image1
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=3931KiB/s][w=982 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1866868: Tue Jun 17 15:24:42 2025
  write: IOPS=964, BW=3860KiB/s (3952kB/s)(37.7MiB/10001msec); 0 zone resets
    slat (nsec): min=3880, max=30402, avg=6499.36, stdev=1597.11
    clat (usec): min=617, max=3219, avg=1027.83, stdev=161.62
     lat (usec): min=625, max=3224, avg=1034.33, stdev=161.62
    clat percentiles (usec):
     |  1.00th=[  824],  5.00th=[  881], 10.00th=[  906], 20.00th=[  938],
     | 30.00th=[  963], 40.00th=[  979], 50.00th=[ 1004], 60.00th=[ 1020],
     | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1221],
     | 99.00th=[ 1860], 99.50th=[ 1942], 99.90th=[ 2147], 99.95th=[ 2343],
     | 99.99th=[ 3228]
   bw (  KiB/s): min= 3152, max= 3984, per=100.00%, avg=3861.89, stdev=183.90, samples=19
   iops        : min=  788, max=  996, avg=965.47, stdev=45.98, samples=19
  lat (usec)   : 750=0.16%, 1000=49.50%
  lat (msec)   : 2=50.07%, 4=0.27%
  cpu          : usr=0.91%, sys=0.51%, ctx=9657, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,9650,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1
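If anyone wants to reproduce this, the identical job can be driven from a client of each cluster with something like the sketch below (the pool name matches ours, the image name is an argument; the results above are all iodepth=1, the higher depths are just an extra check):

#!/bin/sh
# Sketch: drive the same 4k random-write job from a client of each
# cluster; pool name matches the runs above, image is an argument.
# iodepth=1 matches the posted results; 4 and 16 are an extra check.
POOL=nova
IMAGE="${1:?need an rbd image name}"
for QD in 1 4 16; do
    fio -ioengine=rbd -name="randrw-qd$QD" -direct=1 -bs=4k \
        -iodepth="$QD" -rw=randrw -rwmixwrite=100 -runtime=10 \
        -pool="$POOL" -rbdname="$IMAGE"
done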