Hi,

Keep in mind that when you compare a local NVMe with RBD you are adding the following to the mix (a quick way to measure each piece is sketched below):

- Network connectivity (link speed and bandwidth, switching latency, ...)
- All the lines of code that make up Ceph (both client and server side)
- The ability to process the lines of code above as fast as possible (CPU dependent)
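If you want to see where the extra latency actually goes, you can roughly isolate each component. A minimal sketch, assuming an admin keyring on the client and a placeholder pool/image named rbd/test and an OSD host reachable as <osd-host> (all names are illustrative, adjust to your cluster; the fio example also assumes your fio build includes the rbd engine):

  # 1. Raw network round trip client -> OSD host. A replicated write pays
  #    this at least twice (client to primary OSD, then primary to replicas),
  #    so this is a hard floor for RND4K Q1 T1 latency on RBD.
  ping -c 100 <osd-host>

  # 2. 4K QD1 random write through the full librbd path, no filesystem in between
  rbd bench --io-type write --io-size 4096 --io-threads 1 \
      --io-total 256M --io-pattern rand rbd/test

  # 3. Same workload via fio's rbd engine, so the numbers line up with fio-cdm output
  fio --name=rbd-qd1 --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=test --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based --group_reporting

If the QD1 write latency you measure in steps 2/3 is roughly the network round trips plus some OSD processing time, the cluster is doing what it is designed to do, and the gap against the local NVMe is simply the price of distribution and replication.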
So you are comparing two things that are radically different in terms of features and architecture. The Ceph community is working hard to keep improving Ceph, and you can take a look at the Crimson project <https://ceph.io/en/news/crimson/>, which aims at reducing latency and improving performance on the OSD side for flash-based Ceph clusters.

Best,
JC

> On Jul 23, 2025, at 15:52, Devender Singh <deven...@netskrt.io> wrote:
>
> root@node01:~/fio-cdm# python3 fio-cdm ./
> tests: 5, size: 1.0GiB, target: /root/fio-cdm 6.3GiB/64.4GiB
> |Name        |  Read(MB/s)| Write(MB/s)|
> |------------|------------|------------|
> |SEQ1M Q8 T1 |     8441.37|     3588.71|
> |SEQ1M Q1 T1 |     3074.86|     1172.46|
> |RND4K Q32T16|      723.65|      733.76|
> |. IOPS      |   176671.80|   179141.74|
> |. latency us|     2892.49|     2839.37|
> |RND4K Q1 T1 |       71.05|       57.88|
> |. IOPS      |    17347.13|    14131.57|
> |. latency us|       56.13|       66.40|