I have replaced the Samsung with an Intel P4600 6.4 TB NVMe drive (I created
3 OSDs on top of the NVMe).
Here is the result:
(venv-openstack) root@os-ctrl1:~# rados -p test-nvme -t 64 -b 4096
bench 10 write
hints = 1
Maintaining 64 concurrent writes of 4096 bytes to objects of size 4096
for up to 10 seconds
> Thank you for your reply,
>
> I have created two device classes, SSD and NVMe, and assigned them to CRUSH rules.
You don't have enough drives to keep them separate. Set the NVMe drives back
to "ssd" and just make one pool.
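In case it helps, reclassifying the drives is a couple of CLI calls. A minimal sketch, assuming the NVMe OSDs are osd.3 through osd.5 (the OSD IDs here are placeholders; substitute your own from `ceph osd tree`):

```shell
# Clear the auto-detected "nvme" device class, then pin the drives to "ssd"
ceph osd crush rm-device-class osd.3 osd.4 osd.5
ceph osd crush set-device-class ssd osd.3 osd.4 osd.5

# Verify the classes took effect
ceph osd tree
```

After that, a single pool using an ssd-class rule will place data across all six drives.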
>
> $ ceph osd crush rule ls
> replicated_rule
> ssd_pool
> nvme_pool
Thank you for your reply,
I have created two device classes, SSD and NVMe, and assigned them to CRUSH rules.
$ ceph osd crush rule ls
replicated_rule
ssd_pool
nvme_pool
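For reference, class-specific replicated rules like the two listed above are typically created along these lines (rule names taken from the listing; the `default` root and `host` failure domain are assumed):

```shell
# ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated ssd_pool  default host ssd
ceph osd crush rule create-replicated nvme_pool default host nvme

# attach a rule to an existing pool
ceph osd pool set <pool-name> crush_rule nvme_pool
```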
Running benchmarks, the NVMe pool is the worst performing; the SSD pool shows
much better results than the NVMe pool. The NVMe model is Samsung_SSD_980_PRO_1TB.
This should be possible by specifying "data_devices" and "db_devices"
fields in the OSD spec file, each with a different filter. There are some
examples in the docs at
https://docs.ceph.com/en/latest/cephadm/services/osd/#the-simple-case that
show roughly how that's done, and some other sections (
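A spec along these lines should work (a sketch based on the linked docs; the service_id is a placeholder, and the `rotational` filters assume you want data on HDDs with DB/WAL on flash; swap in `model:` or `size:` filters to match your hardware):

```yaml
service_type: osd
service_id: osd_spec_split_db   # hypothetical name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1   # spinning disks carry the data
  db_devices:
    rotational: 0   # SSD/NVMe devices carry the DB/WAL
```

Apply it with `ceph orch apply -i <spec-file>.yaml` and check the proposed layout with a dry run first.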