[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-28 Thread Satish Patel
I have replaced Samsung with Intel P4600 6.4TB nvme (I have created 3 OSDs on top of nvme). Here is the result:

(venv-openstack) root@os-ctrl1:~# rados -p test-nvme -t 64 -b 4096 bench 10 write
hints = 1
Maintaining 64 concurrent writes of 4096 bytes to objects of size 4096 for up to 10 seconds

[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-25 Thread Anthony D'Atri
> Thank you for reply,
>
> I have created two class SSD and NvME and assigned them to crush maps.

You don't have enough drives to keep them separate. Set the NVMe drives back to "ssd" and just make one pool.

> $ ceph osd crush rule ls
> replicated_rule
> ssd_pool
> nvme_pool
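The fix suggested above, putting the NVMe OSDs back into the "ssd" device class, would look roughly like the following. This is a hedged sketch: the OSD ID is illustrative, and you would repeat it for each NVMe-backed OSD in your cluster.

```
# osd.3 is an example ID; list actual classes first with:
#   ceph osd crush tree --show-shadow
ceph osd crush rm-device-class osd.3
ceph osd crush set-device-class ssd osd.3
```

After reclassifying, the separate nvme_pool CRUSH rule is no longer needed and a single pool using the ssd class can cover all the drives.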

[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-25 Thread Satish Patel
Thank you for reply, I have created two classes, SSD and NVMe, and assigned them to crush maps.

$ ceph osd crush rule ls
replicated_rule
ssd_pool
nvme_pool

Running benchmarks, NVMe is the worst performing; SSD shows much better results compared to NVMe. The NVMe model is Samsung_SSD_980_PRO_1TB.

[ceph-users] Re: cephadm to setup wal/db on nvme

2023-08-23 Thread Adam King
This should be possible by specifying "data_devices" and "db_devices" fields in the OSD spec file, each with different filters. There are some examples in the docs https://docs.ceph.com/en/latest/cephadm/services/osd/#the-simple-case that show roughly how that's done, and some other sections (
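The approach described above can be sketched as an OSD service spec. This is a minimal example under assumed hardware, not from the thread: the host pattern and the device filters (rotational data drives, an NVMe model match for the DB devices) are illustrative and must be adapted to the actual drives.

```yaml
service_type: osd
service_id: osd_hdd_with_nvme_db
placement:
  host_pattern: '*'        # illustrative: apply on all hosts
spec:
  data_devices:
    rotational: 1          # OSD data on spinning drives (assumption)
  db_devices:
    model: 'P4600'         # WAL/DB on NVMe matched by model (assumption)
```

Applied with `ceph orch apply -i <spec-file>.yaml`, cephadm would create OSDs on the matching data devices and co-locate their WAL/DB partitions on the matching NVMe device(s).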