>> So, my first question is whether it's possible to specify a separate DB
>> via "ceph orch daemon add osd"?
> I believe it is, don't have the syntax to hand.

Thanks for the response, and the service spec examples; those gave me the
courage to try a few things.

What I settled on for my case was pre-slicing the SSDs with LVM: for my
8 new OSDs, I used lvcreate to make 8 logical volumes for DB+WAL, with
whatever sizes seemed appropriate (rough commands at the end of this mail,
for anyone curious). Then I could manually specify the data device and DB
device like this:

  ceph orch daemon add osd ceph-hvcore1:data_devices=/dev/sdb,db_devices=/dev/mapper/nvmepool-dbwal.26

where ceph-hvcore1 is my host, /dev/sdb is the spinning disk, and
/dev/mapper/nvmepool-dbwal.26 is the path to the LVM logical volume I had
created on the SSDs (well, NVMe actually, but I'm passing it through to a
Ceph VM, so Ceph doesn't see its full character).

This appears to be working well: the cluster is backfilling successfully.
And it gives me a lot of flexibility for my heterogeneous (and occasionally
upgraded) infrastructure.
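For reference, the LVM pre-slicing was nothing fancy; it was roughly the
following (the PV device path and the LV size here are illustrative, not
the exact values I used):

  # one-time setup: turn the passed-through NVMe device into a volume group
  pvcreate /dev/vdb
  vgcreate nvmepool /dev/vdb

  # one DB+WAL logical volume per OSD, e.g. the one referenced above
  lvcreate -L 60G -n dbwal.26 nvmepool
  # ...and repeat with suitable names/sizes for the other seven OSDs

Each LV then shows up as /dev/mapper/nvmepool-dbwal.NN, which is what gets
passed as db_devices in the "ceph orch daemon add osd" call.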
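If you want to double-check that the DB really landed on the NVMe LV rather
than being collocated on the spinner, "ceph osd metadata <id>" is handy;
the bluefs_db_* fields (exact field names vary a bit between releases)
should point at the dm device backing the LV, e.g.:

  ceph osd metadata 26 | grep -i bluefs_db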