Apologies if this has been asked dozens of times before, but most answers are 
from pre-Jewel days, and I want to double-check that the methodology still holds.

I currently have 16 OSDs across 8 machines with on-disk journals, created using 
ceph-deploy.

These machines have NVMe storage (Intel P3600 series) for the system volume, 
and I am thinking about carving out partitions on it for SSD journals for the 
OSDs. The system volume sees relatively little I/O, so there should be plenty 
of I/O headroom to support the OSD journaling, and the P3600 should have the 
endurance to handle the added write wear.

From what I’ve read, you need one partition per OSD journal, so with a third 
(and final) OSD likely being added to each node, I should create 3 partitions, 
each ~8GB in size. (Is that a good value? With 8TB OSDs, is the journal sized 
based on the amount of data, the number of objects, or something else?)
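The only sizing guidance I’ve found in the docs is throughput-based rather than 
data-based: osd journal size >= 2 * (expected throughput * filestore max sync 
interval). A rough sketch of the arithmetic and the partitioning, where 
/dev/nvme0n1, the partition numbers, and the ~200 MB/s per-OSD figure are all 
stand-ins for my setup:

# FileStore rule of thumb from the Ceph docs:
#   journal size >= 2 * expected throughput * filestore max sync interval
# With ~200 MB/s per OSD (the 8TB spinners will bottleneck long before the
# P3600) and the default 5 s sync interval: 2 * 200 * 5 = 2000 MB, so 8GB
# leaves comfortable headroom.
# Carve three 8GB journal partitions (device and partition numbers assumed):
for p in 2 3 4; do
  sgdisk --new=${p}:0:+8G --change-name=${p}:"ceph journal" /dev/nvme0n1
done
partprobe /dev/nvme0n1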

So, for each numeric OSD id $i:

{create partitions, per the sgdisk sketch above}
ceph osd set noout
service ceph stop osd.$i          # or: systemctl stop ceph-osd@$i on systemd
ceph-osd -i $i --flush-journal
rm -f /var/lib/ceph/osd/ceph-$i/journal
ln -s /dev/<ssd-partition-for-journal> /var/lib/ceph/osd/ceph-$i/journal
ceph-osd -i $i --mkjournal
service ceph start osd.$i         # or: systemctl start ceph-osd@$i
ceph osd unset noout
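
One refinement I’m considering: symlinking to a raw device name is fragile if 
devices reorder across reboots, so I’d point the symlink at the stable 
by-partuuid path instead, and (since Jewel runs OSDs as the ceph user) make 
sure the journal partition is owned by ceph:ceph. A sketch, with /dev/nvme0n1p2 
as a stand-in partition:

blkid -s PARTUUID -o value /dev/nvme0n1p2    # prints the partition's UUID
ln -s /dev/disk/by-partuuid/<partuuid> /var/lib/ceph/osd/ceph-$i/journal
chown ceph:ceph /dev/nvme0n1p2               # needs a udev rule to persist across boots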

Does this logic appear to hold up?

Appreciate the help.

Thanks,

Reed