Hi Folks,

After recently upgrading from 16.2.13 to 16.2.15, when I run:

ceph-volume lvm create --data /dev/sda --block.db /dev/nvme0n1p1

to create a new OSD after replacing a failed disk, ceph-volume no longer 
creates a volume group/logical volume on the block.db partition*. This matches 
the old behaviour we saw whilst running Octopus, but we had not seen it since 
our upgrade to Pacific until now. Any ideas why this is happening? Nothing 
jumps out of the release notes, but they do mention that a lot of work was 
done on the LVM code in ceph-volume since 16.2.13. Is this now a configuration 
setting somewhere?
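
For reference, this is how we are checking what ceph-volume actually did 
(osd.1090 and the devices are from the example below; adjust ids and devices 
to taste):

readlink /var/lib/ceph/osd/ceph-1090/block.db      # symlink target is the raw partition, not an LV
ceph-volume lvm list /dev/sda                      # the db device for the OSD is reported as the raw partition
lvs -o lv_name,vg_name,devices | grep nvme0n1p1    # no LV has been carved out of the partition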

Background:
Our storage nodes are 60 HDDs per chassis with two 2TB NVMe drives, each 
containing 30 partitions used as block.db storage, one partition and one HDD 
per OSD. We are running Ubuntu 20.04 and Ceph Pacific 16.2.15, just upgraded 
from 16.2.13. We are using the official Ubuntu packages.

*so /var/lib/ceph/osd/ceph-1090/block.db now points directly at /dev/nvme0n1p1 
rather than at an LV such as 
/dev/ceph-11992480-4ea6-4c1a-80b0-e025a66539b2/osd-db-52655aa3-9102-4d92-8a63-f2c93eed984f
 carved out of /dev/nvme0n1p1
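
In the meantime, we can pin the old behaviour by creating the VG/LV ourselves 
and handing ceph-volume the vg/lv pair instead of the raw partition. A minimal 
sketch (the VG/LV names below are just placeholders; as far as we know, 
ceph-volume has accepted a pre-created vg/lv for --block.db since well before 
Pacific):

vgcreate ceph-db-nvme0n1p1 /dev/nvme0n1p1              # placeholder VG name, one VG per db partition
lvcreate -l 100%FREE -n osd-db-1090 ceph-db-nvme0n1p1  # one LV spanning the whole partition
ceph-volume lvm create --data /dev/sda --block.db ceph-db-nvme0n1p1/osd-db-1090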
