Re: [ceph-users] Bluestore nvme DB/WAL size

2018-12-21 Thread Anthony D'Atri
> If your only NVMe drive dies, you'll lose all the DB partitions and all of those OSDs will fail.

The severity of this depends a lot on the size of the cluster. If there are only, say, 4 nodes total, the loss of a quarter of the OSDs will certainly be painful.
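To make that blast radius concrete, here is a minimal sketch (plain Python; the cluster numbers are illustrative assumptions, not from the thread):

    # Hypothetical layout: every DB partition for a host's OSDs lives on
    # one shared NVMe, so losing that NVMe fails every OSD on the host.
    nodes = 4                  # total hosts in the cluster (assumed)
    osds_per_node = 16         # HDD OSDs per host, DBs on the shared NVMe
    total_osds = nodes * osds_per_node
    failed = osds_per_node     # one NVMe death takes out the host's OSDs
    print(f"{failed}/{total_osds} OSDs lost = {failed / total_osds:.0%}")
    # 4 nodes  -> 25% of the cluster gone at once: a long, heavy recovery
    # 40 nodes -> 2.5%: much closer to routine rebalancing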

Re: [ceph-users] Bluestore nvme DB/WAL size

2018-12-21 Thread David C
I'm in a similar situation, currently running filestore with spinners and journals on NVMe partitions which are about 1% of the size of the OSD. If I migrate to bluestore, I'll still only have that 1% available. Per the docs, if my block.db device fills up, the metadata is going to spill back onto the slow device.
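For anyone who wants to watch for that spillover, a rough sketch follows (it assumes you can reach the OSD admin socket on the local host, and the OSD id is just an example). It reads the BlueFS counters that "ceph daemon osd.N perf dump" reports:

    import json
    import subprocess

    def bluefs_spillover(osd_id: int) -> None:
        """Print block.db usage and any spillover onto the slow device."""
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
        bluefs = json.loads(out)["bluefs"]
        db_used = bluefs["db_used_bytes"]
        db_total = bluefs["db_total_bytes"]
        slow_used = bluefs["slow_used_bytes"]  # >0 means spillover to HDD
        print(f"osd.{osd_id}: db {db_used / db_total:.0%} full, "
              f"spillover {slow_used / 2**30:.1f} GiB")

    bluefs_spillover(0)  # example OSD id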

Re: [ceph-users] Bluestore nvme DB/WAL size

2018-12-21 Thread Konstantin Shalygin
I am considering using logical volumes of an NVMe drive as DB or WAL devices for OSDs on spinning disks. The documentation recommends against DB devices smaller than 4% of slow disk size. Our servers have 16x 10TB HDDs and a single 1.5TB NVMe, so dividing it equally will result in each OSD getting a ~94 GB DB device.

Re: [ceph-users] Bluestore nvme DB/WAL size

2018-12-21 Thread Janne Johansson
On Thu, 20 Dec 2018 at 22:45, Vladimir Brik wrote:
> Hello
> I am considering using logical volumes of an NVMe drive as DB or WAL devices for OSDs on spinning disks.
> The documentation recommends against DB devices smaller than 4% of slow disk size. Our servers have 16x 10TB HDDs and a single 1.5TB NVMe.
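For reference, carving that NVMe into per-OSD DB logical volumes would look roughly like this. A sketch only: the VG name, HDD device paths, and the 93G size (1.5 TB split 16 ways) are assumptions, not anything stated in the thread:

    # Print the lvcreate / ceph-volume commands for 16 HDD OSDs whose
    # block.db lives on a shared NVMe. Review before running anything.
    hdds = [f"/dev/sd{chr(ord('b') + i)}" for i in range(16)]  # sdb..sdq
    print("vgcreate ceph-db /dev/nvme0n1")
    for i, hdd in enumerate(hdds):
        print(f"lvcreate -L 93G -n db-{i} ceph-db")
        print(f"ceph-volume lvm create --bluestore --data {hdd} "
              f"--block.db ceph-db/db-{i}")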

Re: [ceph-users] Bluestore nvme DB/WAL size

2018-12-20 Thread Stanislav A. Dmitriev
> Hello
> I am considering using logical volumes of an NVMe drive as DB or WAL devices for OSDs on spinning disks. The documentation recommends against DB devices smaller than 4% of slow disk size.

[ceph-users] Bluestore nvme DB/WAL size

2018-12-20 Thread Vladimir Brik
Hello

I am considering using logical volumes of an NVMe drive as DB or WAL devices for OSDs on spinning disks. The documentation recommends against DB devices smaller than 4% of slow disk size. Our servers have 16x 10TB HDDs and a single 1.5TB NVMe, so dividing it equally will result in each OSD getting a ~94 GB DB device.
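Working through the arithmetic in the question as a quick sketch (the disk counts and sizes are from the post; the 4% figure is the documented guideline):

    # Equal split of the NVMe vs. the 4%-of-slow-device guideline.
    nvme_gb = 1500             # single 1.5 TB NVMe (from the post)
    hdds = 16                  # 16x 10 TB spinners (from the post)
    hdd_gb = 10_000
    per_osd_db = nvme_gb / hdds
    recommended = 0.04 * hdd_gb
    print(f"equal split : {per_osd_db:.0f} GB per OSD")   # ~94 GB
    print(f"4% guideline: {recommended:.0f} GB per OSD")  # 400 GB
    print(f"NVMe needed for 4%: {recommended * hdds / 1000:.1f} TB")  # 6.4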