Re: [ceph-users] Help with Bluestore WAL

2018-02-21 Thread David Turner
The WAL is a required part of the OSD. If you remove it, then the OSD is missing a crucial part of itself and will be unable to start until the WAL is back online. If the SSD were to fail, then all OSDs using it would need to be removed and recreated on the cluster. On Tue, Feb 20, 2018,
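A minimal sketch of what "removed and recreated" could look like for one affected OSD, assuming Luminous's `ceph osd purge` is available; the OSD id, device path, and VG/LV names below are hypothetical, not from the thread:

```shell
# Recovery sketch for one OSD (id 12, data on /dev/sdc) whose WAL
# lived on the failed SSD; repeat for every OSD that used that SSD.

# Remove the dead OSD from the cluster (Luminous+ "purge" combines
# crush remove, auth del, and osd rm in one step).
ceph osd out osd.12
ceph osd purge osd.12 --yes-i-really-mean-it

# Wipe the data disk and recreate the OSD, pointing its WAL at an LV
# on the replacement SSD (VG/LV names "ceph-wal/wal-sdc" are assumed).
ceph-volume lvm zap /dev/sdc --destroy
ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.wal ceph-wal/wal-sdc
```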

Re: [ceph-users] Help with Bluestore WAL

2018-02-20 Thread Konstantin Shalygin
Hi, We were recently testing Luminous with BlueStore. We have a 6-node cluster with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the OSDs and attached the WAL on the SSD (as an LV). We created twelve individual 10 GB LVs on the single SSD, one for each WAL, so all of the OSD WALs are on the single SSD.

Re: [ceph-users] Help with Bluestore WAL

2018-02-20 Thread Balakumar Munusawmy
Hi, We were recently testing Luminous with BlueStore. We have a 6-node cluster with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the OSDs and attached the WAL on the SSD (as an LV). We created twelve individual 10 GB LVs on the single SSD, one for each WAL, so all of the OSD WALs are on the single SSD. P
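The layout described above could be provisioned roughly as follows; this is a sketch, and the device path (/dev/sdb for the SSD, /dev/sdc for the first HDD) and VG/LV names are assumptions rather than what the poster actually used:

```shell
# Carve twelve 10 GB WAL logical volumes out of the single SSD.
pvcreate /dev/sdb
vgcreate ceph-wal /dev/sdb
for i in $(seq 1 12); do
    lvcreate -L 10G -n wal-$i ceph-wal
done

# Then create one BlueStore OSD per HDD, each with its WAL on the
# matching LV. Example for the first HDD:
ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.wal ceph-wal/wal-1
```

Note that with only a WAL device specified (no separate `--block.db`), the RocksDB metadata still lives on the HDD; only the write-ahead log lands on the SSD.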