Re: [ceph-users] Help with Bluestore WAL

2018-02-21 Thread David Turner
The WAL is a required part of the OSD. If you remove it, the OSD is missing a
crucial part of itself and will be unable to start until the WAL is back
online. If the SSD were to fail, then all OSDs using it would need to be
removed from the cluster and recreated.
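
Roughly, once the SSD is replaced, each affected OSD has to be rebuilt along
these lines (the OSD id, device, and VG/LV names below are only placeholders):

  # remove the dead OSD from the cluster
  ceph osd out 12
  ceph osd purge 12 --yes-i-really-mean-it
  # wipe the old data disk so it can be reused
  ceph-volume lvm zap /dev/sdb --destroy
  # recreate the OSD with its WAL on the replacement SSD
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal ceph-wal/wal-12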

On Tue, Feb 20, 2018, 9:44 PM Konstantin Shalygin  wrote:

> Hi,
> We were recently testing Luminous with BlueStore. We have a 6-node cluster
> with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the
> OSDs and attached an SSD WAL (LVM) to each. We created twelve individual
> 10 GB LVs on the single SSD, one per WAL, so all of the OSD WALs are on that
> single SSD. The problem is that if we pull the SSD out, it brings down all
> 12 OSDs on that node. Is that expected behavior, or are we missing some
> configuration?
>
>
> Yes, you should plan your failure domain, i.e. what will happen to your
> cluster if one backend SSD suddenly dies.
>
> You should also plan for mass failures of your SSDs/NVMes, so as a rule of
> thumb, don't overload your flash backend with OSDs. The recommendation is
> ~4 OSDs per SSD/NVMe.
>
>
>
> k
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help with Bluestore WAL

2018-02-20 Thread Konstantin Shalygin

Hi,
We were recently testing Luminous with BlueStore. We have a 6-node cluster
with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the
OSDs and attached an SSD WAL (LVM) to each. We created twelve individual
10 GB LVs on the single SSD, one per WAL, so all of the OSD WALs are on that
single SSD. The problem is that if we pull the SSD out, it brings down all
12 OSDs on that node. Is that expected behavior, or are we missing some
configuration?



Yes, you should plan your failure domain, i.e. what will happen to your
cluster if one backend SSD suddenly dies.

You should also plan for mass failures of your SSDs/NVMes, so as a rule of
thumb, don't overload your flash backend with OSDs. The recommendation is
~4 OSDs per SSD/NVMe.
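
For example, to see which OSDs would go down together with one flash device,
something like this on each node lists the block/db/wal devices behind every
OSD (the device path is just an example):

  ceph-volume lvm list                 # all OSDs on this node with their wal/db devices
  ceph-volume lvm list /dev/nvme0n1    # only the OSDs backed by that one device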




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help with Bluestore WAL

2018-02-20 Thread Balakumar Munusawmy
Hi,
We were recently testing Luminous with BlueStore. We have a 6-node cluster
with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the
OSDs and attached an SSD WAL (LVM) to each. We created twelve individual
10 GB LVs on the single SSD, one per WAL, so all of the OSD WALs are on that
single SSD. The problem is that if we pull the SSD out, it brings down all
12 OSDs on that node. Is that expected behavior, or are we missing some
configuration?
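
For reference, the WAL layout was built roughly like this (the device, VG and
LV names here are simplified examples, not the exact ones we used):

  # one 10 GB WAL LV per OSD on the shared SSD
  vgcreate ceph-wal /dev/nvme0n1
  lvcreate -L 10G -n wal-0 ceph-wal        # repeated for wal-1 .. wal-11
  # each HDD OSD points at its own WAL LV on that single SSD
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal ceph-wal/wal-0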


Thanks and Regards,
Balakumar Munusawmy
Mobile:+19255771645
Skype: bala.munusa...@latticeworkinc.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com