Re: [ceph-users] best practices for EC pools

2019-02-08 Thread Scheurer François
Best Regards
Francois Scheurer

From: Caspar Smit
Sent: Friday, February 8, 2019 11:47 AM
To: Scheurer François
Cc: Alan Johnson; Eugen Block; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] best practices for EC pools

On Fri, 8 Feb 2019 at 11:31, ... wrote: ...

Re: [ceph-users] best practices for EC pools

2019-02-08 Thread Caspar Smit
> ... 72:6806/79121 is reporting failure:0
> 2019-02-06 23:10:57.660481 7f14d8ed6700 0 log_channel(cluster) log [DBG] : osd.23 10.38.66.71:6807/79639 failure report canceled by osd.18 10.38.67.72:6806/79121

Best Regards
Francois Scheurer
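A minimal way to watch cluster-log failure reports like the [DBG] lines above, assuming a mon node with the default cluster log path:

  ceph -w                                        # stream the cluster log live
  grep 'failure report' /var/log/ceph/ceph.log   # search past entries on a mon

Both commands are standard Ceph tooling; only the log path is an assumption about this cluster's layout.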

Re: [ceph-users] best practices for EC pools

2019-02-08 Thread Scheurer François
... 7.72:6806/79121

Best Regards
Francois Scheurer

From: ceph-users on behalf of Alan Johnson
Sent: Thursday, February 7, 2019 8:11 PM
To: Eugen Block; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] best practices for EC pools

Just to add, that ...

Re: [ceph-users] best practices for EC pools

2019-02-07 Thread Alan Johnson
... AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] best practices for EC pools

Hi Francois,

> Is it correct that recovery will be forbidden by the crush rule if a node is down?

Yes, that is correct: failure-domain=host means no two chunks of the same PG can be on the same ...

Re: [ceph-users] best practices for EC pools

2019-02-07 Thread Eugen Block
Hi Francois,

> Is it correct that recovery will be forbidden by the crush rule if a node is down?

Yes, that is correct: failure-domain=host means no two chunks of the same PG can be on the same host. So if your PG is divided into 6 chunks, they're all on different hosts, and no recovery is possible while one of the six hosts is down, because CRUSH has no spare host to place the missing chunks on.
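A minimal sketch of how a profile and pool like this are typically created; the profile and pool names (ec42profile, ecpool) are hypothetical, not taken from the thread:

  ceph osd erasure-code-profile set ec42profile k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42profile
  # The generated crush rule uses 'chooseleaf indep ... type host', which is
  # what forces all 6 chunks of a PG onto distinct hosts.
  ceph osd crush rule dump

With exactly k+m hosts there is no spare failure domain to recover onto, which is why provisioning at least k+m+1 hosts is the usual recommendation.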

[ceph-users] best practices for EC pools

2019-02-07 Thread Scheurer François
Dear All,

We created an erasure-coded pool with k=4 m=2 and failure-domain=host, but we have only 6 OSD nodes.

Is it correct that recovery will be forbidden by the crush rule if a node is down?

After rebooting all nodes we noticed that the recovery was slow, maybe half an hour, but all ...
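To tell "recovery forbidden by the crush rule" apart from "recovery merely slow", a hedged sketch of the usual checks (the pool name ecpool is hypothetical):

  ceph health detail                  # lists degraded/undersized PGs and why
  ceph pg dump_stuck unclean          # PGs that cannot reach active+clean
  ceph osd pool get ecpool min_size   # EC default is k+1, i.e. 5 here, so the
                                      # pool stays writable with one host down

If the PGs show as undersized+degraded with no backfill target while a host is down, that is the crush-rule restriction at work rather than slow recovery.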