Make sure to test that stuff.  I've never had to modify the min_size on an
EC pool before.
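A quick sketch of how that test might look (untested; "ecpool" and the
target host are placeholders, and the systemctl lines run on the host
being failed):

# ceph osd set noout
# systemctl stop ceph-osd.target
# ceph -s
(PGs should show active+undersized; then check that I/O still works:)
# rados -p ecpool put testobj /etc/hosts
# rados -p ecpool get testobj /tmp/testobj
# systemctl start ceph-osd.target
# ceph osd unset noout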

On Wed, Jul 12, 2017 at 11:12 AM Jake Grimmett <[email protected]>
wrote:

> Hi David,
>
> put that way, the docs make complete sense, thank you!
>
> i.e. to allow writing to a 5+2 EC cluster with one node down:
>
> default is:
> # ceph osd pool get ecpool min_size
> min_size: 7
>
> to tolerate one node failure, set:
> # ceph osd pool set ecpool min_size 6
> set pool 1 min_size to 6
>
> to tolerate two nodes failing, set:
> # ceph osd pool set ecpool min_size 5
> set pool 1 min_size to 5
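>
> for reference, k and m can be read back from the pool's profile (the
> profile name below is a guess - it's whatever "ceph osd pool ls detail"
> reports for the pool):
>
> # ceph osd erasure-code-profile get ecprofile
> k=5
> m=2
> ...
>
> presumably min_size can't usefully go below k (5 here), since at least
> k shards are needed to reconstruct objects.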
>
> thanks again!
>
> Jake
>
> On 12/07/17 14:36, David Turner wrote:
> > As long as you have 7 of the shards online when you're using 7+2, you
> > can still write to and read from the EC pool.  For an EC pool, size is
> > effectively 9 (k+m) and min_size is 7.
> >
> > I have a 3-node cluster with 2+1 and I can restart one node at a time
> > with a host failure domain.
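> >
> > Sketching the arithmetic for a k+m profile with host failure domain:
> >
> >     size     = k + m = 7 + 2 = 9 shards, one per host
> >     data survives m = 2 lost shards
> >     min_size = 7, so I/O continues with up to size - min_size = 2 hosts down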
> >
> >
> > On Wed, Jul 12, 2017, 6:34 AM Jake Grimmett <[email protected]> wrote:
> >
> >     Dear All,
> >
> >     Quick question: is it possible to write to a degraded EC pool?
> >
> >     i.e. is there an EC equivalent of these settings for a replicated pool:
> >
> >     osd pool default size = 3
> >     osd pool default min size = 2
> >
> >     My reason for asking is that it would be nice if we could build an
> >     EC 7+2 cluster and actively use it while a node was off-line by
> >     setting osd noout.
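> >
> >     For example, something like this (a sketch; profile/pool names
> >     made up, PG count depends on the cluster):
> >
> >     # ceph osd erasure-code-profile set ec72 k=7 m=2 \
> >           crush-failure-domain=host
> >     # ceph osd pool create ecpool 1024 1024 erasure ec72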
> >
> >     BTW, Currently testing the Luminous RC, it's looking really nice!
> >
> >     thanks,
> >
> >     Jake
> >
> >