I've also had some luck with the following CRUSH ruleset; the erasure profile failure domain is set to OSD:
rule ecpool_test2 {
    ruleset 3
    type erasure
    min_size 3
    max_size 20
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take ceph1
    step chooseleaf indep 2 type osd
    step emit
    step take ceph2
    step chooseleaf indep 2 type osd
    step emit
    step take ceph3
    step chooseleaf indep 2 type osd
    step emit
    step take ceph4
    step chooseleaf indep 2 type osd
    step emit
}
Is this a sane way to withstand 1 host failure plus 1 disk failure (k=5 m=3)
while making sure no more than 2 chunks of each PG end up on any one host? Or
does anybody have any clever ideas?
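As a sanity check on the rule above (a rough model, not Ceph code, with hypothetical host names matching the ruleset), each "step take cephN / chooseleaf indep 2 type osd / emit" block contributes 2 OSDs from one host, so the rule emits exactly k+m = 8 placements with at most 2 per host:

```python
# Rough model of the ruleset above (not Ceph code): each
# "step take cephN / step chooseleaf indep 2 type osd / step emit"
# block contributes 2 OSDs from that host.
from collections import Counter

k, m = 5, 3
hosts = ["ceph1", "ceph2", "ceph3", "ceph4"]
placement = [h for h in hosts for _ in range(2)]

assert len(placement) == k + m                 # rule emits exactly k+m = 8 chunks
assert max(Counter(placement).values()) == 2   # never more than 2 chunks per host
print(placement)
```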
Nick
-----Original Message-----
From: ceph-users [mailto:[email protected]] On Behalf Of
Nick Fisk
Sent: 06 January 2015 07:43
To: 'Loic Dachary'; [email protected]
Subject: Re: [ceph-users] Erasure Encoding Chunks > Number of Hosts
Hi Loic,
That's an interesting idea. I suppose the same could probably be achieved by
just creating several CRUSH host buckets for each actual host and then
treating the physical host as a chassis (chassis-1 contains host-1-A,
host-1-B, etc.).
I was thinking about this some more, and I don't think my original idea of
k=6 m=2 will allow me to sustain a host + disk failure, as that would involve
3 failed chunks in total (assuming the 2 chunks on the failed host are lost).
I believe k=5 m=3 would be a better match.
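The arithmetic behind that (an illustrative sketch, nothing Ceph-specific): with 8 chunks placed 2 per host across 4 hosts, losing a whole host plus one more disk costs 3 chunks, which exceeds m=2 but not m=3:

```python
# Illustrative check (not Ceph code): data is readable as long as
# at least k of the k+m chunks survive, i.e. at most m chunks are lost.
def survives(k, m, chunks_lost):
    """True if an EC pool with parameters k, m can still be decoded."""
    return chunks_lost <= m

chunks_lost = 2 + 1  # one host (2 chunks) plus one disk elsewhere (1 chunk)
print(survives(6, 2, chunks_lost))  # False: 3 lost > m=2
print(survives(5, 3, chunks_lost))  # True:  3 lost == m=3
```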
Nick
-----Original Message-----
From: Loic Dachary [mailto:[email protected]]
Sent: 05 January 2015 17:38
To: Nick Fisk; [email protected]
Subject: Re: [ceph-users] Erasure Encoding Chunks > Number of Hosts
Hi Nick,
What about subdividing your hosts using containers? For instance, four
containers per host on your four hosts gives you 16 hosts. When you add more
hosts you move containers around and reduce the number of containers per
host, but you don't need to change the rulesets.
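A rough sketch of that bookkeeping (hypothetical bucket names, nothing Ceph-specific): a fixed set of 16 "container" buckets round-robined over however many physical hosts exist, so the ruleset keeps seeing 16 host-level buckets as the cluster grows:

```python
# Hypothetical sketch of the container idea: 16 fixed logical buckets,
# spread over the current number of physical hosts. The CRUSH ruleset
# would keep addressing the same 16 buckets either way.
from collections import Counter

def assign_containers(n_physical, n_containers=16):
    return {f"container-{i}": f"host-{i % n_physical}"
            for i in range(n_containers)}

four = assign_containers(4)   # 4 containers land on each physical host
eight = assign_containers(8)  # after expansion: 2 containers per host

print(max(Counter(four.values()).values()),
      max(Counter(eight.values()).values()))
```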
Cheers
On 05/01/2015 17:58, Nick Fisk wrote:
> Hi All,
>
>
>
> Would anybody have an idea a) if it's possible and b) if it's a good idea
>
> to have more EC chunks than the total number of hosts?
>
>
>
> For instance, if I wanted to have k=6 m=2, but only across 4 hosts, and I
> wanted to be able to withstand 1 host failure and 1 disk failure (any
> host), would a crush map rule be able to achieve that?
>
>
>
> I.e. it would instruct data to be first split evenly across hosts and
> then across OSDs?
>
>
>
> If I set the erasure profile failure domain to OSD and the crushmap to
> chooseleaf host, will this effectively achieve what I have described?
>
>
>
> I would be interested in doing this for two reasons: one being better
> capacity than k=2 m=2, and the other is that when I expand this cluster in
> the near future to 8 hosts I won't have to worry about re-creating the
> pool. I fully understand I would forfeit the ability to withstand losing 2
> hosts, but I would think that to be quite an unlikely event having only 4
> hosts to start with.
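The capacity comparison is simple arithmetic (nothing Ceph-specific): the usable fraction of raw storage in an EC pool is k/(k+m):

```python
# Usable capacity fraction of an erasure-coded pool.
def efficiency(k, m):
    return k / (k + m)

print(efficiency(2, 2))  # 0.5   -- k=2 m=2
print(efficiency(6, 2))  # 0.75  -- k=6 m=2
print(efficiency(5, 3))  # 0.625 -- k=5 m=3
```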
>
>
>
> Many thanks,
>
> Nick
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
Loïc Dachary, Artisan Logiciel Libre