[ceph-users] erasure coded pool

2015-02-20 Thread Deneau, Tom
Is it possible to run an erasure coded pool using the default k=2, m=2 profile on a single node? (This is just for functionality testing.) The single node has 3 OSDs. Replicated pools run fine. ceph.conf does contain: osd crush chooseleaf type = 0 -- Tom Deneau

Re: [ceph-users] erasure coded pool

2015-02-20 Thread Loic Dachary
Hi Tom, On 20/02/2015 22:59, Deneau, Tom wrote: Is it possible to run an erasure coded pool using default k=2, m=2 profile on a single node? (this is just for functionality testing). The single node has 3 OSDs. Replicated pools run fine. For k=2 m=2 to work you need four (k+m) OSDs. As
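
With only three OSDs in the node, a profile that fits is k=2, m=1 (three chunks in total, one per OSD). A minimal sketch, assuming the Firefly/Giant-era CLI of these threads; the profile and pool names are invented here, and ruleset-failure-domain=osd tells CRUSH to put each chunk on a distinct OSD rather than a distinct host:
# ceph osd erasure-code-profile set testprofile k=2 m=1 ruleset-failure-domain=osd
# ceph osd pool create ecpool 128 128 erasure testprofile
Keeping k=2, m=2 would need a fourth OSD, since each of the k+m chunks must land on its own OSD.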

Re: [ceph-users] erasure coded pool why ever k>1?

2015-01-22 Thread Loic Dachary
Hi, On 22/01/2015 16:37, Chad William Seys wrote: Hi Loic, The size of each chunk is object size / K. If you have K=1 and M=2 it will be the same as 3 replicas with none of the advantages ;-) Interesting! I did not see this explained so explicitly. So is the general explanation of k
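
To make the chunk arithmetic concrete (the object size below is purely illustrative): an object is cut into k data chunks of size object size / K, plus m coding chunks of the same size, so the on-disk footprint is (k+m)/k times the object:
4 MB object, k=1, m=2: 3 chunks x 4 MB = 12 MB stored (3.0x, same as 3 replicas)
4 MB object, k=2, m=2: 4 chunks x 2 MB = 8 MB stored (2.0x)
4 MB object, k=4, m=2: 6 chunks x 1 MB = 6 MB stored (1.5x)
Every row tolerates two OSD failures (m=2); raising k only shrinks the overhead.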

Re: [ceph-users] erasure coded pool why ever k>1?

2015-01-22 Thread Chad William Seys
Hi Loic, The size of each chunk is object size / K. If you have K=1 and M=2 it will be the same as 3 replicas with none of the advantages ;-) Interesting! I did not see this explained so explicitly. So is the general explanation of k and m something like: k, m: fault tolerance of m+1

[ceph-users] erasure coded pool why ever k>1?

2015-01-21 Thread Chad William Seys
Hello all, What reasons would one want k>1? I read that m determines the number of OSDs which can fail before data loss. But I don't see explained how to choose k. Any benefits for choosing k>1? Thanks! Chad.

Re: [ceph-users] erasure coded pool why ever k>1?

2015-01-21 Thread Loic Dachary
On 21/01/2015 22:42, Chad William Seys wrote: Hello all, What reasons would one want k>1? I read that m determines the number of OSDs which can fail before data loss. But I don't see explained how to choose k. Any benefits for choosing k>1? The size of each chunk is object size / K. If you

Re: [ceph-users] erasure coded pool why ever k>1?

2015-01-21 Thread Don Doerner
Of Loic Dachary Sent: 21 January, 2015 15:18 To: Chad William Seys; ceph-users@lists.ceph.com Subject: Re: [ceph-users] erasure coded pool why ever k>1? On 21/01/2015 22:42, Chad William Seys wrote: Hello all, What reasons would one want k>1? I read that m determines the number of OSDs which

[ceph-users] erasure coded pool k=7,m=5

2014-12-23 Thread Stéphane DUGRAVOT
Hi all, Soon we should have a 3-datacenter (dc) Ceph cluster with 4 hosts in each dc. Each host will have 12 OSDs. We can accept the loss of one datacenter plus one host in the remaining 2 datacenters. In order to use an erasure coded pool: 1. Is the solution for a strategy k = 7, m =

Re: [ceph-users] erasure coded pool k=7,m=5

2014-12-23 Thread Loic Dachary
Hi Stéphane, On 23/12/2014 14:34, Stéphane DUGRAVOT wrote: Hi all, Soon we should have a 3-datacenter (dc) Ceph cluster with 4 hosts in each dc. Each host will have 12 OSDs. We can accept the loss of one datacenter plus one host in the remaining 2 datacenters. In order to use erasure
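
The arithmetic behind k=7, m=5 for this layout: 7+5 = 12 chunks spread over the 12 hosts puts 4 chunks in each datacenter, so losing a whole datacenter destroys 4 chunks and losing one more host a 5th, exactly the m=5 the pool can absorb. A sketch of a matching profile (names and PG count are examples, not from the thread; a hand-written CRUSH rule would still be needed to force 4 chunks per datacenter):
# ceph osd erasure-code-profile set dc3profile k=7 m=5 ruleset-failure-domain=host
# ceph osd pool create ecpool75 1024 1024 erasure dc3profile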

Re: [ceph-users] Erasure coded pool suitable for MDS?

2014-06-20 Thread Loic Dachary
On 20/06/2014 00:06, Erik Logtenberg wrote: Hi Loic, That is a nice idea. And if I then use newfs against that replicated cache pool, it'll work reliably? It will not be limited by the erasure coded pool features, indeed. Cheers Kind regards, Erik. On 06/19/2014 11:09 PM, Loic
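
The replicated cache pool Loic suggests is attached with the standard tiering commands; a sketch assuming the ecdata pool from this thread and a new replicated pool whose name, cachepool, is invented here:
# ceph osd pool create cachepool 128 128
# ceph osd tier add ecdata cachepool
# ceph osd tier cache-mode cachepool writeback
# ceph osd tier set-overlay ecdata cachepool
Clients (and the MDS) then talk to cachepool, and objects are flushed down to the erasure coded ecdata behind it.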

[ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Erik Logtenberg
Hi, Are erasure coded pools suitable for use with MDS? I tried to give it a go by creating two new pools like so: # ceph osd pool create ecdata 128 128 erasure # ceph osd pool create ecmetadata 128 128 erasure Then looked up their IDs: # ceph osd lspools ..., 6 ecdata, 7 ecmetadata # ceph
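
Tying the thread together with Loic's reply above: the pools given to newfs should both be replicated, with the erasure coded pool hidden behind a replicated cache tier. A hedged sketch of the era's command, with both pool IDs purely illustrative (8 = a replicated metadata pool, 9 = the replicated cache pool fronting ecdata):
# ceph mds newfs 8 9 --yes-i-really-mean-it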