Thank you for your reply.

I am finding the profile structure confusing to understand.
Consider a cluster of 8 OSD servers with 3 disks on each server.

If I use a profile with k=5, m=3, and ruleset-failure-domain=host:

Encoding rate (r): r = k / n, where n = k + m = 8, so r = 5/8 = 0.625
Storage required: 1 / r = 1 / 0.625 = 1.6 times the original data
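To double-check my own arithmetic, here it is as a tiny script (just a sketch of the math above, nothing Ceph-specific; the variable names are mine):

```python
# Sanity check of the erasure-code overhead arithmetic (not Ceph code).
k, m = 5, 3          # k data chunks, m coding chunks
n = k + m            # total chunks stored per object
r = k / n            # encoding rate
overhead = n / k     # raw storage used per unit of user data

print(n, r, overhead)  # 8 0.625 1.6
```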

Is this correct? And, more importantly, will the profile work without
issues?

As far as I understand, it can tolerate the failure of 3 OSDs and 1 host.
Am I right?
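The way I am picturing it, as a toy model (my assumption, which may be wrong: with ruleset-failure-domain=host, each of the k+m chunks of a PG must land on a distinct host):

```python
# Toy model of my assumption: with a host failure domain, each of the
# k+m chunks of a PG must be placed on a different host.
def profile_fits(k, m, num_hosts):
    return k + m <= num_hosts

print(profile_fits(5, 3, 8))    # True:  8 chunks fit across 8 hosts
print(profile_fits(10, 4, 8))   # False: 14 chunks would need 14 hosts
```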

I can't find much information at this link:
http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/

Is there a better article I can refer to?
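For reference, this is roughly how I am creating the profile and the pool (the profile and pool names here are mine, and I understand newer releases spell the option crush-failure-domain instead of ruleset-failure-domain):

```shell
# Create an EC profile with k=5, m=3 and a host failure domain,
# then a pool that uses it ("ecprofile53" and "ecpool" are my names).
ceph osd erasure-code-profile set ecprofile53 \
    k=5 m=3 ruleset-failure-domain=host
ceph osd erasure-code-profile get ecprofile53   # verify the settings
ceph osd pool create ecpool 128 128 erasure ecprofile53
```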


Karun Josy

On Tue, Oct 24, 2017 at 1:23 AM, David Turner <[email protected]> wrote:

> This can be changed to a failure domain of OSD, in which case it could
> satisfy the criteria.  The problem with a failure domain of OSD is that
> all of your data could reside on a single host, and you could lose access to
> your data after restarting a single host.
>
> On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles <[email protected]>
> wrote:
>
>> Hi,
>>
>> the default failure domain, if none is specified on the CLI when you
>> create your EC profile, is host. So by default you need 14 OSDs spread
>> across 14 different nodes, and you only have 8 different nodes.
>>
>> Regards
>> JC
>>
>> On 23 Oct 2017, at 21:13, Karun Josy <[email protected]> wrote:
>>
>> Thank you for the reply.
>>
>> There are 8 OSD nodes with 23 OSDs in total. (However, they are not
>> distributed equally across all nodes.)
>>
>> So it satisfies that criteria, right?
>>
>>
>>
>> Karun Josy
>>
>> On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles <[email protected]>
>> wrote:
>>
>>> Hi,
>>>
>>> yes, you need at least as many OSDs as k+m. In your example you need
>>> a minimum of 14 OSDs for each PG to become active+clean.
>>>
>>> Regards
>>> JC
>>>
>>> On 23 Oct 2017, at 20:29, Karun Josy <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> While creating a pool with erasure code profile k=10, m=4, I get PG
>>> status as
>>> "200 creating+incomplete"
>>>
>>> While creating a pool with profile k=5, m=3, it works fine.
>>>
>>> The cluster has 8 OSD nodes with 23 disks in total.
>>>
>>> Are there any requirements for using the first profile?
>>>
>>> Karun
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>
>>
>
