Re: [ceph-users] erasure-code-profile: what's "w=" ?

2018-02-26 Thread Gregory Farnum
On Mon, Feb 26, 2018 at 5:09 AM Wolfgang Lendl <
wolfgang.le...@meduniwien.ac.at> wrote:

> hi,
>
> I have no idea what "w=8" means and can't find any hints in docs ...
> maybe someone can explain
>
>
> ceph 12.2.2
>
> # ceph osd erasure-code-profile get ec42
> crush-device-class=hdd
> crush-failure-domain=host
> crush-root=default
> jerasure-per-chunk-alignment=false
> k=4
> m=2
> plugin=jerasure
> technique=reed_sol_van
> w=8
>

I think that's exposing the "word" size it uses when doing the erasure
coding. It is technically configurable but I would not fuss about it.
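If you really wanted to experiment with it, w is a regular profile parameter
and can be set when creating a new profile; as far as I know the reed_sol_van
technique accepts 8, 16 or 32. A rough sketch only (the profile name here is
made up, and the default w=8 is almost always what you want):

# hypothetical profile name, just to illustrate setting w explicitly
ceph osd erasure-code-profile set ec42-w16 plugin=jerasure \
    technique=reed_sol_van k=4 m=2 w=16 \
    crush-device-class=hdd crush-failure-domain=host
ceph osd erasure-code-profile get ec42-w16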
-Greg

>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] erasure-code-profile: what's "w=" ?

2018-02-26 Thread Wolfgang Lendl
hi,

I have no idea what "w=8" means and can't find any hints in docs ...
maybe someone can explain


ceph 12.2.2

# ceph osd erasure-code-profile get ec42
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8


thx
wolfgang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-24 Thread jorpilo
That's a pretty hard question. I don't think it would speed up writes much,
because you end up writing the same amount of data, but I think that on a 4+4 setup
rebuilding or serving data while a node is down will go faster and use
fewer resources, because it only has to rebuild smaller chunks of data.
Another question: when you create an EC pool you also create an EC CRUSH rule, so
what would happen if you set a failure domain of node but in the rule you
divide it by OSDs? How do the failure domain and the CRUSH rule interact?
 Original message  From: Oliver Humpage <oli...@watershed.co.uk> 
Date: 24/10/17 10:32 p.m. (GMT+01:00)  To: Karun Josy 
<karunjo...@gmail.com>  Cc: ceph-users <ceph-users@lists.ceph.com>  Subject: Re: 
[ceph-users] Erasure code profile 

Consider a cluster of 8 OSD servers with 3 disks on each server. 
If I use a profile setting of k=5, m=3 and  ruleset-failure-domain=host ;
As far as I understand it can tolerate failure of 3 OSDs and 1 host, am I right 
?
When setting up your pool, you specify a crush map which says what your 
"failure domain" is. You can think of a failure domain as "what's the largest 
single thing that could fail and the cluster would still survive?". By default 
this is a node (a server). Large clusters often use a rack instead.  Ceph 
places your data across the OSDs in your cluster so that if that large single 
thing (node or rack) fails, your data is still safe and available.
If you specify a single OSD (a disk) as your failure domain, then ceph might 
end up placing lots of data on different OSDs on the same node. This is a bad 
idea since if that node goes down you'll lose several OSDs, and so your data 
might not survive.
If you have 8 nodes, and erasure of 5+3, then with the default failure domain 
of a node your data will be spread across all 8 nodes (data chunks on 5 of 
them, and parity chunks on the other three). Therefore you could tolerate 3 
whole nodes failing. You are right that 5+3 encoding will result in 1.6x data 
disk usage.
If you were being pathological about minimising disk usage, I think you could 
in theory set a failure domain of an OSD, then use 8+2 encoding with a crush 
map that never used more than 2 OSDs in each node for a placement group. Then 
technically you could tolerate a node failure. I doubt anyone would recommend 
that though!
That said, here’s a question for others: say a cluster only has 4 nodes (each 
with many OSDs), would you use 2+2 or 4+4? Either way you use 2x data space and 
could lose 2 nodes (assuming a proper crush map), but presumably the 4+4 would 
be faster and you could lose more OSDs?
Oliver.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-24 Thread Oliver Humpage

> Consider a cluster of 8 OSD servers with 3 disks on each server. 
> 
> If I use a profile setting of k=5, m=3 and  ruleset-failure-domain=host ;
> 
> As far as I understand it can tolerate failure of 3 OSDs and 1 host, am I 
> right ?

When setting up your pool, you specify a crush map which says what your 
"failure domain" is. You can think of a failure domain as "what's the largest 
single thing that could fail and the cluster would still survive?". By default 
this is a node (a server). Large clusters often use a rack instead.  Ceph 
places your data across the OSDs in your cluster so that if that large single 
thing (node or rack) fails, your data is still safe and available.

If you specify a single OSD (a disk) as your failure domain, then ceph might 
end up placing lots of data on different OSDs on the same node. This is a bad 
idea since if that node goes down you'll lose several OSDs, and so your data 
might not survive.

If you have 8 nodes, and erasure of 5+3, then with the default failure domain 
of a node your data will be spread across all 8 nodes (data chunks on 5 of 
them, and parity chunks on the other three). Therefore you could tolerate 3 
whole nodes failing. You are right that 5+3 encoding will result in 1.6x data 
disk usage.

If you were being pathological about minimising disk usage, I think you could 
in theory set a failure domain of an OSD, then use 8+2 encoding with a crush 
map that never used more than 2 OSDs in each node for a placement group. Then 
technically you could tolerate a node failure. I doubt anyone would recommend 
that though!
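
If you did want to try it, the rule would look roughly like this (a hand-edited
sketch for a decompiled CRUSH map, with a made-up name and id): pick 5 hosts,
then 2 OSDs in each, giving the 10 chunks needed for 8+2. You would compile it
back in with crushtool and point the pool at it.

# sketch only: 5 hosts x 2 OSDs each = 10 chunks for an 8+2 pool
rule ec82_two_per_host {
	id 3
	type erasure
	min_size 3
	max_size 10
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take default
	step choose indep 5 type host
	step chooseleaf indep 2 type osd
	step emit
}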

That said, here’s a question for others: say a cluster only has 4 nodes (each 
with many OSDs), would you use 2+2 or 4+4? Either way you use 2x data space and 
could lose 2 nodes (assuming a proper crush map), but presumably the 4+4 would 
be faster and you could lose more OSDs?

Oliver.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-24 Thread Ronny Aasen
Yes, you can. But just like a RAID 5 array with a lost disk, it is not a 
comfortable way to run your cluster for any significant time. You also 
get performance degradation.


Having a warning active all the time also makes it harder to detect new 
issues; one becomes numb to the warning always being on.


Strive to have your cluster in HEALTH_OK all the time, and design so 
that you have the fault tolerance you want as overhead. Having more 
nodes than strictly needed allows Ceph to self-heal quickly, and also 
gives better performance by spreading load over more machines.

10+4 on 14 nodes means each and every node is hit on each write.


kind regards
Ronny Aasen


On 23 Oct 2017 21:12, Jorge Pinilla López wrote:
I have one question: what can or can't a cluster do when working in degraded 
mode?


With k=10 + m=4, if one of my OSD nodes fails it will start working in 
degraded mode, but can I still do writes and reads from that pool?



On 23/10/2017 at 21:01, Ronny Aasen wrote:

On 23.10.2017 20:29, Karun Josy wrote:

Hi,

While creating a pool with erasure code profile k=10, m=4, I get PG 
status as

"200 creating+incomplete"

While creating pool with profile k=5, m=3 it works fine.

Cluster has 8 OSDs with total 23 disks.

Is there any requirements for setting the first profile ?



You need K+M+X OSD nodes. K and M come from the profile; X is how 
many nodes you want to be able to tolerate failure of without 
becoming degraded (i.e. how many failed nodes Ceph should be able to 
heal from automatically).


So with k=10 + m=4 you need a minimum of 14 nodes, and you have 0 fault 
tolerance (a single failure = a degraded cluster), so you have to 
scramble to replace the node to get HEALTH_OK again. If you have 15 
nodes you can lose 1 node and Ceph will automatically rebalance onto 
the 14 needed nodes, and you can replace the lost node at your leisure.


kind regards
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--

*Jorge Pinilla López*
jorp...@unizar.es
Computer engineering student
Intern at the systems department (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A 





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread Jorge Pinilla López
If you use an OSD failure domain and a node goes down, you can lose your
data and the cluster won't be able to work.

If you restart the OSDs it might work, but you could even lose your data,
as your cluster can't rebuild itself.

You can try to work out where the CRUSH rule is going to place your data, but I
wouldn't take that risk.

If you have 8 nodes, maybe you could use k=8 and m=2, divided across the nodes
so you would have 6 nodes with 1 chunk and 2 nodes with 2 chunks; that way, if
you are unlucky and lose a node holding 2 chunks, you can still rebuild the data.


On 23/10/2017 at 21:53, David Turner wrote:
> This can be changed to a failure domain of OSD, in which case it could
> satisfy the criteria. The problem with a failure domain of OSD is
> that all of your data could reside on a single host, and you could lose
> access to your data after restarting a single host.
>
> On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles wrote:
>
> Hi,
>
> the default failure domain if not specified on the CLI at the
> moment you create your EC profile is set to HOST. So you need 14
> OSDs spread across 14 different nodes by default. And you only
> have 8 different nodes.
>
> Regards
> JC
>
>> On 23 Oct 2017, at 21:13, Karun Josy wrote:
>>
>> Thank you for the reply.
>>
>> There are 8 OSD nodes with 23 OSDs in total. (However, they are
>> not distributed equally on all nodes)
>>
>> So it satisfies that criteria, right?
>>
>>
>>
>> Karun Josy
>>
>> On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles wrote:
>>
>> Hi,
>>
>> yes you need as many OSDs that k+m is equal to. In your
>> example you need a minimum of 14 OSDs for each PG to become
>> active+clean.
>>
>> Regards
>> JC
>>
>>> On 23 Oct 2017, at 20:29, Karun Josy wrote:
>>>
>>> Hi,
>>>
>>> While creating a pool with erasure code profile k=10, m=4, I
>>> get PG status as
>>> "200 creating+incomplete"
>>>
>>> While creating pool with profile k=5, m=3 it works fine.
>>>
>>> Cluster has 8 OSDs with total 23 disks.
>>>
>>> Is there any requirements for setting the first profile ?
>>>
>>> Karun 
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 

*Jorge Pinilla López*
jorp...@unizar.es
Computer engineering student
Intern at the systems department (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread David Turner
This can be changed to a failure domain of OSD, in which case it could
satisfy the criteria. The problem with a failure domain of OSD is that
all of your data could reside on a single host, and you could lose access to
your data after restarting a single host.

On Mon, Oct 23, 2017 at 3:23 PM LOPEZ Jean-Charles 
wrote:

> Hi,
>
> the default failure domain if not specified on the CLI at the moment you
> create your EC profile is set to HOST. So you need 14 OSDs spread across 14
> different nodes by default. And you only have 8 different nodes.
>
> Regards
> JC
>
> On 23 Oct 2017, at 21:13, Karun Josy  wrote:
>
> Thank you for the reply.
>
> There are 8 OSD nodes with 23 OSDs in total. (However, they are not
> distributed equally on all nodes)
>
> So it satisfies that criteria, right?
>
>
>
> Karun Josy
>
> On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles 
> wrote:
>
>> Hi,
>>
>> yes you need as many OSDs that k+m is equal to. In your example you need
>> a minimum of 14 OSDs for each PG to become active+clean.
>>
>> Regards
>> JC
>>
>> On 23 Oct 2017, at 20:29, Karun Josy  wrote:
>>
>> Hi,
>>
>> While creating a pool with erasure code profile k=10, m=4, I get PG
>> status as
>> "200 creating+incomplete"
>>
>> While creating pool with profile k=5, m=3 it works fine.
>>
>> Cluster has 8 OSDs with total 23 disks.
>>
>> Is there any requirements for setting the first profile ?
>>
>> Karun
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread LOPEZ Jean-Charles
Hi,

the default failure domain, if not specified on the CLI when you create 
your EC profile, is host. So by default you need 14 OSDs spread across 14 different 
nodes, and you only have 8 different nodes.
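
If you wanted to keep 10+4 on only 8 nodes you would have to set the failure
domain explicitly when creating the profile, roughly like the sketch below
(the profile and pool names are only examples; note that an osd failure
domain means several chunks can land on the same host, which is usually a bad
idea):

# example only: an osd failure domain lets chunks share a host
ceph osd erasure-code-profile set ec104-osd k=10 m=4 \
    plugin=jerasure crush-failure-domain=osd
ceph osd pool create ecpool104 128 128 erasure ec104-osd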

Regards
JC

> On 23 Oct 2017, at 21:13, Karun Josy  wrote:
> 
> Thank you for the reply.
> 
> There are 8 OSD nodes with 23 OSDs in total. (However, they are not 
> distributed equally on all nodes)
> 
> So it satisfies that criteria, right?
> 
> 
> 
> Karun Josy
> 
> On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles wrote:
> Hi,
> 
> yes you need as many OSDs that k+m is equal to. In your example you need a 
> minimum of 14 OSDs for each PG to become active+clean.
> 
> Regards
> JC
> 
>> On 23 Oct 2017, at 20:29, Karun Josy wrote:
>> 
>> Hi,
>> 
>> While creating a pool with erasure code profile k=10, m=4, I get PG status as
>> "200 creating+incomplete"
>> 
>> While creating pool with profile k=5, m=3 it works fine.
>> 
>> Cluster has 8 OSDs with total 23 disks.
>> 
>> Is there any requirements for setting the first profile ?
>> 
>> Karun 
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
>> 
> 
> 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread Karun Josy
Thank you for the reply.

There are 8 OSD nodes with 23 OSDs in total. (However, they are not
distributed equally on all nodes)

So it satisfies that criteria, right?



Karun Josy

On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles 
wrote:

> Hi,
>
> yes you need as many OSDs that k+m is equal to. In your example you need a
> minimum of 14 OSDs for each PG to become active+clean.
>
> Regards
> JC
>
> On 23 Oct 2017, at 20:29, Karun Josy  wrote:
>
> Hi,
>
> While creating a pool with erasure code profile k=10, m=4, I get PG status
> as
> "200 creating+incomplete"
>
> While creating pool with profile k=5, m=3 it works fine.
>
> Cluster has 8 OSDs with total 23 disks.
>
> Is there any requirements for setting the first profile ?
>
> Karun
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread Jorge Pinilla López
I have one question: what can or can't a cluster do when working in degraded
mode?

With k=10 + m=4, if one of my OSD nodes fails it will start working in
degraded mode, but can I still do writes and reads from that pool?


On 23/10/2017 at 21:01, Ronny Aasen wrote:
> On 23.10.2017 20:29, Karun Josy wrote:
>> Hi,
>>
>> While creating a pool with erasure code profile k=10, m=4, I get PG
>> status as
>> "200 creating+incomplete"
>>
>> While creating pool with profile k=5, m=3 it works fine.
>>
>> Cluster has 8 OSDs with total 23 disks.
>>
>> Is there any requirements for setting the first profile ?
>
>
> You need K+M+X OSD nodes. K and M come from the profile; X is how
> many nodes you want to be able to tolerate failure of without
> becoming degraded (i.e. how many failed nodes Ceph should be able to
> heal from automatically).
>
> So with k=10 + m=4 you need a minimum of 14 nodes, and you have 0 fault
> tolerance (a single failure = a degraded cluster), so you have to
> scramble to replace the node to get HEALTH_OK again. If you have 15
> nodes you can lose 1 node and Ceph will automatically rebalance onto
> the 14 needed nodes, and you can replace the lost node at your leisure.
>
> kind regards
> Ronny Aasen
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 

*Jorge Pinilla López*
jorp...@unizar.es
Computer engineering student
Intern at the systems department (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread Ronny Aasen

On 23.10.2017 20:29, Karun Josy wrote:

Hi,

While creating a pool with erasure code profile k=10, m=4, I get PG 
status as

"200 creating+incomplete"

While creating pool with profile k=5, m=3 it works fine.

Cluster has 8 OSDs with total 23 disks.

Is there any requirements for setting the first profile ?



You need K+M+X OSD nodes. K and M come from the profile; X is how many 
nodes you want to be able to tolerate failure of without becoming 
degraded (i.e. how many failed nodes Ceph should be able to heal from automatically).


So with k=10 + m=4 you need a minimum of 14 nodes, and you have 0 fault 
tolerance (a single failure = a degraded cluster), so you have to 
scramble to replace the node to get HEALTH_OK again. If you have 15 
nodes you can lose 1 node and Ceph will automatically rebalance onto the 
14 needed nodes, and you can replace the lost node at your leisure.
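
With 8 hosts and the default host failure domain, the largest profile that can
even go active+clean is k+m=8. A sketch of something that fits (the names are
just examples), with the caveat above that you then have zero self-heal
headroom:

# example only: k+m=8 exactly matches the 8 hosts, so no spare host to heal onto
ceph osd erasure-code-profile set ec53 k=5 m=3 crush-failure-domain=host
ceph osd pool create ecpool53 128 128 erasure ec53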


kind regards
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Erasure code profile

2017-10-23 Thread LOPEZ Jean-Charles
Hi,

Yes, you need at least as many OSDs as k+m. In your example you need a 
minimum of 14 OSDs for each PG to become active+clean.

Regards
JC

> On 23 Oct 2017, at 20:29, Karun Josy  wrote:
> 
> Hi,
> 
> While creating a pool with erasure code profile k=10, m=4, I get PG status as
> "200 creating+incomplete"
> 
> While creating pool with profile k=5, m=3 it works fine.
> 
> Cluster has 8 OSDs with total 23 disks.
> 
> Is there any requirements for setting the first profile ?
> 
> Karun 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Erasure code profile

2017-10-23 Thread Karun Josy
Hi,

While creating a pool with erasure code profile k=10, m=4, I get PG status
as
"200 creating+incomplete"

While creating pool with profile k=5, m=3 it works fine.

Cluster has 8 OSDs with total 23 disks.

Is there any requirements for setting the first profile ?

Karun
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] erasure code profile

2017-09-25 Thread Vincent Godin
If you have at least 2 hosts per room, you can use k=3 and m=3 and
place 2 shards per room (one on each host). You then need only 3 shards to
read the data, so you can lose a room and one host in the two other
rooms and still get your data. It covers double faults, which is
better.
It will take more space: your proposal uses 1.5x the data; this one uses 2x the data.
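
A CRUSH rule for that layout could look roughly like this (a hand-written
sketch for a decompiled CRUSH map; the name and id are made up, and it assumes
your tree has room buckets): pick the 3 rooms, then 2 hosts in each, for the
6 chunks of a 3+3 pool.

# sketch: 3 rooms x 2 hosts each = 6 chunks for k=3, m=3
rule ec33_two_per_room {
	id 4
	type erasure
	min_size 3
	max_size 6
	step set_chooseleaf_tries 5
	step set_choose_tries 100
	step take default
	step choose indep 3 type room
	step chooseleaf indep 2 type host
	step emit
}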
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] erasure code profile

2017-09-23 Thread Eric Goirand

Hello Luis,

To find what EC profile would be best in your environment, you would 
need to know :


- how many disk or host failures would you accept: I understood from 
your email that you want to be able to lose one room, but won't you 
need a bit more, such as losing 1 disk (or 1 host) in another room 
while the first one is down?


- how many OSD nodes you can (or will) have per room, or will you adapt 
this number to the EC profile you set up?


With these two questions answered, you will be able to set the m parameter 
of the EC profile, and you would then need to compute the k parameter so 
that you have the same minimum number of OSD nodes per room, 
i.e. (k+m) / 3.


In each situation, you would then certainly need to adapt the CRUSH 
ruleset associated with the EC profile to place exactly (k+m) / 3 EC 
chunks per room, so that you still have access to all your data when one room 
is down.


Suppose that we only accept one room down and nothing more:

   - if m=1, k must be 2, as you arrived at yourself, and
   you would only have 1 OSD node per room.

   - if m=2, k will be 4 by the same logic; you would need 2
   OSD nodes per room, and you would need to change the EC 4+2 ruleset to
   place 2 chunks per room.

Suppose now that you want to allow for more downtime, for example being 
able to perform maintenance on one OSD node while one room is down; then 
you would need at least m = (number of OSD nodes in 1 room) + 1.


   - if I have 2 OSD nodes per room, m needs to be 3; by deduction
   k would be 3, and I would need exactly 2 ((3+3) / 3) chunks per room
   in the ruleset.

   - if I have 3 OSD nodes per room, then m=4 and k=5, and you would
   need 3 chunks per room.

Now, this is a minimum, and for a given EC profile (let's say 5+4) I 
would recommend having one spare OSD node per room so that you could 
backfill inside one room in case another OSD node is down.


Thus, if you can have 12 OSD nodes in total (4 OSD nodes per room), I 
would still use the EC 5+4 profile and change the ruleset to place 
exactly 3 chunks per room; the efficiency of your cluster will be about 55% 
(55 TiB usable per 100 TiB of raw capacity).
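
To change the ruleset you would typically decompile the CRUSH map, add an
erasure rule that picks 3 rooms and then 3 hosts in each (the 9 chunks of a
5+4 pool), recompile it and point the pool at the new rule. A rough sketch of
the workflow (pool and rule names are placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt and add a rule whose placement steps are:
#   step take default
#   step choose indep 3 type room
#   step chooseleaf indep 3 type host
#   step emit
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set <ec-pool> crush_rule <new-rule-name>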


Also remember that you would still need a good network between rooms 
(both for speed and latency) and powerful CPUs on OSD nodes to compute 
the EC chunks all the time.


Best Regards,

Eric.

On 09/22/2017 10:39 AM, Luis Periquito wrote:

Hi all,

I've been trying to think what will be the best erasure code profile,
but I don't really like the one I came up with...

I have 3 rooms that are part of the same cluster, and I need to design
so we can lose any one of the 3.

As this is a backup cluster I was thinking on doing a k=2 m=1 code,
with ruleset-failure-domain=room as the OSD tree is correctly built.

Can anyone think of a better profile?

thanks,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] erasure code profile

2017-09-22 Thread Luis Periquito
On Fri, Sep 22, 2017 at 9:49 AM, Dietmar Rieder
 wrote:
> Hmm...
>
> not sure what happens if you lose 2 disks in 2 different rooms; isn't
> there a risk that you lose data?

yes, and that's why I don't really like the profile...
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] erasure code profile

2017-09-22 Thread Dietmar Rieder
Hmm...

I am not sure what happens if you lose 2 disks in 2 different rooms; isn't
there a risk that you lose data?

Dietmar

On 09/22/2017 10:39 AM, Luis Periquito wrote:
> Hi all,
> 
> I've been trying to think what will be the best erasure code profile,
> but I don't really like the one I came up with...
> 
> I have 3 rooms that are part of the same cluster, and I need to design
> so we can lose any one of the 3.
> 
> As this is a backup cluster I was thinking on doing a k=2 m=1 code,
> with ruleset-failure-domain=room as the OSD tree is correctly built.
> 
> Can anyone think of a better profile?
> 
> thanks,
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
_
D i e t m a r  R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] erasure code profile

2017-09-22 Thread Luis Periquito
Hi all,

I've been trying to think what will be the best erasure code profile,
but I don't really like the one I came up with...

I have 3 rooms that are part of the same cluster, and I need to design
so we can lose any one of the 3.

As this is a backup cluster I was thinking on doing a k=2 m=1 code,
with ruleset-failure-domain=room as the OSD tree is correctly built.

Can anyone think of a better profile?

thanks,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com