hi,
I have no idea what "w=8" means and can't find any hints in the docs ...
maybe someone can explain.
ceph 12.2.2
# ceph osd erasure-code-profile get ec42
crush-device-class=hdd
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
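(As far as I can tell, with the jerasure plugin w is the word size in bits
used for its Galois-field arithmetic; for technique=reed_sol_van it can be
8, 16 or 32, and it defaults to 8, which is why it shows up in the profile
without having been set. A minimal sketch of setting it explicitly, with a
hypothetical profile name "ec42-test":

# ceph osd erasure-code-profile set ec42-test \
      k=4 m=2 plugin=jerasure technique=reed_sol_van w=8 \
      crush-failure-domain=host crush-device-class=hdd
# ceph osd erasure-code-profile get ec42-test

In practice there is rarely a reason to change it from the default.)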
.uk>
Date: 24/10/17 10:32 p.m. (GMT+01:00) To: Karun Josy
<karunjo...@gmail.com> Cc: ceph-users <ceph-users@lists.ceph.com> Subject: Re:
[ceph-users] Erasure code profile
> Consider a cluster of 8 OSD servers with 3 disks on each server.
>
> If I use a profile setting of k=5, m=3 and ruleset-failure-domain=host;
>
> As far as I understand, it can tolerate failure of 3 OSDs and 1 host, am I
> right?
When setting up your pool, you specify a crush map which by default places
each shard in a separate failure domain of type host.
yes you can. but just like a raid5 array with a lost disk, it is not a
comfortable way to run your cluster for any significant time. you also
get performance degradation. and having a warning active all the time
makes it harder to detect new issues: one becomes numb to the warning.
If you use an OSD failure domain and a node goes down, you can lose your
data and the cluster won't be able to work. If you restart the OSD it
might work, but you could even lose your data as your cluster can't
rebuild itself.
You can try to know where the CRUSH rule is going to set your data but I
This can be changed to a failure domain of OSD, in which case it could
satisfy the criteria. The problem with a failure domain of OSD is that
all of your data could reside on a single host, and you could lose access to
your data after restarting a single host.
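(On the point about trying to see where the CRUSH rule will place data:
one way is to test the compiled crush map offline with crushtool. A sketch;
the rule id, the num-rep value of k+m, and the file name are assumptions:

# ceph osd getcrushmap -o crushmap.bin
# crushtool -i crushmap.bin --test --rule 1 --num-rep 8 --show-mappings

This prints the OSDs each sample PG would map to under that rule.)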
On Mon, Oct 23, 2017 at 3:23 PM
Hi,
the default failure domain, if not specified on the CLI at the moment you create
your EC profile, is host. So by default you need 14 OSDs spread across 14
different nodes, and you only have 8 nodes.
Regards
JC
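(A sketch of what would let a k=10/m=4 pool go active+clean on 8 nodes,
per the advice above: relax the failure domain to osd, accepting the
durability caveats discussed earlier in the thread. The profile and pool
names here are made up:

# ceph osd erasure-code-profile set ec104-osd k=10 m=4 crush-failure-domain=osd
# ceph osd pool create ecpool104 64 64 erasure ec104-osd
)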
> On 23 Oct 2017, at 21:13, Karun Josy
Thank you for the reply.
There are 8 OSD nodes with 23 OSDs in total (however, they are not
distributed equally across all nodes).
So it satisfies that criterion, right?
Karun Josy
On Tue, Oct 24, 2017 at 12:30 AM, LOPEZ Jean-Charles
wrote:
> Hi,
>
> yes you need as many
I have one question: what can or can't a cluster do while working in degraded
mode?
With k=10 + m=4, if one of my OSD nodes fails it will start working in
degraded mode, but can I still do writes and reads on that pool?
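(For what it's worth, the usual answer depends on the pool's min_size:
I/O continues as long as each PG still has at least min_size shards
available; for EC pools this defaults to k+1 as far as I know, i.e. 11
here. The pool name below is hypothetical:

# ceph osd pool get ecpool min_size
)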
On 23/10/2017 at 21:01, Ronny Aasen wrote:
> On 23.10.2017 20:29, Karun
On 23.10.2017 20:29, Karun Josy wrote:
Hi,
While creating a pool with erasure code profile k=10, m=4, I get PG
status as
"200 creating+incomplete"
While creating a pool with profile k=5, m=3 it works fine.
The cluster has 8 OSD nodes with 23 disks in total.
Are there any requirements for setting the
Hi,
yes, you need at least as many OSDs as k+m. In your example you need a
minimum of 14 OSDs for each PG to become active+clean.
Regards
JC
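(A quick sketch for confirming that this is the cause; "ecpool" stands in
for the real pool name:

# ceph pg dump_stuck inactive
# ceph osd pool get ecpool erasure_code_profile
)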
> On 23 Oct 2017, at 20:29, Karun Josy wrote:
>
> Hi,
>
> While creating a pool with erasure code profile k=10, m=4, I get
Hi,
While creating a pool with erasure code profile k=10, m=4, I get PG status
as
"200 creating+incomplete"
While creating a pool with profile k=5, m=3 it works fine.
The cluster has 8 OSD nodes with 23 disks in total.
Are there any requirements for setting the first profile?
Karun
If you have at least 2 hosts per room, you can use k=3 and m=3 and
place 2 shards per room (one on each host). So you'll need 3 shards to
read the data: you can lose a room and one host in the two other
rooms and still get your data. It covers double faults, which is
better. (A sketch of such a rule follows below.)
It will take more
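(A sketch of a CRUSH rule for that placement, picking 3 rooms and then 2
hosts in each, in decompiled-crushmap syntax; the rule name, id and root
are made up:

rule ec_k3m3_rooms {
    id 1
    type erasure
    min_size 6
    max_size 6
    step set_chooseleaf_tries 5
    step take default
    step choose indep 3 type room
    step chooseleaf indep 2 type host
    step emit
}
)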
Hello Luis,
To find which EC profile would be best in your environment, you would
need to know:
- how many disk or host failures you would accept: I understood from
your email that you want to be able to lose one room, but won't you
need a bit more, such as losing 1 disk (or 1 host) in
On Fri, Sep 22, 2017 at 9:49 AM, Dietmar Rieder
wrote:
> Hmm...
>
> not sure what happens if you lose 2 disks in 2 different rooms; isn't
> there a risk that you lose data?
yes, and that's why I don't really like the profile...
Hmm...
not sure what happens if you lose 2 disks in 2 different rooms; isn't
there a risk that you lose data?
Dietmar
On 09/22/2017 10:39 AM, Luis Periquito wrote:
> Hi all,
>
> I've been trying to think what will be the best erasure code profile,
> but I don't really like the one I
Hi all,
I've been trying to think what would be the best erasure code profile,
but I don't really like the one I came up with...
I have 3 rooms that are part of the same cluster, and I need to design
so we can lose any one of the 3.
As this is a backup cluster I was thinking of doing a k=2 m=1
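(For reference, that profile would look something like the sketch below;
the profile name is made up. Note that k=2/m=1 tolerates only a single lost
shard, so a lost room leaves no further margin, which is what the replies
in this thread are worried about:

# ceph osd erasure-code-profile set ec21-room k=2 m=1 crush-failure-domain=room
)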