Hello.
I get it. I will do a test and let you know.
Thank you very much.
Nguyen Huu Khoi
On Fri, Nov 24, 2023 at 5:01 PM Janne Johansson wrote:
On Fri, 24 Nov 2023 at 08:53, Nguyễn Hữu Khôi wrote:
>
> Hello.
> I have 10 nodes. My goal is to ensure that I won't lose data if 2 nodes
> fail.
Now you are mixing up terms here.
There is a difference between "the cluster stops" and "losing data".
If you have EC 8+2 and min_size 9, then when you lose two nodes, each PG
has only 8 chunks left: IO pauses until recovery, but no data is lost.
Hello.
I have 10 nodes. My goal is to ensure that I won't lose data if 2 nodes
fail.
Nguyen Huu Khoi
On Fri, Nov 24, 2023 at 2:47 PM Etienne Menguy wrote:
Hello,
How many nodes do you have?
> -Original Message-
> From: Nguyễn Hữu Khôi
> Sent: Friday, 24 November 2023 07:42
> To: ceph-users@ceph.io
> Subject: [ceph-users] [CEPH] Ceph multi nodes failed
>
> Hello guys.
>
> I see many docs and threads talking about OSD failures. I have a cluster
> with 10 nodes, and my goal is to ensure that I won't lose data if 2 nodes
> fail.
Hello.
I am reading.
Thank you for the information.
Nguyen Huu Khoi
On Fri, Nov 24, 2023 at 1:56 PM Eugen Block wrote:
Hi,
basically, with EC pools you usually have a min_size of k + 1 to
prevent data loss. There was a thread about that just a few days ago
on this list. So in your case your min_size is probably 9, which makes
IO pause in case two chunks become unavailable. If your crush failure
domain is host, two failed nodes leave only 8 of the required 9 chunks,
so IO stops on the affected PGs until at least one node recovers, but
the data itself remains intact.
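To make the min_size arithmetic in this thread concrete, here is a minimal
sketch (not Ceph code; the function name and the simplifying assumption of
one chunk per host are mine) of when an EC k+m pool keeps serving IO,
pauses, or actually loses data:

```python
# Sketch: "cluster stops" vs. "losing data" for an EC k+m pool.
# Assumptions (mine, for illustration): failure domain is host,
# exactly one chunk per host, min_size = k + 1 as recommended.

def pool_state(k: int, m: int, min_size: int, failed_hosts: int) -> str:
    """Return 'ok', 'io-paused', or 'data-lost' for an EC k+m pool."""
    surviving_chunks = k + m - failed_hosts
    if surviving_chunks < k:
        return "data-lost"   # fewer chunks than data shards: unrecoverable
    if surviving_chunks < min_size:
        return "io-paused"   # data still intact, but IO stops until recovery
    return "ok"

# EC 8+2 with min_size 9, as discussed in this thread:
print(pool_state(8, 2, 9, 1))  # ok        (9 chunks >= min_size)
print(pool_state(8, 2, 9, 2))  # io-paused (8 chunks < min_size, data safe)
print(pool_state(8, 2, 9, 3))  # data-lost (7 chunks < k)
```

This is exactly the distinction above: with two failed hosts the pool
pauses writes rather than risking data, and only a third failure would
actually lose chunks beyond what the code k can reconstruct.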