Hi,

Thank you so much for the information. We will review the settings.

One thing: if we want to use replica size=2 for the SSD pool, then since
the failure domain is host, it should make sure the two replicas end up on
two different hosts.
Is there any drawback to doing that?
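
For example, we are thinking of something along these lines (the rule and
pool names are only placeholders):

    # hypothetical names; device class "ssd", failure domain "host"
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd pool create ssd-pool 64 64 replicated ssd-rule
    ceph osd pool set ssd-pool size 2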

Regards,
Munna



On Thu, 9 Dec 2021, 20:35 Stefan Kooman, <[email protected]> wrote:

> On 12/9/21 13:01, Md. Hejbul Tawhid MUNNA wrote:
> > Hi,
> >
> > This is the ceph.conf used during the cluster deployment. The Ceph version is Mimic.
> >
> > osd pool default size = 3
> > osd pool default min size = 1
> > osd pool default pg num = 1024
> > osd pool default pgp num = 1024
> > osd crush chooseleaf type = 1
> > mon_max_pg_per_osd = 2048
> > mon_allow_pool_delete = true
> > mon_pg_warn_min_per_osd = 0
> > mon_pg_warn_max_per_osd = 0
> > osd_max_pg_per_osd_hard_ratio = 8
>
>
>  > osd pool default size = 3
>  > osd pool default min size = 2
>
>  > osd pool default pg num = 32
>  > osd pool default pgp num = 32
>
> ^^ It depends a lot on how many pools you plan to make, and on how many
> OSDs you have. Do not set it too high: it will lead to high memory usage
> on the OSDs and will get you into trouble when Ceph has to do recovery.
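>
> As a rough rule of thumb (not an exact formula), a common starting point
> is:
>
>     total PGs across all pools ~ (number of OSDs * 100) / replica size
>
> rounded to a power of two. E.g. 30 OSDs with size=3 gives
> (30 * 100) / 3 = 1000, so roughly 1024 PGs spread over all pools, not
> per pool.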
>
>  > mon_allow_pool_delete = false
>
> ^^ Do not accidentally allow pools to be deleted
>
> mon_max_pg_per_osd = 200
>
>
> Basically: leave the defaults alone, and do not set these options to
> crazy high or crazy low values.
>
> This seems almost like a deliberate attempt at making the Ceph cluster a
> time bomb.
>
> Your cluster is full. Don't make any of those changes now. First make
> sure you have enough capacity in your cluster and/or add OSDs. Then fix
> the pools to size=3 and min_size=2.
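>
> For example (the pool name is just a placeholder):
>
>     ceph osd pool set mypool size 3
>     ceph osd pool set mypool min_size 2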
>
> Gr. Stefan
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
