It can only be changed with a custom profile, not with the built-in
profiles; I am configuring it from the Ceph dashboard.
osd_mclock_scheduler_client_wgt=6 -> this is my setting
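You can confirm the weight is actually in effect from the CLI (osd.0 below
is just an example id; the weight is ignored unless the profile is custom):

  ceph config get osd osd_mclock_profile   # should print: custom
  ceph config show osd.0 | grep mclock     # runtime values on one OSD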
On Sat, 13 Jan 2024 at 02:19, Anthony D'Atri wrote:
>
> On Jan 12, 2024, at 03:31, Phong Tran Thanh wrote:
>
> Hi Yang and Anthony,
>
> I found the solution for this problem on 7200rpm HDD disks.
>
> When the cluster recovers from one or multiple disk failures, slow ops
> appear and then affect the cluster; we can change these configurations
> to reduce recovery IOPS.
From: Phong Tran Thanh
Sent: Friday, January 12, 2024 3:32 PM
To: David Yang
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: About ceph disk slowops effect to cluster
I updated the config:
osd_mclock_profile=custom
osd_mclock_scheduler_background_recovery_lim=0.2
osd_mclock_scheduler_background_recovery_res=0.2
osd_mclock_scheduler_client_wgt=6
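If you want to apply the same settings from the CLI instead of the
dashboard, something like this should work (assuming you want them
cluster-wide for all OSDs; the values are mine, not recommendations,
so adjust to your hardware):

  ceph config set osd osd_mclock_profile custom
  ceph config set osd osd_mclock_scheduler_background_recovery_lim 0.2
  ceph config set osd osd_mclock_scheduler_background_recovery_res 0.2
  ceph config set osd osd_mclock_scheduler_client_wgt 6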
On Fri, 12 Jan 2024 at 15:31, Phong Tran Thanh <tranphong...@gmail.com> wrote:
Hi Yang and Anthony,
I found the solution for this problem on 7200rpm HDD disks.
When the cluster recovers from one or multiple disk failures, slow ops
appear and then affect the cluster; we can change these configurations to
reduce recovery IOPS.
osd_mclock_profile=custom
osd_mclock_scheduler_background_recovery_lim=0.2
osd_mclock_scheduler_background_recovery_res=0.2
osd_mclock_scheduler_client_wgt=6
The 2*10Gbps shared network seems to be full (1.9GB/s).
Is it possible to reduce part of the workload and wait for the cluster
to return to a healthy state?
Tip: Erasure coding needs to collect all data blocks when recovering
data, so it takes up a lot of network card bandwidth and processor
resources.
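Rough arithmetic behind "seems to be full" (assuming roughly 95% usable
line rate, my estimate): 2 x 10 Gbps = 20 Gbps ≈ 2.5 GB/s raw, so about
2.3 GB/s usable after protocol overhead; a sustained 1.9 GB/s leaves very
little headroom. And with a k+m EC pool, rebuilding one chunk requires
reading k surviving chunks, so recovery traffic is multiplied by about k.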