(excerpt of "ceph osd df tree" output)
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
20  hdd    9.02330  1.0       9.0 TiB  2.8 TiB  1.1 TiB  5.6 MiB  3.8 GiB  6.2 TiB  31.55  0.90   20  up      osd.20
23  hdd    9.02330  ...       9.0 TiB  2.6 TiB  828 ...
...                 1.0       9.0 TiB  2.8 TiB  1.1 TiB  5.8 MiB  3.7 GiB  6.2 TiB  31.56  0.91   22  up      osd.31
34  hdd    9.02330  1.0       9.0 TiB  3.3 TiB  1.5 TiB  8.2 MiB  ...
...                                    ...      1.4 TiB  8.4 MiB  4.4 GiB  5.9 TiB  34.85  1.00   23  up      osd.46
           TOTAL              433 TiB  151 TiB   67 TiB  364 MiB  210 GiB  282 TiB  34.86
MIN/M...
I now concur you should increase the pg_num as a first step for this
cluster. Disable the pg autoscaler and increase the volumes pool to
pg_num 256. Then likely re-assess and make the next power-of-2 jump to
continue past 256, with the ultimate target of around 100-200 PGs per
OSD, which "ceph osd df tree" will show you in the PGs column.

Respectfully,
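For reference, a rough sketch of what the steps above translate to on the
command line. This is only an illustration: the pool name "volumes" is taken
from the thread, and the final pg_num should come from your own re-assessment,
not from this sketch.

  ceph osd pool set volumes pg_autoscale_mode off   # keep the autoscaler from undoing the manual change
  ceph osd pool set volumes pg_num 256              # first power-of-2 jump; pgp_num is adjusted automatically on Nautilus and later
  ceph osd df tree                                  # watch the PGS column move toward the 100-200 per-OSD target

As a sanity check on that target: with roughly 48 OSDs (433 TiB raw / ~9 TiB
per OSD in the output above) and 3x replication, 100-200 PGs per OSD works out
to on the order of 1600-3200 PGs summed across all pools.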
Dear team,

below is the output of the ceph df command and the ceph version I am
running:

ceph df
--- RAW ...
[output truncated]
--- POOLS ---
POOL                    ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   11       1.1 MiB        3  3.2 MiB      0     73 TiB
.rgw.root                2   32  3.7 KiB        8   96 KiB      0     73 TiB
default.rgw.log          3   32  3.6 KiB      209  408 KiB      0  ...
...
images                   7   32  878 GiB  112.50k  2.6 TiB   1.17     73 TiB
backups                  8   32      0 B        0      0 B      0     73 TiB
vms                      9   32  881 GiB  174.30k  2.5 TiB   1.13     73 TiB
testb...
root@ceph-mon1:~#
please advise accordingly

Michel
On Mon, Jan 29, 2024 at 9:48 PM Frank Schilder wrote:

> You will have to look at the output of "ceph df" and make a decision to
> ... re-evaluate. Take the
> time for it. The better you know your cluster and your users, the better
> the end result will be.
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ____
> From: Michel Niyoyita
> Sent: Monday, January 29, 2024 2:04 PM
> To: Janne Johansson
> Cc: Frank Schilder; E Taka; ceph-users
> Subject: Re: [ceph-users] Re: 6 pgs not deep-scrubbed in time
>
> This is how it is set ...
From: ...lingham
Sent: Monday, January 29, 2024 7:14 PM
To: Michel Niyoyita
Cc: Josh Baergen; E Taka; ceph-users
Subject: [ceph-users] Re: 6 pgs not deep-scrubbed in time
Respond back with "ceph versions" output
If your sole goal is to eliminate the not scrubbed in time errors you can
increase the aggressiveness of scrubbing by setting:
osd_max_scrubs = 2
The default in Pacific is 1.
If you are going to start tinkering manually with the pg_num you will want
to tu...
You need to be running at least 16.2.11 on the OSDs so that you have
the fix for https://tracker.ceph.com/issues/55631.
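A minimal illustration of the two checks suggested above. The value 2 is the
one mentioned in the thread, not a general recommendation, and the runtime
config injection shown is just one way to apply it:

  ceph versions                           # confirm every OSD reports at least 16.2.11
  ceph config set osd osd_max_scrubs 2    # raise scrub concurrency from the Pacific default of 1
  ceph config get osd osd_max_scrubs      # verify the new value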
On Mon, Jan 29, 2024 at 8:07 AM Michel Niyoyita wrote:

> I am running ceph pacific, version 16, ubuntu 20 OS, deployed using
> ceph-ansible.
>
> Michel

On Mon, Jan 29, 2024 at 4:47 PM Josh Baergen wrote:
> Make sure you're on a fairly recent version of Ceph before doing this,
> though.
>
> Josh

On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson wrote:
> On Mon, 29 Jan 2024 at 12:58, Michel Niyoyita wrote:
> >
> > Thank you Frank,
> >
> > All disks are HDDs. Would like to know if I can increase the num...
This is how it is set; if you suggest making some changes please advise.
Thank you.

ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1407
flags hashpspool stripe_width 0 pg_nu...
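Since the pool above shows autoscale_mode on, a quick way to see what the
autoscaler currently thinks of each pool is the command below (a generic
command, not output from this cluster; it requires the pg_autoscaler manager
module, which is on by default in Pacific):

  ceph osd pool autoscale-status    # per-pool SIZE, RATE, current PG_NUM and any suggested NEW PG_NUM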
Thank you Janne,

Is there no need to set flags like "ceph osd set nodeep-scrub"?

Thank you
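For context on the flag named in the question (the thread excerpt does not
answer whether it is needed here), it is a cluster-wide switch set and cleared
roughly like this:

  ceph osd set nodeep-scrub      # pause scheduling of new deep-scrubs; raises a HEALTH_WARN while set
  ceph osd unset nodeep-scrub    # resume deep-scrubbing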
On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson wrote:
On Mon, 29 Jan 2024 at 12:58, Michel Niyoyita wrote:
>
> Thank you Frank,
>
> All disks are HDDs. Would like to know if I can increase the number of PGs
> live in production without a negative impact on the cluster. If yes, which
> commands to use.

Yes. "ceph osd pool set <pool> pg_num <number>",
where the nu...
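If the increase is done live, the gradual split can be followed with commands
along these lines (the pool name "volumes" is assumed from the rest of the
thread):

  ceph osd pool get volumes pg_num    # the effective pg_num steps up toward the new target
  ceph -s                             # shows PGs backfilling/recovering while data moves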
...ch with disk performance.

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

____
From: Michel Niyoyita
Sent: Monday, January 29, 2024 7:42 AM
To: E Taka
Cc: ceph-users
Subject: [ceph-users] Re: 6 pgs not deep-scrubbed in time
Now they are increasing. On Friday I tried deep-scrubbing manually and they
were done successfully, but Monday morning I found that they have increased
to 37. Is it best to deep-scrub manually while we are using the cluster? If
not, what is the best way to address this?
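The manual deep-scrubs referred to here are presumably per-PG commands of this
form (the PG ID 6.78 is only an example, taken from elsewhere in the thread):

  ceph pg deep-scrub 6.78    # queue an immediate deep-scrub of one PG
  ceph health detail         # lists the PGs still flagged as not deep-scrubbed in time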
OSD 22 is there more often than the others. Other operations may be blocked
because a deep-scrub is not finished yet. I would remove OSD 22, just to be
sure about this: ceph orch osd rm osd.22
If this does not help, just add it again.

On Fri, 26 Jan 2024 at 08:05, Michel Niyoyita <mi... wrote:
It seems they are different OSDs, as shown here. How have you managed to
sort this out?

ceph pg dump | grep -F 6.78
dumped all
6.78  44268  0  0  00  1786796401180  0  10099  10099  active+clean
2024-01-26T03:51:26.781438+0200  1...
We had the same problem. It turned out that one disk was slowly dying. It
was easy to identify with the commands (in your case):

ceph pg dump | grep -F 6.78
ceph pg dump | grep -F 6.60
…

This command shows the OSDs of a PG in square brackets. If there is always
the same number, then you've found the ...
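A sketch of that check; the acting sets shown in the comments are hypothetical
output for illustration only, not data from this cluster:

  ceph pg dump pgs_brief | egrep '^6\.(78|60)'
  # 6.78  active+clean  [22,15,8]  22  [22,15,8]  22   <- hypothetical
  # 6.60  active+clean  [22,3,41]  22  [22,3,41]  22   <- hypothetical
  # If the same OSD id (here 22) appears in every affected PG's acting set,
  # that OSD's disk is the prime suspect; "ceph osd perf" latencies can help confirm it.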