[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-31 Thread Eugen Block
…I start to think that the root cause of the remapping is just the fact that the crush rule(s) contain(s) the…
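Not part of the quoted message, but as a minimal sketch of how the rule(s) in question can be reviewed with the standard tooling (the file names below are arbitrary):

    # export the cluster's binary CRUSH map and decompile it to text
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # inspect the rule definitions (take / choose / chooseleaf steps)
    grep -A 12 '^rule ' crush.txt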

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-24 Thread Eugen Block
…I start to think that the root cause of the remapping…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-24 Thread Frank Schilder
…Best regards, Frank Schilder … I start to think that the root cause of the remapping…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-24 Thread Eugen Block
…Hi Eugen, I'm at home now. Could you please check that all the remapped PGs have no shards on…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-24 Thread Eugen Block

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Frank Schilder
…Hi Eugen, I'm at home now. Could you please check that all the remapped PGs have no shards on the new OSDs, i.e. that it's just shuffling mappings around within the same set of OSDs under the rooms?

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Frank Schilder
Hi Eugen, I'm at home now. Could you please check that all the remapped PGs have no shards on the new OSDs, i.e. that it's just shuffling mappings around within the same set of OSDs under the rooms? If this is the case, it is possible that this is partly intentional and partly buggy. The remapping…
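As a rough way of checking that (not quoted from the thread; the column positions and the example PG id are assumptions and may differ between releases):

    # list PG id, up set and acting set, keeping only remapped PGs
    # (pgs_brief columns: PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY)
    ceph pg dump pgs_brief 2>/dev/null | awk '$2 ~ /remapped/ {print $1, $3, $5}'
    # or inspect a single PG explicitly (the PG id here is just an example)
    ceph pg map 2.1f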

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Eugen Block
…Hi Frank, thanks for chiming in here. Please correct me if this is wrong; assuming it's correct, I conclude the following. […] You assume correctly. Now, from your description it…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Eugen Block

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Eugen Block
…Hi Frank, thanks for chiming in here. Please correct me if this is wrong; assuming it's correct, I conclude the following…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Frank Schilder
…the crush map. Best regards, Frank Schilder…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Eugen Block
…and describe at which step exactly things start diverging from my expectations. Best regards, Frank Schilder…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Frank Schilder
…Hi again, I'm still wondering if I misunderstand some of the ceph concepts. Let's assume the choose_tries value is too low and ceph can't find…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-23 Thread Eugen Block
Hi again, I'm still wondering if I misunderstand some of the ceph concepts. Let's assume the choose_tries value is too low and ceph can't find enough OSDs for the remapping. I would expect that there are some PG chunks in remapping state or unknown or whatever, but why would it affect the…
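A minimal sketch of how one could look for such chunks, assuming the standard Ceph CLI:

    # overall PG state summary (shows unknown/remapped counts, if any)
    ceph pg stat
    # PGs that are not active, which covers PGs stuck in the unknown state
    ceph pg dump_stuck inactive
    # PGs that are currently remapped
    ceph pg ls remapped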

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-21 Thread Eugen Block
Thanks, Konstantin. It's been a while since I was last bitten by the choose_tries being too low... Unfortunately, I won't be able to verify that... But I'll definitely keep that in mind, or at least I'll try to. :-D Thanks! Quoting Konstantin Shalygin: Hi Eugen…

[ceph-users] Re: unknown PGs after adding hosts in different subtree

2024-05-21 Thread Konstantin Shalygin
Hi Eugen > On 21 May 2024, at 15:26, Eugen Block wrote: > step set_choose_tries 100 ... I think you should try increasing set_choose_tries to 200. Last year we had a Pacific EC 8+2 deployment across 10 racks, and even with 50 hosts the value of 100 did not work for us. k
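For reference, a sketch of how a higher value could be tested offline before applying it (the rule id, num-rep and file names below are placeholders, not taken from the thread):

    # in the decompiled CRUSH map (crush.txt), raise the retry budget inside the EC rule:
    #     step set_choose_tries 200
    # recompile and test the mappings offline
    crushtool -c crush.txt -o crush.new
    crushtool -i crush.new --test --rule 1 --num-rep 10 --show-bad-mappings
    # only if no bad mappings are reported, inject the new map:
    # ceph osd setcrushmap -i crush.new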