Hi again,

I'm still wondering if I misunderstand some of the Ceph concepts. Let's assume the choose_tries value is too low and Ceph can't find enough OSDs for the remapping. I would expect some PG chunks to be in a remapped or unknown state, but why would that affect the otherwise healthy cluster in such a way? Even if Ceph doesn't know where to put some of the chunks, I wouldn't expect inactive PGs and a service interruption.
What am I missing here?
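For reference, this is roughly how I'd check which PGs are actually inactive and why (the PG id 2.1f below is just a placeholder, take a real one from the output):

```shell
# List PGs stuck in an inactive state
ceph pg dump_stuck inactive

# Show the detailed health message, including the affected PGs
ceph health detail

# Query one of the affected PGs to see why it is not active
# (replace 2.1f with an actual PG id from the output above)
ceph pg 2.1f query
```

The "query" output should say in its recovery_state section what the PG is waiting for.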

Thanks,
Eugen

Quoting Eugen Block <ebl...@nde.ag>:

Thanks, Konstantin.
It's been a while since I was last bitten by choose_tries being too low... Unfortunately, I won't be able to verify that... But I'll definitely keep that in mind, or at least I'll try to. :-D

Thanks!

Quoting Konstantin Shalygin <k0...@k0ste.ru>:

Hi Eugen

On 21 May 2024, at 15:26, Eugen Block <ebl...@nde.ag> wrote:

step set_choose_tries 100

I think you should try increasing set_choose_tries to 200.
Last year we had a Pacific EC 8+2 deployment across 10 racks, and even with 50 hosts a value of 100 did not work for us.
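If it helps, this is the workflow I'd use to bump the value and verify the result offline with crushtool before injecting the map. The rule id (--rule 1) and replica count (--num-rep 10, for EC 8+2) are assumptions, adjust them to your setup:

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: in the EC rule, change
#   step set_choose_tries 100
# to
#   step set_choose_tries 200

# Recompile and test the mappings offline
# (--rule 1 and --num-rep 10 are placeholders for your EC rule)
crushtool -c crushmap.txt -o crushmap.new
crushtool -i crushmap.new --test --rule 1 --num-rep 10 --show-bad-mappings

# If no bad mappings are reported, inject the new map
ceph osd setcrushmap -i crushmap.new
```

--show-bad-mappings prints every input for which CRUSH could not find a full set of OSDs, so an empty output is a good sign that the new value is high enough.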


k


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
