[ceph-users] Re: Rebalance after draining - why?

2022-05-28 Thread denispolom
Hi,

draining is initialized by

    ceph osd crush reweight osd. 0

28. 5. 2022 22:09:05 Nico Schottelius:
> Good evening dear fellow Ceph'ers,
>
> when removing OSDs from a cluster, we sometimes use
>
>     ceph osd reweight osd.XX 0
>
> and wait until the OSD's content has been
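For context, the two commands discussed in this thread behave differently; the sketch below contrasts them (osd.12 is a placeholder id, not one from the thread, and a recent Ceph CLI is assumed):

    # Override weight: data is drained off the OSD, but its CRUSH weight
    # (and therefore the host's subtree weight) is unchanged.
    ceph osd reweight 12 0

    # CRUSH weight: the OSD's weight is removed from the CRUSH map, so the
    # host bucket shrinks and CRUSH recomputes placement across the cluster,
    # which can trigger the extra rebalance the subject line asks about.
    ceph osd crush reweight osd.12 0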

[ceph-users] Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients

2022-05-20 Thread denispolom
Hi,

no, the pool is EC.

20. 5. 2022 18:19:22 Dan van der Ster:
> Hi,
>
> Just a curiosity... It looks like you're comparing an EC pool in octopus to a
> replicated pool in nautilus. Does primary affinity work for you in octopus on
> a replicated pool? And does a nautilus EC pool work?
>
> ..
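As a rough illustration of the workaround under discussion (osd.12 is a placeholder, and whether this applies to EC pools is an assumption, not something the thread confirms), primary affinity can be lowered before draining so clients stop being served by a departing OSD:

    # Make osd.12 the least-preferred choice of primary (0 = avoid if possible).
    ceph osd primary-affinity osd.12 0

    # Check which PGs still report osd.12 as their primary.
    ceph pg ls-by-primary osd.12

Whether this is honored for EC pools in octopus is exactly the open question in the quoted reply.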

[ceph-users] Re: Drained OSDs are still ACTIVE_PRIMARY - causing high IO latency on clients

2022-05-20 Thread denispolom
Hi,

yes, I had to change the procedure also:

1. Stop the osd daemon
2. Mark the osd out in the crush map

But as you are writing, that makes PGs degraded. However, it still looks like a bug to me.

20. 5. 2022 17:25:47 Wesley Dillingham:
> This sounds similar to an inquiry I submitted a couple years ago [1]
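A minimal command-level sketch of the changed procedure above, assuming a systemd (non-cephadm) deployment and a placeholder OSD id of 12:

    # 1. Stop the OSD daemon.
    systemctl stop ceph-osd@12

    # 2. Mark the OSD out so its placement groups are remapped.
    ceph osd out 12

    # The affected PGs show as degraded until recovery has re-created
    # the missing replicas/shards on other OSDs.
    ceph -s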

[ceph-users] Re: monitor not joining quorum

2021-10-19 Thread denispolom
Hi Adam,

it's ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)

19. 10. 2021 18:19:29 Adam King:
> Hi Denis,
>
> Which ceph version is your cluster running on? I know there was an issue with
> mons getting dropped from the monmap (and therefore being stuck out
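One generic way to inspect the situation described here (not taken from the thread itself) is to compare the monmap with the current quorum:

    # Monitors known to the monmap.
    ceph mon dump

    # Monitors currently part of the quorum.
    ceph quorum_status --format json-pretty

A monitor that appears in the first list but not in the second matches the "not joining quorum" symptom in the subject.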

[ceph-users] Re: ceph IO are interrupted when OSD goes down

2021-10-18 Thread denispolom
no, disk utilization is around 86%. What is a safe value for min_size in this case?

18. 10. 2021 15:46:44 Eugen Block:
> Hi,
>
> min_size = k is not the safest option, it should be only used in case of
> disaster recovery. But in this case it's not related to IO interruption, it
> seems.
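As a hedged example of the guidance in the quoted reply: for a hypothetical k=4, m=2 erasure-coded pool named ecpool (both the profile and the pool name are assumptions), setting min_size to k+1 keeps a margin above the bare minimum needed to reconstruct data:

    # Inspect which EC profile (and thus which k and m) the pool uses.
    ceph osd pool get ecpool erasure_code_profile

    # With k=4, m=2: min_size = k + 1 = 5, so IO pauses only once
    # two of the six shards are unavailable.
    ceph osd pool set ecpool min_size 5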