Re: [ceph-users] Shall host weight auto reduce on hdd failure?

2019-12-05 Thread Milan Kupcevic
On 2019-12-05 02:33, Janne Johansson wrote: > On Thu 5 Dec 2019 at 00:28, Milan Kupcevic <milan_kupce...@harvard.edu> wrote: >> There is plenty of space to take more than a few failed nodes. But the question was about what is going on inside a node with a few failed drives.

Re: [ceph-users] Shall host weight auto reduce on hdd failure?

2019-12-04 Thread Janne Johansson
On Thu 5 Dec 2019 at 00:28, Milan Kupcevic <milan_kupce...@harvard.edu> wrote: > There is plenty of space to take more than a few failed nodes. But the question was about what is going on inside a node with a few failed drives. Current Ceph behavior keeps increasing the number of placement groups

Re: [ceph-users] Shall host weight auto reduce on hdd failure?

2019-12-04 Thread Milan Kupcevic
On 2019-12-04 04:11, Janne Johansson wrote: > On Wed 4 Dec 2019 at 01:37, Milan Kupcevic <milan_kupce...@harvard.edu> wrote: >> This cluster can handle this case at this moment as it has got plenty of free space. I wonder how this is going to play out when we get to 90% of usage
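
As background for the 90% figure (an editorial aside, not part of the original thread): Ceph warns at the nearfull ratio and blocks writes at the full ratio, by default 0.85 and 0.95. The active values and per-OSD utilization can be checked with commands along these lines:

  # Show the full/backfillfull/nearfull ratios currently in the OSD map
  ceph osd dump | grep ratio

  # Per-OSD and per-host CRUSH weight, reweight, %USE and PG count
  ceph osd df tree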

Re: [ceph-users] Shall host weight auto reduce on hdd failure?

2019-12-04 Thread Janne Johansson
On Wed 4 Dec 2019 at 01:37, Milan Kupcevic <milan_kupce...@harvard.edu> wrote: > This cluster can handle this case at this moment as it has got plenty of free space. I wonder how this is going to play out when we get to 90% of usage on the whole cluster. A single backplane failure in a node

[ceph-users] Shall host weight auto reduce on hdd failure?

2019-12-03 Thread Milan Kupcevic
On hdd failure the number of placement groups on the remaining osds on the same host goes up. I would expect the failed osd's placement groups to be redistributed across the whole cluster, not just within the troubled host. Shall the host weight auto reduce whenever an osd gets out? Exhibit 1: Attached osd-df-tree
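
A hedged sketch of the manual workaround implied by the question (Ceph does not do this automatically): marking an OSD out zeroes its reweight but leaves its CRUSH weight, and therefore the host bucket's weight, untouched, so its placement groups are remapped onto the surviving OSDs of the same host. Zeroing the failed OSD's CRUSH weight shrinks the host weight as well, so the data spreads across the cluster instead. The OSD id 42 is a placeholder.

  # Inspect CRUSH weights, reweights, utilization and PG counts per OSD/host
  ceph osd df tree

  # Marking the OSD out leaves the host's CRUSH weight unchanged:
  # its PGs go to the other OSDs in the same host
  ceph osd out 42

  # Setting the CRUSH weight to 0 also reduces the host bucket's weight,
  # so the PGs are redistributed across the cluster
  ceph osd crush reweight osd.42 0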