On 2019-12-05 02:33, Janne Johansson wrote:
> On Thu, 5 Dec 2019 at 00:28, Milan Kupcevic <milan_kupce...@harvard.edu> wrote:
>
>> There is plenty of space to take more than a few failed nodes. But the
>> question was about what is going on inside a node with a few failed
>> drives. Current Ceph behavior keeps increasing the number of placement

On 2019-12-04 04:11, Janne Johansson wrote:
> On Wed, 4 Dec 2019 at 01:37, Milan Kupcevic <milan_kupce...@harvard.edu> wrote:
>
>> This cluster can handle this case at this moment as it has got plenty of
>> free space. I wonder how this is going to play out when we get to 90%
>> usage on the whole cluster. A single backplane failure in a node

On HDD failure, the number of placement groups on the rest of the OSDs on
the same host goes up. I would expect the failed placement groups to be
redistributed evenly across the whole cluster, not just within the troubled
host. Should the host weight auto-reduce whenever an OSD goes out?
Exhibit 1: attached osd-df-tree output.
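
To make the "auto reduce" idea concrete: as far as I understand it, marking an
OSD out only drops its reweight to 0 while the host bucket keeps its full CRUSH
weight, so CRUSH keeps steering the same share of data at that host and the
surviving OSDs in it absorb all of it. Running "ceph osd crush reweight osd.N 0"
on the dead drive instead shrinks the host bucket, and the remapped placement
groups spread over the whole cluster. Below is a rough, untested sketch of the
kind of automation I have in mind; the JSON field names (reweight, crush_weight,
type, name) are what I recall from "ceph osd tree -f json" output and should be
verified against your release before use.

#!/usr/bin/env python3
# Rough, untested sketch: zero the CRUSH weight of any OSD that is marked
# "out" but still carries CRUSH weight, so the host bucket weight shrinks
# and the remapped PGs spread across the cluster instead of piling up on
# the sibling OSDs of the troubled host.

import json
import subprocess

DRY_RUN = True  # only print the commands; set to False to apply them


def ceph_json(*args):
    """Run a ceph CLI subcommand and return its parsed JSON output."""
    out = subprocess.check_output(["ceph", *args, "-f", "json"])
    return json.loads(out)


def main():
    tree = ceph_json("osd", "tree")
    for node in tree.get("nodes", []):
        if node.get("type") != "osd":
            continue
        is_out = node.get("reweight", 1.0) == 0.0       # marked out
        has_weight = node.get("crush_weight", 0.0) > 0.0
        if is_out and has_weight:
            name = node["name"]                          # e.g. "osd.42"
            print("%s is out but still has crush_weight %.5f"
                  % (name, node["crush_weight"]))
            cmd = ["ceph", "osd", "crush", "reweight", name, "0"]
            if DRY_RUN:
                print("would run: " + " ".join(cmd))
            else:
                subprocess.check_call(cmd)


if __name__ == "__main__":
    main()

If something like this were run periodically it would also need to record the
original CRUSH weight somewhere, since nothing restores the weight automatically
once the drive is replaced.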