Hi,
After setting chooseleaf_descend_once=0 and migrating 20% of the PGs, ceph is HEALTH_OK.
Unfortunately, the optimal value for "chooseleaf_descend_once" is 1 :-(
--
Regards
Dominik
2014-08-21 15:59 GMT+02:00 Dominik Mostowiec :
> Hi,
> I have 2 PGs in the active+remapped state.
>
> ceph health detail
> HEALTH_WARN 2 pgs stuck unclean; recovery 24/348041229 degraded (0.000%)
Hi,
I have 2 PGs in the active+remapped state.
ceph health detail
HEALTH_WARN 2 pgs stuck unclean; recovery 24/348041229 degraded (0.000%)
pg 3.1a07 is stuck unclean for 29239.046024, current state
active+remapped, last acting [167,80,145]
pg 3.154a is stuck unclean for 29239.039777, current state
active+remapped, last acting [...]
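For reference, an active+remapped PG is one whose acting set differs from the CRUSH-computed up set; the two can be compared with something like (using a pg id from the output above):
ceph pg map 3.1a07
ceph pg 3.1a07 query    # then look at the "up" and "acting" arrays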
After replacing the broken disk and running "ceph osd in" on it, the cluster shows:
ceph health detail
HEALTH_WARN 2 pgs stuck unclean; recovery 60/346857819 degraded (0.000%)
pg 3.884 is stuck unclean for 570722.873270, current state
active+remapped, last acting [143,261,314]
pg 3.154a is stuck unclean for 577659.917066, current state [...]
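A compact way to list just the stuck PGs together with their up and acting sets is, for example:
ceph pg dump_stuck unclean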
Hi,
After "ceph osd out" (1 osd), the cluster stopped rebalancing at:
10621 active+clean, 2 active+remapped, 1 active+degraded+remapped;
My crushmap is clean; there are no 'empty' devices:
grep device /tmp/crush1.txt | grep -v osd | grep -v '^#' | wc -l
0
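(For context, /tmp/crush1.txt is assumed to be the decompiled crushmap, obtained with something like:
ceph osd getcrushmap -o /tmp/crush1
crushtool -d /tmp/crush1 -o /tmp/crush1.txt
A removed OSD would show up there as a placeholder entry such as "device 12 device12" instead of "device 12 osd.12"; the grep above counts such placeholders, and 0 means there are none.)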
Can you help me with this?
"up": [