I think it's because you have OSDs that are too full, as the warning message shows.
I had a similar problem recently and ran:

ceph osd reweight-by-utilization

But first read up on what this command does. It solved the problem for me.
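
A rough first-aid sequence (a sketch only; the OSD id 3, the 110 threshold and
the 0.92 ratio below are example values, not anything specific to your cluster):

# see which OSDs are near full and which PGs are backfill_toofull
ceph health detail
ceph osd tree
# newer releases also have 'ceph osd df' for per-OSD utilization

# reweight OSDs that are more than 110% of the average utilization
ceph osd reweight-by-utilization 110

# or nudge a single overfull OSD down by hand
ceph osd reweight 3 0.9

Keep in mind your cluster is already quite full (5206 GB used of 6178 GB), so
reweighting only buys a bit of headroom; deleting data or adding OSDs is the
real fix. If your release supports it, temporarily raising
osd_backfill_full_ratio (e.g. ceph tell osd.* injectargs
'--osd-backfill-full-ratio 0.92') can let the stuck backfills proceed, but be
careful with that on a cluster this full.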

2014-10-20 14:45 GMT+02:00 Harald Rößler <[email protected]>:

> Dear All
>
> I have an issue with my cluster at the moment: the recovery process has stopped.
>
> ceph -s
>    health HEALTH_WARN 188 pgs backfill; 156 pgs backfill_toofull; 4 pgs
> backfilling; 55 pgs degraded; 49 pgs recovery_wait; 297 pgs stuck unclean;
> recovery 111487/1488290 degraded (7.491%)
>    monmap e2: 3 mons at {0=
> 10.99.10.10:6789/0,12=10.99.10.22:6789/0,6=10.99.10.16:6789/0}, election
> epoch 332, quorum 0,1,2 0,12,6
>    osdmap e6748: 24 osds: 23 up, 23 in
>     pgmap v43314672: 3328 pgs: 3031 active+clean, 43
> active+remapped+wait_backfill, 3 active+degraded+wait_backfill, 96
> active+remapped+wait_backfill+backfill_toofull, 31 active+recovery_wait, 19
> active+degraded+wait_backfill+backfill_toofull, 36 active+remapped, 3
> active+remapped+backfilling, 18 active+remapped+backfill_toofull, 6
> active+degraded+remapped+wait_backfill, 15 active+recovery_wait+remapped,
> 21 active+degraded+remapped+wait_backfill+backfill_toofull, 1
> active+recovery_wait+degraded, 1 active+degraded+remapped+backfilling, 2
> active+degraded+remapped+backfill_toofull, 2
> active+recovery_wait+degraded+remapped; 1698 GB data, 5206 GB used, 971 GB
> / 6178 GB avail; 24382B/s rd, 12411KB/s wr, 320op/s; 111487/1488290
> degraded (7.491%)
>
>
> I have tried restarting all OSDs in the cluster, but that did not help the
> cluster finish recovering.
>
> Does anyone have any idea?
>
> Kind Regards
> Harald Rößler
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
