Well, ok - I found the solution:
ceph health detail
HEALTH_WARN 50 pgs stale; 50 pgs stuck stale
pg 34.225 is stuck inactive since forever, current state creating, last acting []
pg 34.225 is stuck unclean since forever, current state creating, last acting []
pg 34.226 is stuck stale for 77328.923060, current state stale+active+clean, last acting [21]
pg 34.3cb is stuck stale for 77328.923213, current state stale+active+clean, last acting [21]
....
root@ceph-admin:~# ceph pg map 34.225
osdmap e18263 pg 34.225 (34.225) -> up [16] acting [16]
After restarting osd.16, pg 34.225 was fine.
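Instead of restarting every OSD, you can derive the set of affected OSDs from the "last acting [...]" column of `ceph health detail`. A sketch against the sample output quoted above (no cluster needed; the echo stands in for the real restart, and the `systemctl restart ceph-osd@N` form is an assumption for systemd-based installs - older setups use `service ceph restart osd.N`):

```shell
# Sample lines as printed by `ceph health detail` (from the output above).
detail='pg 34.226 is stuck stale for 77328.923060, current state stale+active+clean, last acting [21]
pg 34.3cb is stuck stale for 77328.923213, current state stale+active+clean, last acting [21]'

# Pull the OSD ids out of the "last acting [...]" brackets and de-duplicate.
osds=$(printf '%s\n' "$detail" | grep -Eo 'last acting \[[0-9,]+\]' \
        | grep -Eo '[0-9]+' | sort -un)

for osd in $osds; do
    # Real run (assumption, systemd): systemctl restart ceph-osd@$osd
    echo "would restart osd.$osd"
done
```

In the real run, replace `$detail` with `$(ceph health detail)` and the echo with the restart command for your init system.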
So I recreated all the broken PGs:
for pg in `ceph health detail | grep stale | cut -d' ' -f2`; do ceph pg force_create_pg $pg; done
and restarted all (or just the affected) OSDs.
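One caveat with that loop: a bare `grep stale` also matches the "HEALTH_WARN ... pgs stale" summary line, whose second field is a count rather than a PG id, so anchoring the grep on lines beginning with "pg " is a bit safer. A sketch against the sample output above (no cluster needed; the echo stands in for the real ceph call):

```shell
# Sample `ceph health detail` output, including the summary line
# that a bare `grep stale` would also match (second field "50").
sample='HEALTH_WARN 50 pgs stale; 50 pgs stuck stale
pg 34.226 is stuck stale for 77328.923060, current state stale+active+clean, last acting [21]
pg 34.3cb is stuck stale for 77328.923213, current state stale+active+clean, last acting [21]'

# Only lines that start with "pg " carry a PG id in field 2.
pgs=$(printf '%s\n' "$sample" | grep '^pg .*stale' | cut -d' ' -f2)

for pg in $pgs; do
    # Real run: ceph pg force_create_pg $pg
    echo "would run: ceph pg force_create_pg $pg"
done
```

In the real run, replace `$sample` with `$(ceph health detail)` and drop the echo.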
Now, the cluster is HEALTH_OK again.
root@ceph-admin:~# ceph health
HEALTH_OK
Best regards
Danny
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
