Hi,
Yesterday I removed two OSDs to replace them with new disks. Ceph was
not able to bring all PGs back to the active+clean state; some degraded
objects remain. However, the number of degraded objects is negative
(-82), see below:
2014-10-30 13:31:32.862083 mon.0 [INF] pgmap v209175: 768 pgs: 761
active+clean, 7 active+remapped; 1644 GB data, 2524 GB used, 17210 GB /
19755 GB avail; 2799 B/s wr, 1 op/s; -82/1439391 objects degraded (-0.006%)
According to "rados df", the -82 degraded objects are part of the
cephfs-data-cache pool, an SSD-backed replicated pool that functions as
a cache tier for an HDD-backed erasure-coded pool for CephFS.
The cache should be empty, because I issued the "rados
cache-flush-evict-all" command, and "rados -p cephfs-data-cache ls"
indeed shows zero objects in this pool.
"rados df" however does show 192 objects for this pool, with just 35KB
used and -82 degraded:
pool name          category  KB  objects  clones  degraded  unfound  rd    rd KB   wr       wr KB
cephfs-data-cache  -         35  192      0       -82       0        1119  348800  1198371  1703673493
Please advise...
Thanks,
Erik.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com