Interesting. Any idea why degraded could be negative? :) 

2015-07-02 17:27:11.551959 mon.0 [INF] pgmap v23198138: 36032 pgs: 35468 
active+clean, 551 active+recovery_wait, 13 active+recovering; 13005 GB data, 
48944 GB used, 21716 GB / 70660 GB avail; 11159KB/s rd, 129MB/s wr, 5059op/s; 
-2688/11949162 degraded (-0.022%)


This happened after I shut down all OSDs on one node and then started them 
again after 30 minutes.
First the percentage went down, reached 0%, and then started going negative… now
it's going "up" (towards zero) again.
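For what it's worth, the percentage in that pgmap line looks like a plain signed ratio of the degraded count over the total object copies, so once the counter dips below zero the percentage simply follows it. A quick sanity check against the numbers in the log above (the formula itself is my assumption, not taken from the Ceph source):

```python
# Numbers copied from the pgmap line: "-2688/11949162 degraded (-0.022%)"
degraded = -2688       # degraded object count (apparently signed)
total = 11949162       # total object copies

# Assumed formula: percentage is just degraded / total * 100
pct = degraded / total * 100
print(f"{degraded}/{total} degraded ({pct:.3f}%)")
```

This reproduces the -0.022% shown in the status output, which at least confirms the negative percentage is a direct consequence of the counter going negative rather than a separate display bug.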

The number of pgs to recover is still going down and I’m not that worried. I 
find it a bit funny, though ;-)

Jan


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
