Hello,

A routine reboot of one of the OSD servers resulted in one unfound object.
Following the documentation on unfound objects, I ran:

ceph pg 5.306 mark_unfound_lost delete
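
For context, the documentation's diagnostic steps for tracking down an unfound
object boil down to roughly the following (using our pg id, 5.306):

ceph health detail            # shows which PG is reporting the unfound object
ceph pg 5.306 list_missing    # lists the missing/unfound object(s) in that PG
ceph pg 5.306 query           # recovery_state shows which OSDs were probed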

But I've still got:

# ceph health detail
HEALTH_WARN recovery 1/2661869 unfound (0.000%)
recovery 1/2661869 unfound (0.000%)

This is our test cluster running Giant 0.87.1. What has happened and how can I 
resolve this?
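
If it would help, I assume I can pull more detail straight from the PG with
something like this (same pg id):

ceph pg 5.306 query    # full PG state, including recovery_state
ceph -s                # overall cluster and recovery status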

-- 
  Eino Tuominen
