Hi Sam,
> 'ceph pg <pgid> query'.
Thanks.
Looks like Ceph is probing for an osd.20 which no longer exists:
"probing_osds": [
"1",
"7",
"15",
"16"],
"down_osds_we_would_probe": [
20],
So perhaps, during my attempts to rehabilitate the cluster after the upgrade,
I removed this OSD before it was fully drained?
What is the way forward?
Should I
ceph osd lost {id} [--yes-i-really-mean-it]
and move on?
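Concretely, something like this (a sketch only — <pgid> here stands in for
whichever PG is stuck; I'd pull the actual id from 'ceph health detail'):

    # Declare osd.20 permanently lost so peering stops waiting on it
    ceph osd lost 20 --yes-i-really-mean-it

    # Then re-run the query to confirm it no longer appears in
    # "down_osds_we_would_probe"
    ceph pg <pgid> query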
Thanks for your help!
Chad.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com