On Monday, April 15, 2013 at 10:57 -0700, Gregory Farnum wrote:
> On Mon, Apr 15, 2013 at 10:19 AM, Olivier Bonvalet <[email protected]> 
> wrote:
> > On Monday, April 15, 2013 at 10:16 -0700, Gregory Farnum wrote:
> >> Are you saying you saw this problem more than once, and so you
> >> completely wiped the OSD in question, then brought it back into the
> >> cluster, and now it's seeing this error again?
> >
> > Yes, it's exactly that.
> >
> >
> >> Are any other OSDs experiencing this issue?
> >
> > No, only this one has the problem.
> 
> Did you run scrubs while this node was out of the cluster? If you
> wiped the data and this is recurring then this is apparently an issue
> with the cluster state, not just one node, and any other primary for
> the broken PG(s) should crash as well. Can you verify by taking this
> one down and then doing a full scrub?

That OSD stayed down for more than a week, but remained "in" the cluster to
avoid rebalancing. So the full scrub was already done.
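For reference, the verification Greg describes could look something like the following sketch. The OSD id `osd.N` and the pg id `2.3f` are placeholders rather than values from this thread, and the exact invocations depend on the Ceph release in use:

```shell
# Keep CRUSH from marking OSDs "out" and rebalancing while the suspect is down:
ceph osd set noout

# Stop the suspect OSD daemon (init-style invocation of that era):
service ceph stop osd.N

# Deep-scrub the PG(s) that triggered the crash, now served by another primary:
ceph pg deep-scrub 2.3f

# Watch cluster log output for scrub errors or inconsistent PGs:
ceph -w

# Once finished, re-allow rebalancing:
ceph osd unset noout
```

If another primary also crashes or the scrub reports inconsistencies, the problem lies in the cluster state rather than in the single node.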



_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
