Just to update this issue.
I stopped OSD.6, removed the PG from disk, and restarted it. Ceph rebuilt
the object and the cluster returned to HEALTH_OK.
Over the weekend the disk backing OSD.6 started reporting SMART errors, so
it will be replaced.
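For reference, the recovery procedure described above can be sketched roughly as below. The OSD id (6), PG id (3.1), the systemd unit name, and the filestore data path are assumptions for illustration; adjust them for your cluster and init system (2014-era deployments often used `service ceph stop osd.6` instead):

```shell
# Stop the OSD that holds the bad replica
sudo systemctl stop ceph-osd@6

# Remove the PG's directory from the OSD's local filestore
# (PG id 3.1 and the path below are hypothetical)
sudo rm -rf /var/lib/ceph/osd/ceph-6/current/3.1_head

# Restart the OSD; Ceph backfills the PG from the surviving replicas
sudo systemctl start ceph-osd@6

# Watch recovery progress and confirm HEALTH_OK
ceph -s
```

Removing the PG directory forces Ceph to treat that replica as missing and re-replicate it from the other OSDs, which is why the inconsistency clears afterwards.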
Thanks for your help Greg. I've opened a bug report in the tracker.
Hi Greg,
thanks for your help. It's always highly appreciated. :)
On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum g...@gregs42.com wrote:
On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito periqu...@gmail.com wrote:
Hi,
I've stopped OSD.16, removed the PG from the local filesystem and started
the OSD again. After Ceph rebuilt the removed PG on that OSD, I ran a
deep-scrub and the PG is still inconsistent.
What led you to remove it
Be very careful with running ceph pg repair. Have a look at this
thread:
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/15185
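Before resorting to `ceph pg repair`, it is worth gathering the PG's state first, since repair can copy the primary's (possibly bad) copy over the replicas. A minimal investigation sketch, assuming a hypothetical PG id 3.1 and the default OSD log location:

```shell
# Which PGs are inconsistent, and why is the cluster unhealthy?
ceph health detail

# Per-PG state, acting set, and scrub history (PG id 3.1 is hypothetical)
ceph pg 3.1 query

# Re-run a deep scrub and then check the primary OSD's log for the
# name of the failing object before deciding whether repair is safe
ceph pg deep-scrub 3.1
grep 3.1 /var/log/ceph/ceph-osd.*.log
```

Only once you know which replica is actually bad is it reasonable to either run repair or, as done in this thread, remove the bad copy by hand and let Ceph backfill it.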
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On Thu, Dec 11, 2014 at 10:57:22AM +0000, Luis Periquito wrote:
Hi,
In the last few days this PG (in pool .rgw.buckets) has been reported as
inconsistent after the scrub process runs.
After getting the error and trying to find the cause (and finding none),
I issued a ceph pg repair followed by a ceph pg deep-scrub. However it
doesn't seem to have fixed the