I just filed a ticket after trying ceph-objectstore-tool:
http://tracker.ceph.com/issues/12428
On Fri, Jul 17, 2015 at 3:36 PM, Dan van der Ster d...@vanderster.com wrote:
A bit of progress: rm'ing everything from inside current/36.10d_head/
actually let the OSD start and continue deleting other PGs.
Cheers, Dan
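[Editor's note: for readers following along, the workaround Dan describes above amounts to something like the sketch below. This is not an official procedure; the OSD id (30) and PG (36.10d) are taken from this thread, the data path assumes the default layout, and the init commands vary by release. Only consider this for a PG in a pool that is being deleted anyway.]

```shell
# Hedged sketch of the workaround from the thread: clear the leftover
# contents of the crashing PG's directory so the OSD can start again.
# Adjust the OSD id, PG id, and init system for your own cluster.

service ceph stop osd.30                                  # hammer-era sysvinit; use systemctl on newer hosts
ls /var/lib/ceph/osd/ceph-30/current/36.10d_head/         # inspect what was left behind first
rm -rf /var/lib/ceph/osd/ceph-30/current/36.10d_head/*    # remove the stale PG contents
service ceph start osd.30
```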
On Fri, Jul 17, 2015 at 3:26 PM, Dan van der Ster d...@vanderster.com wrote:
Hi Greg + list,
Sorry to reply to this old'ish thread, but today one of these PGs bit
us in the ass.
Running hammer 0.94.2, we are deleting pool 36 and the OSDs 30, 171,
and 69 all crash when trying to delete pg 36.10d. They all crash with
ENOTEMPTY suggests garbage data in osd data dir
I think you'll need to use the ceph-objectstore-tool to remove the
PG/data consistently, but I've not done this — David or Sam will need
to chime in.
-Greg
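[Editor's note: the ceph-objectstore-tool invocation Greg alludes to would look roughly like the sketch below. The `--op remove` and `--pgid` flags exist in hammer-era releases, but verify against `ceph-objectstore-tool --help` on your version; paths assume the default FileStore layout, and the tool must be run against a stopped OSD.]

```shell
# Hedged sketch: remove PG 36.10d from osd.30's object store offline.
# The OSD must be stopped before running this.
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-30 \
    --journal-path /var/lib/ceph/osd/ceph-30/journal \
    --pgid 36.10d \
    --op remove
```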
On Fri, Jul 17, 2015 at 2:15 PM, Dan van der Ster d...@vanderster.com wrote:
Thanks for the quick reply.
We /could/ just wipe these OSDs and start from scratch (the only other
pools were 4+2 ec and recovery already brought us to 100%
active+clean).
But it'd be good to understand and prevent this kind of crash...
Cheers, Dan
On Fri, Jul 17, 2015 at 3:18 PM, Gregory Farnum g...@gregs42.com wrote:
On Wed, Jun 17, 2015 at 8:56 AM, Dan van der Ster d...@vanderster.com wrote:
Hi,
After upgrading to 0.94.2 yesterday on our test cluster, we've had 3
PGs go inconsistent.
First, immediately after we updated the OSDs PG 34.10d went inconsistent:
2015-06-16 13:42:19.086170 osd.52
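[Editor's note: for context, the usual first response to a scrub-inconsistent PG is sketched below. These are standard ceph CLI commands; the pgid 34.10d comes from this thread, and repair should only be triggered once you understand which replica is bad.]

```shell
# Hedged sketch: locate and repair an inconsistent PG.
ceph health detail | grep inconsistent   # list the PGs flagged inconsistent
ceph pg repair 34.10d                    # ask the primary OSD to repair the PG
```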
On Wed, Jun 17, 2015 at 10:52 AM, Gregory Farnum g...@gregs42.com wrote: