Re: [ceph-users] 1 PG stuck unclean (active+remapped) after OSD replacement

2017-02-13 Thread Eugen Block
Thanks for your quick responses. While I was writing my answer, a rebalancing was in progress because I had started a new crush reweight to get rid of the old re-activated OSDs again, and now that it has finished, the cluster is back in a healthy state. Thanks, Eugen. Quoting Gregory Farnum: On M…
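For reference, a minimal sketch of the kind of reweight sequence described above, with osd.NN as a placeholder ID (the exact OSD names are not shown in the thread):

  # drain the old, re-activated OSD by setting its CRUSH weight to 0
  ceph osd crush reweight osd.NN 0
  # watch the resulting rebalance until the cluster reports HEALTH_OK
  ceph -w
  ceph -s

Setting the CRUSH weight to 0 keeps the OSD running but moves all of its PGs elsewhere, which matches the rebalancing behaviour Eugen describes.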

Re: [ceph-users] 1 PG stuck unclean (active+remapped) after OSD replacement

2017-02-13 Thread Gregory Farnum
On Mon, Feb 13, 2017 at 7:05 AM Wido den Hollander wrote: > On 13 February 2017 at 16:03, Eugen Block wrote: > > Hi experts, I have a strange situation right now. We are re-organizing our 4-node Hammer cluster from LVM-based OSDs to HDDs. When we did this on the firs…

Re: [ceph-users] 1 PG stuck unclean (active+remapped) after OSD replacement

2017-02-13 Thread Wido den Hollander
> On 13 February 2017 at 16:03, Eugen Block wrote: > > Hi experts, I have a strange situation right now. We are re-organizing our 4-node Hammer cluster from LVM-based OSDs to HDDs. When we did this on the first node last week, everything went smoothly, I removed the OSDs fro…

[ceph-users] 1 PG stuck unclean (active+remapped) after OSD replacement

2017-02-13 Thread Eugen Block
Hi experts, I have a strange situation right now. We are re-organizing our 4-node Hammer cluster from LVM-based OSDs to HDDs. When we did this on the first node last week, everything went smoothly: I removed the OSDs from the crush map, and the rebalancing and recovery finished successfull…
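For anyone hitting the same symptom, a minimal diagnostic sketch for a PG stuck in active+remapped; the PG ID 1.23 below is only a placeholder, not one taken from this cluster:

  # list stuck PGs and see which OSDs they currently map to
  ceph health detail
  ceph pg dump_stuck unclean
  # compare the up and acting sets of an affected PG (placeholder PG ID)
  ceph pg 1.23 query
  # check whether the removed OSDs still appear in the CRUSH hierarchy
  ceph osd tree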