Amusingly, that's what I'm working on this week.

http://tracker.ceph.com/issues/7862

There are pretty good reasons why it works the way it does right
now, but it certainly is unexpected.
-Sam

On Thu, Nov 6, 2014 at 7:18 AM, Chad William Seys
<[email protected]> wrote:
> Hi Sam,
>
>> Sounds like you needed osd 20.  You can mark osd 20 lost.
>> -Sam
>
> Does not work:
>
> # ceph osd lost 20 --yes-i-really-mean-it
> osd.20 is not down or doesn't exist
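>
> My understanding (an assumption on my part, going by the error message) is
> that the lost command only applies to an OSD that is still in the osdmap and
> is marked down, so the expected sequence would be roughly:
>
> # ceph osd tree | grep osd.20               # is osd.20 still in the map?
> # ceph osd down 20                          # mark it down if it still shows as up
> # ceph osd lost 20 --yes-i-really-mean-it
>
> The error above suggests osd.20 may already have been removed from the map
> entirely, in which case there is nothing left to mark lost.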
>
>
> Also, here is an interesting post from October which I will follow:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-October/044059.html
>
> "
> Hello, all. I got some advice from the IRC channel (thanks bloodice!) that I
> temporarily reduce the min_size of my cluster (size = 2) from 2 down to 1.
> That immediately caused all of my incomplete PGs to start recovering and
> everything seemed to come back OK. I was serving out an RBD from here and
> xfs_repair reported no problems. So... happy ending?
>
> What started this all was that I was altering my CRUSH map causing significant
> rebalancing on my cluster which had size = 2. During this process I lost an
> OSD (osd.10) and eventually ended up with incomplete PGs. Knowing that I only
> lost 1 OSD, I was pretty sure that I hadn't lost any data; I just couldn't get
> the PGs to recover without changing the min_size.
> "
>
> It is good that this worked for him, but it also seems like a bug that it
> worked!  (I.e., Ceph should have been able to recover on its own without weird
> workarounds.)
>
> I'll let you know if this works for me!
>
> Thanks,
> Chad.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
