ceph-disk-prepare will give you the next unused number, so this will only work if the id of the OSD you remove is greater than 20.
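The allocation rule being described (lowest unused id wins) can be illustrated with a tiny sketch. This is not Ceph's actual code, just the rule it implies:

```python
def next_unused_osd_id(existing_ids):
    """Return the lowest non-negative integer not already in use as an
    OSD id, mirroring how ceph-disk-prepare picks the next id."""
    i = 0
    while i in existing_ids:
        i += 1
    return i

# osd.20 was removed from a 22-OSD cluster (ids 0..21, minus 20):
ids = set(range(22)) - {20}
print(next_unused_osd_id(ids))        # prints 20

# but if osd.5 were also missing, 5 would be handed out first:
print(next_unused_osd_id(ids - {5}))  # prints 5
```

This is why removing an OSD with an id below 20 and re-preparing it would not give you osd.20 back.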
On Thu, Nov 6, 2014 at 12:12 PM, Chad Seys <cws...@physics.wisc.edu> wrote:
> Hi Craig,
>
> > You'll have trouble until osd.20 exists again.
> >
> > Ceph really does not want to lose data. Even if you tell it the osd is
> > gone, ceph won't believe you. Once ceph can probe any osd that claims to
> > be 20, it might let you proceed with your recovery. Then you'll probably
> > need to use ceph pg <pgid> mark_unfound_lost.
> >
> > If you don't have a free bay to create a real osd.20, it's possible to
> > fake it with some small loop-back filesystems. Bring it up and mark it
> > OUT. It will probably cause some remapping. I would keep it around until
> > you get things healthy.
> >
> > If you create a real osd.20, you might want to leave it OUT until you
> > get things healthy again.
>
> Thanks for the recovery tip!
>
> I would guess that safely removing an OSD (mark OUT, wait for migration
> to stop, then crush osd rm) and then adding it back in as osd.20 would
> work?
>
> New switch:
> --yes-i-really-REALLY-mean-it
>
> ;)
> Chad.
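The remove-and-re-add sequence Chad is proposing maps roughly onto the commands below. This is a sketch only: the id 21 and the device path /dev/sdX are placeholders, and the exact steps vary by release (this assumes the ceph-disk era contemporary with the thread):

```shell
# 1. Mark the OSD out and wait for data migration to finish.
ceph osd out 21
ceph -w                       # watch until PGs are active+clean again

# 2. Stop the daemon, then remove it from CRUSH, auth, and the OSD map.
service ceph stop osd.21
ceph osd crush remove osd.21
ceph auth del osd.21
ceph osd rm 21

# 3. Re-prepare the disk; ceph-disk-prepare allocates the lowest unused id,
#    which will be 20 only if no id below 20 is also free.
ceph-disk-prepare /dev/sdX
```

Per Craig's advice above, leaving the re-created osd.20 marked OUT until the cluster is healthy again avoids an extra round of remapping.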
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com