Hi everyone,

There has been a long-standing request [1] to implement an OSD
"destroy" capability to ceph-deploy.  A community user has submitted a
pull request implementing this feature [2].  While the code needs a
bit of work (there are a few things to work out before it would be
ready to merge), I want to verify that the approach is sound before
diving into it.

As it currently stands, the new feature would allow the following:

ceph-deploy osd destroy <host> --osd-id <id>

From that command, ceph-deploy would reach out to the host, run "ceph
osd out", stop the ceph-osd service for the OSD, then finish by running
"ceph osd crush remove", "ceph auth del", and "ceph osd rm".  Finally,
it would unmount the OSD's data directory, typically in /var/lib/ceph/osd/...
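For concreteness, the sequence above could be sketched as a helper that
builds the ordered command list.  This is not the PR's actual code; the
function name, the init-style "service" invocation, and the argument
shapes are my assumptions:

```python
def destroy_commands(osd_id, mount_point):
    """Return the ordered shell commands to retire one OSD.

    Hypothetical sketch of the destroy sequence described above;
    each command would be run on the target host in order.
    """
    name = 'osd.%d' % osd_id
    return [
        ['ceph', 'osd', 'out', str(osd_id)],       # stop mapping data to it
        ['service', 'ceph', 'stop', name],         # stop the daemon (init style assumed)
        ['ceph', 'osd', 'crush', 'remove', name],  # drop it from the CRUSH map
        ['ceph', 'auth', 'del', name],             # remove its cephx key
        ['ceph', 'osd', 'rm', str(osd_id)],        # delete it from the cluster map
        ['umount', mount_point],                   # finally unmount the data dir
    ]
```

Keeping the steps as data rather than running them inline would also make
it easy to print the plan before executing anything.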


Does this high-level approach seem sane?  Is anything missing when
removing an OSD?


There are a few specifics in the current PR that jump out at me as
things to address.  The format of the command is a bit rough: other
"ceph-deploy osd" commands take a list of [host[:disk[:journal]]] args
to specify a bunch of disks/OSDs to act on at once, but this command
only allows one at a time, by virtue of the --osd-id argument.  We
could try to accept [host:disk] and look up the OSD ID from that, or
potentially take [host:ID] as input.
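A parser for such arguments might distinguish the two forms by whether
the part after the colon is numeric.  The function name and the returned
tuple shape here are my own invention, just to illustrate the idea:

```python
def parse_target(arg):
    """Parse a [host[:spec]] argument into (host, spec, spec_is_id).

    spec_is_id is True when the spec looks like a numeric OSD ID
    (the proposed [host:ID] form), False when it looks like a disk
    name (the existing [host:disk] form), and None when absent.
    """
    host, sep, spec = arg.partition(':')
    if not sep:
        return host, None, None
    return host, spec, spec.isdigit()
```

This would let one invocation accept a mixed list of targets, matching
the style of the other "ceph-deploy osd" subcommands.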

Additionally, what should be done with the OSD's journal during the
destroy process?  Should it be left untouched?

Should there be any additional barriers to performing such a
destructive command?  User confirmation?
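If we do want a confirmation gate, a minimal sketch might require the
user to type an explicit "yes" before anything runs.  The prompt wording
and function below are hypothetical, not from the PR:

```python
def confirm_destroy(osd_id, reply):
    """Return True only if reply is an explicit 'yes'.

    In the real tool, reply would come from prompting the user, e.g.
    raw_input('Destroy osd.%d? [yes/NO] ' % osd_id); it is passed in
    as a parameter here so the check stays testable.
    """
    return reply.strip().lower() == 'yes'
```

A --yes-i-really-mean-it style override flag could skip the prompt for
scripted use.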


 - Travis

[1] http://tracker.ceph.com/issues/3480
[2] https://github.com/ceph/ceph-deploy/pull/254