ceph-deploy doesn't support that. You can use ceph-disk or ceph-volume
directly (with basically the same syntax as ceph-deploy), but you can
only explicitly re-use an OSD id if you mark it as destroyed first.
I.e., the proper way to replace an OSD while avoiding unnecessary data
movement is:
ceph osd destroy {id} --yes-i-really-mean-it
ceph-volume lvm create --osd-id {id} --data /dev/{device}
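
For example, a concrete run for an OSD like your osd.17 could look like
this (a sketch; /dev/sdb is a hypothetical device name for the
replacement disk, and the old daemon is assumed to be stopped already):

# mark the old id destroyed; it keeps its CRUSH position and weight
ceph osd destroy 17 --yes-i-really-mean-it
# wipe leftover LVM/partition metadata on the new disk
ceph-volume lvm zap /dev/sdb
# re-create the OSD under the same id, so only its own PGs backfill
ceph-volume lvm create --osd-id 17 --data /dev/sdb

Afterwards "ceph osd tree" should show osd.17 as up again rather than
destroyed.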
Gents,
My cluster developed a gap in the OSD id sequence at some point.
Basically, because a "ceph auth del" / "ceph osd rm" was missed in a
previous disk replacement task for osd.17, a new osd.34 was created. It
did not really bother me until recently, when I tried to replace all of
the smaller disks with bigger ones.
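
For reference, the removal sequence that frees an OSD id is roughly the
following (a sketch for osd.17, using stock Ceph CLI commands; on
Luminous and later, "ceph osd purge 17 --yes-i-really-mean-it" collapses
the last three steps into one):

ceph osd out 17                 # stop mapping data onto the OSD
systemctl stop ceph-osd@17      # on the OSD host: stop the daemon
ceph osd crush remove osd.17    # drop it from the CRUSH map
ceph auth del osd.17            # delete its cephx key
ceph osd rm 17                  # release the id from the osdmap

Skipping the last two steps leaves the old id allocated, which is why
the next OSD came up as osd.34 instead of re-using 17.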