Sorry for bringing up that old topic again, but we just faced a
corresponding situation and have successfully tested two migration
procedures.
Quoting ceph-users-requ...@lists.ceph.com:
Date: Sat, 24 Feb 2018 06:10:16 +0000
From: David Turner <drakonst...@gmail.com>
To: Nico Schottelius <nico.schottel...@ungleich.ch>
Cc: Caspar Smit <caspars...@supernas.eu>, ceph-users
Subject: Re: [ceph-users] Proper procedure to replace DB/WAL SSD
Caspar, it looks like your idea should work. The worst-case scenario seems
to be that the OSD wouldn't start; you'd put the old SSD back in and fall
back to the approach of weighting them to 0, backfilling, and then
recreating the OSDs. Definitely worth a try in my opinion, and I'd love to
hear about your experience afterwards.
Nico, it is not possible to change the WAL or DB size, location, etc.
after OSD creation.
It is, however, possible to move a separate WAL/DB to a new device
without changing its size. We have done this for multiple OSDs, using
only existing (mainstream :) ) tools, and have documented the procedure.
It will *not* allow separating the WAL/DB from the main device after OSD
creation, nor does it allow changing the DB size.
As we faced a failing WAL/DB SSD during one of these moves (fatal read
errors from the DB block device), we also established a procedure to
re-initialize the OSD as "empty" during that operation, so that the OSD
gets re-filled by backfill without changing the OSD map.
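One way to get that effect with standard tools (a sketch, not necessarily
the exact commands from our write-up; osd.2 and the device paths are
placeholders) is to destroy the OSD while keeping its id in the map and
then recreate it in place:

  # mark the OSD destroyed; its id and CRUSH position stay in the OSD map
  ceph osd destroy 2 --yes-i-really-mean-it

  # recreate the OSD under the same id, with the DB on the new SSD
  ceph-volume lvm create --osd-id 2 --data /dev/sdX --block.db /dev/sdY1

  # the cluster then backfills the "empty" OSD as usual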
PS: Live WAL/DB migration is easy when using logical volumes, which is
why I'd highly recommend going that route instead of using partitions.
LVM not only helps when the SSDs reach their EOL, but also with live
load balancing (distributing WAL/DB LVs across multiple SSDs).
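For example (a sketch; the VG/LV names and devices are made up), moving
one OSD's DB LV from a worn-out SSD to a fresh one can be done while the
OSD keeps running:

  # add the new SSD to the volume group holding the DB LVs
  pvcreate /dev/sdY
  vgextend ceph-db-vg /dev/sdY

  # move only the extents of osd.2's DB LV onto the new SSD, online
  pvmove -n osd-2-db /dev/sdX /dev/sdY

  # once no LVs remain on the old SSD, remove it from the VG
  vgreduce ceph-db-vg /dev/sdX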