On 11/8/18 12:29 AM, Hayashida, Mami wrote:
> Yes, that was indeed a copy-and-paste mistake. I am trying to use
> /dev/sdh (HDD) for data and a part of /dev/sda (SSD) for the journal.
> That's how the FileStore is set up. So, for BlueStore: data on
> /dev/sdh, WAL and DB on /dev/sda.
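
For reference, that layout maps onto a ceph-volume invocation roughly
like this (just a sketch; ssd_vg/db_lv is a placeholder LV name for
whatever you carve out of the SSD):

    # Data on the HDD, DB on an SSD LV. With no separate --block.wal,
    # the WAL is co-located with the DB, which is fine when both would
    # live on the same SSD anyway.
    ceph-volume lvm create --bluestore --data /dev/sdh \
        --block.db ssd_vg/db_lv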

/dev/sda is the SSD you use for all OSDs on each node, right? Keep in
mind that what you're doing here is wiping that SSD entirely and
converting it to LVM. If any FileStore OSDs are using that SSD as their
journal, this will kill them. If you're doing one node at a time that's
fine, but then you need to mark out and stop all the FileStore OSDs on
that node first.
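
Something along these lines for each FileStore OSD on the node (the
OSD ID here is a placeholder, fill in your own):

    # mark the OSD out so data migrates off it, then stop the daemon
    ceph osd out 12
    systemctl stop ceph-osd@12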

You should throw a "systemctl daemon-reload" in there after tweaking
fstab and the systemd configs, i.e. after the `ln -s /dev/null ...`
step, to make sure systemd is aware of the changes. FWIW I don't think
that symlink is necessary, but it won't hurt.
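
In other words, somewhere in that sequence:

    # edit /etc/fstab to drop the old FileStore mounts, do the
    # `ln -s /dev/null ...` masking, then tell systemd to re-read
    # its configuration:
    systemctl daemon-reload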

Also, from a quick look, `ceph-volume lvm zap` doesn't seem to trigger
a re-read of the partition table. Since you used partitions before, it
might be prudent to do that yourself: run `hdparm -z /dev/sdh` after
the zap. That should get rid of any stale /dev/sdh1 etc. partition
devices and leave only /dev/sdh; do the same for sda and anything else
you zap. Strictly speaking this shouldn't be needed, since those
devices will disappear after a reboot anyway and some other tool may
well do it implicitly, but it's cheap insurance and might avoid trouble
if some FileStore remnant tries to mount phantom partitions.
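
Concretely, per device (standard invocations, untested in your exact
setup):

    ceph-volume lvm zap /dev/sdh
    # force the kernel to re-read the (now empty) partition table and
    # drop /dev/sdh1 etc.
    hdparm -z /dev/sdh
    # ...and the same for /dev/sda and any other zapped device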

-- 
Hector Martin (hec...@marcansoft.com)
Public Key: https://mrcn.st/pub