Wow, after all of this, everything went well and I was able to convert
osd.120-129 from Filestore to Bluestore.

***
root@osd2:~# ls -l /var/lib/ceph/osd/ceph-120
total 48
-rw-r--r-- 1 ceph ceph 384 Nov  7 14:34 activate.monmap
lrwxrwxrwx 1 ceph ceph  19 Nov  7 14:38 block -> /dev/hdd120/data120
lrwxrwxrwx 1 ceph ceph  15 Nov  7 14:38 block.db -> /dev/ssd0/db120
-rw-r--r-- 1 ceph ceph   2 Nov  7 14:34 bluefs
-rw-r--r-- 1 ceph ceph  37 Nov  7 14:38 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Nov  7 14:38 fsid
-rw------- 1 ceph ceph  57 Nov  7 14:38 keyring
-rw-r--r-- 1 ceph ceph   8 Nov  7 14:34 kv_backend
-rw-r--r-- 1 ceph ceph  21 Nov  7 14:34 magic
-rw-r--r-- 1 ceph ceph   4 Nov  7 14:34 mkfs_done
-rw-r--r-- 1 ceph ceph  41 Nov  7 14:34 osd_key
-rw-r--r-- 1 ceph ceph   6 Nov  7 14:38 ready
-rw-r--r-- 1 ceph ceph  10 Nov  7 14:38 type
-rw-r--r-- 1 ceph ceph   4 Nov  7 14:38 whoami
***

and `df -h` shows:
tmpfs           126G   48K  126G   1% /var/lib/ceph/osd/ceph-120
tmpfs           126G   48K  126G   1% /var/lib/ceph/osd/ceph-121
tmpfs           126G   48K  126G   1% /var/lib/ceph/osd/ceph-122
tmpfs           126G   48K  126G   1% /var/lib/ceph/osd/ceph-123 ....
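
For reference, a couple of commands that should confirm the conversion for a
given OSD (using osd.120 here as the example; adjust the ID as needed):

***
# the "type" file in the tmpfs mount should now read "bluestore"
cat /var/lib/ceph/osd/ceph-120/type

# and the cluster's view of the OSD should agree (osd_objectstore field)
ceph osd metadata 120 | grep osd_objectstore
***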

******
It seems that wipefs correctly removed all remnants of the Filestore
partitions, since I did not have to do any additional clean-up this time. I
basically followed all the steps I wrote out (with a few minor edits Hector
suggested). THANK YOU SO MUCH!!! After I work on the rest of this node, I
will go back to the previous node and see if I can zap it and start all over
again.
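
In case it is useful to anyone else, the zap-and-verify part of the plan for
the previous node looks roughly like this (a sketch only; the OSD id and
device name are placeholders and need to be adjusted per host):

***
# stop the FileStore OSD(s) whose journals live on the device first
systemctl stop ceph-osd@<id>

# "ceph-volume lvm zap" runs wipefs and also clears the start of the device
ceph-volume lvm zap /dev/sdX

# verify the old partition devices (sdX1 etc.) are gone; if they still show
# up, ask the kernel to re-read the partition table and check again
lsblk /dev/sdX
hdparm -z /dev/sdX
***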

On Wed, Nov 7, 2018 at 12:21 PM, Hector Martin <hec...@marcansoft.com>
wrote:

> On 11/8/18 2:15 AM, Hayashida, Mami wrote:
> > Thank you very much.  Yes, I am aware that zapping the SSD and
> > converting it to LVM requires stopping all the FileStore OSDs whose
> > journals are on that SSD first.  I will add the `hdparm` step to my list.
> > I have run into remnants of GPT information lurking around when trying to
> > re-use OSD disks in the past -- so that's probably a good preemptive
> > move.
>
> Just for reference, "ceph-volume lvm zap" runs wipefs and also wipes the
> beginning of the device separately. It should get rid of the GPT
> partition table. hdparm -z just tells the kernel to re-read it (which
> should remove any device nodes associated with now-gone partitions).
>
> I just checked the wipefs manpage and it seems it does trigger a
> partition table re-read itself, which would make the hdparm unnecessary.
> It might be useful if you can check that the partition devices (sda1
> etc) exist before the zap command and disappear after it, confirming
> that hdparm is not necessary. And if they still exist, then run hdparm,
> and if they persist after that too, something's wrong and you should
> investigate. GPT partition tables can be notoriously annoying to wipe
> because there is a backup at the end of the device, but wipefs *should*
> know about that as far as I know.
>
> --
> Hector Martin (hec...@marcansoft.com)
> Public Key: https://mrcn.st/pub
>



-- 
*Mami Hayashida*

*Research Computing Associate*
Research Computing Infrastructure
University of Kentucky Information Technology Services
301 Rose Street | 102 James F. Hardymon Building
Lexington, KY 40506-0495
mami.hayash...@uky.edu
(859)323-7521
