Sorry, my mix-up.

Therefore you shouldn’t run zap against /dev/sda, as that will wipe the
whole SSD.

I guess that in its current setup it’s using a partition on /dev/sda, like
/dev/sda2 for example?
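
For example, assuming osd.120's journal turns out to live on /dev/sda2 (the
partition name here is only a guess), you would zap just that partition
rather than the whole device:

ceph-volume lvm zap /dev/sda2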

,Ashley

On Wed, 7 Nov 2018 at 11:30 PM, Hayashida, Mami <mami.hayash...@uky.edu>
wrote:

> Yes, that was indeed a copy-and-paste mistake.  I am trying to use
> /dev/sdh (hdd) for data and a part of /dev/sda (ssd) for the journal.
> That's how the Filestore is set up.  So, for the Bluestore: data on
> /dev/sdh, wal and db on /dev/sda.
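>
> (For example, to double-check which /dev/sda partition osd.120's Filestore
> journal is on, something like this should show it; the journal symlink path
> below assumes the standard Filestore layout:)
>
> lsblk /dev/sda /dev/sdh
> ls -l /var/lib/ceph/osd/ceph-120/journal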
>
>
> On Wed, Nov 7, 2018 at 10:26 AM, Ashley Merrick <singap...@amerrick.co.uk>
> wrote:
>
>> ceph osd destroy 70  --yes-i-really-mean-it
>>
>> I am guessing that’s a copy and paste mistake and should say 120.
>>
>> Is the SSD @ /dev/sdh fully for the OSD 120, or is a partition on this SSD
>> the journal and the other partitions for other OSDs?
>>
>> On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami <mami.hayash...@uky.edu>
>> wrote:
>>
>>> I would agree with that.  So, here is what I am planning on doing
>>> today.  I will try this from scratch, from the very first step, on a
>>> different OSD node, and log the input and output of every step.  Here is
>>> an outline of what I think (based on all the email exchanges so far)
>>> should happen.
>>>
>>> *******
>>> Trying to convert osd.120 to Bluestore.  Data is on /dev/sdh.  The
>>> Filestore journal is on a 40GB partition on /dev/sda.
>>>
>>> # Mark the OSD out
>>> ceph osd out 120
>>>
>>> # Stop the OSD
>>> systemctl kill ceph-osd@120
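>>> # (optional sanity check, e.g. confirm the daemon is really down:)
>>> systemctl status ceph-osd@120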
>>>
>>> # Unmount the filesystem
>>> sudo umount /var/lib/ceph/osd/ceph-120
>>>
>>> # Destroy the data
>>> ceph-volume lvm zap /dev/sdh --destroy   # data disk
>>> ceph-volume lvm zap /dev/sda --destroy   # ssd for wal and db
>>>
>>> # Inform the cluster
>>> ceph osd destroy 70  --yes-i-really-mean-it
>>>
>>> # Check /etc/fstab and /etc/systemd/system to make sure that all
>>> # references to the old filesystem are gone. Then run
>>> ln -sf /dev/null /etc/systemd/system/ceph-disk@70.service
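>>>
>>> # (e.g., a quick way to double-check that nothing is left behind:)
>>> grep ceph-120 /etc/fstab
>>> ls /etc/systemd/system | grep ceph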
>>>
>>> # Create PVs, VGs, LVs
>>> pvcreate /dev/sda # for wal and db
>>> pvcreate /dev/sdh # for data
>>>
>>> vgcreate ssd0 /dev/sda
>>> vgcreate hdd120  /dev/sdh
>>>
>>> lvcreate -L 40G -n db120 ssd0
>>> lvcreate -l 100%VG -n data120 hdd120
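>>>
>>> # (optional: verify the LVs look right before handing them to ceph-volume)
>>> lvs -o lv_name,vg_name,lv_size ssd0 hdd120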
>>>
>>> # Run ceph-volume
>>> ceph-volume lvm prepare --bluestore --data hdd120/data120 \
>>>     --block.db ssd0/db120 --osd-id 120
>>>
>>> # Activate
>>> ceph-volume lvm activate 120 <osd fsid>
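>>>
>>> # (the <osd fsid> above can be looked up, for example, with:)
>>> ceph-volume lvm list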
>>>
>>> ******
>>> Does this sound right?
>>>
>>> On Tue, Nov 6, 2018 at 4:32 PM, Alfredo Deza <ad...@redhat.com> wrote:
>>>
>>>> It is pretty difficult to know what step you are missing if all we are
>>>> seeing is the `activate --all` command.
>>>>
>>>> Maybe try the steps one by one, capturing each command throughout the
>>>> process, along with its output. In the filestore-to-bluestore guides we
>>>> never advertise `activate --all`, for example.
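>>>>
>>>> One way to capture everything (just a suggestion, the log path is
>>>> arbitrary) is to wrap the whole session in `script`:
>>>>
>>>> script -a /root/osd120-bluestore.log
>>>> # ... run each step here ...
>>>> exit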
>>>>
>>>> Something is missing here, and I can't tell what it is.
>>>> On Tue, Nov 6, 2018 at 4:13 PM Hayashida, Mami <mami.hayash...@uky.edu>
>>>> wrote:
>>>> >
>>>> > This is becoming even more confusing. I got rid of those
>>>> > ceph-disk@6[0-9].service units (which had been symlinked to /dev/null),
>>>> > moved /var/lib/ceph/osd/ceph-6[0-9] to /var/...../osd_old/, and then ran
>>>> > `ceph-volume lvm activate --all`.  Once again I got:
>>>> >
>>>> > root@osd1:~# ceph-volume lvm activate --all
>>>> > --> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-1bf13d09fb3d
>>>> > Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
>>>> > --> Absolute path not found for executable: restorecon
>>>> > --> Ensure $PATH environment variable contains common executable locations
>>>> > Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/hdd67/data67 --path /var/lib/ceph/osd/ceph-67
>>>> >  stderr: failed to read label for /dev/hdd67/data67: (2) No such file or directory
>>>> > -->  RuntimeError: command returned non-zero exit status: 1
>>>> >
>>>> > But when I ran `df` and `mount`, ceph-67 is the only one that shows up
>>>> > (and the only one in /var/lib/ceph/osd/):
>>>> >
>>>> > root@osd1:~# df -h | grep ceph-6
>>>> > tmpfs           126G     0  126G   0% /var/lib/ceph/osd/ceph-67
>>>> >
>>>> > root@osd1:~# mount | grep ceph-6
>>>> > tmpfs on /var/lib/ceph/osd/ceph-67 type tmpfs (rw,relatime)
>>>> >
>>>> > root@osd1:~# ls /var/lib/ceph/osd/ | grep ceph-6
>>>> > ceph-67
>>>> >
>>>> > But I cannot restart any of these 10 daemons (`systemctl start
>>>> > ceph-osd@6[0-9]`).
>>>> >
>>>> > I am wondering if I should zap these 10 OSDs and start over, although
>>>> > at this point I am afraid even zapping may not be a simple task....
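>>>> >
>>>> > (Before zapping, it might be worth checking whether the hdd6x LVs still
>>>> > exist at all, e.g.:)
>>>> >
>>>> > lvs -o lv_name,vg_name,lv_path | grep hdd6
>>>> > ls -l /dev/hdd67/data67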
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Nov 6, 2018 at 3:44 PM, Hector Martin <hec...@marcansoft.com>
>>>> wrote:
>>>> >>
>>>> >> On 11/7/18 5:27 AM, Hayashida, Mami wrote:
>>>> >> > 1. Stopped osd.60-69:  no problem
>>>> >> > 2. Skipped this and went to #3 to check first
>>>> >> > 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
>>>> >> > nothing.  I see in that directory
>>>> >> >
>>>> >> > /etc/systemd/system/ceph-disk@60.service    # and 61 - 69.
>>>> >> >
>>>> >> > No ceph-volume entries.
>>>> >>
>>>> >> Get rid of those; they also shouldn't be there. Then run `systemctl
>>>> >> daemon-reload` and continue, and see if you get into a good state.
>>>> >> Basically, feel free to nuke anything in there related to OSDs 60-69,
>>>> >> since whatever is needed should be taken care of by the ceph-volume
>>>> >> activation.
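>>>> >>
>>>> >> For example (assuming the only leftovers are those unit files):
>>>> >>
>>>> >> rm /etc/systemd/system/ceph-disk@6{0..9}.service
>>>> >> systemctl daemon-reload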
>>>> >>
>>>> >>
>>>> >> --
>>>> >> Hector Martin (hec...@marcansoft.com)
>>>> >> Public Key: https://mrcn.st/pub
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > Mami Hayashida
>>>> > Research Computing Associate
>>>> >
>>>> > Research Computing Infrastructure
>>>> > University of Kentucky Information Technology Services
>>>> > 301 Rose Street | 102 James F. Hardymon Building
>>>> > Lexington, KY 40506-0495
>>>> > mami.hayash...@uky.edu
>>>> > (859)323-7521
>>>>
>>>
>>>
>>>
>>> --
>>> *Mami Hayashida*
>>>
>>> *Research Computing Associate*
>>> Research Computing Infrastructure
>>> University of Kentucky Information Technology Services
>>> 301 Rose Street | 102 James F. Hardymon Building
>>> Lexington, KY 40506-0495
>>> mami.hayash...@uky.edu
>>> (859)323-7521
>>>
>>
>
>
> --
> *Mami Hayashida*
>
> *Research Computing Associate*
> Research Computing Infrastructure
> University of Kentucky Information Technology Services
> 301 Rose Street | 102 James F. Hardymon Building
> Lexington, KY 40506-0495
> mami.hayash...@uky.edu
> (859)323-7521
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
