# ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --destroy was given
Running command: /usr/sbin/pvremove -v -f -f osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
 stderr: Device osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz not found.
--> Unable to remove vg osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
-->  RuntimeError: command returned non-zero exit status: 5

This is how zap --destroy failed before I started deleting the volumes manually.
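
A guess at why that failed: LVM does not know a volume group called
"osvg-sdd-db"; going by the original ceph-deploy command the block-db LV
was created as osvg/sdd-db, so something like the following might have
worked while the LV still existed (untested):

lvs osvg                                    # confirm the sdd-db LV is still there
ceph-volume lvm zap --destroy osvg/sdd-db   # target the LV by its real vg/lv name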

On Thu, Apr 18, 2019 at 2:26 PM Alfredo Deza <[email protected]> wrote:

> On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev <[email protected]> wrote:
> >
> >  Thank you, Alfredo.
> > I did not have any reason to keep the volumes around.
> > I tried using ceph-volume to zap these stores, but none of the commands
> > worked, including yours: 'ceph-volume lvm zap
> > osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'
>
> If you do not want to keep them around, you would need to use --destroy
> and pass the LV path as input:
>
> ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>
> >
> > I ended up manually removing the LUKS volumes and then deleting the
> > LVM LV, VG, and PV:
> >
> > cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> > cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > lvremove /dev/ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> > sgdisk -Z /dev/sdd
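> >
> > (For anyone following along: 'cryptsetup close' is the newer spelling of
> > 'cryptsetup remove', and dmsetup can show which mappings are still
> > stacked on a disk before removing them. A rough, untested sketch:
> >
> > dmsetup ls --tree                # show leftover crypt/LVM mappings and what they sit on
> > cryptsetup close <mapping-name>  # <mapping-name> is a placeholder; same as 'cryptsetup remove')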
> >
> > # ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > Running command: /usr/sbin/cryptsetup status /dev/mapper/
> > --> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > Running command: /usr/sbin/wipefs --all osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> >  stderr: wipefs: error: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz: probing initialization failed: No such file or directory
> > -->  RuntimeError: command returned non-zero exit status: 1
>
> In this case, you removed the LV, so wipefs failed because that LV no
> longer exists. Do you have the output from how it failed before?
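>
> (If useful, the standard LVM listing commands will confirm what is
> actually left on the box; just the stock lvs/vgs reporting options:
>
> lvs -o lv_name,vg_name,lv_uuid   # remaining LVs with their VG and UUID
> vgs                              # remaining VGs and their PV/LV counts)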
>
> >
> >
> > On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza <[email protected]> wrote:
> >>
> >> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev <[email protected]> wrote:
> >> >
> >> >  Hello,
> >> > I have a server with 18 disks and 17 OSD daemons configured. One of
> >> > the OSD daemons failed to deploy with ceph-deploy. The reason for the
> >> > failure is unimportant at this point; I believe it was a race
> >> > condition, as I was running ceph-deploy inside a while loop over all
> >> > the disks in this server.
> >> >   Now I have two leftover dm-crypted LVM volumes that I am not sure
> >> > how to clean up. The command that failed and did not quite clean up
> >> > after itself was:
> >> > ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db osvg/sdd-db ${SERVERNAME}
> >> >
> >> > # lsblk
> >> > .......
> >> > sdd                                             8:48   0   7.3T  0 disk
> >> > └─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
> >> >                                               253:32   0   7.3T  0 lvm
> >> >   └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr    253:33   0   7.3T  0 crypt
> >> >
> >> > sds                                            65:32   0 223.5G  0 disk
> >> > ├─sds1                                         65:33   0   512M  0 part  /boot
> >> > └─sds2                                         65:34   0   223G  0 part
> >> >  .......
> >> >    ├─osvg-sdd--db                              253:8    0     8G  0 lvm
> >> >    │ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0     8G  0 crypt
> >> >
> >> > # ceph-volume inventory /dev/sdd
> >> >
> >> > ====== Device report /dev/sdd ======
> >> >
> >> >      available                 False
> >> >      rejected reasons          locked
> >> >      path                      /dev/sdd
> >> >      scheduler mode            deadline
> >> >      rotational                1
> >> >      vendor                    SEAGATE
> >> >      human readable size       7.28 TB
> >> >      sas address               0x5000c500a6b1d581
> >> >      removable                 0
> >> >      model                     ST8000NM0185
> >> >      ro                        0
> >> >     --- Logical Volume ---
> >> >      cluster name              ceph
> >> >      name                      osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> >> >      osd id                    39
> >> >      cluster fsid              8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
> >> >      type                      block
> >> >      block uuid                AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> >> >      osd fsid                  097d59be-bbe6-493a-b785-48b259d2ff35
> >> >
> >> > I was trying to run 'ceph-volume lvm zap --destroy /dev/sdd' but it
> >> > errored out. The OSD id on this volume is the same as on the next
> >> > drive, /dev/sde, and the osd.39 daemon is running, so this command
> >> > was trying to zap a running OSD.
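> >> >
> >> > (For reference, listing which OSD each LV belongs to first would
> >> > probably have helped here; ceph-volume has a stock subcommand for that:
> >> >
> >> > ceph-volume lvm list /dev/sdd   # show OSD metadata for the LVs on /dev/sdd)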
> >> >
> >> > What is the proper way to clean up both the data and block db
> >> > volumes, so I can rerun ceph-deploy and add them back to the pool?
> >> >
> >>
> >> Do you want to keep the LVs around, or do you want to completely get
> >> rid of them? If you are passing /dev/sdd to 'zap' you are telling the
> >> tool to destroy everything that is in there, regardless of who owns it
> >> (including running OSDs).
> >>
> >> If you want to keep the LVs around, you can omit the --destroy flag
> >> and pass the LVs as input, or if using a recent enough version you can
> >> use --osd-fsid to zap:
> >>
> >> ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> >>
> >> If you don't want the LVs around, you can add --destroy, but use the
> >> LV as input (not the device).
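> >>
> >> For example, using the osd fsid from your inventory output below (on a
> >> recent enough version; worth double-checking the flag exists on yours):
> >>
> >> ceph-volume lvm zap --destroy --osd-fsid 097d59be-bbe6-493a-b785-48b259d2ff35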
> >>
> >> > Thank you!
> >> >
> >> >
> >> >
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
