If it creates it, it should also remove it. ceph-disk prepare did not 
create it, so it is logical that it does not remove it. ceph-volume, 
however, creates it, and thus should remove it.
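As a sketch of that expectation (the device name /dev/sdf is taken from the log below): with the --destroy flag, ceph-volume zap also tears down the LVM structures it created on the device, rather than only wiping data.

```shell
# With --destroy, ceph-volume lvm zap removes the LVs/VG/PV it created
# on the device as well, instead of leaving them behind:
ceph-volume lvm zap /dev/sdf --destroy
```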



-----Original Message-----
From: David Turner [mailto:[email protected]] 
Sent: zaterdag 2 juni 2018 23:36
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Bug? ceph-volume zap not working

Ceph-disk didn't remove an OSD from the cluster either. That has never 
been a thing for either ceph-disk or ceph-volume. There are other 
commands for that.
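Those "other commands" are not named in the thread; a hedged sketch for removing OSD 19 (the id from the log below) from the cluster on Luminous and later would be:

```shell
# Sketch, assuming OSD id 19: take it out of service, then purge it.
# purge removes the crush entry, auth key, and osd id in one step.
ceph osd out 19
ceph osd purge 19 --yes-i-really-mean-it
```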


On Sat, Jun 2, 2018, 4:29 PM Marc Roos <[email protected]> wrote:


         
        But it still leaves entries in the crush map, and maybe also in 
        ceph auth ls, and the directory in /var/lib/ceph/osd
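Those leftovers can be cleaned up by hand; a sketch, assuming osd.19 as in the log below, with each step explicit:

```shell
# Sketch for osd.19 (id taken from the log below): remove each leftover.
ceph osd crush remove osd.19      # drop the crush map entry
ceph auth del osd.19              # drop the auth key
ceph osd rm 19                    # drop the osd id itself
rm -rf /var/lib/ceph/osd/ceph-19  # remove the leftover directory
```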
        
        
        
        -----Original Message-----
        From: Oliver Freyermuth [mailto:[email protected]] 
        Sent: zaterdag 2 juni 2018 18:29
        To: Marc Roos; ceph-users
        Subject: Re: [ceph-users] Bug? ceph-volume zap not working
        
        The command mapping from ceph-disk to ceph-volume is certainly not 
        1:1. What we ended up using is:
        ceph-volume lvm zap /dev/sda --destroy
        This takes care of destroying the PVs and LVs (as the documentation 
        says).
        
        Cheers,
                Oliver
        
        Am 02.06.2018 um 12:16 schrieb Marc Roos:
        > 
        > I guess zap should be used instead of destroy? Maybe keep 
        > backwards compatibility with ceph-disk and keep destroy?
        > 
        > [root@c03 bootstrap-osd]# ceph-volume lvm zap /dev/sdf
        > --> Zapping: /dev/sdf
        > --> Unmounting /var/lib/ceph/osd/ceph-19
        > Running command: umount -v /var/lib/ceph/osd/ceph-19
        >  stderr: umount: /var/lib/ceph/osd/ceph-19 (tmpfs) unmounted 
Running 
        > command: wipefs --all /dev/sdf
        >  stderr: wipefs: error: /dev/sdf: probing initialization failed: 
        > Device or resource busy
        > -->  RuntimeError: command returned non-zero exit status: 1
        > 
        > The PVs / LVs are still there; I guess these are keeping the 
        > 'resource busy'.
        > 
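When wipefs fails with "Device or resource busy" like this, the usual cause is exactly those leftover LVM mappings; a sketch of tearing them down manually before retrying (the VG name is hypothetical, look up the real one with pvs first):

```shell
# Sketch: find the VG sitting on /dev/sdf, remove it, then retry.
pvs /dev/sdf              # shows the VG name, e.g. a hypothetical ceph-xxxx
vgremove -f ceph-xxxx     # removes the VG and its LVs (hypothetical name)
pvremove /dev/sdf         # removes the PV label
wipefs --all /dev/sdf     # retry; the device should no longer be busy
```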
        > 
        > _______________________________________________
        > ceph-users mailing list
        > [email protected]
        > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
        > 
        
        
        


