Why is ceph-disk trying to wipe a mounted filesystem?


[@]# ceph-disk prepare --bluestore --zap-disk --dmcrypt /dev/sdg
The operation has completed successfully.
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4 blocks, Stripe width=4 blocks
2560 inodes, 10240 blocks
512 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=10485760
2 block groups
8192 blocks per group, 8192 fragments per group
1280 inodes per group
Superblock backups stored on blocks:
     8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

creating /var/lib/ceph/osd-lockbox/93bce5fa-3443-4bb2-bb93-bf65ad40ebcd/keyring
added entity client.osd-lockbox.93bce5fa-3443-4bb2-bb93-bf65ad40ebcd auth auth(auid = 18446744073709551615 key=AQDzWRBb3XVSJBAAeQIS+iYgOWsxkkS9x007Jg== with 0 caps)
creating /var/lib/ceph/osd-lockbox/93bce5fa-3443-4bb2-bb93-bf65ad40ebcd/osd_keyring
added entity osd.18 auth auth(auid = 18446744073709551615 key=AQDzWRBbYVnNHRAAw4JvSVPetrZl24m+xVu91A== with 0 caps)
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
wipefs: error: /dev/sdg5: probing initialization failed: Device or resource busy
ceph-disk: Error: Command '['/usr/sbin/wipefs', '--all', '/dev/sdg5']' returned non-zero exit status

Meanwhile, the lockbox partition that ceph-disk just created is still mounted:

/dev/sdg5 on /var/lib/ceph/osd-lockbox/93bce5fa-3443-4bb2-bb93-bf65ad40ebcd type ext4 (rw,relatime,stripe=4,data=ordered)
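For what it's worth, wipefs appears to fail because /dev/sdg5 is still mounted at the lockbox path, so the kernel reports the device as busy. A minimal sketch of a check-and-unmount workaround (device path taken from the output above; whether re-running prepare then succeeds is an assumption, not something I've verified):

```shell
#!/bin/sh
# Sketch, not a verified fix: wipefs cannot probe a mounted partition,
# so release the osd-lockbox mount before retrying. The device path
# (/dev/sdg5) is from the log above; adjust for your setup.

is_mounted() {
    # succeed if the device appears as a mount source in /proc/mounts
    grep -qs "^$1 " /proc/mounts
}

DEV=/dev/sdg5

if is_mounted "$DEV"; then
    umount "$DEV"   # free the device so wipefs can open it
fi

# then retry the same command as before:
# ceph-disk prepare --bluestore --zap-disk --dmcrypt /dev/sdg
```

This only clears the immediate "Device or resource busy" condition; if ceph-disk remounts the lockbox partition mid-run and then tries to wipe it again, the underlying ordering problem is in ceph-disk itself.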

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com