It has to be mounted from somewhere; if that server goes offline, you need
to mount it from somewhere else, right?


On Thu, Feb 28, 2019 at 11:15 PM David Turner <[email protected]> wrote:

> Why are you mapping the same rbd to multiple servers?
>
> On Wed, Feb 27, 2019, 9:50 AM Ilya Dryomov <[email protected]> wrote:
>
>> On Wed, Feb 27, 2019 at 12:00 PM Thomas <[email protected]> wrote:
>> >
>> > Hi,
>> > I have noticed an error when writing to a mapped RBD.
>> > Therefore I unmounted the block device.
>> > Then I tried to unmap it w/o success:
>> > ld2110:~ # rbd unmap /dev/rbd0
>> > rbd: sysfs write failed
>> > rbd: unmap failed: (16) Device or resource busy
>> >
>> > The same block device is mapped on another client and there are no
>> issues:
>> > root@ld4257:~# rbd info hdb-backup/ld2110
>> > rbd image 'ld2110':
>> >         size 7.81TiB in 2048000 objects
>> >         order 22 (4MiB objects)
>> >         block_name_prefix: rbd_data.3cda0d6b8b4567
>> >         format: 2
>> >         features: layering
>> >         flags:
>> >         create_timestamp: Fri Feb 15 10:53:50 2019
>> > root@ld4257:~# rados -p hdb-backup listwatchers rbd_data.3cda0d6b8b4567
>> > error listing watchers hdb-backup/rbd_data.3cda0d6b8b4567: (2) No such
>> > file or directory
>> > root@ld4257:~# rados -p hdb-backup listwatchers rbd_header.3cda0d6b8b4567
>> > watcher=10.76.177.185:0/1144812735 client.21865052 cookie=1
>> > watcher=10.97.206.97:0/4023931980 client.18484780 cookie=18446462598732841027
>> >
>> >
>> > Question:
>> > How can I force-unmap the RBD on client ld2110 (= 10.76.177.185)?
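[Editor's note: the watcher IPs can be pulled out of that listwatchers
output with a short sed filter -- a sketch of my own, assuming the
"watcher=IP:port/nonce ..." format shown above, not a Ceph-provided tool:]

```shell
# Extract the client IP from each "watcher=IP:port/nonce client.N cookie=C"
# line, e.g. to identify which node is still holding the image open.
parse_watcher_ips() {
    sed -n 's/^watcher=\([0-9.]*\):.*/\1/p'
}

# Hypothetical usage:
#   rados -p hdb-backup listwatchers rbd_header.3cda0d6b8b4567 | parse_watcher_ips
```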
>>
>> Hi Thomas,
>>
>> It appears that /dev/rbd0 is still open on that node.
>>
>> Was the unmount successful?  Which filesystem (ext4, xfs, etc)?
>>
>> What is the output of "ps aux | grep rbd" on that node?
>>
>> Try lsof and fuser, and check for LVM volumes and multipath devices --
>> these have been reported to cause this issue before:
>>
>>   http://tracker.ceph.com/issues/12763
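[Editor's note: the same information lsof/fuser report can be gathered by
hand from procfs; a minimal sketch (my own helper, not part of Ceph) that
lists the PIDs holding a given device or file open:]

```shell
# List unique PIDs that have the given path open via a file descriptor,
# by scanning /proc/<pid>/fd -- essentially what lsof/fuser do.
# Note: this will not catch kernel-side holders such as LVM/device-mapper;
# check /sys/block/rbd0/holders for those.
find_fd_holders() {
    target=$(readlink -f "$1")
    for fd in /proc/[0-9]*/fd/*; do
        # readlink fails silently for processes we cannot inspect
        [ "$(readlink -f "$fd" 2>/dev/null)" = "$target" ] &&
            echo "$fd" | cut -d/ -f3   # /proc/PID/fd/N -> PID
    done | sort -un
}

# Hypothetical usage:
#   find_fd_holders /dev/rbd0
```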
>>
>> Thanks,
>>
>>                 Ilya
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
