... additionally, the forthcoming 4.12 kernel release will support
non-cooperative exclusive locking. By default, since 4.9, when the
exclusive-lock feature is enabled, only a single client can write to the
block device at a time -- but they will cooperatively pass the lock back
and forth upon write request. With the new "rbd map" option, you can map an
image on exactly one host and prevent other hosts from mapping the image.
If that host should die, the exclusive-lock will automatically become
available to other hosts for mapping.
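
If that behavior is what you are after, the mapping might look like the
sketch below. This assumes the new feature lands as the "exclusive" map
option in 4.12 (check rbd(8) on your release for the final name);
"veeamrepo" is the image from the thread below.

```shell
# Host 1: map with non-cooperative exclusive locking (4.12+ kernel).
# The "exclusive" option name is an assumption -- verify with rbd(8).
rbd map -o exclusive veeamrepo

# Host 2: the same map attempt should now fail with an error while
# host 1 holds the lock, instead of cooperatively taking it over.
rbd map -o exclusive veeamrepo
```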

Of course, I always have to ask about the use case for mapping the same image
on multiple hosts. Perhaps CephFS would be a better fit if you are trying
to serve out a filesystem?

On Wed, Jun 28, 2017 at 6:25 PM, Maged Mokhtar <mmokh...@petasan.org> wrote:

> On 2017-06-28 22:55, li...@marcelofrota.info wrote:
>
> Hi People,
>
> I am testing a new environment with Ceph + RBD on Ubuntu 16.04, and I
> have one question.
>
> I have my Ceph cluster, and I mount the image using the following commands
> in my Linux environment:
>
> rbd create veeamrepo --size 20480
> rbd --image veeamrepo info
> modprobe rbd
> rbd map veeamrepo
> rbd feature disable veeamrepo exclusive-lock object-map fast-diff deep-flatten
> mkdir /mnt/veeamrepo
> mount /dev/rbd0 /mnt/veeamrepo
>
> The commands work fine, but I have one problem: at the moment I can mount
> /mnt/veeamrepo on 2 machines at the same time, and that is bad for me
> because it could corrupt the filesystem.
>
> I need only one machine to be allowed to mount and write at a time.
>
> For example, if machine1 mounts /mnt/veeamrepo and machine2 tries to
> mount it, an error should be displayed saying that machine2 cannot mount
> it because the filesystem is already mounted on machine1.
>
> Could someone help me with this, or give me some tips to solve my
> problem?
>
> Thanks a lot
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> You can use Pacemaker to map the RBD image and mount the filesystem on one
> server and, in case of failure, switch to another server.
>
>
>
>
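
For completeness, the Pacemaker approach suggested above might be sketched
as follows. This assumes the "rbd" OCF resource agent shipped in the
ceph-resource-agents package plus the standard Filesystem agent; the agent
and parameter names here are illustrative and should be verified with
"pcs resource describe" before use.

```shell
# Hypothetical sketch: verify agent/parameter names on your installation.
# Grouping the RBD mapping and the mount makes them run on the same node,
# in order; Pacemaker moves the whole group to another node on failure.
pcs resource create veeam_rbd ocf:ceph:rbd \
    name=veeamrepo pool=rbd user=admin \
    --group veeam_group
pcs resource create veeam_fs ocf:heartbeat:Filesystem \
    device=/dev/rbd/rbd/veeamrepo directory=/mnt/veeamrepo fstype=ext4 \
    --group veeam_group
```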


-- 
Jason
