Hi,

It looks like there is some misinformation about the exclusive-lock feature; here is
an explanation that was already posted to the mailing list:


The naming of the "exclusive-lock" feature probably implies too much compared 
to what it actually does.  In reality, when you enable the "exclusive-lock" 
feature, only one RBD client is able to modify the image while the lock is 
held.  However, that won't stop other RBD clients from *requesting* that 
maintenance operations be performed on the image (e.g. snapshot, resize).
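
For what it's worth, here is a minimal sketch of how to check whether the feature
bit is set on an image using the Python rados/rbd bindings; the pool name 'rbd' and
image name 'myimage' are placeholders for your own names:

    import rados
    import rbd

    # Connect to the cluster with the default config and keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')        # placeholder pool name
    image = rbd.Image(ioctx, 'myimage')      # placeholder image name
    try:
        # features() returns the image's feature bitmask.
        if image.features() & rbd.RBD_FEATURE_EXCLUSIVE_LOCK:
            print('exclusive-lock is enabled on this image')
        else:
            print('exclusive-lock is not enabled on this image')
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()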

Behind the scenes, librbd will detect that another client currently owns the 
lock and will proxy its request over to the current watch owner.  This ensures 
that we only have one client modifying the image while at the same time not 
crippling other use cases.  librbd also supports cooperative exclusive lock 
transfer, which is used in the case of qemu VM migrations where the image needs 
to be opened R/W by two clients at the same time.
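
As a rough illustration of that behaviour (again using the Python bindings with
placeholder pool/image names), a client that does not own the lock can still request
a snapshot, and librbd forwards the operation to the current lock owner:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')        # placeholder pool name
    image = rbd.Image(ioctx, 'myimage')      # placeholder image name
    try:
        # We have not written to the image, so this client does not own
        # the exclusive lock (another client, e.g. a running VM, may).
        print('owns the lock:', image.is_exclusive_lock_owner())

        # The snapshot request still succeeds: librbd proxies it to the
        # current lock owner behind the scenes.
        image.create_snap('before-maintenance')
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()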

--

Jason Dillaman


Best regards,
Xavier Trilla P.
SiliconHosting

A Cloud Server with SSDs, redundant
and available in under 30 seconds?

Try it now at Clouding.io <https://clouding.io/>!

On 19 Mar 2018, at 9:38, Gregory Farnum <[email protected]> wrote:

You can explore the rbd exclusive-lock functionality if you want to do this,
but it's not typically advised because using it makes moving live VMs across
hosts harder, IIUC.
-Greg

On Sat, Mar 17, 2018 at 7:47 PM Egoitz Aurrekoetxea <[email protected]> wrote:

Good morning,


Does some kind of config param exist in Ceph to prevent two hosts from accessing
the same VM pool, or at least the same image inside it? Can this be done at the
pool or image level?


Best regards,

--


Egoitz Aurrekoetxea
Departamento de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170 Zamudio (Bizkaia)
[email protected]<mailto:[email protected]>
www.sarenet.es<https://www.sarenet.es>

Before printing this email, please consider whether it is really necessary.
_______________________________________________
ceph-users mailing list
[email protected]<mailto:[email protected]>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
