There's also no need to use locks at all for this (distributed or otherwise).

You can use a compare-and-update strategy with exponential backoff,
similar to the approach taken here:

https://review.openstack.org/#/c/109837/

I'd have to look at the Neutron code, but I suspect that a simple strategy of issuing the UPDATE SQL statement with a WHERE condition constructed to take into account the expected current record state would do the trick...
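Something along these lines; a rough sketch, assuming a generic
SQLAlchemy Core table with "id" and "status" columns (the names and the
backoff parameters are made up for illustration):

    import time

    from sqlalchemy import update

    def update_if_current(session, table, record_id,
                          expected_status, new_status,
                          max_attempts=5):
        # Compare-and-update: the WHERE clause encodes the state we
        # expect the row to be in, so the UPDATE only matches (and
        # rowcount is 1) if nobody changed the row underneath us.
        delay = 0.1
        for _ in range(max_attempts):
            result = session.execute(
                update(table).
                where(table.c.id == record_id).
                where(table.c.status == expected_status).
                values(status=new_status))
            session.commit()
            if result.rowcount == 1:
                return True   # we won the race
            # Lost the race: back off exponentially and retry (a real
            # caller would re-read the row to refresh expected_status).
            time.sleep(delay)
            delay *= 2
        return False

Nothing is ever locked; losers simply observe rowcount == 0 and retry.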

Best,
-jay

On 07/30/2014 09:33 AM, Clint Byrum wrote:
Please do not re-invent locking... the way we reinvented locking in Heat.
;)

There are well-known distributed coordination services such as ZooKeeper
and etcd, and there is already an abstraction for them called tooz:

https://git.openstack.org/cgit/stackforge/tooz/
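
Using tooz for something like one-time driver initialization is only a
few lines. A sketch; the backend URL, member id, lock name, and
initialize_drivers() are placeholders:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'neutron-server-1')
    coordinator.start()

    # Blocks until this member holds the lock; released on exit.
    with coordinator.get_lock(b'driver-init'):
        initialize_drivers()

    coordinator.stop()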

Excerpts from Elena Ezhova's message of 2014-07-30 09:09:27 -0700:
Hello everyone!

Some recent change requests ([1], [2]) show that there are a number of
issues with locking DB resources in Neutron.

One of them is the initialization of drivers, which can be performed
simultaneously by several neutron servers. In this case locking is
essential for avoiding conflicts. This is currently done mostly via
SQLAlchemy's with_lockmode() method, which emits SELECT..FOR UPDATE,
resulting in rows being locked within a transaction. As Mike Bayer has
already pointed out [3], this statement is not supported by Galera and,
what's more, on PostgreSQL the lock does not work when the table is
empty, since there are no rows to lock.
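
For reference, the pattern in question looks roughly like this
(illustrative only; the model and filter are made up):

    # Inside a transaction; emits SELECT ... FOR UPDATE, which
    # Galera does not honour cluster-wide.
    record = (session.query(SomeModel).
              filter_by(name=name).
              with_lockmode('update').
              first())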

That is why there is a need for a simple solution that would allow
cross-server locking and would work with every backend. The first thing
that comes to mind is to create a table containing all locks acquired by
various pieces of code. Any code that wants to access a table that needs
locking would have to perform the following steps:

1. Check whether the lock is already acquired, using SELECT lock_name
FROM cross_server_locks.

2. If the SELECT returned no rows, acquire the lock by inserting it into
the cross_server_locks table.

    Otherwise, wait and then try again until a timeout is reached.

3. After the code has executed, release the lock by deleting the
corresponding entry from the cross_server_locks table.

The locking process could be implemented as a decorator applied to the
function that performs the transaction, or as a context manager.
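
A minimal sketch of the context-manager variant, assuming a
cross_server_locks Table with a UNIQUE constraint on lock_name (all
names here are provisional). With the unique constraint, steps 1 and 2
collapse into a single atomic INSERT, which avoids a check-then-insert
race between servers:

    import contextlib
    import time

    from sqlalchemy import exc

    @contextlib.contextmanager
    def cross_server_lock(session, lock_name, timeout=30, delay=0.5):
        deadline = time.time() + timeout
        while True:
            try:
                # Steps 1-2: the INSERT fails if the lock row exists.
                session.execute(
                    cross_server_locks.insert().values(
                        lock_name=lock_name))
                session.commit()
                break
            except exc.IntegrityError:
                session.rollback()
                if time.time() > deadline:
                    raise RuntimeError('timed out waiting for lock %s'
                                       % lock_name)
                time.sleep(delay)
        try:
            yield
        finally:
            # Step 3: release the lock.
            session.execute(
                cross_server_locks.delete().where(
                    cross_server_locks.c.lock_name == lock_name))
            session.commit()

which would be used as:

    with cross_server_lock(session, 'driver-init'):
        perform_driver_initialization()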

Thus, I wanted to ask the community whether this approach deserves
consideration and, if so, to decide on the format of an entry in the
cross_server_locks table: how a lock_name should be formed, whether to
support different locking modes, etc.


[1] https://review.openstack.org/#/c/101982/

[2] https://review.openstack.org/#/c/107350/

[3]
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE
