[openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Elena Ezhova
Hello everyone! Some recent change requests ([1], [2]) show that there are a number of issues with locking db resources in Neutron. One of them is the initialization of drivers, which can be performed simultaneously by several neutron servers. In this case locking is essential for avoiding conflicts

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Please do not re-invent locking.. the way we reinvented locking in Heat. ;) There are well known distributed coordination services such as Zookeeper and etcd, and there is an abstraction for them already called tooz: https://git.openstack.org/cgit/stackforge/tooz/ Excerpts from Elena Ezhova's
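The tooz pattern Clint points at hands out distributed locks through a coordinator object (`coordination.get_coordinator()` / `coordinator.get_lock()`, with the lock usable as a context manager). A minimal sketch of that usage shape, with an in-process stand-in coordinator so the snippet runs without a Zookeeper/etcd backend; the stand-in class and the lock name are hypothetical, only the call pattern mirrors tooz:

```python
import threading
from collections import defaultdict

class StandInCoordinator:
    """In-process stand-in mimicking tooz's coordinator/lock call pattern."""
    def __init__(self):
        self._locks = defaultdict(threading.Lock)

    def start(self):
        pass  # a real tooz coordinator connects to its backend here

    def get_lock(self, name):
        # tooz returns a lock object usable as a context manager;
        # threading.Lock already supports that protocol, so reuse it.
        return self._locks[name]

# Real tooz code would look like:
#   coordinator = coordination.get_coordinator('zookeeper://...', b'server-1')
coordinator = StandInCoordinator()
coordinator.start()

with coordinator.get_lock(b'neutron-driver-init'):
    # only one server at a time runs driver initialization in here
    initialized = True
```

The point of the abstraction is that swapping Zookeeper for etcd is a configuration change, not a code change.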

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
There's also no need to use locks at all for this (distributed or otherwise). You can use a compare and update strategy with an exponential backoff similar to the approach taken here: https://review.openstack.org/#/c/109837/ I'd have to look at the Neutron code, but I suspect that a simple
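A sketch of the compare-and-update strategy with exponential backoff that Jay describes, using sqlite3 and a hypothetical `resources` table with a version column. The UPDATE only matches if the row still holds the version the caller last read, and `rowcount` tells the caller whether it won the race:

```python
import random
import sqlite3
import time

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE resources (id INTEGER PRIMARY KEY, state TEXT, version INTEGER)")
conn.execute("INSERT INTO resources VALUES (1, 'building', 0)")
conn.commit()

def compare_and_update(conn, resource_id, new_state, max_retries=5):
    """Retry the read-then-conditional-update cycle with exponential backoff."""
    for attempt in range(max_retries):
        row = conn.execute(
            "SELECT state, version FROM resources WHERE id = ?",
            (resource_id,)).fetchone()
        cur = conn.execute(
            "UPDATE resources SET state = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_state, resource_id, row[1]))
        conn.commit()
        if cur.rowcount == 1:
            return True          # our view was current; the update took effect
        # someone else updated the row first: back off and retry
        time.sleep(random.uniform(0, 0.01 * 2 ** attempt))
    return False

compare_and_update(conn, 1, 'active')
```

No thread ever blocks on another's lock; a loser simply retries against the new state.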

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Doug Wiegley
I'd have to look at the Neutron code, but I suspect that a simple strategy of issuing the UPDATE SQL statement with a WHERE condition that I'm assuming the locking is for serializing code, whereas for what you describe above, is there some reason we wouldn't just use a transaction? Thanks,
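The transaction Doug alludes to is the pessimistic variant: take a row lock inside the transaction (with `SELECT ... FOR UPDATE` on MySQL/PostgreSQL) so concurrent writers block until commit. A sketch of that shape; sqlite3 has no `FOR UPDATE`, so `BEGIN IMMEDIATE` stands in for the write lock here, and the table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(':memory:', isolation_level=None)  # manage transactions manually
conn.execute("CREATE TABLE resources (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("INSERT INTO resources VALUES (1, 'building')")

def update_under_lock(conn, resource_id, new_state):
    # BEGIN IMMEDIATE takes sqlite's write lock up front, standing in for
    # "SELECT ... FOR UPDATE" on MySQL/PostgreSQL: concurrent writers wait.
    conn.execute("BEGIN IMMEDIATE")
    try:
        row = conn.execute("SELECT state FROM resources WHERE id = ?",
                           (resource_id,)).fetchone()
        conn.execute("UPDATE resources SET state = ? WHERE id = ?",
                     (new_state, resource_id))
        conn.execute("COMMIT")
        return row[0]            # the state observed while holding the lock
    except Exception:
        conn.execute("ROLLBACK")
        raise

old = update_under_lock(conn, 1, 'active')
```

The trade-off against compare-and-update is that every other writer sits blocked for the duration of the transaction instead of retrying.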

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
On 07/30/2014 09:48 AM, Doug Wiegley wrote: I'd have to look at the Neutron code, but I suspect that a simple strategy of issuing the UPDATE SQL statement with a WHERE condition that I'm assuming the locking is for serializing code, whereas for what you describe above, is there some reason we

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Excerpts from Doug Wiegley's message of 2014-07-30 09:48:17 -0700: I'd have to look at the Neutron code, but I suspect that a simple strategy of issuing the UPDATE SQL statement with a WHERE condition that I'm assuming the locking is for serializing code, whereas for what you describe

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
i.e. 'optimistic locking' as opposed to the 'pessimistic locking' referenced in the 3rd link of the email starting the thread. On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote: On 07/30/2014 09:48 AM, Doug Wiegley wrote: I'd have to look at the Neutron code, but I suspect

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Morgan Fainberg
On 07/30/2014 09:48 AM, Doug Wiegley wrote: I'd have to look at the Neutron code, but I suspect that a simple strategy of issuing the UPDATE SQL statement with a WHERE condition that I'm assuming the locking is for serializing code

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
On 07/30/2014 10:05 AM, Kevin Benton wrote: i.e. 'optimistic locking' as opposed to the 'pessimistic locking' referenced in the 3rd link of the email starting the thread. No, there's no locking. On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes jaypi...@gmail.com wrote:

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread ZZelle
Hello, I have stopped improving vxlan population and removing SELECT FOR UPDATE [1] because I am not sure the current approach is the right one for handling vxlan/gre tenant pools: 1- Do we really need to populate vxlan/gre tenant pools? The neutron-server could also randomly choose a vxlan VNI in
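ZZelle's alternative, skipping the pre-populated pool and having neutron-server pick a VNI at random, can lean on a unique constraint instead of a lock: the insert either succeeds or collides, and a collision just means pick again. A sketch with sqlite3; the table name and the 24-bit VNI range are assumptions:

```python
import random
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE vxlan_allocations (vni INTEGER PRIMARY KEY)")

def allocate_vni(conn, vni_min=1, vni_max=16777215, max_tries=10):
    """Pick a random VNI; the PRIMARY KEY constraint arbitrates collisions."""
    for _ in range(max_tries):
        candidate = random.randint(vni_min, vni_max)
        try:
            conn.execute("INSERT INTO vxlan_allocations (vni) VALUES (?)",
                         (candidate,))
            conn.commit()
            return candidate     # insert succeeded: the VNI is ours
        except sqlite3.IntegrityError:
            continue             # another server grabbed it: try a new one
    raise RuntimeError("VNI space looks exhausted; fall back to a scan")

vni = allocate_vni(conn)
```

With a sparse allocation table, collisions are rare and no SELECT FOR UPDATE is needed at all.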

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
Using the UPDATE WHERE statement you described is referred to as optimistic locking. [1] https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html On Wed, Jul 30, 2014 at 10:30 AM, Jay Pipes jaypi...@gmail.com wrote: On 07/30/2014 10:05 AM,

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
On 07/30/2014 10:53 AM, Kevin Benton wrote: Using the UPDATE WHERE statement you described is referred to as optimistic locking. [1] https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html SQL != JBoss. It's not optimistic locking in the

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
Maybe I misunderstood your approach then. I thought you were suggesting that a node performs an UPDATE record WHERE record = last_state_node_saw query and then checks the number of affected rows. That's optimistic locking by every definition I've heard. It matches the following statement

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
On 07/30/2014 12:21 PM, Kevin Benton wrote: Maybe I misunderstood your approach then. I thought you were suggesting that a node performs an UPDATE record WHERE record = last_state_node_saw query and then checks the number of affected rows. That's optimistic locking by every definition I've

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-07-30 13:53:38 -0700: On 07/30/2014 12:21 PM, Kevin Benton wrote: Maybe I misunderstood your approach then. I thought you were suggesting that a node performs an UPDATE record WHERE record = last_state_node_saw query and then checks the number of

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Eugene Nikanorov
In fact, there are more applications for distributed locking than just accessing data in a database. One such use case is serializing access to devices. This is not yet strongly needed, but will be as we get more service drivers working with appliances. It would be great if some existing

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Kevin Benton
Yes, we are talking about the same thing. I think the term 'optimistic locking' comes from what happens during the SQL transaction. The SQL engine converts the read (the WHERE clause) and update (the UPDATE clause) operations into an atomic operation. The atomic guarantee requires an internal lock
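The atomicity Kevin describes, where the engine evaluates the WHERE read and the UPDATE write as one operation, is visible when two writers both read version 0 and race: the second UPDATE matches zero rows. A sqlite3 sketch with a hypothetical `ports` table:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
conn.execute("INSERT INTO ports VALUES (1, 'DOWN', 0)")
conn.commit()

def try_update(conn, port_id, seen_version, new_status):
    """One atomic compare-and-update; returns True iff this writer won."""
    cur = conn.execute(
        "UPDATE ports SET status = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_status, port_id, seen_version))
    conn.commit()
    return cur.rowcount == 1

# Both writers read version 0 before either wrote:
first = try_update(conn, 1, 0, 'ACTIVE')   # matches the row, wins
second = try_update(conn, 1, 0, 'ERROR')   # version is now 1, matches nothing
```

The loser detects the conflict from the row count rather than from a lock wait, which is exactly where the "optimistic" label comes from.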

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
On 07/30/2014 02:29 PM, Kevin Benton wrote: Yes, we are talking about the same thing. I think the term 'optimistic locking' comes from what happens during the sql transaction. The sql engine converts a read (the WHERE clause) and update (the UPDATE clause) operations into an atomic operation.

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Joshua Harlow
I'll just start by saying I'm not the expert in what should be the solution for neutron here (this is their developers' ultimate decision), but I just wanted to add my thoughts. Jay's solution looks/sounds like a spin lock with a test-and-set [1] (imho still a lock, no matter the makeup you put
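Joshua's "spin lock with a test-and-set" characterization maps to a loop like the following sketch. Python exposes no raw atomic flag, so a non-blocking `threading.Lock.acquire` stands in for the test-and-set instruction; the class is illustrative, not anything from Neutron:

```python
import threading
import time

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # stand-in for an atomic flag

    def acquire(self):
        # Test-and-set loop: try to set the flag; if it was already set,
        # spin (here, yielding the scheduler) and test again.
        while not self._flag.acquire(blocking=False):
            time.sleep(0)

    def release(self):
        self._flag.release()

lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1            # critical section guarded by the spin lock
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whether one calls the retry-on-conflict UPDATE a "lock" or not, the structure (try, fail, retry) is the same shape as this loop, which is the crux of the terminology dispute in the thread.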

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Jay Pipes
It's not about distributed locking. It's about allowing multiple threads to make some sort of progress in the face of a contentious piece of code. Obstruction and lock-free algorithms are preferred, IMO, over lock-based solutions that sit there and block while something else is doing

Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Joshua Harlow
On Jul 30, 2014, at 3:27 PM, Jay Pipes jaypi...@gmail.com wrote: It's not about distributed locking. It's about allowing multiple threads to make some sort of progress in the face of a contentious piece of code. Sure, multiple threads, multiple process, multiple