On Jul 30, 2014, at 3:27 PM, Jay Pipes wrote:
> It's not about distributed locking. It's about allowing multiple threads to
> make some sort of progress in the face of a contentious piece of code.
Sure, multiple threads, multiple processes, multiple
remote/distributed/cooperating processes (all p
It's not about distributed locking. It's about allowing multiple threads
to make some sort of progress in the face of a contentious piece of
code. Obstruction and lock-free algorithms are preferred, IMO, over
lock-based solutions that sit there and block while something else is
doing something.
I'll just start by saying I'm not the expert in what should be the solution for
neutron here (this is their developers' ultimate decision), but I just wanted to
add my thoughts.
Jay's solution looks/sounds like a spin lock with a test-and-set [1] (imho still
a lock, no matter the makeup you put on
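For illustration, here is a minimal sketch of what a test-and-set spin lock looks like, in Python, with threading.Lock.acquire(blocking=False) standing in for the hardware test-and-set instruction (names and structure are mine, not from any of the patches discussed here):

```python
import threading
import time

class SpinLock:
    """Minimal spin lock built on an atomic test-and-set primitive.

    threading.Lock.acquire(blocking=False) plays the role of
    test-and-set: it atomically checks and takes the flag.
    """

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Spin (busy-wait) until the test-and-set succeeds.
        while not self._flag.acquire(blocking=False):
            time.sleep(0)  # yield so other threads can make progress

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def work():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000 -- no increments lost
```

The point of the "still a lock" remark above is visible here: the waiting thread burns cycles retrying rather than blocking, but it is still mutual exclusion.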
On 07/30/2014 02:29 PM, Kevin Benton wrote:
Yes, we are talking about the same thing. I think the term 'optimistic
locking' comes from what happens during the SQL transaction: the SQL engine
converts the read (the WHERE clause) and the update (the UPDATE clause)
into a single atomic operation.
The atomic guarantee requires an internal lock in
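A minimal sketch of that pattern, using SQLite with invented table and column names purely for illustration: the read is folded into the UPDATE's WHERE clause, and a rowcount of zero means our view of the row was stale and the update lost the race:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE port (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
conn.execute("INSERT INTO port VALUES (1, 'DOWN', 0)")
conn.commit()

def update_status(conn, port_id, new_status, last_version_seen):
    """Optimistic update: applies only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE port SET status = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_status, port_id, last_version_seen),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 affected rows: someone else got there first

print(update_status(conn, 1, "UP", 0))  # True: version matched
print(update_status(conn, 1, "UP", 0))  # False: version is now 1
```

No explicit lock is taken in application code; the compare and the write happen as one statement, which is exactly why the "is this a lock or not" terminology argument in this thread arises.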
In fact there are more applications for distributed locking than just
accessing data in a database.
One such use case is serializing access to devices.
This is not badly needed yet, but it will be as we get more service
drivers working with appliances.
It would be great if some existing li
Excerpts from Jay Pipes's message of 2014-07-30 13:53:38 -0700:
> On 07/30/2014 12:21 PM, Kevin Benton wrote:
On 07/30/2014 12:21 PM, Kevin Benton wrote:
Maybe I misunderstood your approach then.
I thought you were suggesting that a node performs an "UPDATE record WHERE
record = last_state_node_saw" query and then checks the number of affected
rows. That's optimistic locking by every definition I've heard of it. It
matches the following statement f
On 07/30/2014 10:53 AM, Kevin Benton wrote:
Using the UPDATE WHERE statement you described is referred to as
optimistic locking. [1]
https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html
SQL != JBoss.
It's not optimistic locking in the da
Hello,
I stopped improving the vxlan population and removing SELECT FOR UPDATE [1]
because I am not sure the current approach is the right one for handling
vxlan/gre tenant pools:
1- Do we really need to populate the vxlan/gre tenant pools?
The neutron-server could also choose a vxlan VNI randomly in vni_ran
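One way this random-selection alternative could work is sketched below, using SQLite with invented table and column names for illustration; the point is that a unique constraint arbitrates concurrent picks, so the pool never needs pre-populating or locking:

```python
import random
import sqlite3

VNI_MIN, VNI_MAX = 1, 16777215  # the 24-bit VXLAN VNI space

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vxlan_allocation (vni INTEGER PRIMARY KEY)")
conn.commit()

def allocate_vni(conn):
    """Pick a random VNI; the PRIMARY KEY constraint arbitrates races.

    On collision (the VNI is already taken, or another server inserted
    it concurrently), just draw again. With a sparsely used 24-bit
    space, retries are rare.
    """
    while True:
        vni = random.randint(VNI_MIN, VNI_MAX)
        try:
            conn.execute("INSERT INTO vxlan_allocation (vni) VALUES (?)", (vni,))
            conn.commit()
            return vni
        except sqlite3.IntegrityError:
            continue  # already allocated; try another

allocated = {allocate_vni(conn) for _ in range(5)}
print(len(allocated))  # 5 distinct VNIs
```

This trades the SELECT FOR UPDATE over a pre-populated pool for a constraint-violation-and-retry loop, which is another lock-free shape of the same problem discussed above.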
Using the UPDATE WHERE statement you described is referred to as optimistic
locking. [1]
https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/The_CMP_Engine-Optimistic_Locking.html
On Wed, Jul 30, 2014 at 10:30 AM, Jay Pipes wrote:
> On 07/30/2014 10:05 AM, Kevin Benton wrote:
On 07/30/2014 10:05 AM, Kevin Benton wrote:
i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
referenced in the 3rd link of the email starting the thread.
No, there's no locking.
On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes wrote:
On 07/30/2014 0
-----Original Message-----
From: Jay Pipes
Reply: OpenStack Development Mailing List (not for usage questions)
Date: July 30, 2014 at 09:59:15
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Cross-server locking for neutron server
> On 07/30/2014
i.e. 'optimistic locking' as opposed to the 'pessimistic locking'
referenced in the 3rd link of the email starting the thread.
On Wed, Jul 30, 2014 at 9:55 AM, Jay Pipes wrote:
> On 07/30/2014 09:48 AM, Doug Wiegley wrote:
>
>> I'd have to look at the Neutron code, but I suspect that a simple
>
Excerpts from Doug Wiegley's message of 2014-07-30 09:48:17 -0700:
> > I'd have to look at the Neutron code, but I suspect that a simple
> > strategy of issuing the UPDATE SQL statement with a WHERE condition that
>
> I'm assuming the locking is for serializing code, whereas for what you
> describ
On 07/30/2014 09:48 AM, Doug Wiegley wrote:
I'd have to look at the Neutron code, but I suspect that a simple
strategy of issuing the UPDATE SQL statement with a WHERE condition that
I'm assuming the locking is for serializing code, whereas for what you
describe above, is there some reason we w
> I'd have to look at the Neutron code, but I suspect that a simple
> strategy of issuing the UPDATE SQL statement with a WHERE condition that
I'm assuming the locking is for serializing code, whereas for what you
describe above, is there some reason we wouldn't just use a transaction?
Thanks,
do
There's also no need to use locks at all for this (distributed or
otherwise).
You can use a compare and update strategy with an exponential backoff
similar to the approach taken here:
https://review.openstack.org/#/c/109837/
I'd have to look at the Neutron code, but I suspect that a simple
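A minimal sketch of that compare-and-update-with-exponential-backoff pattern (using SQLite with invented table and column names; this is not the code from the review linked above):

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, state TEXT, version INTEGER)")
conn.execute("INSERT INTO resource VALUES (1, 'initial', 0)")
conn.commit()

def compare_and_update(conn, res_id, new_state, max_attempts=5):
    """Retry a compare-and-update with jittered exponential backoff."""
    for attempt in range(max_attempts):
        row = conn.execute(
            "SELECT state, version FROM resource WHERE id = ?", (res_id,)
        ).fetchone()
        cur = conn.execute(
            "UPDATE resource SET state = ?, version = ? "
            "WHERE id = ? AND version = ?",
            (new_state, row[1] + 1, res_id, row[1]),
        )
        conn.commit()
        if cur.rowcount == 1:
            return True  # our read was still current; update applied
        # Lost the race: back off exponentially (with jitter) and retry.
        time.sleep(random.uniform(0, 0.01 * 2 ** attempt))
    return False

print(compare_and_update(conn, 1, "updated"))  # True
```

Nothing blocks while holding a lock here; a loser of the race simply re-reads and tries again, which is the "make some sort of progress" property argued for earlier in the thread.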
Please do not re-invent locking... the way we reinvented locking in Heat.
;)
There are well known distributed coordination services such as Zookeeper
and etcd, and there is an abstraction for them already called tooz:
https://git.openstack.org/cgit/stackforge/tooz/
Excerpts from Elena Ezhova's me
Hello everyone!
Some recent change requests ([1], [2]) show that there are a number of
issues with locking db resources in Neutron.
One of them is the initialization of drivers, which can be performed
simultaneously by several neutron servers. In this case locking is
essential for avoiding conflicts wh