On 03/04/2016 08:15 AM, John Spray wrote:
On Fri, Mar 4, 2016 at 12:11 PM, Shinobu Kinjo <shinobu...@gmail.com> wrote:
What are you facing?

In this particular instance, I'm dealing with a case where we may add
some metadata in ceph that will get updated by the driver, and I need
to know how I'm going to be called.  I need to know whether e.g. I can
expect that ensure_share will only be called once at a time per share,
or whether it might be called multiple times in parallel, resulting in
a need for me to do more synchronisation at a lower level.
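
To make the "synchronisation at a lower level" concrete, here's a rough
sketch of what I have in mind: serialising ensure_share per share within
a single manila-share process (the driver class and helper names are
made up for illustration, and it assumes oslo.concurrency is available):

    from oslo_concurrency import lockutils

    class CephFSDriver(object):  # hypothetical driver class

        def ensure_share(self, context, share, share_server=None):
            # One lock name per share ID, so calls for different shares
            # still proceed in parallel within this process.
            @lockutils.synchronized('ensure-share-%s' % share['id'])
            def _locked():
                return self._ensure_share_unlocked(context, share)
            return _locked()

        def _ensure_share_unlocked(self, context, share):
            # Read/update the driver metadata we keep in ceph here.
            pass

Of course that only helps within one process; it does nothing if a second
manila-share instance is driving the same backend at the same time, which
is the part I can't answer without knowing the calling rules.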

This is more complicated than locking, because where we update more
than one thing at a time we also have to deal with recovery (e.g.
manila crashed halfway through updating something in ceph and now I'm
recovering it), especially whether the places we do recovery will be
called concurrently or not.
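
As a (purely hypothetical, not any existing Manila or Ceph API) sketch of
the pattern I mean, the driver could record an intent marker before a
multi-step update so that recovery can re-apply a half-finished change:

    # _read_meta/_write_meta stand in for whatever per-share storage the
    # driver keeps in ceph; this only works if the update is idempotent.

    def update_share_metadata(share_id, new_meta, _read_meta, _write_meta):
        _write_meta(share_id, 'pending_update', new_meta)  # 1. record intent
        _write_meta(share_id, 'current', new_meta)         # 2. apply change
        _write_meta(share_id, 'pending_update', None)      # 3. clear intent

    def recover_share_metadata(share_id, _read_meta, _write_meta):
        # Called during ensure_share: finish anything a crashed
        # manila-share left half done.
        pending = _read_meta(share_id, 'pending_update')
        if pending is not None:
            _write_meta(share_id, 'current', pending)
            _write_meta(share_id, 'pending_update', None)

Whether that recovery step can itself be running in two manila-share
instances at once is exactly the kind of rule I'm trying to pin down.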

My very favourite answer here would be a pointer to some
documentation, but I'm guessing much of this stuff is still at a "word of
mouth" stage.

Concurrency is the area where most of our problems come from. There was a time, I believe, when concurrency issues were largely taken care of, but that was before we forked Manila from Cinder, and before Cinder forked from Nova. Over time, a lack of test coverage has allowed race conditions to creep in, and architectural decisions have been made that fail to account for HA (highly available) deployments, where multiple services may be managing the very same backends. The Cinder team has been working on fixing these issues, and we need to catch up.

As I start to turn my attention from wrapping up Mitaka to thinking about Newton, concurrency is the most urgent focus area I can see. Many of our gate stability problems are likely due to concurrency issues. This is easy to verify: with the tempest concurrency value set to 1, the run passes flawlessly every time, whereas with concurrency >1 we see occasional failures.

-Ben


John

On Fri, Mar 4, 2016 at 9:06 PM, John Spray <jsp...@redhat.com> wrote:
Hi,

What expectations should driver authors have about multiple instances
of the driver being instantiated within different instances of
manila-share?

For example, should I assume that when one instance of a driver is
having ensure_share called during startup, another instance of the
driver might be going through the same process on the same share at
the same time?  Are there any rules at all?

Thanks,
John

--
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource
