There are also similar situations where we don't actually lock resources. 
For example, a cgsnapshot may get deleted while a consistencygroup is being 
created from it.

From my perspective it seems best to have atomic state changes and state-based 
exclusion in the API. We would need some kind of 
currently_used_to_create_snapshot/volumes/consistencygroups states to achieve 
that. Then we would also be able to return VolumeIsBusy exceptions, so retrying 
a request would be on the user's side.
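A minimal sketch of what such state-based exclusion could look like. All names here (VolumeStates, VolumeBusy, the status strings) are illustrative, not cinder's actual classes, and in a real deployment the compare-and-swap would be a single conditional UPDATE against the database row rather than a dict behind a mutex:

```python
import threading


class VolumeBusy(Exception):
    """Hypothetical stand-in for a VolumeIsBusy API exception."""


class VolumeStates:
    """Toy in-memory model of atomic, state-based exclusion."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._status = {}

    def add(self, vol_id, status='available'):
        with self._mutex:
            self._status[vol_id] = status

    def transition(self, vol_id, expected, new):
        # Atomic compare-and-swap: the transition succeeds only if the
        # volume is still in the expected state; otherwise the API
        # reports "busy" and the client is responsible for retrying.
        with self._mutex:
            if self._status.get(vol_id) != expected:
                raise VolumeBusy(vol_id)
            self._status[vol_id] = new


states = VolumeStates()
states.add('vol-1')
states.transition('vol-1', 'available', 'creating_snapshot')
try:
    # A concurrent delete finds the volume busy instead of blocking
    # on a lock; no lock is held across the long-running operation.
    states.transition('vol-1', 'available', 'deleting')
except VolumeBusy:
    print('delete rejected: vol-1 is busy')
```

The point of the sketch is that exclusion lives in the state machine rather than in a lockfile, so it survives service restarts and works across HA API nodes.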

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Sunday, June 28, 2015 12:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from 
volume/snapshot


We need mutual exclusion for several operations. Whether that is done by entity 
queues, locks, state-based locking at the API layer, or something else, we need 
mutual exclusion.

Our current API does not lend itself to looser consistency, and I struggle to 
come up with a sane API that does - nobody doing an operation on a volume 
wants it to happen maybe, at some time...
On 28 Jun 2015 07:30, "Avishay Traeger" <avis...@stratoscale.com> wrote:
Do we really need any of these locks?  I'm sure we could come up with some way 
to remove them, rather than make them distributed.

On Sun, Jun 28, 2015 at 5:07 AM, Joshua Harlow <harlo...@outlook.com> wrote:
John Griffith wrote:


On Sat, Jun 27, 2015 at 11:47 AM, Joshua Harlow <harlo...@outlook.com> wrote:

    Duncan Thomas wrote:

        We are working on some sort of distributed replacement for the
        locks in
        cinder, since file locks are limiting our ability to do HA. I'm
        afraid
        you're unlikely to get any traction until that work is done.

        I also have a concern that some backends do not handle load well,
        and so benefit from the current serialisation. It might be
        necessary to push this lock down into the driver and allow each
        driver to choose its locking model for snapshots.


    IMHO (and I know this isn't what everyone thinks) I'd rather
    have cinder (and other projects) be like this clip from Top Gear
    ( https://www.youtube.com/watch?v=xnWKz7Cthkk ), where the Toyota
    truck is virtually indestructible, vs. trying to be a
    high-maintenance Ferrari (when most openstack projects do a bad job
    of trying to be one). So, maybe for a time (and I may regret saying
    this) we could consider focusing on reliability and consistency
    (being the Toyota) vs. handling some arbitrary amount of load
    (trying to be a Ferrari).

    Also I'd expect/think operators would rather prefer a Toyota at this
    stage of openstack :) Ok, enough analogies, ha.


​Well said Josh, I guess I've been going about this all wrong by not
using the analogies :)​

Exactly!! IMHO that should be the new openstack mantra: 'built from 
components/projects that survive like a Toyota truck', haha. Part 2 
(https://www.youtube.com/watch?v=xTPnIpjodA8) and part 3 
(https://www.youtube.com/watch?v=kFnVZXQD5_k) are funny/interesting also :-P

Now we just need openstack to be that reliable and tolerant of 
failures/calamities/...


    -Josh


        On 27 Jun 2015 06:18, "niuzhenguo" <niuzhen...@huawei.com> wrote:

             Hi folks,

             Currently we use a lockfile to protect the create operations
             from concurrent deletes of the source volume/snapshot. We use
             exclusive locks on both the delete and create sides, which
             ensures that:

             1. If a create of VolA from snap/VolB is in progress, any
             delete requests for snap/VolB will wait until the create is
             complete.

             2. If a delete of snap/VolA is in progress, any create from
             snap/VolA will wait until the snap/VolA delete is complete.

             But the exclusive locks also result in:

             3. If a create of VolA from snap/VolB is in progress, any
             other create requests from snap/VolB will wait until the
             create is complete.

             So create operations from the same volume/snapshot cannot
             proceed in parallel; please reference bp [1].

             I'd like to change the current filelock or introduce a new
             lock in oslo.concurrency.

             Proposed change:

             Add exclusive (write) locks for delete operations and shared
             (read) locks for create operations, so that creates from the
             same volume/snapshot can run in parallel while create
             operations are still protected from concurrent deletes of
             the source volume/snapshot.

             I'd like to get your suggestions, thanks in advance.

             [1] https://blueprints.launchpad.net/cinder/+spec/enhance-locks

             -zhenguo



        
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

        








--
Avishay Traeger
Storage R&D

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



Web<http://www.stratoscale.com/> | Blog<http://www.stratoscale.com/blog/> | 
Twitter<https://twitter.com/Stratoscale> | 
Google+<https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts>
 | Linkedin<https://www.linkedin.com/company/stratoscale>

