Re: [openstack-dev] [tc] [all] [glance] On operating a high throughput or otherwise team

2016-05-15 Thread Nikhil Komawar
Erno,

TBH, you clearly haven't taken the time to understand what I meant to
say here, which itself justifies my email about ML discussions. Hence, I
do not want to spend too much time writing a response. A few things
struck me, though, so comments inline. Also, I do not want to comment on
your response to Chris' feedback -- his feedback is something I need
more time to internalize.


On 5/15/16 12:45 PM, Erno Kuvaja wrote:
> I'm not sure why I'm spending one of these very rare sunny Sunday
> afternoons on this but perhaps it's just important enough.
>
> Thanks Nikhil and Chris for such a verbal start for the discussion. I
> will at least try to keep my part shorter. I will quote both of your
> e-mails, attributing each quote by name.
>
> Nikhil:
> """Lately I have been involved in discussions that have resulted in giving
> a wrong idea of the approach I take in operating the (Glance) team(s)."""
>
> At the very start, please change your approach and start leading the
> team instead of trying to operate it. This is a community rather than a
> huge team in a big enterprise, and our contributors have different
> corporate and cultural backgrounds and different reasons for
> contributing; it is not an organization with resources at the PTL's
> disposal.
>

Firstly, here's a definition of leadership from a _wiki_-like page (
http://www.businessdictionary.com/definition/leadership.html ); see
point #4 under "involvement". Think about it more before commenting.

Secondly, this email is not about leadership; it is about
communication, which is the most essential part of operations. If you
did not get that part, I have no words.

Since you agree this is a community, balance in the ecosystem is of
prime importance. The way you maintain that is through operating it.
Think about it more, and more, and more.

> Nikhil:
> """We are developing something that is usable, operationally friendly and
> that it's easier to contribute & maintain but, many strong influencers
> are missing on the most important need for OpenStack -- efficient way of
> communication."""
>
> With the OpenStack community spanning probably every timezone that
> has land, we can communicate as much as we ever want and still not
> reach everybody in time. After all, the individual projects are
> responsible for the release and for providing whatever their mission
> statement mandates. We can put all our effort into communicating in a
> nice, friendly and easy way, but if we do not deliver, we will not
> need to do this for long.
>
> Nikhil:
> """Also, many people like to work on the assumption that all the
> tools of communication are equivalent or useful and there are no
> side-effects of using them ever."""
> """I think people prefer to use ML a lot and I am not a great fan of the
> same."""
>
> This is great to recognize; now let's work on that and get the
> expectations right.
> Community-wide, the primary medium is the mailing list (and just
> perhaps extending to specs and gerrit reviews). You don't like it, I
> don't like it, but it's what we have to deal with.

You haven't understood the bottom line here. Read my comments on the
intent.

> Secondary would be the more real-time forums, namely IRC and design
> summits.
> Anything apart from that (yes; hangouts, bluejeans and midcycles are
> great for building the team and ironing out misunderstandings and
> disagreements at the individual level) is tertiary and worthless unless
> used purely to bring the discussion to the primary media. This is the
> only way to include the community in an asynchronous way without
> expecting 2/3 of the timezones to participate at inconvenient or
> impossible times or to find the funding to travel.
>
> Nikhil:
> """Multi-cast medium of communication is more disruptive as it involves a
> possibility of divergence from the topic, strongly polarizing opinions
> due to the small possibility of catching the intent. So, let us use
> it 'judiciously' and preferably only as a newspaper."""
>
> Seriously? If you want to publish in a unidirectional manner, please
> write a blog* like everyone else, but don't try to transform our
> primary communications into that. Thankfully this opening was already
> a start in the right direction.
> * "BLOGGING; Never Before Have So Many People with So Little to Say
> Said So Much to So Few."
> - Despair, Inc. http://despair.com/products/blogging
>
> Nikhil:
> """Though, I think every team needs to be synchronous about their approach
> and not use delayed mechanisms like ML or gerrit."""
>
> 10AM IST (UTC +0100) seems to be a good time; half an hour every
> morning should be fine to get me synced up after I've gone through the
> pressing stuff from e-mails, and 17:00 IST (UTC +0100) is probably a
> good time for another one to keep us synchronized. I'm sure the rest
> of the team is willing to sacrifice half an hour of their mornings and
> evenings for the same. Hopefully you can facilitate, or perhaps the
> synchronous approach is not that good after all?

Re: [openstack-dev] [Congress] Nominating Anusha Ramineni and Eric Kao for core reviewer

2016-05-15 Thread Masahito MUROI

+1.

On 2016/05/14 9:16, Tim Hinrichs wrote:

Hi all,

I'm writing to nominate Anusha Ramineni and Eric Kao as Congress core
reviewers.  Both Anusha and Eric have been active and consistent
contributors in terms of code, reviewing, and interacting with the
community since September--for all of Mitaka and a few months before that.

Anusha was so active in Mitaka that she committed more code than the
other core reviewers, and wrote the 2nd most reviews overall.  She took
on stable-maintenance, is the first person to fix gate breakages, and
manages to keep Congress synchronized with the rest of the OpenStack
projects we depend on.  She's taken on numerous tasks in migrating to
our new distributed architecture, especially around the API.  She
manages to write honest yet kind reviews, and has discussions at the
same level as the rest of the cores.

Eric also committed more code in Mitaka than the other core reviewers.
He has demonstrated his ability to design and implement solutions and
work well with the community through the review process.  In particular,
he completed the Congress migration to Python3 (including dealing with
the antlr grammar), worked through difficult problems with the new
distributed architecture (e.g. message sequencing, test-nondeterminism),
and is now designing an HA deployment architecture.  His reviews and
responses are both thoughtful and thorough, and he engages in
discussions at the same level as the rest of the core team.

Anusha and Eric: it's great working with you both!

Tim








--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539





Re: [openstack-dev] [Storlets] Swift copy middleware

2016-05-15 Thread kajinamit
> I agree with Kota, and I think this is the easiest way to fix the problem.
I noticed that placing storlets_middleware outside copy may require some
changes to slo handling, because it is currently assumed that
storlets_middleware is placed "inside" slo, right?
(I noticed this just after sending my previous e-mail :-( )

I think we had better make sure what would need to be fixed in each
solution at this point.

Thanks,
Takashi


-Original Message-
From: kajina...@nttdata.co.jp [mailto:kajina...@nttdata.co.jp] 
Sent: Monday, May 16, 2016 10:20 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Storlets] Swift copy middleware

Hi Eran and Kota,


> As a temporary fix that would work, but I thought we could (*not sure*) fix the
> issue by just changing the order of the pipeline, right? (i.e. the storlets handler
> should be to the left of the copy middleware.) That is because the storlets
> middleware has the logic to translate COPY/PUT(X-COPY-FROM) into GET (no storlets
> running)/PUT (executed at the proxy). If that happens before the request reaches
> the copy middleware, it looks like a plain PUT or GET to the copy middleware
> itself (so nothing to operate on there).
I agree with Kota, and I think this is the easiest way to fix the problem.

On the other hand, it's not efficient for Storlets to keep its own
implementation of COPY, which is very similar to the copy middleware.
As we discussed at Bristol, as a next step we had better make Storlets
drop its own implementation and reuse ServerSideCopyMiddleware or
ServerSideCopyWebContext from the copy middleware, to reduce the
duplicated work.
# I need a deep dive into the copy middleware patch now.

> I believe that for Storlets what would happen is that both PUT and GET
> cause a storlet invocation, where in fact we want that invocation to
> happen either in the GET or in the PUT (but not both). I believe that
> if we are OK with running the storlet on the PUT, we can use the
> swift_source SSC as an indicator that the GET is generated from the
> copy middleware and disregard the X-Run-Storlet header.
I also like this idea, if possible.
Dealing with COPY in only one place (the copy middleware) looks better,
because it makes the functionality easier to maintain.

Thanks,
Takashi


-Original Message-
From: Kota TSUYUZAKI [mailto:tsuyuzaki.k...@lab.ntt.co.jp] 
Sent: Monday, May 16, 2016 9:25 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Storlets] Swift copy middleware

Hey Eran,

This is what I was concerned about at the Bristol Hackathon :/

> As a quick and temporary resolution I have changed the tox.ini
> dependency to be 2.7.0 instead of master. We still need, however, to
> port the code accordingly.

As a temporary fix that would work, but I thought we could (*not sure*) fix the
issue by just changing the order of the pipeline, right? (i.e. the storlets handler
should be to the left of the copy middleware.) That is because the storlets
middleware has the logic to translate COPY/PUT(X-COPY-FROM) into GET (no storlets
running)/PUT (executed at the proxy). If that happens before the request reaches
the copy middleware, it looks like a plain PUT or GET to the copy middleware
itself (so nothing to operate on there).

I'll start verifying my thinking this week, but thanks for raising a flag to
the community :)

Thanks,
Kota



(2016/05/16 3:42), Eran Rom wrote:
> Today the Swift team has merged copy middleware - congrats!
> For us, however, it breaks the copy code path, which in fact can get 
> much simpler now.
> 
> As a quick and temporary resolution I have changed the tox.ini
> dependency to be 2.7.0 instead of master. We still need, however, to
> port the code accordingly.
> 
> Here is a suggestion:
> The copy middleware will process the COPY / PUT & X-Copy-From and will:
> 1. Do a GET of the source object
> 2. Do a PUT to the target object
> 
> I believe that for Storlets what would happen is that both PUT and GET
> cause a storlet invocation, where in fact we want that invocation to
> happen either in the GET or in the PUT (but not both). I believe that
> if we are OK with running the storlet on the PUT, we can use the
> swift_source SSC as an indicator that the GET is generated from the
> copy middleware and disregard the X-Run-Storlet header.
> 
> Thoughts?
> 
> Thanks,
> Eran
> 
> 
> 
> 
> 








Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Qiming Teng
On Sun, May 15, 2016 at 10:49:39PM +, Hongbin Lu wrote:
> Hi all,
> 
> This is a continued discussion from the design summit. For recap, Magnum
> manages bay nodes by using ResourceGroup from Heat. This approach works but
> it is infeasible to manage heterogeneity across bay nodes, which is a
> frequently demanded feature. As an example, there is a request to provision
> bay nodes across availability zones [1]. There is another request to
> provision bay nodes with different sets of flavors [2]. For the requested
> features above, ResourceGroup won't work very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually create a
> Heat stack for each bay node. For example, for creating a cluster with 2
> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead of
> 1 big Heat stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes or a 
> group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
> --availability-zone us-east-1 
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
> --availability-zone us-east-2 ...
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
> 
> Best regards,
> Hongbin

Seriously, I'm suggesting that Magnum use Senlin for this task. Senlin has
an API that provides the rich operations you will need to manage a cluster
of things, where the "thing" here can be a Heat stack or a Nova server.

A "thing" is modeled as a Profile in Senlin, so it is pretty easy and
straightforward for Magnum to feed in the HOT templates (possibly with
parameters and environments?) to Senlin and offload the group management
task from Magnum.
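
For illustration, feeding a HOT template into Senlin could look roughly
like the spec below, using Senlin's os.heat.stack profile type. The
template file name and parameters are made up for the example, so treat
this as a sketch rather than a tested spec:

```yaml
# Illustrative Senlin profile spec: wrap a (hypothetical) single-node HOT
# template so a Senlin cluster can manage N copies of it, e.g. kube minions.
type: os.heat.stack
version: 1.0
properties:
  template: kube_minion.yaml      # hypothetical per-node Heat template
  parameters:
    flavor: m1.small
    availability_zone: us-east-1
```

A cluster created from such a profile could then be scaled, and placement
policies attached, without Magnum re-implementing any of the grouping logic.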

Speaking of cross-AZ placement, Senlin already has a policy plugin for this
purpose. Regarding bay nodes bearing different sets of flavors,
Senlin also permits that.

I believe that by offloading these operations to Senlin, Magnum can remain
focused on COE management and get it done well. I also believe
that the Senlin team will be very responsive to your requirements if there
are needs to tune the Senlin API/policies/mechanisms.

Regards,
  Qiming




Re: [openstack-dev] [nova][neutron] Is it still valid/supported to create a network with a br- id?

2016-05-15 Thread Jason Kölker
On Sun, May 15, 2016 at 3:56 PM, Sean M. Collins  wrote:
> Matt Riedemann wrote:
>> The nova create-server API allows passing a network id that's prefixed with
>> br- [1]. That was added due to this bug from Folsom [2].
>>
>> I'm wondering if that's still valid? Looking at the network.id data model in
>> Neutron it doesn't look like it would be [3].
>
> Wow. That bug is awful. Network IDs should be UUIDs and ONLY UUIDs.

I agree, I'm having flashbacks to the dark days right now

> Just because some vendor plugin decides that they're going to break the
> Networking API contract and define their own ID scheme,
> doesn't mean that we should fix it to help them.

The networking API at the time (quantum 1.0/1.1; the 2.0 API that we know
and loathe today had been added about 20 days prior, but was not yet
stable or deployed anywhere) used identifiers that were opaque strings.
The docs used UUIDs as the identifiers in the examples, but the API
punted on validation to the individual plugins.

> That commit shouldn't have been accepted into Nova,

Obviously I disagree, see below for the history.

> and I don't think
> that we should support anything but a UUID for a network id. Period.

Which is why this was fixed in the 2.0 API by checking the network
identifier for UUID-ness. This was the result of the fun of
shoe-horning every vendor's backend into quantum, without quantum
being the authority. Not to mention that the reason it was prefixed
with `br-` in the first place was established in the nova network API,
which exposed the bridge that implemented the network. So listing
networks on the nova side would return things like br-public, br-int,
etc. This got carried over into the quantum API when the project was
founded.

I think we can all agree that this compat shim should be dropped now
that the v2.0 neutron API is merged and has been used for quite some time.

Happy Hacking!

7-11



Re: [openstack-dev] [tricircle] [Blueprint] Stateless design definition

2016-05-15 Thread joehuang
Hi, Shinobu, 

You found a bug, yeah. The design has been moved to the master branch, so the
description:

'Note: Stateless proposal is working on the “experiment” branch:'
'https://github.com/openstack/tricircle/tree/experiment'

has just been removed from the design blueprint, as you mentioned.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Shinobu Kinjo [mailto:shinobu...@gmail.com] 
Sent: Saturday, May 14, 2016 2:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [tricircle] [Blueprint] Stateless design definition

Hi Team,

On page 1, we define "stateless design" as follows:

Note: Stateless proposal is working on the “experiment” branch:
https://github.com/openstack/tricircle/tree/experiment

But, in section 7, we define this design as follows:

A PoC was done to verify the feasibility of the stateless architecture.
Since the feedback on this PoC was very positive, the source code of the
stateless design was moved to the master branch of the Tricircle git
repository.

Given that, I am thinking of deleting the former description (the definition on page 1).

Make sense?

Cheers,
Shinobu

--
Email:
shin...@linux.com
shin...@redhat.com



Re: [openstack-dev] [Storlets] Swift copy middleware

2016-05-15 Thread Kota TSUYUZAKI
Hey Eran,

This is what I was concerned about at the Bristol Hackathon :/

> As a quick and temporary resolution I have changed the tox.ini dependency
> to be 2.7.0
> instead of master. We still need, however, to port the code accordingly.

As a temporary fix that would work, but I thought we could (*not sure*) fix the
issue by just changing the order of the pipeline, right? (i.e. the storlets handler
should be to the left of the copy middleware.)
That is because the storlets middleware has the logic to translate
COPY/PUT(X-COPY-FROM) into GET (no storlets running)/PUT (executed at the proxy).
If that happens before the request reaches the copy middleware, it looks like a
plain PUT or GET to the copy middleware itself (so nothing to operate on there).
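
For illustration, the reordering described above would look something like
this in proxy-server.conf. This is a simplified, hypothetical pipeline (not
the full sample configuration), using the usual filter names:

```ini
[pipeline:main]
# Illustrative only: storlet_handler sits to the LEFT of copy, so a
# COPY (or PUT + X-Copy-From) is rewritten into a plain GET/PUT before
# the copy middleware sees it, leaving copy nothing special to do.
pipeline = catch_errors gatekeeper cache tempauth storlet_handler copy slo proxy-server
```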

I'll start verifying my thinking this week, but thanks for raising a flag to
the community :)

Thanks,
Kota



(2016/05/16 3:42), Eran Rom wrote:
> Today the Swift team has merged copy middleware - congrats!
> For us, however, it breaks the copy code path, which in fact can get much 
> simpler now.
> 
> As a quick and temporary resolution I have changed the tox.ini dependency
> to be 2.7.0
> instead of master. We still need, however, to port the code accordingly.
> 
> Here is a suggestion:
> The copy middleware will process the COPY / PUT & X-Copy-From and will:
> 1. Do a GET of the source object
> 2. Do a PUT to the target object
> 
> I believe that for Storlets what would happen is that both PUT and GET
> cause a storlet invocation, where in fact we want that invocation to
> happen either in the GET or in the PUT (but not both).
> I believe that if we are OK with running the storlet on the PUT, we can
> use the swift_source SSC as an indicator that the GET is generated from
> the copy middleware and disregard the X-Run-Storlet header.
> 
> Thoughts?
> 
> Thanks,
> Eran
> 
> 
> 
> 
> 







Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Yuanying OTSUKA
Hi,

I think users will also want to specify which node to delete,
so we should manage "nodes" individually.

For example:
$ magnum node-create --bay …
$ magnum node-list --bay
$ magnum node-delete $NODE_UUID

Anyway, if magnum wants to manage the lifecycle of container infrastructure,
this feature is necessary.

Thanks
-yuanying


On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:

> Hi all,
>
>
>
> This is a continued discussion from the design summit. For recap, Magnum
> manages bay nodes by using ResourceGroup from Heat. This approach works but
> it is infeasible to manage heterogeneity across bay nodes, which is a
> frequently demanded feature. As an example, there is a request to provision
> bay nodes across availability zones [1]. There is another request to
> provision bay nodes with different sets of flavors [2]. For the requested
> features above, ResourceGroup won’t work very well.
>
>
>
> The proposal is to remove the usage of ResourceGroup and manually create a
> Heat stack for each bay node. For example, for creating a cluster with 2
> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead of
> 1 big Heat stack as right now):
>
> * A kube cluster stack that manages the global resources
>
> * Two kube master stacks that manage the two master nodes
>
> * Three kube minion stacks that manage the three minion nodes
>
>
>
> The proposal might require an additional API endpoint to manage nodes or a
> group of nodes. For example:
>
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
> --availability-zone us-east-1 ….
>
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
> --availability-zone us-east-2 …
>
>
>
> Thoughts?
>
>
>
> [1]
> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>
>
>
> Best regards,
>
> Hongbin


Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-15 Thread Jamie Lennox
On 13 May 2016 at 04:15, Sean Dague  wrote:

> On 05/12/2016 01:47 PM, Morgan Fainberg wrote:
> > This  also comes back to the conversation at the summit. We need to
> > propose the timeline to turn over for V3 (regardless of
> > voting/non-voting today) so that it is possible to set the timeline that
> > is expected for everything to get fixed (and where we are
> > expecting/planning to stop reverting while focusing on fixing the
> > v3-only changes).
> >
> > I am going to ask the Keystone team to set forth the timeline and commit
> > to getting the pieces in order so that we can make v3-only voting rather
> > than playing the propose/revert game we're currently doing. A proposed
> > timeline and gameplan will only help at this point.
>
> A timeline would be good (proposed below), but there are also other bits
> of the approach we should consider.
>

That was my job to get sent to the TC. I'll get on it.


>
> I would expect, for instance,
> gate-tempest-dsvm-neutron-identity-v3-only-full to be on keystone, and
> it does not appear to be. Is there a reason why?
>

To test that keystone works with keystone v3? Otherwise what you're doing
is making it so that keystone's gate breaks every time neutron does
something that's not v3 compatible, which brings it to our attention but
otherwise just gets in the way. The hope was to push the check job failure
back to the service so that it's not purely keystone's job to run around
and fix all the other services when an incompatible change is discovered.


>
> With that on keystone, devstack-gate, devstack, tempest the integrated
> space should be pretty well covered. There really is no need to also go
> stick this on glance, nova, cinder, neutron, swift I don't think,
> because they only really use keystone through pretty defined interfaces.
>

Initially I would have agreed, and there has been a voting job on devstack
with keystone v3 only that proves that all of these services can work
together, for at least a cycle. Where we got stung was all the plugins and
configuration options used in these services that don't get tested by that
integrated gate job. The hope was that by pushing these jobs out to the
services we would get more coverage of the service-specific configurations
- but I can see that might not be working.


> Then some strategic use of nv jobs on things we know would have some
> additional interactions here (because we know they are currently broken
> or they do interesting things) like ironic, heat, trove, would probably
> be pretty useful.
>
> That starts building up the list of known breaks the keystone folks are
> tracking, which should get a drum beat every week in email about
> outstanding issues, and issues fixed.
>
> The goal of gate-tempest-dsvm-neutron-identity-v3-only-full should not
> be for that to be voting, ever. It should be to use that as a good
> indicator that we can change the default in devstack (and thus in the
> majority of upstream jobs) to not ever enable v2.
>
> Because of how v3 support exists in projects (largely hidden behind
> keystoneauth), it is really unlikely to randomly regress once fixed. There
> just aren't that many knobs a project has that would make that happen.
> So I think we can make forward progress without a voting backstop until
> we get to a point where we can just throw the big red switch (with
> warning) on a Monday (probably early in the Ocata cycle) and say there
> you go. It's now the project's job to handle it. And they'll all get fair
> warning for the month prior to the big red switch.
>

I agree. Very early in the Ocata cycle is also the timeframe we had
discussed at the summit, so it looks like there is a good consensus there
and I'll get that proposal to the TC this week.

For now we maintain the v3-only jobs as non-voting and we continue to push
the changes particularly to projects that are not tested in the default
devstack integrated gate test.

PS. I assume I'm right in assuming it's just impossible/infeasible to have
project-config changes determine all the jobs that are affected by a change
and run those as the project-config gate. It seems like one of the last few
places where we can commit something that breaks everyone and never
notice.


-Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [nova][neutron] Is it still valid/supported to create a network with a br- id?

2016-05-15 Thread Doug Wiegley

> On May 15, 2016, at 10:17 AM, Matt Riedemann  
> wrote:
> 
>> On 5/15/2016 10:56 AM, Sean M. Collins wrote:
>> Matt Riedemann wrote:
>>> The nova create-server API allows passing a network id that's prefixed with
>>> br- [1]. That was added due to this bug from Folsom [2].
>>> 
>>> I'm wondering if that's still valid? Looking at the network.id data model in
>>> Neutron it doesn't look like it would be [3].
>> 
>> Wow. That bug is awful. Network IDs should be UUIDs and ONLY UUIDs.
>> 
>> Just because some vendor plugin decides that they're going to break the
>> Networking API contract and define their own ID scheme,
>> doesn't mean that we should fix it to help them.
>> 
>> That commit shouldn't have been accepted into Nova, and I don't think
>> that we should support anything but a UUID for a network id. Period.
> 
> Yeah, I agree. Remember, this was Folsom, when Neutron was a young and brash 
> Quantum.
> 
> I was just trying to sort out if there is still anything out there in the 
> stadium that relies on this working. If not, I'll microversion it out of 
> support for the Nova API when we add support for auto-allocated-topology.

I agree with Sean.  Even if there is anything that relies on non-UUID IDs,
it's totally fair to break it.

Doug

> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 



Re: [openstack-dev] [Openstack-operators] [glance] glance-registry deprecation: Request for feedback

2016-05-15 Thread Sam Morrison

> On 14 May 2016, at 5:36 AM, Flavio Percoco  wrote:
> 
>> On 5/12/16 9:20 PM, Sam Morrison wrote:
>> 
>>   We find the glance registry quite useful. Having a central glance-registry
>> api is useful when you have multiple datacenters, all with glance-apis
>> talking back to a central registry service. I guess they could all talk back
>> to the central DB server, but currently that would be over the public
>> Internet for us. Not really an issue; we can work around it.
>> 
>>   The major thing that the registry has given us has been rolling upgrades.
>> We have been able to upgrade our registry first, then one by one upgrade our
>> API servers (we have about 15 glance-apis).
> 
> I'm curious to know how you did this upgrade, though. Did you shut down your
> registry nodes, upgrade the database and then re-start them? Did you upgrade
> one registry node at a time?
> 
> I'm asking because, as far as I can tell, the strategy you used for upgrading
> the registry nodes is the one you would use to upgrade the glance-api nodes
> today. Shutting down all registry nodes would leave you with unusable
> glance-api nodes anyway, so I'd assume you did a partial upgrade or something
> similar to that.

Yeah, if glance supported versioned objects then yes this would be great. 

We only have 3 glance-registries and so upgrading these first is a lot easier 
than upgrading all ~15 of our glance-apis at once.

Sam






[openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Hongbin Lu
Hi all,

This is a continued discussion from the design summit. For recap, Magnum
manages bay nodes by using ResourceGroup from Heat. This approach works but it
is infeasible to manage heterogeneity across bay nodes, which is a
frequently demanded feature. As an example, there is a request to provision bay
nodes across availability zones [1]. There is another request to provision bay
nodes with different sets of flavors [2]. For the requested features above,
ResourceGroup won't work very well.

The proposal is to remove the usage of ResourceGroup and manually create a
Heat stack for each bay node. For example, for creating a cluster with 2 masters
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 ...

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin


[openstack-dev] [Storlets] Swift copy middleware

2016-05-15 Thread Eran Rom
Today the Swift team has merged copy middleware - congrats!
For us, however, it breaks the copy code path, which in fact can get much 
simpler now.

As a quick and temporary resolution I have changed the tox.ini dependency
to be 2.7.0 instead of master. We still need, however, to port the code
accordingly.

Here is a suggestion:
The copy middleware will process the COPY / PUT & X-Copy-From and will:
1. Do a GET of the source object
2. Do a PUT to the target object

I believe that for Storlets what would happen is that both PUT and GET
cause a storlet invocation, where in fact we want that invocation to
happen either in the GET or in the PUT (but not both).
I believe that if we are OK with running the storlet on the PUT, we can
use the swift_source SSC as an indicator that the GET is generated from
the copy middleware and disregard the X-Run-Storlet header.
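
As a rough sketch of that second idea (illustrative names, not actual
storlets code): the storlets middleware could skip invocation whenever the
GET was generated internally by the copy middleware, which tags its
subrequests with swift.source set to 'SSC':

```python
# Hedged sketch, not actual storlets code: decide whether a request
# should trigger a storlet invocation.

def should_invoke_storlet(environ):
    """Return True if this WSGI request should run a storlet."""
    if 'HTTP_X_RUN_STORLET' not in environ:
        return False
    # The copy middleware marks its internally generated subrequests
    # with swift.source == 'SSC' (server-side copy). Skipping those
    # GETs means the storlet runs exactly once, on the PUT side.
    if environ.get('swift.source') == 'SSC':
        return False
    return True
```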

Thoughts?

Thanks,
Eran




Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-15 Thread Mike Perez
On 16:06 May 14, Nikhil Komawar wrote:
> On 5/14/16 2:35 PM, Mike Perez wrote:
> > Reading this thread, Nikhil who is speaking for the quota team is worried 
> > about
> > the amount of overhead caused by governance, instead of first focusing on
> > making something actually exist. I see quite a few people in this thread
> > speaking up that it should be part of the big tent either standalone or oslo
> > lib.
> >
> > I can't speak for the Oslo folks, but as a member of the TC here are the
> > requirements for the standalone route [1]. You would propose an agenda item 
> > to
> > the TC, and we would review that the project meets those requirements.
> > Considering the project does Open Design and has an Open Community - my 
> > guesses
> > on "probably would be followed" is Open Development and Open Source since we
> > don't have anything but a specification that exists to go off of.
> >
> > It sounds like the biggest hang-up in going the oslo route is the oslo
> > spec. So, a question to the oslo folks: would you be interested in
> > reviewing the cross-project specification and allowing it to be an oslo
> > lib? That way the team can focus on working on the library, and the
> > community is happy it's part of OpenStack already.
> >
> >
> >
> 
> Thanks Mike for helping us move forward.
> 
> Following the suggestion yesterday, I have added this agenda item for
> the Oslo team meeting to get answers:
> https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting .
> 
> Also, can you please re-reference as the link did not come through?

[1] - http://governance.openstack.org/reference/new-projects-requirements.html

-- 
Mike Perez



Re: [openstack-dev] [cross-project][quotas][delimiter] My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Jay Pipes
Amrith, your code in SQL statements below is identical to mine in Python 
with two exceptions that make your code less scalable and more problematic:


1) You have your generation on a "resources" table instead of a 
"consumers" table, which means you have an extremely high likelihood of 
needing to retry your UPDATE statements and SELECTs because all 
consumers in the system share a single serialization point (the 
resources.generation field for the resource class).


2) The Delimiter library (i.e. "the quota system") *should not own 
resources*. Resources are owned by the individual services themselves, 
not some separate quota library. The quota library does not have 
knowledge of nor access to the allocations, inventories, or 
resource_providers database tables. In fact, the quota library should 
not assume *anything* about how usage and inventory information is 
stored or even that it is in a transactional RDBMS.


The Python code I used as an example was deliberately trying to keep the 
quota library as a consistent interface for how to deal with the 
check-consume pattern without needing the quota library to know anything 
at all about how the actual resource usage and inventory information was 
stored.
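
To make the shape of that interface concrete, here is a minimal sketch;
every name below is illustrative, not the actual Delimiter API. The
service owns storage and the consumer generation; the library only drives
the check-and-retry loop:

```python
# Minimal sketch, not the actual Delimiter API: the library never touches
# storage. It asks the service for usage plus an opaque consumer
# generation and retries if a concurrent consumer bumped that generation.

class QuotaExceeded(Exception):
    pass

class ConcurrentUpdate(Exception):
    pass

def consume(service, consumer_id, resource_class, amount, limit, retries=3):
    for _ in range(retries):
        # The service reports current usage and the consumer generation.
        used, generation = service.get_usage(consumer_id, resource_class)
        if used + amount > limit:
            raise QuotaExceeded(resource_class)
        # The service applies the claim only if the generation is still
        # current (e.g. UPDATE ... WHERE generation = :gen); how it does
        # that atomically is entirely up to the service.
        if service.claim(consumer_id, resource_class, amount, generation):
            return
    raise ConcurrentUpdate(resource_class)
```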


Best,
-jay

On 05/15/2016 01:55 PM, Amrith Kumar wrote:

Qijing,

As a simple example, let's assume that I use this schema. I realize that
it does not provide the resource provider concept that Jay talked about in
a previous email (from a couple of weeks ago), but I believe that it serves
to illustrate how the generations are used.

create table resources (
 resource_id varchar(36) primary key,
 resourcevarchar(32),
 generation  integer
) engine=innodb;

create table allocations (
consumer_id   varchar(36),
resource_id   varchar(36),
amountinteger,
foreign key (resource_id)
references resources(resource_id)
) engine=innodb;

I've also populated it with this sample data.

insert into resources values ('b587d300-1a94-11e6-8478-000c291e9f7b',
'memory', 3);
insert into resources values ('b587ddb1-1a94-11e6-8478-000c291e9f7b',
'cpu', 3);
insert into resources values ('b587de7d-1a94-11e6-8478-000c291e9f7b',
'disk', 3);

insert into allocations values
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 1024 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',6 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',10240 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 2048 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',2 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',  512 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 2048 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',2 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',  512 );


That gives me this as a starting point.

mysql> select distinct resource from resources;
+--+
| resource |
+--+
| memory   |
| cpu  |
| disk |
+--+
3 rows in set (0.00 sec)

mysql> select distinct consumer_id from allocations;
+--+
| consumer_id  |
+--+
| 61412e76-1a95-11e6-8478-000c291e9f7b |
| be03c4f7-1a96-11e6-8478-000c291e9f7b |
+--+
2 rows in set (0.00 sec)


-

Assume that the consumer (61412e76-1a95-11e6-8478-000c291e9f7b) has a
CPU quota of 12; we can see that the user has not yet hit his quota.

mysql> select sum(amount) from resources, allocations where
resources.resource_id = allocations.resource_id and resources.resource =
'cpu' and consumer_id = '61412e76-1a95-11e6-8478-000c291e9f7b';
+-+
| sum(amount) |
+-+
|   8 |
+-+
1 row in set (0.00 sec)


In this situation, assume that this consumer wishes to consume two
CPU's. Here's what quota library would do.

The caller of quota library would provide something like:

consumer_id: 61412e76-1a95-11e6-8478-000c291e9f7b
resource: cpu
quota: 12
request: 2

Here's what the quota library would do.

mysql> select resources.resource_id, generation, sum(amount) from
resources, allocations where resources.resource_id =
allocations.resource_id and resources.resource = 'cpu' and consumer_id =
'61412e76-1a95-11e6-8478-000c291e9f7b' group by resources.resource_id,
generation\g
+--++-+
| resource_id  | generation | sum(amount) |
+--++-+
| b587ddb1-1a94-11e6-8478-000c291e9f7b |  3 |   8 |

Re: [openstack-dev] [cross-project][quotas][delimiter] My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Amrith Kumar
Qijing,

As a simple example, let's assume that I use this schema. I realize that
it does not provide the resource provider concept that Jay talked about in
a previous email (from a couple of weeks ago), but I believe that it serves
to illustrate how the generations are used.

create table resources (
resource_id varchar(36) primary key,
resourcevarchar(32),
generation  integer
) engine=innodb;

create table allocations (
   consumer_id   varchar(36),
   resource_id   varchar(36),
   amountinteger,
   foreign key (resource_id)
   references resources(resource_id)
) engine=innodb;

I've also populated it with this sample data.

insert into resources values ('b587d300-1a94-11e6-8478-000c291e9f7b',
'memory', 3);
insert into resources values ('b587ddb1-1a94-11e6-8478-000c291e9f7b',
'cpu', 3);
insert into resources values ('b587de7d-1a94-11e6-8478-000c291e9f7b',
'disk', 3);

insert into allocations values
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 1024 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',6 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',10240 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 2048 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',2 ),
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',  512 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587d300-1a94-11e6-8478-000c291e9f7b', 2048 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b',2 ),
( 'be03c4f7-1a96-11e6-8478-000c291e9f7b',
'b587de7d-1a94-11e6-8478-000c291e9f7b',  512 );


That gives me this as a starting point.

mysql> select distinct resource from resources;
+--+
| resource |
+--+
| memory   |
| cpu  |
| disk |
+--+
3 rows in set (0.00 sec)

mysql> select distinct consumer_id from allocations;
+--+
| consumer_id  |
+--+
| 61412e76-1a95-11e6-8478-000c291e9f7b |
| be03c4f7-1a96-11e6-8478-000c291e9f7b |
+--+
2 rows in set (0.00 sec)


-

Assume that the consumer (61412e76-1a95-11e6-8478-000c291e9f7b) has a
CPU quota of 12; we can see that the user has not yet hit his quota.

mysql> select sum(amount) from resources, allocations where
resources.resource_id = allocations.resource_id and resources.resource =
'cpu' and consumer_id = '61412e76-1a95-11e6-8478-000c291e9f7b';
+-+
| sum(amount) |
+-+
|   8 |
+-+
1 row in set (0.00 sec)


In this situation, assume that this consumer wishes to consume two
CPU's. Here's what quota library would do.

The caller of quota library would provide something like:

consumer_id: 61412e76-1a95-11e6-8478-000c291e9f7b
resource: cpu
quota: 12
request: 2

Here's what the quota library would do.

mysql> select resources.resource_id, generation, sum(amount) from
resources, allocations where resources.resource_id =
allocations.resource_id and resources.resource = 'cpu' and consumer_id =
'61412e76-1a95-11e6-8478-000c291e9f7b' group by resources.resource_id,
generation\g
+--++-+
| resource_id  | generation | sum(amount) |
+--++-+
| b587ddb1-1a94-11e6-8478-000c291e9f7b |  3 |   8 |
+--++-+
1 row in set (0.00 sec)

-- it can now determine that the quota of 12 won't be violated by
allocating two more. So it goes ahead and does this.

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into allocations values
( '61412e76-1a95-11e6-8478-000c291e9f7b',
'b587ddb1-1a94-11e6-8478-000c291e9f7b', 2);
Query OK, 1 row affected (0.00 sec)

And then does this:

mysql> update resources set generation = generation + 1
-> where resource_id = 'b587ddb1-1a94-11e6-8478-000c291e9f7b'
-> and generation = 3;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

It observes that 1 row was matched, so the allocation succeeded and
therefore it does this.

mysql> commit;
Query OK, 0 rows affected (0.01 sec)

---

Assume now that consumer 'be03c4f7-1a96-11e6-8478-000c291e9f7b' with a
cpu quota of 50 comes along and wants 4 more. The library does this.


mysql> select resources.resource_id, generation, sum(amount) from
resources, allocations where resources.resource_id =
allocations.resource_id and resources.resource = 'cpu' and consumer_id =
'be03c4f7-1a96-11e6-8478-000c291e9f7b' group by resources.resource_id,
generation;
+--++-+
| resource_id  | 

Re: [openstack-dev] [cross-project][quotas][delimiter] My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Amrith Kumar
I'm not thrilled that there are two APIs, one for a quota check and one
for consumption, where it is up to the caller to properly pass
the results of one to the other.

I'd much rather have consume accept the quotas and the request and 'just
do it'. With that approach, the generation ids and such are entirely
out of the requester's reach.

I'll send a simple example illustrating the use of generations (which I
hope will be simpler than Jay's example below).

-amrith


On Sun, 2016-05-15 at 11:06 -0400, Jay Pipes wrote:
> On 05/15/2016 04:16 AM, Qijing Li wrote:
> > Hi Vilobh,
> >
> > Here are my thoughts on how Delimiter uses generation-id to guarantee
> > sequencing. Please correct me if I understand it wrong.
> >
> > First, Delimiter needs to introduce another model, ResourceProvider,
> > which has two attributes:
> >
> >   * resource_id
> >   * generation_id
> 
> We will need generations for *consumers* of resources as well. The 
> generation for a provider of a resource is used when updating that 
> particular provider's view of its inventory. For quotas to work 
> effectively, we need each service to keep a generation for each consumer 
> of resources in the system.
> 
> > The followings are the steps of how to consume a quota:
> >
> > Step 1. Check if there is enough available quota
> 
> When you refer to quota above, you are using incorrect terminology. The 
> quota is the *limit* that a consumer of a resource has for a particular 
> class of resources. There is no such thing as "available quota". What 
> you are referring to above is whether the requested amount of resources 
> for a particular consumer would exceed that consumer's quota for a resource.
> 
> >  If yes, then get the $generation_id by querying the model
> > ResourceProvider with the given resource_id which is the point in time
> > view of resource usage.
> 
> The generation for a resource provider is not used when *checking* if 
> quota would be exceeded for a consumer's request of a particular 
> resource class. The generation for a resource provider is used when 
> *consuming* resources on a particular resource provider. This 
> consumption process doesn't have anything to do with Delimiter, though. 
> It is an internal mechanism of each service whether it uses heavy 
> locking techniques or whether it uses a generation and retries to ensure 
> a consistent view.
> 
> Please see example code below.
> 
> >  If no, terminate the process of consuming the quota and return the
> > message of “No enough quotas available."
> 
> Yes.
> 
> > Step 2. Consume the quota.
> >
> > 2.1 Begin transaction
> >
> > 2.2 Update the QuotaUsage model: QuotaUsage.in_use =
> > QuotaUsage.in_use + amount of quota requested.
> 
> No. The above is precisely why there are lots of problems in the 
> existing system. The QuotaUsage model and database tables need to go 
> away entirely. They represent a synchronicity problem because they 
> contain duplicate data (the amount/sum used) from the actual resource 
> usage tables in the services.
> 
> Delimiter should not be responsible for updating any service's view of 
> resource usage. That is the responsibility of the service itself to do 
> this. All Delimiter needs to do is supply an interface/object model by 
> which services should represent usage records and an interface by which 
> services can determine if a consumer has concurrently changed its 
> consumption of resources in that service.
> 
> > 2.3 Get the $generation_id by querying the ResourceProvider by the
> > given resource_id.
> >
> >  If the $generation_id is larger than the $generation_id in Step
> > 1, then roll back transaction and GOTO step 1.
> >
> > this case means someone else has changed the
> > QuotaUsage during this process.
> >
> >  If the $generation_id is the same as the $generation_id in Step
> > 1, then increase the ResourceProvider.generation_id by one and
> >
> >  Commit the transaction. Done!
> >
> >  Note: no case the $generation_id is less than the
> > $generation_id in Step 1 because the $generation_id is nondecreasing.
> 
> No, sorry, the code in my earlier response to Vilobh and Nikhil was 
> confusing. The consumer's generation is what needs to be supplied by 
> Delimiter. The resource provider's generation is used by the service 
> itself to ensure a consistent view of usages across multiple concurrent 
> consumers. The resource provider's generation is an internal mechanism 
> the service could use to prevent multiple consumers from exceeding the 
> provider's available resources.
> 
> Here is what I think needs to be the "interface" that Delimiter facilitates:
> 
> ```python
> import sqlalchemy as sa
> from sqlalchemy import sql
> 
> import delimiter
> from delimiter import objects as d_objects
> from nova import objects as n_objects
> from nova.db.sqlalchemy import tables as n_tables
> 
> 
> class NoRowsMatched(Exception):
>  pass
> 

Re: [openstack-dev] [tc] [all] [glance] On operating a high throughput or otherwise team

2016-05-15 Thread Erno Kuvaja
I'm not sure why I'm spending one of these very rare sunny Sunday
afternoons on this but perhaps it's just important enough.

Thanks Nikhil and Chris for such a verbal start for the discussion. I will
at least try to keep my part shorter. I will quote both of your e-mails,
attributing each quote by name.

Nikhil:
"""Lately I have been involved in discussions that have resulted in giving
a wrong idea of the approach I take in operating the (Glance) team(s)."""

At the very start, please change your approach and start leading the team
instead of trying to operate it. This is a community rather than a huge team
in a big enterprise, and our contributors have different corporate and
cultural backgrounds and different reasons for contributing; it is not an
organization with resources at the PTL's disposal.

Nikhil:
"""We are developing something that is usable, operationally friendly and
that it's easier to contribute & maintain but, many strong influencers
are missing on the most important need for OpenStack -- efficient way of
communication."""

With the OpenStack community spanning probably every timezone that has land,
we can communicate as much as we ever want and still not reach
everybody in time. After all, the individual projects are responsible for
the release and for providing whatever their mission statement
mandates. We can put all our effort into communicating in a nice, friendly
and easy way, but if we do not deliver, we will not need to do this for long.

Nikhil:
"""Also, many people like to work on the assumption that all the
tools of communication are equivalent or useful and there are no
side-effects of using them ever."""
"""I think people prefer to use ML a lot and I am not a great fan of the
same."""

This is great to recognize; now let's work on that and get the expectations
right.
Community-wide, the primary medium is the mailing list (perhaps extending
to specs and gerrit reviews). You don't like it, I don't like it, but it's
what we have to deal with.
Secondary would be the more real-time forums, namely IRC and design summits.
Anything apart from that (yes; hangouts, bluejeans and midcycles are great
for building the team and ironing out misunderstandings and disagreements at
the individual level) is tertiary and worthless unless used purely to bring
the discussion back to the primary media. This is the only way to include
the community asynchronously without expecting two thirds of the timezones
to participate at inconvenient or impossible times, or to find the funding
to travel.

Nikhil:
"""Multi-cast medium of communication is more disruptive as it involves a
possibility of divergence from the topic, strongly polarizing opinions
due to the small possibility of catching of the intent. So, let us use
it 'judiciously' and preferably only as a newspaper."""

Seriously? If you want to publish in a unidirectional manner, please write a
blog* like everyone else, but don't try to transform our primary
communications into that. Thankfully this opening was already a start in the
right direction.
* "BLOGGING; Never Before Have So Many People with So Little to Say Said So
Much to So Few."
- Despair, Inc. http://despair.com/products/blogging

Nikhil:
"""Though, I think every team needs to be synchronous about their approach
and not use delayed mechanisms like ML or gerrit."""

10AM IST (UTC +0100) seems to be a good time; half an hour every morning
should be fine to get me synced up after I've gone through the pressing
stuff from e-mails, and 17:00 IST (UTC +0100) is probably a good time for
another session to keep us in sync. I'm sure the rest of the team is willing
to sacrifice half an hour of their mornings and evenings for the same.
Hopefully you can facilitate, or perhaps the synchronous approach is not
that good after all?

Chris:
"""The fundamental problem is that we don't have shared understanding
and we don't do what is needed to build shared understanding."""

This seems to be spot on. We might appear to be talking about the same
thing, agreeing on what we should do, and months later realize that we were
not talking about the same thing at all and have to start from the beginning.

Chris:
"""My experience of most OpenStack communications is that the degree of
shared understanding on some very fundamental things (e.g. "What are
we building?") is extremely low. It's really not that surprising
that there is discomfort."""

I'm not sure if you meant that, but this issue seems to span all the
layers. By that I mean that if you ask a few people the question 'What are
we building?' you get different responses, regardless of whether it is at
the component, project or OpenStack tent level. What I do not know is at
which level it would be best to start solving this. Can people agree on
what their project or component is doing if they don't agree on the big
picture, or is it impossible to agree on the big picture if they can't
agree on what they are doing at the component/project level?

Chris:
"""people, although
they may not agree on the solution, 

Re: [openstack-dev] [nova][neutron] Is it still valid/supported to create a network with a br- id?

2016-05-15 Thread Matt Riedemann

On 5/15/2016 10:56 AM, Sean M. Collins wrote:

Matt Riedemann wrote:

The nova create-server API allows passing a network id that's prefixed with
br- [1]. That was added due to this bug from Folsom [2].

I'm wondering if that's still valid? Looking at the network.id data model in
Neutron it doesn't look like it would be [3].


Wow. That bug is awful. Network IDs should be UUIDs and ONLY UUIDs.

Just because some vendor plugin decides that they're going to break the
Networking API contract and define their own ID scheme,
doesn't mean that we should fix it to help them.

That commit shouldn't have been accepted into Nova, and I don't think
that we should support anything but a UUID for a network id. Period.




Yeah, I agree. Remember, this was Folsom, when Neutron was a young and 
brash Quantum.


I was just trying to sort out if there is still anything out there in 
the stadium that relies on this working. If not, I'll microversion it 
out of support for the Nova API when we add support for 
auto-allocated-topology.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][neutron] Is it still valid/supported to create a network with a br- id?

2016-05-15 Thread Sean M. Collins
Matt Riedemann wrote:
> The nova create-server API allows passing a network id that's prefixed with
> br- [1]. That was added due to this bug from Folsom [2].
> 
> I'm wondering if that's still valid? Looking at the network.id data model in
> Neutron it doesn't look like it would be [3].

Wow. That bug is awful. Network IDs should be UUIDs and ONLY UUIDs.

Just because some vendor plugin decides that they're going to break the
Networking API contract and define their own ID scheme,
doesn't mean that we should fix it to help them.

That commit shouldn't have been accepted into Nova, and I don't think
that we should support anything but a UUID for a network id. Period.
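
The strict check being argued for here is trivial to express. A minimal
sketch (illustrative only, not Nova's actual validation code):

```python
import uuid


def is_canonical_uuid(value):
    """Accept a canonically formatted UUID string and nothing else."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (AttributeError, TypeError, ValueError):
        return False


assert is_canonical_uuid('0eacbe54-7e90-4397-a106-ab23c44c0ae4')
assert not is_canonical_uuid('br-ex')  # vendor-style ids get rejected
```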


-- 
Sean M. Collins



Re: [openstack-dev] [cross-project][quotas][delimiter]My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Jay Pipes

On 05/15/2016 04:16 AM, Qijing Li wrote:

Hi Vilobh,

Here are my thoughts on how Delimiter uses generation-id to guarantee
sequencing. Please correct me if I have understood it wrong.

First, Delimiter needs to introduce another model, ResourceProvider,
which has two attributes:

  * resource_id
  * generation_id


We will need generations for *consumers* of resources as well. The 
generation for a provider of a resource is used when updating that 
particular provider's view of its inventory. For quotas to work 
effectively, we need each service to keep a generation for each consumer 
of resources in the system.



The following are the steps for consuming a quota:

Step 1. Check if there is enough available quota


When you refer to quota above, you are using incorrect terminology. The 
quota is the *limit* that a consumer of a resource has for a particular 
class of resources. There is no such thing as "available quota". What 
you are referring to above is whether the requested amount of resources 
for a particular consumer would exceed that consumer's quota for a resource.
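
Phrased as code, the check is simply whether the requested amount plus the
consumer's current usage exceeds that limit. A trivial, illustrative sketch:

```python
def exceeds_quota(limit, used, requested):
    # A quota is a per-consumer limit for a resource class; the check
    # asks whether this request would push usage past that limit.
    return used + requested > limit


# e.g. a consumer with a limit of 10 cores and 8 in use asking for 4 more:
assert exceeds_quota(limit=10, used=8, requested=4)
```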



If yes, then get the $generation_id by querying the model
ResourceProvider with the given resource_id, which is the point-in-time
view of resource usage.


The generation for a resource provider is not used when *checking* if 
quota would be exceeded for a consumer's request of a particular 
resource class. The generation for a resource provider is used when 
*consuming* resources on a particular resource provider. This 
consumption process doesn't have anything to do with Delimiter, though. 
Whether it uses heavy locking techniques or a generation with retries to
ensure a consistent view is an internal mechanism of each service.


Please see example code below.


If no, terminate the process of consuming the quota and return the
message "Not enough quota available."


Yes.


Step 2. Consume the quota.

2.1 Begin transaction

2.2 Update the QuotaUsage model: QuotaUsage.in_use =
QuotaUsage.in_use + amount of quota requested.


No. The above is precisely why there are lots of problems in the
existing system. The QuotaUsage model and database tables need to go
away entirely. They represent a synchronization problem because they
duplicate data (the amount/sum used) that lives in the actual resource
usage tables in the services.


Delimiter should not be responsible for updating any service's view of
resource usage. That is the responsibility of the service itself. All
Delimiter needs to do is supply an interface/object model by
which services should represent usage records and an interface by which 
services can determine if a consumer has concurrently changed its 
consumption of resources in that service.
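
A rough sketch of what such an interface could look like (all names are
invented here for illustration; this is not Delimiter's actual API):

```python
# Illustrative only: Delimiter defines the shape of a usage record and
# the error a service raises on concurrent modification -- nothing about
# how the service stores or updates its own usage.
class UsageRecord(object):
    """One consumer's usage of one resource class within a service."""

    def __init__(self, consumer_id, resource_class, used,
                 consumer_generation):
        self.consumer_id = consumer_id
        self.resource_class = resource_class
        self.used = used
        # Bumped by the service whenever this consumer's usage changes,
        # so callers can detect concurrent consumption.
        self.consumer_generation = consumer_generation


class ConcurrentModification(Exception):
    """The consumer's generation changed between check and consume."""
```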



2.3 Get the $generation_id by querying the ResourceProvider by the
given resource_id.

If the $generation_id is larger than the $generation_id in Step
1, then roll back the transaction and GOTO step 1.

This case means someone else has changed the
QuotaUsage during this process.

 If the $generation_id is the same as the $generation_id in Step
1, then increase the ResourceProvider.generation_id by one and

 Commit the transaction. Done!

Note: there is no case where the $generation_id is less than the
$generation_id in Step 1, because the $generation_id is nondecreasing.


No, sorry, the code in my earlier response to Vilobh and Nikhil was 
confusing. The consumer's generation is what needs to be supplied by 
Delimiter. The resource provider's generation is used by the service 
itself to ensure a consistent view of usages across multiple concurrent 
consumers. The resource provider's generation is an internal mechanism 
the service could use to prevent multiple consumers from exceeding the 
provider's available resources.


Here is what I think needs to be the "interface" that Delimiter facilitates:

```python
import sqlalchemy as sa
from sqlalchemy import sql

import delimiter
from delimiter import objects as d_objects
from nova import objects as n_objects
from nova.db.sqlalchemy import tables as n_tables


class NoRowsMatched(Exception):
    pass


class ConcurrentConsumption(Exception):
    pass


def nova_check(quotas, request_spec):
    """
    Do a verification that the resources requested by the supplied user
    and tenant involved in the request specification do not cause the
    user or tenant's quotas to be exceeded.

    :param request_spec: `delimiter.objects.RequestSpec` object
                         containing requested resource amounts, the
                         requesting user and project, etc.
    :returns: `delimiter.objects.QuotaCheckResult` object
              containing the boolean result of the check, the
              resource classes that violated quotas, and a generation
              for the user.
    """
    res = d_objects.QuotaCheckResult()
    alloc_tbl = n_tables.ALLOCATIONS
    cons_tbl = 
```
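
For the consuming side, here is a minimal sketch of the provider-generation
compare-and-swap described above. The schema and names are illustrative
only, not the real Nova tables:

```python
import sqlalchemy as sa

# Illustrative schema only -- not the actual Nova tables.
meta = sa.MetaData()
resource_providers = sa.Table(
    'resource_providers', meta,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('generation', sa.Integer, nullable=False),
)


def consume_on_provider(conn, provider_id, generation):
    """Bump the provider's generation, failing if anyone raced us.

    The UPDATE matches zero rows when another consumer has already
    bumped the generation; the caller should then re-read usages and
    retry its check.
    """
    upd = resource_providers.update().where(
        sa.and_(
            resource_providers.c.id == provider_id,
            resource_providers.c.generation == generation,
        )
    ).values(generation=generation + 1)
    if conn.execute(upd).rowcount != 1:
        # Same idea as the ConcurrentConsumption exception above.
        raise ConcurrentConsumption()
```

The zero-rows-matched case is exactly the signal that another consumer got
in between the check and the consumption, so the caller loops back to the
check rather than blocking on a lock.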

Re: [openstack-dev] [tc] supporting Go

2016-05-15 Thread Antoni Segura Puimedon
On Sat, May 14, 2016 at 7:13 PM, Clint Byrum  wrote:

> Excerpts from Dieterly, Deklan's message of 2016-05-14 01:18:20 +:
> > Python 2.x will not be supported for much longer, and let's face it,
> > Python is easy, but it just does not scale. Nor does Python have the
> > performance characteristics that large, distributed systems require.
> Maybe
> > Java could replace Python in OpenStack as the workhorse language.
>
> Which is why we've been pushing toward python 3 for years now. It's the
> default for python apps in distros now, gates are holding the line at the
> unit test level now, so we just need a push toward integration testing
> and I truly believe we'll be seeing people use python3 and pypy to run
> OpenStack in the next year.
>

The Kuryr Kubernetes integration is Python 3 only, as it is asyncio based.
I would be surprised if new projects and subprojects don't go to python3
directly.


>
> And regarding not scaling: That's precisely what's being discussed,
> and it seems like there are plenty of options for pushing python further
> that aren't even half explored yet. Meanwhile, if enough people agree,
> perhaps go is a good option for those areas where we just can't push
> Python further without it already looking like another language anyway.
>


Re: [openstack-dev] [cross-project][quotas][delimiter]My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Amrith Kumar
Qijing,

I don't believe that this is correct. I promised to send a description
to the ML; I'll do that today.

-amrith

On Sun, 2016-05-15 at 01:16 -0700, Qijing Li wrote:
> Hi Vilobh,  
> 
> Here are my thoughts on how Delimiter uses generation-id to guarantee
> sequencing. Please correct me if I have understood it wrong.
> 
> First, Delimiter needs to introduce another model, ResourceProvider,
> which has two attributes:
> 
>   * resource_id 
>   * generation_id
> 
> The following are the steps for consuming a quota:
> 
> Step 1. Check if there is enough available quota 
> 
> If yes, then get the $generation_id by querying the model
> ResourceProvider with the given resource_id, which is the point-in-time
> view of resource usage.
> 
> If no, terminate the process of consuming the quota and return the
> message "Not enough quota available."
> 
> Step 2. Consume the quota.
> 
>2.1 Begin transaction
> 
>2.2 Update the QuotaUsage model: QuotaUsage.in_use =
> QuotaUsage.in_use + amount of quota requested.
> 
>2.3 Get the $generation_id by querying the ResourceProvider by the
> given resource_id.
> 
> If the $generation_id is larger than the $generation_id in
> Step 1, then roll back the transaction and GOTO step 1.
> 
> This case means someone else has changed the
> QuotaUsage during this process.
> 
> If the $generation_id is the same as the $generation_id in
> Step 1, then increase the ResourceProvider.generation_id by one and
> 
> Commit the transaction. Done!
> 
> Note: there is no case where the $generation_id is less than the
> $generation_id in Step 1, because the $generation_id is nondecreasing.
> 
> 
> — Qijing
> 
> 





Re: [openstack-dev] [tc] [all] [glance] On operating a high throughput or otherwise team

2016-05-15 Thread Chris Dent


Hey Nikhil, found this all very interesting. Given your next to last
paragraph where you ask for a "judiciary" it sounds like there is
more going on here than I'm aware of, so apologies if my comments
are not entirely germane: I found some of your analysis very
interesting and related to things I've been thinking about since
joining the community two years ago, and wanted to respond. I've
probably missed some nuance in my effort to pick out the critical bits.

The analysis you do to lay out the background (OpenStack process is slow,
code is communication, people make a lot of assumptions that make it
harder to converge on action) is rather astute, but from my standpoint
the apparent conclusion you draw (to make up for inefficiencies in
existing communication styles, let's have more synchronous communication
and synchronous participation) is unfortunate for a project at the scale
(on many dimensions) of OpenStack.

The fundamental problem is that we don't have shared understanding
and we don't do what is needed to build shared understanding. In fact
we actively work against building it. TL;DR: When we are apart we need
to write more; when we are together we need to talk more but with less
purpose.

I mostly agree (and have been crowing) that:


many strong influencers
are missing on the most important need for OpenStack -- efficient way of
communication.


but disagree that getting people into so-called "high bandwidth" meetings
is the solution. Neither audio/video meetings nor IRC meetings are
of any use unless the participants have already established the basic
shared understanding that is required for the communication that
happens in the meeting to make any sense.

Most people in OpenStack don't have that shared understanding
and without that shared understanding the assumption problem you
describe compounds. People make a list of goals that they believe
are mutually shared but they are actually just a list of words for
which everyone has a different interpretation.

(Part of the reason, I think, that people have a tendency to be
disruptive in IRC meetings is that it is incredibly obvious that
no one is talking about the same thing even though they claim to be.
The meeting is already just so much noise, why not add more? People
act out because they are squirming in the face of lack of coherent
meaning. TC meetings are classic and sometimes hilarious examples of
this: People talking over one another, ostensibly about the same topic,
but from entirely different foundations. Without the foundations there's
no real point having the conversations, especially in a medium where
the foundations can't be built.)

We could improve this situation by altering how we are using our
communication tools to be more oriented towards establishing the
foundations. In a past life we used to talk about this collaborative
process as a sequence of "shared":

* Developing shared language and shared understanding, which is a
  prerequisite for
* Developing shared goals, which are a prerequisite for
* Shared actions

Shared language is developed through hanging out and shooting the
breeze, talking things over in the broad sense, finding common
ground. It is the phase where you figure out what matters in the
very broad sense. This is the sort of stuff that happens in the
hallways at summit, late at night in open-ended conversations on
IRC. The undirected and unplanned sense-making.

This stage is difficult for OpenStack newcomers or short-timers but
because there are so many newcomers and short-timers it is critical
that we figure out ways to make it easier. There must be
discoverable and digestible (thus not etherpads nor IRC logs) artifacts
from those conversations, ways in which the oral history becomes the
written history. Yes, this means more work for the old-timers, but as
you correctly point out Nikhil: communicating this sort of stuff should
really be their primary activity.

This myth-building happens to some extent already but there is not
a strong culture in OpenStack of pointing people at the history.
It's more common as a newbie to rock up in IRC with a question and
get a brief synchronous answer when it would be better to be pointed
at a written artifact that provides contextualizing background.

It's important to note: At this stage being pointed at etherpads,
gerrit reviews and IRC logs won't cut it. We're talking here about
building shared language. There are two issues with those media:

* They are already embedded in the context of the language (that is,
  they only make sense if you already "get it").
* The UX on etherpads, gerrit and IRC logs for extracting meaning is
  dismal. If you're sending people to raw etherpads to help them learn
  something you are a cruel person. Etherpads are references, not
  digests.

The current expense of summits is such that participants go in with an
eye to coming out of them with goals, not realizing that they first
need to establish the shared language. As a commercially driven open

[openstack-dev] [Infra][Kuryr] New sub projects created

2016-05-15 Thread Gal Sagie
Hello all,

We have created the following repositories as part of Kuryr:

https://github.com/openstack?utf8=%E2%9C%93=kuryr

Would love it if anyone from Infra who has access could allow the kuryr-core
team to review/merge code in these new repositories.

Would also love it if the TC could approve the Kuryr deliverables reflected
in the following patch:

https://review.openstack.org/#/c/314057/


Thanks
Gal.


Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-05-15 Thread Xav Paice
On 14 May 2016 at 00:27, Major Hayden  wrote:

>
> For what it's worth, I have a (somewhat dated) branch with Octavia support
> in Github[1].
>
>
>
Great stuff! That covers everything I've been looking at so far,
except that we don't want to run neutron-server (and therefore the
Octavia API) on the same boxes as the Neutron L3 agent (where I understand
we need to run the worker). This isn't the place for usage questions, but I
was wondering how you deal with that separation, or whether it's not yet
been worked into the branch?

I will assume that SSL certs haven't been dealt with yet? I expect to be
throwing Barbican into the mix shortly to deal with that, maybe Anchor too.


>
> We would definitely be happy to help with any questions you have while
> you're using OpenStack-Ansible.  It's always nice to have feedback from new
> users, especially those who are used to other deployment frameworks.  The
> OpenStack-Ansible contributors have done a lot to "smooth off" the rough
> edges of OpenStack deployments, but we find new things that surprise us
> from time to time. :)
>
>
I'll run up a test env ASAP; it seems that using your branch with some minor
updates might be just what we need. Any updates will of course be shared :)


> Feel free to join #openstack-ansible on Freenode or hang out with us
> during our IRC meetings on Thursday[2].
>

Ugh - I really need to live in a country with a decent timezone. I'm in
UTC+12 - I will lurk around a bit and see who's online at the same time as
I am; the whole project looks to be pretty friendly for newcomers. I
work funny hours, but 4am isn't when I'm at my best.


>
> [1] https://github.com/major/openstack-ansible/tree/octavia
> [2] https://wiki.openstack.org/wiki/Meetings/openstack-ansible
>
>
>


[openstack-dev] [cross-project][quotas][delimiter]My thoughts on how Delimiter uses generation-id for sequencing

2016-05-15 Thread Qijing Li
Hi Vilobh,

Here are my thoughts on how Delimiter uses generation-id to guarantee
sequencing. Please correct me if I have understood it wrong.

First, Delimiter needs to introduce another model, ResourceProvider, which
has two attributes:

   - resource_id
   - generation_id

The following are the steps for consuming a quota:

Step 1. Check if there is enough available quota

If yes, then get the $generation_id by querying the model
ResourceProvider with the given resource_id, which is the point-in-time view
of resource usage.

If no, terminate the process of consuming the quota and return the
message "Not enough quota available."

Step 2. Consume the quota.

   2.1 Begin transaction

   2.2 Update the QuotaUsage model: QuotaUsage.in_use = QuotaUsage.in_use +
amount of quota requested.

   2.3 Get the $generation_id by querying the ResourceProvider by the given
resource_id.

If the $generation_id is larger than the $generation_id in Step 1,
then roll back the transaction and GOTO step 1.

   This case means someone else has changed the QuotaUsage
during this process.

If the $generation_id is the same as the $generation_id in Step 1,
then increase the ResourceProvider.generation_id by one and

Commit the transaction. Done!

Note: there is no case where the $generation_id is less than the
$generation_id in Step 1, because the $generation_id is nondecreasing.
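
Here is the above flow as a rough, runnable sketch over an in-memory store.
All names are illustrative; a real service would do Step 2 inside a database
transaction:

```python
# Compare-and-swap retry loop for Steps 1-2. The dict stands in for the
# QuotaUsage and ResourceProvider models.
store = {'in_use': 0, 'generation': 0}
LIMIT = 10


def consume_quota(requested):
    while True:
        # Step 1: take a point-in-time view of usage and its generation.
        snapshot = dict(store)
        if snapshot['in_use'] + requested > LIMIT:
            raise RuntimeError('Not enough quota available.')

        # Step 2: consume only if the generation is unchanged, meaning
        # nobody else consumed between our check and our update.
        if store['generation'] != snapshot['generation']:
            continue  # someone else changed the usage; retry from Step 1
        store['in_use'] = snapshot['in_use'] + requested
        store['generation'] = snapshot['generation'] + 1
        return


consume_quota(4)
assert store == {'in_use': 4, 'generation': 1}
```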


— Qijing