Re: [openstack-dev] Installing multinode(control node)--[any_updates]

2014-11-19 Thread Chhavi Kant/TVM/TCS


- Original Message -
From: Chhavi Kant/TVM/TCS 
To: openstack-dev@lists.openstack.org
Sent: Wed, 19 Nov 2014 16:32:55 +0530 (IST)
Subject: [openstack-dev] Installing multinode(control node)

Hi,

I want to set up a multinode OpenStack installation and need some guidance on
which services I need to enable on the control node.
Attached is the localrc. 

-- 
Thanks & Regards

Chhavi Kant

=-=-=

Notice: The information contained in this e-mail message and/or attachments to
it may contain confidential or privileged information. If you are not the
intended recipient, any dissemination, use, review, distribution, printing or
copying of the information contained in this e-mail message and/or attachments
to it are strictly prohibited. If you have received this communication in
error, please notify us by reply e-mail or telephone and immediately and
permanently delete the message and any attachments. Thank you
-- 
Thanks & Regards

Chhavi Kant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Matthias Runge
On 19/11/14 17:52, Fox, Kevin M wrote:
> Perhaps they are there to support older browsers?
> 
Probable.

Windows dlls are quite uncommon in a Linux distribution.

It's a bit unlikely to have an older browser installed in a centrally
managed distribution like Fedora.

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Judge the service state when perform a deleting on it

2014-11-19 Thread Eli Qiao

Hello folks,

I'd like to get some feedback on this Nova API spec:

https://review.openstack.org/#/c/131633/

Could you please comment on it?

--
Thanks,
Eli (Li Yong) Qiao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we need to add xml support for new API extensions on v2 API ?

2014-11-19 Thread Kenichi Oomichi
> -Original Message-
> From: Christopher Yeoh [mailto:cbky...@gmail.com]
> Sent: Wednesday, November 19, 2014 7:52 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Do we need to add xml support for new API 
> extensions on v2 API ?
> 
> On Wed, 19 Nov 2014 13:11:40 +0800
> Chen CH Ji  wrote:
> 
> >
> > Hi
> >  I saw that removing v2 XML support was proposed
> > several days ago
> >
> >  For new API extensions, do we need to add XML support now and
> > remove it later, or only support JSON? Thanks
> 
> I don't think any additions to the API should include XML support.

Yeah, right.
and we are dropping XML support now. [1]

[1]: https://review.openstack.org/#/c/134332/

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Rally scenario question

2014-11-19 Thread Boris Pavlovic
Hi Ajay,

Let me explain how Rally works in this case:


1) Run all context setup() methods (create tenants and users, set up quotas,
roles, upload images, ...)

2) Run the load (e.g. run the scenario 100 times with concurrency 10)

3) Run all context cleanup() methods (run in reverse order for all chosen
contexts, including generic cleanup)


So, the answer to your questions is that you shouldn't need to change anything
in Rally to have 100 active instances before cleanup.

You should just use the proper benchmark scenario, like this one:
https://github.com/stackforge/rally/blob/master/doc/samples/tasks/scenarios/nova/boot.json
It boots VMs but doesn't delete them, so they will be deleted only by the
generic cleanup context.
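
For reference, here is a minimal sketch of such a task (written as a Python
dict mirroring the JSON sample linked above; the flavor and image names are
placeholders you would replace with your own):

    # Boot 100 servers, 10 at a time, and never delete them in the scenario
    # itself, so all 100 stay active until the generic cleanup context runs.
    task = {
        "NovaServers.boot_server": [{
            "args": {
                "flavor": {"name": "m1.tiny"},       # placeholder flavor
                "image": {"name": "cirros-0.3.2"},   # placeholder image
            },
            "runner": {"type": "constant", "times": 100, "concurrency": 10},
            "context": {"users": {"tenants": 2, "users_per_tenant": 1}},
        }],
    }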


Best regards,
Boris Pavlovic


On Thu, Nov 20, 2014 at 4:28 AM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:

>  Hi
> In rally I have 2 questions. When you use a context like example
> users: 2, tenant:2 I want the users and tenants to not be cleaned up but
> accumulated and cleaned up at end.
> Example create 1 user 1 tenant run tests create second user tenant run
> test and finally clean up user and tenant
>
>  Similarly when you specify
> times: 100
> concurrency: 10
> I want it to launch 10 at a time but not clean them up immediately and
> only clean up at end so 100 instances are active at end before clean up
>
>  What changes would be needed to Rally to support this. Is there also a
> temporary way to do this if not present
>
>  Ajay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-19 Thread Li Ma

Hi Yatin,

Thanks for sharing your presentation. That looks great. Welcome to 
contribute to ZeroMQ driver.


Cheers,
Li Ma

On 2014/11/19 12:50, yatin kumbhare wrote:

Hello Folks,

Couple of slides/diagrams, I documented it for my understanding way 
back for havana release. Particularly slide no. 10 onward.


https://docs.google.com/presentation/d/1ZPWKXN7dzXs9bX3Ref9fPDiia912zsHCHNMh_VSMhJs/edit#slide=id.p

I am also committed to using zeromq as it's light-weight/fast/scalable.

I would like to chip in for further development regarding zeromq.

Regards,
Yatin

On Wed, Nov 19, 2014 at 8:05 AM, Li Ma wrote:



On 2014/11/19 1:49, Eric Windisch wrote:


I think for this cycle we really do need to focus on
consolidating and
testing the existing driver design and fixing up the biggest
deficiency (1) before we consider moving forward with lots of new


+1

1) Outbound messaging connection re-use - right now every
outbound
messaging creates and consumes a tcp connection - this
approach scales
badly when neutron does large fanout casts.



I'm glad you are looking at this and by doing so, will understand
the system better. I hope the following will give some insight
into, at least, why I made the decisions I made:
This was an intentional design trade-off. I saw three choices
here: build a fully decentralized solution, build a
fully-connected network, or use centralized brokerage. I wrote
off centralized brokerage immediately. The problem with a fully
connected system is that active TCP connections are required
between all of the nodes. I didn't think that would scale and
would be brittle against floods (intentional or otherwise).

IMHO, I always felt the right solution for large fanout casts was
to use multicast. When the driver was written, Neutron didn't
exist and there was no use-case for large fanout casts, so I
didn't implement multicast, but knew it as an option if it became
necessary. It isn't the right solution for everyone, of course.


Using multicast will add some complexity to the switch forwarding
plane, which has to enable and maintain multicast group
communication. For large deployment scenarios, I prefer to keep
forwarding simple and easy to maintain. IMO, running a set of
fanout-router processes in the cluster can also achieve the goal.
The data path is: openstack-daemon sends the message (with
fanout=true) -> fanout-router (reads the matchmaker) -> sends to
the destinations.
Actually it just uses unicast to simulate multicast.

For connection reuse, you could manage a pool of connections and
keep those connections around for a configurable amount of time,
after which they'd expire and be re-opened. This would keep the
most actively used connections alive. One problem is that it
would make the service more brittle by making it far more
susceptible to running out of file descriptors by keeping
connections around significantly longer. However, this wouldn't
be as brittle as fully-connecting the nodes nor as poorly scalable.


+1. Setting a large number of fds is not a problem. Because we use a
socket pool, we can control and keep a fixed number of fds.
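
As an illustration only (this is a sketch, not the oslo.messaging driver
code; the TTL value and address format are arbitrary), a TTL-based outbound
socket pool in pyzmq could look roughly like this:

    import time
    import zmq

    class SocketPool(object):
        def __init__(self, ttl=60):
            self.ctx = zmq.Context.instance()
            self.ttl = ttl
            self._sockets = {}  # address -> (socket, last_used)

        def get(self, address):
            self._expire()
            sock, _ = self._sockets.get(address, (None, None))
            if sock is None:
                sock = self.ctx.socket(zmq.PUSH)
                sock.connect(address)
            self._sockets[address] = (sock, time.time())
            return sock

        def _expire(self):
            now = time.time()
            for address, (sock, last_used) in list(self._sockets.items()):
                if now - last_used > self.ttl:
                    sock.close()
                    del self._sockets[address]

A caller would do pool.get("tcp://host:9501").send(b"payload"), and
rarely-used destinations would age out instead of pinning a file descriptor
forever.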

If OpenStack and oslo.messaging were designed specifically around
this message pattern, I might suggest that the library and its
applications be aware of high-traffic topics and persist the
connections for those topics, while keeping others ephemeral. A
good example for Nova would be api->scheduler traffic would be
persistent, whereas scheduler->compute_node would be ephemeral. 
Perhaps this is something that could still be added to the library.


2) PUSH/PULL tcp sockets - Pieter suggested we look at
ROUTER/DEALER
as an option once 1) is resolved - this socket type pairing
has some
interesting features which would help with resilience and
availability
including heartbeating. 



Using PUSH/PULL does not eliminate the possibility of being fully
connected, nor is it incompatible with persistent connections. If
you're not going to be fully-connected, there isn't much
advantage to long-lived persistent connections and without those
persistent connections, you're not benefitting from features such
as heartbeating.


How about REQ/REP? I think it is appropriate for long-lived
persistent connections and also provides reliability thanks to the reply.
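
For reference, a bare-bones REQ/REP exchange in pyzmq (just an illustration
of the built-in acknowledgement, not driver code; the port is arbitrary):

    import zmq

    ctx = zmq.Context.instance()

    rep = ctx.socket(zmq.REP)             # server / consumer side
    rep.bind("tcp://127.0.0.1:5570")

    req = ctx.socket(zmq.REQ)             # client / caller side
    req.connect("tcp://127.0.0.1:5570")

    req.send(b"boot instance")            # request is queued and delivered
    print(rep.recv())                     # server receives the request...
    rep.send(b"ack")                      # ...and must reply before recv()ing again
    print(req.recv())                     # caller blocks here until the reply arrives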

I'm not saying ROUTER/DEALER cannot be used, but use them with
care. They're designed for long-lived channels between hosts and
not for the ephemeral-type connections used in a peer-to-peer
system. Dealing with how to manage timeouts on the client and the
server and the swelling number of active file descriptors that
you'll 

Re: [openstack-dev] Solving the client libs stable support dilemma

2014-11-19 Thread Sean Dague
On 11/19/2014 08:55 PM, Brant Knudson wrote:
>
>
> On Mon, Nov 17, 2014 at 8:51 AM, Doug Hellmann  > wrote:
>
>
> On Nov 17, 2014, at 6:16 AM, Sean Dague  > wrote:
>
> > On 11/16/2014 11:23 AM, Doug Hellmann wrote:
> >>
> >> On Nov 16, 2014, at 9:54 AM, Jeremy Stanley  > wrote:
> >>
> >>> On 2014-11-16 09:06:02 -0500 (-0500), Doug Hellmann wrote:
>  So we would pin the client libraries used by the servers and
>  installed globally, but then install the more recent client
>  libraries in a virtualenv and test using those versions?
> >>>
> >>> That's what I was thinking anyway, yes.
> >>>
>  I like that.
> >>>
> >>> Honestly I don't, but it sucks less than the other solutions which
> >>> sprang to mind. Hopefully someone will come along with a more
> >>> elegant suggestion... in the meantime I don't see any obvious
> >>> reasons why it wouldn't work.
> >>
> >> Really, it’s a much more accurate test of what we want. We
> have, as an artifact of our test configuration, to install
> everything on a single box. But what we’re trying to test is that
> a user can install the new clients and talk to an old cloud. We
> don’t expect deployers of old clouds to install new clients — at
> least we shouldn’t, and by pinning the requirements we can make
> that clear. Using the virtualenv for the new clients gives us
> separation between the “user” and “cloud” parts of the test
> configuration that we don’t have now.
> >>
> >> Anyway, if we’re prepared to go along with this I think it’s
> safe for us to stop using alpha version numbers for Oslo libraries
> as a matter of course. We may still opt to do it in cases where we
> aren’t sure of a new API or feature, but we won’t have to do it
> for every release.
> >>
> >> Doug
> >
> > I think this idea sounds good on the surface, though what a working
> > system looks like is going to be a little interesting to make
> sure you
> > are in / out of the venv.
> >
> > I actually think you might find it simpler to invert this.
> >
> > Create 1 global venv for servers, specify the venv before
> launching a
> > service.
> >
> > Install all the clients into system level space, then running
> nova list
> > doesn't require that it is put inside the venv.
> >
> > This should have the same results, but be less confusing for people
> > poking at devstacks manually.
>
> That makes sense. I’m a little worried that it’s a bigger change
> to devstack vs. the job that’s testing the clients, but I’ll defer
> to you on what’s actually easier since you’re more familiar with
> the code. Either way, installing the servers and the clients into
> separate packaging spaces would allow us to pin the clients in the
> stable branches.
>
>
> Another piece is middleware, for example the auth_token middleware in
> the keystonemiddleware package.

Right, so that would get dragged into the system libs for the CLI pieces
at whatever the clients want, and would get additionally dragged into
the global venv for the server based on the server's needs. I think the
only question would be if LIBS_FROM_GIT did both system and global venv
in this case. Perhaps we'd need another SYSLIBS_FROM_GIT or something to
specify the differences.

Anyway, I think I have most of the ideas in my head, but will write all
this out in a spec before implementing, because there are probably
enough edge cases that we'll want plenty of eyes looking at it.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Solving the client libs stable support dilemma

2014-11-19 Thread Brant Knudson
On Mon, Nov 17, 2014 at 8:51 AM, Doug Hellmann 
wrote:

>
> On Nov 17, 2014, at 6:16 AM, Sean Dague  wrote:
>
> > On 11/16/2014 11:23 AM, Doug Hellmann wrote:
> >>
> >> On Nov 16, 2014, at 9:54 AM, Jeremy Stanley  wrote:
> >>
> >>> On 2014-11-16 09:06:02 -0500 (-0500), Doug Hellmann wrote:
>  So we would pin the client libraries used by the servers and
>  installed globally, but then install the more recent client
>  libraries in a virtualenv and test using those versions?
> >>>
> >>> That's what I was thinking anyway, yes.
> >>>
>  I like that.
> >>>
> >>> Honestly I don't, but it sucks less than the other solutions which
> >>> sprang to mind. Hopefully someone will come along with a more
> >>> elegant suggestion... in the meantime I don't see any obvious
> >>> reasons why it wouldn't work.
> >>
> >> Really, it’s a much more accurate test of what we want. We have, as an
> artifact of our test configuration, to install everything on a single box.
> But what we’re trying to test is that a user can install the new clients
> and talk to an old cloud. We don’t expect deployers of old clouds to
> install new clients — at least we shouldn’t, and by pinning the
> requirements we can make that clear. Using the virtualenv for the new
> clients gives us separation between the “user” and “cloud” parts of the
> test configuration that we don’t have now.
> >>
> >> Anyway, if we’re prepared to go along with this I think it’s safe for
> us to stop using alpha version numbers for Oslo libraries as a matter of
> course. We may still opt to do it in cases where we aren’t sure of a new
> API or feature, but we won’t have to do it for every release.
> >>
> >> Doug
> >
> > I think this idea sounds good on the surface, though what a working
> > system looks like is going to be a little interesting to make sure you
> > are in / out of the venv.
> >
> > I actually think you might find it simpler to invert this.
> >
> > Create 1 global venv for servers, specify the venv before launching a
> > service.
> >
> > Install all the clients into system level space, then running nova list
> > doesn't require that it is put inside the venv.
> >
> > This should have the same results, but be less confusing for people
> > poking at devstacks manually.
>
> That makes sense. I’m a little worried that it’s a bigger change to
> devstack vs. the job that’s testing the clients, but I’ll defer to you on
> what’s actually easier since you’re more familiar with the code. Either
> way, installing the servers and the clients into separate packaging spaces
> would allow us to pin the clients in the stable branches.
>
>
Another piece is middleware, for example the auth_token middleware in the
keystonemiddleware package.

 - Brant



> Doug
>
> >
> >   -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Jay Pipes

On 11/19/2014 04:27 PM, Eugene Nikanorov wrote:

Wow, lots of feedback in a matter of hours.

First of all, reading postgres docs I see that READ COMMITTED is the
same as for mysql, so it should address the issue we're discussing:

"/Read Committed/ is the default isolation level in PostgreSQL. When a
transaction uses this isolation level, a SELECT query (without a FOR
UPDATE/SHARE clause) *sees only data committed before the query began
(not before TX began - Eugene)*; it never sees either uncommitted data
or changes committed during query execution by concurrent transactions.
In effect, a SELECT query sees a snapshot of the database as of the
instant the query begins to run. However, SELECT does see the effects of
previous updates executed within its own transaction, even though they
are not yet committed. *Also note that two successive **SELECT commands
can see different data, even though they are within a single
transaction, if other transactions commit changes during execution of
the first SELECT. "*
http://www.postgresql.org/docs/8.4/static/transaction-iso.html


So while the SELECTs may return different data on successive calls when 
you use the READ COMMITTED isolation level, the UPDATE statements will 
continue to return 0 rows affected **if they attempt to change rows that 
have been changed since the start of the transaction**


The reason that changing the isolation level to READ COMMITTED appears 
to work for the code in question:


https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L98

is likely because the SELECT ... LIMIT 1 query is returning a different 
row on successive attempts (though since there is no ORDER BY on the 
query, the returned row of the query is entirely unpredictable (line 
112)). Since data from that returned row is used in the UPDATE statement 
(line 118 and 124), *different* rows are actually being changed by 
successive UPDATE statements.


What this means is that for this *very particular* case, setting the 
transaction isolation level to READ COMMITTED will work presumably most 
of the time on MySQL, but it's not an appropriate solution for the 
generalized problem domain of the SELECT FOR UPDATE. If you need to 
issue a SELECT and an UPDATE in a retry loop, and you are attempting to 
update the same row or rows (for instance, in the quota reservation or 
resource allocation scenarios), this solution will not work, even with 
READ COMMITTED. This is why I say it's not really appropriate, and a 
better general solution is to use separate transactions for each loop in 
the retry mechanic.



So, in my opinion, unless neutron code has parts that rely on
'repeatable read' transaction isolation level (and I believe such code
is possible, didn't inspected closely yet), switching to READ COMMITTED
is fine for mysql.


This will introduce more problems than you think, I believe. A better 
strategy is to simply use separate transactions for each loop 
iteration's queries.
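
To make that concrete, here is a simplified sketch of the
separate-transaction retry pattern (this is not the Neutron code; the
Allocation table and connection URL are made up for illustration):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Allocation(Base):  # hypothetical stand-in for an allocation table
        __tablename__ = "allocations"
        id = sa.Column(sa.Integer, primary_key=True)
        allocated = sa.Column(sa.Boolean, default=False)

    engine = sa.create_engine("mysql://user:pass@localhost/neutron")  # example URL
    Session = sessionmaker(bind=engine)

    def allocate(max_retries=10):
        for _ in range(max_retries):
            # Each attempt is its own short transaction, so the SELECT sees
            # rows committed by concurrent workers since the previous attempt.
            session = Session()
            try:
                candidate = (session.query(Allocation)
                             .filter_by(allocated=False)
                             .first())
                if candidate is None:
                    raise RuntimeError("no free allocations")
                candidate_id = candidate.id
                updated = (session.query(Allocation)
                           .filter_by(id=candidate_id, allocated=False)
                           .update({"allocated": True},
                                   synchronize_session=False))
                session.commit()
                if updated:
                    return candidate_id
                # 0 rows matched: another worker won the race; retry in a
                # brand new transaction instead of looping inside this one.
            finally:
                session.close()
        raise RuntimeError("could not allocate after %d retries" % max_retries)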



On multi-master scenario: it is not really an advanced use case. It is
basic, we need to consider it as a basic and build architecture with
respect to this fact.
"Retry" approach fits well here, however it either requires proper
isolation level, or redesign of whole DB access layer.


It's not about the retry approach. I don't think anyone is saying that a 
retry approach is not a good idea. I've been a proponent of the retry 
approach to get around issues with SELECT FOR UPDATE ever since I 
brought up the issue to the mailing list about 7 months ago. :)


The issue is about doing the retry within a single transaction. That's 
not what I recommend doing. I recommend instead doing short separate 
transactions instead of long-lived, multi-statement transactions and 
relying on the behaviour of the DB's isolation level (default or 
otherwise) to "solve" the problem of reading changes to a record that 
you intend to update.


Cheers,
-jay


Also, thanks Clint for clarification about example scenario described by
Mike Bayer.
Initially the issue was discovered with concurrent tests on multi master
environment with galera as a DB backend.

Thanks,
Eugene

On Thu, Nov 20, 2014 at 12:20 AM, Mike Bayer <mba...@redhat.com> wrote:



On Nov 19, 2014, at 3:47 PM, Ryan Moats <rmo...@us.ibm.com> wrote:

>
BTW, I view your examples from oslo as helping make my argument for
me (and I don't think that was your intent :) )



I disagree with that as IMHO the differences in producing MM in the
app layer against arbitrary backends (Postgresql vs. DB2 vs. MariaDB
vs. ???)  will incur a lot more “bifurcation” than a system that
targets only a handful of existing MM solutions.  The example I
referred to in oslo.db is dealing with distinct, non MM backends.
That level of DB-specific code and more is a given if we are
building a MM system against multiple backends generically.

It’s not possible to say which approach would be be

Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread joehuang
+1

There will be lots of concurrent requests to the quota management service if it's 
shared across projects, especially if it's shared across multiple regions (Keystone 
can be a global service for multiple regions), and latency will affect the end-user 
experience. A POC is a good idea to verify the concept.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com] 
Sent: Thursday, November 20, 2014 7:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Quota management and enforcement across projects

On Thu, 2014-11-20 at 10:16 +1100, Blair Bethwaite wrote:
> For actions initiated directly through core OpenStack service APIs 
> (Nova, Cinder, Neutron, etc - anything using Keystone policy), 
> shouldn't quota-enforcement be handled by Keystone? To me this is just 
> a subset of authz, and OpenStack already has a well established 
> service for such decisions.

If you look a little earlier in the thread, you will find a post from me where 
I point out just how complicated quota management actually is.  I suggest that 
it should be developed as a proof-of-concept as a separate service; from there, 
we can see whether it makes sense to roll it into Keystone or maintain it as a 
separate thing.
--
Kevin L. Mitchell  Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-19 Thread Sukhdev Kapur
Folks,

Like Ian, I am jumping into this very late as well - I decided to travel
around Europe after the summit, and have just returned and am catching up :-):-)

I have noticed that this thread has gotten fairly convoluted and painful to
read.

I think Armando summed it up well in the beginning of the thread. There are
basically three written proposals (listed in Armando's email - I pasted
them again here).

[1] https://review.openstack.org/#/c/134179/
[2] https://review.openstack.org/#/c/100278/
[3] https://review.openstack.org/#/c/93613/

On this thread I see that the authors of first two proposals have already
agreed to consolidate and work together. This leaves with two proposals.
Both Ian and I were involved with the third proposal [3] and have
reasonable idea about it. IMO, the use cases addressed by the third
proposal are very similar to use cases addressed by proposal [1] and [2]. I
can volunteer to  follow up with Racha and Stephen from Ericsson to see if
their use case will be covered with the new combined proposal. If yes, we
have one converged proposal. If no, then we modify the proposal to
accommodate their use case as well. Regardless, I will ask them to review
and post their comments on [1].

Having said that, this covers what we discussed during the morning session
on Friday in Paris. Now, comes the second part which Ian brought up in the
afternoon session on Friday.
My initial reaction, when I heard his use case, was that this new
proposal/API should cover that use case as well (I am being a bit optimistic
here :-)). If not, rather than going into the nitty-gritty details of the
use case, let's see what modification is required to the proposed API to
accommodate Ian's use case and adjust it accordingly.

Now, the last point (already brought up by Salvatore as well as Armando) -
the abstraction of the API, so that it meets the Neutron API criteria. I
think this is the critical piece. I also believe the API proposed by [1] is
very close. We should clean it up and take out references to ToR's or
physical vs virtual devices. The API should work at an abstract level so
that it can deal with both physical as well virtual devices. If we can
agree to that, I believe we can have a solid solution.

Having said that I would like to request the community to review the
proposal submitted by Maruti in [1] and post comments on the spec with the
intent to get a closure on the API. I see lots of good comments already on
the spec. Lets get this done so that we can have a workable (even if not
perfect) version of API in Kilo cycle. Something which we can all start to
play with. We can always iterate over it, and make change as we get more
and more use cases covered.

Make sense?

cheers..
-Sukhdev


On Tue, Nov 18, 2014 at 6:44 PM, Armando M.  wrote:

> Hi,
>
> On 18 November 2014 16:22, Ian Wells  wrote:
>
>> Sorry I'm a bit late to this, but that's what you get from being on
>> holiday...  (Which is also why there are no new MTU and VLAN specs yet, but
>> I swear I'll get to them.)
>>
>
> Ah! I hope it was good at least :)
>
>
>>
>> On 17 November 2014 01:13, Mathieu Rohon  wrote:
>>
>>> Hi
>>>
>>> On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
>>> > Last Friday I recall we had two discussions around this topic. One in
>>> the
>>> > morning, which I think led to Maruti to push [1]. The way I understood
>>> [1]
>>> > was that it is an attempt at unifying [2] and [3], by choosing the API
>>> > approach of one and the architectural approach of the other.
>>> >
>>> > [1] https://review.openstack.org/#/c/134179/
>>> > [2] https://review.openstack.org/#/c/100278/
>>> > [3] https://review.openstack.org/#/c/93613/
>>> >
>>> > Then there was another discussion in the afternoon, but I am not 100%
>>> of the
>>> > outcome.
>>>
>>> Me neither, that's why I'd like ian, who led this discussion, to sum
>>> up the outcome from its point of view.
>>>
>>
>> So, the gist of what I said is that we have three, independent, use cases:
>>
>> - connecting two VMs that like to tag packets to each other (VLAN clean
>> networks)
>> - connecting many networks to a single VM (trunking ports)
>> - connecting the outside world to a set of virtual networks
>>
>> We're discussing that last use case here.  The point I was made was that:
>>
>> - there are more encaps in the world than just VLANs
>> - they can all be solved in the same way using an edge API
>>
>
> No disagreement all the way up to this point, assumed that I don't worry
> about what this edge API really is.
>
>
>> - if they are solved using an edge API, the job of describing the network
>> you're trying to bring in (be it switch/port/vlan, or MPLS label stack, or
>> l2tpv3 endpoint data) is best kept outside of Neutron's API, because
>> Neutron can't usefully do anything with it other than validate it and hand
>> it off to whatever network control code is being used.  (Note that most
>> encaps will likely *not* be implemented in Neutron's inbuilt control code.)
>>
>
> This is where the

Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Nikhil Komawar
It would be good to have a small BP for bookkeeping purposes at the very least. 
(If you ping me on IRC when this is ready, I promise to make this less painful.) 
The idea is to have a good way to "publish" the image formats supported by Glance. 
Besides, this calls for an Opt and Doc change which should be reflected in release 
notes and such, so it's good to have a spec/BP.

Thanks,
-Nikhil


From: Fei Long Wang [feil...@catalyst.net.nz]
Sent: Wednesday, November 19, 2014 4:39 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Parallels loopback disk format support

IIUC, the blueprint just wants to add a new image format, with no code
change in Glance, is that right? If that's the case, I'm wondering if we really
need a blueprint/spec, because the image format can be configured in
glance-api.conf. Please correct me if I missed anything. Cheers.


On 20/11/14 02:27, Maxim Nestratov wrote:
> Greetings,
>
> In scope of these changes [1], I would like to add a new image format
> into glance. For this purpose there was created a blueprint [2] and
> would really appreciate if someone from glance team could review this
> proposal.
>
> [1] https://review.openstack.org/#/c/111335/
> [2] https://blueprints.launchpad.net/glance/+spec/pcs-support
>
> Best,
>
> Maxim Nestratov,
> Lead Software Developer,
> Parallels
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Salvatore Orlando
Apparently, like everything OpenStack-y, we have gathered a good crowd of
people with opinions that are more or less different and more or less strong.
My only strong opinion is that any ocean-boiling attempt should be
carefully avoided, and any proposed approach should add as little as
possible in terms of delays and complexity to the current process.
Anyway, I am glad nobody said so far "QaaS". If you're thinking to do so,
please don't. Please.

The reason for the now almost-abandoned proposal for a library was to have
a simple solution for allowing every project to enforce quotas in the same
ways.
As Kevin said, managing quotas for resources and their sequence attributes
(like the sec group rules for instance), can become pretty messy soon. As
of today this problem is avoided since quota management logic is baked into
every project's business logic. The library proposal also shamefully
avoided this problem - by focusing on enforcement only.

For enforcement, as Doug said, this is about answering the question
"can I consume
X units of a given resource?". In a distributed, non-locking, world, this
is not really obvious. A Boson-like proposal would solve this problem by
providing a centralised enforcement point. In this case the only possible
approach is one in which such centralised endpoint does both enforcement
and management.
While I too think that we need a centralised quota management endpoint, be
it folded in Keystone or not, I still need to be convinced that this is the
right architectural decision for enforcing quotas.

First, "can I consume X unit of a given resource" implies the external
service should be aware of current resource usage. Arguably it's the
consumer service (eg: nova, cinder) that owns this information, not the
quota service. The quota service can however be made aware of this in
several ways:
- notifications. In this case the service will also become a consumer of a
telemetry service like ceilometer or stacktach. I'm not sure we want to add
that dependency
- the API call itself might ask for reserving a given amount resource and
communicate at the same time resource usage. This sounds better, but why
would it be better than the dual approach, in which the consumer service
makes the reservation using quota values fetched by an external service.
Between quota and usage, the latter seem to be the more dynamic ones;
quotas can also be easily cached.
- usage info might be updated when reservations are committed. This looks
smart, but requirers more thinking for dealing with failure scenarios where
the failure occurs once resources are reserved but before they're committed
and there is no way to know which ones were actually and which ones not.

Second, calling an external service is not cheap. Especially since those
calls might be done several times for completing a single API request.
Think about POST /servers with all the ramifications into glance, neutron,
and cinder. In the simplest case we'd need two calls - one to reserve a
resource (or a set of resources), and one to either confirm or cancel the
reservation. This alone adds two round trips for each API call. Could this
scale well? To some extent it reminds me of the mayhem that was the
nova/neutron library before caching for keystone tokens was fixed.
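
To picture those two round trips, a purely hypothetical client-side flow
might look like the following; none of these endpoints or payloads exist
today, they are invented only to illustrate the reserve/commit/cancel cycle:

    import requests

    QUOTA_ENDPOINT = "http://quota.example.com/v1"   # hypothetical service

    def reserve(token, tenant_id, resource, amount):
        # Round trip 1: ask the quota service to reserve the resource.
        resp = requests.post(
            "%s/reservations" % QUOTA_ENDPOINT,
            json={"tenant_id": tenant_id, "resource": resource, "amount": amount},
            headers={"X-Auth-Token": token})
        resp.raise_for_status()
        return resp.json()["reservation_id"]

    def commit(token, reservation_id):
        # Round trip 2a: the resource was created, confirm the reservation.
        requests.post("%s/reservations/%s/commit" % (QUOTA_ENDPOINT, reservation_id),
                      headers={"X-Auth-Token": token}).raise_for_status()

    def cancel(token, reservation_id):
        # Round trip 2b: creation failed, release the reservation.
        requests.delete("%s/reservations/%s" % (QUOTA_ENDPOINT, reservation_id),
                        headers={"X-Auth-Token": token}).raise_for_status()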

Somebody mentioned an analogy with AuthZ. For some reason, perhaps
different from the reasons of who originally brought the analogy, I also
believe that this is a decent model to follow.
Quotas for each resource, together with the resource structure, might be
stored by a centralised service. This is pretty much in line with Boson's
goals, if I understand correctly. It will expose tenant-facing REST APIs
for quota management and will finally provide a standardized way of
handling quotas; regardless of whether it stands as its own service or is
folded into Keystone, operators should be happier. And there's nothing more
satisfying than the face of a happy operator...
provided we also do not break backward compatibility!

Enforcement is something that in my opinion should happen locally, just
like we do authZ locally using information provided by Keystone upon
authentication. The issue Doug correctly pointed out, however, is that quota
enforcement is too complex to be implemented in a library like, for
instance, oslo.policy.
So, assuming there might be consensus on doing quota enforcement locally in
the consumer application, what should be the 'thing' that enforces policy
if it can't be a library? Could it be a quota agent which communicates with
the quota server and caches quota info? If yes, over which transport should
APIs like those for managing reservations be invoked? IPC, AMQP, REST? I
need to go back to the drawing board to see if this can possibly work, but
in the meanwhile your feedback is more than welcome.

Salvatore

On 20 November 2014 00:39, Kevin L. Mitchell 
wrote:

> On Thu, 2014-11-20 at 10:16 +1100, Blair Bethwaite wrote:
> > For actions initiated directly

[openstack-dev] [Rally] Rally scenario question

2014-11-19 Thread Ajay Kalambur (akalambu)
Hi
In Rally I have 2 questions. When you use a context like, for example,
users: 2, tenants: 2, I want the users and tenants to not be cleaned up
immediately but accumulated and cleaned up at the end.
For example: create 1 user and 1 tenant, run tests, create a second user and
tenant, run tests, and finally clean up the users and tenants.

Similarly when you specify
times: 100
concurrency: 10
I want it to launch 10 at a time but not clean them up immediately, only
cleaning up at the end, so 100 instances are active before cleanup.

What changes would be needed in Rally to support this? Is there also a
temporary way to do this if it's not supported yet?

Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] [storyboard] Goodbye Infra on Launchpad, Hello Infra on StoryBoard

2014-11-19 Thread Michael Krotscheck
The OpenStack Infrastructure team has successfully migrated all of the 
openstack-infra project bugs from LaunchPad to StoryBoard. With the exception 
of openstack-ci bugs tracked by elastic recheck, all bugs, tickets, and work 
tracked for OpenStack Infrastructure projects must now be submitted and 
accessed at https://storyboard.openstack.org. If you file a ticket on 
LaunchPad, the Infrastructure team no longer guarantees that it will be 
addressed. Note that only the infrastructure projects have moved, no other 
OpenStack projects have been migrated.

This is part of a long-term plan to migrate OpenStack from Launchpad to 
StoryBoard.  At this point we feel that StoryBoard meets the needs of the 
OpenStack infrastructure team and plan to use this migration to further 
exercise the project while we continue its development.

As you may notice, Development on StoryBoard is ongoing, and we have not yet 
reached feature parity with those parts of LaunchPad which are needed for the 
rest of OpenStack. Contributions are always welcome, and the team may be 
contacted in the #storyboard or #openstack-infra channels on freenode, via the 
openstack-dev list using the [storyboard] subject, or via StoryBoard itself by 
creating a story. Feel free to report any bugs, ask any questions, or make any 
improvement suggestions that you come up with at: 
https://storyboard.openstack.org/#!/project/456 


We are always looking for more contributors! If you have skill in AngularJS or 
Pecan, or would like to fill in some of our documentation for us, we are happy 
to accept patches. If your project is interested in moving to StoryBoard, 
please contact us directly. While we are hesitant to move new projects to 
storyboard at this point, we would love working with you to determine which 
features are needed to support you.

Relevant links:
• Storyboard: https://storyboard.openstack.org 

• Team Wiki: https://wiki.openstack.org/wiki/StoryBoard 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Online midcycle meetup

2014-11-19 Thread Angus Salkeld
Hi all

As agreed from our weekly meeting we are going to try an online meetup.

Why?

We did a poll (https://doodle.com/b9m4bf8hvm3mna97#table) and it is
split quite evenly by location. The story I am getting from the community
is:

"We want a midcycle meetup if it is nearby, but are having trouble getting
finance
to travel far."

Given that the Heat community is evenly spread across the globe this becomes
impossible to hold without excluding a significant group.

So let's try and figure out how to do an online meetup!
(but let's not spend 99% of the time arguing about the software to use
please)

I think more interesting is:

1) How do we minimize the time zone pain?
2) Can we make each session really focused so we are productive.
3) If we do this right it does not have to be "midcycle" but whenever we
want.

I'd be interested in feedback from others that have tried this too.

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Everett Toews
By the way, I’m off the next couple of days so I won’t be able to attend this 
meeting.

See you!
Everett


On Nov 19, 2014, at 4:56 AM, Christopher Yeoh  wrote:

> Hi,
> 
> We have moved to alternating times each week for the API WG meeting so
> people from other timezones can attend. Since this is an odd week 
> the meeting will be Thursday UTC 1600. Details here:
> 
> https://wiki.openstack.org/wiki/Meetings/API-WG
> 
> The google ical feed hasn't been updated yet, but that's not surprising
> since the wiki page was only updated a few hours ago.
> 
> Regards,
> 
> Chris
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Kevin L. Mitchell
On Thu, 2014-11-20 at 10:16 +1100, Blair Bethwaite wrote:
> For actions initiated directly through core OpenStack service APIs
> (Nova, Cinder, Neutron, etc - anything using Keystone policy),
> shouldn't quota-enforcement be handled by Keystone? To me this is just
> a subset of authz, and OpenStack already has a well established
> service for such decisions.

If you look a little earlier in the thread, you will find a post from me
where I point out just how complicated quota management actually is.  I
suggest that it should be developed as a proof-of-concept as a separate
service; from there, we can see whether it makes sense to roll it into
Keystone or maintain it as a separate thing.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? Nov 19, 2014 [nova] [neutron] [cinder] [trove]

2014-11-19 Thread Anne Gentle
Get ready for bug triage day coming sooner than we all think, as in, soon
for those in Australia!
https://wiki.openstack.org/wiki/Documentation/BugTriageDay for all the
details. Looking forward to working with our Cross-Project Liaisons (CPLs)
for docs, put your name on
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation if you'd
like to be a docs liaison. Welcome Steve Martinelli and thank you!

This week we have a specification for the changes to the driver
documentation in nova, neutron, cinder, and trove. Read and review at
https://review.openstack.org/#/c/133372/.

Matt Kassawara made progress on the Distributed Virtual Routing
documentation for Networking for the new Networking Admin Guide. Review at
https://github.com/ionosphere80/openstack-networking-guide/blob/master/scenario-dvr/scenario-dvr.md.
Next steps are diagrams and a patch in the openstack-manuals repo.

We've got more progress on the new web page design for docs.openstack.org
content. Next steps are to get the design into Sphinx/RST, create a
taxonomy for tagging the content, and determine how to do the information
architecture while migrating content in phases.

The Architecture Design Guide team is working on a survey to find out more
about what readers want from the book and to get more feedback on the book
in its current state. Please send it to as many people as possible once
it's ready.

I'm on vacation all of next week, so look for the next edition of What's Up
Doc the first week of December. See you all for doc bug triage day.

Thanks,
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] LP/review cleanup day

2014-11-19 Thread Angus Salkeld
Hi all

As an action from our meeting I'd like to announce a cleanup day on the 2nd
of December.

What does this mean?

1) We have been noticing a lot of old and potentially out-of-date bugs that
need some attention (re-test/triage/mark invalid). Also, we have 97 bugs
in progress - I wonder if that is real? Maybe some have partial fixes and
have been left in progress.

2) We probably need to do a manual abandon on some really old reviews
so they don't clutter up the review queue.

3) We have a lot of out of date blueprints that basically need deleting.
We need to go through and agree on a list of them to kill.

Regards
Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Christopher Yeoh
On Wed, 19 Nov 2014 19:34:44 +
Everett Toews  wrote:
> 
> 2. Do you know if there is a way to subscribe to only the API WG
> meeting from that calendar?

I haven't been able to find a way to do that. Fortunately for me most
of the openstack meetings end up being between 12am and 8am so it
doesn't actually clutter up my calendar view ;-)

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-19 Thread Everett Toews

On Nov 13, 2014, at 2:06 PM, Everett Toews <everett.to...@rackspace.com> wrote:

On Nov 12, 2014, at 10:45 PM, Angus Salkeld <asalk...@mirantis.com> wrote:

On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews <everett.to...@rackspace.com> wrote:
Hi All,

Chris Yeoh started the use of an APIImpact flag in commit messages for specs in 
Nova. It adds a requirement for an APIImpact flag in the commit message for a 
proposed spec if it proposes changes to the REST API. This will make it much 
easier for people such as the API Working Group who want to review API changes 
across OpenStack to find and review proposed API changes.

For example, specifications with the APIImpact flag can be found with the 
following query:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

Chris also proposed a similar change to many other projects and I did the rest. 
Here’s the complete list if you’d like to review them.

Barbican: https://review.openstack.org/131617
Ceilometer: https://review.openstack.org/131618
Cinder: https://review.openstack.org/131620
Designate: https://review.openstack.org/131621
Glance: https://review.openstack.org/131622
Heat: https://review.openstack.org/132338
Ironic: https://review.openstack.org/132340
Keystone: https://review.openstack.org/132303
Neutron: https://review.openstack.org/131623
Nova: https://review.openstack.org/#/c/129757
Sahara: https://review.openstack.org/132341
Swift: https://review.openstack.org/132342
Trove: https://review.openstack.org/132346
Zaqar: https://review.openstack.org/132348

There are even more projects in stackforge that could use a similar change. If 
you know of a project in stackforge that would benefit from using an APIImapct 
flag in its specs, please propose the change and let us know here.


I seem to have missed this, I'll place my review comment here too.

I like the general idea of getting a more consistent/better API. But is 
reviewing every spec across all projects just going to introduce a new 
non-scalable bottleneck into our workflow (given the increasing move away from 
this approach: moving functional tests to projects, getting projects to do more 
of their own docs, etc.)? Wouldn't a better approach be to have an API liaison 
in each project who can keep track of new guidelines and catch potential 
problems?

I see we have added a new section here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Isn't that enough?

I replied in the review. We’ll continue the discussion there.

The cross-project liaisons are a big help, but the APIImpact flag lets the API WG 
automate discovery of API-changing specs. It's just one more tool in the box to 
help us find changes that impact the API.

Note that the patch says nothing about requiring a review from someone 
associated with the API WG. If you add the APIImpact flag and nobody comes 
along to review it, continue on as normal.

The API WG is not intended to be a gatekeeper of every change to every API. As 
you say that doesn't scale. We don't want to be a bottleneck. However, tools 
such as the APIImpact flag can help us be more effective.

(Angus suggested I give my review comment a bit more visibility. I agree :)

Everett

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Blair Bethwaite
On 20 November 2014 05:25,   wrote:
> --
>
> Message: 24
> Date: Wed, 19 Nov 2014 10:57:17 -0500
> From: Doug Hellmann 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] Quota management and enforcement across
> projects
> Message-ID: <13f4f7a1-d4ec-4d14-a163-d477a4fd9...@doughellmann.com>
> Content-Type: text/plain; charset=windows-1252
>
>
> On Nov 19, 2014, at 9:51 AM, Sylvain Bauza  wrote:
>> My bad. Let me rephrase it. I'm seeing this service as providing added value 
>> for managing quotas by ensuring consistency across all projects. But as I 
>> said, I'm also thinking that the quota enforcement has still to be done at 
>> the customer project level.
>
> Oh, yes, that is true. I envision the API for the new service having a call 
> that means "try to consume X units of a given quota" and that it would return 
> information about whether that can be done. The apps would have to define 
> what quotas they care about, and make the appropriate calls.

For actions initiated directly through core OpenStack service APIs
(Nova, Cinder, Neutron, etc - anything using Keystone policy),
shouldn't quota-enforcement be handled by Keystone? To me this is just
a subset of authz, and OpenStack already has a well established
service for such decisions.

It sounds like the idea here is to provide something generic that
could be used outside of OpenStack? I worry that might be premature
scope creep that detracts from the outcome.

-- 
Cheers,
~Blairo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Matt Riedemann



On 11/19/2014 3:34 PM, Andrew Laski wrote:


On 11/19/2014 04:16 PM, Jay Pipes wrote:

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are
taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed
that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M.testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you
want to do: ./run_tests.sh -V --failing to execute only the tests that
failed during the last run), so I think having a separate flag (-R) to
run_tests.sh would be fine.


Testrepository also uses its history of test run times to try to group
tests so that each thread takes about the same amount of time to run.



But, then again I just learned that run_tests.sh is apparently
deprecated. Shame :(

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Turns out that my huge .testrepository is apparently not the issue, so 
I'll press on.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Thomas Goirand
On 11/19/2014 04:27 PM, Matthias Runge wrote:
> On 18/11/14 14:48, Thomas Goirand wrote:
> 
>>
>> And then, does selenium continues to work for testing Horizon? If so,
>> then the solution could be to send the .dll and .xpi files in non-free,
>> and remove them from Selenium in main.
>>
> Yes, it still works; that leaves the question, why they are included in
> the tarball at all.
> 
> In Fedora, we do not distribute .dll or selenium xpi files with selenium
> at all.
> 
> Matthias

Thanks for letting me know. I have opened a bug against the current
selenium package in non-free, to ask to have it uploaded in Debian main,
without the .xpi file. Let's see how it goes.

Cheers,

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Abandon Old LBaaS V2 Review

2014-11-19 Thread Brandon Logan
Evgeny,
Since change sets got moved to the feature branch, this review has
remained on master.  It needs to be abandoned:

https://review.openstack.org/#/c/109849/

Thanks,
Brandon

On Mon, 2014-11-17 at 12:31 -0800, Stephen Balukoff wrote:
> Awesome!
> 
> On Mon, Nov 10, 2014 at 9:10 AM, Susanne Balle 
> wrote:
> Works for me. Susanne
> 
> On Mon, Nov 10, 2014 at 10:57 AM, Brandon Logan
>  wrote:
> https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting
> 
> That is updated for lbaas and advanced services with
> the new times.
> 
> Thanks,
> Brandon
> 
> On Mon, 2014-11-10 at 11:07 +, Doug Wiegley wrote:
> > #openstack-meeting-4
> >
> >
> > > On Nov 10, 2014, at 10:33 AM, Evgeny Fedoruk
>  wrote:
> > >
> > > Thanks,
> > > Evg
> > >
> > > -Original Message-
> > > From: Doug Wiegley [mailto:do...@a10networks.com]
> > > Sent: Friday, November 07, 2014 9:04 PM
> > > To: OpenStack Development Mailing List
> > > Subject: [openstack-dev] [neutron][lbaas] meeting
> day/time change
> > >
> > > Hi all,
> > >
> > > Neutron LBaaS meetings are now going to be
> Tuesdays at 16:00 UTC.
> > >
> > > Safe travels.
> > >
> > > Thanks,
> > > Doug
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > >
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > >
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> >
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> -- 
> Stephen Balukoff 
> Blue Box Group, LLC 
> (800)613-4305 x807
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] mid-cycle meet-up planning ...

2014-11-19 Thread Jay S. Bryant

All,

For those of you that weren't able to make the Kilo meet-up in Paris I 
wanted to send out a note regarding Cinder's Kilo mid-cycle meet-up.


IBM has offered to host it in warm, sunny Austin, Texas.  The planned 
dates are January 27, 28 and 29, 2015.


I have put together an etherpad with the current plan and will be 
keeping the etherpad updated as we continue to firm up the details: 
https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup


I need to have a good idea how many people are planning to participate 
sooner, rather than later, so that I can make sure we have a big enough 
room.  So, if you think you are going to be able to make it please add 
your name to the 'Planned Attendees' list.


Again, we will also use Google Hangout to virtually include those who 
cannot be physically present.  I have a space in the etherpad to include 
your name if you wish to join that way.


I look forward to another successful meet-up with all of you!

Jay
(jungleboyj)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Eugene Nikanorov
> But the isolation mode change won’t really help here as pointed out by
> Jay; discrete transactions have to be used instead.
I still think it will, per postgres documentation (which might look
confusing, but still...)
It actually helps for mysql; that was confirmed. For postgres it appears to
be the same.

Thanks,
Eugene.

On Thu, Nov 20, 2014 at 12:56 AM, Mike Bayer  wrote:

>
> > On Nov 19, 2014, at 4:14 PM, Clint Byrum  wrote:
> >
> >
> > One simply cannot rely on multi-statement transactions to always succeed.
>
> agree, but the thing you want is that the transaction either succeeds or
> explicitly fails, the latter hopefully in such a way that a retry can be
> added which has a chance at succeeding, if needed.  We have transaction
> replay logic in place in nova for example based on known failure conditions
> like concurrency exceptions, and this replay logic works, because it starts
> a new transaction.   In this specific case, since it’s looping within a
> transaction where the data won’t change, it’ll never succeed, and the retry
> mechanism is useless.   But the isolation mode change won’t really help
> here as pointed out by Jay; discrete transactions have to be used instead.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 4:14 PM, Clint Byrum  wrote:
> 
> 
> One simply cannot rely on multi-statement transactions to always succeed.

agree, but the thing you want is that the transaction either succeeds or 
explicitly fails, the latter hopefully in such a way that a retry can be added 
which has a chance at succeeding, if needed.  We have transaction replay logic 
in place in nova for example based on known failure conditions like concurrency 
exceptions, and this replay logic works, because it starts a new transaction.   
In this specific case, since it’s looping within a transaction where the data 
won’t change, it’ll never succeed, and the retry mechanism is useless.   But 
the isolation mode change won’t really help here as pointed out by Jay; 
discrete transactions have to be used instead.
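
For illustration only, here is a minimal sketch of that replay pattern with
plain SQLAlchemy, not Nova's actual retry code; Model, the filter keywords and
the 'allocated' flag are hypothetical stand-ins, and the session factory is
assumed to be a sessionmaker(..., expire_on_commit=False) so the returned
object stays readable after its session is closed:

def allocate_obj(session_factory, filters, n_retries=5):
    # Each attempt gets its own session, i.e. its own transaction, so the
    # SELECT below sees rows committed by other sessions in the meantime.
    for _ in range(n_retries):
        session = session_factory()
        try:
            obj = session.query(Model).filter_by(allocated=False,
                                                 **filters).first()
            if obj is None:
                return None          # nothing left to allocate
            count = (session.query(Model)
                     .filter_by(id=obj.id, allocated=False)
                     .update({'allocated': True}))
            session.commit()         # transaction ends; snapshot is discarded
            if count:
                return obj           # we won the race for this row
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()
    return None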


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Who maintains the iCal meeting data?

2014-11-19 Thread Tony Breeds
On Wed, Nov 19, 2014 at 01:24:03PM +0100, Thierry Carrez wrote:

> The iCal is currently maintained by Anne (annegentle) and myself. In
> parallel, a small group is building a gerrit-powered agenda so that we
> can describe meetings in YAML and check for conflicts automatically, and
> build the ics automatically rather than manually.

Sounds good.
 
> That should still take a few weeks before we can migrate to that though,
> so in the mean time if you volunteer to keep the .ics up to date with
> changes to the wiki page, that would be of great help! It's maintained
> as a google calendar, I can add you to the ACL there if you send me your
> google email.

Done in a private email.

Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Fei Long Wang
IIUC, the blueprint just wants to add a new image format, with no code
change in Glance, is that right? If that's the case, I'm wondering if we
really need a blueprint/spec, because the image format could be configured in
glance-api.conf. Please correct me if I missed anything. Cheers.


On 20/11/14 02:27, Maxim Nestratov wrote:
> Greetings,
>
> In scope of these changes [1], I would like to add a new image format
> into glance. For this purpose there was created a blueprint [2] and
> would really appreciate if someone from glance team could review this
> proposal.
>
> [1] https://review.openstack.org/#/c/111335/
> [2] https://blueprints.launchpad.net/glance/+spec/pcs-support
>
> Best,
>
> Maxim Nestratov,
> Lead Software Developer,
> Parallels
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Andrew Laski


On 11/19/2014 04:16 PM, Jay Pipes wrote:

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you 
want to do: ./run_tests.sh -V --failing to execute only the tests that 
failed during the last run), so I think having a separate flag (-R) to 
run_tests.sh would be fine.


Testrepository also uses its history of test run times to try to group 
tests so that each thread takes about the same amount of time to run.




But, then again I just learned that run_tests.sh is apparently 
deprecated. Shame :(


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Eugene Nikanorov
Wow, lots of feedback in a matter of hours.

First of all, reading postgres docs I see that READ COMMITTED is the same
as for mysql, so it should address the issue we're discussing:

"*Read Committed* is the default isolation level in PostgreSQL. When a
transaction uses this isolation level, a SELECT query (without a FOR
UPDATE/SHARE clause) *sees only data committed before the query began (not
before TX began - Eugene)*; it never sees either uncommitted data or
changes committed during query execution by concurrent transactions. In
effect, a SELECT query sees a snapshot of the database as of the instant
the query begins to run. However, SELECT does see the effects of previous
updates executed within its own transaction, even though they are not yet
committed. *Also note that two successive SELECT commands can see
different data, even though they are within a single transaction, if other
transactions commit changes during execution of the first SELECT.*"
http://www.postgresql.org/docs/8.4/static/transaction-iso.html

So, in my opinion, unless neutron code has parts that rely on the 'repeatable
read' transaction isolation level (and I believe such code is possible, I
haven't inspected it closely yet), switching to READ COMMITTED is fine for
mysql.
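
As a rough illustration (this is not the patch under review), SQLAlchemy can
switch a single connection to READ COMMITTED so that each SELECT in a retry
loop sees freshly committed rows; the DSN and table name below are just
placeholders:

from sqlalchemy import create_engine

engine = create_engine('mysql://user:secret@127.0.0.1/neutron')  # placeholder DSN

# The option applies to this connection only; the engine-wide default
# (REPEATABLE READ on MySQL) is untouched for other connections.
conn = engine.connect().execution_options(isolation_level='READ COMMITTED')
trans = conn.begin()
rows = conn.execute("SELECT allocated FROM some_allocation_table LIMIT 1")  # placeholder table
print(rows.fetchall())
trans.commit()
conn.close()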

On the multi-master scenario: it is not really an advanced use case. It is
basic, and we need to consider it as such and build the architecture with
respect to this fact.
The "retry" approach fits well here; however, it either requires a proper
isolation level or a redesign of the whole DB access layer.

Also, thanks Clint for the clarification about the example scenario described
by Mike Bayer. Initially the issue was discovered with concurrent tests on a
multi-master environment with Galera as the DB backend.

Thanks,
Eugene

On Thu, Nov 20, 2014 at 12:20 AM, Mike Bayer  wrote:

>
> On Nov 19, 2014, at 3:47 PM, Ryan Moats  wrote:
>
> >
> BTW, I view your examples from oslo as helping make my argument for
> me (and I don't think that was your intent :) )
>
>
> I disagree with that as IMHO the differences in producing MM in the app
> layer against arbitrary backends (Postgresql vs. DB2 vs. MariaDB vs. ???)
>  will incur a lot more “bifurcation” than a system that targets only a
> handful of existing MM solutions.  The example I referred to in oslo.db is
> dealing with distinct, non MM backends.   That level of DB-specific code
> and more is a given if we are building a MM system against multiple
> backends generically.
>
> It’s not possible to say which approach would be better or worse at the
> level of “how much database specific application logic do we need”, though
> in my experience, no matter what one is trying to do, the answer is always,
> “tons”; we’re dealing not just with databases but also Python drivers that
> have a vast amount of differences in behaviors, at every level.On top
> of all of that, hand-rolled MM adds just that much more application code to
> be developed and maintained, which also claims it will do a better job than
> mature (ish?) database systems designed to do the same job against a
> specific backend.
>
>
>
>
> > > My reason for asking this question here is that if the community
> > > wants to consider #2, then these problems are the place to start
> > > crafting that solution - if we solve the conflicts inherent with the
> > > two conncurrent thread scenarios, then I think we will find that
> > > we've solved the multi-master problem essentially "for free”.
> >
> > Maybe I’m missing something, if we learn how to write out a row such
> > that a concurrent transaction against the same row doesn’t throw us
> > off, where is the part where that data is replicated to databases
> > running concurrently on other IP numbers in a way that is atomic
> > come out of that effort “for free” ?   A home-rolled “multi master”
> > scenario would have to start with a system that has multiple
> > create_engine() calls, since we need to communicate directly to
> multiple database servers. From there it gets really crazy.  Where’s all
> that ?
>
> Boiled down, what you are talking about here w.r.t. concurrent
> transactions is really conflict resolution, which is the hardest
> part of implementing multi-master (as a side note, using locking in
> this case is the equivalent of option #1).
>
> All I wished to point out is that there are other ways to solve the
> conflict resolution that could then be leveraged into a multi-master
> scenario.
>
> As for the parts that I glossed over, once conflict resolution is
> separated out, replication turns into a much simpler problem with
> well understood patterns and so I view that part as coming
> "for free."
>
> Ryan
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/

Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Andrew Beekhof

> On 20 Nov 2014, at 6:55 am, Sergii Golovatiuk  
> wrote:
> 
> Hi crew,
> 
> Please see my inline comments.
> 
> Hi Everyone,
> 
> I was reading the blueprints mentioned here and thought I'd take the 
> opportunity to introduce myself and ask a few questions.
> For those that don't recognise my name, Pacemaker is my baby - so I take a 
> keen interest helping people have a good experience with it :)
> 
> A couple of items stood out to me (apologies if I repeat anything that is 
> already well understood):
> 
> * Operations with CIB utilizes almost 100% of CPU on the Controller
> 
>  We introduced a new CIB algorithm in 1.1.12 which is O(2) faster/less 
> resource hungry than prior versions.
>  I would be interested to hear your experiences with it if you are able to 
> upgrade to that version.
>  
> Our team is aware of that. That's really nice improvement. Thank you very 
> much for that. We've prepared all packages, though we have feature freeze. 
> Pacemaker 1.1.12 will be added to next release.
>  
> * Corosync shutdown process takes a lot of time
> 
>  Corosync (and Pacemaker) can shut down incredibly quickly.
>  If corosync is taking a long time, it will be because it is waiting for 
> pacemaker, and pacemaker is almost always waiting for for one of the 
> clustered services to shut down.
> 
> As part of this improvement we have an idea to split the signalling (corosync) and
> resource management (pacemaker) layers by specifying
> service { 
>name: pacemaker
>ver:  1
> }
> 
> and create an upstart script to set start ordering. That will allow us to:
> 
> 1. Create some notifications in puppet for pacemaker
> 2. Restart and manage corosync and pacemaker independently
> 3. Use respawn in upstart to restart corosync or pacemaker
> 
> 
> * Current Fuel Architecture is limited to Corosync 1.x and Pacemaker 1.x
> 
>  Corosync 2 is really the way to go.
>  Is there something in particular that is holding you back?
>  Also, out of interest, are you using cman or the pacemaker plugin?
> 
> We use almost standard corosync 1.x and pacemaker from CentOS 6.5

Please be aware that the plugin is not long for this world on CentOS.
It was already removed once (in 6.4 betas) and is not even slightly tested at 
RH and about the only ones using it upstream are SUSE.

http://blog.clusterlabs.org/blog/2013/pacemaker-on-rhel6-dot-4/ has some 
relevant details.
The short version is that I would really encourage a transition to CMAN (which 
is really just corosync 1.x plus a more mature and better tested plugin from 
the corosync people).
See http://clusterlabs.org/quickstart-redhat.html , its really quite painless.

> and Ubuntu 12.04. However, we've prepared corosync 2.x and pacemaker 1.1.12 
> packages. Also we have update puppet manifests on review. As was said above, 
> we can't just add at the end of development cycle.

Yep, makes sense.

>  
> 
> *  Diff operations against Corosync CIB require to save data to file rather
>   than keep all data in memory
> 
>  Can someone clarify this one for me?
>  
> That's our implementation for puppet. We can't just use shadow on distributed 
> environment, so we run 
> 
>  Also, I notice that the corosync init script has been modified to set/unset 
> maintenance-mode with cibadmin.
>  Any reason not to use crm_attribute instead?  You might find its a less 
> fragile solution than a hard-coded diff.
>  
> Can you give a particular line where you see that?  

I saw it in one of the bugs:
   https://bugs.launchpad.net/fuel/+bug/1340172

Maybe it is no longer accurate

> 
> * Debug process of OCF scripts is not unified requires a lot of actions from
>  Cloud Operator
> 
>  Two things to mention here... the first is crm_resource 
> --force-(start|stop|check) which queries the cluster for the resource's 
> definition but runs the command directly. 
>  Combined with -V, this means that you get to see everything the agent is 
> doing.
> 
> We write many of our own OCF scripts. We just need to see how an OCF script behaves.
> ocf_tester is not enough for our cases.

Agreed. ocf_tester is more for out-of-cluster regression testing, not really 
good for debugging a running cluster.

> I'll try if crm_resource -V --force-start is better.
>  
> 
>  Also, pacemaker now supports the ability for agents to emit specially 
> formatted error messages that are stored in the cib and can be shown back to 
> users.
>  This can make things much less painful for admins. Look for 
> PCMK_OCF_REASON_PREFIX in the upstream resource-agents project.
> 
> Thank you for tip. 
> 
> 
> * Openstack services are not managed by Pacemaker
> 
> The general idea is to have all OpenStack services under pacemaker control
> rather than having both upstart and pacemaker. It will be very handy for operators
> to see the status of all services from one console. Also it will give us the
> flexibility to have more complex service verification checks in the monitor
> function.
>  
> 
>  Oh?
> 
> * Compute nodes aren't in Pacemaker cluster, hen

[openstack-dev] [Neutron][L3] Reminder: Meeting Thursday at 1500 UTC

2014-11-19 Thread Carl Baldwin
The Neutron L3 team will meet [1] tomorrow at the regular time.  I'd
like to discuss the progress of the functional tests for the L3 agent
to see how we can get that on track.  I don't think we need to wait
for the BP to merge before we get something going.

We will likely not have a meeting next week for the Thanksgiving
holiday in the US.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 3:47 PM, Ryan Moats  wrote:
> 
> > 
> BTW, I view your examples from oslo as helping make my argument for
> me (and I don't think that was your intent :) )
> 

I disagree with that as IMHO the differences in producing MM in the app layer 
against arbitrary backends (Postgresql vs. DB2 vs. MariaDB vs. ???)  will incur 
a lot more “bifurcation” than a system that targets only a handful of existing 
MM solutions.  The example I referred to in oslo.db is dealing with distinct, 
non MM backends.   That level of DB-specific code and more is a given if we are 
building a MM system against multiple backends generically.

It’s not possible to say which approach would be better or worse at the level 
of “how much database specific application logic do we need”, though in my 
experience, no matter what one is trying to do, the answer is always, “tons”; 
we’re dealing not just with databases but also Python drivers that have a vast 
amount of differences in behaviors, at every level.On top of all of that, 
hand-rolled MM adds just that much more application code to be developed and 
maintained, which also claims it will do a better job than mature (ish?) 
database systems designed to do the same job against a specific backend.



> 
> > > My reason for asking this question here is that if the community 
> > > wants to consider #2, then these problems are the place to start 
> > > crafting that solution - if we solve the conflicts inherent with the
> > > two conncurrent thread scenarios, then I think we will find that 
> > > we've solved the multi-master problem essentially "for free”.
> >  
> > Maybe I’m missing something, if we learn how to write out a row such
> > that a concurrent transaction against the same row doesn’t throw us 
> > off, where is the part where that data is replicated to databases 
> > running concurrently on other IP numbers in a way that is atomic 
> > come out of that effort “for free” ?   A home-rolled “multi master” 
> > scenario would have to start with a system that has multiple 
> > create_engine() calls, since we need to communicate directly to 
> multiple database servers. From there it gets really crazy.  Where’s all 
> > that ?
> 
> Boiled down, what you are talking about here w.r.t. concurrent
> transactions is really conflict resolution, which is the hardest
> part of implementing multi-master (as a side note, using locking in
> this case is the equivalent of option #1).  
> 
> All I wished to point out is that there are other ways to solve the
> conflict resolution that could then be leveraged into a multi-master
> scenario.
> 
> As for the parts that I glossed over, once conflict resolution is
> separated out, replication turns into a much simpler problem with
> well understood patterns and so I view that part as coming
> "for free."
> 
> Ryan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Jay Pipes

On 11/19/2014 04:00 PM, Matt Riedemann wrote:

On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't
also remove .testrepository on every run?


Well, having some history is sometimes useful (for instance when you 
want to do: ./run_tests.sh -V --failing to execute only the tests that 
failed during the last run), so I think having a separate flag (-R) to 
run_tests.sh would be fine.


But, then again I just learned that run_tests.sh is apparently 
deprecated. Shame :(


-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2014-11-19 10:05:35 -0800:
> 
> > On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov  
> > wrote:
> > 
> > Hi neutron folks,
> > 
> > There is an ongoing effort to refactor some neutron DB logic to be 
> > compatible with galera/mysql which doesn't support locking 
> > (with_lockmode('update')).
> > 
> > Some code paths that used locking in the past were rewritten to retry the 
> > operation if they detect that an object was modified concurrently.
> > The problem here is that all DB operations (CRUD) are performed in the 
> > scope of some transaction that makes complex operations to be executed in 
> > atomic manner.
> > For mysql the default transaction isolation level is 'REPEATABLE READ' 
> > which means that once the code issue a query within a transaction, this 
> > query will return the same result while in this transaction (e.g. the 
> > snapshot is taken by the DB during the first query and then reused for the 
> > same query).
> > In other words, the retry logic like the following will not work:
> > 
> > def allocate_obj():
> > with session.begin(subtrans=True):
> >  for i in xrange(n_retries):
> >   obj = session.query(Model).filter_by(filters)
> >   count = session.query(Model).filter_by(id=obj.id 
> > ).update({'allocated': True})
> >   if count:
> >return obj
> > 
> > since usually methods like allocate_obj() is called from within another 
> > transaction, we can't simply put transaction under 'for' loop to fix the 
> > issue.
> 
> has this been confirmed?  the point of systems like repeatable read is not 
> just that you read the “old” data, it’s also to ensure that updates to that 
> data either proceed or fail explicitly; locking is also used to prevent 
> concurrent access that can’t be reconciled.  A lower isolation removes these 
> advantages.  
> 

Yes this is confirmed and fails reliably on Galera based systems.

> I ran a simple test in two MySQL sessions as follows:
> 
> session 1:
> 
> mysql> create table some_table(data integer) engine=innodb;
> Query OK, 0 rows affected (0.01 sec)
> 
> mysql> insert into some_table(data) values (1);
> Query OK, 1 row affected (0.00 sec)
> 
> mysql> begin;
> Query OK, 0 rows affected (0.00 sec)
> 
> mysql> select data from some_table;
> +--+
> | data |
> +--+
> |1 |
> +--+
> 1 row in set (0.00 sec)
> 
> 
> session 2:
> 
> mysql> begin;
> Query OK, 0 rows affected (0.00 sec)
> 
> mysql> update some_table set data=2 where data=1;
> Query OK, 1 row affected (0.00 sec)
> Rows matched: 1  Changed: 1  Warnings: 0
> 
> then back in session 1, I ran:
> 
> mysql> update some_table set data=3 where data=1;
> 
> this query blocked;  that’s because session 2 has placed a write lock on the 
> table.  this is the effect of repeatable read isolation.

With Galera this session might happen on another node. There is no
distributed lock, so this would not block...

> 
> while it blocked, I went to session 2 and committed the in-progress 
> transaction:
> 
> mysql> commit;
> Query OK, 0 rows affected (0.00 sec)
> 
> then session 1 unblocked, and it reported, correctly, that zero rows were 
> affected:
> 
> Query OK, 0 rows affected (7.29 sec)
> Rows matched: 0  Changed: 0  Warnings: 0
> 
> the update had not taken place, as was stated by “rows matched":
> 
> mysql> select * from some_table;
> +--+
> | data |
> +--+
> |1 |
> +--+
> 1 row in set (0.00 sec)
> 
> the code in question would do a retry at this point; it is checking the 
> number of rows matched, and that number is accurate.
> 
> if our code did *not* block at the point of our UPDATE, then it would have 
> proceeded, and the other transaction would have overwritten what we just did, 
> when it committed.   I don’t know that read committed is necessarily any 
> better here.
> 
> now perhaps, with Galera, none of this works correctly.  That would be a 
> different issue in which case sure, we should use whatever isolation is 
> recommended for Galera.  But I’d want to potentially peg it to the fact that 
> Galera is in use, or not.
> 
> would love also to hear from Jay Pipes on this since he literally wrote the 
> book on MySQL ! :)

What you missed is that with Galera the commit that happened last would
be rolled back. This is a reality in many scenarios on SQL databases and
should be handled _regardless_ of Galera. It is a valid way to handle
deadlocks on single node DBs as well (pgsql will do this sometimes).

One simply cannot rely on multi-statement transactions to always succeed.
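
A common way to deal with that is to re-run the whole multi-statement
operation from the top; a rough sketch, assuming the writeset conflict
surfaces as a deadlock-style DBAPI error (the retry count and backoff here
are illustrative only):

import time

from sqlalchemy.exc import DBAPIError

def run_with_retry(operation, session_factory, max_retries=3):
    # 'operation(session)' is expected to perform the whole multi-statement
    # transaction; if the commit is rejected (e.g. a Galera certification
    # failure reported as a deadlock), the work is re-run from the top with
    # a fresh session.  A real implementation would inspect the error
    # (oslo.db maps these to DBDeadlock) instead of retrying on any DBAPIError.
    for attempt in range(max_retries):
        session = session_factory()
        try:
            result = operation(session)
            session.commit()
            return result
        except DBAPIError:
            session.rollback()
            if attempt == max_retries - 1:
                raise
            time.sleep(0.5 * (attempt + 1))    # crude backoff between retries
        finally:
            session.close()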

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Jay Pipes

On 11/18/2014 07:25 PM, Andrew Woodward wrote:

On Tue, Nov 18, 2014 at 3:18 PM, Andrew Beekhof  wrote:

* Openstack services are not managed by Pacemaker

  Oh?


fuel doesn't (currently) set up API services in pacemaker


Nor should it, IMO. Other than the Neutron dhcp-agent, all OpenStack 
services that run on a "controller node" are completely stateless. 
Therefore, I don't see any reason to use corosync/pacemaker for 
management of these resources. haproxy should just spread the HTTP 
request load evenly across all API services and things should be fine, 
allowing haproxy's http healthcheck monitoring to handle the simple 
service status checks.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Recall for previous iscsi backend BP

2014-11-19 Thread Alex Meade
Hey Henry/Folks,

I think it could make sense for Glance to store the volume UUID; the idea
is that no matter where an image is stored it should be *owned* by Glance
and not deleted out from under it. But that is more of a single-tenant vs.
multi-tenant Cinder store question.

It makes sense for Cinder to at least abstract all of the block storage
needs. Glance and any other service should reuse Cinder's ability to talk to
certain backends. It would be wasted effort to reimplement Cinder drivers
as Glance stores. I do agree with Duncan that a great way to solve these
issues is a third party transfer service, which others and I in the Glance
community have discussed at numerous summits (since San Diego).

-Alex



On Wed, Nov 19, 2014 at 3:40 AM, henry hly  wrote:

> Hi Flavio,
>
> Thanks for your information about the Cinder store. Yet I have a little
> concern about the Cinder backend: suppose cinder and glance both use Ceph
> as the store; if cinder can do an instant copy to glance by ceph clone
> (maybe not now but some time later), what information would be stored
> in glance? Obviously the volume UUID is not a good choice, because after
> the volume is deleted the image can't be referenced. The best choice is
> that the cloned ceph object URI also be stored in the glance location,
> letting both glance and cinder see the "backend store details".
>
> However, although it really makes sense for a Ceph-like all-in-one store,
> I'm not sure if the iscsi backend can be used the same way.
>
> On Wed, Nov 19, 2014 at 4:00 PM, Flavio Percoco  wrote:
> > On 19/11/14 15:21 +0800, henry hly wrote:
> >>
> >> In the Previous BP [1], support for iscsi backend is introduced into
> >> glance. However, it was abandoned because of Cinder backend
> >> replacement.
> >>
> >> The reason is that all storage backend details should be hidden by
> >> cinder, not exposed to other projects. However, with more and more
> >> interest in "Converged Storage" like Ceph, it's necessary to expose
> >> storage backend to glance as well as cinder.
> >>
> >> An example  is that when transferring bits between volume and image,
> >> we can utilize advanced storage offload capability like linked clone
> >> to do very fast instant copy. Maybe we need a more general glance
> >> backend location support not only with iscsi.
> >>
> >>
> >>
> >> [1] https://blueprints.launchpad.net/glance/+spec/iscsi-backend-store
> >
> >
> > Hey Henry,
> >
> > This blueprint has been superseded by one proposing a Cinder store
> > for Glance. The Cinder store is, unfortunately, in a sorry state.
> > Short story, it's not fully implemented.
> >
> > I truly think Glance is not the place where you'd have an iscsi store,
> > that's Cinder's field and the best way to achieve what you want is by
> > having a fully implemented Cinder store that doesn't rely on Cinder's
> > API but has access to the volumes.
> >
> > Unfortunately, this is not possible now and I don't think it'll be
> > possible until L (or even M?).
> >
> > FWIW, I think the use case you've mentioned is useful and it's
> > something we have in our TODO list.
> >
> > Cheers,
> > Flavio
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Matt Riedemann



On 11/19/2014 9:40 AM, Jay Pipes wrote:

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while
active in a virtualenv. The speed goes from ~2 seconds per API sample
test to ~15 seconds per API sample test...

Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So...

Run: 13101 in 21466.460047 sec

...

mriedem@ubuntu:~/git/nova$ du -sh .testrepository/
1002M   .testrepository/

I removed that and I'm running again now.

We remove pyc files on each run, is there any reason why we couldn't 
also remove .testrepository on every run?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats



Ian Wells  wrote on 11/19/2014 02:33:40 PM:

[snip]

> When you have a plugin that's decided to be synchronous, then there
> are cases where the DB lock is held for a technically indefinite
> period of time.  This is basically broken.

A big +1 to this statement

Ryan Moats
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats
>
>
> Mike Bayer  wrote on 11/19/2014 02:10:18 PM:
>
> > From: Mike Bayer 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 11/19/2014 02:11 PM
> > Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
> > related questions
> >
> > On Nov 19, 2014, at 1:49 PM, Ryan Moats  wrote:
> >
> > I was waiting for this because I think I may have a slightly
> > different (and outside of the box) view on how to approach a solution
to this.
> >
> > Conceptually (at least in my mind) there isn't a whole lot of
> > difference between how the example below (i.e. updates from two
> > concurrent threads) is handled
> > and how/if neutron wants to support a multi-master database scenario
> > (which in turn lurks in the background when one starts thinking/
> > talking about multi-region support).
> >
> > If neutron wants to eventually support multi-master database
> > scenarios, I see two ways to go about it:
> >
> > 1) Defer multi-master support to the database itself.
> > 2) Take responsibility for managing the conflict resolution inherent
> > in multi-master scenarios itself.
> >
> > The first approach is certainly simpler in the near term, but it has
> > the down side of restricting the choice of databases to those that
> > have solved multi-master and further, may lead to code bifurcation
> > based on possibly different solutions to the conflict resolution
> > scenarios inherent in multi-master.
> > The second approach is certainly more complex as neutron assumes
> > more responsibility for its own actions, but it has the advantage
> > that (if done right) would be transparent to the underlying
> > databases (with all that implies)
>
> multi-master is a very advanced use case so I don’t see why it would
> be unreasonable to require a multi-master vendor database.
> Reinventing a complex system like that in the application layer is
> an unnecessary reinvention.
>
> As far as working across different conflict resolution scenarios,
> while there may be differences across backends, these differences
> will be much less significant compared to the differences against
> non-clustered backends in which we are inventing our own multi-
> master solution.   I doubt a home rolled solution would insulate us
> at all from “code bifurcation” as this is already a fact of life in
> targeting different backends even without any implication of
> clustering.   Even with simple things like transaction isolation, we
> see that different databases have different behavior, and if you
> look at the logic in oslo.db inside of https://github.com/openstack/
> oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py you can see an
> example of just how complex it is to just do the most rudimental
> task of organizing exceptions into errors that mean the same thing.

I didn't say it was unreasonable; I only pointed out that there is an
alternative for consideration.

BTW, I view your examples from oslo as helping make my argument for
me (and I don't think that was your intent :) )

> > My reason for asking this question here is that if the community
> > wants to consider #2, then these problems are the place to start
> > crafting that solution - if we solve the conflicts inherent with the
> > two conncurrent thread scenarios, then I think we will find that
> > we've solved the multi-master problem essentially "for free”.
>
> Maybe I’m missing something, if we learn how to write out a row such
> that a concurrent transaction against the same row doesn’t throw us
> off, where is the part where that data is replicated to databases
> running concurrently on other IP numbers in a way that is atomic
> come out of that effort “for free” ?   A home-rolled “multi master”
> scenario would have to start with a system that has multiple
> create_engine() calls, since we need to communicate directly to
> multiple database servers. From there it gets really crazy.  Where’s all
that ?

Boiled down, what you are talking about here w.r.t. concurrent
transactions is really conflict resolution, which is the hardest
part of implementing multi-master (as a side note, using locking in
this case is the equivalent of option #1).

All I wished to point out is that there are other ways to solve the
conflict resolution that could then be leveraged into a multi-master
scenario.

As for the parts that I glossed over, once conflict resolution is
separated out, replication turns into a much simpler problem with
well understood patterns and so I view that part as coming
"for free."

Ryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ian Wells
On 19 November 2014 11:58, Jay Pipes  wrote:

> Some code paths that used locking in the past were rewritten to retry
>
>> the operation if they detect that an object was modified concurrently.
>> The problem here is that all DB operations (CRUD) are performed in the
>> scope of some transaction that makes complex operations to be executed
>> in atomic manner.
>>
>
> Yes. The root of the problem in Neutron is that the session object is
> passed through all of the various plugin methods and the
> session.begin(subtransactions=True) is used all over the place, when in
> reality many things should not need to be done in long-lived transactional
> containers.
>

I think the issue is one of design, and it's possible what we discussed at
the summit may address some of this.

At the moment, Neutron's a bit confused about what it is.  Some plugins
treat a call to Neutron as the period of time in which an action should be
completed - the 'atomicity' thing.  This is not really compatible with a
distributed system and it's certainly not compatible with the principle of
eventual consistency that Openstack is supposed to follow.  Some plugins
treat the call as a change to desired networking state, and the action on
the network is performed asynchronously to bring the network state into
alignment with the state of the database.  (Many plugins do a bit of both.)

When you have a plugin that's decided to be synchronous, then there are
cases where the DB lock is held for a technically indefinite period of
time.  This is basically broken.

What we said at the summit is that we should move to an entirely async
model for the API, which in turn gets us to the 'desired state' model for
the DB.  DB writes would take one of two forms:

- An API call has requested that the data be updated, which it can do
immediately - the DB transaction takes as long as it takes to write the DB
consistently, and can hold locks on referenced rows to main consistency
providing the whole operation remains brief
- A network change has completed and the plugin wants to update an object's
state - again, the DB transaction contains only DB ops and nothing else and
should be quick.
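
To make the first form concrete, here is a rough sketch (Port, desired_state
and the notifier are hypothetical names, not existing Neutron code): the
transaction contains nothing but DB work, and the backend is only told about
the change after the commit.

def update_port_state(context, port_id, new_state, notifier):
    # Quick, DB-only transaction: record the desired state and return.
    with context.session.begin(subtransactions=True):
        port = context.session.query(Port).get(port_id)
        port.desired_state = new_state
    # Committed; no DB lock is held any more.  The actual network change is
    # driven asynchronously from this notification.
    notifier.port_update_requested(context, port_id, new_state)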

Now, if we moved to that model, DB locks would be very very brief for the
sort of queries we'd need to do.  Setting aside the joys of Galera (and I
believe we identified that using one Galera node and doing all writes
through it worked just fine, though we could probably distribute read-only
transactions across all of them in the future), would there be any need for
transaction retries in that scenario?  I would have thought that DB locking
would be just fine as long as there was nothing but DB operations for the
period a transaction was open, and thus significantly changing the DB
lock/retry model now is a waste of time because it's a problem that will go
away.

Does that theory hold water?

-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 2:58 PM, Jay Pipes  wrote:
> 
> 
>> In other words, the retry logic like the following will not work:
>> 
>> def allocate_obj():
>> with session.begin(subtrans=True):
>>  for i in xrange(n_retries):
>>   obj = session.query(Model).filter_by(filters)
>>   count = session.query(Model).filter_by(id=obj.id
>> ).update({'allocated': True})
>>   if count:
>>return obj
>> 
>> since usually methods like allocate_obj() is called from within another
>> transaction, we can't simply put transaction under 'for' loop to fix the
>> issue.
> 
> Exactly. The above code, from here:
> 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L98
> 
> has no chance of working at all under the existing default isolation levels 
> for either MySQL or PostgreSQL. If another session updates the same row in 
> between the time the first session began and the UPDATE statement in the 
> first session starts, then the first session will return 0 rows affected. It 
> will continue to return 0 rows affected for each loop, as long as the same 
> transaction/session is still in effect, which in the code above, is the case.

oh, because it stays a zero, right.  yeah I didn’t understand that that was the 
failure case before.  should have just pinged you on IRC to answer the question 
without me wasting everyone’s time! :)

> 
> The design of the Neutron plugin code's interaction with the SQLAlchemy 
> session object is the main problem here. Instead of doing all of this within 
> a single transactional container, the code should instead be changed to 
> perform the SELECT statements in separate transactions/sessions.
> 
> That means not using the session parameter supplied to the 
> neutron.plugins.ml2.drivers.helpers.TypeDriverHelper.allocate_partially_specified_segment()
>  method, and instead performing the SQL statements in separate transactions.
> 
> Mike Bayer's EngineFacade blueprint work should hopefully unclutter the 
> current passing of a session object everywhere, but until that hits, it 
> should be easy enough to simply ensure that you don't use the same session 
> object over and over again, instead of changing the isolation level.

OK, but EngineFacade was all about unifying broken-up transactions into one big 
transaction.   I’ve never been partial to the “retry something inside of a 
transaction” approach; I usually prefer to have the API method raise and retry 
its whole series of operations all over again.  How do you propose to reconcile 
EngineFacade’s transaction-unifying behavior with 
separate-transaction-per-SELECT (and wouldn’t that need to include the UPDATE 
as well?)  Did you see it as having the “one main transaction” with separate 
“ad-hoc, out of band” transactions as needed?




> 
> All the best,
> -jay
> 
>> Your feedback is appreciated.
>> 
>> Thanks,
>> Eugene.
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 1:49 PM, Ryan Moats  wrote:
> 
> I was waiting for this because I think I may have a slightly different (and 
> outside of the box) view on how to approach a solution to this.
> 
> Conceptually (at least in my mind) there isn't a whole lot of difference 
> between how the example below (i.e. updates from two concurrent threads) is 
> handled
> and how/if neutron wants to support a multi-master database scenario (which 
> in turn lurks in the background when one starts thinking/talking about 
> multi-region support).
> 
> If neutron wants to eventually support multi-master database scenarios, I see 
> two ways to go about it:
> 
> 1) Defer multi-master support to the database itself.
> 2) Take responsibility for managing the conflict resolution inherent in 
> multi-master scenarios itself.
> 
> The first approach is certainly simpler in the near term, but it has the down 
> side of restricting the choice of databases to those that have solved 
> multi-master and further, may lead to code bifurcation based on possibly 
> different solutions to the conflict resolution scenarios inherent in 
> multi-master.
> 
> The second approach is certainly more complex as neutron assumes more 
> responsibility for its own actions, but it has the advantage that (if done 
> right) would be transparent to the underlying databases (with all that 
> implies)
> 
multi-master is a very advanced use case so I don’t see why it would be 
unreasonable to require a multi-master vendor database.   Reinventing a complex 
system like that in the application layer is an unnecessary reinvention.

As far as working across different conflict resolution scenarios, while there 
may be differences across backends, these differences will be much less 
significant compared to the differences against non-clustered backends in which 
we are inventing our own multi-master solution.   I doubt a home rolled 
solution would insulate us at all from “code bifurcation” as this is already a 
fact of life in targeting different backends even without any implication of 
clustering.   Even with simple things like transaction isolation, we see that 
different databases have different behavior, and if you look at the logic in 
oslo.db inside of 
https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/exc_filters.py
you can see an example of just how complex it is to just do the most 
rudimental task of organizing exceptions into errors that mean the same thing.
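
Just to give a flavour of that translation work (a rough illustration, not
oslo.db's actual filters): each backend reports the same condition with its
own error code or message, and something has to map them onto one
application-level exception.

class DBDuplicateEntry(Exception):
    """Backend-neutral 'unique constraint violated' error (illustrative)."""

# Illustrative only: each backend/driver reports the same condition with its
# own error code or message, so something has to translate them.
_DUPLICATE_MARKERS = {
    'mysql': ('1062',),                    # ER_DUP_ENTRY
    'postgresql': ('23505',),              # unique_violation SQLSTATE
    'sqlite': ('UNIQUE constraint failed', 'is not unique'),
}

def translate_duplicate_error(backend_name, db_error):
    text = str(db_error)
    if any(marker in text for marker in _DUPLICATE_MARKERS.get(backend_name, ())):
        raise DBDuplicateEntry(text)
    raise db_error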


> My reason for asking this question here is that if the community wants to 
> consider #2, then these problems are the place to start crafting that 
> solution - if we solve the conflicts inherent with the  two conncurrent 
> thread scenarios, then I think we will find that we've solved the 
> multi-master problem essentially "for free”.
> 

Maybe I’m missing something, if we learn how to write out a row such that a 
concurrent transaction against the same row doesn’t throw us off, where is the 
part where that data is replicated to databases running concurrently on other 
IP numbers in a way that is atomic come out of that effort “for free” ?   A 
home-rolled “multi master” scenario would have to start with a system that has 
multiple create_engine() calls, since we need to communicate directly to 
multiple database servers. From there it gets really crazy.  Where’s all that ?




> 
> Ryan Moats
> 
> Mike Bayer  wrote on 11/19/2014 12:05:35 PM:
> 
> > From: Mike Bayer 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Date: 11/19/2014 12:05 PM
> > Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
> > related questions
> > 
> > On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov  
> > wrote:
> > 
> > Hi neutron folks,
> > 
> > There is an ongoing effort to refactor some neutron DB logic to be 
> > compatible with galera/mysql which doesn't support locking 
> > (with_lockmode('update')).
> > 
> > Some code paths that used locking in the past were rewritten to 
> > retry the operation if they detect that an object was modified concurrently.
> > The problem here is that all DB operations (CRUD) are performed in 
> > the scope of some transaction that makes complex operations to be 
> > executed in atomic manner.
> > For mysql the default transaction isolation level is 'REPEATABLE 
> > READ' which means that once the code issue a query within a 
> > transaction, this query will return the same result while in this 
> > transaction (e.g. the snapshot is taken by the DB during the first 
> > query and then reused for the same query).
> > In other words, the retry logic like the following will not work:
> > 
> > def allocate_obj():
> > with session.begin(subtrans=True):
> >  for i in xrange(n_retries):
> >   obj = session.query(Model).filter_by(filters)
> >   count = session.query(Model).filter_by(id=obj.id

Re: [openstack-dev] [OpenStack-dev][Nova] Migration stuck - resize/migrating

2014-11-19 Thread Solly Ross
Indeed.  Ensure you have SSH access between compute nodes (I'm working on some 
code to remove this requirement, but it may be a while before it gets merged).

Also, if you can, could you post logs somewhere with the 'debug' config option 
enabled?  I might be able to spot something quickly, since I've been working on 
the related code recently.

Best Regards,
Solly Ross

- Original Message -
> From: "Vishvananda Ishaya" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, November 18, 2014 4:07:31 PM
> Subject: Re: [openstack-dev] [OpenStack-dev][Nova] Migration stuck -  
> resize/migrating
> 
> Migrate/resize uses scp to copy files back and forth with the libvirt driver.
> This shouldn’t be necessary with shared storage, but it may still need ssh
> configured between the user that nova is running as in order to complete the
> migration. It is also possible that there is a bug in the code path dealing
> with shared storage, although I would have expected you to see a traceback
> somewhere.
> 
> Vish
> 
> On Nov 11, 2014, at 1:10 AM, Eduard Matei < eduard.ma...@cloudfounders.com >
> wrote:
> 
> 
> 
> 
> Hi,
> 
> I'm testing our cinder volume driver in the following setup:
> - 2 nodes, ubuntu, devstack juno (2014.2.1)
> - shared storage (common backend), our custom software solution + cinder
> volume on shared storage
> - 1 instance running on node 1, /instances directory on shared storage
> - kvm, libvirt (with live migration flags)
> 
> Live migration of instance between nodes works perfectly.
> Migrate simply blocks. The instance is in status Resize/Migrate, no errors in
> n-cpu or n-sch, and it stays like that for over 8 hours (all night). I
> thought it was copying the disk, but it's a 20GB sparse file with approx.
> 200 mb of data, and the nodes have 1Gbps link, so it should be a couple of
> seconds.
> 
> Any difference between live migration and "migration"?
> As I said, we use a "shared filesystem"-like storage solution so the volume
> files and the instance files are visible on both nodes, so no data needs
> copying.
> 
> I know it's tricky to debug since we use a custom cinder driver, but anyone
> has any ideas where to start looking?
> 
> Thanks,
> Eduard
> 
> --
> Eduard Biceri Matei, Senior Software Developer
> www.cloudfounders.com
> | eduard.ma...@cloudfounders.com
> 
> CloudFounders, The Private Cloud Software Company
> Disclaimer:
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you are not the named addressee or an employee or agent responsible for
> delivering this message to the named addressee, you are hereby notified that
> you are not authorized to read, print, retain, copy or disseminate this
> message or any part of it. If you have received this email in error we
> request you to notify us by reply e-mail and to delete all electronic files
> of the message. If you are not the intended recipient you are notified that
> disclosing, copying, distributing or taking any action in reliance on the
> contents of this information is strictly prohibited.
> E-mail transmission cannot be guaranteed to be secure or error free as
> information could be intercepted, corrupted, lost, destroyed, arrive late or
> incomplete, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the content of this message, and
> shall have no liability for any loss or damage suffered by the user, which
> arise as a result of e-mail transmission.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Jay Pipes
Hi Eugene, please see comments inline. But the bottom line is that setting 
the transaction isolation level to READ_COMMITTED should be avoided.


On 11/18/2014 01:38 PM, Eugene Nikanorov wrote:

Hi neutron folks,

There is an ongoing effort to refactor some neutron DB logic to be
compatible with galera/mysql which doesn't support locking
(with_lockmode('update')).

Some code paths that used locking in the past were rewritten to retry
the operation if they detect that an object was modified concurrently.
The problem here is that all DB operations (CRUD) are performed in the
scope of some transaction that makes complex operations to be executed
in atomic manner.


Yes. The root of the problem in Neutron is that the session object is 
passed through all of the various plugin methods and the 
session.begin(subtransactions=True) is used all over the place, when in 
reality many things should not need to be done in long-lived 
transactional containers.



For mysql the default transaction isolation level is 'REPEATABLE READ'
which means that once the code issue a query within a transaction, this
query will return the same result while in this transaction (e.g. the
snapshot is taken by the DB during the first query and then reused for
the same query).


Correct.

However note that the default isolation level in PostgreSQL is READ 
COMMITTED, though it is important to point out that PostgreSQL's READ 
COMMITTED isolation level does *NOT* allow one session to see changes 
committed during query execution by concurrent transactions.


It is a common misunderstanding that MySQL's READ COMMITTED isolation 
level is the same as PostgreSQL's READ COMMITTED isolation level. It is 
not. PostgreSQL's READ COMMITTED isolation level is actually most 
closely similar to MySQL's REPEATABLE READ isolation level.


I bring this up because the proposed solution of setting the isolation 
level to READ COMMITTED will not work like you think it will on 
PostgreSQL. Regardless, please see below as to why setting the isolation 
level to READ COMMITTED is not the appropriate solution to this problem 
anyway...



In other words, the retry logic like the following will not work:

def allocate_obj():
    with session.begin(subtrans=True):
        for i in xrange(n_retries):
            obj = session.query(Model).filter_by(filters)
            count = session.query(Model).filter_by(id=obj.id).update({'allocated': True})
            if count:
                return obj

since usually methods like allocate_obj() is called from within another
transaction, we can't simply put transaction under 'for' loop to fix the
issue.


Exactly. The above code, from here:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L98

has no chance of working at all under the existing default isolation 
levels for either MySQL or PostgreSQL. If another session updates the 
same row in between the time the first session began and the UPDATE 
statement in the first session starts, then the first session will 
return 0 rows affected. It will continue to return 0 rows affected for 
each loop, as long as the same transaction/session is still in effect, 
which in the code above, is the case.



The particular issue here is
https://bugs.launchpad.net/neutron/+bug/1382064 with the proposed fix:
https://review.openstack.org/#/c/129288

So far the solution proven by tests is to change transaction isolation
level for mysql to be 'READ COMMITTED'.
The patch suggests changing the level for particular transaction where
issue occurs (per sqlalchemy, it will be reverted to engine default once
transaction is committed)
This isolation level allows the code above to see different result in
each iteration.


Not for PostgreSQL, see above. You would need to set the level to READ 
*UNCOMMITTED* to get that behaviour for PostgreSQL, and setting to READ 
UNCOMMITTED is opening up the code to a variety of other issues and 
should be avoided.



At the same time, any code that relies that repeated query under same
transaction gives the same result may potentially break.

So the question is: what do you think about changing the default
isolation level to READ COMMITTED for mysql project-wise?
It is already so for postgress, however we don't have much concurrent
test coverage to guarantee that it's safe to move to a weaker isolation
level.


PostgreSQL READ COMMITTED is the same as MySQL's REPEATABLE READ. :) So, 
no, it doesn't work for PostgreSQL either.


The design of the Neutron plugin code's interaction with the SQLAlchemy 
session object is the main problem here. Instead of doing all of this 
within a single transactional container, the code should instead be 
changed to perform the SELECT statements in separate transactions/sessions.
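To make that concrete, here is a minimal sketch of the pattern being
suggested (this is not Neutron's actual helper; the Segment model, the
filters and the retry count are illustrative assumptions). Each SELECT +
UPDATE attempt runs in its own short-lived session/transaction, so a
concurrent commit becomes visible on the next retry instead of being
hidden behind a long-lived snapshot:

from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Segment(Base):
    __tablename__ = 'segments'
    id = Column(Integer, primary_key=True)
    allocated = Column(Boolean, default=False)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine, expire_on_commit=False)

def allocate_segment(n_retries=10):
    for _ in range(n_retries):
        # A fresh session per attempt means a fresh transaction/snapshot.
        session = Session()
        try:
            obj = (session.query(Segment)
                   .filter_by(allocated=False).first())
            if obj is None:
                return None
            count = (session.query(Segment)
                     .filter_by(id=obj.id, allocated=False)
                     .update({'allocated': True},
                             synchronize_session=False))
            session.commit()
            if count:
                return obj      # we won the race for this segment
            # count == 0: another session allocated it first; retry.
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()
    return None

# tiny demonstration
s = Session()
s.add_all([Segment(id=1), Segment(id=2)])
s.commit()
s.close()
print(allocate_segment().id)    # prints 1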


That means not using the session parameter supplied to the 
neutron.plugins.ml2.drivers.helpers.TypeDriverHelper.allocate_partially_specified_segment() 
method, and instead performin

Re: [openstack-dev] [Fuel] Waiting for Haproxy backends

2014-11-19 Thread Sergii Golovatiuk
Hi crew,

Please see my inline comments.

Hi Everyone,
>
> I was reading the blueprints mentioned here and thought I'd take the
> opportunity to introduce myself and ask a few questions.
> For those that don't recognise my name, Pacemaker is my baby - so I take a
> keen interest helping people have a good experience with it :)
>
> A couple of items stood out to me (apologies if I repeat anything that is
> already well understood):
>
> * Operations with CIB utilizes almost 100% of CPU on the Controller
>
>  We introduced a new CIB algorithm in 1.1.12 which is O(2) faster/less
> resource hungry than prior versions.
>  I would be interested to hear your experiences with it if you are able to
> upgrade to that version.
>

Our team is aware of that. That's a really nice improvement; thank you very
much for it. We've prepared all the packages, though we are in feature freeze.
Pacemaker 1.1.12 will be added in the next release.


> * Corosync shutdown process takes a lot of time
>
>  Corosync (and Pacemaker) can shut down incredibly quickly.
>  If corosync is taking a long time, it will be because it is waiting for
> pacemaker, and pacemaker is almost always waiting for for one of the
> clustered services to shut down.
>

As part of this improvement, we have an idea to split the signalling layer
(corosync) and the resource management layer (pacemaker) by specifying

service {
   name: pacemaker
   ver:  1
}

and creating an upstart script to set the start ordering. That will allow us to:

1. Create some notifications in puppet for pacemaker
2. Restart and manage corosync and pacemaker independently
3. Use respawn in upstart to restart corosync or pacemaker


> * Current Fuel Architecture is limited to Corosync 1.x and Pacemaker 1.x
>
>  Corosync 2 is really the way to go.
>  Is there something in particular that is holding you back?
>  Also, out of interest, are you using cman or the pacemaker plugin?
>

We use almost-standard corosync 1.x and pacemaker from CentOS 6.5 and
Ubuntu 12.04. However, we've prepared corosync 2.x and pacemaker 1.1.12
packages, and we have updated puppet manifests on review. As was said
above, we can't just add them at the end of the development cycle.


>
> *  Diff operations against Corosync CIB require to save data to file rather
>   than keep all data in memory
>
>  Can someone clarify this one for me?
>

That's our implementation for puppet. We can't just use shadows in a
distributed environment, so we run

>
>  Also, I notice that the corosync init script has been modified to
> set/unset maintenance-mode with cibadmin.
>  Any reason not to use crm_attribute instead?  You might find its a less
> fragile solution than a hard-coded diff.
>

Can you give a particular line where you see that?

* Debug process of OCF scripts is not unified requires a lot of actions from
>  Cloud Operator
>
>  Two things to mention here... the first is crm_resource
> --force-(start|stop|check) which queries the cluster for the resource's
> definition but runs the command directly.

 Combined with -V, this means that you get to see everything the agent is
> doing.
>

We write many of our own OCF scripts, and we just need to see how an OCF
script behaves. ocf_tester is not enough for our cases. I'll try
crm_resource -V --force-start to see if it is better.


>
>  Also, pacemaker now supports the ability for agents to emit specially
> formatted error messages that are stored in the cib and can be shown back
> to users.
>  This can make things much less painful for admins. Look for
> PCMK_OCF_REASON_PREFIX in the upstream resource-agents project.
>

Thank you for the tip.

>
>
> * Openstack services are not managed by Pacemaker
>

The general idea is to have all OpenStack services under pacemaker control
rather than split between upstart and pacemaker. It will be very handy for
operators to see the status of all services from one console. It will also
give us the flexibility to have more complex service verification checks in
the monitor function.


>
>  Oh?
>
> * Compute nodes aren't in Pacemaker cluster, hence, are lacking a viable
>  control plane for their's compute/nova services.
>
>  pacemaker-remoted might be of some interest here.
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Remote/index.html
>
>
> * Creating and committing shadows not only adds constant pain with
> dependencies and unneeded complexity but also rewrites cluster attributes
> and even other changes if you mess up with ordering and it’s really hard to
> debug it.
>
>  Is this still an issue?  I'm reasonably sure this is specific to the way
> crmsh uses shadows.
>  Using the native tools it should be possible to commit only the delta, so
> any other changes that occur while you're updating the shadow would not be
> an issue, and existing attributes wouldn't be rewritten.
>

We are on the way to replacing pcs and crm with native tools in the puppet
service provider.


>
> * Restarting resources by Puppet’s pacemaker service provider restarts
> them even if they are running on other nodes and it sometimes imp

Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Anne Gentle
On Wed, Nov 19, 2014 at 1:34 PM, Everett Toews 
wrote:

> On Nov 19, 2014, at 4:56 AM, Christopher Yeoh  wrote:
>
> > Hi,
> >
> > We have moved to alternating times each week for the API WG meeting so
> > people from other timezones can attend. Since this is an odd week
> > the meeting will be Thursday UTC 1600. Details here:
> >
> > https://wiki.openstack.org/wiki/Meetings/API-WG
> >
> > The google ical feed hasn't been updated yet, but thats not surprising
> > since the wiki page was only updated a few hours ago.
>
> I see on the Meetings [1] page the link to the Google Calendar iCal feed.
>
> 1. Do you know the link to the Google Calendar itself, not just the iCal
> feed?
>

All I can find is:
https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.com&ctz=America/Chicago

 (Calendar ID: bj05mroquq28jhud58esggq...@group.calendar.google.com)


>
> 2. Do you know if there is a way to subscribe to only the API WG meeting
> from that calendar?
>
>
I don't think so, it's an ical feed for all OpenStack meetings. I've added
this alternating Thursday one to the OpenStack calendar. Thierry and I have
permissions, and as noted in this thread[1], he's working on automation.

Still, what I have to do is set my own calendar items for the meetings that
matter to me.

Anne

1.
http://lists.openstack.org/pipermail/openstack-dev/2014-November/051036.html


> Thanks,
> Everett
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Summit summary

2014-11-19 Thread melanie witt
On Nov 19, 2014, at 11:38, Everett Toews  wrote:

> Does anybody know what happened to the Etherpad? It’s completely blank now!!!
> 
> If you check the Timeslider, it appears that it only ever existed on Nov. 15. 
> Bizarre.

I see it as blank now too, however I can see all of the previous revisions and 
content when I drag the timeslider back.

melanie (melwitt)






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Summit summary

2014-11-19 Thread Everett Toews
Does anybody know what happened to the Etherpad? It’s completely blank now!!!

If you check the Timeslider, it appears that it only ever existed on Nov. 15. 
Bizarre.

Everett


On Nov 14, 2014, at 5:05 PM, Everett Toews  wrote:

> Hi All,
> 
> Here’s a summary of what happened at the Summit from the API Working Group 
> perspective.
> 
> Etherpad: https://etherpad.openstack.org/p/kilo-crossproject-api-wg
> 
> The 2 design summit sessions on Tuesday were very well attended, maybe 100ish 
> people I’m guessing. I got the impression there were developers from a 
> diverse set of projects just from the people who spoke up during the session. 
> We spent pretty much all of these 2 sessions discussing the working group 
> itself.
> 
> Some action items of note:
> 
> - Update the wiki page [1] with the decisions made during the discussion
> - Add an additional meeting time [2] to accommodate EU time
> - Email the WG about the Nova (and Neutron?) API microversions effort and how 
> it might be a strategy for moving forward with API changes
> 
> Review the rest of the action items in the etherpad to get a better picture.
> 
> The follow up session on Thursday (last slot of the day) was attended by 
> about half the people of the Tuesday sessions. We reviewed what happened on 
> Tuesday and then got to work. We ran through the workflow of creating a 
> guideline. We basically did #1 and #2 of How to Contribute [3] but instead of 
> first taking notes on the API Improvement in the wiki we just discussed it in 
> the session. We then submitted the patch for a new guideline [4].
> 
> As you can see there’s still a lot of work to be done in that review. It may 
> even be that we need a fresh start with it. But it was a good exercise for 
> everyone present to walk through the process together for the first time. I 
> think it really helped put everyone on the same page for working together as 
> a group.
> 
> Thanks,
> Everett
> 
> [1] https://wiki.openstack.org/wiki/API_Working_Group
> [2] https://wiki.openstack.org/wiki/Meetings/API-WG
> [3] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute
> [4] https://review.openstack.org/#/c/133087/
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Cross-Project Liaison for the API Working Group

2014-11-19 Thread Everett Toews
On Nov 16, 2014, at 4:59 PM, Christopher Yeoh  wrote:

> My 2c is we should say "The liaison should be the PTL or whomever they 
> delegate to be their representative"  and not mention anything about the 
> person needing to be a core developer. It removes any ambiguity about who 
> ultimately decides who the liaison is (the PTL) without saying that they have 
> to do the work themselves.

Sure. Go ahead and change it to that.

Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API-WG meeting (note time change this week)

2014-11-19 Thread Everett Toews
On Nov 19, 2014, at 4:56 AM, Christopher Yeoh  wrote:

> Hi,
> 
> We have moved to alternating times each week for the API WG meeting so
> people from other timezones can attend. Since this is an odd week 
> the meeting will be Thursday UTC 1600. Details here:
> 
> https://wiki.openstack.org/wiki/Meetings/API-WG
> 
> The google ical feed hasn't been updated yet, but thats not surprising
> since the wiki page was only updated a few hours ago.

I see on the Meetings [1] page the link to the Google Calendar iCal feed.

1. Do you know the link to the Google Calendar itself, not just the iCal feed?

2. Do you know if there is a way to subscribe to only the API WG meeting from 
that calendar?

Thanks,
Everett


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] [congress] Protocol for Congress --> Enactor

2014-11-19 Thread Gregory Lebovitz
anyone read this? comments?

On Sat, Nov 1, 2014 at 11:13 AM, Gregory Lebovitz 
wrote:

> Summary from IRC chat 10/14/2014 on weekly meeting [1] [2]
>
> Topic:  Declarative Language for Congress —> Enactor/Enforcer
>
> Question: Shall we specify a declarative language for communicating policy
> configured in Congress to enactors / enforcement systems
>
> Hypothesis (derived at conclusion of discussion):
>  - Specifying a declarative protocol and framework for describing policy,
> with extensible attribute/value fields described in a base ontology plus
> additional affinity ontologies, is what is needed sooner rather than later,
> so that we reach that end-state before too many Enactors dive into
> one-offs.
>  - We could achieve that specification once we know the right structure
>
> Discussion:
>
>- Given the following framework:
>- Elements:
>  - Congress - The policy description point, a place where:
> - (a) policy inputs are collected
> - (b) collected policy inputs are integrated
> - (c) policy is defined
> - (d) declares policy intent to enforcing / enacting systems
> - (e) observes state of environment, noting policy violations
>  - Feeders - provides policy inputs to Congress
>  - Enactors / Enforcers - receives policy declarations from
>  Congress and enacts / enforces the policy according to its 
> capabilities
> - E.g. Nova for VM placement, Neutron for interface
> connectivity, FWaaS for access control, etc.
>
> What will the protocol be for the Congress —> Enactors / Enforcers?
>
>
> thinrichs:  we've been assuming that Congress will leverage
> whatever the Enactors (policy engines) and Feeders (and more generally
> datacenter services) that exist are using. For basic datacenter services,
> we had planned on teaching Congress what their API is and what it does. So
> there's no new protocol there—we'd just use HTTP or whatever the service
> expects. For Enactors, there are 2 pieces: (1) what policy does Congress
> push and (2) what protocol does it use to do that? We don't know the answer
> to (1) yet.  (2) is less important, I think. For (2) we could use opflex,
> for example, or create a new one. (1) is hard because the Enactors likely
> have different languages that they understand. I’m not aware of anyone
> thinking about (2). I’m not thinking about (2) b/c I don't know the answer
> to (1). The *really* hard thing to understand IMO is how these Enactors
> should cooperate (in terms of the information they exchange and the
> functionality they provide).  The bits they use to wrap the messages they
> send while cooperating is a lower-level question.
>
> jasonsb & glebo: feel the need to clarify (2)
>
> glebo: if we come out strongly with a framework spec that identifies
> a protocol for (2), and make it clear that Congress participants, including
> several data center Feeders and Enactors, are in consensus, then the other
> Feeders & Enactors will line up, in order to be useful in the modern
> deployments. Either that, or they will remain isolated from the
> new environment, or their customers will have to create custom connectors
> to the new environment. It seems that we have 2 options. (a) Congress
> learns any language spoken by Feeders and Enactors, or (b) specifies a
> single protocol for Congress —> Enactors policy declarations, including a
> highly adaptable public registry(ies) for defining the meaning of content
> blobs in those messages. For (a) Congress would get VERY bloated with an
> abstraction layer, modules, semantics and state for each different language
> it needed to speak. And there would be 10s of these languages. For (b),
> there would be one way to structure messages that were constructed of blobs
> in (e.g.) some sort of Type/Length/Value (TLV) method, where the Types and
> Values were specified in some Internet registry.
>
> jasonsb: Could we attack this from the opposite direction? E.g. if
> Congress wanted to provide an operational dashboard to show if things are
> in compliance, it would be better served by receiving the state and stats
> from the Enactors in a single protocol. Could a dashboard like this be a
> carrot to lure the various players into a single protocol for Congress —>
> Enactor?
>
> glebo & jasonsb: If Congress has to give Enactors precise instructions on
> what to do, then Congress will bloat, having to have intelligence about
> each Enactor type, and hold its state and such. If Congress can deliver
> generalized policy declarations, and the Enactor is responsible for
> interpreting it, and applying it, and gathering and analyzing the state so
> that it knows how to react, then the intelligence and state that it is
> specialized in knowing will live in the Enactor. A smaller Congress is
> better, and this provides cleaner “layering” of the problem space overall.
>
> thinrichs: would love to see a single (2) lan

Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Maxim Nestratov

Hi Nikhil,

Thank you for your response and advice. I'm currently creating the spec 
and will publish a review for it shortly.


Best,
Maxim

On 19.11.2014 18:15, Nikhil Komawar wrote:

Hi Maxim,

Thanks for showing interest in this aspect. Like nova-specs, Glance also needs 
a spec to be created for discussion related to the blueprint.

Please try to create one here [1]. Additionally you may join us at the meeting 
[2] if you feel stuck or need clarifications.

[1] https://github.com/openstack/glance-specs
[2] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

Thanks,
-Nikhil


From: Maxim Nestratov [mnestra...@parallels.com]
Sent: Wednesday, November 19, 2014 8:27 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] Parallels loopback disk format support

Greetings,

In the scope of these changes [1], I would like to add a new image format
to glance. For this purpose a blueprint [2] was created, and I would really
appreciate it if someone from the glance team could review this proposal.

[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Ryan Moats
I was waiting for this because I think I may have a slightly different (and
outside of the box) view on how to approach a solution to this.

Conceptually (at least in my mind) there isn't a whole lot of difference
between how the example below (i.e. updates from two concurrent threads) is
handled
and how/if neutron wants to support a multi-master database scenario (which
in turn lurks in the background when one starts thinking/talking about
multi-region support).

If neutron wants to eventually support multi-master database scenarios, I
see two ways to go about it:

1) Defer multi-master support to the database itself.
2) Take responsibility for managing the conflict resolution inherent in
multi-master scenarios itself.

The first approach is certainly simpler in the near term, but it has the
down side of restricting the choice of databases to those that have solved
multi-master and further, may lead to code bifurcation based on possibly
different solutions to the conflict resolution scenarios inherent in
multi-master.

The second approach is certainly more complex as neutron assumes more
responsibility for its own actions, but it has the advantage that (if done
right) would be transparent to the underlying databases (with all that
implies)

My reason for asking this question here is that if the community wants to
consider #2, then these problems are the place to start crafting that
solution - if we solve the conflicts inherent in the two concurrent
thread scenarios, then I think we will find that we've solved the
multi-master problem essentially "for free".

Ryan Moats

Mike Bayer  wrote on 11/19/2014 12:05:35 PM:

> From: Mike Bayer 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 11/19/2014 12:05 PM
> Subject: Re: [openstack-dev] [Neutron] DB: transaction isolation and
> related questions
>
> On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov 
wrote:
>
> Hi neutron folks,
>
> There is an ongoing effort to refactor some neutron DB logic to be
> compatible with galera/mysql which doesn't support locking
> (with_lockmode('update')).
>
> Some code paths that used locking in the past were rewritten to
> retry the operation if they detect that an object was modified
concurrently.
> The problem here is that all DB operations (CRUD) are performed in
> the scope of some transaction that makes complex operations to be
> executed in atomic manner.
> For mysql the default transaction isolation level is 'REPEATABLE
> READ' which means that once the code issue a query within a
> transaction, this query will return the same result while in this
> transaction (e.g. the snapshot is taken by the DB during the first
> query and then reused for the same query).
> In other words, the retry logic like the following will not work:
>
> def allocate_obj():
>     with session.begin(subtrans=True):
>         for i in xrange(n_retries):
>             obj = session.query(Model).filter_by(filters)
>             count = session.query(Model).filter_by(id=obj.id).update({'allocated': True})
>             if count:
>                 return obj
>
> since usually methods like allocate_obj() is called from within
> another transaction, we can't simply put transaction under 'for'
> loop to fix the issue.
>
> has this been confirmed?  the point of systems like repeatable read
> is not just that you read the “old” data, it’s also to ensure that
> updates to that data either proceed or fail explicitly; locking is
> also used to prevent concurrent access that can’t be reconciled.  A
> lower isolation removes these advantages.
>
> I ran a simple test in two MySQL sessions as follows:
>
> session 1:
>
> mysql> create table some_table(data integer) engine=innodb;
> Query OK, 0 rows affected (0.01 sec)
>
> mysql> insert into some_table(data) values (1);
> Query OK, 1 row affected (0.00 sec)
>
> mysql> begin;
> Query OK, 0 rows affected (0.00 sec)
>
> mysql> select data from some_table;
> +--+
> | data |
> +--+
> |1 |
> +--+
> 1 row in set (0.00 sec)
>
> session 2:
>
> mysql> begin;
> Query OK, 0 rows affected (0.00 sec)
>
> mysql> update some_table set data=2 where data=1;
> Query OK, 1 row affected (0.00 sec)
> Rows matched: 1  Changed: 1  Warnings: 0
>
> then back in session 1, I ran:
>
> mysql> update some_table set data=3 where data=1;
>
> this query blocked;  that’s because session 2 has placed a write
> lock on the table.  this is the effect of repeatable read isolation.
>
> while it blocked, I went to session 2 and committed the in-progress
> transaction:
>
> mysql> commit;
> Query OK, 0 rows affected (0.00 sec)
>
> then session 1 unblocked, and it reported, correctly, that zero rows
> were affected:
>
> Query OK, 0 rows affected (7.29 sec)
> Rows matched: 0  Changed: 0  Warnings: 0
>
> the update had not taken place, as was stated by “rows matched":
>
> mysql> select * from some_table;
> +--+
> | data |
> +--+
> |1 |
> +--+
> 1 row in set (0.00 sec)
>
> the code in 

Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Dan Smith
> However, it presents a problem when we consider NovaObjects, and
> dependencies between them.

I disagree with this assertion, because:

> For example, take Instance.save(). An
> Instance has relationships with several other object types, one of which
> is InstanceInfoCache. Consider the following code, which is amongst what
> happens in spawn():
> 
> instance = Instance.get_by_uuid(uuid)
> instance.vm_state = vm_states.ACTIVE
> instance.info_cache.network_info = new_nw_info
> instance.save()
> 
> instance.save() does (simplified):
>   self.info_cache.save()
>   self._db_save()
> 
> Both of these saves happen in separate db transactions.

This has always been two DB calls, and for a while recently, it was two
RPCs, each of which did one call.

> This has at least 2 undesirable effects:
> 
> 1. A failure can result in an inconsistent database. i.e. info_cache
> having been persisted, but instance.vm_state not having been persisted.
> 
> 2. Even in the absence of a failure, an external reader can see the new
> info_cache but the old instance.

I think you might want to pick a different example. We update the
info_cache all the time asynchronously, due to "time has passed" and
other non-user-visible reasons.

> New features continue to add to the problem,
> including numa topology and pci requests.

NUMA and PCI information are now created atomically with the instance
(or at least, passed to SQLA in a way I expect does the insert as a
single transaction). We don't yet do that in save(), I think because we
didn't actually change this information after creation until recently.

Definitely agree that we should not save the PCI part without the base
instance part.

> I don't think we can reasonably remove the cascading save() above due to
> the deliberate design of objects. Objects don't correspond directly to
> their datamodels, so save() does more work than just calling out to the
> DB. We need a way to allow cascading object saves to happen within a
> single DB transaction. This will mean:
> 
> 1. A change will be persisted either entirely or not at all in the event
> of a failure.
> 
> 2. A reader will see either the whole change or none of it.

This is definitely what we should strive for in cases where the updates
are related, but as I said above, for things (like info cache) where it
doesn't matter, we should be fine.

> Note that there is this recently approved oslo.db spec to make
> transactions more manageable:
> 
> https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
> 
> Again, while this will be a significant benefit to the DB api, it will
> not solve the problem of cascading object saves without allowing
> transaction management at the level of NovaObject.save(): we need to
> allow something to call a db api with an existing session, and we need
> to allow something to pass an existing db transaction to NovaObject.save().

I don't agree that we need to be concerned about this at the
NovaObject.save() level. I do agree that Instance.save() needs to have a
relationship to its sub-objects that facilitates atomicity (where
appropriate), and that such a pattern can be used for other such
hierarchies.

> An obvious precursor to that is removing N309 from hacking, which
> specifically tests for db apis which accept a session argument. We then
> need to consider how NovaObject.save() should manage and propagate db
> transactions.

Right, so I believe that we had more consistent handling of transactions
in the past. We had a mechanism for passing around the session between
chained db/api methods to ensure they happened atomically. I think Boris
led the charge to eliminate that, culminating with the hacking rule you
mentioned.

Maybe getting back to the justification for removing that facility would
help us understand the challenges we face going forward?

> [1] At a slight tangent, this looks like an artifact of some premature
> generalisation a few years ago. It seems unlikely that anybody is going
> to rewrite the db api using an ORM other than sqlalchemy, so we should
> probably ditch it and promote it to db/api.py.

We've had a few people ask about it, in terms of rewriting some or all
of our DB API to talk to a totally non-SQL backend. Further, AFAIK, RAX
rewrites a few of the DB API calls to use raw SQL queries for
performance (or did, at one point).

I'm quite happy to have the implementation of Instance.save() make use
of primitives to ensure atomicity where appropriate. I don't think
that's something that needs or deserves generalization at this point,
and I'm not convinced it needs to be in the save method itself. Right
now we update several things atomically by passing something to db/api
that gets turned into properly-related SQLA objects. I think we could do
the same for any that we're currently cascading separately, even if the
db/api update method uses a transaction to ensure safety.

--Dan



___

[openstack-dev] [QA] Meeting Thursday November 20th at 17:00 UTC

2014-11-19 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, November 20th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that a few weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 12:59 PM, Boris Pavlovic  wrote:
> 
> Matthew, 
> 
> 
> LOL ORM on top of another ORM 
> 
> https://img.neoseeker.com/screenshots/TW92aWVzL0RyYW1h/inception_image33.png 
> 

I know where you stand on this Boris, but I fail to see how this is a 
productive contribution to the discussion.  Leo Dicaprio isn’t going to solve 
our issue here and I look forward to iterating on what we have today.




> 
> 
> 
> Best regards,
> Boris Pavlovic 
> 
> On Wed, Nov 19, 2014 at 8:46 PM, Matthew Booth  > wrote:
> We currently have a pattern in Nova where all database code lives in
> db/sqla/api.py[1]. Database transactions are only ever created or used
> in this module. This was an explicit design decision:
> https://blueprints.launchpad.net/nova/+spec/db-session-cleanup 
>  .
> 
> However, it presents a problem when we consider NovaObjects, and
> dependencies between them. For example, take Instance.save(). An
> Instance has relationships with several other object types, one of which
> is InstanceInfoCache. Consider the following code, which is amongst what
> happens in spawn():
> 
> instance = Instance.get_by_uuid(uuid)
> instance.vm_state = vm_states.ACTIVE
> instance.info_cache.network_info = new_nw_info
> instance.save()
> 
> instance.save() does (simplified):
>   self.info_cache.save()
>   self._db_save()
> 
> Both of these saves happen in separate db transactions. This has at
> least 2 undesirable effects:
> 
> 1. A failure can result in an inconsistent database. i.e. info_cache
> having been persisted, but instance.vm_state not having been persisted.
> 
> 2. Even in the absence of a failure, an external reader can see the new
> info_cache but the old instance.
> 
> This is one example, but there are lots. We might convince ourselves
> that the impact of this particular case is limited, but there will be
> others where it isn't. Confidently assuring ourselves of a limited
> impact also requires a large amount of context which not many
> maintainers will have. New features continue to add to the problem,
> including numa topology and pci requests.
> 
> I don't think we can reasonably remove the cascading save() above due to
> the deliberate design of objects. Objects don't correspond directly to
> their datamodels, so save() does more work than just calling out to the
> DB. We need a way to allow cascading object saves to happen within a
> single DB transaction. This will mean:
> 
> 1. A change will be persisted either entirely or not at all in the event
> of a failure.
> 
> 2. A reader will see either the whole change or none of it.
> 
> We are not talking about crossing an RPC boundary. The single database
> transaction only makes sense within the context of a single RPC call.
> This will always be the case when NovaObject.save() cascades to other
> object saves.
> 
> Note that we also have a separate problem, which is that the DB api's
> internal use of transactions is wildly inconsistent. A single db api
> call can result in multiple concurrent db transactions from the same
> thread, and all the deadlocks that implies. This needs to be fixed, but
> it doesn't require changing our current assumption that DB transactions
> live only within the DB api.
> 
> Note that there is this recently approved oslo.db spec to make
> transactions more manageable:
> 
> https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
>  
> 
> 
> Again, while this will be a significant benefit to the DB api, it will
> not solve the problem of cascading object saves without allowing
> transaction management at the level of NovaObject.save(): we need to
> allow something to call a db api with an existing session, and we need
> to allow something to pass an existing db transaction to NovaObject.save().
> 
> An obvious precursor to that is removing N309 from hacking, which
> specifically tests for db apis which accept a session argument. We then
> need to consider how NovaObject.save() should manage and propagate db
> transactions.
> 
> I think the following pattern would solve it:
> 
> @remotable
> def save():
> session = 
> try:
> r = self._save(session)
> session.commit() (or reader/writer magic from oslo.db)
> return r
> except Exception:
> session.rollback() (or reader/writer magic from oslo.db)
> raise
> 
> @definitelynotremotable
> def _save(session):
> previous contents of save() move here
> session is explicitly passed to db api calls
> cascading saves call object._save(session)
> 
> Whether we wait for the oslo.db updates or not, we need something like
> the above. We could implement this today by exposing
> db.sqla.api.get_session().
> 
> Thoughts?
> 
> Matt

Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Mike Bayer

> On Nov 19, 2014, at 11:46 AM, Matthew Booth  wrote:
> 
> We currently have a pattern in Nova where all database code lives in
> db/sqla/api.py[1]. Database transactions are only ever created or used
> in this module. This was an explicit design decision:
> https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .
> 
> However, it presents a problem when we consider NovaObjects, and
> dependencies between them. For example, take Instance.save(). An
> Instance has relationships with several other object types, one of which
> is InstanceInfoCache. Consider the following code, which is amongst what
> happens in spawn():
> 
> instance = Instance.get_by_uuid(uuid)
> instance.vm_state = vm_states.ACTIVE
> instance.info_cache.network_info = new_nw_info
> instance.save()
> 
> instance.save() does (simplified):
>  self.info_cache.save()
>  self._db_save()
> 
> Both of these saves happen in separate db transactions.
> 

> I don't think we can reasonably remove the cascading save() above due to
> the deliberate design of objects. Objects don't correspond directly to
> their datamodels, so save() does more work than just calling out to the
> DB. We need a way to allow cascading object saves to happen within a
> single DB transaction.

So this is actually part of what https://review.openstack.org/#/c/125181/ aims 
to solve. If it isn’t going to achieve this (and I think I see what the 
problem is), we need to fix it.

> 
> Note that we also have a separate problem, which is that the DB api's
> internal use of transactions is wildly inconsistent. A single db api
> call can result in multiple concurrent db transactions from the same
> thread, and all the deadlocks that implies. This needs to be fixed, but
> it doesn't require changing our current assumption that DB transactions
> live only within the DB api.
> 
> Note that there is this recently approved oslo.db spec to make
> transactions more manageable:
> 
> https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
> 
> Again, while this will be a significant benefit to the DB api, it will
> not solve the problem of cascading object saves without allowing
> transaction management at the level of NovaObject.save(): we need to
> allow something to call a db api with an existing session, and we need
> to allow something to pass an existing db transaction to NovaObject.save().

OK so here is why EngineFacade as described so far doesn’t work, because if it 
is like this:

def some_api_operation ->

novaobject1.save() ->

   @writer
   def do_some_db_thing()

novaobject2.save() ->

   @writer
   def do_some_other_db_thing()

then yes, those two @writer calls aren’t coordinated.   So yes, I think 
something that ultimately communicates the same meaning as @writer needs to be 
at the top:

@something_that_invokes_writer_without_exposing_db_stuff
def some_api_operation ->

# … etc

If my decorator is not clear enough, let me clarify that a decorator that is 
present at the API/ nova objects layer will interact with the SQL layer through 
some form of dependency injection, and not any kind of explicit import; that 
is, when the SQL layer is invoked, it registers some kind of state onto the 
@something_that_invokes_writer_without_exposing_db_stuff system that causes its 
“cleanup”, in this case the commit(), to occur at the end of that topmost 
decorator.
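
As a rough sketch of that idea (this is *not* oslo.db's actual EngineFacade
API, and the real design differs in details such as how nested @writer calls
are treated; FakeSession and all the names below are purely illustrative),
the shape is: the outermost decorated call owns the session and the
commit/rollback, while inner decorated calls simply reuse that state rather
than opening their own transaction:

import threading

_context = threading.local()

class FakeSession(object):
    """Stand-in for a real SQLAlchemy session, just for this sketch."""
    def commit(self):
        print('COMMIT')
    def rollback(self):
        print('ROLLBACK')

def writer(func):
    """Outermost call opens the 'transaction'; nested calls join it."""
    def wrapper(*args, **kwargs):
        outermost = getattr(_context, 'session', None) is None
        if outermost:
            _context.session = FakeSession()
        try:
            result = func(*args, **kwargs)
            if outermost:
                _context.session.commit()
            return result
        except Exception:
            if outermost:
                _context.session.rollback()
            raise
        finally:
            if outermost:
                _context.session = None
    return wrapper

@writer
def save_info_cache():
    print('UPDATE instance_info_caches ...')

@writer
def save_instance():
    print('UPDATE instances ...')
    save_info_cache()

@writer
def some_api_operation():
    save_instance()

some_api_operation()    # both UPDATEs print, followed by exactly one COMMIT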


> I think the following pattern would solve it:
> 
> @remotable
> def save():
>session = 
>try:
>r = self._save(session)
>session.commit() (or reader/writer magic from oslo.db)
>return r
>except Exception:
>session.rollback() (or reader/writer magic from oslo.db)
>raise
> 
> @definitelynotremotable
> def _save(session):
>previous contents of save() move here
>session is explicitly passed to db api calls
>cascading saves call object._save(session)

so again with EngineFacade rewrite, the @definitelynotremotable system should 
also interact such that if @writer is invoked internally, an error is raised, 
just the same as when @writer is invoked within @reader.


> 
> Whether we wait for the oslo.db updates or not, we need something like
> the above. We could implement this today by exposing
> db.sqla.api.get_session().

EngineFacade is hoped to be ready for Kilo and obviously Nova is very much 
hoped to be my first customer for integration. It would be great if folks 
want to step up and help implement it, or at least take hold of a prototype I 
can build relatively quickly and integration test it and/or work on a real nova 
integration.

> 
> Thoughts?
> 
> Matt
> 
> [1] At a slight tangent, this looks like an artifact of some premature
> generalisation a few years ago. It seems unlikely that anybody is going
> to rewrite the db api using an ORM other than sqlalchemy, so we should
> probably ditch it and promote it to db/api.py.

funny you should menti

Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Fox, Kevin M
I meant, either go into the BIOS and set the node to netboot, or hit F12 (or 
whatever) and netboot. The netbooted discovery image should be able to gather 
all the rest of the bits? The minimal ISO could also be just gPXE or something like 
that and would do the same as above. Then you don't have to update the ISO 
every time you enhance the discovery process, either.

Hmmm... Some BMCs do DHCP out of the box, though. I guess if you watched for 
DHCP leases and then tried to contact them over IPMI with a few default 
username/passwords, you'd probably get a fair number of them without much 
effort, if they are preconfigured.

In our experience though, we usually get nodes in for which we have to configure 
netboot in the BIOS; then the next easiest step is to install and then 
configure the BMC via the installed Linux. You can manually set up the BMC 
username/password/IP/whatever, but it's work. Most of the BMCs we've seen have 
IPMI over the network disabled by default. :/ So in that case, the former (netboot 
the box, load a discovery image, and have it configure the BMC all in one go) 
would be nicer, I think.

Thanks,
Kevin

From: Tomasz Napierala [tnapier...@mirantis.com]
Sent: Wednesday, November 19, 2014 9:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bogdan Dobrelya
Subject: Re: [openstack-dev] [Fuel] Power management in Cobbler

> On 19 Nov 2014, at 17:56, Fox, Kevin M  wrote:
>
> Would net booting a minimal discovery image work? You usually can dump ipmi 
> network information from the host.
>

To boot from the minimal ISO (which is what we do now) you still need to tell the 
host to do it. This is where IPMI discovery is needed, I guess.

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DB: transaction isolation and related questions

2014-11-19 Thread Mike Bayer

> On Nov 18, 2014, at 1:38 PM, Eugene Nikanorov  wrote:
> 
> Hi neutron folks,
> 
> There is an ongoing effort to refactor some neutron DB logic to be compatible 
> with galera/mysql which doesn't support locking (with_lockmode('update')).
> 
> Some code paths that used locking in the past were rewritten to retry the 
> operation if they detect that an object was modified concurrently.
> The problem here is that all DB operations (CRUD) are performed in the scope 
> of some transaction that makes complex operations to be executed in atomic 
> manner.
> For mysql the default transaction isolation level is 'REPEATABLE READ' which 
> means that once the code issue a query within a transaction, this query will 
> return the same result while in this transaction (e.g. the snapshot is taken 
> by the DB during the first query and then reused for the same query).
> In other words, the retry logic like the following will not work:
> 
> def allocate_obj():
>     with session.begin(subtrans=True):
>         for i in xrange(n_retries):
>             obj = session.query(Model).filter_by(filters)
>             count = session.query(Model).filter_by(id=obj.id).update({'allocated': True})
>             if count:
>                 return obj
> 
> since usually methods like allocate_obj() is called from within another 
> transaction, we can't simply put transaction under 'for' loop to fix the 
> issue.

has this been confirmed?  the point of systems like repeatable read is not just 
that you read the “old” data, it’s also to ensure that updates to that data 
either proceed or fail explicitly; locking is also used to prevent concurrent 
access that can’t be reconciled.  A lower isolation removes these advantages.  

I ran a simple test in two MySQL sessions as follows:

session 1:

mysql> create table some_table(data integer) engine=innodb;
Query OK, 0 rows affected (0.01 sec)

mysql> insert into some_table(data) values (1);
Query OK, 1 row affected (0.00 sec)

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> select data from some_table;
+--+
| data |
+--+
|1 |
+--+
1 row in set (0.00 sec)


session 2:

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> update some_table set data=2 where data=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

then back in session 1, I ran:

mysql> update some_table set data=3 where data=1;

this query blocked;  that’s because session 2 has placed a write lock on the 
table.  this is the effect of repeatable read isolation.

while it blocked, I went to session 2 and committed the in-progress transaction:

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

then session 1 unblocked, and it reported, correctly, that zero rows were 
affected:

Query OK, 0 rows affected (7.29 sec)
Rows matched: 0  Changed: 0  Warnings: 0

the update had not taken place, as was stated by “rows matched":

mysql> select * from some_table;
+--+
| data |
+--+
|1 |
+--+
1 row in set (0.00 sec)

the code in question would do a retry at this point; it is checking the number 
of rows matched, and that number is accurate.

if our code did *not* block at the point of our UPDATE, then it would have 
proceeded, and the other transaction would have overwritten what we just did, 
when it committed.   I don’t know that read committed is necessarily any better 
here.

now perhaps, with Galera, none of this works correctly.  That would be a 
different issue in which case sure, we should use whatever isolation is 
recommended for Galera.  But I’d want to potentially peg it to the fact that 
Galera is in use, or not.

would love also to hear from Jay Pipes on this since he literally wrote the 
book on MySQL ! :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Boris Pavlovic
Matthew,


LOL ORM on top of another ORM 

https://img.neoseeker.com/screenshots/TW92aWVzL0RyYW1h/inception_image33.png



Best regards,
Boris Pavlovic

On Wed, Nov 19, 2014 at 8:46 PM, Matthew Booth  wrote:

> We currently have a pattern in Nova where all database code lives in
> db/sqla/api.py[1]. Database transactions are only ever created or used
> in this module. This was an explicit design decision:
> https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .
>
> However, it presents a problem when we consider NovaObjects, and
> dependencies between them. For example, take Instance.save(). An
> Instance has relationships with several other object types, one of which
> is InstanceInfoCache. Consider the following code, which is amongst what
> happens in spawn():
>
> instance = Instance.get_by_uuid(uuid)
> instance.vm_state = vm_states.ACTIVE
> instance.info_cache.network_info = new_nw_info
> instance.save()
>
> instance.save() does (simplified):
>   self.info_cache.save()
>   self._db_save()
>
> Both of these saves happen in separate db transactions. This has at
> least 2 undesirable effects:
>
> 1. A failure can result in an inconsistent database. i.e. info_cache
> having been persisted, but instance.vm_state not having been persisted.
>
> 2. Even in the absence of a failure, an external reader can see the new
> info_cache but the old instance.
>
> This is one example, but there are lots. We might convince ourselves
> that the impact of this particular case is limited, but there will be
> others where it isn't. Confidently assuring ourselves of a limited
> impact also requires a large amount of context which not many
> maintainers will have. New features continue to add to the problem,
> including numa topology and pci requests.
>
> I don't think we can reasonably remove the cascading save() above due to
> the deliberate design of objects. Objects don't correspond directly to
> their datamodels, so save() does more work than just calling out to the
> DB. We need a way to allow cascading object saves to happen within a
> single DB transaction. This will mean:
>
> 1. A change will be persisted either entirely or not at all in the event
> of a failure.
>
> 2. A reader will see either the whole change or none of it.
>
> We are not talking about crossing an RPC boundary. The single database
> transaction only makes sense within the context of a single RPC call.
> This will always be the case when NovaObject.save() cascades to other
> object saves.
>
> Note that we also have a separate problem, which is that the DB api's
> internal use of transactions is wildly inconsistent. A single db api
> call can result in multiple concurrent db transactions from the same
> thread, and all the deadlocks that implies. This needs to be fixed, but
> it doesn't require changing our current assumption that DB transactions
> live only within the DB api.
>
> Note that there is this recently approved oslo.db spec to make
> transactions more manageable:
>
>
> https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm
>
> Again, while this will be a significant benefit to the DB api, it will
> not solve the problem of cascading object saves without allowing
> transaction management at the level of NovaObject.save(): we need to
> allow something to call a db api with an existing session, and we need
> to allow something to pass an existing db transaction to NovaObject.save().
>
> An obvious precursor to that is removing N309 from hacking, which
> specifically tests for db apis which accept a session argument. We then
> need to consider how NovaObject.save() should manage and propagate db
> transactions.
>
> I think the following pattern would solve it:
>
> @remotable
> def save():
> session = 
> try:
> r = self._save(session)
> session.commit() (or reader/writer magic from oslo.db)
> return r
> except Exception:
> session.rollback() (or reader/writer magic from oslo.db)
> raise
>
> @definitelynotremotable
> def _save(session):
> previous contents of save() move here
> session is explicitly passed to db api calls
> cascading saves call object._save(session)
>
> Whether we wait for the oslo.db updates or not, we need something like
> the above. We could implement this today by exposing
> db.sqla.api.get_session().
>
> Thoughts?
>
> Matt
>
> [1] At a slight tangent, this looks like an artifact of some premature
> generalisation a few years ago. It seems unlikely that anybody is going
> to rewrite the db api using an ORM other than sqlalchemy, so we should
> probably ditch it and promote it to db/api.py.
> --
> Matthew Booth
> Red Hat Engineering, Virtualisation Team
>
> Phone: +442070094448 (UK)
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mail

Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Tomasz Napierala

> On 19 Nov 2014, at 17:56, Fox, Kevin M  wrote:
> 
> Would net booting a minimal discovery image work? You usually can dump ipmi 
> network information from the host.
> 

To boot from the minimal ISO (which is what we do now) you still need to tell the 
host to do it. This is where IPMI discovery is needed, I guess.

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Ivar Lazzaro
While I agree that a unified endpoint could be a good solution for now, I
think that the easiest way of doing this would be by implementing it as an
external Neutron service.

Using python entry_points, the advanced service extensions can be loaded in
Neutron just like we do today (using neutron.conf).

We will basically have a new project for which Neutron will be a dependency
(not the other way around!) so that any module of Neutron can be
imported/used just as if the new code were living within Neutron itself.

As far as UTs are concerned, Neutron will also be in the test-requirements
for the new project, which means that any existing UT framework in Neutron
today can be easily reused by the new services.

This is compliant with the requirement that Neutron stays the only
endpoint, giving the user the ability to load the new services whenever she
wants by configuring Neutron alone, while separating the concerns more
easily and clearly.
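
As a rough illustration (the package name, the entry-point namespace and the
class paths below are hypothetical, not the actual Neutron ones), the external
library's setup.py could expose its services like this, and Neutron would then
load only the plugins the operator enables in neutron.conf:

from setuptools import setup

setup(
    name='neutron-advanced-services',      # hypothetical package name
    version='0.1.0',
    packages=['neutron_adv_svcs'],
    install_requires=['neutron'],           # Neutron is the dependency here
    entry_points={
        # hypothetical namespace; Neutron would scan it (e.g. with stevedore)
        # and instantiate only the plugins listed in neutron.conf
        'neutron.service_plugins': [
            'lbaas = neutron_adv_svcs.lbaas.plugin:LoadBalancerPlugin',
            'fwaas = neutron_adv_svcs.fwaas.plugin:FirewallPlugin',
        ],
    },
)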
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Kyle Mestery
On Tue, Nov 18, 2014 at 5:32 PM, Doug Wiegley  wrote:
> Hi,
>
>> so the specs repository would continue to be shared during the Kilo cycle.
>
> One of the reasons to split is that these two teams have different
> priorities and velocities.  Wouldn’t that be easier to track/manage as
> separate launchpad projects and specs repos, irrespective of who is
> approving them?
>
My thinking here is that the specs repo is shared (at least initially)
because the projects are under one umbrella, and we want them to work
closely together initially. This keeps everyone in the loop. Once
things mature, we can look at reevaluating this. Does that make sense?

Thanks,
Kyle

> Thanks,
> doug
>
>
>
> On Nov 18, 2014, at 10:31 PM, Mark McClain  wrote:
>
> All-
>
> Over the last several months, the members of the Networking Program have
> been discussing ways to improve the management of our program.  When the
> Quantum project was initially launched, we envisioned a combined service
> that included all things network related.  This vision served us well in the
> early days as the team mostly focused on building out layers 2 and 3;
> however, we’ve run into growth challenges as the project started building
> out layers 4 through 7.  Initially, we thought that development would float
> across all layers of the networking stack, but the reality is that the
> development concentrates around either layer 2 and 3 or layers 4 through 7.
> In the last few cycles, we’ve also discovered that these concentrations have
> different velocities and a single core team forces one to match the other to
> the detriment of the one forced to slow down.
>
> Going forward we want to divide the Neutron repository into two separate
> repositories lead by a common Networking PTL.  The current mission of the
> program will remain unchanged [1].  The split would be as follows:
>
> Neutron (Layer 2 and 3)
> - Provides REST service and technology agnostic abstractions for layer 2 and
> layer 3 services.
>
> Neutron Advanced Services Library (Layers 4 through 7)
> - A python library which is co-released with Neutron
> - The advance service library provides controllers that can be configured to
> manage the abstractions for layer 4 through 7 services.
>
> Mechanics of the split:
> - Both repositories are members of the same program, so the specs repository
> would continue to be shared during the Kilo cycle.  The PTL and the drivers
> team will retain approval responsibilities they now share.
> - The split would occur around Kilo-1 (subject to coordination of the Infra
> and Networking teams). The timing is designed to enable the proposed REST
> changes to land around the time of the December development sprint.
> - The core team for each repository will be determined and proposed by Kyle
> Mestery for approval by the current core team.
> - The Neutron Server and the Neutron Adv Services Library would be co-gated
> to ensure that incompatibilities are not introduced.
> - The Advance Service Library would be an optional dependency of Neutron, so
> integrated cross-project checks would not be required to enable it during
> testing.
> - The split should not adversely impact operators and the Networking program
> should maintain standard OpenStack compatibility and deprecation cycles.
>
> This proposal to divide into two repositories achieved a strong consensus at
> the recent Paris Design Summit and it does not conflict with the current
> governance model or any proposals circulating as part of the ‘Big Tent’
> discussion.
>
> Kyle and mark
>
> [1]
> https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Kyle Mestery
On Tue, Nov 18, 2014 at 5:36 PM, Armando M.  wrote:
> Mark, Kyle,
>
> What is the strategy for tracking the progress and all the details about
> this initiative? Blueprint spec, wiki page, or something else?
>
We're in the process of writing a spec for this now, but we first
wanted community feedback. Also, it's on the TC agenda for next week I
believe, so once we get signoff from the TC, we'll propose the spec.

Thanks,
Kyle

> One thing I personally found useful about the spec approach adopted in [1],
> was that we could quickly and effectively incorporate community feedback;
> having said that I am not sure that the same approach makes sense here,
> hence the question.
>
> Also, what happens for experimental efforts that are neither L2-3 nor L4-7
> (e.g. TaaS or NFV related ones?), but they may still benefit from this
> decomposition (as it promotes better separation of responsibilities)? Where
> would they live? I am not sure we made any particular progress of the
> incubator project idea that was floated a while back.
>
> Cheers,
> Armando
>
> [1] https://review.openstack.org/#/c/134680/
>
> On 18 November 2014 15:32, Doug Wiegley  wrote:
>>
>> Hi,
>>
>> > so the specs repository would continue to be shared during the Kilo
>> > cycle.
>>
>> One of the reasons to split is that these two teams have different
>> priorities and velocities.  Wouldn’t that be easier to track/manage as
>> separate launchpad projects and specs repos, irrespective of who is
>> approving them?
>>
>> Thanks,
>> doug
>>
>>
>>
>> On Nov 18, 2014, at 10:31 PM, Mark McClain  wrote:
>>
>> All-
>>
>> Over the last several months, the members of the Networking Program have
>> been discussing ways to improve the management of our program.  When the
>> Quantum project was initially launched, we envisioned a combined service
>> that included all things network related.  This vision served us well in the
>> early days as the team mostly focused on building out layers 2 and 3;
>> however, we’ve run into growth challenges as the project started building
>> out layers 4 through 7.  Initially, we thought that development would float
>> across all layers of the networking stack, but the reality is that the
>> development concentrates around either layer 2 and 3 or layers 4 through 7.
>> In the last few cycles, we’ve also discovered that these concentrations have
>> different velocities and a single core team forces one to match the other to
>> the detriment of the one forced to slow down.
>>
>> Going forward we want to divide the Neutron repository into two separate
>> repositories lead by a common Networking PTL.  The current mission of the
>> program will remain unchanged [1].  The split would be as follows:
>>
>> Neutron (Layer 2 and 3)
>> - Provides REST service and technology agnostic abstractions for layer 2
>> and layer 3 services.
>>
>> Neutron Advanced Services Library (Layers 4 through 7)
>> - A python library which is co-released with Neutron
>> - The advance service library provides controllers that can be configured
>> to manage the abstractions for layer 4 through 7 services.
>>
>> Mechanics of the split:
>> - Both repositories are members of the same program, so the specs
>> repository would continue to be shared during the Kilo cycle.  The PTL and
>> the drivers team will retain approval responsibilities they now share.
>> - The split would occur around Kilo-1 (subject to coordination of the
>> Infra and Networking teams). The timing is designed to enable the proposed
>> REST changes to land around the time of the December development sprint.
>> - The core team for each repository will be determined and proposed by
>> Kyle Mestery for approval by the current core team.
>> - The Neutron Server and the Neutron Adv Services Library would be
>> co-gated to ensure that incompatibilities are not introduced.
>> - The Advance Service Library would be an optional dependency of Neutron,
>> so integrated cross-project checks would not be required to enable it during
>> testing.
>> - The split should not adversely impact operators and the Networking
>> program should maintain standard OpenStack compatibility and deprecation
>> cycles.
>>
>> This proposal to divide into two repositories achieved a strong consensus
>> at the recent Paris Design Summit and it does not conflict with the current
>> governance model or any proposals circulating as part of the ‘Big Tent’
>> discussion.
>>
>> Kyle and mark
>>
>> [1]
>> https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list

Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Fox, Kevin M
Would net booting a minimal discovery image work? You usually can dump ipmi 
network information from the host.
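
For what it's worth, a minimal sketch of the in-band dump this refers to
(the channel number and wrapper are assumptions; it needs the IPMI kernel
modules loaded on the host running the discovery image):

    # sketch: run on the booted discovery image itself
    import subprocess

    # 'ipmitool lan print 1' prints the BMC's IP address, netmask and gateway;
    # the channel number (here 1) varies between hardware vendors.
    bmc_lan_info = subprocess.check_output(['ipmitool', 'lan', 'print', '1'])
    print(bmc_lan_info)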

Thanks,
Kevin

From: Matthew Mosesohn [mmoses...@mirantis.com]
Sent: Wednesday, November 19, 2014 7:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bogdan Dobrelya
Subject: Re: [openstack-dev] [Fuel] Power management in Cobbler

Tomasz, Vladimir, others,

The way I see it is we need a way to discover the corresponding IPMI
address for a given node for out-of-band power management. The
ultimate ipmitool command is going to be exactly the same whether it
comes from Cobbler or Ironic, and all we need to do is feed
information to the appropriate utility when it comes to power
management. If it's the same command, it doesn't matter who does it.
Ironic of course is a better option, but I'm not sure where we are
with discovering ipmi IP addresses or prompting admins to enter this
data for every node. Without this step, neither Cobbler nor Ironic is
capable of handling this task.

Best Regards,
Matthew Mosesohn

On Wed, Nov 19, 2014 at 7:38 PM, Tomasz Napierala
 wrote:
>
>> On 19 Nov 2014, at 16:10, Vladimir Kozhukalov  
>> wrote:
>>
>> I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
>> much more open for adopting new features (at least if they are implemented 
>> in terms of Ironic drivers). Currently, it looks like we are  probably able 
>> to deliver zero step Fuel Ironic driver by 6.1. Ironic already has working 
>> IPMI stuff and they don't oppose ssh based power management any more. 
>> Personally, I'd prefer to focus our efforts towards  Ironic stuff and 
>> keeping in mind that Cobbler will be removed in the nearest future.
>
> I know that due to time constraints we would be better to go with Cobbler, 
> but I also think we should be closer to the community and switch to Ironic as 
> soon as possible.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-19 Thread Fox, Kevin M
Perhaps they are there to support older browsers?

Thanks,
Kevin

From: Matthias Runge [mru...@redhat.com]
Sent: Wednesday, November 19, 2014 12:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Horizon] the future of angularjs development in 
Horizon

On 18/11/14 14:48, Thomas Goirand wrote:

>
> And then, does selenium continues to work for testing Horizon? If so,
> then the solution could be to send the .dll and .xpi files in non-free,
> and remove them from Selenium in main.
>
Yes, it still works; that leaves the question, why they are included in
the tarball at all.

In Fedora, we do not distribute .dll or selenium xpi files with selenium
at all.

Matthias


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [oslo.db][nova] NovaObject.save() needs its own DB transaction

2014-11-19 Thread Matthew Booth
We currently have a pattern in Nova where all database code lives in
db/sqla/api.py[1]. Database transactions are only ever created or used
in this module. This was an explicit design decision:
https://blueprints.launchpad.net/nova/+spec/db-session-cleanup .

However, it presents a problem when we consider NovaObjects, and
dependencies between them. For example, take Instance.save(). An
Instance has relationships with several other object types, one of which
is InstanceInfoCache. Consider the following code, which is amongst what
happens in spawn():

instance = Instance.get_by_uuid(uuid)
instance.vm_state = vm_states.ACTIVE
instance.info_cache.network_info = new_nw_info
instance.save()

instance.save() does (simplified):
  self.info_cache.save()
  self._db_save()

Both of these saves happen in separate db transactions. This has at
least 2 undesirable effects:

1. A failure can result in an inconsistent database. i.e. info_cache
having been persisted, but instance.vm_state not having been persisted.

2. Even in the absence of a failure, an external reader can see the new
info_cache but the old instance.

This is one example, but there are lots. We might convince ourselves
that the impact of this particular case is limited, but there will be
others where it isn't. Confidently assuring ourselves of a limited
impact also requires a large amount of context which not many
maintainers will have. New features continue to add to the problem,
including numa topology and pci requests.

I don't think we can reasonably remove the cascading save() above due to
the deliberate design of objects. Objects don't correspond directly to
their datamodels, so save() does more work than just calling out to the
DB. We need a way to allow cascading object saves to happen within a
single DB transaction. This will mean:

1. A change will be persisted either entirely or not at all in the event
of a failure.

2. A reader will see either the whole change or none of it.

We are not talking about crossing an RPC boundary. The single database
transaction only makes sense within the context of a single RPC call.
This will always be the case when NovaObject.save() cascades to other
object saves.

Note that we also have a separate problem, which is that the DB api's
internal use of transactions is wildly inconsistent. A single db api
call can result in multiple concurrent db transactions from the same
thread, and all the deadlocks that implies. This needs to be fixed, but
it doesn't require changing our current assumption that DB transactions
live only within the DB api.

Note that there is this recently approved oslo.db spec to make
transactions more manageable:

https://review.openstack.org/#/c/125181/11/specs/kilo/make-enginefacade-a-facade.rst,cm

Again, while this will be a significant benefit to the DB api, it will
not solve the problem of cascading object saves without allowing
transaction management at the level of NovaObject.save(): we need to
allow something to call a db api with an existing session, and we need
to allow something to pass an existing db transaction to NovaObject.save().

An obvious precursor to that is removing N309 from hacking, which
specifically tests for db apis which accept a session argument. We then
need to consider how NovaObject.save() should manage and propagate db
transactions.

I think the following pattern would solve it:

@remotable
def save():
session = 
try:
r = self._save(session)
session.commit() (or reader/writer magic from oslo.db)
return r
except Exception:
session.rollback() (or reader/writer magic from oslo.db)
raise

@definitelynotremotable
def _save(session):
previous contents of save() move here
session is explicitly passed to db api calls
cascading saves call object._save(session)

Whether we wait for the oslo.db updates or not, we need something like
the above. We could implement this today by exposing
db.sqla.api.get_session().
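
To illustrate what the cascade could look like with a shared session (names
and helper signatures here are assumptions, not the actual Nova code; in
particular, passing session= to the db api is exactly the part N309 forbids
today):

    # sketch only, assuming get_session() is exposed and db api calls accept session=
    from nova.db.sqlalchemy import api as db_api
    from nova.objects import base

    class Instance(base.NovaObject):
        def save(self):
            session = db_api.get_session()
            with session.begin():               # one transaction for the whole cascade
                self._save(session)             # commits on success, rolls back on error

        def _save(self, session):
            if 'info_cache' in self.obj_what_changed():
                self.info_cache._save(session)  # cascaded save reuses the same session
            db_api.instance_update(self._context, self.uuid,
                                   self.obj_get_changes(), session=session)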

Thoughts?

Matt

[1] At a slight tangent, this looks like an artifact of some premature
generalisation a few years ago. It seems unlikely that anybody is going
to rewrite the db api using an ORM other than sqlalchemy, so we should
probably ditch it and promote it to db/api.py.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally]Rally created tenant and users

2014-11-19 Thread Ajay Kalambur (akalambu)
Hi
Is there a way to specify that the Rally-created tenants and users are created 
with admin privileges?
Currently they are created using the member role, and hence some admin operations 
are not allowed.
I want to specify that the accounts created have admin access.
Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Edgar Magana
Nova People,

In Havana the behavior of this issue was different: basically, the VM was 
successfully created, and after trying multiple times to get an IP its state was 
changed to ERROR.
This behavior is different in Juno (I am currently testing Icehouse), but I am not 
able to find the bug fix.
As an operator this is really important; can anyone help me find the fix 
that I just described above?

Thanks,

Edgar

From: Vishvananda Ishaya <vishvana...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 18, 2014 at 3:27 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network 
is running out of IP addresses

It looks like this has not been reported so a bug would be great. It looks like 
it might be as easy as adding the NoMoreFixedIps exception to the list where 
FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana 
<edgar.mag...@workday.com> wrote:

Hello Community,

When a network subnet runs out of IP addresses a request to create a VM on that 
network fails with the Error message: "No valid host was found. There are not 
enough hosts available."
In the nova logs the error message is: NoMoreFixedIps: No fixed IP addresses 
available for network:

Obviously, this is not the desirable behavior, is there any work in progress to 
change it or I should open a bug to properly propagate the right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] WSME 0.6.2 released

2014-11-19 Thread Doug Hellmann
Version 0.6.2 was incorrectly configured to build universal wheels, leading to 
installation errors under Python 3 because of a dependency on ipaddr that only 
works on Python 2 (there is a version of the module already in the standard 
library for Python 3).

To fix this, fungi removed the bad wheel from PyPI and our mirror. I just 
tagged version 0.6.3 with the wheel build settings changed so that the wheels 
are only built for Python 2. This release was just uploaded to PyPI and should 
hit the mirror fairly soon.

Sorry for the issues,
Doug

On Nov 18, 2014, at 10:01 AM, Doug Hellmann  wrote:

> The WSME development team has released version 0.6.2, which includes several 
> bug fixes.
> 
> $ git log --oneline --no-merges 0.6.1..0.6.2
> 2bb9362 Fix passing Dict/Array based UserType as params
> ea9f71d Document next version changes
> 4e68f96 Allow non-auto-registered complex type
> 292c556 Make the flask adapter working with flask.ext.restful
> c833702 Avoid Sphinx 1.3x in the tests
> 6cb0180 Doc: status= -> status_code=
> 4441ca7 Minor documentation edits
> 2c29787 Fix tox configuration.
> 26a6acd Add support for manually specifying supported content types in 
> @wsmeexpose.
> 7cee58b Fix broken sphinx tests.
> baa816c fix errors/warnings in tests
> 2e1863d Use APIPATH_MAXLEN from the right module
> 
> Please report issues through launchpad https://launchpad.net/wsme or the 
> #wsme channel on IRC.
> 
> Doug
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 9:51 AM, Sylvain Bauza  wrote:

> 
> Le 19/11/2014 15:06, Doug Hellmann a écrit :
>> On Nov 19, 2014, at 8:33 AM, Sylvain Bauza  wrote:
>> 
>>> Le 18/11/2014 20:05, Doug Hellmann a écrit :
 On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell 
  wrote:
 
> On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
>> I’ve spent a bit of time thinking about the resource ownership issue.
>> The challenge there is we don’t currently have any libraries that
>> define tables in the schema of an application. I think that’s a good
>> pattern to maintain, since it avoids introducing a lot of tricky
>> issues like how to manage migrations for the library, how to ensure
>> they are run by the application, etc. The fact that this common quota
>> thing needs to store some data in a schema that it controls says to me
>> that it is really an app and not a library. Making the quota manager
>> an app solves the API definition issue, too, since we can describe a
>> generic way to configure quotas and other applications can then use
>> that API to define specific rules using the quota manager’s API.
>> 
>> I don’t know if we need a new application or if it would make sense
>> to, as with policy, add quota management features to keystone. A
>> single well-defined app has some appeal, but there’s also a certain
>> amount of extra ramp-up time needed to go that route that we wouldn’t
>> need if we added the features directly to keystone.
> I'll also point out that it was largely because of the storage needs
> that I chose to propose Boson[1] as a separate app, rather than as a
> library.  Further, the dimensions over which quota-covered resources
> needed to be tracked seemed to me to be complicated enough that it would
> be better to define a new app and make it support that one domain well,
> which is why I didn't propose it as something to add to Keystone.
> Consider: nova has quotas that are applied by user, other quotas that
> are applied by tenant, and even some quotas on what could be considered
> sub-resources—a limit on the number of security group rules per security
> group, for instance.
> 
> My current feeling is that, if we can figure out a way to make the quota
> problem into an acceptable library, that will work; it would probably
> have to maintain its own database separate from the client app and have
> features for automatically managing the schema, since we couldn't
> necessarily rely on the client app to invoke the proper juju there.  If,
> on the other hand, that ends up failing, then the best route is probably
> to begin by developing a separate app, like Boson, as a PoC; then, after
> we have some idea of just how difficult it is to actually solve the
> problem, we can evaluate whether it makes sense to actually fold it into
> a service like Keystone, or whether it should stand on its own.
> 
> (Personally, I think Boson should be created and should stand on its
> own, but I also envision using it for purposes outside of OpenStack…)
 Thanks for mentioning Boson again. I’m embarrassed that I completely 
 forgot about the fact that you mentioned this at the summit.
 
 I’ll have to look at the proposal more closely before I comment in any 
 detail, but I take it as a good sign that we’re coming back around to the 
 idea of solving this with an app instead of a library.
>>> I assume I'm really late in the thread so I can just sit and give +1 to 
>>> this direction : IMHO, quotas need to managed thanks to a CRUD interface 
>>> which implies to get an app, as it sounds unreasonable to extend each 
>>> consumer app API.
>>> 
>>> That said, back to Blazar, I just would like to emphasize that Blazar is 
>>> not trying to address the quota enforcement level, but rather provide a 
>>> centralized endpoint for managing reservations.
>>> Consequently, Blazar can also be considered as a consumer of this quota 
>>> system, whatever it's in a library or on a separate REST API.
>>> 
>>> Last thing, I don't think that a quota application necessarly means that 
>>> quotas enforcement should be managed thanks to external calls to this app. 
>>> I can rather see an external system able to set for each project a local 
>>> view of what should be enforced locally. If operators don't want to deploy 
>>> that quota management project, it's up to them to address the hetergenous 
>>> setups for each project.
>> I’m not sure what this means. You want the new service to be optional? How 
>> would apps written against the service find and manage quota data if the 
>> service isn’t there?
> 
> My bad. Let me rephrase it. I'm seeing this service as providing added value 
> for managing quotas by ensuring consistency across all projects. But as I 
> said, I'm also thinking that the quota enforcement has still to be done at 

Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Matthew Mosesohn
Tomasz, Vladimir, others,

The way I see it is we need a way to discover the corresponding IPMI
address for a given node for out-of-band power management. The
ultimate ipmitool command is going to be exactly the same whether it
comes from Cobbler or Ironic, and all we need to do is feed
information to the appropriate utility when it comes to power
management. If it's the same command, it doesn't matter who does it.
Ironic of course is a better option, but I'm not sure where we are
with discovering ipmi IP addresses or prompting admins to enter this
data for every node. Without this step, neither Cobbler nor Ironic is
capable of handling this task.
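
For illustration, a rough sketch of the kind of call either tool ends up
making; the helper name and credential handling are made up:

    # sketch only: both Cobbler and Ironic ultimately shell out to something like this
    import subprocess

    def set_power_state(address, username, password, state):
        # state is one of 'status', 'on', 'off', 'cycle'
        cmd = ['ipmitool', '-I', 'lanplus',
               '-H', address, '-U', username, '-P', password,
               'power', state]
        return subprocess.check_output(cmd)

The missing piece in either case is knowing which address and credentials to
pass for each node.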

Best Regards,
Matthew Mosesohn

On Wed, Nov 19, 2014 at 7:38 PM, Tomasz Napierala
 wrote:
>
>> On 19 Nov 2014, at 16:10, Vladimir Kozhukalov  
>> wrote:
>>
>> I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
>> much more open for adopting new features (at least if they are implemented 
>> in terms of Ironic drivers). Currently, it looks like we are  probably able 
>> to deliver zero step Fuel Ironic driver by 6.1. Ironic already has working 
>> IPMI stuff and they don't oppose ssh based power management any more. 
>> Personally, I'd prefer to focus our efforts towards  Ironic stuff and 
>> keeping in mind that Cobbler will be removed in the nearest future.
>
> I know that due to time constraints we would be better to go with Cobbler, 
> but I also think we should be closer to the community and switch to Ironic as 
> soon as possible.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] We lost some commits during upstream puppet manifests merge

2014-11-19 Thread Vladimir Kuklin
Fuelers

I am writing because we had a really sad incident: we noticed that after we
merged the upstream keystone module we lost modifications (Change-Id:
Idfe4b54caa0d96a93e93bfff12d8b6216f83e2f1)
for the memcached dogpile driver, which are crucial for us. And here I can see 2
problems:

1) how can we ensure that we did not lose anything else?
2) how can we ensure that this will never happen again?

Sadly, it seems that the first question implies rechecking all the
upstream merge/adaptation commits by hand to confirm that we did not lose
anything.

Regarding question number 2, we already have an established process for
upstream code merges:
http://docs.mirantis.com/fuel-dev/develop/module_structure.html#contributing-to-existing-fuel-library-modules.
It seems that this process had not been established when the keystone code was
reviewed. I see two ways forward here:

1) We should enforce the code review workflow and specifically say that
upstream merges can be accepted only after we have two +2s from core
reviewers, given after they recheck that the corresponding change does not
introduce any regressions.
2) We should speed up development of a modular testing framework that
will check that a corresponding change affects only particular pieces. It
seems much easier if we split deployment into stages (oh my, I am again
talking about the granular deployment feature) so that each particular commit
affects only one of the stages and we can see the difference and catch
regressions earlier.





-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Edgar Magana
Ok, I will open a bug and commit a patch!  :-)

Edgar

From: Vishvananda Ishaya <vishvana...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, November 18, 2014 at 3:27 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network 
is running out of IP addresses

It looks like this has not been reported so a bug would be great. It looks like 
it might be as easy as adding the NoMoreFixedIps exception to the list where 
FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana 
<edgar.mag...@workday.com> wrote:

Hello Community,

When a network subnet runs out of IP addresses a request to create a VM on that 
network fails with the Error message: "No valid host was found. There are not 
enough hosts available."
In the nova logs the error message is: NoMoreFixedIps: No fixed IP addresses 
available for network:

Obviously, this is not the desirable behavior, is there any work in progress to 
change it or I should open a bug to properly propagate the right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Nova][Neutron] Error message when Neutron network is running out of IP addresses

2014-11-19 Thread Matt Riedemann



On 11/18/2014 5:27 PM, Vishvananda Ishaya wrote:

It looks like this has not been reported so a bug would be great. It
looks like it might be as easy as adding the NoMoreFixedIps exception to
the list where FixedIpLimitExceeded is caught in nova/network/manager.py

Vish

On Nov 18, 2014, at 8:13 AM, Edgar Magana <edgar.mag...@workday.com> wrote:


Hello Community,

When a network subnet runs out of IP addresses a request to create a
VM on that network fails with the Error message: "No valid host was
found. There are not enough hosts available."
In the nova logs the error message is: NoMoreFixedIps: No fixed IP
addresses available for network:

Obviously, this is not the desirable behavior, is there any work in
progress to change it or I should open a bug to properly propagate the
right error message.

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Except this is neutron right?  In that case nova.network.neutronv2.api 
needs to translate the NeutronClientException to a NovaException and 
raise that back up so the compute manager can tell the scheduler it blew 
up in setting up networking.
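
Something along these lines, as a rough sketch (the helper is made up and the
exact neutronclient exception may differ; the real change belongs wherever the
port allocation happens in neutronv2):

    # sketch only: translate the client error into a NovaException
    from neutronclient.common import exceptions as neutron_client_exc
    from nova import exception

    def create_port_or_raise(client, port_req_body, network_id):
        try:
            return client.create_port(port_req_body)
        except neutron_client_exc.IpAddressGenerationFailureClient:
            # surface a meaningful error instead of a generic
            # "No valid host was found" from the scheduler
            raise exception.NoMoreFixedIps(net=network_id)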


When you open a bug, please provide a stacktrace so we know where you're 
hitting this in the neutronv2 API.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] v3 integrated tests taking a lot longer?

2014-11-19 Thread Jay Pipes

On 11/18/2014 06:48 PM, Matt Riedemann wrote:

I just started noticing today that the v3 integrated api samples tests
seem to be taking a lot longer than the other non-v3 integrated api
samples tests. On my 4 VCPU, 4 GB RAM VM some of those tests are taking
anywhere from 15-50+ seconds, while the non-v3 tests are taking less
than a second.

Has something changed recently in how the v3 API code is processed that
might have caused this?  With microversions or jsonschema validation
perhaps?

I was thinking it was oslo.db 1.1.0 at first since that was a recent
update but given the difference in times between v3 and non-v3 api
samples tests I'm thinking otherwise.


Heya,

I've been stung in the past by running either tox or run_tests.sh while 
active in a virtualenv. The speed goes from ~2 seconds per API sample 
test to ~15 seconds per API sample test...


Not sure if this is what is causing your problem, but worth a check.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Tomasz Napierala

> On 19 Nov 2014, at 16:10, Vladimir Kozhukalov  
> wrote:
> 
> I am absolutely -1 for using Cobbler for that. Lastly, Ironic guys became 
> much more open for adopting new features (at least if they are implemented in 
> terms of Ironic drivers). Currently, it looks like we are  probably able to 
> deliver zero step Fuel Ironic driver by 6.1. Ironic already has working IPMI 
> stuff and they don't oppose ssh based power management any more. Personally, 
> I'd prefer to focus our efforts towards  Ironic stuff and keeping in mind 
> that Cobbler will be removed in the nearest future. 

I know that due to time constraints we would be better off going with Cobbler, but 
I also think we should be closer to the community and switch to Ironic as soon 
as possible. 

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Telco] Telco Working Group meeting minutes (2014-11-19)

2014-11-19 Thread Steve Gordon
Hi all,

Please find minutes and logs for the Telco Working Group meeting held at 1400 
UTC on Wednesday the 19th of November at the following locations:

* Meeting ended Wed Nov 19 15:02:01 2014 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
* Minutes:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.html
* Minutes (text): 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.txt
* Log:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-11-19-14.02.log.html

Action items:

sgordon_ to investigate current state of storyboard and report back
sgordon_ to update wiki structure and provide a draft template for 
discussion
sgordon_ to kick off M/L discussion about template for use cases
jannis_rake-reve to create glossary page on wiki
sgordon_ to lock in meeting time for next week on monday barring further 
feedback
mkoderer to kick off mailing list thread to get wider feedback
sgordon_ to follow up on vlan trunking


Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][stable] Review request

2014-11-19 Thread Nikhil Komawar
This patch is a bit different as it is not a straightforward cherry-pick from 
the commit in master (that fixes the bug). So, it seemed like a good idea to 
start the discussion.

Nonetheless, my bad: it turned into a miscommunication where the email read as 
just a review request, while the intent was a (possible) 
discussion/clarification.
 
Thanks,
-Nikhil


From: Flavio Percoco [fla...@redhat.com]
Sent: Wednesday, November 19, 2014 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][stable] Review request

Please, abstain to send review requests to the mailing list

http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks,
Flavio

On 18/11/14 17:42 +, Kekane, Abhishek wrote:
>Hi All,
>
>Greetings!!!
>
>
>
>Can anyone please review this patch [1].
>
>It requires one more +2 to get merged in stable/juno.
>
>
>
>We want to use stable/juno in production environment and this patch will fix
>the blocker bug [1] for restrict download image feature.
>
>Please do the needful.
>
>
>
>[1] https://review.openstack.org/#/c/133858/
>
>[2] https://bugs.launchpad.net/glance/+bug/1387973
>
>
>
>
>
>Thank You in advance.
>
>
>
>Abhishek Kekane
>
>
>__
>Disclaimer: This email and any attachments are sent in strictest confidence
>for the sole use of the addressee and may contain legally privileged,
>confidential, and proprietary data. If you are not the intended recipient,
>please advise the sender by replying promptly to this email and then delete
>and destroy this email and any attachments without any further use, copying
>or forwarding.

>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Nikhil Komawar
Hi Maxim,

Thanks for showing interest in this aspect. Like nova-specs, Glance also needs 
a spec to be created for discussion related to the blueprint. 

Please try to create one here [1]. Additionally you may join us at the meeting 
[2] if you feel stuck or need clarifications.

[1] https://github.com/openstack/glance-specs
[2] https://wiki.openstack.org/wiki/Meetings#Glance_Team_meeting

Thanks,
-Nikhil


From: Maxim Nestratov [mnestra...@parallels.com]
Sent: Wednesday, November 19, 2014 8:27 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] Parallels loopback disk format support

Greetings,

In the scope of these changes [1], I would like to add a new image format
to Glance. For this purpose a blueprint [2] was created, and I would
really appreciate it if someone from the Glance team could review this
proposal.

[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] Power management in Cobbler

2014-11-19 Thread Vladimir Kozhukalov
I am absolutely -1 for using Cobbler for that. Lately, the Ironic guys have become
much more open to adopting new features (at least if they are implemented
in terms of Ironic drivers). Currently, it looks like we are probably able
to deliver a zero-step Fuel Ironic driver by 6.1. Ironic already has working
IPMI stuff and they don't oppose SSH-based power management any more.
Personally, I'd prefer to focus our efforts on Ironic, keeping in mind
that Cobbler will be removed in the near future.

Vladimir Kozhukalov

On Wed, Nov 5, 2014 at 7:28 PM, Vladimir Kuklin 
wrote:

> I am +1 for using cobbler as power management before we merge Ironic-based
> stuff. It is essential part also for our HA and stop
> provisioning/deployment mechanism.
>
> On Tue, Nov 4, 2014 at 1:00 PM, Dmitriy Shulyak 
> wrote:
>
>> Not long time ago we discussed necessity of power management feature in
>> Fuel.
>>
>> What is your opinion on power management support in Cobbler, i took a
>> look at documentation [1] and templates [2] that  we have right now.
>> And it actually looks like we can make use of it.
>>
>> The only issue is that power address that cobbler system is configured
>> with is wrong.
>> Because provisioning serializer uses one reported by boostrap, but it can
>> be easily fixed.
>>
>> Ofcourse another question is separate network for power management, but
>> we can leave with
>> admin for now.
>>
>> Please share your opinions on this matter. Thanks
>>
>> [1] http://www.cobblerd.org/manuals/2.6.0/4/5_-_Power_Management.html
>> [2] http://paste.openstack.org/show/129063/
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Sylvain Bauza


Le 19/11/2014 15:06, Doug Hellmann a écrit :

On Nov 19, 2014, at 8:33 AM, Sylvain Bauza  wrote:


Le 18/11/2014 20:05, Doug Hellmann a écrit :

On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell  
wrote:


On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:

I’ve spent a bit of time thinking about the resource ownership issue.
The challenge there is we don’t currently have any libraries that
define tables in the schema of an application. I think that’s a good
pattern to maintain, since it avoids introducing a lot of tricky
issues like how to manage migrations for the library, how to ensure
they are run by the application, etc. The fact that this common quota
thing needs to store some data in a schema that it controls says to me
that it is really an app and not a library. Making the quota manager
an app solves the API definition issue, too, since we can describe a
generic way to configure quotas and other applications can then use
that API to define specific rules using the quota manager’s API.

I don’t know if we need a new application or if it would make sense
to, as with policy, add quota management features to keystone. A
single well-defined app has some appeal, but there’s also a certain
amount of extra ramp-up time needed to go that route that we wouldn’t
need if we added the features directly to keystone.

I'll also point out that it was largely because of the storage needs
that I chose to propose Boson[1] as a separate app, rather than as a
library.  Further, the dimensions over which quota-covered resources
needed to be tracked seemed to me to be complicated enough that it would
be better to define a new app and make it support that one domain well,
which is why I didn't propose it as something to add to Keystone.
Consider: nova has quotas that are applied by user, other quotas that
are applied by tenant, and even some quotas on what could be considered
sub-resources—a limit on the number of security group rules per security
group, for instance.

My current feeling is that, if we can figure out a way to make the quota
problem into an acceptable library, that will work; it would probably
have to maintain its own database separate from the client app and have
features for automatically managing the schema, since we couldn't
necessarily rely on the client app to invoke the proper juju there.  If,
on the other hand, that ends up failing, then the best route is probably
to begin by developing a separate app, like Boson, as a PoC; then, after
we have some idea of just how difficult it is to actually solve the
problem, we can evaluate whether it makes sense to actually fold it into
a service like Keystone, or whether it should stand on its own.

(Personally, I think Boson should be created and should stand on its
own, but I also envision using it for purposes outside of OpenStack…)

Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
about the fact that you mentioned this at the summit.

I’ll have to look at the proposal more closely before I comment in any detail, 
but I take it as a good sign that we’re coming back around to the idea of 
solving this with an app instead of a library.

I assume I'm really late in the thread so I can just sit and give +1 to this 
direction : IMHO, quotas need to managed thanks to a CRUD interface which 
implies to get an app, as it sounds unreasonable to extend each consumer app 
API.

That said, back to Blazar, I just would like to emphasize that Blazar is not 
trying to address the quota enforcement level, but rather provide a centralized 
endpoint for managing reservations.
Consequently, Blazar can also be considered as a consumer of this quota system, 
whatever it's in a library or on a separate REST API.

Last thing, I don't think that a quota application necessarly means that quotas 
enforcement should be managed thanks to external calls to this app. I can 
rather see an external system able to set for each project a local view of what 
should be enforced locally. If operators don't want to deploy that quota 
management project, it's up to them to address the hetergenous setups for each 
project.

I’m not sure what this means. You want the new service to be optional? How 
would apps written against the service find and manage quota data if the 
service isn’t there?


My bad. Let me rephrase it. I'm seeing this service as providing added 
value for managing quotas by ensuring consistency across all projects. 
But as I said, I'm also thinking that the quota enforcement still has to 
be done at the customer project level.


So, I can imagine a client (or a facade if you prefer) providing quota 
resources to the customer app, which could be either fetched (through some 
caching) from the service, or taken directly from the existing quota DB.


In order to do that, I could imagine those steps :
 #1 : customer app makes use of oslo.quota for managing its own quota 
resources
 #2 : the external app provides a client able to either query the app 
or 

Re: [openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Russell Bryant
On 11/19/2014 09:18 AM, Doug Hellmann wrote:
> 
> On Nov 19, 2014, at 8:49 AM, Denis Makogon  wrote:
> 
>> Hello Stackers.
>>
>>
>>
>> When i was browsing through bugs of oslo.messaging [1] i found
>> one [2] pretty interesting (it’s old as universe), but it doesn’t seem
>> like a bug, mostly like a blueprint.
>> Digging into code of oslo.messaging i’ve found that, at least, for
>> now, there’s no way launch single service that would be able to handle
>> multiple versions (actually it can if manager implementation can
>> handle request for different RPC API versions).
>>
>> So, i’d like to understand if it’s still valid? And if it is
>> i’d like to collect use cases from all projects and see if
>> oslo.messaging can handle such case.
>> But, as first step to understanding multi-versioning/multi-managers
>> strategy for RPC services, i want to clarify few things. Current code
>> maps
>> single version to a list of RPC service endpoints implementation, so
>> here comes question:
>>
>> -Does a set of endpoints represent single RPC API version cap?
>>
>> If that’s it, how should we represent multi-versioning? If
>> we’d follow existing pattern: each RPC API version cap represents its
>> own set of endpoints,
>> let me provide some implementation details here, for now ‘endpoints’
>> is a list of classes for a single version cap, but if we’d support
>> multiple version
>> caps ‘endpoints’ would become a dictionary that contains pairs of
>> ‘version_cap’-’endpoints’. This type of multi-versioning seems to be
>> the easiest.
>>
>>
>> Thoughts/Suggestion?
> 
> The dispatcher [1] supports endpoints with versions, and searches for a
> compatible endpoint for incoming requests. I’ll go ahead and close the
> ticket. There are lots of others open and valid, so you might want to
> start looking at some that aren’t quite so old if you’re looking for
> something to contribute. Drop by #openstack-oslo on freenode if you want
> to chat about any of them.
> 
> Thanks!
> Doug
> 
> [1] 
> http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n153

In particular, each endpoint can have an associated namespace, which
effectively allows separate APIs to be separately versioned since a
request comes in and identifies the namespace it is targeting.

Services can also separate versions by just using multiple topics.
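
For illustration, a minimal sketch of one server exposing two versioned,
namespaced endpoints side by side (the topic, namespaces and class names are
all made up):

    # sketch only: the dispatcher picks the endpoint matching the caller's
    # namespace and a compatible version
    from oslo.config import cfg
    from oslo import messaging

    class ComputeAPIv1(object):
        target = messaging.Target(namespace='compute', version='1.5')

        def ping(self, ctxt, arg):
            return arg

    class ComputeAPIv2(object):
        target = messaging.Target(namespace='compute_v2', version='2.0')

        def ping(self, ctxt, arg):
            return {'pong': arg}

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='myservice', server='host-1')
    server = messaging.get_rpc_server(transport, target,
                                      [ComputeAPIv1(), ComputeAPIv2()],
                                      executor='blocking')
    server.start()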

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 8:49 AM, Denis Makogon  wrote:

> Hello Stackers.
> 
> 
> 
> When i was browsing through bugs of oslo.messaging [1] i found one 
> [2] pretty interesting (it’s old as universe), but it doesn’t seem like a 
> bug, mostly like a blueprint. 
> Digging into code of oslo.messaging i’ve found that, at least, for now, 
> there’s no way launch single service that would be able to handle 
> multiple versions (actually it can if manager implementation can handle 
> request for different RPC API versions).
> 
> So, i’d like to understand if it’s still valid? And if it is i’d like 
> to collect use cases from all projects and see if oslo.messaging can handle 
> such case.
> But, as first step to understanding multi-versioning/multi-managers strategy 
> for RPC services, i want to clarify few things. Current code maps 
> single version to a list of RPC service endpoints implementation, so here 
> comes question:
> 
> - Does a set of endpoints represent single RPC API version cap?
> 
> If that’s it, how should we represent multi-versioning? If we’d 
> follow existing pattern: each RPC API version cap represents its own set of 
> endpoints,
> let me provide some implementation details here, for now ‘endpoints’ is a 
> list of classes for a single version cap, but if we’d support multiple version
> caps ‘endpoints’ would become a dictionary that contains pairs of 
> ‘version_cap’-’endpoints’. This type of multi-versioning seems to be the 
> easiest.
> 
> 
> Thoughts/Suggestion?

The dispatcher [1] supports endpoints with versions, and searches for a 
compatible endpoint for incoming requests. I’ll go ahead and close the ticket. 
There are lots of others open and valid, so you might want to start looking at 
some that aren’t quite so old if you’re looking for something to contribute. 
Drop by #openstack-oslo on freenode if you want to chat about any of them.

Thanks!
Doug

[1] 
http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n153

> 
> 
> [1] https://bugs.launchpad.net/oslo.messaging
> [2] https://bugs.launchpad.net/oslo.messaging/+bug/1050374
> 
> 
> Kind regards,
> Denis Makogon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Doug Hellmann

On Nov 19, 2014, at 8:33 AM, Sylvain Bauza  wrote:

> 
> Le 18/11/2014 20:05, Doug Hellmann a écrit :
>> On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell 
>>  wrote:
>> 
>>> On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
 I’ve spent a bit of time thinking about the resource ownership issue.
 The challenge there is we don’t currently have any libraries that
 define tables in the schema of an application. I think that’s a good
 pattern to maintain, since it avoids introducing a lot of tricky
 issues like how to manage migrations for the library, how to ensure
 they are run by the application, etc. The fact that this common quota
 thing needs to store some data in a schema that it controls says to me
 that it is really an app and not a library. Making the quota manager
 an app solves the API definition issue, too, since we can describe a
 generic way to configure quotas and other applications can then use
 that API to define specific rules using the quota manager’s API.
 
 I don’t know if we need a new application or if it would make sense
 to, as with policy, add quota management features to keystone. A
 single well-defined app has some appeal, but there’s also a certain
 amount of extra ramp-up time needed to go that route that we wouldn’t
 need if we added the features directly to keystone.
>>> I'll also point out that it was largely because of the storage needs
>>> that I chose to propose Boson[1] as a separate app, rather than as a
>>> library.  Further, the dimensions over which quota-covered resources
>>> needed to be tracked seemed to me to be complicated enough that it would
>>> be better to define a new app and make it support that one domain well,
>>> which is why I didn't propose it as something to add to Keystone.
>>> Consider: nova has quotas that are applied by user, other quotas that
>>> are applied by tenant, and even some quotas on what could be considered
>>> sub-resources—a limit on the number of security group rules per security
>>> group, for instance.
>>> 
>>> My current feeling is that, if we can figure out a way to make the quota
>>> problem into an acceptable library, that will work; it would probably
>>> have to maintain its own database separate from the client app and have
>>> features for automatically managing the schema, since we couldn't
>>> necessarily rely on the client app to invoke the proper juju there.  If,
>>> on the other hand, that ends up failing, then the best route is probably
>>> to begin by developing a separate app, like Boson, as a PoC; then, after
>>> we have some idea of just how difficult it is to actually solve the
>>> problem, we can evaluate whether it makes sense to actually fold it into
>>> a service like Keystone, or whether it should stand on its own.
>>> 
>>> (Personally, I think Boson should be created and should stand on its
>>> own, but I also envision using it for purposes outside of OpenStack…)
>> Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
>> about the fact that you mentioned this at the summit.
>> 
>> I’ll have to look at the proposal more closely before I comment in any 
>> detail, but I take it as a good sign that we’re coming back around to the 
>> idea of solving this with an app instead of a library.
> 
> I assume I'm really late in the thread so I can just sit and give +1 to this 
> direction : IMHO, quotas need to managed thanks to a CRUD interface which 
> implies to get an app, as it sounds unreasonable to extend each consumer app 
> API.
> 
> That said, back to Blazar, I just would like to emphasize that Blazar is not 
> trying to address the quota enforcement level, but rather provide a 
> centralized endpoint for managing reservations.
> Consequently, Blazar can also be considered as a consumer of this quota 
> system, whatever it's in a library or on a separate REST API.
> 
> Last thing, I don't think that a quota application necessarly means that 
> quotas enforcement should be managed thanks to external calls to this app. I 
> can rather see an external system able to set for each project a local view 
> of what should be enforced locally. If operators don't want to deploy that 
> quota management project, it's up to them to address the hetergenous setups 
> for each project.

I’m not sure what this means. You want the new service to be optional? How 
would apps written against the service find and manage quota data if the 
service isn’t there?

Doug

> 
> My 2 cts (too),
> -Sylvain
> 
>> Doug
>> 
>>> Just my $.02…
>>> 
>>> [1] https://wiki.openstack.org/wiki/Boson
>>> -- 
>>> Kevin L. Mitchell 
>>> Rackspace
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack

Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Endre Karlson
All I can say at the moment is that usage and quota management is a crappy
thing to do in OpenStack. Every service has its own way of doing it, both
in the clients and the APIs. +n for making an effort to standardize this
in a way that is consistent across projects.

2014-11-19 14:33 GMT+01:00 Sylvain Bauza :

>
> Le 18/11/2014 20:05, Doug Hellmann a écrit :
>
>  On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell <
>> kevin.mitch...@rackspace.com> wrote:
>>
>>  On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:
>>>
 I’ve spent a bit of time thinking about the resource ownership issue.
 The challenge there is we don’t currently have any libraries that
 define tables in the schema of an application. I think that’s a good
 pattern to maintain, since it avoids introducing a lot of tricky
 issues like how to manage migrations for the library, how to ensure
 they are run by the application, etc. The fact that this common quota
 thing needs to store some data in a schema that it controls says to me
 that it is really an app and not a library. Making the quota manager
 an app solves the API definition issue, too, since we can describe a
 generic way to configure quotas and other applications can then use
 that API to define specific rules using the quota manager’s API.

 I don’t know if we need a new application or if it would make sense
 to, as with policy, add quota management features to keystone. A
 single well-defined app has some appeal, but there’s also a certain
 amount of extra ramp-up time needed to go that route that we wouldn’t
 need if we added the features directly to keystone.

>>> I'll also point out that it was largely because of the storage needs
>>> that I chose to propose Boson[1] as a separate app, rather than as a
>>> library.  Further, the dimensions over which quota-covered resources
>>> needed to be tracked seemed to me to be complicated enough that it would
>>> be better to define a new app and make it support that one domain well,
>>> which is why I didn't propose it as something to add to Keystone.
>>> Consider: nova has quotas that are applied by user, other quotas that
>>> are applied by tenant, and even some quotas on what could be considered
>>> sub-resources—a limit on the number of security group rules per security
>>> group, for instance.
>>>
>>> My current feeling is that, if we can figure out a way to make the quota
>>> problem into an acceptable library, that will work; it would probably
>>> have to maintain its own database separate from the client app and have
>>> features for automatically managing the schema, since we couldn't
>>> necessarily rely on the client app to invoke the proper juju there.  If,
>>> on the other hand, that ends up failing, then the best route is probably
>>> to begin by developing a separate app, like Boson, as a PoC; then, after
>>> we have some idea of just how difficult it is to actually solve the
>>> problem, we can evaluate whether it makes sense to actually fold it into
>>> a service like Keystone, or whether it should stand on its own.
>>>
>>> (Personally, I think Boson should be created and should stand on its
>>> own, but I also envision using it for purposes outside of OpenStack…)
>>>
>> Thanks for mentioning Boson again. I’m embarrassed that I completely
>> forgot about the fact that you mentioned this at the summit.
>>
>> I’ll have to look at the proposal more closely before I comment in any
>> detail, but I take it as a good sign that we’re coming back around to the
>> idea of solving this with an app instead of a library.
>>
>
> I assume I'm really late in the thread, so I can just sit back and give +1
> to this direction: IMHO, quotas need to be managed via a CRUD interface,
> which implies an app, as it sounds unreasonable to extend each consumer
> app's API.
>
> That said, back to Blazar, I would just like to emphasize that Blazar is
> not trying to address the quota enforcement level, but rather to provide a
> centralized endpoint for managing reservations.
> Consequently, Blazar can also be considered a consumer of this quota
> system, whether it lives in a library or behind a separate REST API.
>
> One last thing: I don't think a quota application necessarily means that
> quota enforcement has to be driven by external calls to this app.
> I can rather see an external system able to push to each project a local
> view of what should be enforced locally. If operators don't want to deploy
> that quota management project, it's up to them to address the heterogeneous
> setups for each project.
>
> My 2 cts (too),
> -Sylvain
>
>
>  Doug
>>
>>  Just my $.02…
>>>
>>> [1] https://wiki.openstack.org/wiki/Boson
>>> --
>>> Kevin L. Mitchell 
>>> Rackspace
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> 

Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-19 Thread Doug Hellmann

On Nov 18, 2014, at 6:11 PM, Sachi King  wrote:

> On Wednesday, November 12, 2014 02:06:02 PM Doug Hellmann wrote:
>> During our “Graduation Schedule” summit session we worked through the list 
>> of modules remaining the in the incubator. Our notes are in the etherpad 
>> [1], but as part of the "Write it Down” theme for Oslo this cycle I am also 
>> posting a summary of the outcome here on the mailing list for wider 
>> distribution. Let me know if you remembered the outcome for any of these 
>> modules differently than what I have written below.
>> 
>> Doug
>> 
>> 
>> 
>> Deleted or deprecated modules:
>> 
>> funcutils.py - This was present only for python 2.6 support, but it is no 
>> longer used in the applications. We are keeping it in the stable/juno branch 
>> of the incubator, and removing it from master 
>> (https://review.openstack.org/130092)
>> 
>> hooks.py - This is not being used anywhere, so we are removing it. 
>> (https://review.openstack.org/#/c/125781/)
>> 
>> quota.py - A new quota management system is being created 
>> (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and should 
>> replace this, so we will keep it in the incubator for now but deprecate it.
>> 
>> crypto/utils.py - We agreed to mark this as deprecated and encourage the use 
>> of Barbican or cryptography.py (https://review.openstack.org/134020)
>> 
>> cache/ - Morgan is going to be working on a new oslo.cache library as a 
>> front-end for dogpile, so this is also deprecated 
>> (https://review.openstack.org/134021)
>> 
>> apiclient/ - With the SDK project picking up steam, we felt it was safe to 
>> deprecate this code as well (https://review.openstack.org/134024).
>> 
>> xmlutils.py - This module was used to provide a security fix for some XML 
>> modules that have since been updated directly. It was removed. 
>> (https://review.openstack.org/#/c/125021/)
>> 
>> 
>> 
>> Graduating:
>> 
>> oslo.context:
>> - Dims is driving this
>> - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
>> - includes:
>>  context.py
>> 
>> oslo.service:
> 
> During the "Oslo graduation schedule" meet up someone was mentioning they'd 
> be willing to help out as a contact for questions during this process.
> Can anyone put me in contact with that person or remember who he was?

I don’t know if it was me, but I’ll volunteer now. :-)

dhellmann on freenode, or this email address, are the best way to reach me. I’m 
in the US Eastern time zone.

Doug

> 
>> - Sachi is driving this
>> - https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
>> - includes:
>>  eventlet_backdoor.py
>>  loopingcall.py
>>  periodic_task.py
>>  request_utils.py
>>  service.py
>>  sslutils.py
>>  systemd.py
>>  threadgroup.py
>> 
>> oslo.utils:
>> - We need to look into how to preserve the git history as we import these 
>> modules.
>> - includes:
>>  fileutils.py
>>  versionutils.py
>> 
>> 
>> 
>> Remaining untouched:
>> 
>> scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
>> whether Gantt has enough traction yet so we will hold onto these in the 
>> incubator for at least another cycle.
>> 
>> report/ - There’s interest in creating an oslo.reports library containing 
>> this code, but we haven’t had time to coordinate with Solly about doing that.
>> 
>> 
>> 
>> Other work:
>> 
>> We will continue the work on oslo.concurrency and oslo.log that we started 
>> during Juno.
>> 
>> [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Denis Makogon
Hello Stackers.




While browsing through the oslo.messaging bugs [1], I found one [2] that is
pretty interesting (it's as old as the universe), but it doesn't really seem
like a bug - more like a blueprint.

Digging into the oslo.messaging code, I've found that, at least for now,
there's no way to launch a single service that is able to handle multiple
versions (actually it can, if the manager implementation itself handles
requests for different RPC API versions).


So, I'd like to understand whether it's still valid. And if it is, I'd
like to collect use cases from all projects and see whether oslo.messaging
can handle such a case.

But as a first step towards understanding a multi-versioning/multi-manager
strategy for RPC services, I want to clarify a few things. The current code
maps a single version to a list of RPC service endpoint implementations, so
here comes the question:

- Does a set of endpoints represent a single RPC API version cap?

If so, how should we represent multi-versioning? If we follow the existing
pattern, each RPC API version cap would get its own set of endpoints. To give
some implementation detail: today 'endpoints' is a flat list of classes for a
single version cap, but to support multiple version caps, 'endpoints' would
become a dictionary mapping each 'version_cap' to its own 'endpoints' list.
This type of multi-versioning seems to be the easiest.
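
To make that concrete, here is a rough, hypothetical sketch (the endpoint
classes and their methods are invented for illustration, this is not real
project code) of how a single RPC server registers endpoints today, with a
note on where a per-version-cap dictionary would slot in:

# Hedged sketch only: shows the existing "flat list of endpoints" pattern and
# where a {'version_cap': endpoints} mapping could replace it.
from oslo.config import cfg
from oslo import messaging


class ComputeAPIV1(object):
    # The dispatcher compares the incoming request's namespace/version
    # against each endpoint's target to decide which one can handle it.
    target = messaging.Target(namespace='compute', version='1.5')

    def build(self, ctxt, name):
        return 'v1 build of %s' % name


class ComputeAPIV2(object):
    target = messaging.Target(namespace='compute', version='2.0')

    def build(self, ctxt, name, flavor=None):
        return 'v2 build of %s (flavor=%s)' % (name, flavor)


transport = messaging.get_transport(cfg.CONF)
server_target = messaging.Target(topic='compute', server='host-1')

# Today: one flat list of endpoint objects per service.
endpoints = [ComputeAPIV1(), ComputeAPIV2()]
# The proposal above would effectively turn this into something like
# {'1.5': [ComputeAPIV1()], '2.0': [ComputeAPIV2()]} keyed by version cap.

server = messaging.get_rpc_server(transport, server_target, endpoints,
                                  executor='blocking')
server.start()
# ... and eventually: server.stop(); server.wait()

Nothing else in the consuming service would need to change; only the
dispatcher's notion of what 'endpoints' means would.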


Thoughts/suggestions?


[1] https://bugs.launchpad.net/oslo.messaging


[2] https://bugs.launchpad.net/oslo.messaging/+bug/1050374


Kind regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-19 Thread Sylvain Bauza


Le 18/11/2014 20:05, Doug Hellmann a écrit :

On Nov 17, 2014, at 7:18 PM, Kevin L. Mitchell  
wrote:


On Mon, 2014-11-17 at 18:48 -0500, Doug Hellmann wrote:

I’ve spent a bit of time thinking about the resource ownership issue.
The challenge there is we don’t currently have any libraries that
define tables in the schema of an application. I think that’s a good
pattern to maintain, since it avoids introducing a lot of tricky
issues like how to manage migrations for the library, how to ensure
they are run by the application, etc. The fact that this common quota
thing needs to store some data in a schema that it controls says to me
that it is really an app and not a library. Making the quota manager
an app solves the API definition issue, too, since we can describe a
generic way to configure quotas and other applications can then use
that API to define specific rules using the quota manager’s API.

I don’t know if we need a new application or if it would make sense
to, as with policy, add quota management features to keystone. A
single well-defined app has some appeal, but there’s also a certain
amount of extra ramp-up time needed to go that route that we wouldn’t
need if we added the features directly to keystone.

I'll also point out that it was largely because of the storage needs
that I chose to propose Boson[1] as a separate app, rather than as a
library.  Further, the dimensions over which quota-covered resources
needed to be tracked seemed to me to be complicated enough that it would
be better to define a new app and make it support that one domain well,
which is why I didn't propose it as something to add to Keystone.
Consider: nova has quotas that are applied by user, other quotas that
are applied by tenant, and even some quotas on what could be considered
sub-resources—a limit on the number of security group rules per security
group, for instance.

My current feeling is that, if we can figure out a way to make the quota
problem into an acceptable library, that will work; it would probably
have to maintain its own database separate from the client app and have
features for automatically managing the schema, since we couldn't
necessarily rely on the client app to invoke the proper juju there.  If,
on the other hand, that ends up failing, then the best route is probably
to begin by developing a separate app, like Boson, as a PoC; then, after
we have some idea of just how difficult it is to actually solve the
problem, we can evaluate whether it makes sense to actually fold it into
a service like Keystone, or whether it should stand on its own.

(Personally, I think Boson should be created and should stand on its
own, but I also envision using it for purposes outside of OpenStack…)

Thanks for mentioning Boson again. I’m embarrassed that I completely forgot 
about the fact that you mentioned this at the summit.

I’ll have to look at the proposal more closely before I comment in any detail, 
but I take it as a good sign that we’re coming back around to the idea of 
solving this with an app instead of a library.


I assume I'm really late in the thread, so I can just sit back and give +1 to
this direction: IMHO, quotas need to be managed via a CRUD interface,
which implies an app, as it sounds unreasonable to extend each
consumer app's API.


That said, back to Blazar, I would just like to emphasize that Blazar is
not trying to address the quota enforcement level, but rather to provide a
centralized endpoint for managing reservations.
Consequently, Blazar can also be considered a consumer of this quota
system, whether it lives in a library or behind a separate REST API.


One last thing: I don't think a quota application necessarily means that
quota enforcement has to be driven by external calls to this app.
I can rather see an external system able to push to each project a
local view of what should be enforced locally. If operators don't want
to deploy that quota management project, it's up to them to address the
heterogeneous setups for each project.
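
To illustrate the split I have in mind, here is a very rough sketch (the
names are purely illustrative, loosely modelled on the reserve/commit pattern
nova's quota engine already uses, and not a proposed API): enforcement stays
a cheap local library call inside the consuming service, while the external
quota app would only be responsible for pushing the limits that the local
engine enforces.

# Rough sketch only - names are illustrative, not a real or proposed API.
# The point: the consuming service enforces locally (no extra round trip on
# the data path), while a central quota app merely sets the limits.

class OverQuota(Exception):
    pass


class LocalQuotaEngine(object):
    """Per-service quota enforcement against locally cached limits."""

    def __init__(self):
        self.limits = {}   # resource name -> limit, pushed by the quota app
        self.usage = {}    # resource name -> current usage

    def set_limits(self, limits):
        # Called when the external quota management app pushes a new view.
        self.limits.update(limits)

    def reserve(self, **deltas):
        for resource, delta in deltas.items():
            used = self.usage.get(resource, 0)
            limit = self.limits.get(resource, -1)   # -1 means unlimited
            if 0 <= limit < used + delta:
                raise OverQuota(resource)
        for resource, delta in deltas.items():
            self.usage[resource] = self.usage.get(resource, 0) + delta
        return deltas  # acts as the "reservation" to commit or roll back

    def rollback(self, reservation):
        for resource, delta in reservation.items():
            self.usage[resource] -= delta


QUOTAS = LocalQuotaEngine()
QUOTAS.set_limits({'instances': 10, 'cores': 20})   # pushed by the quota app
reservation = QUOTAS.reserve(instances=1, cores=2)  # enforced locally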


My 2 cts (too),
-Sylvain


Doug


Just my $.02…

[1] https://wiki.openstack.org/wiki/Boson
--
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Parallels loopback disk format support

2014-11-19 Thread Maxim Nestratov

Greetings,

In the scope of these changes [1], I would like to add a new image format
to glance. For this purpose a blueprint [2] was created, and I would really
appreciate it if someone from the glance team could review this proposal.


[1] https://review.openstack.org/#/c/111335/
[2] https://blueprints.launchpad.net/glance/+spec/pcs-support

Best,

Maxim Nestratov,
Lead Software Developer,
Parallels


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy

2014-11-19 Thread Henry Nash
Hi Adam,

So, a comprehensive write-up...although I'm not sure we have made the case for
why we need a complete rewrite of how policy is managed.  We seem to have
leapt into a solution without looking at other possible solutions to the
problems we are trying to solve.  Here's a start at an alternative approach:

Problem 1: The current services don't use the centralised policy store/fetch of 
keystone, meaning that a) policy file management is hard, and b) we can't 
support the policy-per-endpoint style of working.
Solution: Let's get the other services using it!  No code changes required in 
Keystone.  The fact that we haven't succeeded before just means we haven't 
tried hard enough.

Problem 2: Different domains want to be able to create their own "roles" which 
are more meaningful to their users...but our "roles" are global and are 
directly linked to the rules in the policy file - something only a cloud 
operator is going to want to own.
Solution: Have some kind of domain-scoped role-group (maybe just called 
"domain-roles"?) that a domain owner can define, and that maps to a set of 
underlying roles that a policy file understands (see: 
https://review.openstack.org/#/c/133855/). [As has been pointed out, what we 
are really doing with this is finally doing real RBAC, where what we call roles 
today are really capabilities and domain-roles are really just roles.]  As this 
evolves, cloud providers could slowly migrate to the position where each 
service API is effectively a role (i.e. a capability), and at the domain level 
there exists a mapping (the "abstraction that makes sense for the users of that 
domain") onto the underlying capabilities. No code changes...this just uses 
policy files as they are today (plus domain-roles) - and tokens as they are 
too. And I think that level of functionality would satisfy a lot of people. 
Eventually (as pointed out by samuelmz) the policy "file" could even simply 
become the definition of the service capabilities (and whether each capability 
is "open", "closed" or "is a role")...maybe just registered and stored in the 
service entity in the keystone DB (allowing dynamic service registration). My 
point being that we really didn't require much code change (nor really any 
conceptual changes) to get to this end point...and certainly no rewriting of 
policy/token formats etc.  [In reality, this last point would cause problems 
with token size (since a broad admin capability would need a lot of 
capabilities), so some kind of collection of capabilities would be required.]
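
As a purely illustrative sketch of that mapping (the structure and names below 
are my own invention for this mail, not the schema or API proposed in review 
133855), a domain-role is just a domain-owned alias that expands into the 
global roles/capabilities the policy files already understand:

# Purely illustrative sketch - not the schema or API in review 133855,
# just the idea of a domain-owned alias expanding to global roles.
domain_roles = {
    'domain-x': {
        'network_operator': ['network_admin', 'compute_observer'],
        'auditor': ['compute_observer', 'identity_observer'],
    },
}


def expand_roles(domain_id, assigned_domain_roles):
    """Expand domain-role assignments into the underlying global roles
    (really capabilities) that today's policy files understand."""
    roles = set()
    mapping = domain_roles.get(domain_id, {})
    for name in assigned_domain_roles:
        roles.update(mapping.get(name, [name]))
    return roles


# A user assigned the domain-scoped 'auditor' role ends up carrying the
# underlying capabilities in their token, with no policy file changes.
print(expand_roles('domain-x', ['auditor']))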

Problem 3: A cloud operator wants to be able to let resellers white-label 
her services (and they in turn may resell to others) - so she needs some kind of 
inheritance model so that service level agreements can be supported by policy 
(e.g. letting the reseller give the support expert from the cloud provider 
access to their projects).
Solution: We already have hierarchical inheritance in the works...so we 
would allow a reseller to assign roles to a user/group from the parent onto 
their own domain/project. Further, domain-roles are just another thing that can 
(optionally) be inherited and used in this fashion.

My point about all the above is that while what you have laid out is a 
great set of steps...I don't think we have conceptual agreement as to whether 
that path is the only way we could go to solve our problems.

Henry
On 18 Nov 2014, at 23:40, Adam Young  wrote:

> There is a lot of discussion about policy.  I've attempted to pull the 
> majority of the work into a single document that explains the process in a 
> step-by-step manner:
> 
> 
> http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/
> 
> It's really long, so I won't bother reposting the whole article here.  
> Instead, I will post the links to the topic on Gerrit.
> 
> https://review.openstack.org/#/q/topic:dynamic-policy,n,z
> 
> 
> There is one additional review worth noting:
> 
> https://review.openstack.org/#/c/133855/
> 
> Which is for "private groups of roles"  specific to a domain.  This is 
> related, but not part of the critical path for the things I wrote above.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Who maintains the iCal meeting data?

2014-11-19 Thread Thierry Carrez
Tony Breeds wrote:
> I was going to make an iCal feed for all the openstack meetings.
> 
> I discovered I was waaay behind the times and one exists and is linked from
> 
> https://wiki.openstack.org/wiki/Meetings
> 
> With some of the post Paris changes it's a little out of date.  I'm really
> happy to help maintain it if that's a thing someone can do ;P
> 
> So whom do I poke?
> Should that information be slightly more visible?

The iCal is currently maintained by Anne (annegentle) and myself. In
parallel, a small group is building a gerrit-powered agenda so that we
can describe meetings in YAML and check for conflicts automatically, and
build the ics automatically rather than manually.
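
To illustrate the idea (the YAML layout below is only a guess at what the
final schema might look like, not the actual one), the conflict check
essentially boils down to making sure no two meetings claim the same
day/time/channel slot:

# Illustration only: the real YAML schema is still being defined, so the
# field names here are assumptions.
import yaml

meetings = yaml.safe_load("""
- name: Ironic
  day: Monday
  time: '1700'
  channel: '#openstack-meeting-3'
- name: Oslo
  day: Monday
  time: '1700'
  channel: '#openstack-meeting-3'
""")

seen = {}
for meeting in meetings:
    slot = (meeting['day'], meeting['time'], meeting['channel'])
    if slot in seen:
        print('Conflict: %s clashes with %s (%s %s UTC in %s)' %
              (meeting['name'], seen[slot], slot[0], slot[1], slot[2]))
    else:
        seen[slot] = meeting['name']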

It will still take a few weeks before we can migrate to that, though, so in
the meantime, if you volunteer to keep the .ics up to date with changes to
the wiki page, that would be a great help! It's maintained as a Google
calendar; I can add you to the ACL there if you send me your Google email
address.

Regards,

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] request to review bug 1321617

2014-11-19 Thread Harshada Kakad
Hi All,

Could someone please review the bug
https://bugs.launchpad.net/tempest/+bug/1321617


The test cases related to quota usage (test_show_quota_usage,
test_cinder_quota_defaults and test_cinder_quota_show)
use the tenant name; ideally they should use the tenant id, since
the quota API requires the tenant's UUID and not its name.

Cinder quota-show requires a tenant_id to show quotas.
There is a bug in cinder (bug 1307475) where, if we pass a
non-existent tenant_id, cinder still shows the quota and
returns 200 (OK). Hence, these test cases should use
tenant_id and not tenant_name.
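
As a rough sketch of what the fix boils down to (shown here with
python-keystoneclient and python-cinderclient purely for illustration; the
credentials are placeholders and the real change lives in the tempest test
code itself):

# Sketch only: resolve the tenant name to its UUID first, then pass the UUID
# to the quota call, since quota-show does not validate tenant names.
from cinderclient.v2 import client as cinder_client
from keystoneclient.v2_0 import client as keystone_client

AUTH_URL = 'http://127.0.0.1:5000/v2.0'   # placeholder

keystone = keystone_client.Client(username='admin', password='secret',
                                  tenant_name='admin', auth_url=AUTH_URL)
cinder = cinder_client.Client('admin', 'secret', 'admin', AUTH_URL)

# Look up the UUID for the tenant the test is working with...
tenant = keystone.tenants.find(name='demo')

# ...and use tenant.id (the UUID), never the name, when querying quotas.
quota_set = cinder.quotas.get(tenant.id)
print(quota_set.volumes, quota_set.gigabytes, quota_set.snapshots)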


Here is the link for review : https://review.openstack.org/#/c/95087/


-- 
*Regards,*
*Harshada Kakad*
**
*Sr. Software Engineer*
*C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune – 411013,
India*
*Mobile-9689187388*
*Email-Id : harshada.ka...@izeltech.com *
*website : www.izeltech.com *

-- 
*Disclaimer*
The information contained in this e-mail and any attachment(s) to this 
message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information of Izel 
Technologies Pvt. Ltd. If you are not the intended recipient, you are 
notified that any review, use, any form of reproduction, dissemination, 
copying, disclosure, modification, distribution and/or publication of this 
e-mail message, contents or its attachment(s) is strictly prohibited and 
you are requested to notify us the same immediately by e-mail and delete 
this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for 
virus infected e-mail or errors or omissions or consequences which may 
arise as a result of this e-mail transmission.
*End of Disclaimer*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-19 Thread Paul Michali (pcm)
I like the definition.


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Nov 18, 2014, at 10:10 PM, Sumit Naiksatam  wrote:

> On Tue, Nov 18, 2014 at 4:44 PM, Mohammad Hanif  wrote:
>> I agree with Paul as advanced services go beyond just L4-L7.  Today, VPNaaS
>> deals with L3 connectivity but belongs in advanced services.  Where does
>> Edge-VPN work belong?  We need a broader definition for the advanced
>> services area.
>> 
> 
> So the following definition is being proposed to capture the broader
> context and complement Neutron's current mission statement:
> 
> To implement services and associated libraries that provide
> abstractions for advanced network functions beyond basic L2/L3
> connectivity and forwarding.
> 
> What do people think?
> 
>> Thanks,
>> —Hanif.
>> 
>> From: "Paul Michali (pcm)" 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Tuesday, November 18, 2014 at 4:08 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into
>> separate repositories
>> 
>> On Nov 18, 2014, at 6:36 PM, Armando M.  wrote:
>> 
>> Mark, Kyle,
>> 
>> What is the strategy for tracking the progress and all the details about
>> this initiative? Blueprint spec, wiki page, or something else?
>> 
>> One thing I personally found useful about the spec approach adopted in [1],
>> was that we could quickly and effectively incorporate community feedback;
>> having said that I am not sure that the same approach makes sense here,
>> hence the question.
>> 
>> Also, what happens for experimental efforts that are neither L2-3 nor L4-7
>> (e.g. TaaS or NFV related ones?), but they may still benefit from this
>> decomposition (as it promotes better separation of responsibilities)? Where
>> would they live? I am not sure we made any particular progress on the
>> incubator project idea that was floated a while back.
>> 
>> 
>> Would it make sense to define the advanced services repo as being for
>> services that are beyond basic connectivity and routing? For example, VPN
>> can be L2 and L3. Seems like restricting to L4-L7 may cause some confusion
>> as to what’s in and what’s out.
>> 
>> 
>> Regards,
>> 
>> PCM (Paul Michali)
>> 
>> MAIL …..…. p...@cisco.com
>> IRC ……..… pc_m (irc.freenode.com)
>> TW ………... @pmichali
>> GPG Key … 4525ECC253E31A83
>> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>> 
>> 
>> 
>> Cheers,
>> Armando
>> 
>> [1] https://review.openstack.org/#/c/134680/
>> 
>> On 18 November 2014 15:32, Doug Wiegley  wrote:
>>> 
>>> Hi,
>>> 
 so the specs repository would continue to be shared during the Kilo
 cycle.
>>> 
>>> One of the reasons to split is that these two teams have different
>>> priorities and velocities.  Wouldn’t that be easier to track/manage as
>>> separate launchpad projects and specs repos, irrespective of who is
>>> approving them?
>>> 
>>> Thanks,
>>> doug
>>> 
>>> 
>>> 
>>> On Nov 18, 2014, at 10:31 PM, Mark McClain  wrote:
>>> 
>>> All-
>>> 
>>> Over the last several months, the members of the Networking Program have
>>> been discussing ways to improve the management of our program.  When the
>>> Quantum project was initially launched, we envisioned a combined service
>>> that included all things network related.  This vision served us well in the
>>> early days as the team mostly focused on building out layers 2 and 3;
>>> however, we’ve run into growth challenges as the project started building
>>> out layers 4 through 7.  Initially, we thought that development would float
>>> across all layers of the networking stack, but the reality is that the
>>> development concentrates around either layer 2 and 3 or layers 4 through 7.
>>> In the last few cycles, we’ve also discovered that these concentrations have
>>> different velocities and a single core team forces one to match the other to
>>> the detriment of the one forced to slow down.
>>> 
>>> Going forward we want to divide the Neutron repository into two separate
>>> repositories lead by a common Networking PTL.  The current mission of the
>>> program will remain unchanged [1].  The split would be as follows:
>>> 
>>> Neutron (Layer 2 and 3)
>>> - Provides REST service and technology agnostic abstractions for layer 2
>>> and layer 3 services.
>>> 
>>> Neutron Advanced Services Library (Layers 4 through 7)
>>> - A python library which is co-released with Neutron
>>> - The advance service library provides controllers that can be configured
>>> to manage the abstractions for layer 4 through 7 services.
>>> 
>>> Mechanics of the split:
>>> - Both repositories are members of the same program, so the specs
>>> repository would continue to be shared during the Kilo cycle.  The PTL and
>>> the drivers team will retain approval responsibilities

Re: [openstack-dev] [barbican] Secret store API validation

2014-11-19 Thread Kelsey, Timothy John


On 18/11/2014 21:07, "Nathan Reller"  wrote:

>> It seems we need to add some validation to the process
>
>Yes, we are planning to add some validation checks in Kilo. I would
>submit a bug report for this.
>
>The big part of the issue is that we need to be clearer about the
>expected input types to the API as well as the SecretStores. This was
>a big topic of discussion at the summit. I hope to have a spec out
>soon that will address this issue.
>
>-Nate
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OK, I'll file a bug and look forward to reviewing your spec. Thanks Nate.


>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Cleaning up spec review queue.

2014-11-19 Thread Imre Farkas

On 11/19/2014 12:07 PM, Dmitry Tantsur wrote:

On 11/18/2014 06:13 PM, Chris K wrote:

Hi all,

In an effort to keep the Ironic specs review queue as up to date as
possible, I have identified several specs that were proposed in the Juno
cycle and have not been updated to reflect the changes to the current
Kilo cycle.

I would like to set a deadline to either update them to reflect the Kilo
cycle or abandon them if they are no longer relevant.
If there are no objections I will abandon any specs on the list below
that have not been updated to reflect the Kilo cycle after the end of
the next Ironic meeting (Nov. 24th 2014).

Below is the list of specs I have identified that would be affected:
https://review.openstack.org/#/c/107344 - *Generic Hardware Discovery
Bits*

Killed it with fire :D


https://review.openstack.org/#/c/102557 - *Driver for NetApp storage
arrays*
https://review.openstack.org/#/c/108324 - *DRAC hardware discovery*

Imre, are you going to work on it?


I think it's replaced by Lucas' proposal: 
https://review.openstack.org/#/c/125920

I will discuss it with him and abandon one of them.

Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

