Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Attila Fazekas
I agree with Jay.

The extension layer is also expensive in CPU usage,
and it also makes it more difficult to troubleshoot issues.
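For readers unfamiliar with the mechanism under discussion, here is a toy sketch (not actual Nova code; all names are hypothetical) of what microversion dispatch buys: the client's requested version selects the handler, so a new response field like "tags" needs no separate API extension.

```python
# Toy illustration of microversion dispatch. All names are hypothetical;
# Nova's real decorator lives in nova/api/openstack/wsgi.py.

_HANDLERS = []

def api_version(min_ver, max_ver=(999, 999)):
    """Register a handler for an inclusive microversion range."""
    def decorator(func):
        _HANDLERS.append((min_ver, max_ver, func))
        return func
    return decorator

def show(requested_version, server):
    """Dispatch to the handler whose version range matches the request."""
    for lo, hi, func in _HANDLERS:
        if lo <= requested_version <= hi:
            return func(server)
    raise ValueError("no handler for version %r" % (requested_version,))

@api_version((2, 1), (2, 3))
def _show_base(server):
    return {"server": {"id": server["id"]}}

@api_version((2, 4))
def _show_with_tags(server):
    # 2.4+ returns a plain "tags" key, not "os-server-tags:tags"
    body = {"server": {"id": server["id"]}}
    body["server"]["tags"] = sorted(server.get("tags", []))
    return body
```

A 2.3 client and a 2.4 client hit the same `show`, and only the 2.4+ response carries the extra attribute.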


- Original Message -
> From: "Jay Pipes" 
> To: "OpenStack Development Mailing List" , 
> "Sergey Nikitin"
> 
> Sent: Sunday, March 8, 2015 1:31:34 AM
> Subject: [openstack-dev] [nova][api] Microversions. And why do we need API 
> extensions for new API functionality?
> 
> Hi Stackers,
> 
> Now that microversions have been introduced to the Nova API (meaning we
> can now have novaclient request, say, version 2.3 of the Nova API using
> the special X-OpenStack-Nova-API-Version HTTP header), is there any good
> reason to require API extensions at all for *new* functionality?
> 
> Sergey Nikitin is currently in the process of code review for the final
> patch that adds server instance tagging to the Nova API:
> 
> https://review.openstack.org/#/c/128940
> 
> Unfortunately, for some reason I really don't understand, Sergey is
> being required to create an API extension called "os-server-tags" in
> order to add the server tag functionality to the API. The patch
> implements the 2.4 Nova API microversion, though, as you can see from
> this part of the patch:
> 
> https://review.openstack.org/#/c/128940/43/nova/api/openstack/compute/plugins/v3/server_tags.py
> 
> What is the point of creating a new "plugin"/API extension for this new
> functionality? Why can't we just modify the
> nova/api/openstack/compute/server.py Controller.show() method and
> decorate it with a 2.4 microversion that adds a "tags" attribute to the
> returned server dictionary?
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L369
> 
> Because we're using an API extension for this new server tags
> functionality, we are instead having the extension "extend" the server
> dictionary with an "os-server-tags:tags" key containing the list of
> string tags.
> 
> This is ugly and pointless. We don't need to use API extensions any more
> for this stuff.
> 
> A client knows that server tags are supported by the 2.4 API
> microversion. If the client requests the 2.4+ API, then we should just
> include the "tags" attribute in the server dictionary.
> 
> Similarly, new microversion API functionality should live in a module,
> as a top-level (or subcollection) Controller in
> /nova/api/openstack/compute/, and should not be in the
> /nova/api/openstack/compute/plugins/ directory. Why? Because it's not a
> plugin.
> 
> Why are we continuing to use these awkward, messy, and cumbersome API
> extensions?
> 
> Please, I am begging the Nova core team. Let us stop this madness. No
> more API extensions.
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-03-09 Thread harryxiyou
On Sat, Feb 28, 2015 at 7:52 AM, Jay Bryant
 wrote:
> Fyi ...
>
> This is something that Mike Perez was thinking about so you can ping thingee
> on irc if you can't find e0ne.

I will, thanks very much ;-)


Harry



Re: [openstack-dev] Ideas about Openstack Cinder for GSOC 2015

2015-03-09 Thread harryxiyou
On Sat, Feb 28, 2015 at 6:10 AM, Ivan Kolodyazhny  wrote:
> Hi Harry,

Hi e0ne,

>
> Please, ping me in IRC (e0ne) if you are still in Cinder as a part of GSoC.
>

I am sorry I replied a little late; I will contact you on IRC ;-)


Thanks, Harry



Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-09 Thread Igor Kalnitsky
Hi, guys,

Indeed, it's a hot topic, since it looks like there's no silver bullet
at all. As an OpenStack community we should move toward the oslo.db
approach, but it may require a hard effort on our side.

In the meantime, as part of bp/consume-external-ubuntu [1] I've prepared a
base class for testing migrations [2]. It's rough and could be
improved, but it works, and I ask all contributors to test their
migrations from now on. You can use this test case as an example [3].

[1]: https://blueprints.launchpad.net/fuel/+spec/consume-external-ubuntu
[2]: 
https://github.com/stackforge/fuel-web/blob/b1cb2f73c147c394fd6a7d91667f61859e6bc20a/nailgun/nailgun/test/base.py#L1125-L1146
[3]: 
https://github.com/stackforge/fuel-web/blob/b1cb2f73c147c394fd6a7d91667f61859e6bc20a/nailgun/nailgun/test/unit/test_migration_fuel_6_1.py#L23-L191

Thanks,
Igor

On Fri, Mar 6, 2015 at 5:15 PM, Roman Podoliaka  wrote:
> Hi all,
>
> You could take a look at how this is done in OpenStack projects [1][2]
>
> Most important parts:
> 1) use the same RDBMS you use in production
> 2) test migration scripts on data, not on empty schema
> 3) test corner cases (adding a NOT NULL column without a server side
> default value, etc)
> 4) do a separate migration scripts run with large data sets to make
> sure you don't introduce slow migrations [3]
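Point 2 above can be sketched minimally with stdlib sqlite3 (a real project would use oslo.db's test_migrations helpers against a production RDBMS; the MIGRATIONS list is purely illustrative):

```python
# Sketch: seed data right after the initial schema so every later
# migration runs against non-empty tables, not an empty schema.
import sqlite3

MIGRATIONS = [
    "CREATE TABLE servers (id INTEGER PRIMARY KEY, name TEXT)",
    # Adding a NOT NULL column: without the DEFAULT clause this fails
    # on populated tables, the corner case point 3 warns about.
    "ALTER TABLE servers ADD COLUMN tags TEXT NOT NULL DEFAULT ''",
]

def walk_migrations(conn):
    """Apply each migration in order, seeding data after the first."""
    for i, script in enumerate(MIGRATIONS):
        conn.execute(script)
        if i == 0:
            conn.execute("INSERT INTO servers (name) VALUES ('vm1')")
    return conn.execute("SELECT name, tags FROM servers").fetchall()
```

Running this against an empty schema would never catch the NOT NULL corner case, which is exactly why points 2 and 3 go together.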
>
> Thanks,
> Roman
>
> [1] 
> https://github.com/openstack/nova/blob/fb642be12ef4cd5ff9029d4dc71c7f5d5e50ce29/nova/tests/unit/db/test_migrations.py#L66-L833
> [2] 
> https://github.com/openstack/oslo.db/blob/0058c6510bfc6c41c830c38f3a30b5347a703478/oslo_db/sqlalchemy/test_migrations.py#L40-L273
> [3] 
> http://josh.people.rcbops.com/2013/12/third-party-testing-with-turbo-hipster/
>
> On Fri, Mar 6, 2015 at 4:50 PM, Nikolay Markov  wrote:
>> We already run unit tests using only real PostgreSQL. But this still doesn't
>> answer the question of how we should test migrations.
>>
>> On Fri, Mar 6, 2015 at 5:24 PM, Boris Bobrov  wrote:
>>>
>>> On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
>>> > Hi everybody,
>>> >
>>> > From time to time some bugs appear regarding failed database migrations
>>> > during upgrades, and we have a High-priority bug for 6.1 (
>>> > https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
>>> > process. I want to start a thread for discussing how we're going to do
>>> > it.
>>> >
>>> > I don't see any obvious solution, but we can at least start adding tests
>>> > together with any changes in migrations, which will use a number of
>>> > various
>>> > fake environments upgrading and downgrading DB.
>>> >
>>> > Any thoughts?
>>>
>>> In Keystone, adding unit tests and running them against in-memory SQLite
>>> was proven ineffective. The only solution we've come to is to run all
>>> db-related tests against real RDBMSes.
>>>
>>> --
>>> Best regards,
>>> Boris Bobrov
>>>
>>
>>
>>
>>
>> --
>> Best regards,
>> Nick Markov
>>
>>
>



[openstack-dev] IPAM reference driver status and other stuff

2015-03-09 Thread Salvatore Orlando
Aloha everybody!

This is another long email, so here's the summary for people with a
five-minute-or-less attention span:

The IPAM reference driver is almost ready [1]. Unfortunately, letting the
driver handle allocation pools required a few major changes, so the latest
patchset is rather big. I would like reviewers to advise on how they would
prefer to split it into multiple patches.

I also had to add an additional method to the subnet interface [2]

However, I am not sure we are doing the right thing with regard to the
introduction of the driver. I think we need to think about it more
thoroughly, and possibly introduce a concept of "allocator". Otherwise, if
we switch an existing deployment to the driver, it's very likely things
will just break.

So here's the detailed part.
There are only a few bits needed to complete the IPAM reference driver:
- A thorough unit test case for the newly introduced "subnet manager",
which provides the DB APIs.
- Some more unit tests for driver behaviour.
- Logic for handling allocation pool updates (which pretty much needs to be
copied over from the db base plugin with minor changes).
- Reworking the synchronisation between the IPAM object and the Neutron DB
object; the current one is very error-prone.

While dealing with management of allocation pools, I found out that it's
not convenient for the IPAM driver to rely on Neutron's DB allocation
pools. This is a bit of a chicken-and-egg problem, since:
A) Neutron needs to call the driver to define allocation pools
B) The driver needs neutron to define allocation pools before creating
availability ranges
C) Neutron cannot create allocation pools if the driver has not defined them
I therefore decided, reluctantly, to add more data duplication -
introducing IPAM tables for allocation pools and availability ranges. The
latter is meant to replace Neutron's one when the IPAM driver is enabled,
whereas the former is pure data duplication. There is also an association
table between Neutron subnets and IPAM objects for the same subnet; I
decided to do so to avoid further duplication.
I dislike this data separation; on the other hand the only viable
alternative would be to give the IPAM driver full control over neutron's
database tables for IP Allocation and Subnet allocation pools. While this
is feasible, I think it would be even worse to give code which can
potentially be 3rd party control over data structures used by the Neutron
API.

As a result, the patch is different and larger. I would like to split it to
make it simpler to review, but rather than just doing that of my own accord
I would like to ask reviewers how they would prefer to have it split. At
the end of the day I'm not the one who has to spend a lot of time reviewing
that code.

Nevertheless, I think I've realised our refactoring approach is somewhat
flawed: it might work for greenfield deployments, but I don't think
it will work for brownfield deployments. Also, switching between drivers
is pretty much impossible (but we were already aware of that and we agreed
to address this in the future).
The decorator approach currently taken in [3] allows either using the
driver or not - which means that if an operator decides to switch to the
IPAM driver, then all allocation data for existing subnets would be simply
lost, unless we provide a solution for migrating data. This is feasible and
rather trivial, but implies an additional management step.

An alternative solution would be to introduce a concept of "allocator" for
subnets. Such information should be stored in the database. It could point
to an IPAM driver or to nothing. In the latter case it would simply
instruct to use the "usual" baked-in IPAM code. This would allow us to make
the driver work on existing deployments, and pave the way for multiple
drivers. Indeed in this way, the new IPAM driver will be used for subnets
created after enabling it, whereas existing subnets will keep using the
existing logic. This should also make it safe for cases in which operators
revert the decision to use the IPAM driver. Additionally, administrative APIs
might be provided to migrate existing subnets to/from the driver. However,
when adopting solutions like this, it is important to take extra care to
ensure that the database does not start relying on the current
configuration; we therefore need to find a way to decouple the
allocator from the actual IPAM driver, which should be its concrete
realization.
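As a purely speculative sketch of the allocator idea (all names hypothetical), the lookup could key off what the subnet row stores, never the current configuration:

```python
# Each subnet row records which allocator created it, so enabling a new
# IPAM driver only affects subnets created afterwards, and reverting the
# configuration stays safe for existing subnets. Names are illustrative.

class BuiltinAllocator:
    """The 'usual' baked-in IPAM logic, used when the row stores None."""
    name = None

class DriverAllocator:
    """A pluggable IPAM driver registered under a name."""
    def __init__(self, name):
        self.name = name

_ALLOCATORS = {
    None: BuiltinAllocator(),
    "reference_ipam": DriverAllocator("reference_ipam"),
}

def allocator_for(subnet_row):
    """Resolve the allocator from the stored subnet row, not from config."""
    return _ALLOCATORS[subnet_row.get("allocator")]
```

With this shape, a pre-existing subnet (allocator NULL) keeps using the baked-in logic even after the reference driver is enabled.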

Any feedback is very appreciated as usual.

Salvatore

[1] https://review.openstack.org/#/c/150485/
[2] https://review.openstack.org/#/c/150485/6/neutron/ipam/driver.py
[3]
https://review.openstack.org/#/c/153236/15/neutron/db/db_base_plugin_v2.py


Re: [openstack-dev] [all][log] Log Working Group priorities

2015-03-09 Thread John Garbutt
Hi,

I would really like to help make these logging improvements a reality.

On 5 March 2015 at 12:13, Kuvaja, Erno  wrote:
> We had our first logging workgroup meeting [1] yesterday where we agreed 3
> main priorities for the group to focus on.

This time really doesn't work for me, I'm afraid; it clashes with a
regular commitment I have. But I'm happy to communicate asynchronously.

> 1)  Educating the community About the Logging Guidelines spec
> http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html

+1 to these, a great starting point.

> 2)  Cross project specs for Request IDs and Error codes
> a.   There are spec proposals in the Cinder tree [2] for Request IDs and
> in the Glance tree [3] for Error codes
> b.  The cross-project specs are being written on the basis of these
> specs, adjusted with the feedback and ideas collected from a wider
> audience at and since the Paris Summit
> c.   Links for the specs will be provided as soon as they are available
> for review

I really want to work on making sure all the calls Nova makes to other
openstack APIs can be traced in the logs of both services.

I am certainly happy to look at doing the work for this on the Nova
side (during Liberty), and if no one else is keen, possibly the
Glance/Neutron/Cinder side (possibly just WSGI middleware, to start
with).

My assumption is that we ensure the Nova request id is logged in
Glance/Neutron/Cinder and the Glance/Neutron/Cinder request id appears
in the Nova logs. The key point is that a single Nova request id spans
multiple calls to Glance and Neutron. Focusing on these should be the
simple case, as I think we can assume isolated APIs for the initial
implementation, thus sidestepping the trust issue and leaving that
as a parallel activity for the moment.
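A minimal sketch of such middleware in plain WSGI (stdlib only; the header names follow the common x-openstack-request-id convention, everything else here is hypothetical):

```python
# Log the caller's request id next to a locally generated one, and hand
# the local id back in a response header so the caller can log it too.
import uuid

def request_id_middleware(app, log):
    def middleware(environ, start_response):
        caller_id = environ.get("HTTP_X_OPENSTACK_REQUEST_ID", "-")
        local_id = "req-" + str(uuid.uuid4())
        # One line correlating both services' request ids
        log.append("caller=%s local=%s" % (caller_id, local_id))

        def start_response_with_id(status, headers):
            headers.append(("X-Openstack-Request-Id", local_id))
            return start_response(status, headers)

        return app(environ, start_response_with_id)
    return middleware
```

Wrapping each service's WSGI app this way gives exactly the two-sided trace described above: grep either log for a request id and you find the matching line in the other service.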

In addition to logs, I think we should also look at adding this
cross-service information into notifications, where that makes sense.
I will look at trying to put a cross project spec together for this
effort.

> 3)  Project Liaisons for Log Working Group [4]
> a.   Person helping us out to implement the work items in the project
> b.  No need to be core
> c.   Please, no fighting for the slots. We happily take all available
> hands onboard on this.

I am happy to help from a Nova point of view, but I certainly don't
want to step on others' toes here.

Thanks,
John



Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Sean Dague
On 03/07/2015 07:31 PM, Jay Pipes wrote:
> Hi Stackers,
> 
> Now that microversions have been introduced to the Nova API (meaning we
> can now have novaclient request, say, version 2.3 of the Nova API using
> the special X-OpenStack-Nova-API-Version HTTP header), is there any good
> reason to require API extensions at all for *new* functionality.
> 
> Sergey Nikitin is currently in the process of code review for the final
> patch that adds server instance tagging to the Nova API:
> 
> https://review.openstack.org/#/c/128940
> 
> Unfortunately, for some reason I really don't understand, Sergey is
> being required to create an API extension called "os-server-tags" in
> order to add the server tag functionality to the API. The patch
> implements the 2.4 Nova API microversion, though, as you can see from
> this part of the patch:
> 
> https://review.openstack.org/#/c/128940/43/nova/api/openstack/compute/plugins/v3/server_tags.py
> 
> 
> What is the point of creating a new "plugin"/API extension for this new
> functionality? Why can't we just modify the
> nova/api/openstack/compute/server.py Controller.show() method and
> decorate it with a 2.4 microversion that adds a "tags" attribute to the
> returned server dictionary?
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L369
> 
> 
> Because we're using an API extension for this new server tags
> functionality, we are instead having the extension "extend" the server
> dictionary with an "os-server-tags:tags" key containing the list of
> string tags.
> 
> This is ugly and pointless. We don't need to use API extensions any more
> for this stuff.
> 
> A client knows that server tags are supported by the 2.4 API
> microversion. If the client requests the 2.4+ API, then we should just
> include the "tags" attribute in the server dictionary.
> 
> Similarly, new microversion API functionality should live in a module,
> as a top-level (or subcollection) Controller in
> /nova/api/openstack/compute/, and should not be in the
> /nova/api/openstack/compute/plugins/ directory. Why? Because it's not a
> plugin.
> 
> Why are we continuing to use these awkward, messy, and cumbersome API
> extensions?
> 
> Please, I am begging the Nova core team. Let us stop this madness. No
> more API extensions.

Agreed; the current extensions list exists to explain the base v2
functionality. I think we should consider that frozen and deprecated as
of v2.1, as we now have a better way to express features.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming features (was: [nova] blueprint about multiple workers)

2015-03-09 Thread Attila Fazekas




- Original Message -
> From: "Mike Bayer" 
> To: "Attila Fazekas" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Friday, March 6, 2015 2:20:45 AM
> Subject: Re: [openstack-dev] [all] SQLAlchemy performance suite and upcoming 
> features (was: [nova] blueprint about
> multiple workers)
> 
> 
> 
> Attila Fazekas  wrote:
> 
> > I see lot of improvements,
> > but cPython is still cPython.
> > 
> > When you benchmarking query related things, please try to
> > get the actual data from the returned objects
> 
> that goes without saying. I’ve been benching SQLAlchemy and DBAPIs for many
> years. New performance improvements tend to be the priority for pretty much
> every major release.
> 
> > and try to do
> > something with data what is not expected to be optimized out even by
> > a smarter compiler.
> 
> Well I tend to favor breaking out the different elements into individual
> tests here, though I guess if you’re trying to trick a JIT then the more
> composed versions may be more relevant. For example, I could already tell
> you that the AttributeDict thing would perform terribly without having to
> mix it up with the DB access. __getattr__ is a poor performer (learned that
> in SQLAlchemy 0.1 about 9 years ago).
Equivalent things are also slower in Perl.
> 
> > Here is my play script and several numbers:
> > http://www.fpaste.org/193999/25585380/raw/
> > Is there any faster ORM way for the same op?
> 
> Absolutely, as I’ve been saying for months, all the way back from my wiki
> entry onward: query for individual columns, also skip the session.rollback()
> and do a close() instead (the transaction is still rolled back, we just skip
> the bookkeeping we don’t need).  You get the nice attribute access
> pattern too:

The script will probably be extended with explicit transaction management;
I agree my close/rollback usage is bad and ugly.
Also, thanks for the URL usage fix.

> 
> http://www.fpaste.org/194098/56040781/
> 
> def query_sqla_cols(self):
> "SQLAlchemy yield(100) named tuples"
> session = self.Session()
> start = time.time()
> summary = 0
> for obj in session.query(
> Ints.id, Ints.A, Ints.B, Ints.C).yield_per(100):
> summary += obj.id + obj.A + obj.B + obj.C
> session.rollback()
> end = time.time()
> return [end-start, summary]
> 
> def query_sqla_cols_a3(self):
> "SQLAlchemy yield(100) named tuples 3*access"
> session = self.Session()
> start = time.time()
> summary = 0
> for obj in session.query(
> Ints.id, Ints.A, Ints.B, Ints.C).yield_per(100):
> summary += obj.id + obj.A + obj.B + obj.C
> summary += obj.id + obj.A + obj.B + obj.C
> summary += obj.id + obj.A + obj.B + obj.C
> session.rollback()
> end = time.time()
> return [end-start, summary/3]
> 
> 
> Here’s that:
> 
> 0 SQLAlchemy yield(100) named tuples: time: 0.635045 (data [18356026L])
> 1 SQLAlchemy yield(100) named tuples: time: 0.630911 (data [18356026L])
> 2 SQLAlchemy yield(100) named tuples: time: 0.641687 (data [18356026L])
> 0 SQLAlchemy yield(100) named tuples 3*access: time: 0.807285 (data
> [18356026L])
> 1 SQLAlchemy yield(100) named tuples 3*access: time: 0.814160 (data
> [18356026L])
> 2 SQLAlchemy yield(100) named tuples 3*access: time: 0.829011 (data
> [18356026L])
> 
> compared to the fastest Core test:
> 
> 0 SQlAlchemy core simple: time: 0.707205 (data [18356026L])
> 1 SQlAlchemy core simple: time: 0.702223 (data [18356026L])
> 2 SQlAlchemy core simple: time: 0.708816 (data [18356026L])
> 
> 
> This is using 1.0’s named tuple which is faster than the one in 0.9. As I
> discussed in the migration notes I linked, over here
> http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html#new-keyedtuple-implementation-dramatically-faster
> is where I discuss how I came up with that named tuple approach.
> 
> In 0.9, the tuples are much slower (but still faster than straight entities):
> 
> 0 SQLAlchemy yield(100) named tuples: time: 1.083882 (data [18356026L])
> 1 SQLAlchemy yield(100) named tuples: time: 1.097783 (data [18356026L])
> 2 SQLAlchemy yield(100) named tuples: time: 1.113621 (data [18356026L])
> 0 SQLAlchemy yield(100) named tuples 3*access: time: 1.204280 (data
> [18356026L])
> 1 SQLAlchemy yield(100) named tuples 3*access: time: 1.245768 (data
> [18356026L])
> 2 SQLAlchemy yield(100) named tuples 3*access: time: 1.258327 (data
> [18356026L])
> 
> Also note that the difference in full object fetches for 0.9 vs. 1.0 are
> quite different:
> 
> 0.9.8:
> 
> 0 SQLAlchemy yield(100): time: 2.802273 (data [18356026L])
> 1 SQLAlchemy yield(100): time: 2.778059 (data [18356026L])
> 2 SQLAlchemy yield(100): time: 2.841441 (data [18356026L])
> 
> 1.0:
> 
> 0 SQLAlchemy yield(100): time: 2.01915

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Christopher Yeoh
Hi,

Apologies for the slow reply; it was a long weekend because of a public
holiday over here. I'm probably going to end up repeating part of what
Alex has mentioned as well.

So the first thing is that I think we want to distinguish between plugins
as a REST API user or operator concept, and plugins as a tool developers
use as a framework to support the Nova REST API. As I've mentioned before,
I've no problem with the feature set of the API being fixed (per
microversion) across all Nova deployments. Get back to me when we have
consensus on that; it's trivial to implement, and we'll no longer have the
concept of core and extension/plugin.

But plugin-like implementations, using Stevedore as a tool for developers
to maintain good modularity, have proven very useful for keeping the
complexity level lower and the interactions between modules much clearer.
servers.py is an example of this: in v2 I think we have/had the most
complex method, and even with all the fix-up work that has been done on
it, it is still very complicated to understand.


On Sun, Mar 8, 2015 at 11:01 AM, Jay Pipes  wrote:

> Hi Stackers,
>
> Now that microversions have been introduced to the Nova API (meaning we
> can now have novaclient request, say, version 2.3 of the Nova API using the
> special X-OpenStack-Nova-API-Version HTTP header), is there any good reason
> to require API extensions at all for *new* functionality.
>
> Sergey Nikitin is currently in the process of code review for the final
> patch that adds server instance tagging to the Nova API:
>
> https://review.openstack.org/#/c/128940
>
> Unfortunately, for some reason I really don't understand, Sergey is being
> required to create an API extension called "os-server-tags" in order to add
> the server tag functionality to the API. The patch implements the 2.4 Nova
> API microversion, though, as you can see from this part of the patch:
>
> https://review.openstack.org/#/c/128940/43/nova/api/
> openstack/compute/plugins/v3/server_tags.py
>
> What is the point of creating a new "plugin"/API extension for this new
> functionality? Why can't we just modify the 
> nova/api/openstack/compute/server.py
> Controller.show() method and decorate it with a 2.4 microversion that adds
> a "tags" attribute to the returned server dictionary?
>
>
Actually, I think it does more than just add extra response information:
- it adds an extra tags attribute to show
  - it doesn't add it to index, but it probably should add the response
information to detail to be consistent with the rest of the API
- it adds a new resource, /servers/server_id/tags
  - with create, delete and delete-all supported. I don't think that these
belong in servers.py
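The sub-resource shape described above could be sketched as a toy controller (names hypothetical, no Nova plumbing), which is part of why a separate module is a natural home:

```python
# Toy controller for the /servers/{server_id}/tags sub-resource: it has
# its own create, delete and delete-all operations, independent of the
# logic in servers.py. In-memory storage stands in for the database.

class ServerTagsController:
    def __init__(self):
        self._tags = {}  # server_id -> set of tags

    def update(self, server_id, tag):     # PUT    .../tags/{tag}
        self._tags.setdefault(server_id, set()).add(tag)

    def delete(self, server_id, tag):     # DELETE .../tags/{tag}
        self._tags.get(server_id, set()).discard(tag)

    def delete_all(self, server_id):      # DELETE .../tags
        self._tags.pop(server_id, None)

    def index(self, server_id):           # GET    .../tags
        return sorted(self._tags.get(server_id, set()))
```

Only the show/detail decoration (the "tags" attribute on the server body) needs to touch servers.py; the rest of the surface lives in its own controller.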



> https://github.com/openstack/nova/blob/master/nova/api/
> openstack/compute/servers.py#L369
>
> Because we're using an API extension for this new server tags
> functionality, we are instead having the extension "extend" the server
> dictionary with an "os-server-tags:tags" key containing the list of string
> tags.
>
> This is ugly and pointless. We don't need to use API extensions any more
> for this stuff.
>
>
So we had a prefix rule originally in v2 to allow for extensions and
guarantee no name clashes. I'd be happy to remove this requirement, even
removing old prefixes, as long as we have consensus.


> A client knows that server tags are supported by the 2.4 API microversion.
> If the client requests the 2.4+ API, then we should just include the "tags"
> attribute in the server dictionary.
>
> Similarly, new microversion API functionality should live in a module, as
> a top-level (or subcollection) Controller in /nova/api/openstack/compute/,
> and should not be in the /nova/api/openstack/compute/plugins/ directory.
> Why? Because it's not a plugin.
>
So I don't see how that changes whether we're using plugins (from a user
point of view) or not. The good news for you is that fixing the shambles
of a directory structure for the API is on the list of things to do; it
just wasn't a high-priority thing for us in Kilo, where the focus was
getting v2.1 and microversions out. For example, we have v3 in the
directory path for historical reasons, and we also have a contrib
directory in compute, and none of those are really "contrib" now either.
The nova/api/openstack/compute/ directory where you want to put all the
v2 microversions code is currently full of v2 core code already. It just
makes more sense to me to wait until the old v2 core code can be removed
(because the v2.1 API is considered equivalent) and then move the v2.1
microversions code into its final place, rather than doing a shuffle now
to move the old v2 code (along with all the changes needed to the unit
tests) and then having to delete it again not much later.





> Why are we continuing to use these awkward, messy, and cumbersome API
> extensions?
>
> Please, I am begging the Nova core team. Let us stop this madness. No more
> API extensions.
>
>
It is still not clear to me exactly what you mean by the use of an
extension. None of us ours 

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Christopher Yeoh
On Mon, Mar 9, 2015 at 9:35 PM, Sean Dague  wrote:

> On 03/07/2015 07:31 PM, Jay Pipes wrote:
> > Hi Stackers,
> >
> > Now that microversions have been introduced to the Nova API (meaning we
> > can now have novaclient request, say, version 2.3 of the Nova API using
> > the special X-OpenStack-Nova-API-Version HTTP header), is there any good
> > reason to require API extensions at all for *new* functionality.
> >
> > Sergey Nikitin is currently in the process of code review for the final
> > patch that adds server instance tagging to the Nova API:
> >
> > https://review.openstack.org/#/c/128940
> >
> > Unfortunately, for some reason I really don't understand, Sergey is
> > being required to create an API extension called "os-server-tags" in
> > order to add the server tag functionality to the API. The patch
> > implements the 2.4 Nova API microversion, though, as you can see from
> > this part of the patch:
> >
> >
> https://review.openstack.org/#/c/128940/43/nova/api/openstack/compute/plugins/v3/server_tags.py
> >
> >
> > What is the point of creating a new "plugin"/API extension for this new
> > functionality? Why can't we just modify the
> > nova/api/openstack/compute/server.py Controller.show() method and
> > decorate it with a 2.4 microversion that adds a "tags" attribute to the
> > returned server dictionary?
> >
> >
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L369
> >
> >
> > Because we're using an API extension for this new server tags
> > functionality, we are instead having the extension "extend" the server
> > dictionary with an "os-server-tags:tags" key containing the list of
> > string tags.
> >
> > This is ugly and pointless. We don't need to use API extensions any more
> > for this stuff.
> >
> > A client knows that server tags are supported by the 2.4 API
> > microversion. If the client requests the 2.4+ API, then we should just
> > include the "tags" attribute in the server dictionary.
> >
> > Similarly, new microversion API functionality should live in a module,
> > as a top-level (or subcollection) Controller in
> > /nova/api/openstack/compute/, and should not be in the
> > /nova/api/openstack/compute/plugins/ directory. Why? Because it's not a
> > plugin.
> >
> > Why are we continuing to use these awkward, messy, and cumbersome API
> > extensions?
> >
> > Please, I am begging the Nova core team. Let us stop this madness. No
> > more API extensions.
>
> Agreed, the current extensions list exists to explain the base v2
> functionality. I think we should consider that frozen and deprecated as
> of v2.1 as we have a better way to express features.
>
> -Sean
>
>

So I think we can add a microversion ASAP to remove support for
/extensions. Obviously we'll need to keep the actual code to support
v2.1 for quite a while, though.

I think we still want some fields in the controller as we do now, because
we may want to automate the JSON-HOME/schema generation (maybe; not sure).


> --
> Sean Dague
> http://dague.net
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread John Garbutt
Hi,

I think I agree with Jay here, but let me explain...

On 8 March 2015 at 12:10, Alex Xu  wrote:
> Thanks to Jay for pointing this out! If we reach agreement on this and document it,
> that will be great for guiding developers on how to add new APIs.

+1

Please could you submit a dev ref for this?

We can argue on the review, a bit like this one:
https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst

> For modularity, we need to define what should go in a separate module (it is
> an extension now). There are three cases:
>
> 1. Add new resource
> This is totally worth putting in a separate module.

+1

> 2. Add new sub-resource
> Like server-tags. I prefer to put it in a separate module; I don't think
> putting another 100 lines of code in servers.py is a good choice.

-1

I hate the idea of the instance-show extension code for version 2.4 living
separately from the rest of the instance show logic, when it really
doesn't have to.

It feels too heavyweight in its current form.

Maybe we need a more modular way of expressing the extension within
the same file?
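One hypothetical shape such in-file modularity could take is sketched below. This is illustrative only: `ServerTagsMixin` and `extend_show` are invented names, not Nova code; the point is that each microversion feature stays a small, self-contained unit inside the same module as the controller.

```python
class ServerTagsMixin:
    """Adds the 2.4 'tags' attribute to show() responses."""
    MIN_VERSION = (2, 4)

    def extend_show(self, version, server, body):
        if version >= self.MIN_VERSION:
            body["tags"] = server.get("tags", [])


class ServersController(ServerTagsMixin):
    def show(self, version, server):
        body = {"id": server["id"]}
        # Each feature mixin hooks in here; adding a feature means adding a
        # mixin in this file, not a separate plugin module.
        self.extend_show(version, server, body)
        return body


controller = ServersController()
print(controller.show((2, 3), {"id": "x", "tags": ["a"]}))  # no 'tags'
print(controller.show((2, 4), {"id": "x", "tags": ["a"]}))  # with 'tags'
```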

> 3. Extend attributes and methods of an existing resource
> like adding new attributes for servers; we can choose one of the existing
> modules to put it in. Just like this patch https://review.openstack.org/#/c/155853/

+1

I wish it were easier to read, but I hope that's fixable long term.

> 2015-03-08 8:31 GMT+08:00 Jay Pipes :
>> Now that microversions have been introduced to the Nova API (meaning we
>> can now have novaclient request, say, version 2.3 of the Nova API using the
>> special X-OpenStack-Nova-API-Version HTTP header), is there any good reason
>> to require API extensions at all for *new* functionality.

As above, a new "resource" probably should get a new "plugins/v3" module right?

It feels (at worst) borderline in the os-server-tags case, due to the
extra actions.

>> What is the point of creating a new "plugin"/API extension for this new
>> functionality? Why can't we just modify the
>> nova/api/openstack/compute/server.py Controller.show() method and decorate
>> it with a 2.4 microversion that adds a "tags" attribute to the returned
>> server dictionary?
>>
>> Similarly, new microversion API functionality should live in a module, as
>> a top-level (or subcollection) Controller in /nova/api/openstack/compute/,
>> and should not be in the /nova/api/openstack/compute/plugins/ directory.
>> Why? Because it's not a plugin.

Everything is a "plugin" in v3, no more distinction between core vs
plugin. It needs renaming really.

It should look just like servers, I guess, which is a top level item:
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py

>> Why are we continuing to use these awkward, messy, and cumbersome API
>> extensions?

We certainly should never be forced to add an extension to advertise
new functionality anymore.

It's a big reason why I want to see API microversions succeed.

>> Please, I am begging the Nova core team. Let us stop this madness. No more
>> API extensions.

Let's try to get something agreed in devref, so we are ready to go when
Liberty opens.

It would be nice to look at ways to fold the existing extensions back
into the main code. I know there are v2.0 compatibility issues there,
but I think/hope that's mostly cosmetic at this point.

Thanks,
John



Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Christopher Yeoh
On Mon, Mar 9, 2015 at 10:08 PM, John Garbutt  wrote:

> Hi,
>
> I think I agree with Jay here, but let me explain...
>
> On 8 March 2015 at 12:10, Alex Xu  wrote:
> > Thanks for Jay point this out! If we have agreement on this and document
> it,
> > that will be great for guiding developer how to add new API.
>
> +1
>
> Please could you submit a dev ref for this?
>
> We can argue on the review, a bit like this one:
>
> https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst
>
> > For modularity, we need define what should be in a separated module(it is
> > extension now.) There are three cases:
> >
> > 1. Add new resource
> > This is totally worth to put in a separated module.
>
> +1
>
> > 2. Add new sub-resource
> > like server-tags, I prefer to put in a separated module, I don't
> think
> > put another 100 lines code in the servers.py is good choice.
>
> -1
>
> I hate the idea of show instance extension code for version 2.4 living
> separately to the rest of the instance show logic, when it really
> doesn't have to.
>
> It feels too heavyweight in its current form.
>
>
If the only thing server-tags did was add a parameter, then we wouldn't
need a new extension; but it's not: it adds another resource with associated
actions.


> Maybe we need a more modular way of expressing the extension within
> the same file?
>
>
I think servers.py is simply too big. It's much harder to read and debug than
any other plugin just because of its size (or maybe I just need a 50" monitor
:)). I'd rather keep the functionality common to server-tags and its API
together than spread it through servers.py.



> > 3. extend attributes and methods for a existed resource
> >like add new attributes for servers, we can choice one of existed
> module
> > to put it in. Just like this patch
> https://review.openstack.org/#/c/155853/
>
> +1
>
> I wish it was easier to read, but I hope thats fixable long term.
>
> > 2015-03-08 8:31 GMT+08:00 Jay Pipes :
> >> Now that microversions have been introduced to the Nova API (meaning we
> >> can now have novaclient request, say, version 2.3 of the Nova API using
> the
> >> special X-OpenStack-Nova-API-Version HTTP header), is there any good
> reason
> >> to require API extensions at all for *new* functionality.
>
> As above, a new "resource" probably should get a new "plugins/v3" module
> right?
>
> It feels (at worst) borderline in the os-server-tags case, due to the
> extra actions.
>
> >> What is the point of creating a new "plugin"/API extension for this new
> >> functionality? Why can't we just modify the
> >> nova/api/openstack/compute/server.py Controller.show() method and
> decorate
> >> it with a 2.4 microversion that adds a "tags" attribute to the returned
> >> server dictionary?
> >>
> >> Similarly, new microversion API functionality should live in a module,
> as
> >> a top-level (or subcollection) Controller in
> /nova/api/openstack/compute/,
> >> and should not be in the /nova/api/openstack/compute/plugins/ directory.
> >> Why? Because it's not a plugin.
>
> Everything is a "plugin" in v3, no more distinction between core vs
> plugin. It needs renaming really.
>
> It should look just like servers, I guess, which is a top level item:
>
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py
>
> >> Why are we continuing to use these awkward, messy, and cumbersome API
> >> extensions?
>
> We certainly should never be forced to add an extension to advertise
> new functionality anymore.
>
> Its a big reason why I want to see the API micro-versions succeed.
>

Yep, I think there is no reason for it now except to support /extensions, and
I don't really think it's worth having two entry points: one for modules which
will appear in /extensions and one for modules which won't. The overhead is
low. We should warn v2.1+ users to ignore /extensions unless they are legacy
v2 API users, and they should remove their use of it anyway as soon as they
move off the legacy v2 API. The key to dumping it all is when people tell us
v2.1 really is behaving just like v2, so we can remove the old v2 code and
then later add a microversion that doesn't support /extensions. I hope all the
JSON-Home stuff is in by then :-)

>
> >> Please, I am begging the Nova core team. Let us stop this madness. No
> more
> >> API extensions.
>
> Lets try get something agreed in devref, so we are ready to go when
> Liberty opens.
>
> It would be nice to look at ways to fold back the existing extensions
> into the main code. I know there are v2.0 compatibility issues there,
> but I think/hope thats mostly cosmetic at this point.
>
>
Yeah, we already did a lot of that in v3 and then had to separate some of them
out again for v2.1 (argh!). Others we have just faked (e.g. you load module
"X" and get module "Y" for free, which doesn't really exist anymore), but only
for those where we were very sure that the extension only existed to
notify users that s

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Christopher Yeoh
On Mon, Mar 9, 2015 at 10:08 PM, John Garbutt  wrote:

> +1
>
> Please could you submit a dev ref for this?
>
> We can argue on the review, a bit like this one:
>
> https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst
>
I think it'd also be a good idea to add a test case for each example (using
the test_plugins directory, where you can define your own controller which is
never published) so they don't get out of date.

Regards,

Chris


Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Attila Fazekas




- Original Message -
> From: "Christopher Yeoh" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, March 9, 2015 1:04:15 PM
> Subject: Re: [openstack-dev] [nova][api] Microversions. And why do we need 
> API extensions for new API functionality?
> 
> 
> 
> On Mon, Mar 9, 2015 at 10:08 PM, John Garbutt < j...@johngarbutt.com > wrote:
> 
> 
> Hi,
> 
> I think I agree with Jay here, but let me explain...
> 
> On 8 March 2015 at 12:10, Alex Xu < sou...@gmail.com > wrote:
> > Thanks for Jay point this out! If we have agreement on this and document
> > it,
> > that will be great for guiding developer how to add new API.
> 
> +1
> 
> Please could you submit a dev ref for this?
> 
> We can argue on the review, a bit like this one:
> https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst
> 
> > For modularity, we need define what should be in a separated module(it is
> > extension now.) There are three cases:
> > 
> > 1. Add new resource
> > This is totally worth to put in a separated module.
> 
> +1
> 
> > 2. Add new sub-resource
> > like server-tags, I prefer to put in a separated module, I don't think
> > put another 100 lines code in the servers.py is good choice.
> 
> -1
> 
> I hate the idea of show instance extension code for version 2.4 living
> separately to the rest of the instance show logic, when it really
> doesn't have to.
> 
> It feels too heavyweight in its current form.
> 
> 
> If the only thing server-tags did was to add a parameter then we wouldn't
> need a new extension,
> but its not, it adds another resource with associated actions
> 
> 
> Maybe we need a more modular way of expressing the extension within
> the same file?
> 
> 
> I think servers.py is simply to big. Its much harder to read and debug than
> any other plugin just because of its size - or
> maybe I just need a 50" monitor :) I'd rather ensure functionality common
> server-tags and the API is kept together rather than
> spread through servers.py
> 
No, it isn't.
It is below 2k lines. I usually use low-level tools even for Python-related
debugging, for example strace and gdb.
With the extensions I get a lot of files which may or may not be involved.
This causes me additional headaches, because it is more difficult to see which
file is involved. After an strace I usually know what the mistake is; I just
need to find it in the code.
I do not like having to open more than 3 files once I see what went wrong.
In some cases I use gdb just to get Python stack traces right before the first
incorrect step is detected; in other cases git grep is sufficient.

Actually, for me the extensions increase the number of monitors required, and
in some cases I also need to use more complicated approaches.
I have tried a lot of Python profiler tools as well, but there is no single
version that wins in all cases; in many cases an extra custom hack is required
to get something close to what I want.

> 
> > 3. extend attributes and methods for a existed resource
> > like add new attributes for servers, we can choice one of existed module
> > to put it in. Just like this patch https://review.openstack.org/#/c/155853/
> 
> +1
> 
> I wish it was easier to read, but I hope thats fixable long term.
> 
> > 2015-03-08 8:31 GMT+08:00 Jay Pipes < jaypi...@gmail.com >:
> >> Now that microversions have been introduced to the Nova API (meaning we
> >> can now have novaclient request, say, version 2.3 of the Nova API using
> >> the
> >> special X-OpenStack-Nova-API-Version HTTP header), is there any good
> >> reason
> >> to require API extensions at all for *new* functionality.
> 
> As above, a new "resource" probably should get a new "plugins/v3" module
> right?
> 
> It feels (at worst) borderline in the os-server-tags case, due to the
> extra actions.
> 
> >> What is the point of creating a new "plugin"/API extension for this new
> >> functionality? Why can't we just modify the
> >> nova/api/openstack/compute/server.py Controller.show() method and decorate
> >> it with a 2.4 microversion that adds a "tags" attribute to the returned
> >> server dictionary?
> >> 
> >> Similarly, new microversion API functionality should live in a module, as
> >> a top-level (or subcollection) Controller in /nova/api/openstack/compute/,
> >> and should not be in the /nova/api/openstack/compute/plugins/ directory.
> >> Why? Because it's not a plugin.
> 
> Everything is a "plugin" in v3, no more distinction between core vs
> plugin. It needs renaming really.
> 
> It should look just like servers, I guess, which is a top level item:
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py
> 
> >> Why are we continuing to use these awkward, messy, and cumbersome API
> >> extensions?
> 
> We certainly should never be forced to add an extension to advertise
> new functionality anymore.
> 
> Its a big reason why I want to see the API micro-versions succeed.
> 
> Yep, there is I think no reason except to support 

Re: [openstack-dev] [horizon] Do No Evil

2015-03-09 Thread Michael Krotscheck
On Sun, Mar 8, 2015 at 3:21 PM Thomas Goirand  wrote:

>
> Anyway, you understood me: please *never* use this Expat/MIT license
> with the "The Software shall be used for Good, not Evil." additional
> clause. This is non-free software, which I will *never* be able to
> upload to Debian (and Canonical guys will have the same issue).
>

So, to clarify: Does this include tooling used to build the software, but
is not shipped with it? I suppose a similar example is using GCC (which is
GPL'd) to compile something that's Apache licensed.

Michael


[openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-09 Thread Salvatore Orlando
Greetings!

Neutron is adding a new concept of a "subnet pool". To put it simply, it is a
collection of IP prefixes from which subnets can be allocated. In this way a
user does not have to specify a full CIDR, but can simply request a desired
prefix length and let the pool generate a CIDR from its prefixes. The full
spec is available at [1], whereas two patches are up for review at [2]
(CRUD) and [3] (integration between subnets and subnet pools).
While [2] is quite straightforward, I must admit I am not really sure that
the current approach chosen for generating subnets from a pool is the best
one, and I'm therefore seeking your advice on this matter.

A subnet can be created with or without a pool.
Without a pool the user will pass the desired cidr:

POST /v2.0/subnets
{'network_id': 'meh',
  'cidr': '192.168.0.0/24'}

Instead, with a pool the user will pass the pool id and desired prefix length:
POST /v2.0/subnets
{'network_id': 'meh',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

The response to the previous call would populate the subnet cidr.
So far it looks quite good. Prefix_len is a bit of duplicated information,
but that's tolerable.
It gets a bit awkward when the user specifies also attributes such as
desired gateway ip or allocation pools, as they have to be specified in a
"cidr-agnostic" way. For instance:

POST /v2.0/subnets
{'network_id': 'meh',
 'gateway_ip': '0.0.0.1',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

would indicate that the user wishes to use the first address in the range
as the gateway IP, and the API would return something like this:

{'network_id': 'meh',
 'cidr': '10.10.10.0/24'
 'gateway_ip': '10.10.10.1',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

The problem with this approach is, in my opinion, that attributes such as
gateway_ip are used with different semantics in requests and responses;
this also requires users to write client applications that expect the
values in the response to differ from those in the request.
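For illustration, the "cidr-agnostic" gateway semantics just described ('0.0.0.1' meaning the first address of whatever CIDR the pool allocates) can be computed with Python's stdlib `ipaddress` module. `resolve_offset` is an invented helper name for this sketch, not Neutron code.

```python
import ipaddress

def resolve_offset(cidr, offset_addr):
    """Resolve an offset-style address against an allocated CIDR.

    The offset address is interpreted as an integer offset from the
    network address, so '0.0.0.1' means "first host in the range".
    """
    net = ipaddress.ip_network(cidr)
    offset = int(ipaddress.ip_address(offset_addr))
    return str(net.network_address + offset)

# The pool allocated 10.10.10.0/24; the request asked for gateway_ip '0.0.0.1'.
print(resolve_offset("10.10.10.0/24", "0.0.0.1"))  # 10.10.10.1
```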

I have been considering alternatives, but could not find any that I would
regard as a clear winner.
I therefore have some questions for the neutron community and the API
working group:

1) (this is more for neutron people) Is there a real use case for
requesting specific gateway IPs and allocation pools when allocating from a
pool? If not, maybe we should let the pool set a default gateway IP and
allocation pools. The user can then update them with another call. Another
option would be to provide "subnet templates" from which a user can choose.
For instance one template could have the gateway as first IP, and then a
single pool for the rest of the CIDR.

2) Is the action of creating a subnet from a pool better realized as a
different way of creating a subnet, or should there be some sort of "pool
action"? Eg.:

POST /subnet_pools/my_pool_id/subnet
{'prefix_len': 24}

which would return a subnet response like this (note prefix_len might not
be needed in this case)

{'id': 'meh',
 'cidr': '192.168.0.0/24',
 'gateway_ip': '192.168.0.1',
 'pool_id': 'my_pool_id'}

I am generally not a big fan of RESTful actions. But in this case the
semantics of the API operation are that of a subnet creation from within a
pool, so that might be ok.

3) Would it be possible to consider putting information about how to
generate a subnet from a pool in the subnet request body as follows?

POST /v2.0/subnets
{
 'pool_info':
{'pool_id': my_pool_id,
 'prefix_len': 24}
}

This would return a response like the previous.
This approach is simple in theory, but composite attributes have already
proved to be a difficult beast; for instance, look at
external_gateway_info in the router definition [4].

Thanks for your time and thanks in advance for your feedback.
Salvatore

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/subnet-allocation.html
[2] https://review.openstack.org/#/c/148698/
[3] https://review.openstack.org/#/c/157597/21/neutron/api/v2/attributes.py
[4]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/l3.py#n106


[openstack-dev] [murano] how can I deploy an environment with useEnvironmentNetwork=false

2015-03-09 Thread Choe, Cheng-Dae
hi there

In Murano, useEnvironmentNetwork=true is the default when deploying an environment.

How can I deploy with useEnvironmentNetwork=false?

I'm currently using the sample Apache web server package.


-- 
Choe, Cheng-Dae
Blog: http://blog.woosum.net 


[openstack-dev] [swift] auth migration and user data migration

2015-03-09 Thread Weidong Shao
hi,

I have a standalone swift cluster with swauth as the auth module. By
standalone, I mean the cluster is not in the context of OpenStack, or
keystone server.

Now I have moved ACL logic to application level and decided to have all
data in swift under one user account. I have a few questions on this change:

1) Is it possible to migrate from swauth to TempAuth? (Assuming TempAuth
will be supported in newer Swift versions.)

2) Is there a way to migrate data associated with one user account to
another user?

Thanks,
Weidong


Re: [openstack-dev] [horizon] Do No Evil

2015-03-09 Thread Radomir Dopieralski
On 03/09/2015 01:59 PM, Michael Krotscheck wrote:
> On Sun, Mar 8, 2015 at 3:21 PM Thomas Goirand  > wrote:
> 
> 
> Anyway, you understood me: please *never* use this Expat/MIT license
> with the "The Software shall be used for Good, not Evil." additional
> clause. This is non-free software, which I will *never* be able to
> upload to Debian (and Canonical guys will have the same issue).
> 
> 
> So, to clarify: Does this include tooling used to build the software,
> but is not shipped with it? I suppose a similar example is using GCC
> (which is GPL'd) to compile something that's Apache licensed.

To clarify, we are not shipping jshint/jslint with horizon, or requiring
you to have it in order to run or build horizon. It's not used in the
build process, the install process or at runtime. The only places where
it is used is at the developer's own machine, when they install it and
run it explicitly, to check their code, and on the gate, to check the
code submitted for merging. In either case we are not distributing any
software, so no copyright applies.

One could argue that, since the review process often causes a lot of
stress both to the authors of the patches and to their reviewers, we are
in fact using the software for evil...

We are working on switching to ESLint, not strictly because of the
license, but simply because it seems to be a better and more flexible
tool, but this is not very urgent, and will likely take some time.

-- 
Radomir Dopieralski




[openstack-dev] [heat][qa] Forward plan for heat scenario tests

2015-03-09 Thread David Kranz
Since test_server_cfn_init was recently moved from tempest to the heat 
functional tests, there are no subclasses of OrchestrationScenarioTest.
If there is no plan to add any more heat scenario tests to tempest I 
would like to remove that class. So I want to confirm that future 
scenario tests will go in the heat tree.


 -David



[openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Julien Danjou
Hi fellow developers,

It'd be nice to achieve a 1.0 release for tooz, as some projects are
already using it, and more are going to adopt it.

I think we should collect features and potential bugs/limitations we'd
like to have and fix before that. Ideas, thoughts?

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




[openstack-dev] [all] cliff 1.10.1 release

2015-03-09 Thread Doug Hellmann
The Oslo team is chuffed to announce the release of:

cliff 1.10.1: Command Line Interface Formulation Framework

For more details, please see the git log history below and:

https://launchpad.net/python-cliff/+milestone/1.10.1

Please report issues through launchpad:

https://bugs.launchpad.net/python-cliff

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in cliff 1.10.0..1.10.1
---

cccb255 Document print_help_if_requested method
103b9b8 Correct completion in interactive mode

Diffstat (except docs and test files)
-

cliff/app.py|  8 ++
cliff/interactive.py| 34 --
3 files changed, 95 insertions(+), 9 deletions(-)




[openstack-dev] [all] oslo.concurrency 1.7.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.concurrency 1.7.0: oslo.concurrency library

For more details, please see the git log history below and:

http://launchpad.net/oslo.concurrency/+milestone/1.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.concurrency 1.6.0..1.7.0


e3656e7 Imported Translations from Transifex
aeb9e17 Updated from global requirements

Diffstat (except docs and test files)
-

.../fr/LC_MESSAGES/oslo.concurrency-log-error.po   |  32 ++
.../fr/LC_MESSAGES/oslo.concurrency-log-info.po|  32 ++
.../locale/fr/LC_MESSAGES/oslo.concurrency.po  | 123 +
requirements.txt   |   2 +-
4 files changed, 188 insertions(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index a7d1656..1fcebea 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ fixtures>=0.3.14
-oslo.config>=1.6.0  # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend "rally verfiy" to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-09 Thread Davanum Srinivas
Boris,

1. Suppose a project, say Nova, wants to enable this Rally integration
for its functional tests; what does that project have to do (other
than providing the existing well-defined tox targets)?
2. Is there a "test" project with Gabbi-based tests that you know of?
3. What changes, if any, are needed in Gabbi to make this happen?

I'm guessing that going forward we can set up weekly Rally jobs against
different projects so we can compare performance over time, etc.?

thanks,
dims


On Fri, Mar 6, 2015 at 6:47 PM, Boris Pavlovic  wrote:
> Hi stackers,
>
> Intro (Gabbi)
> -
>
> Gabbi is an amazing tool that allows you to describe, in a human-readable
> way, what API requests to execute and what you expect as a result. It
> simplifies API testing a lot.
>
> It's based on unittest, so it can be easily run using tox/testr/nose and
> so on.
>
>
> Intro (Functional in-tree tests)
> ---
>
> Keeping all the tests in one project like Tempest, maintained by one
> team, was not a scalable enough approach. To scale things, projects started
> maintaining their own functional tests in their own trees. This resolves the
> scale issues, and now new features can be merged together with their
> functional tests.
>
>
> The Problem
> -
>
> As you know, there are a lot of OpenStack projects with their own
> functional tests / Gabbi tests in tree. It becomes hard for developers,
> devops, and operators to work with all of them. (Much as it's hard to
> install OpenStack by hand without DevStack.)
>
> Usually, end users choose one of 2 approaches:
> 1) Write their own tests
> 2) Write scripts that somehow run all these tests
>
>
> Small Intro (Rally)
> 
>
> The Rally idea is to make a tool that simplifies all kinds of testing of
> multiple OpenStack clouds.
> It should be friendly for humans as well as simple to integrate into a
> CI/CD process.
>
> Rally automates the whole testing process (managing test systems / running
> tests / storing results / working with results).
>
> At this moment there are 3 major parts:
> *) deployment - manages OpenStack deployments (creates or uses existing ones)
> *) verify - fully manages Tempest (installation/configuration/running/parsing
> output/storing results/working with results)
> *) task - Rally's own testing framework that allows you to do all kinds of
> testing: functional/load/performance/scale/volume and others.
>
> I can say that "rally verify" command that automates work with Tempest is
> very popular. More details here:
> https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
>
>
> Proposal to make the life better
> --
>
> Recently Yair Fried and Prasanth Anbalagan proposed a great idea: extend the
> "rally verify" command to add the ability to run in-tree functional tests in
> the same way as Tempest.
>
> In other words, to have the syntax: "rally verify <project> <command>"
>
> Something like this:
>
>   rally verify swift start   # 1. Check is installed swift for active rally
> deployment.
>  # IF NO:
>  #   Downloads from default (our
> specified place) swift
>  #   Switch to master or specified tag
>  #   Installs in venv swift
>  #   Configure swift functional test
> config for active deployment
>  # 2. Run swift functional test
>  # 3. Parse subunit output and store to
> Rally DB (for future work)
>
>   rally verify swift list  # List all swift
> verification runs
>   rally verify swift show UUID# Shows results
>   rally verify swift compare UUID1 UUID2 # Compare results of two runs
>
>
> Why it makes sense?
> 
>
> 1) Unification of testing process.
>
> There is a simple-to-learn set of commands, "rally verify <project>
> <command>", that works for all projects in the same way. End users like
> such things =)
>
> 2) Simplification of testing process.
>
> "rally verify <project> start" will automate all the steps, so you won't
> need to install the project manually, configure the functional tests, or
> collect and store the results somewhere.
>
> 3) Avoiding duplication of effort
>
> We don't need to reimplement part of the "rally verify" functionality in
> every project.
> It is better to implement it in one place, with plugin support. Adding a new
> project means implementing a new plugin (in most cases it will just be
> functional test config generation).
>
> 4) Reusing already existing code
>
> Most of the code that we need is already implemented in Rally,
> it just requires small refactoring and generalization.
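The plugin-based dispatch described above can be sketched roughly as a registry keyed by project name. All names here are illustrative, not Rally's actual API; a real verifier plugin would install the project, generate the functional test config, run the tests, and store parsed subunit results.

```python
# Registry mapping project names to verifier plugin classes.
VERIFIERS = {}

def verifier(name):
    """Class decorator that registers a verifier plugin under a project name."""
    def register(cls):
        VERIFIERS[name] = cls
        return cls
    return register


@verifier("swift")
class SwiftVerifier:
    def start(self):
        # A real plugin would: install swift in a venv, generate the
        # functional test config for the active deployment, run the tests,
        # and parse/store the subunit output.
        return "swift: functional tests started"


def verify(project, command):
    """Generic dispatcher behind a 'rally verify <project> <command>' CLI."""
    plugin = VERIFIERS[project]()
    return getattr(plugin, command)()


print(verify("swift", "start"))
```

Adding a new project then means registering one more class, which matches the "new project = new plugin" point above.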
>
>
> Thoughts?
>
>
> Best regards,
> Boris Pavlovic
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac

[openstack-dev] [all] oslo.config 1.9.1 release

2015-03-09 Thread Doug Hellmann
The Oslo team is glad to announce the release of:

oslo.config 1.9.1: Oslo Configuration API

For more details, please see the git log history below and:

http://launchpad.net/oslo.config/+milestone/1.9.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.config 1.9.0..1.9.1
---

0f550d7 Generate help text indicating possible values
9a6de3f fix bug link in readme

Diffstat (except docs and test files)
-

README.rst  |  2 +-
oslo_config/generator.py|  4 
4 files changed, 37 insertions(+), 1 deletion(-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Logstash and grok patterns

2015-03-09 Thread Foss Geek
Dear All,

I have an OpenStack HA environment deployed using Fuel 5.1. The Fuel master node
collects all the node logs under the /var/log/docker-logs/remote/ directory.

I have installed Logstash on the Fuel master node. Here is my logstash.conf:

http://paste.openstack.org/show/190985/

Here is the rsyslog template format:

# cat /etc/rsyslog.d/00-remote.conf  | grep Template

# Templates
$Template RemoteLog, "<%pri%>%timestamp% %hostname%
%syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n"

$ActionFileDefaultTemplate RemoteLog
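For what it's worth, lines produced by the RemoteLog template above should be matchable with a grok pattern roughly like `<%{POSINT:pri}>%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:syslogtag} %{GREEDYDATA:msg}` — the same idea expressed as a Python regex below (a hypothetical sketch for illustration, not an official Fuel pattern; the sample line is made up):

```python
import re

# Rough regex equivalent of a grok pattern for the RemoteLog rsyslog template:
#   <%pri%>%timestamp% %hostname% %syslogtag%%msg%
LINE_RE = re.compile(
    r"^<(?P<pri>\d+)>"                       # syslog priority, e.g. <30>
    r"(?P<timestamp>\w{3}\s+\d+ [\d:]{8}) "  # e.g. "Mar  9 14:02:11"
    r"(?P<hostname>\S+) "                    # sending node
    r"(?P<syslogtag>[^ :]+):?\s"             # program/tag, optional colon
    r"(?P<msg>.*)$"                          # the rest of the message
)

sample = "<30>Mar  9 14:02:11 node-1 nova-api: 200 GET /v2/servers"
m = LINE_RE.match(sample)
print(m.group("pri"), m.group("hostname"), m.group("syslogtag"))
# -> 30 node-1 nova-api
```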

Is there any grok pattern reference for fuel centralized logs?

Thanks for your time.

-- 
Thanks & Regards
E-Mail: thefossg...@gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Do No Evil

2015-03-09 Thread Daniel P. Berrange
On Sun, Mar 08, 2015 at 11:19:10PM +0100, Thomas Goirand wrote:
> On 03/08/2015 06:34 PM, Mike Bayer wrote:
> > 
> > 
> > Ian Wells  wrote:
> > 
> >> With apologies for derailing the question, but would you care to tell us 
> >> what evil you're planning on doing?  I find it's always best to be 
> >> informed about these things.
> > 
> > All of us, every day, do lots of things that someone is going to think is
> > evil. From eating meat, to living various kinds of lifestyles, to supporting
> > liberal or conservative causes, to just living in a certain country, to
> > using Windows or other “non-free” operating systems, to top-posting, makes
> > you evil to someone; to lots of people, in fact. This is why a blanket
> > statement like “do no evil” is pretty much down to two choices, A. based on
> > some arbitrary, undefined notion of “evil” in which case nobody can use the
> > software, or B. based on the user’s own subjective view of “evil” which
> > means the phrase is just a humorous frill. Maybe authors add this phrase as
> > a means to limit the use of their software only to those communities where
> > such a statement is patently ridiculous (e.g., not publicly held
> > corporations).
> > 
> > but also given that “evil” can be almost anything, I don’t think it’s 
> > reasonable
> > that users would have to report on their intended brand of “evil”.
> 
> tl;dr: Debian considers the "do no evil" license non-free.

[snip]

> Anyway, you understood me: please *never* use this Expat/MIT license
> with the "The Software shall be used for Good, not Evil." additional
> clause. This is non-free software, which I will *never* be able to
> upload to Debian (and Canonical guys will have the same issue).

The same is true of Fedora's licensing policies. Code under a license
with this clause is not permitted in Fedora.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.db 1.6.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is thrilled to announce the release of:

oslo.db 1.6.0: oslo.db library

For more details, please see the git log history below and:

http://launchpad.net/oslo.db/+milestone/1.6.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.db 1.5.0..1.6.0
---

11f71cd Updated from global requirements
fa43657 Use PyMySQL as DB driver in py3 environment

Diffstat (except docs and test files)
-

requirements.txt  | 4 ++--
test-requirements-py3.txt | 4 +---
tox.ini   | 6 ++
3 files changed, 9 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3e7a757..e3384db 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.i18n>=1.3.0  # Apache-2.0
-oslo.config>=1.6.0  # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0
@@ -13 +13 @@ SQLAlchemy>=0.9.7,<=0.9.99
-sqlalchemy-migrate>=0.9.1,!=0.9.2
+sqlalchemy-migrate>=0.9.5
diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt
index b9c1c33..03670e8 100644
--- a/test-requirements-py3.txt
+++ b/test-requirements-py3.txt
@@ -15,0 +16 @@ oslotest>=1.2.0  # Apache-2.0
+PyMySQL>=0.6.2  # MIT License
@@ -19,3 +19,0 @@ tempest-lib>=0.2.0
-
-# TODO(harlowja): add in pymysql when able to...
-# https://review.openstack.org/#/c/123737

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.i18n 1.5.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is gleeful to announce the release of:

oslo.i18n 1.5.0: oslo.i18n library

For more details, please see the git log history below and:

http://launchpad.net/oslo.i18n/+milestone/1.5.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.i18n 1.4.0..1.5.0
-

b0faab7 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5bef251..79a76f0 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Babel>=1.3
-six>=1.7.0
+six>=1.9.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.log 1.0.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is excited to announce the release of:

oslo.log 1.0.0: oslo.log library

For more details, please see the git log history below and:

http://launchpad.net/oslo.log/+milestone/1.0.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.log 0.4.0..1.0.0


2142405 Updated from global requirements
2bf8164 Make use_syslog=True log to syslog via /dev/log
cc8d42a update urllib3.util.retry log level to WARN

Diffstat (except docs and test files)
-

oslo_log/_options.py | 2 ++
oslo_log/log.py  | 6 --
requirements.txt | 4 ++--
3 files changed, 8 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 61a5b83..54ada9b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9,2 +9,2 @@ iso8601>=0.1.9
-oslo.config>=1.6.0  # Apache-2.0
-oslo.context>=0.1.0 # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0
+oslo.context>=0.2.0 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][qa] Forward plan for heat scenario tests

2015-03-09 Thread Matthew Treinish
On Mon, Mar 09, 2015 at 09:52:54AM -0400, David Kranz wrote:
> Since test_server_cfn_init was recently moved from tempest to the heat
> functional tests, there are no subclasses of OrchestrationScenarioTest.
> If there is no plan to add any more heat scenario tests to tempest I would
> like to remove that class. So I want to confirm that future scenario tests
> will go in the heat tree.
> 

I think it's perfectly fine to remove it; it probably should have
been part of Steve's patch that removed the testing. It's unused code
right now, so there is no reason to keep it around.

-Matt Treinish


pgp33_Ugmgmli.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.messaging 1.8.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is overjoyed to announce the release of:

oslo.messaging 1.8.0: Oslo Messaging API

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.8.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.messaging 1.7.0..1.8.0
--

9fbec54 Updated from global requirements
0359147 NotifyPublisher need handle amqp_auto_delete
e8def40 Fix matchmaker_redis ack_alive fails with KeyError
4c0ef9b Use import of zmq package for test skip

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/impl_rabbit.py |  2 +
oslo_messaging/_drivers/matchmaker_redis.py|  2 +-
requirements-py3.txt   |  4 +-
requirements.txt   |  4 +-
7 files changed, 61 insertions(+), 42 deletions(-)


Requirements updates


diff --git a/requirements-py3.txt b/requirements-py3.txt
index 64f3cb8..05cb050 100644
--- a/requirements-py3.txt
+++ b/requirements-py3.txt
@@ -5 +5 @@
-oslo.config>=1.6.0  # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0
@@ -12 +12 @@ stevedore>=1.1.0  # Apache-2.0
-six>=1.7.0
+six>=1.9.0
diff --git a/requirements.txt b/requirements.txt
index e6747b0..3b49a53 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ pbr>=0.6,!=0.7,<1.0
-oslo.config>=1.6.0  # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0
@@ -14 +14 @@ stevedore>=1.1.0  # Apache-2.0
-six>=1.7.0
+six>=1.9.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.middleware 1.0.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is gleeful to announce the release of:

oslo.middleware 1.0.0: Oslo Middleware library

For more details, please see the git log history below and:

http://launchpad.net/oslo.middleware/+milestone/1.0.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.middleware 0.5.0..1.0.0
---

5c3c5a9 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index f459298..95059c6 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,2 +7,2 @@ Babel>=1.3
-oslo.config>=1.6.0  # Apache-2.0
-oslo.context>=0.1.0 # Apache-2.0
+oslo.config>=1.9.0  # Apache-2.0
+oslo.context>=0.2.0 # Apache-2.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.rootwrap 1.6.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is chuffed to announce the release of:

oslo.rootwrap 1.6.0: Oslo Rootwrap

For more details, please see the git log history below and:

http://launchpad.net/oslo.rootwrap/+milestone/1.6.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.rootwrap

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.rootwrap 1.5.0..1.6.0
-

f485b93 Remove env changing support in daemon mode
8472c5e Updated from global requirements
8196b10 Updated from global requirements
7a6769b Add bug link to README

Diffstat (except docs and test files)
-

README.rst | 10 --
oslo_rootwrap/client.py|  6 +++---
oslo_rootwrap/daemon.py|  6 +-
oslo_rootwrap/filters.py   | 10 --
oslo_rootwrap/wrapper.py   |  5 ++---
requirements.txt   |  2 +-
test-requirements.txt  |  2 +-
9 files changed, 20 insertions(+), 35 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 37619e8..d33fc18 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-six>=1.7.0
+six>=1.9.0
diff --git a/test-requirements.txt b/test-requirements.txt
index cfe73db..4b0b51a 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -24 +24 @@ mock>=1.0
-eventlet>=0.15.2
+eventlet>=0.16.1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend "rally verfiy" to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-09 Thread Boris Pavlovic
Davanum,


> 1. Suppose a project say Nova wants to enable this Rally Integration
> for its functional tests, what does that project have to do? (other
> than the existing well defined tox targets)


Actually, the project itself shouldn't do anything.

All of the integration work belongs in Infra and in Rally code.
In Infra we should add a job that runs a few commands, like:

  rally deployment use devstack   # this is already predefined in rally
devstack plugin
  rally verify nova start  # ...
  rally verify nova results --html /files-that-will-be-published/result.html
  rally verify nova results --json /files-that-will-be-published/result.json
  rally verify nova check_status  # returns 1 if any tests failed

In Rally we should add auto-generation of the nova .conf file, based on the
OpenStack credentials.
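The conf auto-generation step could be sketched roughly like this (a hypothetical illustration only — the section and option names below are made up, not Nova's or Rally's real schema):

```python
import configparser
import io

def generate_conf(credentials):
    """Render a minimal functional-test config from stored credentials."""
    cfg = configparser.ConfigParser()
    # Illustrative section/option names; a real generator would follow
    # the target project's config schema.
    cfg["auth"] = {
        "auth_url": credentials["auth_url"],
        "username": credentials["username"],
        "password": credentials["password"],
        "tenant_name": credentials["tenant_name"],
    }
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()

conf_text = generate_conf({
    "auth_url": "http://devstack:5000/v2.0",
    "username": "admin",
    "password": "secret",
    "tenant_name": "admin",
})
print(conf_text)
```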

The whole idea is to make a common, short and simple set of commands that is
easy to integrate into gates and to use locally.

Two more things:
  1) We are going to support running specific releases of Rally in the gates
(so projects won't depend on Rally master).
  2) Rally has its own DB where it stores the full results of rally verify
runs; if we share this DB between different runs, we can collect all results
of all functional test runs in one place and analyze them in the future (like
generating graphs of failures_of_test_x per week).
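The store-and-compare idea can be sketched with a toy in-memory model (this is not Rally's actual schema or API, just an illustration of storing per-run results keyed by UUID and diffing two runs):

```python
import json
import sqlite3
import uuid

# Toy results store: one row per verification run, results as JSON.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runs (uuid TEXT PRIMARY KEY, results TEXT)")

def store(results):
    """Store a {test_name: status} mapping and return the run's UUID."""
    run_id = str(uuid.uuid4())
    db.execute("INSERT INTO runs VALUES (?, ?)", (run_id, json.dumps(results)))
    return run_id

def compare(uuid1, uuid2):
    """Return tests whose status differs between the two runs."""
    r1, r2 = [json.loads(db.execute(
        "SELECT results FROM runs WHERE uuid = ?", (u,)).fetchone()[0])
        for u in (uuid1, uuid2)]
    return {t: (r1.get(t), r2.get(t))
            for t in set(r1) | set(r2) if r1.get(t) != r2.get(t)}

a = store({"test_list": "success", "test_show": "fail"})
b = store({"test_list": "success", "test_show": "success"})
print(compare(a, b))  # -> {'test_show': ('fail', 'success')}
```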

> 2. Is there a "test" project with Gabbi based tests that you know of?


As far as I know, Ceilometer is the first pathfinder here:
https://github.com/openstack/ceilometer/tree/master/ceilometer/tests/gabbi
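For flavor, a gabbi test file in such a tree is plain YAML roughly along these lines (a hedged sketch of the format — see the gabbi docs for the exact schema; the endpoint and expectations here are made up):

```yaml
tests:
  - name: list servers
    url: /servers
    method: GET
    status: 200
    response_strings:
      - servers
```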


> 3. What changes if any are needed in Gabbi to make this happen?


More or less no changes. It can be run via testr/nose, which is enough.


Best regards,
Boris Pavlovic



On Mon, Mar 9, 2015 at 5:05 PM, Davanum Srinivas  wrote:

> Boris,
>
> 1. Suppose a project say Nova wants to enable this Rally Integration
> for its functional tests, what does that project have to do? (other
> than the existing well defined tox targets)
> 2. Is there a "test" project with Gabbi based tests that you know of?
> 3. What changes if any are needed in Gabbi to make this happen?
>
> Guessing going forward, we can setup weekly Rally jobs against
> different projects so we can compare performance over time etc?
>
> thanks,
> dims
>
>
> On Fri, Mar 6, 2015 at 6:47 PM, Boris Pavlovic  wrote:
> > Hi stackers,
> >
> > Intro (Gabbi)
> > -
> >
> > Gabbi is an amazing tool that allows you to describe, in a human-readable
> > way, what API requests to execute and what you expect as a result. It
> > simplifies API testing a lot.
> >
> > It's based on unittest, so it can easily be run using tox/testr/nose
> > and so on.
> >
> >
> > Intro (Functional in-tree tests)
> > ---
> >
> > Keeping all tests in one project like Tempest, maintained by a single
> > team, was not a scalable enough approach. To scale things, projects started
> > maintaining their own functional tests in their own trees. This resolves
> > the scaling issues, and new features can now be merged together with their
> > functional tests.
> >
> >
> > The Problem
> > -
> >
> > As you know, there are a lot of OpenStack projects with their own
> > functional tests / gabbi tests in tree. It becomes hard for developers,
> > devops and operators to work with them. (Just as it's hard to install
> > OpenStack by hand without DevStack.)
> >
> > Usually, end users choose one of two approaches:
> > 1) Write their own tests
> > 2) Write scripts that somehow run all these tests
> >
> >
> > Small Intro (Rally)
> > 
> >
> > The idea of Rally is to make a tool that simplifies all kinds of testing
> > of multiple OpenStack clouds.
> > It should be human-friendly and also simple to integrate into a CI/CD
> > process.
> >
> > Rally automates the whole testing process (managing testing systems /
> > running tests / storing results / working with results)
> >
> > At this moment there are 3 major parts:
> > *) deployment - manages OpenStack deployments (creates one or uses existing)
> > *) verify - fully manages Tempest (installation/configuration/running/
> > parsing output/storing results/working with results)
> > *) task - Rally's own testing framework that allows you to do all kinds of
> > testing: functional/load/performance/scale/volume and others.
> >
> > I can say that "rally verify" command that automates work with Tempest is
> > very popular. More details here:
> >
> https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
> >
> >
> > Proposal to make the life better
> > --
> >
> > Recently Yair Fried and Prasanth Anbalagan proposed a great idea: to
> > extend the "rally verify" command to add the ability to run in-tree
> > functional tests in the same way as tempest.
> >
> > In other words, to have the following syntax: "rally verify  "
> >
> > Something like this:
> >
> >   rally verify swift start   # 1. Check whether swift is installed for
> > the active rally deployment.
> >  

[openstack-dev] [all] oslo.serialization 1.4.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is chuffed to announce the release of:

oslo.serialization 1.4.0: oslo.serialization library

For more details, please see the git log history below and:

http://launchpad.net/oslo.serialization/+milestone/1.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.serialization 1.3.0..1.4.0
--

7bfd5de Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index ae6cc7c..06e2755 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12 +12 @@ Babel>=1.3
-six>=1.7.0
+six>=1.9.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.utils 1.4.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is gleeful to announce the release of:

oslo.utils 1.4.0: Oslo Utility library

For more details, please see the git log history below and:

http://launchpad.net/oslo.utils/+milestone/1.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.utils

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslo.utils 1.3.0..1.4.0
--

548b640 Add a stopwatch + split for duration(s)
2e65171 Allow providing a logger to save_and_reraise_exception
db5a0c6 Updated from global requirements
9d9818b Utility API to generate EUI-64 IPv6 address
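The EUI-64 mechanism mentioned in that last changelog entry derives an IPv6 interface ID from a MAC address (flip the universal/local bit of the first byte, insert ff:fe in the middle). A standalone sketch of the standard algorithm — illustrating the technique, not the oslo.utils API itself:

```python
import ipaddress

def eui64_ipv6(prefix, mac):
    """Build an EUI-64 IPv6 address from a /64 prefix and a MAC address."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02  # flip the universal/local bit of the first MAC byte
    # Insert 0xFF 0xFE in the middle to form the 64-bit interface identifier.
    iid = bytes(b[:3] + [0xFF, 0xFE] + b[3:])
    net = ipaddress.IPv6Network(prefix)
    addr = int(net.network_address) | int.from_bytes(iid, "big")
    return str(ipaddress.IPv6Address(addr))

print(eui64_ipv6("2001:db8::/64", "00:16:3e:33:44:55"))
# -> 2001:db8::216:3eff:fe33:4455
```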

Diffstat (except docs and test files)
-

oslo_utils/excutils.py |  13 ++-
oslo_utils/netutils.py |  54 +++
oslo_utils/timeutils.py| 189 +
requirements.txt   |   2 +-
8 files changed, 547 insertions(+), 54 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 06f022e..be23276 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Babel>=1.3
-six>=1.7.0
+six>=1.9.0
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslotest 1.5.1 release

2015-03-09 Thread Doug Hellmann
The Oslo team is happy to announce the release of:

oslotest 1.5.1: OpenStack test framework

For more details, please see the git log history below and:

http://launchpad.net/oslotest/+milestone/1.5.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in oslotest 1.5.0..1.5.1


f74e974 Force rebuild egg-info before running cross tests

Diffstat (except docs and test files)
-

1 file changed, 1 insertion(+)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] stevedore 1.3.0 release

2015-03-09 Thread Doug Hellmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] stevedore 1.3.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is excited to announce the release of:

stevedore 1.3.0: Manage dynamic plugins for Python applications

For more details, please see the git log history below and:

https://launchpad.net/python-stevedore/+milestone/1.3.0

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in stevedore 1.2.0..1.3.0
-

218e95d Updated from global requirements
f5eea34 Fix test for finding multiple drivers
549fa83 ignore .testrepository directory created by testr
554bd47 clean up default environments run by tox

Diffstat (except docs and test files)
-

.gitignore |  1 +
requirements.txt   |  2 +-
tox.ini|  2 +-
4 files changed, 24 insertions(+), 15 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 76d9c0f..f7f4cc9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ argparse
-six>=1.7.0
+six>=1.9.0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] tooz 0.13.0 release

2015-03-09 Thread Doug Hellmann
The Oslo team is pumped to announce the release of:

tooz 0.13.0: Coordination library for distributed systems.

For more details, please see the git log history below and:

http://launchpad.net/python-tooz/+milestone/0.13.0

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

Notable changes


We hope to make this the last release of the library for the Kilo cycle.

Changes in tooz 0.12..0.13.0


3d01a84 Two locks acquired from one coord must works
47f831f Updated from global requirements
a879eb9 Releases locks in tests
60bf3af Allow coordinator non-string options and use them
9afaefd Since we use msgpack this can be more than a str
5b77b96 Updated from global requirements

Diffstat (except docs and test files)
-

requirements-py3.txt| 11 +++--
requirements.txt|  9 ++--
setup.py|  8 +++-
test-requirements.txt   |  3 ++
tooz/coordination.py| 20 +++--
tooz/drivers/file.py|  6 +--
tooz/drivers/memcached.py   |  9 ++--
tooz/drivers/mysql.py   | 93 ++---
tooz/drivers/pgsql.py   | 42 ---
tooz/drivers/redis.py   | 11 +
tooz/drivers/zake.py|  6 ++-
tooz/drivers/zookeeper.py   | 16 +--
tooz/locking.py | 66 +
tooz/utils.py   | 28 +
15 files changed, 271 insertions(+), 94 deletions(-)


Requirements updates


diff --git a/requirements-py3.txt b/requirements-py3.txt
index 5b14e3c..9c359b3 100644
--- a/requirements-py3.txt
+++ b/requirements-py3.txt
@@ -0,0 +1,3 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
@@ -4,2 +7,2 @@ stevedore>=1.1.0
-six>=1.7.0
-iso8601
+six>=1.9.0
+iso8601>=0.1.9
@@ -11,2 +14,2 @@ retrying>=1.2.3,!=1.3.0
-oslo.utils>=1.0.0
-redis
+oslo.utils>=1.2.0   # Apache-2.0
+redis>=2.10.0
diff --git a/requirements.txt b/requirements.txt
index 5f7325e..4f3931c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -0,0 +1,3 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
@@ -4 +7 @@ stevedore>=1.1.0
-six>=1.7.0
+six>=1.9.0
@@ -8 +11 @@ pymemcache>=1.2
-zake>=0.1
+zake>=0.1.6 # Apache-2.0
@@ -12 +15 @@ futures>=2.1.6
-oslo.utils>=1.0.0
+oslo.utils>=1.2.0   # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 9839287..6c62a75 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -0,0 +1,3 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest] isolation default config change notification

2015-03-09 Thread Attila Fazekas
Hi All,

This is a follow-up on [1].
Running the full tempest test suite in parallel without the
allow_tenant_isolation=True setting can cause random, not-too-obvious
failures, which has caused a lot of issues for tempest newcomers.

There are special use cases where you might want to disable it,
for example when you would like to run just a few test cases for
benchmarking, when you know for sure it is safe and you do not want to
include account-creation time in the results.

The other case where you might want to disable this feature is when you
are running tempest without an admin account. This is expected to change
with the upcoming `test accounts` [2], where allow_tenant_isolation=True
is expected to be the recommended configuration.
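For reference, the option is a boolean in tempest.conf; it is typically set along these lines (the exact section has moved between Tempest versions, so check your sample config — the section name below is an assumption):

```ini
[compute]
# Create isolated tenants/users per test class instead of sharing one account
allow_tenant_isolation = True
```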
 
Best Regards,
Attila

[1] https://review.openstack.org/#/c/157052/
[2] https://blueprints.launchpad.net/tempest/+spec/test-accounts

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Do No Evil

2015-03-09 Thread Doug Hellmann


On Mon, Mar 9, 2015, at 08:52 AM, Radomir Dopieralski wrote:
> On 03/09/2015 01:59 PM, Michael Krotscheck wrote:
> > On Sun, Mar 8, 2015 at 3:21 PM Thomas Goirand wrote:
> > 
> > 
> > Anyway, you understood me: please *never* use this Expat/MIT license
> > with the "The Software shall be used for Good, not Evil." additional
> > clause. This is non-free software, which I will *never* be able to
> > upload to Debian (and Canonical guys will have the same issue).
> > 
> > 
> > So, to clarify: Does this include tooling used to build the software,
> > but is not shipped with it? I suppose a similar example is using GCC
> > (which is GPL'd) to compile something that's Apache licensed.
> 
> To clarify, we are not shipping jshint/jslint with horizon, or requiring
> you to have it in order to run or build horizon. It's not used in the
> build process, the install process or at runtime. The only places where
> it is used is at the developer's own machine, when they install it and
> run it explicitly, to check their code, and on the gate, to check the
> code submitted for merging. In either case we are not distributing any
> software, so no copyright applies.

Not everyone realizes that many of the distros run our tests against the
packages they build, too. So our tool choices trickle downstream beyond
our machines and our CI environment. In this case, because the tool is a
linter, it seems like the distros wouldn't care about running it. But if
it was some sort of test runner or other tool that might be used for
functional tests, then they may well consider running it a requirement
to validate the packages they create.

That's not to say we need to let our tool choices be dictated by
downstream users, just don't assume that because a tool isn't used as
part of the runtime for a package that it isn't needed by those
downstream users.

Doug

> One could argue that since the review process often causes a lot of
> stress both to the authors of the patches and to their reviewers, and so
> in fact we are using the software for evil...
> 
> We are working on switching to ESLint, not strictly because of the
> license, but simply because it seems to be a better and more flexible
> tool, but this is not very urgent, and will likely take some time.
> 
> -- 
> Radomir Dopieralski
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Tempest Configuration Guide

2015-03-09 Thread Matthew Treinish
Hi Everyone,

I just wanted to mention that in the past couple of weeks we've started an
effort to write some documentation to explain common aspects of tempest
configuration. The current version of the doc can be found here:

http://docs.openstack.org/developer/tempest/configuration.html

Right now there is only an explanation of how to configure authentication, but
it would be great if we could expand this to cover more aspects of Tempest
configuration. Hopefully, we can grow this doc to be a good starting point for
people who are trying to hand configure tempest.

I've created an etherpad so we can track the work to improve this guide here:

https://etherpad.openstack.org/p/tempest-configuration-doc

If anyone wants to help with expanding this guide just add a line on the
etherpad for the section you're planning on adding.

Thanks,

-Matt Treinish


pgpldXuc2WLaX.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all][qa][gabbi][rally][tempest] Extend "rally verfiy" to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-09 Thread Chris Dent

On Mon, 9 Mar 2015, Davanum Srinivas wrote:


> 2. Is there a "test" project with Gabbi based tests that you know of?


In addition to the ceilometer tests that Boris pointed out gnocchi
is using it as well:

   https://github.com/stackforge/gnocchi/tree/master/gnocchi/tests/gabbi


3. What changes if any are needed in Gabbi to make this happen?


I was unable to tell from the original what "this" is and how gabbi
is involved, but the link above ought to show you how
gabbi can be used. There are also the docs (which could do with some
improvement, so suggestions or pull requests are welcome):

   http://gabbi.readthedocs.org/en/latest/
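For those who haven't seen gabbi yet: a gabbi test file is declarative YAML, a list of HTTP requests with assertions on the responses. The shape of such a file looks roughly like the following, shown here as the equivalent Python data structure (the resource paths are made up for illustration, not taken from gnocchi):

```python
# Each entry corresponds to one YAML test: a name, one HTTP method key
# with the request path, and expected response attributes.
gabbi_style_tests = [
    {"name": "list resources",
     "GET": "/v1/resource",
     "status": 200},
    {"name": "create resource",
     "POST": "/v1/resource",
     "request_headers": {"content-type": "application/json"},
     "data": {"name": "example"},
     "status": 201},
]

# A gabbi loader would turn each entry into a test case against a live
# service; here we just check the declarations are well formed.
for t in gabbi_style_tests:
    assert "name" in t and "status" in t
    assert any(m in t for m in ("GET", "POST", "PUT", "DELETE"))
```

The real loader wiring (gabbi's test builder pointed at a directory of YAML files) is covered in the docs linked above.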

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Tempest] isolation default config change notification

2015-03-09 Thread Matthew Treinish
On Mon, Mar 09, 2015 at 10:47:21AM -0400, Attila Fazekas wrote:
> Hi All,
> 
> This is a follow-up on [1].
> Running the full Tempest test suite in parallel without
> allow_tenant_isolation=True can cause random, non-obvious failures,
> which have caused a lot of issues for Tempest newcomers.
> 
> There are special use cases where you might want to disable it,
> for example when you want to run just a few test cases for
> benchmarking, you know it is safe for sure, and you do not want
> account-creation times included in the results.
> 
> The other case where you might want to disable this feature is when
> running Tempest without an admin account. This is expected to change
> with the upcoming `test accounts` [2], after which allow_tenant_isolation=True
> is expected to be the recommended configuration.

I should also probably point out that the different
credential providers, and the tradeoffs involved with each of them, now have
documentation here:

http://docs.openstack.org/developer/tempest/configuration.html#credential-provider-mechanisms

with a patch taking this change (and some other recent ones) into account here:

https://review.openstack.org/162019
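For reference, the setting under discussion lives in tempest.conf; something like the following (the section name depends on the Tempest version in use, as the option has lived under both [compute] and [auth]):

```ini
[auth]
# Run each test class under its own dynamically created tenant/user.
# Disable only if you know your selected tests can safely share state,
# or when running without admin credentials (pre test-accounts).
allow_tenant_isolation = True
```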

-Matt Treinish


>  
> Best Regards,
> Attila
> 
> [1] https://review.openstack.org/#/c/157052/
> [2] https://blueprints.launchpad.net/tempest/+spec/test-accounts
> 





Re: [openstack-dev] [neutron] How neutron calculate routing path

2015-03-09 Thread Assaf Muller


- Original Message -
> The L3 agent uses ARP and static routes like a normal router would. The L2
> agent is where there might be differences depending on the network type
> used. If it's a tunnel overlay, the L2 agent may perform an ARP offload from
> information it has learned via the L2 population mechanism.
> 

To expand on what Kevin is saying, the L3 agent currently does not support any
sort of dynamic routing, and a Neutron virtual router may have access
to at most one external network. It can be directly connected to many internal
networks
and performs static routing. If you connect a router to one external network
(8.8.8.0/24) and two internal networks (10.0.1.0/24, 10.0.2.0/24), then you
can observe the router's routing table and see that it has three directly
connected routes to those networks, as well as a default gateway (the external
network's gateway).
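As a rough illustration of the resulting static routing table (the interface names and the .1 gateway address are assumptions for the example, not taken from a real deployment):

```python
# qg-* is the router's external gateway port, qr-* are internal ports.
router_routes = {
    "default":     "via 8.8.8.1 dev qg-ext",  # external network's gateway
    "8.8.8.0/24":  "dev qg-ext",              # directly connected external net
    "10.0.1.0/24": "dev qr-int1",             # directly connected internal nets
    "10.0.2.0/24": "dev qr-int2",
}

# Three directly connected routes plus the default route.
connected = [dst for dst in router_routes if dst != "default"]
assert len(connected) == 3
```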

> On Sat, Mar 7, 2015 at 4:02 AM, Leo Y < minh...@gmail.com > wrote:
> 
> 
> 
> Hello, I am looking to learn how the neutron agent (probably L3) calculates a new
> routing path when a VM on a compute node wants to communicate with some
> destination. Does it use the neutron API to learn about the network topology, or does it
> use its internal structures to simulate path resolution like in a real
> network? If the latter is correct, then what happens when the network topology
> is changed in the neutron DB (due to user intervention or other actions) and
> the "local" data is invalid?

When a router is updated (internal/external interface added/removed, floating IPs
added or removed), an update is sent to the relevant L3 agent, which adds
or removes the interface or floating IP.

> 
> 
> 
> 
> 
> 
> --
> Kevin Benton
> 
> 



Re: [openstack-dev] [neutron] Neutron agent internal data structures

2015-03-09 Thread Assaf Muller


- Original Message -
> Thank you. I am looking to read this state and compare it with neutron DB. If
> there are agents that do it already, I would like only to learn if I can
> change the polling period. Can you advise about the most efficient way to
> learn which agent does it and which doesn't?

The L3 agent has a periodic task (60 seconds, non-configurable, see
neutron/agent/l3/agent.py, L3NATAgent.periodic_sync_routers_task) that gets all of
the routers hosted on the agent from the DB, but only if an error condition is
met; i.e. this periodic task doesn't do anything at all unless an error occurred
during the configuration of a router. The agent only performs a full sync (get all
routers from the Neutron DB and configure them locally) when it starts up.

The DHCP and OVS agents are similar in that they don't actually get the state
from the Neutron DB periodically unless an error has occurred.
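The resync behaviour described above boils down to a dirty-flag pattern; a toy sketch of the idea (not the actual agent code):

```python
class ResyncingAgent:
    """Full sync on startup, then only when an error re-arms the flag."""

    def __init__(self, db_routers):
        self.db_routers = db_routers  # stand-in for the Neutron DB
        self.fullsync = True          # set so the first run syncs everything
        self.local_state = {}

    def periodic_sync_routers_task(self):
        if not self.fullsync:
            return False              # common case: nothing to do
        # Fetch everything from the DB and reconfigure locally.
        self.local_state = {r: "configured" for r in self.db_routers}
        self.fullsync = False
        return True

    def on_router_config_error(self):
        self.fullsync = True          # an error re-arms the periodic sync


agent = ResyncingAgent(["router-a", "router-b"])
assert agent.periodic_sync_routers_task() is True    # startup full sync
assert agent.periodic_sync_routers_task() is False   # steady state: no-op
agent.on_router_config_error()
assert agent.periodic_sync_routers_task() is True    # resync after error
```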

> 
> Leonid
> 
> On Sun, Mar 8, 2015 at 12:02 AM, Salvatore Orlando < sorla...@nicira.com >
> wrote:
> 
> 
> 
> Hi Leo,
> 
> Every agent keeps an in-memory state throughout its execution anyway.
> The agents indeed have no persistent storage - at least not in the usual form
> of a database. However, they rely on data other than the neutron database.
> 
> For instance for the l2 agent, ovsdb itself is a source of information. The
> agent periodically scans it to detect interfaces which are brought up or
> down.
> As another example, the dhcp agent stores its current state in a 'data' directory
> (if you're using devstack it's usually /opt/stack/data/neutron/dhcp)
> 
> Hope this helps,
> Salvatore
> 
> 
> 
> 
> 
> On 7 March 2015 at 13:05, Leo Y < minh...@gmail.com > wrote:
> 
> 
> 
> 
> 
> Hello,
> 
> Where within the code of neutron agents I can find structure(s) that store
> network information? The agent has to know all current networks and ports in
> use by all VMs that are running in its compute node. Does anyone know where
> this information is stored except for neutron DB?
> 
> Thank you
> 
> 
> 
> 
> 
> 
> 
> 
> --
> Regards,
> Leo
> -
> I enjoy the massacre of ads. This sentence will slaughter ads without a messy
> bloodbath
> 
> 



Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-09 Thread John Dickinson

> On Mar 9, 2015, at 9:46 AM, Weidong Shao  wrote:
> 
> hi,
> 
> I have a standalone swift cluster with swauth as the auth module. By 
> standalone, I mean the cluster is not in the context of OpenStack, or 
> keystone server.

That's completely fine (and not uncommon at all).

> 
> Now I have moved ACL logic to application level and decided to have all data 
> in swift under one user account. I have a few questions on this change:
> 
> 1) is it possible to migrate swauth to the tempAuth? (assuming tempauth will 
> be supported in newer swift versions).

Why?

Yes, tempauth is still in swift. It's mostly there for testing. I wouldn't 
recommend using it in production.


> 
> 2) Is there a way to migrate data associated with one user account to another 
> user?

"User account": do you mean the identity, or the account part of the Swift URL?
If the former, then changing the reference in the auth system should probably
work. If the latter, then you'll need to copy from one account to the other
(Swift supports account-to-account server-side copy).
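For the account-to-account case, the server-side copy is a PUT to the destination object carrying copy headers. The X-Copy-From / X-Copy-From-Account headers are Swift's object-copy mechanism, but cross-account copy needs a recent enough Swift release, so check yours before relying on it. A sketch of the header construction:

```python
def account_copy_headers(src_account, src_container, src_obj):
    """Build headers for a server-side copy PUT to the destination object."""
    return {
        "X-Copy-From-Account": src_account,           # source account
        "X-Copy-From": "/%s/%s" % (src_container, src_obj),
        "Content-Length": "0",                        # body comes from the source
    }

headers = account_copy_headers("AUTH_old", "photos", "cat.jpg")
assert headers["X-Copy-From"] == "/photos/cat.jpg"
```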


> 
> Thanks,
> Weidong





[openstack-dev] [TripleO] Fedora 21 support

2015-03-09 Thread Dan Prince
I've been chipping away at what it would take to upgrade our Fedora CI
jobs to Fedora 21. The motivation was primarily that Fedora 21 has more
recent packages, which would simplify some of the package requirements in
Delorean. Besides this, Fedora 21 has been out for months now, so we
really should be using it already...

Here is what I've found so far:

https://review.openstack.org/#/c/161836/ (Create Fedora 21 softlink
for /var/run/mariadb)

https://review.openstack.org/#/c/161840/ (Work around 20-neutron-selinux
issues on F21) Can someone have a look at a proper fix here?

https://review.openstack.org/#/c/162442/ (Fedora 21 tftp files fix...
copy in syslinux modules)

Install Fedora kernel-modules pkg for iscsi_tcp
(https://review.openstack.org/#/c/162443/)


For TripleO CI:

https://review.openstack.org/#/c/162399/ (Add support for Fedora 21
image downloads)

Lastly there is this CI job where I've been initially testing things:
https://review.openstack.org/#/c/161277/7

The job is almost working (after cherry-picking all the above fixes),
except that there is an open qemu-img bug that is exposed
by our Cinder/Glance image conversion code. See here for details:
https://bugzilla.redhat.com/show_bug.cgi?id=1200043.

We can either fix this bug... or patch Cinder to avoid it (perhaps by
using -t writeback... thanks eharney for help with this!).

Dan





Re: [openstack-dev] [all] too 0.13.0 release

2015-03-09 Thread Doug Hellmann
Apologies to the tooz team, auto-correct ate the Z in the name in the
subject line. - Doug

On Mon, Mar 9, 2015, at 09:44 AM, Doug Hellmann wrote:
> The Oslo team is pumped to announce the release of:
> 
> tooz 0.13.0: Coordination library for distributed systems.
> 
> For more details, please see the git log history below and:
> 
> http://launchpad.net/python-tooz/+milestone/0.13.0
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/python-tooz/
> 
> Notable changes
> 
> 
> We hope to make this the last release of the library for the Kilo cycle.
> 
> Changes in tooz 0.12..0.13.0
> 
> 
> 3d01a84 Two locks acquired from one coord must works
> 47f831f Updated from global requirements
> a879eb9 Releases locks in tests
> 60bf3af Allow coordinator non-string options and use them
> 9afaefd Since we use msgpack this can be more than a str
> 5b77b96 Updated from global requirements
> 
> Diffstat (except docs and test files)
> -
> 
> requirements-py3.txt| 11 +++--
> requirements.txt|  9 ++--
> setup.py|  8 +++-
> test-requirements.txt   |  3 ++
> tooz/coordination.py| 20 +++--
> tooz/drivers/file.py|  6 +--
> tooz/drivers/memcached.py   |  9 ++--
> tooz/drivers/mysql.py   | 93 ++---
> tooz/drivers/pgsql.py   | 42 ---
> tooz/drivers/redis.py   | 11 +
> tooz/drivers/zake.py|  6 ++-
> tooz/drivers/zookeeper.py   | 16 +--
> tooz/locking.py | 66 +
> tooz/utils.py   | 28 +
> 15 files changed, 271 insertions(+), 94 deletions(-)
> 
> 
> Requirements updates
> 
> 
> diff --git a/requirements-py3.txt b/requirements-py3.txt
> index 5b14e3c..9c359b3 100644
> --- a/requirements-py3.txt
> +++ b/requirements-py3.txt
> @@ -0,0 +1,3 @@
> +# The order of packages is significant, because pip processes them in
> the order
> +# of appearance. Changing the order has an impact on the overall
> integration
> +# process, which may cause wedges in the gate later.
> @@ -4,2 +7,2 @@ stevedore>=1.1.0
> -six>=1.7.0
> -iso8601
> +six>=1.9.0
> +iso8601>=0.1.9
> @@ -11,2 +14,2 @@ retrying>=1.2.3,!=1.3.0
> -oslo.utils>=1.0.0
> -redis
> +oslo.utils>=1.2.0   # Apache-2.0
> +redis>=2.10.0
> diff --git a/requirements.txt b/requirements.txt
> index 5f7325e..4f3931c 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -0,0 +1,3 @@
> +# The order of packages is significant, because pip processes them in
> the order
> +# of appearance. Changing the order has an impact on the overall
> integration
> +# process, which may cause wedges in the gate later.
> @@ -4 +7 @@ stevedore>=1.1.0
> -six>=1.7.0
> +six>=1.9.0
> @@ -8 +11 @@ pymemcache>=1.2
> -zake>=0.1
> +zake>=0.1.6 # Apache-2.0
> @@ -12 +15 @@ futures>=2.1.6
> -oslo.utils>=1.0.0
> +oslo.utils>=1.2.0   # Apache-2.0
> diff --git a/test-requirements.txt b/test-requirements.txt
> index 9839287..6c62a75 100644
> --- a/test-requirements.txt
> +++ b/test-requirements.txt
> @@ -0,0 +1,3 @@
> +# The order of packages is significant, because pip processes them in
> the order
> +# of appearance. Changing the order has an impact on the overall
> integration
> +# process, which may cause wedges in the gate later.



Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Joshua Harlow

Let's do it,

One that I can think of off the top of my head would be to have 
`join_group` and associated functions have the ability to automatically 
create the group if it does not exist already (instead of raising an 
error and then having the user deal with the failure themselves).


I'm also thinking we might want to have better docs on the supported 
features of each backend/driver; as not all of them are fully functional 
(and may never be?) and we should make sure people are aware of this 
(from the docs, not by reading the code).


-Josh

Julien Danjou wrote:

Hi fellow developers,

It'd be nice to achieve a 1.0 release for tooz, as some projects are
already using it, and more are going to adopt it.

I think we should collect features and potential bugs/limitations we'd
like to have and fix before that. Ideas, thoughts?

Cheers,





Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Alex Xu
ok, no problem, will take a look it tomorrow.

2015-03-09 20:18 GMT+08:00 Christopher Yeoh :

>
>
> On Mon, Mar 9, 2015 at 10:08 PM, John Garbutt 
> wrote:
>
>> +1
>>
>> Please could you submit a dev ref for this?
>>
>> We can argue on the review, a bit like this one:
>>
>> https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst
>>
>> I think it'd also be a good idea to add a testcase (use test_plugins
> directory where you can
> define your own controller which is never published) for each example so
> they don't get out of date
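For anyone following along, the microversion approach being advocated in this thread amounts to version-gated responses rather than bolt-on extensions. A toy illustration of the idea (this is not Nova's actual decorator machinery, just the dispatch concept):

```python
class ServerController:
    """Version-gated response: 'tags' only appears at microversion >= 2.4."""

    def show(self, server, req_version):
        body = {"id": server["id"], "name": server["name"]}
        if req_version >= (2, 4):  # the server-tags microversion
            body["tags"] = server.get("tags", [])
        return body


controller = ServerController()
server = {"id": "abc", "name": "vm1", "tags": ["web"]}
assert "tags" not in controller.show(server, (2, 3))
assert controller.show(server, (2, 4))["tags"] == ["web"]
```

In Nova itself the gating is expressed with a decorator on the controller method rather than an inline check, but the effect on the response body is the same.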
>
> Regards,
>
> Chris
>
>
>
>


Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Joshua Harlow
Another idea; provide some way for tooz to handle the heartbeating 
(instead of clients having to do this); perhaps the tooz coordinator should 
take ownership of the thread that heartbeats (instead of clients having 
to do this on their own)? This avoids having each client create their 
own thread (or something else) that does the same thing...


Perhaps using/sharing:

http://docs.openstack.org/developer/taskflow/types.html#module-taskflow.types.periodic

Or making something else...
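A sketch of what coordinator-owned heartbeating could look like, assuming only that the coordinator exposes a heartbeat() method (as tooz drivers do):

```python
import threading


class HeartbeatRunner:
    """Background thread that calls coordinator.heartbeat() periodically."""

    def __init__(self, coordinator, interval):
        self._coordinator = coordinator
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run)
        self._thread.daemon = True

    def _run(self):
        # Event.wait doubles as an interruptible sleep.
        while not self._stop.wait(self._interval):
            self._coordinator.heartbeat()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

A coordinator could create one of these internally in its own start() and tear it down in stop(), so callers never have to manage the heartbeat thread themselves.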

-Josh

Joshua Harlow wrote:

Let's do it,

One that I can think of off the top of my head would be to have
`join_group` and associated functions have the ability to automatically
create the group if it does not exist already (instead of raising an
error and then having the user deal with the failure themselves).

I'm also thinking we might want to have better docs on the supported
features of each backend/driver; as not all of them are fully functional
(and may never be?) and we should make sure people are aware of this
(from the docs, not by reading the code).

-Josh

Julien Danjou wrote:

Hi fellow developers,

It'd be nice to achieve a 1.0 release for tooz, as some projects are
already using it, and more are going to adopt it.

I think we should collect features and potential bugs/limitations we'd
like to have and fix before that. Ideas, thoughts?

Cheers,







Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Adam Young

On 03/08/2015 02:28 PM, Morgan Fainberg wrote:
On March 8, 2015 at 11:24:37 AM, David Stanek (dsta...@dstanek.com 
) wrote:


On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer wrote:


can you elaborate on your reasoning that FK constraints should be
used less
overall?  or do you just mean that the client side should be
mirroring the same
rules that would be enforced by the FKs?


I don't think he means that we will use them less. Our SQL backends 
are full of them.  What Keystone can't do is rely on them because not 
all implementations of our backends support FKs.


100% spot on David. We support implementations that have no real 
concept of FK and we cannot assume that a cascade (or restrict) will 
occur on these implementations.




And even if the backends do, we split behavior across identity, 
assignments, and resources, and FKs cannot cross those; they can and 
will vary independently.




—Morga




--
David
blog:http://www.traceback.org
twitter:http://twitter.com/dstanek
www:http://dstanek.com







[openstack-dev] [Infra] Meeting Tuesday March 10th at 19:00 UTC

2015-03-09 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday March 10th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-03-03-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-03-03-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-03-03-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Clint Byrum
Excerpts from David Stanek's message of 2015-03-08 11:18:05 -0700:
> On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer  wrote:
> 
> > can you elaborate on your reasoning that FK constraints should be used less
> > overall?  or do you just mean that the client side should be mirroring the
> > same
> > rules that would be enforced by the FKs?
> >
> 
> I don't think he means that we will use them less.  Our SQL backends are
> full of them.  What Keystone can't do is rely on them because not all
> implementations of our backends support FKs.
> 

Note that they're also a huge waste of SQL performance. It's _far_ cheaper
to scale out application servers and garbage-collect using background jobs
like pt-archiver than it will ever be to scale out a consistent data-store
and do every single little bit of house keeping in real time.  So even
on SQL backends, I'd recommend just disabling and dropping FK constraints
if you expect any more than the bare minimum usage of Keystone.



Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Mike Bayer


Wei D  wrote:

> +1,
> 
>  
> 
> I am a fan of checking the constraints at the controller level instead of 
> relying on the FK constraints themselves, thanks.

Why shouldn’t the storage backends, be they relational or not, be tasked
with verifying integrity of data manipulations? If data integrity rules are
pushed out to the frontend, the frontend starts implementing parts of the
backend. Other front-ends to the same persistence backend might not have the
same rule checks, and you are now wide open for invalid data to be
persisted.

Front-ends should of course be encouraged to report on a potential issue in
integrity before proceeding with an operation, but IMO the backend should
definitely not allow the operation to proceed if the frontend fails to check
for a constraint. Persistence operations in which related objects must also
be modified in response to a primary object (e.g. a CASCADE situation),
else integrity will fail, should also be part of the backend, not the front end.
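To make the CASCADE case concrete, a minimal SQLite demonstration of the backend enforcing integrity on its own (illustrative only; Keystone's real schemas differ, and some of its backends have no FK support at all, which is exactly the constraint under discussion):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs FKs switched on
conn.execute("CREATE TABLE project (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE assignment (
                    id INTEGER PRIMARY KEY,
                    project_id INTEGER NOT NULL
                        REFERENCES project(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO project (id) VALUES (1)")
conn.execute("INSERT INTO assignment (id, project_id) VALUES (10, 1)")

# Deleting the parent cascades to the dependent row; no frontend
# cleanup code is required.
conn.execute("DELETE FROM project WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM assignment").fetchone()[0]
assert remaining == 0
```

The same schema with ON DELETE RESTRICT would instead refuse the parent delete while dependents exist, which is the other behavior debated in this thread.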





> Best Regards,
> 
> Dave Chen
> 
>  
> 
> From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com] 
> Sent: Monday, March 09, 2015 2:29 AM
> To: David Stanek; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE
> 
>  
> 
> On March 8, 2015 at 11:24:37 AM, David Stanek (dsta...@dstanek.com) wrote:
> 
> 
> On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer  wrote:
> 
> can you elaborate on your reasoning that FK constraints should be used less
> overall?  or do you just mean that the client side should be mirroring the 
> same
> rules that would be enforced by the FKs?
> 
> 
> I don't think he means that we will use them less.  Our SQL backends are full 
> of them.  What Keystone can't do is rely on them because not all 
> implementations of our backends support FKs.
> 
> 100% spot on David. We support implementations that have no real concept of 
> FK and we cannot assume that a cascade (or restrict) will occur on these 
> implementations.
> 
>  
> 
> —Morga
> 
>  
> 
> --
> 
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> 
> www: http://dstanek.com
> 
> 



[openstack-dev] [Fuel] Recent issues with our review workflow

2015-03-09 Thread Ryan Moe
Hi All,

I've noticed a few cases recently where reviews have been abandoned by
people who were not the original authors. These reviews were only days old
and there was no prior notice or discussion. This is both rude and
discouraging to contributors. Reasons for abandoning a review should be discussed on
the review and/or in email before any action is taken.

I would also like to talk about issues with our backporting procedure [0].
Over the past few weeks I've seen changes proposed to stable branches
before the change in master was merged. This serves no purpose other than
to increase our workload. We also run the risk of inconsistency between the
same commit on master and stable branches. Please, do not propose backports
until the change has been merged to master.

[0]
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

Thanks,
Ryan


Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread David Stanek
On Sun, Mar 8, 2015 at 10:28 PM, Chen, Wei D  wrote:

> +1,
>
>
>
> I am a fan of checking the constraints at the controller level instead of
> relying on the FK constraints themselves, thanks.
>

The Keystone controllers shouldn't do any business logic; that belongs in
the managers. The controllers should do nothing more than take web input
and convert it for use by the managers.


-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


Re: [openstack-dev] [nova] Request non-priority feature freeze exception for VIF_TYPE_TAP

2015-03-09 Thread Russell Bryant
On 02/25/2015 07:07 AM, Daniel P. Berrange wrote:
> On Wed, Feb 25, 2015 at 11:46:05AM +, Neil Jerram wrote:
>> Although we are past the non-priority deadline, I have been encouraged
>> to request this late exception for Project Calico's spec and code adding
>> VIF_TYPE_TAP to Nova.
> 
> I'm afraid you're also past the freeze exception request deadline. The
> meeting to decide upon exception requests took place a week & a half
> ago now.
> 
> So while I'd probably support inclusion of your new VIF driver to libvirt,
> you are out of luck for Kilo in terms of the process currently being
> applied to Nova.

I took a look at this is and it's such a small and low risk thing, I
think it's pretty harmless to give an exception to.  I'd probably say
the same for all new VIF types.  I'm happy to review this one (I pretty
much already did just looking into it).  If there are others that need
review, I'm happy to review those as well.

-- 
Russell Bryant



Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Mike Bayer


Clint Byrum  wrote:

> Excerpts from David Stanek's message of 2015-03-08 11:18:05 -0700:
>> On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer  wrote:
>> 
>>> can you elaborate on your reasoning that FK constraints should be used less
>>> overall?  or do you just mean that the client side should be mirroring the
>>> same
>>> rules that would be enforced by the FKs?
>> 
>> I don't think he means that we will use them less.  Our SQL backends are
>> full of them.  What Keystone can't do is rely on them because not all
>> implementations of our backends support FKs.
> 
> Note that they're also a huge waste of SQL performance. It's _far_ cheaper
> to scale out application servers and garbage-collect using background jobs
> like pt-archiver than it will ever be to scale out a consistent data-store
> and do every single little bit of house keeping in real time.  So even
> on SQL backends, I'd recommend just disabling and dropping FK constraints
> if you expect any more than the bare minimum usage of Keystone.

I'm about -1000 on disabling foreign key constraints. Any decision based on
“performance” IMHO has to be proven with benchmarks. Foreign keys on modern
databases like MySQL and Postgresql do not add overhead to any significant
degree compared to just the workings of the Python code itself (which means,
a benchmark here should be illustrating a tangible impact on the python
application itself). OTOH, the prospect of a database with failed
referential integrity is a recipe for disaster.   




Re: [openstack-dev] Missing button to send Ctrl+Alt+Del for SPICE Console

2015-03-09 Thread Ben Nemec
On 03/05/2015 09:34 AM, Andy McCrae wrote:
> Sorry to resurrect this almost a year later! I've recently run into this
> issue, and it does make logging into Windows instances, via the console,
> quite challenging.
> 
> On Fri, Jun 6, 2014 at 3:46 PM, Ben Nemec  wrote:
> 
>> This sounds like a reasonable thing to open a bug against Horizon on
>> Launchpad.
> 
> 
> I'm wondering if this is something that should go into Horizon or into
> spice-html5 itself? I noticed this was fixed in ovirt (
> https://bugzilla.redhat.com/show_bug.cgi?id=1014069), similar to how I
> imagine it would be fixed in Horizon. However, this would mean if, for
> whatever reason, you aren't using Horizon and get the link directly via
> nova (using get-spice-console) you wouldn't get the ctrl-alt-del button.
> 
> Is there a reason it wouldn't go into spice-html5, or that making this
> change in Horizon would be a better change?
> 

I'm not a horizon dev, so I'm not sure I'm really qualified to answer
that question.  That said, if it can be fixed in spice-html5, I agree
that seems like the way to go.



Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Joshua Harlow

Julien Danjou wrote:

On Mon, Mar 09 2015, Joshua Harlow wrote:


One that I can think of off the top of my head would be to have `join_group`
and associated functions have the ability to automatically create the group if
it does not exist already (instead of raising an error and then having the user
deal with the failure themselves).


That would be a wrapper that would just do try/join/except/create+join?


Ya, something like that, or something like this for those APIs:

def join_group(group, create_if_missing=False):
    ...




I'm also thinking we might want to have better docs on the supported features
of each backend/driver; as not all of them are fully functional (and may never
be?) and we should make sure people are aware of this (from the docs, not by
reading the code).


We could build a compatibility matrix in the doc.



+ 1

-Josh
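For concreteness, the try/join/except/create+join wrapper discussed above might look roughly like the following sketch. It runs against a minimal in-memory stand-in; the real tooz coordinator API (and its group-not-found exception) may differ in detail:

```python
class GroupNotCreated(Exception):
    """Stand-in for the 'group does not exist' error a driver raises."""


class FakeCoordinator:
    """Minimal in-memory stand-in for a coordination backend."""

    def __init__(self):
        self._groups = {}

    def create_group(self, group_id):
        self._groups.setdefault(group_id, set())

    def join_group(self, group_id, member):
        if group_id not in self._groups:
            raise GroupNotCreated(group_id)
        self._groups[group_id].add(member)


def join_group(coord, group_id, member, create_if_missing=False):
    """Join a group, optionally creating it first if it does not exist."""
    try:
        coord.join_group(group_id, member)
    except GroupNotCreated:
        if not create_if_missing:
            raise
        coord.create_group(group_id)
        coord.join_group(group_id, member)
```

A real implementation would also have to tolerate two members racing to create the same group (treat "group already exists" on create as success before retrying the join).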



Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Julien Danjou
On Mon, Mar 09 2015, Joshua Harlow wrote:

> One that I can think of off the top of my head would be to have `join_group`
> and associated functions have the ability to automatically create the group if
> it does not exist already (instead of raising an error and then having the user
> deal with the failure themselves).

That would be a wrapper that would just do try/join/except/create+join?

> I'm also thinking we might want to have better docs on the supported features
> of each backend/driver; as not all of them are fully functional (and may never
> be?) and we should make sure people are aware of this (from the docs, not by
> reading the code).

We could build a compatibility matrix in the doc.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




[openstack-dev] [all] debtcollector 0.3.0 release

2015-03-09 Thread Joshua Harlow

The Oslo team is glad to announce the release of:

debtcollector 0.3.0: A collection of Python deprecation patterns and
strategies that help you collect your technical debt in a non-
destructive manner.

For more details, please see the git log history below and:

http://launchpad.net/debtcollector/+milestone/0.3

Please report issues through launchpad:

http://bugs.launchpad.net/debtcollector

Changes in debtcollector 0.2.0..0.3.0
-

NOTE: Skipping requirement commits...

f3ea4d4 Add a removed module deprecation helper
bab8a5b Move to hacking 0.10
3d9e38f Add a 'removed_kwarg' function/method decorator
f0e07d5 Match the updated openstack-manuals description
f4abeae Format the method/class removals messages like the others
77edd33 Add examples of using the new removals decorator
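As a rough illustration of the pattern the new removed_kwarg decorator captures — a self-contained stdlib sketch of the idea, not debtcollector's actual implementation:

```python
import functools
import warnings


def removed_kwarg(name):
    """Warn when a keyword argument slated for removal is still passed."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if name in kwargs:
                warnings.warn("keyword argument %r is deprecated for removal"
                              % name, DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@removed_kwarg("verbose")
def fetch(url, verbose=False):
    # behaves as before; callers still passing verbose= get a warning
    return url
```

The "non-destructive" point in the package description is exactly this: the decorated function keeps working, callers just get a heads-up.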

Diffstat (except docs and test files)
-

README.rst  |   5 +-
debtcollector/removals.py   | 111 
++--

requirements.txt|   2 +-
setup.cfg   |   2 +-
setup.py|   8 +++
test-requirements.txt   |  12 ++--
tox.ini |   5 +-
10 files changed, 178 insertions(+), 36 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1abdc3e..6af7f18 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ Babel>=1.3
-six>=1.7.0
+six>=1.9.0
diff --git a/test-requirements.txt b/test-requirements.txt
index d494076..8592bde 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-hacking>=0.9.2,<0.10
+hacking<0.11,>=0.10.0
@@ -9,4 +9,4 @@ discover
-python-subunit
-sphinx>=1.1.2
-oslosphinx
-oslotest>=1.1.0.0a1
+python-subunit>=0.0.18
+sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
+oslosphinx>=2.2.0  # Apache-2.0
+oslotest>=1.2.0  # Apache-2.0
@@ -15 +15 @@ testscenarios>=0.4
-testtools>=0.9.34
+testtools>=0.9.36,!=1.2.0




Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Julien Danjou
On Mon, Mar 09 2015, Joshua Harlow wrote:

> Another idea; provide some way for tooz to handle the heartbeating (instead of
> clients having to do this); perhaps tooz coordinator should take ownership of
> the thread that heartbeats (instead of clients having to do this on their own)?
> This avoids having each client create their own thread (or something else) that
> does the same thing...

Shh, I'm not for this at all. What we should do is provide the ability
to plug tooz in an event loop but, as you know, most of the modules we
use are not providing such a feature so we're stuck on building a "while
True: heartbeat()" loop… :(

I think it's better to let the user run heartbeat() in its own
system (thread, event loop, whatever) and not mess with that.
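The "while True: heartbeat()" loop being debated is small enough to sketch. A plain-threading version a client could run on its own (hypothetical helper, not part of tooz):

```python
import threading


def run_heartbeat(heartbeat, interval=1.0):
    """Call heartbeat() every `interval` seconds in a daemon thread.

    Returns (thread, stop_event); set the event to stop the loop.
    """
    stop = threading.Event()

    def _loop():
        while not stop.is_set():
            heartbeat()          # let exceptions kill the thread loudly
            stop.wait(interval)  # sleeps, but wakes early when stopped

    thread = threading.Thread(target=_loop)
    thread.daemon = True
    thread.start()
    return thread, stop
```

A greenthread or asyncio deployment would want a different loop, which is exactly Julien's point about not baking one scheduling model into the library.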

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




[openstack-dev] [api] metadata and tagging guidelines up for review

2015-03-09 Thread Miguel Grinberg
Hi all,

I would like to invite you to review the guidelines for metadata and
tagging that I proposed to the API-WG. Links:

https://review.openstack.org/#/c/141229/
https://review.openstack.org/#/c/155620/

The idea with these guidelines is to come up with a set of recommendations
to be used going forward. This is not an attempt to describe the current
state of things, since that would be impossible due to the inconsistencies
in the implementations across projects.

If you have any questions or would like to discuss these proposals, feel
free to do it here, or else find me on our shiny new #openstack-api channel
(my nick is miguelgrinberg).

Thanks,

Miguel


Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Joshua Harlow

Julien Danjou wrote:

On Mon, Mar 09 2015, Joshua Harlow wrote:


Another idea; provide some way for tooz to handle the heartbeating (instead of
clients having to do this); perhaps tooz coordinator should take ownership of
the thread that heartbeats (instead of clients having to do this on their own)?
This avoids having each client create their own thread (or something else) that
does the same thing...


Shh, I'm not for this at all. What we should do is provide the ability
to plug tooz in an event loop but, as you know, most of the modules we
use are not providing such a feature so we're stuck on building a "while
True: heartbeat()" loop… :(

I think it's better to let the user run heartbeat() in its own
system (thread, event loop, whatever) and not mess with that.



Fair enough; the landscape right now is probably too 'scattered'
(asyncio, threads, greenthreads, blah blah...) to provide this; point taken.


-Josh



Re: [openstack-dev] [TripleO] Getting to a 1.0

2015-03-09 Thread Giulio Fidente

On 03/07/2015 04:34 AM, Dan Prince wrote:

On Tue, 2015-03-03 at 17:30 -0500, James Slagle wrote:

Hi,

Don't let the subject throw you off :)

I wasn't sure how to phrase what I wanted to capture in this mail, and
that seemed reasonable enough. I wanted to kick off a discussion about
what gaps people think are missing from TripleO before we can meet the
goal of realistically being able to use TripleO in production.

The things in my mind are:

Upgrades - I believe the community is trending away from the image
based upgrade rebuild process. The ongoing Puppet integration work is
integrated with Heat's SoftwareConfig/SoftwareDeployment features and
is package driven. There is still work to be done, especially around
supporting rollbacks, but I think this could be part of the answer to
how the upgrade problem gets solved.


+1 Using packages solves some problems very nicely. We haven't solved
all the CI related issues around using packages with upstream but it is
getting better. I mention this because it would be nice to have CI
testing on the upgrade process automated at some point...



HA - We have an implementation of HA in tripleo-image-elements today.
However, the Puppet codepath leaves that mostly unused. The Puppet
modules however do support HA. Is that the answer here as well?


In general most of the puppet modules support the required HA bits. We
are still working to integrate some of the final pieces here but in
general I expect this to proceed quickly.


going back to CI, I think this would benefit from an additional CI job

given we have a non-voting HA job running on precise/elements, I'd like 
to add one running on fedora/puppet, maybe initially non-voting as well


this said, it would also be nice to have a job which deploys additional 
block storage (cinder) and object storage (swift) nodes ...


... and to save some resources, maybe we can switch 
'check-tripleo-ironic-overcloud-f20puppet-nonha' and 
'check-tripleo-ironic-overcloud-precise-nonha' to deploy a single 
compute node instead of two



CLI - We have devtest. I'm not sure if anyone would argue that should
be used in production. It could be...but I don't think that was its
original goal and it shows. The downstreams of TripleO that I'm aware
of each ended up more or less having their own CLI tooling. Obviously
I'm only very familiar with one of the downstreams, but in some
instances I believe parts of devtest were reused, and other times not.
That begs the question, do we need a well represented unified CLI in
TripleO? We have a pretty good story about using Nova/Ironic/Heat[0]
to deploy OpenStack, and devtest is one such implementation of that
story. Perhaps we need something more production oriented.


I think this is an interesting idea and perhaps has some merit. I'd like
to see some specific examples showing how the unified CLI might make
things easier for end users...


I am of the same feeling; AFAIK devtest was meant to setup a development 
environment, not a production environment, more on this later



Baremetal management - To what extent should TripleO venture into this
space? I'm thinking things like discovery/introspection, ready state,
and role assignment. Ironic is growing features to expose things like
RAID management via vendor passthrough API's. Should TripleO take a
role in exercising those API's? It's something that could be built
into the flow of the unified CLI if we were to end up going that
route.

Bootstrapping - The undercloud needs to be
bootstrapped/deployed/installed itself. We have the seed vm to do
that. I've also worked on an implementation to install an undercloud
via an installation script assuming the base OS is already installed.
Are these the only 2 options we should consider, or are there other
ideas that will integrate better into existing infrastructure?


And also should we think about possibly renaming these? I find that many
times when talking about TripleO to someone new they find the whole
undercloud/overcloud thing confusing. Calling the undercloud the
"baremetal cloud" makes it click.


I don't think we need more; what I would like to have instead is a tool, 
targeted at end users, capable of setting up an undercloud without going 
through the seed


this said, I am really not sure if that should be a wrapper around 
devtest --no-undercloud or a tool which turns the existing base os into 
an undercloud; both seem to have pros and cons



Release Cadence with wider OpenStack - I'd love to be able to say on
the day that a new release of OpenStack goes live that you can use
TripleO to deploy that release in production...and here's how you'd do
it


personally, while I tried to join this conversation in the past, I am 
still unsure whether for tripleo a stable/master approach would work 
better or not than a synchronized release cadence

--
Giulio Fidente
GPG KEY: 08D733BA


Re: [openstack-dev] [Ironic] Thinking about our python client UX

2015-03-09 Thread Ruby Loo
Hi Devananda,

Thanks for bringing this up. I've seen some recent discussions about
changing our python-client so that it supports a range of versions of the
server. I think that makes sense and that's how/where we can fix the client
so that it supports requests/responses that are particular to a version.
(The trick is to do it so that the code doesn't become unwieldy.)
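One common shape for the range support Ruby describes is simple min/max negotiation. A sketch with microversions as (major, minor) tuples — illustrative only, not the actual python-ironicclient code:

```python
def negotiate_version(client_min, client_max, server_min, server_max):
    """Pick the highest microversion both client and server support.

    Versions are (major, minor) tuples, so plain tuple comparison
    gives the right ordering.
    """
    low = max(client_min, server_min)
    high = min(client_max, server_max)
    if low > high:
        raise ValueError("client and server API version ranges do not overlap")
    return high
```

For example, a client supporting 1.1 through 1.6 talking to a server supporting 1.1 through 1.9 would negotiate 1.6; disjoint ranges fail loudly instead of producing the kind of errors in the paste below.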

Our client has been "broken" since probably day 1 :-), so I don't think it
makes sense to have newer clients properly support Ironic servers prior to
when microversioning was added. It would be great to have, but I am not
sure the amount of effort to do that is warranted, given everything else on
our plate.

--ruby


On 7 March 2015 at 19:12, Devananda van der Veen 
wrote:

> Hi folks,
>
> Recently, I've been thinking more of how users of our python client
> will interact with the service, and in particular, how they might
> expect different instances of Ironic to behave.
>
> We added several extensions to the API this cycle, and along with
> that, also landed microversion support (I'll say more on that in
> another thread). However, I don't feel like we've collectively given
> nearly enough thought to the python client. It seems to work well
> enough for our CI testing, but is that really enough? What about user
> experience?
>
> In my own testing of the client versioning patch that landed on
> Friday, I noticed some pretty appalling errors (some unrelated to that
> patch) when pointing the current client at a server running the
> stable/juno code...
>
> http://paste.openstack.org/show/u91DtCf0fwRyv0auQWpx/
>
>
> I haven't filed specific bugs from this yet because I think the issue
> is large enough that we should talk about a plan first. I think that
> starts by agreeing on who the intended audience is and what level of
> forward-and-backward compatibility we are going to commit to [*],
> documenting that agreement, and then come up with a plan to deliver
> that during the L cycle. I'd like to start the discussion now, so I
> have put it on the agenda for Monday, but I also expect it will be a
> topic at the Vancouver summit.
>
> -Devananda
>
>
> [*] full disclosure
>
> I believe we have to commit to building a client that works well with
> every release since Icehouse, and the changes we've introduced in the
> client in this cycle do not.
>


Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Morgan Fainberg
On Monday, March 9, 2015, Mike Bayer  wrote:

>
>
> Wei D > wrote:
>
> > +1,
> >
> >
> >
> > I am fan of checking the constraints in the controller level instead of
> relying on FK constraints itself, thanks.
>
> Why shouldn’t the storage backends, be they relational or not, be tasked
> with verifying integrity of data manipulations? If data integrity rules are
> pushed out to the frontend, the frontend starts implementing parts of the
> backend. Other front-ends to the same persistence backend might not have
> the
> same rule checks, and you are now wide open for invalid data to be
> persisted.
>
> Front-ends should of course be encouraged to report on a potential issue in
> integrity before proceeding with an operation, but IMO the backend should
> definitely not allow the operation to proceed if the frontend fails to
> check
> for a constraint. Persistence operations in which related objects must also
> be modified in response to a primary object (e.g. a CASCADE situation),
> else integrity will fail, should also be part of the backend, not the
> front end.
>
>
>
>
You are assuming data is stored in an all SQL environment. In keystone it
is highly unlikely that you can make this assumption. When you discuss
users, groups, projects, domains, roles, assignments, etc... All of these
could be crossing SQL, LDAP, MongoDB, etc. in short, do not assume you are
even talking the same language.  This is why FKs are of minimal benefit to
us. The manager layer contains the business logic (and should) to handle
the cross-referencing of objects. The only FKs we have are for
uuid/PK identifiers at the moment (afaik), these are/should-be immutable.

So tl;dr, we have an architecture that is not conducive to foreign keys,
and therefore should not use them beyond bare-minimums, instead rely on the
manager to do business logic. This is not the case for all OpenStack
projects.


>
>
> > Best Regards,
> >
> > Dave Chen
> >
> >
> >
> > From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com ]
> > Sent: Monday, March 09, 2015 2:29 AM
> > To: David Stanek; OpenStack Development Mailing List (not for usage
> questions)
> > Subject: Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE
> CASCADE
> >
> >
> >
> > On March 8, 2015 at 11:24:37 AM, David Stanek (dsta...@dstanek.com
> ) wrote:
> >
> >
> > On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer  > wrote:
> >
> > can you elaborate on your reasoning that FK constraints should be used
> less
> > overall?  or do you just mean that the client side should be mirroring
> the same
> > rules that would be enforced by the FKs?
> >
> >
> > I don't think he means that we will use them less.  Our SQL backends are
> full of them.  What Keystone can't do is rely on them because not all
> implementations of our backends support FKs.
> >
> > 100% spot on David. We support implementations that have no real concept
> of FK and we cannot assume that a cascade (or restrict) will occur on these
> implementations.
> >
> >
> >
> > —Morgan
> >
> >
> >
> > --
> >
> > David
> > blog: http://www.traceback.org
> > twitter: http://twitter.com/dstanek
> >
> > www: http://dstanek.com
> >


Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Joshua Harlow

So another idea.

I know kazoo has/had the following:

https://github.com/python-zk/kazoo/pull/141

"Add shared locks and revocable shared lock support"

It might be nice if tooz had support for that (the kazoo PR was rejected
since there just weren't enough maintainers in the kazoo project to
support it; that hopefully can be different this time around if that PR
is fixed up...).


It might be nice if most of the tooz drivers had a similar feature...

Would people find that useful?

-Josh

Julien Danjou wrote:

Hi fellow developers,

It'd be nice to achieve a 1.0 release for tooz, as some projects are
already using it, and more are going to adopt it.

I think we should collect features and potential bugs/limitations we'd
like to have and fix before that. Ideas, thoughts?

Cheers,



Re: [openstack-dev] [horizon] Kilo angular virtual sprint

2015-03-09 Thread Tripp, Travis S
Hi everyone,

The virtual sprint[1] has been going really well. Last Thursday in the virtual 
sprint we got 3 patches reviewed and merged! (156810, 157128, 160093)  We also 
had good discussion on the transfer tables and spinner. With the transfer
tables, we decided to make them a parent dependency of launch instance so that 
we could ensure that the updates worked for the steps.  There also has been 
discussion on how to best style the data in the columns (translations, etc), 
but I’m not sure we had a final outcome other than making the steps depend on 
table changes from Richard / Kelly.  With the spinner, Tyr demo’d the look and 
feel after which David and the other folks on the call felt that it makes sense.

Can we do another 22:00 UTC meeting Tuesday (US March 10)?  This would be 2 – 
4:00 PM PDT. This seems to be the time that accommodates the most active 
developers on the features.

I’d like an opportunity for discussion on launch instance steps to happen if 
needed.  Primarily, there has been concern on table data integration and 
styling.

I went through the ether pad and propose the following patches for review and 
discussion in the next session.  Thai, if there are any other dependency 
patches you need to discuss for user tables, please let us know.

  *   Transfer tables update (warning icons, drag and drop enabled) - 
https://review.openstack.org/#/c/159541/ - Kelly Domino
 *   Discussion on how to style / translate data
  *   Metadata display widget - https://review.openstack.org/#/c/151745/ (note 
has dependency on https://review.openstack.org/#/c/136437 )(Szymon)
  *   Magic Search angular widget - under dev by Eucalyptus, XStatic under 
review, creating demo on Users, then on to Instances (Randy, Nikunj)
  *   Password match validation (tqtran) 
https://review.openstack.org/#/c/161344/
  *   Wait spinner on wizard-based launch instance submit: 
https://review.openstack.org/#/c/158819/ - Tyr Johanson
 *   Lower priority, but just discussed on Thursday, so wrap up.

In addition, I have been working on ways to break the model patch out into
different injectable service dependencies, on which I would like to get some input.

[1] https://etherpad.openstack.org/p/horizon-kilo-virtual-sprint

Thanks,
Travis

For reference:

From: David Lyle mailto:dkly...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, February 16, 2015 at 11:19 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [horizon]

A couple of high priority items for Horizon's Kilo release could use some 
targeted attention to drive progress forward. These items are related to 
angularJS based UX improvements, especially Launch Instance and the conversion 
of the Identity Views.

These efforts are suffering from a few issues, education, consensus on 
direction, working through blockers and drawn out dev and review cycles. In 
order to help insure these high priority issues have the best possible chance 
to land in Kilo, I have proposed a virtual sprint to happen this week. I 
created an etherpad [1] with proposed dates and times. Anyone who is interested 
is welcome to participate, please register your intent in the etherpad and 
availability.

David





[openstack-dev] [api][neutron] Re: Best API for generating subnets from pool

2015-03-09 Thread Ryan Moats

Sorry, pulling this from the archives [1] because I seem to have lost the
original in my mailbox...

> 1) (this is more for neutron people) Is there a real use case for
> requesting specific gateway IPs and allocation pools when allocating from a
> pool? If not, maybe we should let the pool set a default gateway IP and
> allocation pools. The user can then update them with another call. Another
> option would be to provide "subnet templates" from which a user can choose.
> For instance one template could have the gateway as first IP, and then a
> single pool for the rest of the CIDR.

The use case that always comes to mind when questions like this are asked is
"green-field" vs "brown-field" - if I am deploying something new, then no, I
don't need the ability to specify gateway IPs. However, if I am dealing with
an existing deployment migration, then I will want to be able to represent
what is already out there.

> 2) Is the action of creating a subnet from a pool better realized as a
> different way of creating a subnet, or should there be some sort of "pool
> action"? Eg.:
>
> POST /subnet_pools/my_pool_id/subnet
> {'prefix_len': 24}
>
> which would return a subnet response like this (note prefix_len might not
> be needed in this case)
>
> {'id': 'meh',
>  'cidr': '192.168.0.0/24',
>  'gateway_ip': '192.168.0.1',
>  'pool_id': 'my_pool_id'}
>
> I am generally not a big fan of RESTful actions. But in this case the
> semantics of the API operation are that of a subnet creation from within a
> pool, so that might be ok.

This would be my preferred approach of the two given.
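Whichever API shape wins, the server-side allocation step is the same: carve the next free subnet of the requested prefix length out of the pool. A sketch with the stdlib ipaddress module (illustrative, not Neutron's implementation):

```python
import ipaddress


def allocate_subnet(pool_cidr, allocated_cidrs, prefix_len):
    """Return the first free subnet of the given prefix length in the pool."""
    pool = ipaddress.ip_network(pool_cidr)
    taken = [ipaddress.ip_network(c) for c in allocated_cidrs]
    for candidate in pool.subnets(new_prefix=prefix_len):
        # skip candidates that collide with anything already handed out
        if not any(candidate.overlaps(t) for t in taken):
            return str(candidate)
    raise ValueError("pool %s has no free /%d left" % (pool_cidr, prefix_len))
```

For instance, allocating a /24 from 192.168.0.0/16 with 192.168.0.0/24 already taken skips the used block and returns 192.168.1.0/24 — the kind of response body shown in the quoted example.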

Ryan Moats

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058538.html


[openstack-dev] [Ironic] Weekly subteam status report

2015-03-09 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)


(As of Mon, 09 Mar 17:00 UTC)
Open: 133 (+4)
3 new (0), 32 in progress (0), 0 critical, 18 high and 8 incomplete


Drivers
==

IPA (jroll/JayF/JoshNang)
--
pxe_ipa job now running as non-voting in check \o/

iLO (wanyen)
--
Milestone for these CRs needs to be set to 'kilo-3' as these are important
fixes for iLO drivers.
- https://bugs.launchpad.net/ironic/+bug/1418327
  pxe_ilo and iscsi_ilo driver sets capabilities:boot_mode in node property
  if there is none

- https://bugs.launchpad.net/ironic/+bug/1412559
  agent_ilo deploy driver does not set boot mode properly

iRMC (naohirot)
-
iRMC Virtual Media driver is being tested using PRIMERGY servers; so far
there are no major issues.

At the same time, Kilo-3 higher priority features source code are being
investigated so that Fujitsu can contribute to the review.

iRMC virtual-media-deploy has been bumped to Liberty due to time
constraints on the core review team


Oslo (GheRivero)
==

New patch sets waiting:
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:oslo,n,z
- oslo.policy
- oslo.log
- oslo.review
- Sync from oslo.incubator

Pending generate config:
- Consensus in how to generate config files.
- Legacy config_generator scripts still present, but options from oslo libs
  are no longer generated (only those in oslo.incubator)
- General approach by other projects (
https://review.openstack.org/#/c/128005/)
  is awful.
- Possible solutions:
- Continue using legacy config.generator scripts
  - no external opts, oslo.log, olso.policy,... will be generated
  - no longer maintained and removed from the repo
- Move all config options to one single file (or a couple of them:
drivers, ironic)
- ... still digging...



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard


Re: [openstack-dev] [Fuel] Recent issues with our review workflow

2015-03-09 Thread Jay Pipes

+1 on both points, Ryan.

On 03/09/2015 01:21 PM, Ryan Moe wrote:

Hi All,

I've noticed a few times recently where reviews have been abandoned by
people who were not the original authors. These reviews were only days
old and there was no prior notice or discussion. This is both rude and
discouraging to contributors. Reasons for abandoning should be discussed
on the review and/or in email before any action is taken.

I would also like to talk about issues with our backporting procedure
[0]. Over the past few weeks I've seen changes proposed to stable
branches before the change in master was merged. This serves no purpose
other than to increase our workload. We also run the risk of
inconsistency between the same commit on master and stable branches.
Please, do not propose backports until the change has been merged to master.

[0]
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series

Thanks,
Ryan




Re: [openstack-dev] [horizon] bower

2015-03-09 Thread Matthew Farina
Richard, thanks for sharing this. I hope we can move to bower sooner rather
than later.

On Sat, Mar 7, 2015 at 5:26 PM, Richard Jones 
wrote:

> On Sun, 8 Mar 2015 at 04:59 Michael Krotscheck 
> wrote:
>
>> Anyone wanna hack on a bower mirror puppet module with me?
>>
>
> BTW are you aware of this spec for bower/Horizon/infra?
>
> https://review.openstack.org/#/c/154297/
>
>
> Richard
>
>


Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-09 Thread Weidong Shao
John,

thanks for the reply. See questions inline.

thanks,
Weidong

On Mon, Mar 9, 2015 at 8:23 AM John Dickinson  wrote:

>
> > On Mar 9, 2015, at 9:46 AM, Weidong Shao  wrote:
> >
> > hi,
> >
> > I have a standalone swift cluster with swauth as the auth module. By
> standalone, I mean the cluster is not in the context of OpenStack, or
> keystone server.
>
> That's completely fine (and not uncommon at all).
>
> >
> > Now I have moved ACL logic to application level and decided to have all
> data in swift under one user account. I have a few questions on this change:
> >
> > 1) is it possible to migrate swauth to the tempAuth? (assuming tempauth
> will be supported in newer swift versions).
>
> Why?
>
> Yes, tempauth is still in swift. It's mostly there for testing. I wouldn't
> recommend using it in production.
>
>
I noticed the swauth project is not actively maintained. In my local
testing, swauth did not work after I upgraded Swift to the latest release.

I want to migrate off swauth. What are the auth alternatives besides tempauth?


>
> >
> > 2) Is there a way to migrate data associated with one user account to
> another user?
>
> "user account" Do you mean the identity or the account part of the Swift
> URL? If the former, then changing the reference in the auth system should
> probably work. If the latter, then you'll need to copy from one account to
> the other (Swift supports account-to-account server-side copy).
>
>
>
I think it is both.

The former applies because I plan to change the auth system. I hope that
all data access stays intact with only an auth URL change, so that I do
not need to migrate the data after the auth change.

On account-to-account server-side copy, is there an operation similar to
"mv"? I.e., I want the data associated with one account to assume
ownership under a new account, but I do not want to copy the actual data
on the disks.
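For reference, the account-to-account server-side copy John mentioned is driven by headers on a PUT to the destination object; the sketch below only builds those headers. The header names are as I understand them from the Swift API docs (verify against your Swift version), and note there is no true server-side "mv": a move is a copy followed by a delete of the source, and the bytes are rewritten on disk.

```python
# Hypothetical sketch of Swift's account-to-account server-side copy:
# the cluster copies the object internally when a PUT to the destination
# carries copy headers, so the client never downloads the data.

def server_side_copy_headers(src_account, src_container, src_obj):
    """Build headers for a PUT-based copy from another account."""
    return {
        # Path is /container/object, relative to the source account.
        "X-Copy-From": "/%s/%s" % (src_container, src_obj),
        # Required for cross-account copies (newer Swift releases only).
        "X-Copy-From-Account": src_account,
        # The PUT itself carries no body; Swift reads the source object.
        "Content-Length": "0",
    }

headers = server_side_copy_headers("AUTH_old", "photos", "cat.jpg")
```

The PUT is then issued against the destination account's URL (e.g. /v1/AUTH_new/photos/cat.jpg) with these headers and an empty body.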


> >
> > Thanks,
> > Weidong


Re: [openstack-dev] [tooz] 1.0 goals

2015-03-09 Thread Kiall Mac Innes
On 09/03/15 13:56, Julien Danjou wrote:
> Hi fellow developers,
> 
> It'd be nice to achieve a 1.0 release for tooz, as some projects are
> already using it, and more are going to adopt it.
> 
> I think we should collect features and potential bugs/limitations we'd
> like to have and fix before that. Ideas, thoughts?
> 
> Cheers,
> 

Funnily enough, we just started looking at tooz for Designate,
specifically hoping for functionality similar to Kazoo's partitioner[1].

For our metering and billing use case, we'll need to periodically emit
a "dns.zone.exists" event for every zone. Unlike, for example, nova,
where nova-compute handles this and the number of events it needs to
emit each hour is bounded by the number of VMs you can squeeze onto a
single compute node, there can be millions of zones owned by a single
node. Having something like Kazoo's partitioner would certainly help here!

Thanks,
Kiall

[1]: http://kazoo.readthedocs.org/en/latest/api/recipe/partitioner.html
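The partitioning idea behind Kazoo's SetPartitioner — dividing a large set of work items among cooperating workers — can be sketched without ZooKeeper. This toy hash-based version is purely illustrative (the function name and shapes are made up); Kazoo's real recipe adds the coordination needed for the split to survive workers joining and leaving:

```python
import hashlib

# Toy sketch: each worker deterministically claims a disjoint slice of
# the zone set, so millions of zones can be divided among N nodes and no
# node has to emit "exists" events for all of them.

def my_partition(zones, worker_index, worker_count):
    """Return the zones this worker is responsible for."""
    mine = []
    for zone in zones:
        digest = int(hashlib.md5(zone.encode("utf-8")).hexdigest(), 16)
        if digest % worker_count == worker_index:
            mine.append(zone)
    return mine

zones = ["example%d.org." % i for i in range(1000)]
# Four workers, each computing its own slice independently.
parts = [my_partition(zones, i, 4) for i in range(4)]
```

Because every zone hashes to exactly one worker index, the slices are disjoint and together cover the whole set.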





Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-09 Thread Matthew Farina
David,

FYI, the last time I chatted with John Dickinson I learned there are
numerous API elements that are not documented. They are not meant to be
private, but the docs have not kept up. How should we handle that?


Thanks,
Matt Farina

On Sat, Mar 7, 2015 at 5:25 PM, David Lyle  wrote:

> I agree that Horizon should not be requiring optional headers. Changing
> status of bug.
>
> On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes  wrote:
>
>> Added [swift] to topic.
>>
>> On 03/03/2015 07:41 AM, Matthew Farina wrote:
>>
>>> Radoslaw,
>>>
>>> Unfortunately the documentation for OpenStack has some holes. What you
>>> are calling a private API may be something missed in the documentation.
>>> Is there a documentation bug on the issue? If not one should be created.
>>>
>>
>> There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP
>> headers are part of the public Swift API:
>>
>> http://developer.openstack.org/api-ref-objectstorage-v1.html
>>
>> I don't believe this is a bug in the Swift API documentation, either.
>> John Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is
>> required for the Swift implementation of container replication (John,
>> please do correct me if wrong on that).
>>
>> But that is the private implementation and not part of the public API.
>>
>>  In practice OpenStack isn't a specification and implementation. The
>>> documentation has enough missing information you can't treat it this
>>> way. If you want to contribute to improving the documentation I'm sure
>>> the documentation team would appreciate it. The last time I looked there
>>> were a number of undocumented public swift API details.
>>>
>>
>> The bug here is not in the documentation. The bug is that Horizon is
>> coded to rely on HTTP headers that are not in the Swift API. Horizon should
>> be fixed to use .get('X-Timestamp') instead of doing
>> ['X-Timestamp'] in its view pages for container details. There are
>> already patches up that the Horizon developers have, IMO erroneously,
>> rejected stating this is a problem in Ceph RadosGW for not properly
>> following the Swift API).
>>
>> Best,
>> -jay
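The one-line defensive fix Jay describes — using .get() for headers a backend may legitimately omit — can be sketched as follows (the function name is illustrative, not Horizon's actual code):

```python
# Sketch: treat optional Swift response headers as optional.

def container_timestamp(headers):
    """Read an optional Swift header without assuming it is present."""
    # headers["X-Timestamp"] raises KeyError against backends such as
    # Ceph RadosGW that do not emit the header; .get() degrades
    # gracefully, returning None so the view can simply show nothing.
    return headers.get("X-Timestamp")

assert container_timestamp({}) is None
assert container_timestamp({"X-Timestamp": "1425861234.5"}) == "1425861234.5"
```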
>>
>>  Best of luck,
>>> Matt Farina
>>>
>>> On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski
>>> mailto:rzarzyn...@mirantis.com>> wrote:
>>>
>>> Guys,
>>>
>>> I would like to discuss a problem which can be seen in Horizon: breaking
>>> the boundaries of the public, well-specified Object Storage API in favour
>>> of utilizing Swift-specific extensions. Ticket #1297173 [1] may
>>> serve
>>> as a good example of such violation. It is about relying on
>>> non-standard (in the terms of OpenStack Object Storage API v1) and
>>> undocumented HTTP header provided by Swift. In order to make
>>> Ceph RADOS Gateway work correctly with Horizon, developers had to
>>> inspect sources of Swift and implement the same behaviour.
>>>
>>>  From my perspective, that practice breaks the mission of
>>> OpenStack
>>> which is much more than delivering yet another IaaS/PaaS
>>> implementation.
>>> I think its main goal is to provide a universal set of APIs covering
>>> all
>>> functional areas relevant for cloud computing, and to place that set
>>> of APIs in front of as many implementations as possible. Having an open
>>> source reference implementation of a particular API is required to
>>> prove
>>> its viability, but is secondary to having an open and documented API.
>>>
>>> I have full understanding that situations where the public OpenStack
>>> interfaces are insufficient to get the work done might exist.
>>> However, the introduction of a dependency on an implementation-specific
>>> feature
>>> (especially without giving the users a choice via e.g. some
>>> configuration option) is not the proper way to deal with the problem.
>>>  From my point of view, such cases should be handled with adoption of
>>> new, carefully designed and documented version of the given API.
>>>
>>> In any case I think that Horizon, at least basic functionality,
>>> should
>>> work with any storage which provides Object Storage API.
>>> That being said, I'm willing to contribute such patches, if we decide
>>> to go that way.
>>>
>>> Best regards,
>>> Radoslaw Zarzynski
>>>
>>> [1] https://bugs.launchpad.net/horizon/+bug/1297173
>>>
>>>
>>>
>>>
>>>

Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE CASCADE

2015-03-09 Thread Mike Bayer


Morgan Fainberg  wrote:

> 
> 
> On Monday, March 9, 2015, Mike Bayer  wrote:
> 
> 
> Wei D  wrote:
> 
> > +1,
> >
> >
> >
> > I am a fan of checking the constraints at the controller level instead of 
> > relying on the FK constraints themselves, thanks.
> 
> Why shouldn’t the storage backends, be they relational or not, be tasked
> with verifying integrity of data manipulations? If data integrity rules are
> pushed out to the frontend, the frontend starts implementing parts of the
> backend. Other front-ends to the same persistence backend might not have the
> same rule checks, and you are now wide open for invalid data to be
> persisted.
> 
> Front-ends should of course be encouraged to report on a potential issue in
> integrity before proceeding with an operation, but IMO the backend should
> definitely not allow the operation to proceed if the frontend fails to check
> for a constraint. Persistence operations in which related objects must also
> be modified in response to a primary object (e.g. a CASCADE situation),
> else integrity will fail, should also be part of the backend, not the front 
> end.
> 
> 
> 
> 
> You are assuming data is stored in an all-SQL environment. In Keystone it is 
> highly unlikely that you can make this assumption. When you discuss users, 
> groups, projects, domains, roles, assignments, etc., all of these could be 
> crossing SQL, LDAP, MongoDB, etc. In short, do not assume you are even 
> speaking the same language. This is why FKs are of minimal benefit to us.

You should read my paragraph above again; I referred to “the storage
backends, **be they relational or not**, be tasked with verifying
integrity”. Which means, for example in an LDAP system where deleting a
parent key means all the child keys are automatically deleted, that is what
I mean by “the backend has verified integrity”. The controller didn’t need
to dip into the LDAP backend’s system and make sure that the child keys of
the parent were removed first.   The LDAP system naturally performs this
task.

In a relational backend, it should not be possible to perform an operation
where a row is in place which refers to a primary key that no longer exists.
This should be independent of the system which refers to this schema, even
if that system might be unaware that the backend is in fact relational. I’m
a little surprised this is suddenly controversial.

> 
> So tl;dr, we have an architecture that is not conducive to foreign keys, and 
> therefore should not use them beyond the bare minimum, instead relying on the 
> manager to do business logic. This is not the case for all OpenStack projects.

If your relational backend contains more than one table, and any of these
tables happen to store primary key identifiers from some of the other
tables, then foreign keys are relevant and necessary.
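A minimal illustration of the point, using SQLite (the table names are made up for the example and are not Keystone's actual schema; SQLite needs the pragma, while most database servers enforce foreign keys by default):

```python
import sqlite3

# With an FK in place, the database itself refuses to orphan child rows,
# regardless of which frontend issues the DELETE.

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when on
conn.execute("CREATE TABLE project (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE assignment (
    id INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL
        REFERENCES project(id) ON DELETE RESTRICT)""")
conn.execute("INSERT INTO project (id) VALUES (1)")
conn.execute("INSERT INTO assignment (id, project_id) VALUES (10, 1)")

try:
    conn.execute("DELETE FROM project WHERE id = 1")
    deleted = True
except sqlite3.IntegrityError:
    deleted = False  # the backend protected the child row
```

Even if a frontend forgets its controller-level check, the DELETE fails and no assignment row is left pointing at a project that no longer exists.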


> > Best Regards,
> >
> > Dave Chen
> >
> >
> >
> > From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
> > Sent: Monday, March 09, 2015 2:29 AM
> > To: David Stanek; OpenStack Development Mailing List (not for usage 
> > questions)
> > Subject: Re: [openstack-dev] [Keystone]ON DELETE RESTRICT VS ON DELETE 
> > CASCADE
> >
> >
> >
> > On March 8, 2015 at 11:24:37 AM, David Stanek (dsta...@dstanek.com) wrote:
> >
> >
> > On Sun, Mar 8, 2015 at 1:37 PM, Mike Bayer  wrote:
> >
> > can you elaborate on your reasoning that FK constraints should be used less
> > overall?  or do you just mean that the client side should be mirroring the 
> > same
> > rules that would be enforced by the FKs?
> >
> >
> > I don't think he means that we will use them less.  Our SQL backends are 
> > full of them.  What Keystone can't do is rely on them because not all 
> > implementations of our backends support FKs.
> >
> > 100% spot on David. We support implementations that have no real concept of 
> > FK and we cannot assume that a cascade (or restrict) will occur on these 
> > implementations.
> >
> >
> >
> > —Morga
> >
> >
> >
> > --
> >
> > David
> > blog: http://www.traceback.org
> > twitter: http://twitter.com/dstanek
> >
> > www: http://dstanek.com
> >

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Jay Pipes

On 03/08/2015 08:10 AM, Alex Xu wrote:

Thanks to Jay for pointing this out! If we have agreement on this and
document it, that will be great for guiding developers on how to add new
APIs.

I know we didn't want extensions for the API. But I think we still
need modularity. I don't think we should put everything in a single
file; that file will become huge in the future and hard to maintain.


I don't think everything should be in a single file either. In fact, 
I've never advocated for that.



We can make the 'extension' not configurable, replace 'extension' with
another name, and deprecate the extension info API in the future. But
that does not mean we should put everything in one file.


I didn't say that in my email. I'm not sure where you got the impression 
I want to put everything in one file?



For modularity, we need to define what should go in a separate module
(it is an extension now). There are three cases:

1. Add a new resource
 This is totally worth putting in a separate module.


Agreed.


2. Add a new sub-resource
 Like server-tags: I prefer to put it in a separate module; I don't
think putting another 100 lines of code in servers.py is a good choice.


Agreed, which is exactly what I said in my email:

"Similarly, new microversion API functionality should live in a
module, as a top-level (or subcollection) Controller in
/nova/api/openstack/compute/, and should not be in the
/nova/api/openstack/compute/plugins/ directory. Why? Because it's
not a plugin."


3. Extend attributes and methods of an existing resource
Like adding new attributes to servers; we can choose one of the existing
modules to put it in, just like this patch:
https://review.openstack.org/#/c/155853/
But server-tags is a sub-resource, so we can put it in its own
module.


Agreed, and that's what I put in my email.


If we don't want to support extensions right now, we can begin by not
showing server-tags in the extension info API. That means the extension
info is frozen now, and we deprecate the extension info API in a later
version.


I don't understand what you're saying here. Could you elaborate? What I 
am asking for is for new functionality (like the server-tags 
subcollection resource), just add a new module called 
/nova/api/openstack/compute/server_tags.py, create a Controller object 
in that file with the new server tags resource, and don't use any of the 
API extensions framework whatsoever.


In addition to that, for the changes to the main GET 
/servers/{server_id} resource, use microversions to decorate the 
/nova/api/openstack/compute/servers.py.Controller.show() method for 2.4 
and add a "tags" key to the dict (note: NOT a "os-server-tags:tags" key) 
returned by GET /servers/{server_id}. No use of API extensions needed.
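A toy sketch of that idea follows. Nova's real framework uses the @wsgi.Controller.api_version decorator on Controller methods; this standalone version only illustrates how one method body can serve both microversions, with the 2.4+ response growing a plain "tags" key rather than a namespaced "os-server-tags:tags" key bolted on by an extension:

```python
# Minimal stand-in for microversion dispatch (illustrative only).

MIN_TAGS_VERSION = (2, 4)

def show(server, requested_version):
    """Build the GET /servers/{id} response body for a microversion."""
    body = {"id": server["id"], "name": server["name"]}
    # Clients requesting 2.4+ get the new attribute; older clients see
    # exactly the pre-2.4 response, so nothing breaks.
    if requested_version >= MIN_TAGS_VERSION:
        body["tags"] = server.get("tags", [])
    return body

server = {"id": "abc", "name": "web1", "tags": ["prod", "web"]}
old = show(server, (2, 3))  # pre-tags response
new = show(server, (2, 4))  # response with "tags"
```

The version check lives next to the rest of the show() logic, which is the encapsulation the microversion decorators provide in nova itself.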


Best,
-jay


Thanks
Alex

2015-03-08 8:31 GMT+08:00 Jay Pipes mailto:jaypi...@gmail.com>>:

Hi Stackers,

Now that microversions have been introduced to the Nova API (meaning
we can now have novaclient request, say, version 2.3 of the Nova API
using the special X-OpenStack-Nova-API-Version HTTP header), is
there any good reason to require API extensions at all for *new*
functionality.

Sergey Nikitin is currently in the process of code review for the
final patch that adds server instance tagging to the Nova API:

https://review.openstack.org/#__/c/128940


Unfortunately, for some reason I really don't understand, Sergey is
being required to create an API extension called "os-server-tags" in
order to add the server tag functionality to the API. The patch
implements the 2.4 Nova API microversion, though, as you can see
from this part of the patch:


https://review.openstack.org/#__/c/128940/43/nova/api/__openstack/compute/plugins/v3/__server_tags.py



What is the point of creating a new "plugin"/API extension for this
new functionality? Why can't we just modify the
nova/api/openstack/compute/__server.py Controller.show() method and
decorate it with a 2.4 microversion that adds a "tags" attribute to
the returned server dictionary?


https://github.com/openstack/__nova/blob/master/nova/api/__openstack/compute/servers.py#__L369



Because we're using an API extension for this new server tags
functionality, we are instead having the extension "extend" the
server dictionary with an "os-server-tags:tags" key containing the
list of string tags.

This is ugly and pointless. We don't need to use API extensions any
more for this stuff.

A client knows that server tags are supported by the 2.4 API
microversion. If the client requests the 2.4+ API, then we should
just include the "tags" attribute in the server dictionary.

Similarly, new microversion API functionality should l

Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-09 Thread Paul Michali
I guess I'll vote for (D), so that there is the possibility of early
(1400 UTC) and late (2100 UTC) slots on alternating weeks, given we
don't have much to discuss lately, and then changing to (C) if things
pick up.

Let's discuss at Tuesday's meeting (note DST change for US folks), at 1500
UTC.



PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Fri, Mar 6, 2015 at 1:14 AM, Joshua Zhang 
wrote:

> Hi all,
>
> I would also vote for (A) with 1500 UTC which is 23:00 in Beijing time
> -:)
>
> On Fri, Mar 6, 2015 at 1:22 PM, Mohammad Hanif  wrote:
>
>>   Hi all,
>>
>>  I would also vote for (C) with 1600 UTC or later. This will hopefully
>> increase participation from the Pacific time zone.
>>
>>  Thanks,
>> —Hanif.
>>
>>   From: Mathieu Rohon
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Thursday, March 5, 2015 at 1:52 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings
>>
>> Hi,
>>
>>  I'm fine with C) and 1600 UTC would be more adapted for EU time Zone :)
>>
>>  However, I agree that the neutron-vpnaas meetings were mainly focused on
>> maintaining the current IPSec implementation: managing the slip out,
>> adding StrongSwan support and adding functional tests.
>>  Maybe we will get a broader audience once we speak about adding new
>> use cases such as edge-vpn.
>>  Edge-vpn use cases overlap with the Telco WG VPN use case [1]. Maybe
>> those edge-vpn discussions should occur during the Telco WG meeting?
>>
>> [1]
>> https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#VPN_Instantiation
>>
>> On Thu, Mar 5, 2015 at 3:02 AM, Sridhar Ramaswamy 
>> wrote:
>>
>>> Hi Paul.
>>>
>>>  I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630 UTC
>>> (or later).
>>>
>>>  The meetings so far were indeed quite useful. I guess the current busy
>>> Kilo cycle is also contributing to the low turnout. As things pick up
>>> going forward, this forum will be quite useful for discussing edge-vpn
>>> and, perhaps, other VPN variants.
>>>
>>>  - Sridhar
>>>
>>>  On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali  wrote:
>>>
  Hi all! The email that I sent on 2/24 didn't make it to the mailing
 list (no wonder I didn't get responses!). I think I had an issue with
 the email address I used - sorry for the confusion!

  So, I'll hold the meeting today (1500 UTC meeting-4, if it is still
 available), and we can discuss this...


  We've been having very low turnout for meetings for the past several
 weeks, so I'd like to ask those in the community interested in VPNaaS, what
 the preference would be regarding meetings...

  A) hold at the same day/time, but only on-demand.
 B) hold at a different day/time.
 C) hold at a different day/time, but only on-demand.
 D) hold as a on-demand topic in main Neutron meeting.

  Please vote your interest, and provide desired day/time, if you pick
 B or C. The fallback will be (D), if there's not much interest anymore for
 meeting, or we can't seem to come to a consensus (or super-majority :)

  Regards,

  PCM

  Twitter: @pmichali
 TEXT: 6032894458
 PCM (Paul Michali)

  IRC pc_m (irc.freenode.com)
 Twitter... @pmichali



>>
>
>
> --
> Best Regards
> Zhang Hua(张华)
> Software Engineer | Canonical
> IRC:  zhhuabj
>

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Jay Pipes

On 03/09/2015 07:32 AM, Christopher Yeoh wrote:


So the first thing is that I think we want to distinguish between
plugins as a REST API user or operator concept and plugins as a tool
developers use as a framework to support the Nova REST API. As I've
mentioned before, I've no problem with the feature set of the API
being fixed (per microversion) across all Nova deployments. Get back
to me when we have consensus on that, and it's trivial to implement;
we'll no longer have the concept of core and extension/plugin.


Why are we continuing to add API extensions for new stuff?


But plugin-like implementations using Stevedore as a tool for developers
to maintain modularity have proven very useful in keeping the complexity
level lower and the interactions between modules much clearer.


Uhm, I beg to differ. The "plugin" implementation using Stevedore has 
made things way more complicated than they need to be.


It would be much simpler to have a single directory called 
/nova/api/openstack/compute/ with modules representing each resource 
collection controller.


servers.py is an example of this, where in v2 I think we have/had the
most complex method, and even with all the fix-up work which has been
done on it, it is still very complicated to understand.


It doesn't *need* to be more complicated. The microversions framework 
decorators allow us to encapsulate the differences between microversions 
for a particular method.


Best,
-jay


On Sun, Mar 8, 2015 at 11:01 AM, Jay Pipes mailto:jaypi...@gmail.com>> wrote:

Hi Stackers,

Now that microversions have been introduced to the Nova API (meaning
we can now have novaclient request, say, version 2.3 of the Nova API
using the special X-OpenStack-Nova-API-Version HTTP header), is
there any good reason to require API extensions at all for *new*
functionality.

Sergey Nikitin is currently in the process of code review for the
final patch that adds server instance tagging to the Nova API:

https://review.openstack.org/#__/c/128940


Unfortunately, for some reason I really don't understand, Sergey is
being required to create an API extension called "os-server-tags" in
order to add the server tag functionality to the API. The patch
implements the 2.4 Nova API microversion, though, as you can see
from this part of the patch:


https://review.openstack.org/#__/c/128940/43/nova/api/__openstack/compute/plugins/v3/__server_tags.py



What is the point of creating a new "plugin"/API extension for this
new functionality? Why can't we just modify the
nova/api/openstack/compute/__server.py Controller.show() method and
decorate it with a 2.4 microversion that adds a "tags" attribute to
the returned server dictionary?


Actually I think it does more than just add extra response information:
- it adds an extra tags parameter to show
- it doesn't add it to index, but it probably should add the response
information to detail to be consistent with the rest of the API
- it adds a new resource, /servers/server_id/tags, with create, delete
and delete-all supported; I don't think these belong in servers.py


https://github.com/openstack/__nova/blob/master/nova/api/__openstack/compute/servers.py#__L369



Because we're using an API extension for this new server tags
functionality, we are instead having the extension "extend" the
server dictionary with an "os-server-tags:tags" key containing the
list of string tags.

This is ugly and pointless. We don't need to use API extensions any
more for this stuff.


So we had a prefix rule originally in v2 to allow for extensions and to
guarantee no name clashes. I'd be happy to remove this requirement, even
removing old prefixes, as long as we have consensus.

A client knows that server tags are supported by the 2.4 API
microversion. If the client requests the 2.4+ API, then we should
just include the "tags" attribute in the server dictionary.

Similarly, new microversion API functionality should live in a
module, as a top-level (or subcollection) Controller in
/nova/api/openstack/compute/, and should not be in the
/nova/api/openstack/compute/__plugins/ directory. Why? Because it's
not a plugin.

So I don't see how that changes whether we're using plugins (from a user
point of view) or not. The good news for you is that fixing the shambles
of a directory structure for the API is on the list of things to do; it
just wasn't a high priority for us in Kilo, where getting v2.1 and
microversions out was. For example, we have v3 in the directory path as
well for historical reasons, and we also have a contrib directory in
compute, and none of those are really "contrib" now either. Now the
nova/

Re: [openstack-dev] [Horizon] [swift] dependency on non-standardized, private APIs

2015-03-09 Thread Anne Gentle
On Mon, Mar 9, 2015 at 2:32 PM, Matthew Farina  wrote:

> David,
>
> FYI, the last time I chatted with John Dickinson I learned there are
> numerous API elements that are not documented. They are not meant to be
> private, but the docs have not kept up. How should we handle that?
>
>
I've read through this thread and the bug comments and searched through the
docs and I'd like more specifics: which docs have not kept up? Private API
docs for swift internal workings? Or is this a header that could be in
_some_ swift (not ceph) deployments?

Thanks,
Anne


>
> Thanks,
> Matt Farina
>
> On Sat, Mar 7, 2015 at 5:25 PM, David Lyle  wrote:
>
>> I agree that Horizon should not be requiring optional headers. Changing
>> status of bug.
>>
>> On Tue, Mar 3, 2015 at 5:51 PM, Jay Pipes  wrote:
>>
>>> Added [swift] to topic.
>>>
>>> On 03/03/2015 07:41 AM, Matthew Farina wrote:
>>>
 Radoslaw,

 Unfortunately the documentation for OpenStack has some holes. What you
 are calling a private API may be something missed in the documentation.
 Is there a documentation bug on the issue? If not one should be created.

>>>
>>> There is no indication that the X-Timestamp or X-Object-Meta-Mtime HTTP
>>> headers are part of the public Swift API:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html
>>>
>>> I don't believe this is a bug in the Swift API documentation, either.
>>> John Dickinson (cc'd) mentioned that the X-Timestamp HTTP header is
>>> required for the Swift implementation of container replication (John,
>>> please do correct me if wrong on that).
>>>
>>> But that is the private implementation and not part of the public API.
>>>
>>>  In practice OpenStack isn't a specification and implementation. The
 documentation has enough missing information you can't treat it this
 way. If you want to contribute to improving the documentation I'm sure
 the documentation team would appreciate it. The last time I looked there
 were a number of undocumented public swift API details.

>>>
>>> The bug here is not in the documentation. The bug is that Horizon is
>>> coded to rely on HTTP headers that are not in the Swift API. Horizon should
>>> be fixed to use .get('X-Timestamp') instead of doing
>>> ['X-Timestamp'] in its view pages for container details. There are
>>> already patches up that the Horizon developers have, IMO erroneously,
>>> rejected stating this is a problem in Ceph RadosGW for not properly
>>> following the Swift API).
>>>
>>> Best,
>>> -jay
>>>
>>>  Best of luck,
 Matt Farina

 On Tue, Mar 3, 2015 at 9:59 AM, Radoslaw Zarzynski
 mailto:rzarzyn...@mirantis.com>> wrote:

 Guys,

 I would like to discuss a problem which can be seen in Horizon:
 breaking
 the boundaries of the public, well-specified Object Storage API in
 favour
 of utilizing Swift-specific extensions. Ticket #1297173 [1] may
 serve
 as a good example of such violation. It is about relying on
 non-standard (in the terms of OpenStack Object Storage API v1) and
 undocumented HTTP header provided by Swift. In order to make
 Ceph RADOS Gateway work correctly with Horizon, developers had to
 inspect sources of Swift and implement the same behaviour.

  From my perspective, that practice breaks the mission of
 OpenStack
 which is much more than delivering yet another IaaS/PaaS
 implementation.
 I think its main goal is to provide a universal set of APIs
 covering all
 functional areas relevant for cloud computing, and to place that set
 of APIs in front of as many implementations as possible. Having an open
 source reference implementation of a particular API is required to
 prove
 its viability, but is secondary to having an open and documented
 API.

 I have full understanding that situations where the public OpenStack
 interfaces are insufficient to get the work done might exist.
 However, introduction of dependency on implementation-specific
 feature
 (especially without giving the users a choice via e.g. some
 configuration option) is not the proper way to deal with the
 problem.
  From my point of view, such cases should be handled with adoption
 of
 new, carefully designed and documented version of the given API.

 In any case, I think that Horizon, at least its basic functionality,
 should
 work with any storage which provides the Object Storage API.
 That being said, I'm willing to contribute such patches, if we
 decide
 to go that way.

 Best regards,
 Radoslaw Zarzynski

 [1] https://bugs.launchpad.net/horizon/+bug/1297173

 

Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread Sean Dague
On 03/09/2015 03:37 PM, Jay Pipes wrote:
> On 03/08/2015 08:10 AM, Alex Xu wrote:
>> Thanks to Jay for pointing this out! If we have agreement on this and document
>> it, that will be great for guiding developers on how to add new APIs.
>>
>> I know we don't want extensions for the API. But I think we still
>> need modularity. I don't think we should put everything in a single
>> file; that file will become huge in the future and hard to maintain.
> 
> I don't think everything should be in a single file either. In fact,
> I've never advocated for that.
> 
>> We can make the 'extension' not configurable, replace 'extension' with
>> another name, and deprecate the extension info API in the future. But
>> that does not mean we should put everything in one file.
> 
> I didn't say that in my email. I'm not sure where you got the impression
> I want to put everything in one file?
> 
>> For modularity, we need to define what should be in a separate module (it
>> is an extension now). There are three cases:
>>
>> 1. Add a new resource
>>  This is totally worth putting in a separate module.
> 
> Agreed.
> 
>> 2. Add a new sub-resource
>>  Like server-tags, I prefer to put it in a separate module; I don't
>> think putting another 100 lines of code in servers.py is a good choice.
> 
> Agreed, which is exactly what I said in my email:
> 
> "Similarly, new microversion API functionality should live in a
> module, as a top-level (or subcollection) Controller in
> /nova/api/openstack/compute/, and should not be in the
> /nova/api/openstack/compute/plugins/ directory. Why? Because it's
> not a plugin."
> 
>> 3. Extend attributes and methods of an existing resource
>> Like adding new attributes to servers, we can choose one of the existing
>> modules to put it in. Just like this patch:
>> https://review.openstack.org/#/c/155853/
>> But server-tags is a sub-resource, so we can put it in its own
>> module.
> 
> Agreed, and that's what I put in my email.
> 
>> If we don't want to support extensions right now, we can begin by not
>> showing server-tags in the extension info API first. That means the
>> extension info is frozen now, and we deprecate the extension info API in
>> a later version.
> 
> I don't understand what you're saying here. Could you elaborate? What I
> am asking for is for new functionality (like the server-tags
> subcollection resource), just add a new module called
> /nova/api/openstack/compute/server_tags.py, create a Controller object
> in that file with the new server tags resource, and don't use any of the
> API extensions framework whatsoever.
> 
> In addition to that, for the changes to the main GET
> /servers/{server_id} resource, use microversions to decorate the
> /nova/api/openstack/compute/servers.py.Controller.show() method for 2.4
> and add a "tags" key to the dict (note: NOT a "os-server-tags:tags" key)
> returned by GET /servers/{server_id}. No use of API extensions needed.
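[A minimal sketch of the approach Jay describes, using a plain version tuple for dispatch; Nova's real plumbing differs (it uses its api_version decorator and request objects), so this is an illustration of the idea only:]

```python
# Simplified stand-in for a microversioned Controller.show() method.
# req_version is the client's negotiated microversion as a tuple.
def show(req_version, server):
    body = {"id": server["id"], "name": server["name"]}
    # From microversion 2.4 on, expose a plain "tags" key -- not an
    # extension-namespaced "os-server-tags:tags" key.
    if req_version >= (2, 4):
        body["tags"] = server.get("tags", [])
    return body

print(show((2, 3), {"id": "x", "name": "vm1", "tags": ["db"]}))
# -> {'id': 'x', 'name': 'vm1'}
print(show((2, 4), {"id": "x", "name": "vm1", "tags": ["db"]}))
# -> {'id': 'x', 'name': 'vm1', 'tags': ['db']}
```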

So possibly another way to think about this: what Nova supported was
previously signaled by the extension list, and our code was refactored
into a form that supported optional loading by that unit.

As we're making things less optional, it probably makes sense to evolve
the API code tree to look more like our REST resource tree. Yes, that
means servers.py ends up being big, but it is less confusing to have all
servers-related code in that file than spread across a bunch of other files.

So I'd agree that in this case server tags probably should just be in
servers.py. I also think long term we should do some "plugin collapse"
for stuff that's all really just features on one resource tree so that
the local filesystem code structure looks a bit more like the REST url tree.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] oslo.log 1.0.0 release

2015-03-09 Thread Doug Hellmann


On Mon, Mar 9, 2015, at 09:21 AM, Doug Hellmann wrote:
> The Oslo team is excited to announce the release of:
> 
> oslo.log 1.0.0: oslo.log library
> 
> For more details, please see the git log history below and:
> 
> http://launchpad.net/oslo.log/+milestone/1.0.0
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/oslo.log
> 
> Notable changes
> 
> 
> We hope to make this the last release of the library for the Kilo cycle.
> 
> Changes in oslo.log 0.4.0..1.0.0
> 
> 
> 2142405 Updated from global requirements
> 2bf8164 Make use_syslog=True log to syslog via /dev/log
> cc8d42a update urllib3.util.retry log level to WARN
> 
> Diffstat (except docs and test files)
> -
> 
> oslo_log/_options.py | 2 ++
> oslo_log/log.py  | 6 --
> requirements.txt | 4 ++--
> 3 files changed, 8 insertions(+), 4 deletions(-)
> 
> 
> Requirements updates
> 
> 
> diff --git a/requirements.txt b/requirements.txt
> index 61a5b83..54ada9b 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -9,2 +9,2 @@ iso8601>=0.1.9
> -oslo.config>=1.6.0  # Apache-2.0
> -oslo.context>=0.1.0 # Apache-2.0
> +oslo.config>=1.9.0  # Apache-2.0
> +oslo.context>=0.2.0 # Apache-2.0
> 

The gnocchi team found an issue with this release in their test
environment because their test dependencies ended up mixing the
incubated and library versions of the log code. The symptom is a test
failure with the message:

DuplicateOptError: duplicate option: default_log_levels

The simplest solution in this case was to update the application to use
the log library instead of the incubated logging code.
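[The collision can be illustrated with a small stand-alone sketch. This is not oslo.config's code; it is an analogous registry showing why loading both the incubated and the library copies of the same option definitions fails:]

```python
# Pure-Python analogue of oslo.config's duplicate-option check: both
# the incubated logging code and oslo.log try to register
# "default_log_levels" against one shared registry.
class DuplicateOptError(Exception):
    pass

registry = {}

def register_opt(name, default):
    if name in registry:
        raise DuplicateOptError("duplicate option: %s" % name)
    registry[name] = default

register_opt("default_log_levels", ["amqp=WARN"])      # incubated copy
try:
    register_opt("default_log_levels", ["amqp=WARN"])  # oslo.log copy
except DuplicateOptError as exc:
    print(exc)  # -> duplicate option: default_log_levels
```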

If you encounter a similar issue, please check with your Oslo liaison or
in #openstack-oslo if you would like help with updating to use oslo.log.

Doug



Re: [openstack-dev] [all] oslo.config 1.9.1 release

2015-03-09 Thread Doug Hellmann


On Mon, Mar 9, 2015, at 09:09 AM, Doug Hellmann wrote:
> The Oslo team is glad to announce the release of:
> 
> oslo.config 1.9.1: Oslo Configuration API
> 
> For more details, please see the git log history below and:
> 
> http://launchpad.net/oslo.config/+milestone/1.9.1
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/oslo.config
> 
> Notable changes
> 
> 
> We hope to make this the last release of the library for the Kilo cycle.
> 
> Changes in oslo.config 1.9.0..1.9.1
> ---
> 
> 0f550d7 Generate help text indicating possible values
> 9a6de3f fix bug link in readme
> 
> Diffstat (except docs and test files)
> -
> 
> README.rst  |  2 +-
> oslo_config/generator.py|  4 
> 4 files changed, 37 insertions(+), 1 deletion(-)
> 

This change to the config sample generator is breaking on StrOpt option
definitions where None is one of the predefined set of choices. We have
a bug [1] with a patch in the gate now, and expect to have an updated
release later today.
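[The failure mode is roughly of the following shape (illustrative only, not the actual generator code, and the choice values are made up):]

```python
# Building an "Allowed values" help line by joining the choices breaks
# when None is one of the predefined choices.
choices = ["file", "none", None]

try:
    help_text = "Allowed values: " + ", ".join(choices)
except TypeError as exc:
    print("broken:", exc)

# A fix is to stringify each choice first, rendering None explicitly.
help_text = "Allowed values: " + ", ".join(
    "<None>" if c is None else c for c in choices)
print(help_text)  # -> Allowed values: file, none, <None>
```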

Doug

[1] https://bugs.launchpad.net/oslo.config/+bug/1429981



[openstack-dev] [TripleO][Tuskar] Common library for shared code

2015-03-09 Thread Jan Provazník

Hi,
it would make sense to have a library for the code shared by Tuskar UI
and CLI (I mean the TripleO CLI - whatever it will be, not tuskarclient,
which is just a thin wrapper for the Tuskar API). There are various actions
which consist of more than a single API call to an OpenStack service;
to give some examples:


- nodes registration - loading a list of nodes from a user-defined
file; this means parsing a CSV file and then feeding Ironic with the data
- decommission a resource node - this might consist of disabling
monitoring/health checks on this node, then gracefully shutting down the node
- stack breakpoints - setting breakpoints will allow manual 
inspection/validation of changes during stack-update, user can then 
update nodes one-by-one and trigger rollback if needed
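As an illustration, the first action (nodes registration) might look roughly like this; the CSV column names and payload layout are hypothetical, not a defined file format:

```python
import csv
import io

# Hypothetical sketch: parse a user-supplied CSV of nodes and build the
# payloads a client would then send to Ironic when enrolling each node.
def parse_nodes(csv_text):
    nodes = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        nodes.append({
            "driver": row["driver"],
            "driver_info": {
                "ipmi_address": row["ipmi_address"],
                "ipmi_username": row["ipmi_username"],
            },
        })
    return nodes

sample = "driver,ipmi_address,ipmi_username\nipmi,10.0.0.5,admin\n"
print(parse_nodes(sample))
```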


It would be nice to have a place (library) where the code could live and
where it could be shared both by the web UI and the CLI. We already have
the os-cloud-config [1] library, which focuses only on configuring an OS
cloud after first installation (setting endpoints, certificates,
flavors...), so not all shared code fits there. It would make sense to
create a new library where this code could live. This lib could be placed
on Stackforge for now, and it might have a structure very similar to
os-cloud-config's.


And most importantly... what is the best name? Some ideas were:
- tuskar-common
- tripleo-common
- os-cloud-management - I like this one, it's consistent with the 
os-cloud-config naming


Any thoughts? Thanks, Jan


[1] https://github.com/openstack/os-cloud-config



Re: [openstack-dev] [heat][qa] Forward plan for heat scenario tests

2015-03-09 Thread Steve Baker

On 10/03/15 03:23, Matthew Treinish wrote:

On Mon, Mar 09, 2015 at 09:52:54AM -0400, David Kranz wrote:

Since test_server_cfn_init was recently moved from tempest to the heat
functional tests, there are no subclasses of OrchestrationScenarioTest.
If there is no plan to add any more heat scenario tests to tempest I would
like to remove that class. So I want to confirm that future scenario tests
will go in the heat tree.


I think it's perfectly fine to remove it; it probably should
have been part of Steve's patch which removed the testing. It's unused code
right now, so there is no reason to keep it around.


+1



[openstack-dev] new failures running Barbican functional tests

2015-03-09 Thread Steve Heyman
We are getting issues running the Barbican functional tests - they seem to have
started sometime between Thursday last week (3/5) and today (3/9).

It seems that oslo.config is giving us DuplicateOptErrors now.  Our functional
tests use oslo.config (via tempest) as well as the Barbican server code.
Looking into it... oslo_config is at 1.9.1 and oslo_log at 1.0.0, while a
system I have working has oslo_config 1.9 and oslo_log 0.4 (both with the same
Barbican code).

We are also getting "Failure: ArgsAlreadyParsedError (arguments already parsed:
cannot register CLI option)", which seems to be related.

Is this a known issue? Is there a launchpad bug yet?

Thanks!



Re: [openstack-dev] [Neutron] Question about bug 1314614

2015-03-09 Thread Sławek Kapłoński
Hello,

Answers below

On Sunday, 8 March 2015 at 13:53:51, Ian Wells wrote:
> On 6 March 2015 at 13:16, Sławek Kapłoński  wrote:
> > Hello,
> > 
> > Today I found bug https://bugs.launchpad.net/neutron/+bug/1314614 because
> > I
> > have such problem on my infra.
> 
> (For reference, if you delete a port that a Nova is using - it just goes
> ahead and deletes the port from Neutron and leaves the VIF in an odd state,
> disconnected and referring to a port that no longer exists.)

I know, and for me the problem is that in such a situation Nova still reports
an IP on the instance, while Neutron may provide the same IP to another VM
because it is free in Neutron's DB.

> 
> I saw that bug is "In progress" but change is abandoned quite long time
> 
> > ago. I
> > was wondering is it possible that neutron will send notification to nova
> > that
> > such port was deleted in neutron? I know that in Juno neutron is sending
> > notifications to nova when port is UP on compute node so maybe same
> > mechanism
> > can be used to notify nova that port is no longer exists and nova should
> > delete it?
> 
> What behaviour are you looking for?
I was thinking that maybe Neutron could send a notification to Nova in such a
situation, and Nova could then do an "interface-detach".

> 
> The patch associated with the bug attempts to stop deletion of used
> ports.  It falls far short of implementing consistent behaviour, which
> would have to take into account everything that used ports (including DHCP,
> L3, network services, etc.), it would probably need to add an 'in-use' flag
> to the port itself, and it changes the current API behaviour rather
> markedly.  We could go there but there's much more code to be written.
> 


> Someone on the bug suggests removing the VIF from the instance if the port
> is deleted, but I don't think that's terribly practical - for some instance
> containers it would not be possible.
Ok, so if it is not possible for some containers, then I was wrong and the
idea of a notification to Nova is not good. I was using only KVM VMs, so for
me such a solution would be possible, I think.
> 
> The current behaviour does seem to be consistent and logical, if perhaps
> unexpected and a bit rough around the edges.  I'm not sure orphaning and
> isolating a VIF is actually a bad thing if you know it's going to happen,
> though it needs to be clear from the non-Neutron side that the VIF is no
> longer bound to a port, which is where things seem to fall down right now.
> 
The only problem I see is the IP assignment in such a situation.

> I've also found no documentation about when delete should work and when it
> shouldn't, or what happens if the port is bound (the API and CLI document
> say that the operation 'deletes a port' and not much else).

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl



Re: [openstack-dev] [heat][qa] Forward plan for heat scenario tests

2015-03-09 Thread Steven Hardy
On Mon, Mar 09, 2015 at 09:52:54AM -0400, David Kranz wrote:
> Since test_server_cfn_init was recently moved from tempest to the heat
> functional tests, there are no subclasses of OrchestrationScenarioTest.
> If there is no plan to add any more heat scenario tests to tempest I would
> like to remove that class. So I want to confirm that future scenario tests
> will go in the heat tree.

+1 on removing it - IMO having the scenario/functional tests in the heat
tree is working out well, so this shouldn't be needed.

Steve



[openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-09 Thread Devananda van der Veen
Hi all,

I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.

He's been consistently providing good code reviews, and been in the top
five active reviewers for the last 90 days and top 10 for the last 180
days. Two cores have recently approached me to let me know that they, too,
find his reviews valuable.

Furthermore, Ramakrishnan has made significant code contributions to Ironic
over the last year. While working primarily on the iLO driver, he has also
done a lot of refactoring of the core code, touched on several other
drivers, and maintains the proliantutils library on stackforge. All in all,
I feel this demonstrates a good and growing knowledge of the codebase and
architecture of our project, and feel he'd be a valuable member of the core
team.

Stats, for those that want them, are below the break.

Best Regards,
Devananda



http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87

http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt


Re: [openstack-dev] [nova][api] Microversions. And why do we need API extensions for new API functionality?

2015-03-09 Thread melanie witt
On Mar 9, 2015, at 13:14, Sean Dague  wrote:

> So possibly another way to think about this is our prior signaling of
> what was supported by Nova was signaled by the extension list. Our code
> was refactored into a way that supported optional loading by that unit.
> 
> As we're making things less optional it probably makes sense to evolve
> the API code tree to look more like our REST resource tree. Yes, that
> means servers.py ends up being big, but it is less confusing that all
> servers related code is in that file vs all over a bunch of other files.
> 
> So I'd agree that in this case server tags probably should just be in
> servers.py. I also think long term we should do some "plugin collapse"
> for stuff that's all really just features on one resource tree so that
> the local filesystem code structure looks a bit more like the REST url tree.

I think this makes a lot of sense. When I read the question, "why are server 
tags being added as an extension?" the answer that comes to mind first is, 
"because the extension framework is there and that's how things have been done 
so far."

I think the original thinking on extensions was, make everything optional so 
users can enable/disable as they please, operators can disable any feature by 
removing the extension. Another benefit is the ability for anyone to add a 
(non-useful to the community at-large) feature without having to patch in 
several places.

I used to be for extensions for the aforementioned benefits, but now I tend to 
think it's too flexible and complex. It's so flexible that you can easily get 
yourself into a situation where your deployment can't work with other useful 
tools/libraries/etc which expect a certain contract from the Nova API. It 
doesn't make sense to let the API we provide be so arbitrary. It's certainly 
not friendly to API users. 

We still have the ability to disable or limit features based on policy -- I 
don't think we need to do it via extensions.

The only problem that seems to be left is, how can we allow people to add 
un-upstreamable features to the API in their internal deployments? I know the 
ideal answer is "don't do that" but the reality is some things will never be 
agreed upon upstream and I do see value in the extension framework for that. I 
don't think anything in-tree should be implemented as extensions, though.

melanie (melwitt)








[openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-09 Thread Adrian Otto
Magnum Team,

In the following review, we have the start of a discussion about how to tackle 
bay status:

https://review.openstack.org/159546

I think a key issue here is that we are not subscribing to an event feed from 
Heat to tell us about each state transition, so we have a low degree of 
confidence that our state will match the actual state of the stack in 
real-time. At best, we have an eventually consistent state for Bay following a 
bay creation.

Here are some options for us to consider to solve this:

1) Propose enhancements to Heat (or learn about existing features) to emit a 
set of notifications upon state changes to stack resources so the state can be 
mirrored in the Bay resource.

2) Spawn a task to poll the Heat stack resource for state changes, and express 
them in the Bay status, and allow that task to exit once the stack reaches its 
terminal (completed) state.

3) Don’t store any state in the Bay object, and simply query the heat stack for 
status as needed.

Are each of these options viable? Are there other options to consider? What are 
the pro/con arguments for each?
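For what it's worth, option 2 might be sketched roughly as follows; `get_stack_status` and `update_bay_status` are hypothetical stand-ins, not real Magnum or Heat client calls:

```python
import time

# Stack states after which the polling task can exit (illustrative
# subset of Heat's terminal stack statuses).
TERMINAL = {"CREATE_COMPLETE", "CREATE_FAILED", "DELETE_COMPLETE"}

def poll_stack(get_stack_status, update_bay_status, interval=10):
    """Mirror the Heat stack status onto the Bay until it is terminal."""
    while True:
        status = get_stack_status()
        update_bay_status(status)
        if status in TERMINAL:
            return status
        time.sleep(interval)

# Example run against canned states standing in for Heat responses:
states = iter(["CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
seen = []
final = poll_stack(lambda: next(states), seen.append, interval=0)
print(final, seen)
# -> CREATE_COMPLETE ['CREATE_IN_PROGRESS', 'CREATE_COMPLETE']
```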

Thanks,

Adrian





Re: [openstack-dev] [Ironic] proposing rameshg87 to ironic-core

2015-03-09 Thread Jay Faulkner
I am not a core reviewer on all the projects in question, but I'm +1 to this 
addition. Thanks, Ramakrishnan, for all the good reviews.

-Jay

On Mar 9, 2015, at 3:03 PM, Devananda van der Veen 
<devananda@gmail.com> wrote:

Hi all,

I'd like to propose adding Ramakrishnan (rameshg87) to ironic-core.

He's been consistently providing good code reviews, and been in the top five 
active reviewers for the last 90 days and top 10 for the last 180 days. Two 
cores have recently approached me to let me know that they, too, find his 
reviews valuable.

Furthermore, Ramakrishnan has made significant code contributions to Ironic 
over the last year. While working primarily on the iLO driver, he has also done 
a lot of refactoring of the core code, touched on several other drivers, and 
maintains the proliantutils library on stackforge. All in all, I feel this 
demonstrates a good and growing knowledge of the codebase and architecture of 
our project, and feel he'd be a valuable member of the core team.

Stats, for those that want them, are below the break.

Best Regards,
Devananda



http://stackalytics.com/?release=all&module=ironic-group&user_id=rameshg87

http://russellbryant.net/openstack-stats/ironic-reviewers-90.txt
http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt


Re: [openstack-dev] [horizon] bower

2015-03-09 Thread Michael Krotscheck
Turns out that the bower registry doesn't actually support https. Git does
- so we can prove authenticity of the code - but the address lookup that
tells us which git url to pull from? Not so much.

I'm still digging, but unless we can get that fixed upstream (or can get
everyone to be ok with not loading things over https), the best we can
expect from bower is switching over to using git url's pointed at our
xstatic packages.

I'll put the rest of my findings on the review.

Michael

On Mon, Mar 9, 2015 at 12:33 PM Matthew Farina  wrote:

> Richard, thanks for sharing this. I hope we can move to bower sooner
> rather than later.
>
> On Sat, Mar 7, 2015 at 5:26 PM, Richard Jones 
> wrote:
>
>> On Sun, 8 Mar 2015 at 04:59 Michael Krotscheck 
>> wrote:
>>
>>> Anyone wanna hack on a bower mirror puppet module with me?
>>>
>>
>> BTW are you aware of this spec for bower/Horizon/infra?
>>
>> https://review.openstack.org/#/c/154297/
>>
>>
>> Richard
>>
>>


Re: [openstack-dev] [Glance] Core nominations.

2015-03-09 Thread Nikhil Komawar


Thank you all! The nominations and the consolidation have been implemented.

Cheers
-Nikhil


From: Nikhil Komawar 
Sent: Sunday, March 8, 2015 2:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

Thank you for the feedback Flavio and Gary!

I've noted down your concerns and will address them in a separate thread. So, I 
think we'll go ahead with the nominations mentioned here (by Monday) and start 
the core-member discussion later.

Thanks,
-Nikhil


From: Gary Kotton 
Sent: Sunday, March 8, 2015 11:45 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Core nominations.

On 3/8/15, 2:34 PM, "Flavio Percoco"  wrote:

>On 07/03/15 23:16 +, Nikhil Komawar wrote:
>>Thank you for the response, Hemanth! Those are some excellent questions.
>>
>>
>>In order to avoid diverging the conversation, I would like to give my
>>general
>>sense of direction. Please do keep in mind that a lot of these thoughts
>>need to
>>be better formulated, preferably on a different thread.
>>
>>
>>Core-members would be a generic concept, unlike core-reviewers. The one
>>important
>>thing that this should achieve is clear understanding of the individuals
>>(usually ones who are new or interact less often in Glance) - who
>>actually is a
>>"Core" in the program? There are a few things that can be part of their
>>rights
>>like being able to vote for important decisions (like the current
>>thread), they
>>may or may not have core-reviewer rights based on their participation
>>area. For
>>example, they could be security liaison or they may _officially_ do
>>release
>>management for the libraries without being a core-reviewer, etc. The
>>responsibilities should complement the rights.
>>
>>
>>Those are just initial thoughts and can be better formulated. I will
>>attempt to
>>craft out the details of the core-member concept in the near future and
>>you all
>>are welcome to join me in doing so.
>
>I think I misread the original proposal with regards to
>"core-members". As it is explained here, I'm opposed to having this.
>As soon as you start tagging people and adding more layers to the
>community, it'll be harder to manage it and more importantly it'll be
>more fragmented than it is, which is something I believe we don't
>need.

Agree 100%

>
>Citing the example you mentioned in your previous email:
>
>> "There are a few things that can be part of their rights like being
>> able to vote for important decisions"
>
>This breaks openess and it reads like: "If you're not a 'core-member',
>your vote won't count"
>
>We've fought hard to remove all these kind of labels and exclusive
>rights by reducing them to the minimum, hence the core-reviewers team.
>
>Anyone should feel free to vote, speak up and most importantly,
>everyone's opinion *must* be taken into account.
>
>I'll wait for your final proposal to give a more constructive and
>extended opinion based on that.
>
>Flavio
>
>>
>>
>>Hope that answered your questions, at least for the time being!
>>
>>
>>Cheers
>>-Nikhil
>>________________________________
>>From: Hemanth Makkapati 
>>Sent: Friday, March 6, 2015 7:15 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Glance] Core nominations.
>>
>>
>>I like the idea of a 'core-member'. But, how are core-members different
>>from
>>core-reviewers? For instance, with core-reviewers it is very clear that
>>these
>>are folks you would trust with merging code because they are supposed to
>>have a
>>good understanding of the overall project. What about core-members? Are
>>core-members essentially just core-reviewers who somehow don't fit the
>>criteria
>>of core-reviewers but are good candidates nevertheless? Just trying to
>>understand here ... no offense meant.
>>
>>
>>Also, +1 to both the criteria for removing existing cores and addition
>>of new
>>cores.
>>
>>
>>-Hemanth.
>>
>>________________________________
>>From: Nikhil Komawar 
>>Sent: Friday, March 6, 2015 4:04 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Glance] Core nominations.
>>
>>
>>Thank you all for the input outside of the program: Kyle, Ihar, Thierry,
>>Daniel!
>>
>>
>>Mike, Ian: It's a good idea to have the policy; however, we need to craft
>>one
>>that is custom to the Glance program. It will be a bit different from ones
>>out
>>there, as we have contributors who are dedicated to only a subset of the code
>>- for
>>example just glance_store or python-glanceclient or metadefs. From here
>>on, we
>>may see that for Artifacts and other such features. It's already being
>>observed
>>for metadefs.
>>
>>
>>While I like Mike's suggestion to (semi-)adopt what Nova and Neutron are
>>doing,
>>
