[openstack-dev] [Fuel][Nailgun] Random failures in unit tests

2016-03-19 Thread Igor Kalnitsky
Hey Fuelers,

As you might know, we have recently encountered a lot of random test failures
on CI, and they are still there (though likely with lower probability).
The nature of these random failures is actually not random at all: they
happen because of so-called fake threads.

Fake threads, actually, aren't fake at all. They are native OS threads
that are designed to emulate Astute behaviour (i.e. catch an RPC call and
respond with an appropriate message). Since they are native threads and
we use SQLAlchemy's scoped_session, fake threads use a separate
database session, and hence a separate transaction. That leads to the
following issues:

* Races. We don't know when threads are switched, therefore we don't
know what's committed and what's not. Some Nailgun tests send
something via RPC (caught by fake threads) and immediately check
something. The issue is, we can't guarantee the fake thread has already
committed the produced result. That could be avoided by waiting for a
'ready' status of the created Nailgun task; however, it's better to simply
not use fake threads in that case and instead call the appropriate
Nailgun receiver's method directly in the test, as in the sketch below.
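
A minimal sketch of that idea (the helper and model names here are
illustrative; the real Nailgun names may differ slightly):

    # Call the receiver directly instead of going through RPC plus a
    # fake thread that has its own DB session/transaction.
    from nailgun.rpc.receiver import NailgunReceiver  # assumed import path

    def test_deployment_task_becomes_ready(self):
        task = self.env.launch_deployment()  # hypothetical test helper

        # Emulate Astute's RPC response synchronously, in the test's own
        # thread and therefore in the same scoped_session/transaction:
        NailgunReceiver.deploy_resp(
            task_uuid=task.uuid, status='ready', progress=100)

        # No race: nothing ran in a second session that might not have
        # committed yet (Task is the Nailgun task model, import omitted).
        self.assertEqual('ready', self.db.query(Task).get(task.id).status)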

* Deadlocks. It's incredibly hard to ensure the same order of database
locks in test + business code on the one hand and fake thread code on the
other hand. That's why we can (and do) encounter deadlocks on CI,
when the test case waits for a lock acquired by a fake thread, and the fake
thread waits for a lock acquired by the test case.

Fake threads have become a bottleneck for landing patches to master in
time, and we can't ignore it anymore. We have ~190 tests that use fake
threads, and fixing them all at once is a boring routine. So I kindly
ask Nailgun contributors to fix them as we face them. Let's
file a bug on each failure in CI, and quickly prepare a separate patch
that removes fake threads from the failed test.

Thanks in advance,
Igor



Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-19 Thread Markus Zoeller
The correct tldr:

TL;DR: Use the openstack-*ops* ML and discuss the most wanted RFEs at 
   the summit?

Markus Zoeller/Germany/IBM wrote on 03/17/2016 04:57:27 PM:

> From: Markus Zoeller/Germany/IBM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Date: 03/17/2016 04:57 PM
> Subject: Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?
> 
> Top post as I summarize below what was said. At the end is a proposal 
> of actions.
> 
> TL;DR: Use the openstack-dev ML and discuss the most wanted RFEs at 
>the summit?
> 
> The two diametral points are:
> 
> * The ops like to have a frictionless way for RFEs which need to be
>   resolved explicitly (accepted|declined instead of 'starving to death')
> * The nova-bugs team wants to focus on faulty behavior of existing 
>   features only without the "noise" of RFEs.
> 
> Just to make myself clear about my motivation and the conditions:
> 
> * We have (almost) no volunteers for the "bug skimming duty" [1] to do
>   a pre-sorting of reports (except auggy, she did/does an awesome job!)
> * We have (almost) no bug tag owners which do a deeper analysis of the
>   incoming valid reports [2]
> * We have ~ 1000 bug reports open, a lot are (very) old and need 
>   re-assessment
> * Some RFEs are not written like RFEs and the nova bugs team needs 
>   to figure out if this is a bug or an RFE. As I don't know every detail
>   and have to research these details, it distracts a lot from real bugs
> * Some of the RFEs *need* a spec as they need a REST API change. Some
>   need involvement of other projects like Cinder and Neutron.
> * I'm convinced that a low number of high-quality features will help
>   the ops more than a larger number of features which work most of the time.
>   This is not a criticism of the skill of the developers; bugs are a
>   normal part of SW development and need to be fixed.
> * The resource restrictions described above force me (and others) to
>   focus on the important things. I don't have the intention to exclude 
>   people's ideas.
> * I sometimes hear "Is OpenStack even Enterprise ready?". There is
>   a lot ongoing with the project navigator and clear deprecation 
>   policies, but without focusing on the quality aspect I have a hard 
>   time saying "yes, it is ready".
> * I don't care a lot about the overall number of bug reports. But it's
>   not comprehensible anymore, and setting a focus on which bugs to fix first
>   is not possible this way. Bringing the list back to a comprehensible
>   size is the first step in adjusting the (fixes, reviews) pipeline.
>   Finishing fewer items is more helpful than a lot of "in progress" items
> * I *do* want the ops feedback. I have the hope that ttx's proposal
>   of the summit split [3] (which I support) will become *the* input 
>   channel for us.
> 
> Alternative to wishlist bugs:
> 
> I'm also subscribed to the "openstack-ops" mailing list (I assume most
> of us are). Posting a RFE on that list would have the following 
> advantages: 
> * It's the easiest way to post an idea (no Launchpad account needed)
> * Other ops can chime in if they want that or not. All without querying
>   Launchpad for multiple projects.
> * You see RFE's for other projects too and can make conclusions if
>   they maybe depend on or contradict each other.
> * The RFE triaging effort for the bugs team drops to zero
> * The ML is (somewhat) queryable (just use [nova][RFE] in the subject)
> * The ops community can filter the top priority missing features by 
>   themselves before they reach out for implementation. Some ideas die 
>   naturally as other ops explain how they do it (=> share knowledge)
> 
> The design summit can then have a session which goes through that list
> of pre-filtered most-wanted ops features and takes it into account
> when the prioritization for the next cycle is done. This doesn't solve 
> the challenge of finding developers to implement those items, but as they
> will have more focus there could be more volunteers.
> 
> This way could also be a good transition to, or supplement for, the way
> we would do the requirements engineering after the (hopefully coming)
> split of the design summit. I'm not up-to-date on how this is planned.
> 
> Suggested action items:
> 
> 1. I close the open wish list items older than 6 months (=138 reports)
>and explain in the closing comment that they are outdated and the 
>ML should be used for future RFEs (as described above).
> 2. I post on the openstack-ops ML to explain why we do this
> 3. I change the Nova bug report template to explain this to avoid more
>RFEs in the bug report list in the future.
> 4. In 6 months I double-check the rest of the open wishlist bugs
>    to see if they found developers; if not, I'll close them too.
> 5. Continuously double-check if wishlist bug reports get created
> 
> Doubts? Thoughts? Concerns? Agreements?
> 
> References:
> [1] 

Re: [openstack-dev] [all] purplerbot irc bot for logs and transclusion

2016-03-19 Thread Paul Belanger
On Wed, Mar 16, 2016 at 01:55:56PM +, Chris Dent wrote:
> 
> I built an IRC bot
> 
> https://anticdent.org/purple-irc-bot.html
> 
> that provides (see the blog posting):
> 
> * granular logging
> * some in channel commands to get recent history and recent mentions
>   of your nick
> * inter channel transclusion of messages
> 
> and put it on a few channels (openstack-sdks, openstack-telemetry
> and openstack-nova, openstack-dev). A few people have expressed that it
> is useful so I thought I would ask if people would like it added to more
> channels. I don't want to just add it willy-nilly without checking
> with the community.
> 
> For reference: for the time being the bot runs on and logs to the same
> little pet where I run my blog and a few other things.
> 
> Let me know.
> 
So, I cannot comment on how useful the bot is, but if projects are in fact
using it I would like to see it added to openstack-infra so we can properly
manage it.

I would suggest joining #openstack-infra on IRC to discuss the usage of
the bot and whether it could be added to our existing IRC bots, or maybe
pulling your codebase into -infra.
> -- 
> Chris Dent   http://anticdent.org/
> freenode: cdent tw: @anticdent





[openstack-dev] [Heat] PTL Candidacy

2016-03-19 Thread Thomas Herve
Hi everyone,

I'm happy to announce my candidacy for PTL of Heat for the Newton cycle.

The project and the community are currently in a good place in my opinion, both
diverse and active. As much as possible I'd like to continue to encourage and
improve that.

Heat is being used more and more by projects inside OpenStack. As we know first
hand, being broken by other projects is not a great experience, so I want to
make sure we don't do this to others. I believe this is achieved by being
proactive (taking care of compatibility, making sure gates are not broken
before merging) and reactive (handling issues promptly, not being afraid of
reverts).

On the other hand, I also want to reach application deployment use cases
outside of OpenStack itself, with a focus on documentation and on improving
our heat-templates repository.

I don't believe we should work much on particular features beyond continuing
our resource coverage. Pushing convergence, and working on scalability and
performance, sound like what we should aim for in the near future.

All of that said, being PTL is also a lot about release coordination, which I
hope to learn with the help of our successful lineage of PTLs.

Thanks!

-- 
Thomas



Re: [openstack-dev] [Kuryr] Clarification of expanded mission statement

2016-03-19 Thread Fox, Kevin M
I'd assume a volume plugin for cinder support and/or a volume plugin for manila 
support?

Either would be useful.

Thanks,
Kevin

From: Russell Bryant [rbry...@redhat.com]
Sent: Friday, March 18, 2016 4:59 AM
To: OpenStack Development Mailing List (not for usage questions); 
gal.sa...@gmail.com
Subject: [openstack-dev] [Kuryr] Clarification of expanded mission statement

The Kuryr project proposed an update to its mission statement and I agreed to 
start a ML thread seeking clarification on the update.

https://review.openstack.org/#/c/289993

The change expands the current networking focus to also include storage 
integration.

I was interested to learn more about what work you expect to be doing.  On the 
networking side, it's clear to me: a libnetwork plugin, and now perhaps a CNI 
plugin.  What specific code do you expect to deliver as a part of your expanded 
scope?  Will that code be in Kuryr, or be in upstream projects?

If you don't know yet, that's fine.  I was just curious what you had in mind.  
We don't really have OpenStack projects that are organizing around contributing 
to other upstreams, but I think this case is fine.

--
Russell Bryant


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Assaf Muller
On Wed, Mar 16, 2016 at 10:41 PM, Jim Rollenhagen
 wrote:
> On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>> Hi
>>
>> I have one proposal[1] related to negative tests in Tempest, and
>> hoping opinions before doing that.
>>
>> Now Tempest contains negative tests and sometimes patches are being
>> posted for adding more negative tests, but I'd like to propose
>> removing them from Tempest instead.
>>
>> Negative tests verify the surfaces of REST APIs for each component without
>> any integration between components. That doesn't seem like integration
>> testing, which is the scope of Tempest.
>> In addition, we need to spend test operating time on a different
>> component's gate if we add negative tests to Tempest. For example,
>> we are running negative tests of Keystone and other
>> components on the gate of Nova. That is meaningless, so we need to
>> avoid adding more negative tests to Tempest now.
>>
>> If we want to add negative tests, a nice option is to implement
>> these tests in each component repo with the Tempest plugin interface. We
>> can avoid running negative tests on other components' gates, and
>> each component team can decide which negative tests are valuable on its
>> gate.
>>
>> In the long term, all negative tests will be migrated into each component
>> repo with the Tempest plugin interface. We will then be able to run
>> only valuable negative tests on each gate.
>
> So, positive tests in tempest, negative tests as a plugin.
>
> Is there any longer term goal to have all tests for all projects in a
> plugin for that project? Seems odd to separate them.

I'd love to see this idea explored further. What happens if Tempest
ends up without tests, as a library for shared code as well as a
centralized place to run tests from via plugins?
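
For reference, moving tests out of Tempest while still running them via
Tempest relies on the existing plugin interface; a minimal sketch, with the
module path and names as placeholders for whatever a component chooses:

    # my_component_tempest_plugin/plugin.py -- placeholder module path
    import os

    from tempest.test_discover import plugins


    class MyComponentTempestPlugin(plugins.TempestPlugin):
        """Lets Tempest discover tests that live in the component's repo."""

        def load_tests(self):
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "my_component_tempest_plugin/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            pass  # register component-specific config options if needed

        def get_opt_lists(self):
            return []

    # ...registered via an entry point in the component's setup.cfg:
    # [entry_points]
    # tempest.test_plugins =
    #     my_component = my_component_tempest_plugin.plugin:MyComponentTempestPlugin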

>
> // jim
>
>>
>> Any thoughts?
>>
>> Thanks
>> Ken Ohmichi
>>
>> ---
>> [1]: https://review.openstack.org/#/c/293197/
>>



Re: [openstack-dev] [networking-ovn][Neutron] OVN support for routed networks(plugin interface for host mapping)

2016-03-19 Thread Russell Bryant
On Tue, Mar 15, 2016 at 7:02 PM, Hong Hui Xiao  wrote:

> Hi all.
>
> I did some investigation recently. And I think we can start some
> discussion now.
>
> All the thinking below is based on the current implementation of Neutron.
> With routed networks, a subnet will be considered an L2 domain. Things
> might change.
>
> I think routed networks in OVN can be implemented in this way:
> User creates provider network. For example:
> neutron net-create provider-101 --shared \
> --provider:physical_network providernet \
> --provider:network_type vlan \
> --provider:segmentation_id 101
>
> The "--provider:physical_network" attribute will be recorded in the
> external_ids of the Logical_Switch in OVN_Northbound.
>


>
>
> To Russell:
> I will expect OVN to do the following things.
> 1) The OVN_Southbound will have the latest information of
> "ovn-bridge-mappings" of each Chassis.
> 2) After creating a new network with "provider:physical_network" set,
> OVN will update the Logical_Switch in OVN_Northbound.
> The Logical_Switch will have a new key:value pair in external_ids:
> neutron:available_hosts="compute-host1,compute-host2"
> 3) When a compute host joins/leaves the OpenStack topology, or a compute
> host just updates its ovn-bridge-mappings, OVN should update the
> Logical_Switch with the related physical_network. This is a bottom-up change,
> which is similar to the port status change.
> 4) networking-ovn should be able to catch the update of Logical_Switch in
> 2) & 3) and update the SegmentHostMapping, which will be introduced in
> [2].
>
> I think 1), 2) & 3) need additional work in OVN code. And 4) needs a code
> change in networking-ovn.
>

There's some work happening in OVN where the information currently in
ovn-bridge-mappings on each hypervisor will become accessible in the OVN
Southbound database.

As a nice side effect, the Neutron plugin should be able to read those
bridge mappings from the OVN database and have all of the information it
needs.
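
For concreteness, a rough sketch of the plumbing described above (the
ovs-vsctl/ovn-nbctl commands are real; the neutron:available_hosts key is
the proposal's, and switch/host names are placeholders):

    # Each hypervisor advertises its provider bridge mappings; this is
    # what would surface in the OVN Southbound database:
    ovs-vsctl set Open_vSwitch . \
        external_ids:ovn-bridge-mappings=providernet:br-provider

    # Step 2) above would then record the hosts on the logical switch,
    # e.g. (exact quoting may vary):
    ovn-nbctl set Logical_Switch neutron-provider-101 \
        'external_ids:"neutron:available_hosts"="compute-host1,compute-host2"'

    # Inspect the result:
    ovn-nbctl list Logical_Switch neutron-provider-101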

-- 
Russell Bryant


Re: [openstack-dev] [Fuel][FFE] FF exception request for HugePages

2016-03-19 Thread Dmitry Klenov
Folks,

The majority of the commits for the HugePages feature were merged in time [0].

One commit, for validation, is still to be merged [1], so we would ask for 2
more days to complete the feature.

Regards,
Dmitry.

[0]
https://review.openstack.org/#/q/status:merged+AND+topic:bp/support-hugepages
[1] https://review.openstack.org/#/c/286495/

On Fri, Mar 4, 2016 at 1:58 AM, Dmitry Borodaenko 
wrote:

> Granted, merge deadline March 16, feature to be marked experimental
> until QA has signed off that it's fully tested and stable.
>
> --
> Dmitry Borodaenko
>
>
> On Tue, Mar 01, 2016 at 10:23:06PM +0300, Dmitry Klenov wrote:
> > Hi,
> >
> > I'd like to request a feature freeze exception for "Support for Huge
> > pages for improved performance" feature [0].
> >
> > Part of this feature is already merged [1]. We have the following patches
> > in work / on review:
> >
> > https://review.openstack.org/#/c/286628/
> > https://review.openstack.org/#/c/282367/
> > https://review.openstack.org/#/c/286495/
> >
> > And we need to write new patches for the following parts of this feature:
> > https://blueprints.launchpad.net/fuel/+spec/support-hugepages
> >
> > We need 1.5 weeks after FF to finish this feature.
> > Risk of not delivering it after 1.5 weeks is low.
> >
> > Regards,
> > Dmitry
> >
> > [0] https://blueprints.launchpad.net/fuel/+spec/support-hugepages
> > [1]
> >
> https://review.openstack.org/#/q/status:merged+topic:bp/support-hugepages
>
> >


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-19 Thread Flavio Percoco

On 18/03/16 09:35 -0600, Monty Taylor wrote:

On 03/18/2016 08:31 AM, Andrey Kurilin wrote:

Hi all!

I want to start this thread because I'm tired. I spent a lot of time,
but I can't review as easily as I could with the old interface. The new
Gerrit is awful. Here are several issues:

* It is not possible to review patches on a mobile phone. The "new" "modern"
theme is not adapted for small screens.
* Leaving comments is a hard task. The page position can jump at any time.
* It is impossible to turn off hot-keys. The page position changes -> I
don't see that the comment pop-up is closed -> I continue typing several
letters -> unexpected things happen (open edit mode, modify something, save,
exit...)
* The patch-dependency tree is not user-friendly.
* The summary table doesn't include the status of a patch (I need to scroll
to the end of the page to know if a patch is merged or not).
* There is no "Comment"/"Reply" button at the end of the page (after all
comments).
* It is impossible to turn off the "new" search mechanism.

Is it possible to return the old, classic theme? It was a good time when
we had the old and new themes together...


Sadly no. Upstream is pretty tied to the new very terrible interface. 
We're not sure why.


If you haven't tried gertty yet, I highly recommend it.


gertty FTW!


--
@flaper87
Flavio Percoco




[openstack-dev] [Tripleo][Fuel][Kolla][Ansible][Puppet] Parsing and Managing Policy in Keystone

2016-03-19 Thread Adam Young
The policy API is currently a Blob-based operation. Keystone knows 
nothing about the data stored or retrieved.


There is an API to fetch the policy file for a given endpoint.

http://git.openstack.org/cgit/openstack/keystone-specs/tree/api/v3/identity-api-v3-os-endpoint-policy.rst

What I would like to do is get the policy management synchronized with 
the endpoint registration.  It should look something like this:


When a service is registered with Keystone, upload the associated policy 
file for that service to Keystone, and create a service-level association:


PUT /policies/{policy_id}/OS-ENDPOINT-POLICY/services/{service_id}/regions/{region_id}


If there is a need to modify the policy, the updated policy goes to 
Keystone, along with a new policy_id, the association is updated, then 
synchronized down to the other services.
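
A rough sketch of that flow against the existing API (the token, IDs, port
and policy contents are placeholders):

    # 1) Upload the policy blob to Keystone:
    curl -X POST http://keystone:35357/v3/policies \
        -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
        -d '{"policy": {"type": "application/json", "blob": "{...rules...}"}}'

    # 2) Associate the returned policy_id with a service and region:
    curl -X PUT -H "X-Auth-Token: $TOKEN" \
        "http://keystone:35357/v3/policies/$POLICY_ID/OS-ENDPOINT-POLICY/services/$SERVICE_ID/regions/$REGION_ID"

On an update, step 1) is repeated with the new file (yielding a new
policy_id), step 2) moves the association, and the synchronization mechanism
pushes the new policy down to the services.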


Lots of questions here:

Keystone is capable of sending out notifications.  Does it make sense 
to have the undercloud Heat listen to notifications from Keystone, and 
have Keystone send out a notification if a policy association changes?  
Can Heat update a file on a stack?  Is that too much Keystone-specific 
knowledge?


What about the container cases?  Can Kolla update a policy file in a 
container, or does it need to spin up a new container with the updated 
values?  If so, what happens to the endpoint ID; does it stay the same?


In the OSAD case, what would be the right service to listen for the 
notifications?


What other support would the content management systems need from 
Keystone?  Obviously client and CLI support, and Puppet modules.


Let's get the conversation started here on the mailing list, and expect 
to dive deep into it in Austin.


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Hayes, Graham
On 16/03/2016 06:28, Nikhil Komawar wrote:
> Hello everyone,
>
> tl;dr;
> I'm writing to request some feedback on whether the cross-project Quotas
> work should move ahead as a service or a library; or, going to a far
> extent, I'd ask: should this even be in a common repository, or would
> projects prefer to implement everything from scratch in-tree? Should we
> limit it to a guideline spec?



> Service:
> This would entail creating a new project and will introduce managing
> tables for quotas for all the projects that will use this service. For
> example if Nova, Glance, and Cinder decide to use it, this 'entity' will
> be responsible for handling the enforcement, management and DB upgrades
> of the quotas logic for all resources for all three projects. This means
> less pain for projects during the implementation and maintenance phase,
> holistic view of the cloud and almost a guarantee of best practices
> followed (no clutter or guessing around what different projects are
> doing). However, it results in a big dependency; all projects rely on
> this one service for correct enforcement, for avoiding races (if they do
> not incline to implement some of that in-tree) and for DB
> migrations/upgrades. It will be at the core of the cloud and prone to
> attack vectors, bugs and margin of error.

In my view:

Pros:

1. Improved UX
   - Currently it is difficult to have a common method of updating
 and viewing project quotas, and (last time I looked) it is
 impossible in Horizon.
2. Much easier to iterate on quotas - e.g. Nested Quotas

Cons:

1. Latency
2. Yet another thing in the critical path of API requests
3. Keeping compatibility across releases to allow for phased upgrades
could be problematic
4. How does the big tent feed into this - is it plugin based (which
increases the complexity of deploying additional OpenStack services)
or does this service have all projects in tree?

I know that there are probably tons of problems with this idea, but
something occurred to me: could we have this as part of Keystone?

When a user gets a token, as part of the token they get a quota
object (a hypothetical shape is sketched after the list below). It does a
few things:

1. Removes the need for another service
2. Allows services to use an already existing interface
3. Still has a central place for updates / querying
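
Purely hypothetical, but the token payload could carry something like this
(no such section exists in Keystone today):

    {
      "token": {
        "user": {"id": "u-123"},
        "project": {"id": "p-456"},
        "quotas": {
          "compute": {"instances": 10, "cores": 20},
          "volume": {"gigabytes": 1000},
          "dns": {"zones": 10}
        }
      }
    }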

It also has a few downsides that I can think of off the top of my head:

1. Mainly - would Keystone even be willing to take this as part of
their mission statement?
2. PKI tokens would have to be re-issued to get new quotas
3. PKI tokens would have to be invalidated when decreasing quotas
4. The token size might explode for PKI
5. Extra load on Keystone


> Library:
> A library could be thought of in two different ways:
> 1) Something that does not deal with backend DB models; it provides a
> generic enforcement and management engine (see the sketch after this
> quote). To think ahead a little bit, it may be an ABC or even a few
> standard implementation vectors that can be imported into a project
> space. The project will have its own API for quotas and the drivers will
> enforce different types of logic; e.g. a flat quota driver or a
> hierarchical quota driver with custom/project-specific logic in the
> project tree. The project maintains its own DB and upgrades thereof.
> 2) A library that has models for the DB tables that the project can import
> from. Thus the individual projects will have a handy outline of what the
> tables should look like, implicitly considering the right table values,
> arguments, etc. The project has its own API and implements drivers in-tree
> by importing this semi-defined structure. The project maintains its own
> upgrades but will be somewhat influenced by the common repo.
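
To make option 1) concrete, a rough sketch of what such an engine could
look like (all names are invented for illustration):

    import abc


    class QuotaDriver(abc.ABC):
        """Hypothetical base class a common quota library could ship."""

        @abc.abstractmethod
        def get_limit(self, project_id, resource):
            """Return the configured limit for a resource."""

        @abc.abstractmethod
        def reserve(self, project_id, resource, amount):
            """Reserve `amount` of `resource`, raising on over-quota."""


    class InMemoryFlatQuotaDriver(QuotaDriver):
        """Toy flat-quota driver; a real one would use the project's DB."""

        def __init__(self, limits):
            self.limits = limits  # e.g. {'instances': 10}
            self.used = {}

        def get_limit(self, project_id, resource):
            return self.limits.get(resource, 0)

        def reserve(self, project_id, resource, amount):
            used = self.used.get((project_id, resource), 0)
            if used + amount > self.get_limit(project_id, resource):
                raise ValueError("over quota: %s" % resource)
            self.used[(project_id, resource)] = used + amount

Each project would then implement its own driver (flat, hierarchical, ...)
against its own tables while keeping the enforcement API common.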

This would actually work quite well for us in Designate, as the quotas
component is a plugin; we could switch to an openstack-common-style
system quite easily.

It does not help on the UX side of things - but it seems like it is the
quickest route to something being completed.

> Library would keep things simple for the common repository and sourcing
> of code can be done asynchronously as per project plans and priorities
> without having a strong dependency. On the other hand, there is a
> likelihood of re-implementing similar patterns in different projects
> with individual projects taking responsibility to keep things up to
> date. Attack vectors, bugs and margin of error are project responsibilities
>
> The third option is to avoid all of this and simply give guidelines, best
> practices, and the right packages to each project to implement quotas
> in-house. Somewhat undesirable at this point, I'd say. But we're all ears!
>
> Thank you for reading and I anticipate more feedback.
>
> [1] https://review.openstack.org/#/c/284454/
>

Thanks

- Graham



Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-19 Thread Pavlo Shchelokovskyy
Hi Evgeniy,

On Fri, Mar 18, 2016 at 4:26 PM, Evgeniy L  wrote:
>
>
>> On the other side, there is ongoing work to have an ansible-based deploy
>> ramdisk in Ironic, maybe inspector could benefit from it too. Didn't think
>> about it yet, would be interesting to discuss on the summit.
>
>
> And here, I would appreciate it if you have any link to get more context (I
> was able to find only a playbook for Ironic installation).
> In Fuel we had an idea to implement tasks (abstracted from a specific
> deployment tool) to do configuration and get information about specific
> hardware.
>

Please see this patch https://review.openstack.org/#/c/238183/

This is a PoC of an ansible-deploy driver for Ironic. We are in the process of
polishing/refactoring it to be more "ansibly", but the current prototype is
already working. We also plan to add an Inspect interface to it at some
point (or implement similar logic in Inspector if that is needed) - it
should be as easy as running the likes of "ansible -m setup" and parsing the
result.

Best regards,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-19 Thread Ian Cordasco
 

-Original Message-
From: Cory Benfield 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: March 18, 2016 at 13:06:02
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [all] Maintaining httplib2 python library

>  
> > On 18 Mar 2016, at 17:05, Doug Wiegley wrote:
> >> On Mar 18, 2016, at 8:31 AM, Cory Benfield wrote:
> >>
> >> Getting requests to talk over a Unix domain socket is not particularly
> >> tricky, and there are third-party libraries that hook into requests
> >> appropriately to make that happen. For example, the requests-unixsocket
> >> module exists that can do the appropriate things.
> >
> > That’s the module that I was eyeing, but we’re just trading one dependency
> > for another. Is there something about httplib2 maintenance in particular
> > that makes us want that gone?
> >
> > doug
>  
> The original message in this thread was about the fact that httplib2 is
> currently unmaintained and looking for new maintainers. I believe that was
> the impetus for the discussion.

Unrelatedly, the author hasn't responded to either email or twitter. I've 
offered to help keep it on life support but they've not responded. So perhaps 
they're not interested in adding maintainers after all.

Either way, it's likely a dying project and not one we should hold onto.

But I mean, ignoring that it's dying, it's a great piece of software.

--  
Ian Cordasco




[openstack-dev] [new-project][jacket] Introduction to jacket, a new project

2016-03-19 Thread Kevin.ZhangSen
Hi all,


There is a new project "Jacket" to unify the API models of different clouds 
using OpenStack API.  Its wiki is: https://wiki.openstack.org/wiki/Jacket


After last week's discussion, I updated the description in the wiki about the 
relationship between Jacket and Tricircle, and added the "FAQ" section. Please 
review and give your suggestions, thanks.


Thanks again for the good suggestions and support from Gordon Chung, Janki 
Chhatbar, Shinobu Kinjo, Joe Huang and Phuong.


Best Regards,
Kevin (Sen Zhang)



Q: Is Jacket one kind of API gateway for different clouds?

Jacket isn't an API gateway for different clouds. The aim of Jacket is to offer 
a unified OpenStack API model for different clouds, so the major task of 
Jacket is to shield the differences between the provider cloud and OpenStack 
through Jacket's sub-services, such as “Unified resource uuid allocation”, 
"Fake volume management" and so on.




Q: What is the relation between Tricircle and Jacket?

Jacket focuses on how to unify the API models of different clouds using the 
OpenStack API model, and on how to use one OpenStack instance to manage one 
provider cloud. Tricircle focuses on how to manage multiple OpenStack 
instances and on networking automation across multiple OpenStack instances. 
So it is a good solution to use Tricircle to manage multiple different clouds 
at the same time, each one of which is managed by an OpenStack instance 
through Jacket.


[openstack-dev] [oslo][all] What would you like changed/fixed/new in oslo??

2016-03-19 Thread Joshua Harlow

Howdy all,

Just to start some conversation for the next cycle,

I wanted to start thinking about what folks may like to see in oslo (or 
yes, even what you dislike in any of the oslo libraries).


For those who don't know, oslo [1] is a lot of libraries (27+), so one of 
my complaints (and one I will try to help make better) is that most 
people probably don't know what the different 'offerings' of these 
libraries are or how to use them (docs, tutorials, docs, and more docs).


I'll pick another pet-peeve of mine as a second one to get people thinking.

2) The lack of a good security scheme in oslo.messaging, turned on by 
default (even something as basic as an HMAC or signature that can be 
verified; it scares the heck out of me what is possible over RPC). I'd 
like to start figuring out how to get *something* in place (basic == an 
HMAC signature, or maybe advanced == Barbican or ???). A rough sketch of 
the basic option follows.
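
A sketch using only the stdlib; the envelope layout here is invented
(oslo.messaging has no such scheme today, which is exactly the complaint):

    import hashlib
    import hmac
    import json

    SHARED_KEY = b'distributed-out-of-band'  # e.g. via each node's config

    def sign(payload):
        # Serialize deterministically so both sides MAC the same bytes.
        body = json.dumps(payload, sort_keys=True).encode('utf-8')
        mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return {'body': payload, 'hmac': mac}

    def verify(envelope):
        body = json.dumps(envelope['body'], sort_keys=True).encode('utf-8')
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, envelope['hmac']):
            raise ValueError('bad RPC signature, dropping message')
        return envelope['body']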


What other thoughts do people have?

Good, bad, crazy (just keep it PG-13) thoughts are ok ;)

-Josh

[1] https://wiki.openstack.org/wiki/Oslo




Re: [openstack-dev] [all] purplerbot irc bot for logs and transclusion

2016-03-19 Thread Anita Kuno
On 03/16/2016 10:45 AM, Paul Belanger wrote:
> I would like to see it added to openstack-infra so we can properly
> manage it.

I agree with Paul here.

To that end I have added an item on next week's infra meeting agenda
about IRC Bots with the aim of discussing this, as this is not the only
bot folks seem to want to run.
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

No guarantee we will get to the item but all are welcome to participate
and discuss if they have thoughts on this topic.

Thank you,
Anita.



Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-19 Thread Steve Gordon
- Original Message -
> From: "Kai Qiang Wu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, March 15, 2016 3:20:46 PM
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting 
> single/multiple OS distro
> 
> Hi  Stdake,
> 
> There is a patch about Atomic 23 support in Magnum. Atomic 23 uses
> Kubernetes 1.0.6 and Docker 1.9.1.
> From Steve Gordon, I learnt they do have a two-weekly release. To me it
> seems each Atomic 23 release differs very little (minor changes).
> The major rebases/updates may still have to wait for e.g. Fedora Atomic 24.

Well, the emphasis here is on *may*. As was pointed out in that same thread [1] 
rebases certainly can occur although those builds need to get karma in the 
fedora build system to be pushed into updates and subsequently included in the 
next rebuild (e.g. see [2] for a newer K8S build). The main point is that if a 
rebase involves introducing some element of backwards incompatibility then that 
would have to wait to the next major (F24) - outside of that there is some 
flexibility.

> So maybe we don't need to test every two-weekly Atomic 23 release.
> Pick one, or update the old one when we find it has integrated a new
> Kubernetes, Docker, etcd, etc. For other small changes (not including
> security), it seems we don't need to update so frequently; it can save
> some effort.

A question I have posed before, and that I think will need to be answered if 
Magnum is indeed to move towards the model for handling drivers proposed in 
this thread, is: what expectations does Magnum have for each image/COE 
combination in terms of versions of key components for a given Magnum release, 
and what expectations does Magnum have for the same when looking forward to, 
say, Newton?

Based on our discussion it seemed like there were some issues that mean 
kubernetes-1.1.0 would be preferable, for example (although the fact that it 
wasn't there would seem to have been a bug itself; regardless, it's a valid 
example), but is that expectation documented somewhere? It seems like, based 
on the feature roadmap, it should be possible to at least put forward minimum 
required versions for key components (e.g. docker, k8s, flannel, etcd for the 
K8S COE)? This would make it easier to guide the relevant upstreams to ensure 
their images support the Magnum team's needs and at least minimize the need 
to do custom builds if not eliminate it.

-Steve

[1] 
https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/ZJARDKSB3KGMKLACCZSQALZHV54PAJUB/
[2] https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

> From: "Steven Dake (stdake)" 
> To:   "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 16/03/2016 03:23 am
> Subject:  Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
> 
> 
> 
> WFM as long as we stick to the spirit of the proposal and don't end up in a
> situation where there is only one distribution.  Others in the thread had
> indicated there would be only one distribution in tree, which I'd find
> disturbing for reasons already described on this thread.
> 
> While we are about it, we should move to the latest version of atomic and
> chase atomic every two weeks on their release.  Thoughts?
> 
> Regards
> -steve
> 
> 
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, March 14, 2016 at 8:10 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
> 
> 
> 
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: March-14-16 4:49 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
> 
> Steve,
> 
> I think you may have misunderstood our intent here. We are not
> seeking to lock in to a single OS vendor. Each COE driver can
> have a different OS. We can have multiple drivers per COE. The
> point is that drivers should be simple, and therefore should
> support one Bay node OS each. That would mean taking what we
> have today in our Kubernetes Bay type implementation and
> breaking it down into two drivers: one for CoreOS and another
> for Fedora/Atomic. New drivers would start out in a contrib
> directory where complete functional testing would not be
> required. In order to graduate one out of contrib and into the
> realm of support of the Magnum dev 

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Steven Dake (stdake)


On 3/18/16, 12:59 PM, "Fox, Kevin M"  wrote:

>+1. We should be encouraging a common way of solving these issues across
>all the OpenStack projects, and security is a really important thing.
>Spreading it across lots of projects causes more bugs, and security-related
>bugs cause security incidents. No one wants those.
>
>I'd also like to know why, if an old cloud is willing to deploy a new
>magnum, it's unreasonable to deploy a new barbican at the same time.
>
>If it's a technical reason, let's fix the issue. If it's something else,
>let's discuss it. If it's just an operator not wanting to install 2 things
>instead of just one, I think it's a totally understandable, but
>unreasonable request.

Kevin,

I think the issue comes down to "how" the common way of solving this
problem should be approached.  In barbican's case a daemon and database
are required.  What I wanted early on with Magnum when I was involved was
a library approach.

Having maintained a deployment project for 2 years, I can tell you each
time we add a new big tent project it adds a bunch of footprint to our
workload.  Operators typically don't even have a tidy deployment tool like
Kolla to work with.  As an example, ceilometer has had containers
available in Kolla for 18 months yet nobody has finished the job on
implementing ceilometer playbooks, even though ceilometer is a soft
dependency of heat for autoscaling.

Many operators self-deploy so they understand how the system operates.
They lack the ~200 contributors Kolla has to maintain a deployment tool,
and as such, I really don't think objecting to deploying Y to get X - when
Y could and should be a small-footprint library - is unreasonable.

Regards,
-steve
  
>
>Thanks,
>Kevin
>
>From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
>I think Adrian makes some excellent points regarding the adoption of
>Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
>hear from other projects that securing their sensitive data is a
>requirement but then turn around and say that deploying Barbican is a
>problem.
>
>I guess I'm having a hard time understanding the operator persona that
>is willing to deploy new services with security features but unwilling
>to also deploy the service that is meant to secure sensitive data across
>all of OpenStack.
>
>I understand one barrier to entry for Barbican is the high cost of
>Hardware Security Modules, which we recommend as the best option for the
>Storage and Crypto backends for Barbican.  But there are also other
>options for securing Barbican using open source software like DogTag or
>SoftHSM.
>
>I also expect Barbican adoption to increase in the future, and I was
>hoping that Magnum would help drive that adoption.  There are also other
>projects that are actively developing security features like Swift
>Encryption, and DNSSEC support in Designate.  Eventually these features
>will also require Barbican, so I agree with Adrian that we as a
>community should be encouraging deployers to adopt the best security
>practices.
>
>Regarding the Keystone solution, I'd like to hear the Keystone team's
>feedback on that.  It definitely sounds to me like you're trying to put
>a square peg in a round hole.
>
>- Doug
>
>On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>
>>
>> I think the Keystone approach will work. For others, please speak up if
>> it doesn't work for you.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>
>>
>> Hongbin,
>>
>>
>>
>> I tweaked the blueprint in accordance with this approach, and approved
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>>
>>
>> I think this is something we can all agree on as a middle ground. If
>> not, I'm open to revisiting the discussion.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Adrian
>>
>>
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto wrote:
>>
>>
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that
>> have a good reason not to use Barbican, is to use Keystone.
>>
>> Keystone credentials store:
>> 
>>http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v
>>3.html#credentials-v3-credentials
>>
>> The contents are stored in plain text in the Keystone DB, so we
>> would want to generate an encryption key per bay, encrypt the
>> certificate and store it in keystone. We would then use the same key
>> to decrypt it upon reading the key back. This might be an 
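
A sketch of that encrypt-then-store idea with the cryptography library
(the Keystone credential step at the end is indicative only):

    from cryptography.fernet import Fernet

    cert_pem = b'-----BEGIN CERTIFICATE-----\n...'  # the bay's TLS material

    # One key per bay, generated at bay-create time and kept by Magnum:
    key = Fernet.generate_key()

    ciphertext = Fernet(key).encrypt(cert_pem)

    # Store `ciphertext` as the blob of a Keystone credential
    # (POST /v3/credentials); read it back later and decrypt with the
    # same per-bay key:
    assert Fernet(key).decrypt(ciphertext) == cert_pem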

Re: [openstack-dev] [all][infra][ptls] tagging reviews, making tags searchable

2016-03-19 Thread Flavio Percoco

On 18/03/16 14:15 +, Amrith Kumar wrote:

As we were working through reviews for the Mitaka release, the Trove team was
trying to track groups of reviews that were needed for a specific milestone,
like m-1 or m-3, or in recent days for rc1.



The only way we could find was to have someone (in this instance, me) ‘star’
the reviews that we wanted and then have people look for reviews with
‘starredby:amrith’ and status:open.



How do other projects do this? Is there a simple way to tag reviews in a
searchable way?


When we needed this in Glance, we used one of these 2 options:

1) Topics: We changed review topics based on the milestone
(glance-RC1, glance-RC2, glance-M1, etc.); see the example below.
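
For example (git-review's -t flag sets the topic; the search works in
Gerrit's query box):

    # Push a change tagged with a milestone topic:
    git review -t glance-RC1

    # Later, find every open review for that milestone:
    #   topic:glance-RC1 status:open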

2) Commit Message: We used this to tag priority reviews for the cycle. This was
less effective, TBH, and I wouldn't recommend it.

I know there's a gerrit feature that is, AFAIK, not released yet that adds
"hashtags" to gerrit. Basically the ability to add tags to reviews and search
them.

I'm not a huge fan of the starring option but I can see how that could work for
other folks. One thing in favor of starring patches is that it's persistent
across PS.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Horizon] How do we move forward with xstatic releases?

2016-03-19 Thread Richard Jones
On 13 March 2016 at 07:06, Matthias Runge  wrote:

> On 10/03/16 08:46, Richard Jones wrote:
> > On 10 March 2016 at 18:23, Matthias Runge  > > wrote:
> >
> > 4.alt.2:
> > move to proper packages for static file management. I mean, they
> need to
> > be built anyways.
> >
> >
> > Please define what you mean by "proper packages" here. I *think* you
> > might mean system packages (aka Debian or Red Hat) which is not feasible
> > given other environments that Horizon runs under. Please correct me if
> > I'm wrong!
>
> Exactly. I mean system packages. If there are issues with system
> packages, then let's address the issue rather than re-inventing the wheel.
>

OK, the sticking point is that installation through system packages alone
forces us to make all software on a system work with a single version of
any given library. This has spawned the global-requirements and now
upper-constraints systems in OpenStack, and ultimately leads to the
problematic race condition that resulted in me starting this email thread.
That is, we can update *either* the version of a library being used *or*
the software that is compatible with that version *but not the two at the
same time, atomically*.
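
For context, the pins in question look like this in openstack/requirements'
upper-constraints.txt (versions illustrative), which is why bumping a pin
and landing the Horizon code that needs it cannot happen atomically across
two repositories:

    XStatic-jQuery===1.10.2.1
    XStatic-Angular===1.3.7.0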



> Weren't we just talking about providing dependencies for the gate?


Well, partially: the problem surfaces because of the Continuous Deployment
guarantee that OpenStack makes, which is enforced by the gate - so sure,
let's say it's the gate's fault.



> I mean, for production, it's quite the same situation we are at the
> moment. Nobody requires you to install Horizon and dependencies
> especially via rpm, deb or pip: Take what you want.
>

I'm not sure it's this simple, is it? Folks want to be able to install
OpenStack from a single source, and that seems reasonable. They want to be
able to do that "offline" too, so that rules out bower as a source of
packages.



> > It has been mentioned, xstatic packages can block the gate. We
> currently
> > control xstatic package releases, thus we can roll back, if something
> > goes wrong.
> >
> > If we're pulling directly with npm/bower, someone from the outside
> can
> > break us. We already have the situation with pypi packages.
> > With proper packages, we could even use the gate to release those
> > packages and thus verify, we're not breaking anyone.
> >
> >
> > We're going to have potential breakage (gate breakage, in the integrated
> > tests) any time we release a package (regardless of release mechanism)
> > and have to update two separate repositories resulting in out-of-sync
> > version specification and expectation (ie. upper-constraints
> > specification and Horizon's code expectation) as described in my OP. The
> > only solution that we're aware of is to synchronise updating those two
> > things, through one of the mechanisms proposed so far (or possibly
> > through a mechanism not yet proposed.)
> >
>
> Yes, and my proposal to address this is to gate updating/releasing
> dependencies the same way we're currently gating each change in horizon.
>

This is not going to solve the race condition I mention; it's actually
during our work implementing gating these releases that we found we had to
solve this problem.



> > 1. Horizon maintains its own constrained version list for the xstatic
> > packages,
> > 2. Plugins to Horizon must depend on Horizon to supply xstatic packages
> > except where they use additional packages that Horizon does not use, and
> > 3. We avoid installing app-catalog (the system, not the plugin) in the
> > integrated tests (I don't believe doing this is even on the ...
> > "horizon" so to speak) *or* in a Debian / Red Hat (i.e. system-packaged)
> > system that also has Horizon installed. Or we try to convince
> > app-catalog to stay lock-step with Horizon's xstatic versions. I
> > understand the risk of a collision between app-catalog and Horizon in
> > the same system-packaged environment is very low.
>
> I don't really see a chance for app-catalog to require Horizon as a
> dependency and different versions of xstatic packages. This would be an
> immediate show-stopper for app-catalog either on Debian or on RPM based
> systems.
>

I think I used the wrong word above. Where I said "system" I probably
should have said "server". app-catalog the stand-alone server should not
depend on Horizon, just app-catalog the plugin to Horizon should (like all
Horizon plugins should).



> Let me repeat myself: you're free to install dependencies as you like,
> npm, bower, whatever. I was simply speaking about the gate and about
> gating dependencies to be sure, we're not broken by someone from outside.


Again, I don't believe we have the freedom to actually install dependencies
as we like, as I said above.


  Richard

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Ricardo Rocha
Hi.

We're on the way: the API is using haproxy load balancing in the same
way all OpenStack services do here - this part seems to work fine (a
rough sketch of such a frontend is below).
For the conductor we're blocked on bay certificates - we don't
currently have Barbican, so local was the only option. To make the certs
accessible on all nodes we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of
credentials in the boxes (and a process to renew fs tokens)
- deploy Barbican (some bits of puppet missing that we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
 wrote:
> All,
>
> Does anyone have experience deploying Magnum in a highly-available fashion?
> If so, I’m interested in learning from your experience. My biggest unknown
> is the Conductor service. Any insight you can provide is greatly
> appreciated.
>
> Regards,
> Daneyon Hansen
>


Re: [openstack-dev] [Neutron] RC1 candidate

2016-03-19 Thread Armando M.
On 16 March 2016 at 19:58, Armando M.  wrote:

> An update:
>
> On 15 March 2016 at 21:38, Armando M.  wrote:
>
>> Neutrinos,
>>
>> I believe we reached the point [1] where RC1 can be cut [2]. If I made an
>> error of judgement, or any other catastrophic failure arises, please report
>> a bug, and tag it as mitaka-rc-potential [3]. Please, sign off on
>> postmortem [4], so that we can finalize the specs status for Mitaka and
>> open up to Newton.
>>
>> Please, consider this the last warning to ensure that everything is in
>> the right order so that you can feel proud of what you and your teammates
>> have accomplished this release!
>>
>
> We bumped [2] already thanks to Cedric finding DB migration issues with
> Postgres. I am about to bump [2] again to contain Kevin's fix for bug
> 1513765. Anything that landed in between is rather safe. At this point I
> don't expect to see any other rc-potential fix that's gonna be in shape in
> time for the end of the week. Salvatore mentioned something odd about
> quota, but until we find out more, and judge whether we need an RC2, it's
> time we draw a line and pull the trigger on RC1, once the change for bug
> 1513765 lands.
>

RC1 was released about the same time change [1] landed. As a result, some
upstream gate jobs are now failing both in master and stable/mitaka. This
doesn't mean that RC1 is useless, so don't panic!

Please bear with us until we rectify the situation, but if you wondered if
we needed an RC2 now, clear your doubts, because we obviously will.

A.

[1] https://review.openstack.org/#/c/292573


>
>
>>
>> Cheers,
>> Armando
>>
>> [1] https://launchpad.net/neutron/+milestone/mitaka-rc1
>> [2] https://review.openstack.org/#/c/292445/
>> [3]
>> https://bugs.launchpad.net/neutron/+bugs?field.tag=mitaka-rc-potential
>> [4] https://review.openstack.org/#/c/286413/
>> [5] https://review.openstack.org/#/c/283383/
>>
>
>


[openstack-dev] Sahara Job Binaries Storage

2016-03-19 Thread Jerico Revote
Hello,

When deploying Sahara, the Sahara docs suggest increasing max_allowed_packet 
to 256MB for the internal database storage of job binaries (see the my.cnf 
sketch below).
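
That is, something like the following in my.cnf (the value mirrors the
docs' suggestion):

    [mysqld]
    max_allowed_packet = 256M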
There could be hundreds of job binaries uploaded/created in Sahara,
which would then cause the database to grow as well.
Has anyone using Sahara encountered database sizing issues with internal DB 
storage?

It looks like Swift is the more logical place for storing job binaries 
(in our case we have a global Swift cluster), and it is also available to the 
user.
Is there a way to enable only the Swift method of storing job binaries?

Thanks,

Jerico





Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Ken'ichi Ohmichi
2016-03-16 20:27 GMT-07:00 Assaf Muller :
> On Wed, Mar 16, 2016 at 10:41 PM, Jim Rollenhagen
>  wrote:
>> On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>>> Hi
>>>
>>> I have one proposal[1] related to negative tests in Tempest, and
>>> hoping opinions before doing that.
>>>
>>> Now Tempest contains negative tests and sometimes patches are being
>>> posted for adding more negative tests, but I'd like to propose
>>> removing them from Tempest instead.
>>>
>>> Negative tests verify the surfaces of REST APIs for each component without
>>> any integration between components. That doesn't seem like integration
>>> testing, which is the scope of Tempest.
>>> In addition, we need to spend test operating time on a different
>>> component's gate if we add negative tests to Tempest. For example,
>>> we are running negative tests of Keystone and other
>>> components on the gate of Nova. That is meaningless, so we need to
>>> avoid adding more negative tests to Tempest now.
>>>
>>> If we want to add negative tests, a nice option is to implement
>>> these tests in each component repo with the Tempest plugin interface. We
>>> can avoid running negative tests on other components' gates, and
>>> each component team can decide which negative tests are valuable on its
>>> gate.
>>>
>>> In long term, all negative tests will be migrated into each component
>>> repo with Tempest plugin interface. We will be able to operate
>>> valuable negative tests only on each gate.
>>
>> So, positive tests in tempest, negative tests as a plugin.
>>
>> Is there any longer term goal to have all tests for all projects in a
>> plugin for that project? Seems odd to separate them.
>
> I'd love to see this idea explored further. What happens if Tempest
> ends up without tests, as a library for shared code as well as a
> centralized place to run tests from via plugins?

Tempest now contains library code which the other projects can use as a
library, and we are trying to grow that library code for more usability.
The qa-spec https://review.openstack.org/#/c/275966/ is a good read to
understand this.

Thanks
Ken Ohmichi
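
For readers unfamiliar with the plugin interface discussed above: a component
repo registers its tests with Tempest through a small plugin class exposed via
the 'tempest.test_plugins' entry point in its setup.cfg. A minimal sketch,
with all project-specific names hypothetical:

    import os

    from tempest.test_discover import plugins


    class MyServiceTempestPlugin(plugins.TempestPlugin):
        """The 'tempest.test_plugins' entry point would point at this class."""

        def load_tests(self):
            # Tell Tempest where this repo keeps its tempest-style tests.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "my_service/tests/tempest"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            # Register service-specific config options, if any.
            pass

        def get_opt_lists(self):
            return []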

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-19 Thread Emilien Macchi
On Thu, Mar 17, 2016 at 11:43 AM, Kirill Zaitsev  wrote:
> Is it too late to ask for a half-day Contributors Meetup for murano?
>
> We had an extremely successful contributors meetup in Tokyo, and I guess it
> is an error on our side that we have not requested one for Austin.

The Puppet OpenStack team can survive without a 1/2 day for the community
meetup; 1/4 could work, and we can share the room with you if Thierry
can't find a slot for you.

> --
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc
>
> On 16 March 2016 at 12:57:30, Thierry Carrez (thie...@openstack.org) wrote:
>
> Hi PTLs,
>
> Here is the proposed slot allocation for project teams at the Newton
> Design Summit in Austin. This is based on the requests the mitaka PTLs
> have made, space availability and project activity & collaboration needs.
>
> | fb: fishbowl 40-min slots
> | wr: workroom 40-min slots
> | cm: Friday contributors meetup
> | | full: full day, half: only morning or only afternoon
>
> Neutron: 9fb, cm:full
> Nova: 18fb, cm:full
> Fuel: 3fb, 11wr, cm:full
> Horizon: 1fb, 7wr, cm:half
> Cinder: 4fb, 5wr, cm:full
> Keystone: 5fb, 8wr, cm:full
> Ironic: 5fb, 5wr, cm:half
> Heat: 4fb, 8wr, cm:half
> TripleO: 2fb, 3wr, cm:half
> Kolla: 4fb, 10wr, cm:full
> Oslo: 3fb, 5wr
> Ceilometer: 2fb, 7wr, cm:half
> Manila: 2fb, 4wr, cm:half
> Murano: 1fb, 2wr
> Rally: 2fb, 2wr
> Sahara: 2fb, 6wr, cm:half
> Glance: 3fb, 5wr, cm:full
> Magnum: 5fb, 5wr, cm:full
> Swift: 2fb, 12wr, cm:full
> OpenStackClient: 1fb, 1wr, cm:half
> Senlin: 1fb, 5wr, cm:half
> Monasca: 5wr
> Trove: 3fb, 6wr, cm:half
> Dragonflow: 1fb, 4wr, cm:half*
> Mistral: 1fb, 3wr
> Zaqar: 1fb, 3wr, cm:half
> Barbican: 2fb, 6wr, cm:half
> Designate: 1fb, 5wr, cm:half
> Astara: 1fb, cm:full
> Freezer: 1fb, 2wr, cm:half
> Congress: 1fb, 3wr
> Tacker: 1fb, 3wr, cm:half
> Kuryr: 1fb, 5wr, cm:half*
> Searchlight: 1fb, 2wr
> Cue: no space request received
> Solum: 1fb, 1wr
> Winstackers: 1wr
> CloudKitty: 1fb
> EC2API: 2wr
>
> Infrastructure: 3fb, 4wr, cm:day**
> Documentation: 4fb, 4wr, cm:half
> Quality Assurance: 4fb, 4wr, cm:day**
> PuppetOpenStack: 2fb, 3wr, cm:half
> OpenStackAnsible: 1fb, 8wr, cm:half
> Release mgmt: 1fb, cm:half
> Security: 3fb, 2wr, cm:half
> ChefOpenstack: 1fb, 2wr
> Stable maint: 1fb
> I18n: cm:half
> Refstack: 3wr
> OpenStack UX: 2wr
> RpmPackaging: 1fb***, 1wr
> App catalog: 1fb, 2wr
> Packaging-deb: 1fb***, 1wr
>
> *: shared meetup between Kuryr and Dragonflow
> **: shared meetup between Infra and QA
> ***: shared fishbowl between RPM packaging and DEB packaging, for
> collecting wider packaging feedback
>
> We'll start working on laying out those sessions over the available
> rooms and time slots. Most of you have communicated constraints together
> with their room requests (like Manila not wanting overlap with Cinder
> sessions), and we'll try to accommodate them the best we can. If you
> have extra constraints you haven't communicated yet, please reply to me
> ASAP.
>
> Now is time to think about the content you'd like to cover during those
> sessions and fire up those newton etherpads :)
>
> Cheers,
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Does the OpenStack community(or Cinder team) allow one driver to call another driver's public method?

2016-03-19 Thread Sean McGinnis
On Fri, Mar 18, 2016 at 04:05:34AM +, liuxinguo wrote:
> Hi Cinder team,
> 
> We are going to implement storage-assisted volume migration in our driver
> between different backend storage arrays, or even arrays of different
> vendors.
> This is really more efficient than the host-copy migration between different
> arrays of different vendors.
> 
> To implement this, we need to call other backend's method like 
> create_volume() or initialize_connection(). We can call them like the 
> cinder/volume/manage.py:
> 
> rpcapi.create_volume(ctxt, new_volume, host['host'],
>  None, None, allow_reschedule=False)
> 
> or
> conn = rpcapi.initialize_connection(ctxt, volume, properties)
> 
> And my question is: does the OpenStack community (or Cinder team) allow a
> driver to call rpcapi in order to call another driver's methods like
> create_volume() or initialize_connection()?
> 

This is an interesting question. I have thought in the past we may be
able to do some interesting things, particularly with more involved
replication or migration scenarios.

We do not currently do this. Ideally I think we would want the other
driver instance passed in to the source driver so each driver would not
need to do something special to look it up.

You do have the option today of optimizing migrate for your driver [1].
But I think especially in cross-vendor migrations, there are things that
need to be done outside the scope of a driver that are currently handled
by Cinder.

There could be a valid use case for driver-to-driver interfaces, but as it
is now, I think what you are looking for is something a little more
involved that would need a little more design (and a lot more discussion)
to support.

[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L1552
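
For context, the driver-optimized migration hook that [1] points at has this
shape; a rough sketch only, with the vendor-side helpers hypothetical:

    def migrate_volume(self, context, volume, host):
        # Return (True, model_update) if the backend moved the volume
        # itself; returning (False, None) makes Cinder fall back to the
        # generic host-assisted copy migration.
        if not self._is_supported_destination(host):  # hypothetical helper
            return False, None
        model_update = self._array_assisted_move(volume, host)  # hypothetical
        return True, model_update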

> 
> Thanks for any input!
> --
> Wilson Liu

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new-project][gate] devstack-plugin-additional-pkg-repos

2016-03-19 Thread Tony Breeds
Hi All,
I just wanted to update everyone on a new devstack plugin [1] that allows us to
install alternate versions of libvirt (and qemu) into a devstack.

The motivation for this started in a Vancouver summit session but got more
steam from [2].  There is a project-config [3] change to add an experimental
job to use this.

The current state is that the plugin will install libvirt+qemu from the Ubuntu
Cloud Archive (liberty) and that's pretty much it.

I admit with Ubuntu 16.04 coming out RSN the utility of this is limited but
that's just the first step.

Some of the design considerations were:
 * Decouple the building of $packages from the devstack run
 * Use distribution packages, not tarballs etc
 * The plugin is a framework that allows this to be used (for example) by
   Neutron to test OVS[4]

Short term TODO list:
1) Add support for virt-preview on Fedora-23 [should happen next week]
2) Use libvirt packages other than UCA [realistically post summit]

Longer term TODO list:
1) find a home in the big tent
2) test all the things
3) Use this for Neutron to test OVS
- Is there an existing package repo we can test on trusty?

So if you're interested in helping out or using the plugin, reach out!

Yours Tony.

[1] 
https://review.openstack.org/#/q/project:openstack/devstack-plugin-additional-pkg-repos
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-November/079679.html
[3] https://review.openstack.org/#/c/289255/
[4] http://lists.openstack.org/pipermail/openstack-dev/2016-January/083947.html
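
For anyone who wants to try it: enabling the plugin in a devstack local.conf
should be the usual one-liner (assuming the standard git location for the
repo listed in [1]):

    [[local|localrc]]
    enable_plugin devstack-plugin-additional-pkg-repos https://git.openstack.org/openstack/devstack-plugin-additional-pkg-repos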


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Joshua Harlow




This has been proposed a number of times in the past with projects such as Boson
(https://wiki.openstack.org/wiki/Boson) and an extended discussion at one of the
summits (I think it was San Diego).

Then, there were major reservations from the PTLs at the impacts in terms of
latency, ability to reconcile and loss of control (transactions are difficult, 
transactions
across services more so).


Understood and I get the worry that this causes people.

But just some food for thought: I have heard through the grapevine that this is
how a company that starts with 'g' and ends with 'oogle' does quota on
their resources. A service (I don't know the name) manages the quotas,
and other services can subscribe to that service's events (to, say,
sync themselves with that service).


But this is just the grapevine, so the information may be a little 
distorted and/or not correct, ha...


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-19 Thread Vladimir Kozhukalov
Sorry, typo:

*cloud case does NOT assume running any kind of agent
inside user instance

Vladimir Kozhukalov

On Fri, Mar 18, 2016 at 7:26 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> >Well, there's a number of reasons. Ironic is not meant only for an
> >"undercloud" (deploying OpenStack on ironic instances). There are both
> >public and private cloud deployments of ironic in production today, that
> >make bare metal instances available to users of the cloud. Those users
> >may not want an agent running inside their instance, and more
> >importantly, the operators of those clouds may not want to expose the
> >ironic or inspector APIs to their users.
>
> >I'm not sure ironic should say "no, that isn't allowed" but at a minimum
> >it would need to be opt-in behavior.
>
> For me it's absolutely clear why cloud case does assume running any kind
> of agent
> inside user instance. It is clear why cloud case does not assume exposing
> API
> to the user instance. But cloud is not the only case that exists.
> Fuel is a deployment tool. Fuel case is not cloud.  It is 'cattle' (cattle
> vs. pets), but
> it is not cloud in a sense that instances are 'user instances'.
> Fuel 'user instances' are not even 'user' instances.
> Fuel manages the content of instances throughout their whole life cycle.
>
> As you might remember we talked about this about two years ago (when we
> tried to contribute lvm and md features to IPA). I don't know why this
> case
> (deployment) was rejected again and again while it's still viable and
> widely used.
> And I don't know why it could not be implemented to be 'opt-in'.
> Since that we have invented our own fuel-agent (that supports lvm, md) and
> a driver for Ironic conductor that allows to use Ironic with fuel-agent.
>
> >Is the fuel team having a summit session of some sort about integrating
> >with ironic better? I'd be happy to come to that if it can be scheduled
> >at a time that ironic doesn't have a session. Otherwise maybe we can
> >catch up on Friday or something.
>
> >I'm glad to see Fuel wanting to integrate better with Ironic.
>
> We are still quite interested in closer integration with Ironic (we need
> power
> management features that Ironic provides). We'll be happy to schedule yet
> another discussion on closer integration with Ironic.
>
> BTW, about a year ago (in Grenoble) we agreed that it is not even
> necessary to merge such custom things into Ironic tree. Happily, Ironic is
> smart enough to consume drivers using stevedore. About ironic-inspector
> the case is the same. Whether we are going to run it inside 'user instance'
> or inside ramdisk it does not affect ironic-inspector itself. If Ironic
> team is
> open for merging "non-cloud" features (of course 'opt-in') we'll be happy
> to contribute.
>
> Vladimir Kozhukalov
>
> On Fri, Mar 18, 2016 at 6:03 PM, Jim Rollenhagen 
> wrote:
>
>> On Fri, Mar 18, 2016 at 05:26:13PM +0300, Evgeniy L wrote:
>> > On Thu, Mar 17, 2016 at 3:16 PM, Dmitry Tantsur 
>> wrote:
>> >
>> > > On 03/16/2016 01:39 PM, Evgeniy L wrote:
>> > >
>> > >> Hi Dmitry,
>> > >>
>> > >> I can try to provide you description on what current Nailgun agent
>> is,
>> > >> and what are potential requirements we may need from HW discovery
>> system.
>> > >>
>> > >> Nailgun agent is a one-file Ruby script [0] which is periodically run
>> > >> under cron. It collects information about HW using ohai [1], plus it
>> > >> does custom parsing, filtration, retrieval of HW information. After
>> the
>> > >> information is collected, it is sent to Nailgun, that is how node
>> gets
>> > >> discovered in Fuel.
>> > >>
>> > >
>> > > Quick clarification: does it run on user instances? or does it run on
>> > > hardware while it's still not deployed to? The former is something
>> that
>> > > Ironic tries not to do. There is an interest in the latter.
>> >
>> >
>> > Both, on user instances (with deployed OpenStack) and on instances which
>> > are not deployed and in bootstrap.
>> > What are the reasons Ironic tries not to do that (running HW discovery
>> > on deployed node)?
>>
>> Well, there's a number of reasons. Ironic is not meant only for an
>> "undercloud" (deploying OpenStack on ironic instances). There are both
>> public and private cloud deployments of ironic in production today, that
>> make bare metal instances available to users of the cloud. Those users
>> may not want an agent running inside their instance, and more
>> importantly, the operators of those clouds may not want to expose the
>> ironic or inspector APIs to their users.
>>
>> I'm not sure ironic should say "no, that isn't allowed" but at a minimum
>> it would need to be opt-in behavior.
>>
>> >
>> >
>> > >
>> > >
>> > >> To summarise entire process:
>> > >> 1. After Fuel master node is installed, user restarts the nodes and
>> they
>> > >> get booted via PXE with bootstrap image.
>> > >> 2. Inside of bootstrap image Nailgun agent is 

Re: [openstack-dev] [Tempest] [Devstack] Where to keep tempest configuration?

2016-03-19 Thread Boris Pavlovic
There is another way to deal with this as well.

In Rally we have the "rally verify" command that you can use to run tempest
and auto-configure it.
We can just extend it with new projects; in this case we simplify life a
lot for everybody who wants to use tempest (ops, devops, devs, ...)



Best regards,
Boris Pavlovic

On Thu, Mar 17, 2016 at 4:50 AM, Jordan Pittier 
wrote:

>
>
> On Thu, Mar 17, 2016 at 12:24 PM, Vasyl Saienko 
> wrote:
>
>> Hello Community,
>>
>> We started using tempest/devstack plugins. They allow us to avoid bothering
>> other teams when project-specific changes need to be done. Tempest
>> configuration is still performed in devstack [0].
>> So I would like to raise the following questions:
>>
>>
>>- Where should we keep project-specific tempest configuration?
>>Example [1]
>>
> These iniset calls should be made from a devstack plugin. See [1]
>
>>
>>- Where should we keep tempest configuration shared between projects?
>>Example [2]
>>
> Again, in a devstack plugin. You shouldn't make the iniset calls directly
> but instead define some variables in a "settings" file (sourced by
> devstack). Hopefully these variables will be read by lib/tempest (in
> devstack) when the iniset calls are made.  See [2]
>
>>
>>-
>>
>> As for me, it would be good to move project-related tempest configuration
>> to the projects' repositories.
>>
> That's the general idea. It should be possible already right now.
>
>>
>> [0] https://github.com/openstack-dev/devstack/blob/master/lib/tempest
>> [1]
>> https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L509-L513
>> [2]
>> https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L514-L523
>>
>> Thank you in advance,
>> Vasyl Saienko
>>
>> [1]
> https://github.com/openstack/manila/blob/9834c802b8bf565099abf357fe054e086978cf6e/devstack/plugin.sh#L665
>
> [2]
> https://github.com/openstack/devstack-plugin-ceph/blob/18ee55a0a7de7948c41d066cd4a692e56fe8c425/devstack/settings#L14
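
To make the two suggestions above concrete, the pieces would live in the
project's devstack plugin roughly like this; a sketch only, with the variable
and option names hypothetical:

    # devstack/settings -- variables that devstack (and lib/tempest) can read
    MY_SERVICE_FEATURE_X=${MY_SERVICE_FEATURE_X:-True}

    # devstack/plugin.sh -- project-specific tempest.conf tweaks, run in
    # the test-config phase after lib/tempest has written tempest.conf
    if [[ "$1" == "stack" && "$2" == "test-config" ]]; then
        iniset $TEMPEST_CONFIG my_service feature_x_enabled $MY_SERVICE_FEATURE_X
    fi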
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-19 Thread Kai Qiang Wu
Hi Steve,

Some points to highlight here:

1> There is some ongoing discussion about dynamic COE support across
different OS distros.


2>  For atomic, we did have many requirements before; it is an old story, and
some did not meet our needs (we once asked in the atomic IRC channel and
community), so we built some images by ourselves. But if the atomic community
could provide related support, it would be more beneficial for both (as we use
it, it would be tested by our daily jenkins jobs and developers).


Maybe for the requirements we need some clear channel, like:


1>  What's the official channel for opening requirements to the Atomic
community? Is it github or something else which can be easily tracked?

2> What's the normal process to deal with such requirements, and what are the
ways to coordinate?

3> Others





Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Steve Gordon 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   17/03/2016 09:24 pm
Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro



- Original Message -
> From: "Kai Qiang Wu" 
> To: "OpenStack Development Mailing List (not for usage questions)"

> Sent: Tuesday, March 15, 2016 3:20:46 PM
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro
>
> Hi  Stdake,
>
> There is a patch about Atomic 23 support in Magnum.  And atomic 23 uses
> kubernetes 1.0.6, and docker 1.9.1.
> From Steve Gordon, I learnt they did have a two-weekly release. To me it
> seems each atomic 23 release has not much difference (minor changes).
> The major rebases/updates may still have to wait for e.g. Fedora Atomic
> 24.

Well, the emphasis here is on *may*. As was pointed out in that same thread
[1] rebases certainly can occur although those builds need to get karma in
the fedora build system to be pushed into updates and subsequently included
in the next rebuild (e.g. see [2] for a newer K8S build). The main point is
that if a rebase involves introducing some element of backwards
incompatibility then that would have to wait to the next major (F24) -
outside of that there is some flexibility.

> So maybe we do not need to test every Atomic 23 two-weekly release.
> Pick one, or update the old one when we find it is integrated with a new
> kubernetes, docker, etcd etc. For other small changes (not including
> security), it seems we do not need to update so frequently; it can save
> some effort.

A question I have posed before, and that I think will need to be answered if
Magnum is indeed to move towards the model for handling drivers proposed in
this thread, is: what are the expectations Magnum has for each image/COE
combination in terms of versions of key components for a given Magnum
release, and what are the expectations Magnum has for the same when looking
forward to, say, Newton?

Based on our discussion it seemed like there were some issues that mean
kubernetes-1.1.0 would be preferable, for example (although the fact that it
wasn't there was, it would seem, itself a bug; regardless, it's a valid
example), but is that expectation documented somewhere? It seems like, based
on the feature roadmap, it should be possible to at least put forward minimum
required versions for key components (e.g. docker, k8s, flannel, etcd for
the K8S COE). This would make it easier to guide the relevant upstreams to
ensure their images support the Magnum team's needs, and at least minimize
the need to do custom builds if not eliminate it.

-Steve

[1]
https://lists.fedoraproject.org/archives/list/cl...@lists.fedoraproject.org/thread/ZJARDKSB3KGMKLACCZSQALZHV54PAJUB/

[2] https://bodhi.fedoraproject.org/updates/FEDORA-2016-a89f5ce5f4

> From:  "Steven Dake (stdake)" 
> To:"OpenStack Development Mailing List (not for usage questions)"
> 
> Date:  16/03/2016 03:23 am
> Subject:   Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
>
>
> WFM as long as we stick to the spirit of the proposal and don't end up in
> a situation where there is only one distribution.  Others in the thread had
> indicated there would be only one distribution in tree, which I'd find
> disturbing for reasons already described on this thread.
>
> While we are about it, we should move to the latest version of atomic and
> chase atomic every two weeks on their release.  Thoughts?
>
> 

Re: [openstack-dev] [oslo] oslo.messaging dispatching into private/protected methods?

2016-03-19 Thread Joshua Harlow

Good find ;)

Davanum Srinivas wrote:

Josh,

Haha, see note from russellb :)
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/rpcapi.py#n308

On Thu, Mar 17, 2016 at 6:44 PM, Joshua Harlow  wrote:

As a follow-up to this:

Seems like the patch to disable/disallow this itself found some 'violations'
@
http://logs.openstack.org/24/289624/3/check/gate-oslo.messaging-src-dsvm-full-amqp1-centos7/e3b485c/console.html.gz#_2016-03-11_00_06_56_177

Details: {u'message': u'Unable to associate floating IP 172.24.5.1 to fixed
IP 10.1.14.255 for instance 3660f872-a8c2-4469-99c3-062ed1a90131. Error:
Remote error: NoSuchMethod Endpoint does not support RPC method
_associate_floating_ip\n[u\'Traceback (most recent call last):\\n\', u\'
File "/opt/stack/new/oslo.messaging/oslo_messaging/rpc/dispatcher.py", line
138, in _dispatch_and_reply\\nincoming.message))\\n\', u\' File
"/opt/stack/new/oslo.messaging/oslo_messaging/rpc/dispatcher.py", line 170,
in _dispatch\\nraise NoSuchMethod(method)\\n\', u\'NoSuchMethod:
Endpoint does not support RPC method _associate_floating_ip\\n\'].',
u'code': 400}

I believe this is a nova error as the test name is
'tempest.api.compute.floating_ips.test_floating_ips_actions'

So I guess the question becomes: should we start warning using warnings.warn
(instead of raising a NoSuchMethod error) and at a later point in the future
stop using warnings.warn and switch to NoSuchMethod, giving people ample
time to stop dispatching into protected/private methods?

Thoughts?

-Josh

On 03/08/2016 09:43 AM, Joshua Harlow wrote:

Hi all,

As I was working through https://review.openstack.org/#/c/288719/ for
kevin benton to do some things with in neutron it came to my
understanding that this code (the dispatcher code that is) can dispatch
into nearly arbitrary callables of any object (or that is what it looks
like it can do):


https://github.com/openstack/oslo.messaging/blob/4.5.0/oslo_messaging/rpc/dispatcher.py#L169


So during this exploration of this code for the above review it made me
wonder if this is a feature or bug, or if we should at least close the
hole of allowing calling into nearly any endpoint method/attribute (even
non-callable ones too?).

So before doing much more of this (which I started in review
https://review.openstack.org/#/c/289624/) I wanted to see if people are
actually using this 'ability' (for lack of better words) to call into
private/protected methods before pursuing 289624 much more...

Thoughts?

-Josh
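
For readers following along, the dispatch behaviour under discussion is easy
to see with a plain RPC server; a minimal sketch (topic and method names made
up for illustration):

    import oslo_messaging
    from oslo_config import cfg


    class Endpoint(object):
        def do_work(self, ctxt, value):  # clearly meant to be called via RPC
            return value * 2

        def _internal(self, ctxt):       # private by naming convention
            return 'secret'


    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo', server='host1')
    server = oslo_messaging.get_rpc_server(transport, target, [Endpoint()])

    # The dispatcher resolves the method name on the endpoint with getattr(),
    # so a client doing RPCClient(transport, target).call({}, '_internal')
    # currently reaches the underscore method too; the proposal in this
    # thread is to reject (or first warn about) such names.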


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-19 Thread Cory Benfield

> On 18 Mar 2016, at 17:05, Doug Wiegley  wrote:
>> On Mar 18, 2016, at 8:31 AM, Cory Benfield  wrote:
>> 
>> Getting requests to talk over a Unix domain socket is not particularly 
>> tricky, and there are third-party libraries that hook into requests 
>> appropriately to make that happen. For example, the requests-unixsocket 
>> module exists that can do the appropriate things.
> 
> That’s the module that I was eyeing, but we’re just trading one dependency 
> for another. Is there something about httplib2 maintenance in particular that 
> makes us want that gone?
> 
> doug

The original message in this thread was about the fact that httplib2 is 
currently unmaintained and looking for new maintainers. I believe that was the 
impetus for the discussion.

Cory
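
As an illustration of the third-party hook Cory mentions above,
requests-unixsocket plugs a Unix-socket transport adapter into a requests
Session; a small sketch (the docker socket is just an example endpoint):

    import requests_unixsocket

    # The socket path is percent-encoded into the host part of a special
    # http+unix:// URL scheme that the adapter understands.
    session = requests_unixsocket.Session()
    resp = session.get('http+unix://%2Fvar%2Frun%2Fdocker.sock/info')
    print(resp.status_code)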


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] notes for Newton about the release process, learnings from Mitaka thus far

2016-03-19 Thread Doug Hellmann
Excerpts from Amrith Kumar's message of 2016-03-18 13:44:34 +:
> Folks on the Trove team,
> 
> I've certainly learnt a lot of things going through the Mitaka release 
> process thus far, and I'm sure there are more things that I will learn as 
> well. But, I want to send this to all so that we don't 'learn' the same 
> things again in Newton.
> 
> 
> 1.  The release calendar is sent out way in advance of the release. Here is 
> the Newton release schedule[1].
> 
> 
> 
> 2.  Some important things to remember:
> 
> a.  There is a deadline for "Final release for client libraries". That came
> at R-5 in Mitaka [2]. On that date, the features that we have in
> python-troveclient ARE the final set for the release. You can have bug fixes
> after that date but not new features.

The reason for this is that new library releases only take effect if we
update the global requirements list and/or constraints list, and that
list is frozen during the final few weeks of the server feature
development period leading up to the third milestone to try to maintain
stable dependencies in the gate. So, we update it for critical bugs in
libraries, but not every little thing and definitely not for features.

Doug
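
To illustrate with hypothetical version numbers: a new client feature only
reaches the gate once both lists in openstack/requirements move, and the
second one is what gets frozen during that window.

    # global-requirements.txt -- the minimum the servers may depend on
    python-troveclient>=2.1.0

    # upper-constraints.txt -- the exact version installed in gate jobs
    python-troveclient===2.2.0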

> 
> b.  Feature freeze exceptions only cover things OTHER THAN client libraries. 
> If you have a change that involves client and server side stuff, the deadline 
> for "Final release for client libraries" does not get moved for your FFE'ed 
> things.
> 
> c.  Once RC1 is tagged, you have a branch for the release. Therefore if you 
> have changes after RC1, remember that they are backports from master onto 
> stable/newton (now stable/mitaka).
> 
> d.  RC1 and all of the other tags have to be dropped on openstack/trove, 
> openstack/python-troveclient and openstack/trove-dashboard.
> 
> 3.  We will have to follow some process to identify the things that we want 
> to get merged at each stage of the process. We used the starredby: thing this 
> time around, as we've done in releases past. We may have to figure something 
> else out, I'm looking to see if we can tag reviews but so far I haven't found 
> anything.
> 
> Those are some of the things I've learned and I wanted to make a note of 
> these for myself, but figured I may as well share with the team.
> 
> Thanks,
> 
> -amrith
> 
> [1] http://releases.openstack.org/newton/schedule.html
> [2] http://releases.openstack.org/mitaka/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packstack] Stable/Mitaka branch has been created

2016-03-19 Thread David Moreau Simard
Hi all,

Please note that the stable/mitaka branch for Packstack was created to
prepare for the Mitaka release.
As such, new features should not be added to stable/mitaka but on the
master branch.

Bugfixes can be backported to the stable branches if those branches are
impacted.

stable/mitaka is not yet gated but should be soon when the review for
it [1] merges.

Time to start working on the Mitaka cycle release notes! It was a
great cycle and we need to highlight the work that was done.
I'll try and submit a review for the release notes using Reno [2] like
the rest of the OpenStack projects.

[1]: https://review.openstack.org/#/c/294234/
[2]: http://docs.openstack.org/developer/reno/

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread David Stanek
On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal <
douglas.mendiza...@rackspace.com> wrote:

> [snip]
> >
> > Regarding the Keystone solution, I'd like to hear the Keystone team's
> feadback on that.  It definitely sounds to me like you're trying to put a
> square peg in a round hole.
> >
>
>
I believe that using Keystone for this is a mistake. As mentioned in the
blueprint, Keystone is not encrypting the data so magnum would be on the
hook to do it. So that means that if security is a requirement you'd have
to duplicate more than just code. magnum would start having a larger
security burden. Since we have a system designed to securely store data I
think that's the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] eventlet in the py34 unit tests

2016-03-19 Thread Chris Dent


This review demonstrates a fix for the py34 unit tests sometimes
taking an age in the gate and eventually timing out:

   https://review.openstack.org/#/c/293372/

The changed test, without the change, will block in epoll() for a
rather long time (900s), only in python34. In python 27 it sails.

With the change, wherein the request is placed in its own
greenthread, it sails in both. The fix seems rather dirty but it
works...
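
The shape of the workaround, reduced to its essentials; a sketch, with
make_request() standing in for the test's blocking call:

    import eventlet

    # Instead of calling the (epoll-blocking, under python34) request
    # inline, hand it to its own greenthread and wait on the result.
    gt = eventlet.spawn(make_request)
    resp = gt.wait()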

The need for the fix suggests one or both of two things:

* eventlet still not quite good enough in Python 3
* Our use of eventlet to drive services in the functional tests,
  once we have them running against Python 3 is going to be quite
  the problem and even if we fix this minor problem now, we're going
  to have a much bigger one later (unless eventlet matures).

Does anyone else have additional ideas or thoughts?

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Clarification of expanded mission statement

2016-03-19 Thread Russell Bryant
The Kuryr project proposed an update to its mission statement and I agreed
to start a ML thread seeking clarification on the update.

https://review.openstack.org/#/c/289993

The change expands the current networking focus to also include storage
integration.

I was interested to learn more about what work you expect to be doing.  On
the networking side, it's clear to me: a libnetwork plugin, and now perhaps
a CNI plugin.  What specific code do you expect to deliver as a part of
your expanded scope?  Will that code be in Kuryr, or be in upstream
projects?

If you don't know yet, that's fine.  I was just curious what you had in
mind.  We don't really have OpenStack projects that are organizing around
contributing to other upstreams, but I think this case is fine.

-- 
Russell Bryant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin][ptl] PTL Candidacy

2016-03-19 Thread Qiming Teng
Dear All,

With this mail I'm nominating myself as the PTL of the Senlin project
for the Newton cycle.

It has been an honor and a pleasure to work with developers who share
the same passion to make Senlin a useful and usable service to users.
Senlin is still in its infancy after joining the big tent family. We
still have a lot of work to do going forward, although we are very
happy that it is pretty stable and usable now.

If I get the opportunity to continue serving the team for its very
first official cycle, I'd strive to work with the team on the following
items:

- Better alignment with community

  * Getting API micro-versioning in place so that future revisions to
the API can be better managed;
  * Completing API/scenario tests using Tempest plugins, e.g. we are
to test not only how the API works normally but also how it fails;
  * Support for live upgrade, thus paving the way for future development;
  * Advanced filters for listing operations;

  etc.

- Full support for high-availability

  * A flexible, usable health policy;
  * Event/message listeners for node failure events;
  * Fencing logic regarding compute, network and storage;
  * Customizability of this feature for various usage scenarios;

- Support to container clusters

  * Enabler functions to create/manage containers in VMs / Bare-metals;
  * Simple placement policy to schedule containers based on dynamic
resource measurements;
  * Evaluate options for network and storage provisioning;

- Cross-project collaborations

  * Continue working with Heat regarding Senlin resource types;
  * Start working with Zaqar with respect to message queue receivers;
  * Engage with Tacker to support its autoscaling use case;
  * Work with Telemetry and Metering on event notifications and metrics;
  * Explore interaction with Mistral and Congress on workflow and
    conformance (could be a policy?)
  * Explore Tooz for large-scale deployments

- Improvement to usability and scalability

Right. We have a lot of work to do. We do prefer a PTL rotation practice
as some projects do, and we do have strong candidates in the team.
However, when I asked around, I got the feeling that most are at the
moment over-committed in their daily jobs. Before stepping up as a PTL
candidate, one has to secure enough bandwidth for the job. Fortunately,
I am still enjoying such support. That is the reason behind this post.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] working on bug reports; what blocks you?

2016-03-19 Thread Markus Zoeller
Kashyap Chamarthy  wrote on 03/18/2016 07:28:09 AM:

> From: Kashyap Chamarthy 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 03/18/2016 07:30 AM
> Subject: Re: [openstack-dev] [nova] working on bug reports; what blocks 
you?
> 
> On Thu, Mar 17, 2016 at 03:28:48PM -0500, Matt Riedemann wrote:
> > On 3/17/2016 11:41 AM, Markus Zoeller wrote:
> > >What are the various reasons which block you to work on bug reports?
> > >This question goes especially to the new contributors but also to the
> > >rest of us. For me, personally, it's that most bug reports miss the
> > >steps to reproduce which allow me to see the issue on my local system
> > >before I start to dig into the code.
> > >
> > >I'm asking this because I'm not sure what the main reasons are that
> > >our bug list is this huge (~1000 open bug reports). Maybe you have
> > >reasons which can be resolved or mitigated by me in my bug czar role.
> > >Let me know.
> 
> Effective bug reporting is top issue for me.  By "effective" I mean:
> 
>   - Not assuming any prior context while writing a report.  (Especially
> when writing how to reproduce the problem.)
>   - Not forgetting to state changes made to various configuration
> attributes
>   - Describing the problem's symptoms in chronological order.
>   - Describing the test environment precisely.
> 
> Writing a thoughtful report is hard and time-taking.

Yeah, and I assume that's the reason many bug reports lack that
information. I hope to dig deeper into the logging capabilities
of Nova during the P cycle, to figure out how much we internally already
know but don't offer easily enough. In some bug reports I suggested using
sosreport and attaching the file, but I didn't see that happen.
In my head there would be openstack CLI commands like these two:

$ openstack report-bug 
$ openstack report-bug --last 10m 

That should then result in an upstream bug report which answers the
usual questions we have. Don't ask me for details on how this would happen.

> https://wiki.openstack.org/wiki/BugFilingRecommendations
> 
> > Clear recreate steps is probably #1, but also logs if there are
> > obvious failures. A stacktrace goes a long way with a clear
> > description of the failure scenario. Obviously need to know the level
> > of code being tested.
> > 
> > For a lot of bugs that are opened on n-2 releases, like kilo at this
> > point, my first question is, have you tried this on master to see if
> > it's still an issue. That's lazy on my part, but it's easy if I'm not
> > aware of a fix that just needs backporting.
> 
> I don't view it as being lazy on your part.  Other open source projects
> use a similar method -- e.g. in Fedora Project, one month after N+2
> (Fedora-24) is released, 'N' (Fedora-22) goes End-of-Life.  And, all
> bugs (that are not solved) reported against 'N' (for components with
> high bug volume) are closed, with a request to re-test them on N+2
> (latest stable release), and re-open it if the issue persists.
> Otherwise, it becomes difficult to cope with volume.

In an ideal world with unlimited resources we could do that on our own
without asking the reporter, but I guess we should do the "please 
recheck if it's in master" thing more often.

> 
> -- 
> /kashyap
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Zane Bitter

On 16/03/16 06:57, Sergey Kraynev wrote:

Hi Heaters,

The Mitaka release is close to finished, so it's a good time for reviewing
the results of our work.
One of these results is an analysis of contributions for the last release
cycle. According to the data [1] we have one good candidate for nomination
to the core-reviewer team:
Oleksii Chuprykov.


+1


During this release he showed significant review metrics.
His reviews were valuable and useful. Also, he has a good level of
expertise in the Heat code.
So I think he is worthy to join the core-reviewers team.

I ask you to vote and decide his destiny.
  +1 - if you agree with his candidature
  -1  - if you disagree with his candidature

[1] http://stackalytics.com/report/contribution/heat-group/120




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Andrea Frittoli
On Thu, Mar 17, 2016 at 2:57 AM Ken'ichi Ohmichi 
wrote:

> 2016-03-16 19:41 GMT-07:00 Jim Rollenhagen :
> > On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
> >> Hi
> >>
> >> I have one proposal[1] related to negative tests in Tempest, and
> >> hoping opinions before doing that.
> >>
> >> Now Tempest contains negative tests and sometimes patches are being
> >> posted for adding more negative tests, but I'd like to propose
> >> removing them from Tempest instead.
> >>
> >> Negative tests verify the surface of each component's REST API, without
> >> any integration between components. That is not integration testing,
> >> which is the scope of Tempest.
> >> In addition, adding negative tests to Tempest means spending test run
> >> time on other components' gates. For example, we are running negative
> >> tests of Keystone and other components on the gate of Nova. That is
> >> meaningless, so we need to avoid adding more negative tests to Tempest now.
> >>
> >> If you want to add negative tests, a nice option is to implement
> >> these tests in each component repo with the Tempest plugin interface. We
> >> can avoid running negative tests on other components' gates, and
> >> each component team can decide which negative tests are valuable on its
> >> gate.
> >>
> >> In the long term, all negative tests will be migrated into each component
> >> repo with the Tempest plugin interface. We will then be able to run only
> >> the valuable negative tests on each gate.
> >
> > So, positive tests in tempest, negative tests as a plugin.
> >
> > Is there any longer term goal to have all tests for all projects in a
> > plugin for that project? Seems odd to separate them.
>
> Yeah, from an implementation viewpoint, that seems a little odd,
> but given the main scope of Tempest, and to avoid unnecessary gate
> operation time, it can be acceptable, I feel.
> Negative tests are corner cases in most cases; they are not really
> integration tests.
>

I think it's difficult to define a single black-and-white criterion for
negative tests, as they encompass a wide range of types of tests.

I agree that things that only test the API level of a service (without even
a DB behind it) do not necessarily belong in tempest - i.e. testing of input
validation done by an API.  We could have a guideline for such tests to be
implemented as unit/functional tests in the tree of the service.

However Tempest is also about interoperability, so we should keep at least a few
negative API checks in tempest (for the core six services) to enforce that
return codes do not change inadvertently in negative cases, which could
break existing clients and applications.

If a project were to move all negative tests out of tempest, then it might
consider having hacking rules to prevent modifying the code and tests at the
same time and changing behaviour inadvertently.

andrea


>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Fox, Kevin M
Yeah, I get that. I've got some sizeable deployments too.

But in the case of using a library, you're scattering all the security bits
around the various services, and it just pushes the burden of securing it,
patching all the services, etc. some place else. It's better than each project
rolling its own security solution, for sure, but if you're deploying the system
securely, I don't think it really is less of a burden. You trade having to
figure out how to deploy an extra service for having to pay careful attention
to every other service to secure them more carefully. I'd argue it should be
easier to deploy the centralized service than doing it across the other
services.

Thanks,
Kevin 

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Friday, March 18, 2016 1:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

On 3/18/16, 12:59 PM, "Fox, Kevin M"  wrote:

>+1. We should be encouraging a common way of solving these issues across
>all the openstack projects and security is a really important thing.
>spreading it across lots of projects causes more bugs and security
>related bugs cause security incidents. No one wants those.
>
>I'd also like to know why, if an old cloud is willing to deploy a new
>magnum, its unreasonable to deploy a new barbican at the same time.
>
>If its a technical reason, lets fix the issue. If its something else,
>lets discuss it. If its just an operator not wanting to install 2 things
>instead of just one, I think its a totally understandable, but
>unreasonable request.

Kevin,

I think the issue comes down to "how" the common way of solving this
problem should be approached.  In barbican's case a daemon and database
are required.  What I wanted early on with Magnum when I was involved was
a library approach.

Having maintained a deployment project for 2 years, I can tell you each
time we add a new big tent project it adds a bunch of footprint to our
workload.  Operators typically don't even have a tidy deployment tool like
Kolla to work with.  As an example, ceilometer has had containers
available in Kolla for 18 months yet nobody has finished the job on
implementing ceilometer playbooks, even though ceilometer is a soft
dependency of heat for autoscaling.

Many Operators self-deploy so they understand how the system operates.
They lack the ~200 contributors Kolla has to maintain a deployment tool,
and as such, I really don't think the idea that deploying "Y to get X when
Y could and should be a small footprint library" is unreasonable.

Regards,
-steve

>
>Thanks,
>Kevin
>
>From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
>I think Adrian makes some excellent points regarding the adoption of
>Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
>hear from other projects that securing their sensitive data is a
>requirement but then turn around and say that deploying Barbican is a
>problem.
>
>I guess I'm having a hard time understanding the operator persona that
>is willing to deploy new services with security features but unwilling
>to also deploy the service that is meant to secure sensitive data across
>all of OpenStack.
>
>I understand one barrier to entry for Barbican is the high cost of
>Hardware Security Modules, which we recommend as the best option for the
>Storage and Crypto backends for Barbican.  But there are also other
>options for securing Barbican using open source software like DogTag or
>SoftHSM.
>
>I also expect Barbican adoption to increase in the future, and I was
>hoping that Magnum would help drive that adoption.  There are also other
>projects that are actively developing security features like Swift
>Encryption, and DNSSEC support in Designate.  Eventually these features
>will also require Barbican, so I agree with Adrian that we as a
>community should be encouraging deployers to adopt the best security
>practices.
>
>Regarding the Keystone solution, I'd like to hear the Keystone team's
>feadback on that.  It definitely sounds to me like you're trying to put
>a square peg in a round hole.
>
>- Doug
>
>On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>
>>
>> I think the Keystone approach will work. For others, please speak up if
>> it doesn¹t work for you.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>
>>
>> Hongbin,
>>
>>
>>
>> I tweaked the blueprint in accordance with this approach, and approved
>> it for Newton:
>>
>> 

Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Rodrigo Duarte
Totally agree here; also, having positive/negative API tests in Tempest
helps the API stability effort. Although the API is owned by the service
in question, it interacts with other services, and making sure the API is
stable is valuable for the communication between them.

We know of a recent example where a change in the Keystone API caused a
change in Cinder.

On Thu, Mar 17, 2016 at 8:05 AM, Andrea Frittoli 
wrote:

>
>
> On Thu, Mar 17, 2016 at 2:57 AM Ken'ichi Ohmichi 
> wrote:
>
>> 2016-03-16 19:41 GMT-07:00 Jim Rollenhagen :
>> > On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>> >> Hi
>> >>
>> >> I have one proposal[1] related to negative tests in Tempest, and
>> >> hoping opinions before doing that.
>> >>
>> >> Now Tempest contains negative tests and sometimes patches are being
>> >> posted for adding more negative tests, but I'd like to propose
>> >> removing them from Tempest instead.
>> >>
>> >> Negative tests verify the surface of each component's REST API, without
>> >> any integration between components. That is not integration testing,
>> >> which is the scope of Tempest.
>> >> In addition, adding negative tests to Tempest means spending test run
>> >> time on other components' gates. For example, we are running negative
>> >> tests of Keystone and other components on the gate of Nova. That is
>> >> meaningless, so we need to avoid adding more negative tests to Tempest now.
>> >>
>> >> If you want to add negative tests, a nice option is to implement
>> >> these tests in each component repo with the Tempest plugin interface. We
>> >> can avoid running negative tests on other components' gates, and
>> >> each component team can decide which negative tests are valuable on its
>> >> gate.
>> >>
>> >> In the long term, all negative tests will be migrated into each component
>> >> repo with the Tempest plugin interface. We will then be able to run only
>> >> the valuable negative tests on each gate.
>> >
>> > So, positive tests in tempest, negative tests as a plugin.
>> >
>> > Is there any longer term goal to have all tests for all projects in a
>> > plugin for that project? Seems odd to separate them.
>>
>> Yeah, from an implementation viewpoint, that seems a little odd,
>> but given the main scope of Tempest, and to avoid unnecessary gate
>> operation time, it can be acceptable, I feel.
>> Negative tests are corner cases in most cases; they are not really
>> integration tests.
>>
>
> I think it's difficult to define a single black-and-white criterion for
> negative tests, as they encompass a wide range of types of tests.
>
> I agree that things that only test the API level of a service (without even
> a DB behind it) do not necessarily belong in tempest - i.e. testing of input
> validation done by an API.  We could have a guideline for such tests to be
> implemented as unit/functional tests in the tree of the service.
>
> However Tempest is also about interoperability, so we should keep at least a few
> negative API checks in tempest (for the core six services) to enforce that
> return codes do not change inadvertently in negative cases, which could
> break existing clients and applications.
>
> If a project were to move all negative tests out of tempest, then it
> might consider having hacking rules to prevent modifying the code and tests
> at the same time and changing behaviour inadvertently.
>
> andrea
>
>
>>
>> Thanks
>> Ken Ohmichi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread gordon chung


On 16/03/2016 9:39 AM, Sean Dague wrote:
> This has to live inside all the upgrade constraints we currently have
> (like online data migration in the Nova case), otherwise it's a non starter.

completely agree

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Mitaka RC1 done

2016-03-19 Thread Shinobu Kinjo
On Sun, Mar 20, 2016 at 9:34 AM, Ben Swartzlander  wrote:
> Thanks everyone for the hard work getting RC1 done. We had a record-high
> number of bugs at feature freeze this cycle, and we ended up pushing a few
> out, but there were SIXTY bugs fixed in the last 2 weeks, which I consider a
> great accomplishment!
>
> Soon the official RC1 tag should be posted and everyone should start testing
> the release to look for any bugs we missed. While you wait for the tag, go
> ahead and vote for one of these logo designs for the stickers in Austin:
>
> http://surveymonkey.com/r/J8266ZH

Nice -;

Cheers,
S

>
> -Ben Swartzlander
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
Adrian/Hongbin,

Thanks for taking the time to provide your input on this matter. After 
reviewing your feedback, my takeaway is that Magnum is not ready for production 
without implementing Barbican or some other future feature such as the Keystone 
option Adrian provided. 

All,

Is anyone using Magnum in production? If so, I would appreciate your input.

-Daneyon Hansen

> On Mar 17, 2016, at 6:16 PM, Adrian Otto  wrote:
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that have a good 
> reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store: 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we would want to 
> generate an encryption key per bay, encrypt the certificate and store it in 
> keystone. We would then use the same key to decrypt it upon reading the 
> certificate back. This might be an acceptable middle ground for clouds that will not or 
> can not run Barbican. This should work for any OpenStack cloud since Grizzly. 
> The total amount of code in Magnum would be small, as the API already exists. 
> We would need a library function to encrypt and decrypt the data, and ideally 
> a way to select different encryption algorithms in case one is judged weak at 
> some point in the future, justifying the use of an alternate.
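
As a rough illustration of that flow, here is a minimal sketch in Python,
assuming the cryptography package's Fernet and python-keystoneclient's v3
credentials API; the helper names and the credential type tag are hypothetical,
and where the per-bay key itself lives (e.g. Magnum's own DB) is left out:

    from cryptography.fernet import Fernet
    from keystoneclient.v3 import client as ks_client

    # keystone = ks_client.Client(session=...)  # an authenticated v3 client

    def store_bay_certificate(keystone, user_id, bay_uuid, cert_pem):
        # one symmetric key per bay; cert_pem is bytes
        key = Fernet.generate_key()
        encrypted = Fernet(key).encrypt(cert_pem)
        credential = keystone.credentials.create(
            user=user_id,
            type='magnum-cert-' + bay_uuid,  # hypothetical type tag
            blob=encrypted.decode('utf-8'))
        # the key must be stored somewhere outside the Keystone DB
        return credential.id, key

    def load_bay_certificate(keystone, credential_id, key):
        blob = keystone.credentials.get(credential_id).blob
        return Fernet(key).decrypt(blob.encode('utf-8'))

Selecting a different encryption algorithm, as suggested above, would then
just mean dispatching to something other than Fernet behind those two helpers.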
> 
> Adrian
> 
>> On Mar 17, 2016, at 4:55 PM, Adrian Otto  wrote:
>> 
>> Hongbin,
>> 
>>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu  wrote:
>>> 
>>> Adrian,
>>> 
>>> I think we need a broader set of inputs in this matter, so I moved the 
>>> discussion from the whiteboard back to here. Please check my replies inline.
>>> 
 I would like to get a clear problem statement written for this.
 As I see it, the problem is that there is no safe place to put 
 certificates in clouds that do not run Barbican.
 It seems the solution is to make it easy to add Barbican such that it's 
 included in the setup for Magnum.
>>> No, the solution is to explore an non-Barbican solution to store 
>>> certificates securely.
>> 
>> I am seeking more clarity about why a non-Barbican solution is desired. Why 
>> is there resistance to adopting both Magnum and Barbican together? I think 
>> the answer is that people think they can make Magnum work with really old 
>> clouds that were set up before Barbican was introduced. That expectation is 
>> simply not reasonable. If there were a way to easily add Barbican to older 
>> clouds, perhaps this reluctance would melt away.
>> 
 Magnum should not be in the business of credential storage when there is 
 an existing service focused on that need.
 
 Is there an issue with running Barbican on older clouds?
 Anyone can choose to use the builtin option with Magnum if they don't have 
 Barbican.
 A known limitation of that approach is that certificates are not 
 replicated.
>>> I guess the *builtin* option you referred to is simply placing the 
>>> certificates on the local file system. A few of us had concerns about this 
>>> approach (in particular, Tom Cammann gave a -2 on the review [1]) because 
>>> it cannot scale beyond a single conductor. Finally, we made a compromise to 
>>> land this option and use it for testing/debugging only. In other words, 
>>> this option is not for production. As a result, Barbican becomes the only 
>>> option for production which is the root of the problem. It basically forces 
>>> everyone to install Barbican in order to use Magnum.
>>> 
>>> [1] https://review.openstack.org/#/c/212395/ 
>>> 
 It's probably a bad idea to replicate them.
 That's what Barbican is for. --adrian_otto
>>> Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
>>> agreed to have two phases of implementation and the statement was made by 
>>> you [2].
>>> 
>>> 
>>> #agreed Magnum will use Barbican for an initial implementation for 
>>> certificate generation and secure storage/retrieval.  We will commit to a 
>>> second phase of development to eliminating the hard requirement on Barbican 
>>> with an alternate implementation that implements the functional equivalent 
>>> implemented in Magnum, which may depend on libraries, but not Barbican.
>>> 
>>> 
>>> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>> 
>> The context there is important. Barbican was considered for two purposes: 
>> (1) CA signing capability, and (2) certificate storage. My willingness to 
>> implement an alternative was based on our need to get a certificate 
>> generation and signing solution that actually worked, as Barbican did not 
>> work for that at the time. I have always viewed Barbican as a suitable 
>> 

Re: [openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-19 Thread Kevin.ZhangSen
Thanks for Joe's explanation. :)


Best Regards,
Kevin (Sen Zhang)




On 2016-03-17 09:58:38, "joehuang" wrote:


Agree with Kevin,

 

Currently Tricircle mainly focuses on the API gateway and networking automation 
across multiple OpenStack instances. The hybrid cloud PoC is built based on 
Tricircle in Tokyo Summit: 
https://www.openstack.org/summit/tokyo-2015/videos/presentation/huawei-openstack-enabled-hybrid-cloud

 

There, Tricircle manages multiple small OpenStack instances, and each 
OpenStack instance uses "jacket" to integrate AWS/vCloud. Jacket provides a 
way to abstract the hybrid cloud.

 

Best Regards

Chaoyi Huang ( Joe Huang )

 

From: zs [mailto:okay22m...@163.com]
Sent: Wednesday, March 16, 2016 8:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [jacket] Introduction to jacket, a new project

 

Hi Gordon,

Thank you for your suggestion.

I think jacket is different from tricircle: tricircle focuses on 
OpenStack deployment across multiple sites, while jacket focuses on how to manage 
different clouds just like one cloud.  There are some differences:
1. Account management and API model: Tricircle faces multiple OpenStack 
instances which can share one Keystone and have the same API model, but jacket 
will face different clouds which have their respective services and different 
API models. For example, VMware vCloud Director has no volume management like 
OpenStack and AWS, so jacket will offer a fake volume management for this kind of 
cloud.
2. Image management: One image can only run in one cloud; jacket needs to consider 
how to solve this problem.
3. Flavor management: Different clouds have different flavors which cannot be 
operated by users. Jacket will face this problem, but this problem does not 
exist in tricircle.
4. Legacy resources adoption: Because of the different API models, it will be a 
huge challenge for jacket.


I think it may be a good solution for jacket to work on unifying the API model 
for different clouds, and then to use tricircle to offer the management of a 
large number of VMs.


Best Regards,
Kevin (Sen Zhang)

 


At 2016-03-16 19:51:33, "gordon chung"  wrote:
> 
> 
>On 16/03/2016 4:03 AM, zs wrote:
>> Hi all,
>> 
>> There is a new project "jacket" to manage multiple clouds. The jacket
>> wiki is: https://wiki.openstack.org/wiki/Jacket
>>   Please review it and give your comments. Thanks.
>> 
>> Best Regards,
>> 
>> Kevin (Sen Zhang)
>> 
>> 
> 
>i don't know exact details of either project, but i suggest you 
>collaborate with tricircle project[1] because it seems you are 
>addressing the same user story (and in a very similar fashion). not sure 
>if it's a user story for OpenStack itself, but no point duplicating efforts.
> 
>[1] https://wiki.openstack.org/wiki/Tricircle
> 
>cheers,
> 
>-- 
>gord
> 
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-19 Thread Rochelle Grober
(Replying inline because the mail formatting was friendly this time)

From: Tim Bell March 17, 2016 11:26 AM:
On 17/03/16 18:29, "Sean Dague"  wrote:

>On 03/17/2016 11:57 AM, Markus Zoeller wrote:
>
>> Suggested action items:
>> 
>> 1. I close the open wish list items older than 6 months (=138 reports)
>>    and explain in the closing comment that they are outdated and the 
>>    ML should be used for future RFEs (as described above).
>> 2. I post on the openstack-ops ML to explain why we do this
>> 3. I change the Nova bug report template to explain this to avoid more
>>    RFEs in the bug report list in the future.

Please take a look at how Neutron is doing this.  [1] is their list of RFEs. 
[2] is the ML post Kyle provided to document how Ops and other users can submit 
RFEs without needing to know how to submit specs or code OpenStack Neutron. 
I'll let Kyle post on how successful the process is, if he wants to.

The point here is that Neutron uses wishlist combined with [RFE] in the title 
to identify Ops and user requests.  This identifies items as Ops/user asks that 
these communities consider important.  Also, the point is that yes, post the 
RFE on the ops list, but also open the RFE bug and allow comments and voting 
there.  The bug system does a much better job of keeping track of the request 
and Ops votes once it exists.  Plus, once Ops and others know about the 
lightweight process, they'll know where to go looking so they can vote/add 
comments.  Please don't restrict RFEs to the mailing list.  It's a great way 
to lose them.  So my suggestion here is:

1.  Close the wishlist (all of it???) and post in each that if it's a new 
feature the submitter thinks is useful to himself and others, resubmit with 
[RFE] in title, priority wishlist, pointer to the Neutron docs.
2.  Post to openstack-ops and usercommittee why, and ask them to discuss on the 
ML and review all [RFE]s that they submit (before or after, but if the bug 
number is on ML, they can vote on it and add comments, etc.)
3. Change the template to highlight/require the information needed to move 
forward with *any* submitted bug by dev.

>> 4. In 6 months I double-check the rest of the open wishlist bugs
>>    if they have found developers; if not, I'll close them too.
>> 5. Continuously double-check if wishlist bug reports get created
>>
>> Doubts? Thoughts? Concerns? Agreements?
>
>This sounds like a very reasonable plan to me. Thanks for summarizing
>all the concerns and coming up with a pretty balanced plan here. +1.
>
>   -Sean

I’d recommend running it by the -ops* list along with the RFE proposal. I think 
many of the cases
had been raised because people did not have the skills/know-how to proceed.

Engaging with the ops list would also bring in the product working group who 
could potentially
help out on the next step (i.e. identifying the best places to invest for RFEs) 
and the other
topical working groups (e.g. Telco, scientific) who could help with 
prioritisation/triage.

I don’t think that a launchpad account on its own is a big problem. Thus, I 
could also see an approach
where a blueprint was created in launchpad with some reasonably structured set 
of chapters. My
personal experience was that the challenges came more later on, trying to get 
the review matched up with the right bp directories.

There is a big benefit to good visibility in the -ops community for RFEs 
though. Quite often, the
features are implemented but people did not know how to find them in the doc 
(or maybe it's a doc bug).
Equally, the OSops scripts repo can give people workarounds while the requested 
feature is in the
priority queue.

It would be a very interesting topic to kick off in the ops list and then have 
a further review in
Austin to agree how to proceed.

Tim 

You can review how the [RFE] experiment is going in six weeks or more.  We can 
also get an Ops session specifically for reviewing/commenting on RFEs and/or 
hot Nova bugs. I think you'd get good attendance.  I'd be happy to moderate, or 
be the secretary for that session.

I really think if we can get Ops to use the RFE system that Neutron already 
employs, you'll see fewer duplicates, more participation and better feedback 
across all bugs from Ops (and others).  The Ops folks will participate 
enthusiastically as long as they get feedback from devs and/or see progress in 
getting their needs addressed.  If you post the mail and the process (and an 
example of what a good RFE might look like) to the ops list soon, there can be 
a good list of RFEs by the summit to get Ops to discuss and start the 
conversation on just what they need and Nova can provide along those lines in 
Newton, taking into account Nova's other Newton priorities.  Plus, you will 
have a differentiator of what folks need as new features as they are discovered 
during Ops' rollout to the newer releases.

--Rocky


[1] 

Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Jordan Pittier
Hi,

On Thu, Mar 17, 2016 at 2:20 AM, Ken'ichi Ohmichi 
wrote:

> Hi
>
> I have one proposal[1] related to negative tests in Tempest, and am
> hoping for opinions before doing that.
>
> Now Tempest contains negative tests and sometimes patches are being
> posted for adding more negative tests, but I'd like to propose
> removing them from Tempest instead.
>
> Negative tests verify the surfaces of REST APIs for each component without
> any integration between components. Those don't seem like integration
> tests, which are the scope of Tempest.
>
Tempest is not only about integration tests. I mean, we have hundreds of
tests that are not integration tests.


> In addition, we need to spend test operating time on different
> components' gates if we add negative tests to Tempest. For example,
> we are operating negative tests of Keystone and more
> components on the gate of Nova. That is meaningless, so we need to
> avoid adding more negative tests to Tempest now.
>
You have a good point here. But this problem (running tests for project X
on project Y's gate) should be addressed more generally, not only for
negative tests.


>
> If we want to add negative tests, a nice option is to implement
> these tests in each component's repo with the Tempest plugin interface. We
> can avoid operating negative tests on different components' gates, and
> each component team can decide what negative tests are valuable on its
> gate.
>
> In the long term, all negative tests will be migrated into each component's
> repo with the Tempest plugin interface. We will be able to operate only
> valuable negative tests on each gate.
>
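
For reference, a minimal Tempest plugin shim looks roughly like this; the
package and class names are hypothetical, and the class would be advertised
through the tempest.test_plugins entry-point namespace in the project's
setup.cfg:

    import os

    from tempest.test_discover import plugins

    class MyProjectTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # return the directory holding the tests plus the package root
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "myproject_tempest_plugin/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            # register project-specific config options here, if any
            pass

        def get_opt_lists(self):
            return []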
> Any thoughts?
>

I am not sure we should remove negative tests from Tempest. Agreed that we
should reject most new negative tests, but some negative
tests do test useful things imo. Also, I ran all the negative tests today:
"Ran: 452 tests in 144 sec." They just account for 2 minutes and 20 sec
in the gate. That's very little, so removing them won't bring a lot. And the
code for negative tests is quite contained, not a big maintenance burden.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][python-neutronclient] Adding new options to the existing Neutron CLIs

2016-03-19 Thread Richard Theis
OpenStackClient (OSC) doesn't and won't allow "unsupported" CLI options. 
With the transition to OSC, the plan is to deprecate the neutron client 
CLI which will implicitly include the deprecation of such options.

Richard Theis (rtheis)
rth...@us.ibm.com



From:   reedip banerjee 
To: openstack-dev@lists.openstack.org
Date:   03/17/2016 10:09 PM
Subject:[openstack-dev] [Neutron][python-neutronclient] Adding new 
options to the existing Neutron CLIs



Dear All Neutron Developers and Reviewers,

I have a query/concern related to the parsing of options in 
python-neutronclient.
I would like to bring this up, as it "may" also impact the transition of 
the CLIs to the openstack client as well.

NeutronClient is pretty special in its behavior, and has one pretty 
powerful feature of parsing extra options. This feature means that, if 
the CLI does not support an option but the API does, and the user passes a 
value for this option, then the "unsupported" CLI option is parsed and 
forwarded to the Neutron Server for processing.

Example:
Currently "neutron net-create" does not support --router:external. If you 
see the output of "neutron net-create -h" you would not find 
"--router-external". However, this option is supported in the API since 
Juno [2]. So therefore , if a user executes the following CLI 
" neutron net-create TestNetwork --router-external" 

then [1] would be observed as an output.

Now the query/concern comes next
Any option which is not supported by the CLI is open to the above parsing.
Therefore, for net-create and net-update, all the following are possible:

neutron net-create --router:external=True TESTNetwork --(A)
neutron net-create --router:external TESTNetwork  --(B)
neutron net-create TESTNetwork --router:external --(C)
neutron net-create TESTNetwork --router:external=True --(D)
neutron net-create TESTNetwork --router:external True --(E)
However, the user is not aware of the --router:external option because it is 
not visible in the HELP section (this is true for other CLI options as 
well).
In order to demonstrate these options to the user, we have to update the 
add_known_arguments function to display them. And once they are known to 
the CLI, the parsing changes, and some of the options from (A) to (E) may 
not be supported (please see [3] for an ongoing, though now dormant, 
discussion). 
Note that this discussion is not limited only to net-create, but possibly 
other CLIs as well which do not completely expose the options which the 
API can support. I am, however, taking the net-create example as a 
case study.
I would like to know how we can move forward in this regards:
-- Should NeutronClient continue to support all options from (A) to (E), 
but deprecate some of them in Openstack Client?
-- Should we deprecate them in NeutronClient itself, so that the users are 
comfortable with the options when the migration to Openstack Client 
occurs?
-- Any other suggestions
[1]: http://paste.openstack.org/show/491032/
[2]: 
http://docs.openstack.org/juno/install-guide/install/apt/content/neutron_initial-external-network.html
[3]: https://review.openstack.org/#/c/137279/

-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] RBAC: Fix port query and deletion for network owner

2016-03-19 Thread Kevin Benton
Oh, I understand the issue now. I was thrown off because the quota engine
doesn't log anything on that count path.

We do need to figure out why this interferes with the count query.
On Mar 17, 2016 6:38 AM, "Salvatore Orlando"  wrote:

> Indeed the VMware plugins were not using resource tracking (they know that
> my code should not be trusted!)
>
> This however raises another question that we need to answer... it
> is likely that some change broke quota enforcement for plugins which do not
> use usage tracking.
> When I developed reservations & usage tracking, we made an assumption that
> plugins should not be forced to use usage tracking. If they did not, the
> code would fall back to the old logic, which just executed a count query.
>
> If we want to make usage tracking mandatory I'm fine with that, but we
> first need to make sure that every plugin enables it for every resource it
> handles.
>
> Salvatore
>
> On 17 March 2016 at 12:41, Gary Kotton  wrote:
>
>> Thanks!
>>
>> Much appreciated. Will check
>>
>> From: Kevin Benton 
>> Reply-To: OpenStack List 
>> Date: Thursday, March 17, 2016 at 1:09 PM
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Neutron] RBAC: Fix port query and deletion
>> for network owner
>>
>> After reviewing your logs[1], it seems that quotas are not working
>> correctly in your plugin. There are no statements about tenants being
>> marked dirty, etc.
>>
>> I think you are missing the quota registry setup code in your plugin
>> init. Here is the ML2 example:
>> https://github.com/openstack/neutron/blob/44ef44c0ff97d5b166d48d2ef93feafa9a0f7ea6/neutron/plugins/ml2/plugin.py#L167-L173
>> 
>>
>>
>>
>> http://208.91.1.172/logs/neutron/293483/1/check-tempest-vmware-nsx-v3/q-svc.log.txt.gz
>> 
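
For reference, the registry setup being pointed at looks roughly like the
following in a plugin's __init__; this is a sketch modeled on the ML2 code
linked above, and the exact base class and model imports depend on the
plugin's tree:

    from neutron.db import db_base_plugin_v2
    from neutron.db import models_v2
    from neutron.db import securitygroups_db as sg_db
    from neutron.quota import resource_registry

    class MyPlugin(db_base_plugin_v2.NeutronDbPluginV2):

        # declaring tracked resources lets the quota engine mark tenants
        # dirty and recount usage, instead of falling back to plain
        # count queries on every request
        @resource_registry.tracked_resources(
            network=models_v2.Network,
            port=models_v2.Port,
            subnet=models_v2.Subnet,
            security_group=sg_db.SecurityGroup,
            security_group_rule=sg_db.SecurityGroupRule)
        def __init__(self):
            super(MyPlugin, self).__init__()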
>>
>> On Thu, Mar 17, 2016 at 1:30 AM, Gary Kotton  wrote:
>>
>>> Hi,
>>> The review https://review.openstack.org/#/c/255285/ breaks our CI.
>>> Since this has landed we are getting failed tests with the:
>>> "Details: {u'message': u"Quota exceeded for resources: ['port'].",
>>> u'type': u'OverQuota', u'detail': u’’}"
>>> When I revert the patch and run our CI without it the tests pass. Is
>>> anyone else hitting the same or a similar issue?
>>> I think that for Mitaka we need to revert this patch
>>> Thanks
>>> Gary
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Does anybody need OAuth1 API in keystone?

2016-03-19 Thread Alexander Makarov
Hi!

I'm working on unifying all the models that store actor access rights to
the resources [0],
and now I'm wondering if we can just drop the current OAuth1 implementation [1].
It's definitely not perfect and requires considerable effort to bring it into
good shape, so the question is whether the feature is worth the attention.

​[0]​ https://blueprints.launchpad.net/keystone/+spec/unified-delegation
[1] https://github.com/openstack/keystone/tree/master/keystone/oauth1

-- 
Kind Regards,
Alexander Makarov,
Senior Software Developer,

Mirantis, Inc.
35b/3, Vorontsovskaya St., 109147, Moscow, Russia

Tel.: +7 (495) 640-49-04
Tel.: +7 (926) 204-50-60

Skype: MAKAPOB.AJIEKCAHDP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Steve Baker

+1

On 16/03/16 23:57, Sergey Kraynev wrote:

Hi Heaters,

The Mitaka release is close to finished, so it's a good time to review the
results of our work.
One of these results is an analysis of contribution for the last release cycle.
According to the data [1] we have one good candidate for nomination to the
core-reviewer team:
Oleksii Chuprykov.
During this release he showed a significant review metric.
His reviews were valuable and useful. He also has a sufficient level of
expertise in the Heat code.
So I think he is worthy to join the core-reviewers team.

I ask you to vote and decide his destiny.
  +1 - if you agree with his candidature
  -1  - if you disagree with his candidature

[1] http://stackalytics.com/report/contribution/heat-group/120




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)

Aside from the bay certificates/Barbican issue, is anyone aware of any other 
potential problems for high availability, especially for Conductor?

Regards,
Daneyon Hansen

> On Mar 17, 2016, at 12:03 PM, Hongbin Lu  wrote:
> 
> The problem of a missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of credentials in 
> the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>>  wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I’m interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Neutron] Mitaka RC1 available

2016-03-19 Thread Jeremy Stanley
On 2016-03-17 09:44:59 +0530 (+0530), Armando M. wrote:
> Unfortunately, Neutron is also going to need an RC2 due to
> upstream CI issues triggered by infra change [1] that merged right
> about the same time RC1 was being cut.

Do you have any details on the impact that caused for Neutron? I
don't think I heard about it. Was there another ML thread I missed?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-19 Thread Sean Dague
On 03/17/2016 11:57 AM, Markus Zoeller wrote:

> Suggested action items:
> 
> 1. I close the open wish list items older than 6 months (=138 reports)
>    and explain in the closing comment that they are outdated and the 
>    ML should be used for future RFEs (as described above).
> 2. I post on the openstack-ops ML to explain why we do this
> 3. I change the Nova bug report template to explain this to avoid more
>    RFEs in the bug report list in the future.
> 4. In 6 months I double-check the rest of the open wishlist bugs
>    if they have found developers; if not, I'll close them too.
> 5. Continuously double-check if wishlist bug reports get created
>
> Doubts? Thoughts? Concerns? Agreements?

This sounds like a very reasonable plan to me. Thanks for summarizing
all the concerns and coming up with a pretty balanced plan here. +1.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [ironic] [inspector] Rewriting nailgun agent on Python proposal

2016-03-19 Thread Vladimir Kozhukalov
>Well, there's a number of reasons. Ironic is not meant only for an
>"undercloud" (deploying OpenStack on ironic instances). There are both
>public and private cloud deployments of ironic in production today, that
>make bare metal instances available to users of the cloud. Those users
>may not want an agent running inside their instance, and more
>importantly, the operators of those clouds may not want to expose the
>ironic or inspector APIs to their users.

>I'm not sure ironic should say "no, that isn't allowed" but at a minimum
>it would need to be opt-in behavior.

For me it's absolutely clear why the cloud case does not assume running any
kind of agent
inside the user instance. It is clear why the cloud case does not assume
exposing the API
to the user instance. But cloud is not the only case that exists.
Fuel is a deployment tool. The Fuel case is not cloud.  It is 'cattle' (cattle
vs. pets), but
it is not cloud in the sense that instances are 'user instances'.
Fuel's 'user instances' are not even 'user' instances.
Fuel manages the content of instances throughout their whole life cycle.

As you might remember we talked about this about two years ago (when we
tried to contribute lvm and md features to IPA). I don't know why this case
(deployment) was rejected again and again while it's still viable and
widely used.
And I don't know why it could not be implemented as 'opt-in'.
Since then we have invented our own fuel-agent (which supports lvm, md) and
a driver for the Ironic conductor that allows using Ironic with fuel-agent.

>Is the fuel team having a summit session of some sort about integrating
>with ironic better? I'd be happy to come to that if it can be scheduled
>at a time that ironic doesn't have a session. Otherwise maybe we can
>catch up on Friday or something.

>I'm glad to see Fuel wanting to integrate better with Ironic.

We are still quite interested in closer integration with Ironic (we need
power
management features that Ironic provides). We'll be happy to schedule yet
another discussion on closer integration with Ironic.

BTW, about a year ago (in Grenoble) we agreed that it is not even
necessary to merge such custom things into Ironic tree. Happily, Ironic is
smart enough to consume drivers using stevedore. For ironic-inspector
the case is the same. Whether we are going to run it inside a 'user instance'
or inside a ramdisk does not affect ironic-inspector itself. If the Ironic
team is
open to merging "non-cloud" features (of course 'opt-in') we'll be happy
to contribute.
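
To illustrate the stevedore point, here is a sketch of how an out-of-tree
driver gets consumed; the entry-point name and package are hypothetical,
while stevedore's DriverManager is the real loading mechanism:

    from stevedore import driver

    # the out-of-tree package would publish the driver in its setup.cfg:
    #   [entry_points]
    #   ironic.drivers =
    #       fuel_agent = fuel_agent_pkg.driver:FuelAgentDriver
    mgr = driver.DriverManager(
        namespace='ironic.drivers',  # namespace consumed by the conductor
        name='fuel_agent',           # hypothetical entry-point name
        invoke_on_load=True)
    deploy_driver = mgr.driver

Nothing in the consuming project has to be patched; installing the package
that ships the entry point is enough for it to be discoverable.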

Vladimir Kozhukalov

On Fri, Mar 18, 2016 at 6:03 PM, Jim Rollenhagen 
wrote:

> On Fri, Mar 18, 2016 at 05:26:13PM +0300, Evgeniy L wrote:
> > On Thu, Mar 17, 2016 at 3:16 PM, Dmitry Tantsur 
> wrote:
> >
> > > On 03/16/2016 01:39 PM, Evgeniy L wrote:
> > >
> > >> Hi Dmitry,
> > >>
> > >> I can try to provide you description on what current Nailgun agent is,
> > >> and what are potential requirements we may need from HW discovery
> system.
> > >>
> > >> Nailgun agent is a one-file Ruby script [0] which is periodically run
> > >> under cron. It collects information about HW using ohai [1], plus it
> > >> does custom parsing, filtration, and retrieval of HW information. After the
> > >> information is collected, it is sent to Nailgun; that is how a node gets
> > >> discovered in Fuel.
> > >>
> > >
> > > Quick clarification: does it run on user instances? or does it run on
> > > hardware while it's still not deployed to? The former is something that
> > > Ironic tries not to do. There is an interest in the latter.
> >
> >
> > Both, on user instances (with deployed OpenStack) and on instances which
> > are not deployed and in bootstrap.
> > What are the reasons Ironic tries not to do that (running HW discovery on
> > deployed node)?
>
> Well, there's a number of reasons. Ironic is not meant only for an
> "undercloud" (deploying OpenStack on ironic instances). There are both
> public and private cloud deployments of ironic in production today, that
> make bare metal instances available to users of the cloud. Those users
> may not want an agent running inside their instance, and more
> importantly, the operators of those clouds may not want to expose the
> ironic or inspector APIs to their users.
>
> I'm not sure ironic should say "no, that isn't allowed" but at a minimum
> it would need to be opt-in behavior.
>
> >
> >
> > >
> > >
> > >> To summarise the entire process:
> > >> 1. After the Fuel master node is installed, the user restarts the nodes
> > >> and they get booted via PXE with a bootstrap image.
> > >> 2. Inside the bootstrap image the Nailgun agent is configured and
> > >> installed.
> > >> 3. Cron runs the Nailgun agent.
> > >> 4. Information is collected by the Nailgun agent.
> > >> 5. Information is sent to Nailgun.
> > >> 6. Nailgun creates a new node, for which the user, using the UI, can define
> > >> partitioning schema and networks allocation.
> > >> 7. After that, provisioning/deployment can be run.
> > >>
> > >
> > > So it looks 

Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Erno Kuvaja
On Wed, Mar 16, 2016 at 6:25 AM, Nikhil Komawar 
wrote:

> Hello everyone,
>
> tl;dr;
> I'm writing to request some feedback on whether the cross-project Quotas
> work should move ahead as a service or a library, or, going to a far
> extent, should this even be in a common repository, or would
> projects prefer to implement everything from scratch in-tree? Should we
> limit it to a guideline spec?
>
> But before I ask anymore, I want to specifically thank Doug Hellmann,
> Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
> Laski for the early feedback that has helped provide some good shape to
> the already discussions.
>
> Some more context on the happenings:
> We've got this in-progress spec [1] up for providing context and a platform
> for such discussions. I will rephrase it to say that we plan to
> introduce a new 'entity' in the OpenStack realm that may be a library or
> a service. Both concepts have trade-offs and the WG wanted to get more
> ideas around such trade-offs from the larger community.
>
Would you mind expanding on this "we" here?


> Service:
> This would entail creating a new project and will introduce managing
> tables for quotas for all the projects that will use this service. For
> example, if Nova, Glance, and Cinder decide to use it, this 'entity' will
> be responsible for handling the enforcement, management and DB upgrades
> of the quotas logic for all resources for all three projects. This means
> less pain for projects during the implementation and maintenance phase,
> holistic view of the cloud and almost a guarantee of best practices
> followed (no clutter or guessing around what different projects are
> doing). However, it results in a big dependency; all projects rely on
> this one service for correct enforcement, avoiding races (if projects do
> not implement some of that in-tree) and DB
> migrations/upgrades. It will be at the core of the cloud and prone to
> attack vectors, bugs and margin of error.
>
I'd prefer not, as lots of concerns have been raised already regarding
latency, an extra API, etc.
For unifying the user interface, a common API might be the desired
option, but its own service, not so much.


> Library:
> A library could be thought of in two different ways:
> 1) Something that does not deal with backend DB models, and provides a
> generic enforcement and management engine. To think ahead a little bit,
> it may be an ABC or even a few standard implementation vectors that can
> be imported into a project space. The project will have its own API for
> quotas, and the drivers will enforce different types of logic: say, a
> flat quota driver or a hierarchical quota driver with custom/project-
> specific logic in the project tree. The project maintains its own DB and
>

A partially decent idea; it just annoys me that this is climbing the tree
arse first. Individual APIs are perhaps the worst option with common
code for the quotas, as each project has its own requirements where the
API might be generalized.
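
To make option 1 concrete, here is a minimal sketch of what such an engine
could look like; all names are illustrative and this is not an existing
oslo API:

    import abc


    class OverQuota(Exception):
        pass


    class QuotaDriver(abc.ABC):
        """Generic enforcement engine; storage is left to the project."""

        @abc.abstractmethod
        def get_limit(self, context, resource, project_id):
            """Return the effective limit for a resource, or None."""

        @abc.abstractmethod
        def enforce(self, context, resource, project_id, delta):
            """Raise OverQuota if usage + delta would exceed the limit."""


    class FlatQuotaDriver(QuotaDriver):
        """Flat (non-hierarchical) enforcement over injected stores."""

        def __init__(self, limit_store, usage_store):
            self._limits = limit_store   # e.g. backed by the project's DB
            self._usage = usage_store

        def get_limit(self, context, resource, project_id):
            return self._limits.get((resource, project_id))

        def enforce(self, context, resource, project_id, delta):
            limit = self.get_limit(context, resource, project_id)
            used = self._usage.get((resource, project_id), 0)
            if limit is not None and used + delta > limit:
                raise OverQuota(resource)

A hierarchical driver would then only replace get_limit/enforce, while each
project keeps its own API and DB migrations, as the option describes.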

2) A library that has models for DB tables that the project can import
> from. Thus the individual projects will have a handy outline of what the
> tables should look like, implicitly considering the right table values,
> arguments, etc. The project has its own API and implements drivers in-tree
> by importing this semi-defined structure. The project maintains its own
> upgrades but will be somewhat influenced by the common repo.
>
This is really not benefiting anyone. Again, each project has its own
requirements for quotas, while the user experience is the one thing we
should try to unify. I have real difficulty seeing Zaqar, Nova and Glance
fitting into a single quota model, even though the API interacting with
them could be similar.

> A library would keep things simple for the common repository, and sourcing
> of code can be done asynchronously as per project plans and priorities
> without having a strong dependency. On the other hand, there is a
> likelihood of re-implementing similar patterns in different projects
> with individual projects taking responsibility to keep things up to
> date. Attack vectors, bugs and margin of error are project responsibilities
>
This is the problem I see with the oslo approach currently. Originally
intended as a place to collect the common code from projects, it is turning
into an "enforcing" entity for code that some people think should be common
but that does not fit most.


> The third option is to avoid all of this and simply give guidelines, best
> practices, and the right packages to each project to implement quotas in-house.
> Somewhat undesirable at this point, I'd say. But we're all ears!
>

This is probably the best solution, minus the "best practices"; again, one model
does not suit all, but common concepts, especially on the interacting API
side, are desired. _If_ it is proven that most of the projects would fit the
same suite, a common lib could be built out of the outcome.

>
> Thank you for reading and I 

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Fox, Kevin M
+1. We should be encouraging a common way of solving these issues across all 
the openstack projects, and security is a really important thing. Spreading it 
across lots of projects causes more bugs, and security-related bugs cause 
security incidents. No one wants those.

I'd also like to know why, if an old cloud is willing to deploy a new magnum, 
it's unreasonable to deploy a new barbican at the same time.

If it's a technical reason, let's fix the issue. If it's something else, let's 
discuss it. If it's just an operator not wanting to install 2 things instead of 
just one, I think it's a totally understandable, but unreasonable request.

Thanks,
Kevin

From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
Sent: Friday, March 18, 2016 6:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of
Barbican.  As the PTL for Barbican, it's frustrating to me to constantly
hear from other projects that securing their sensitive data is a
requirement but then turn around and say that deploying Barbican is a
problem.

I guess I'm having a hard time understanding the operator persona that
is willing to deploy new services with security features but unwilling
to also deploy the service that is meant to secure sensitive data across
all of OpenStack.

I understand one barrier to entry for Barbican is the high cost of
Hardware Security Modules, which we recommend as the best option for the
Storage and Crypto backends for Barbican.  But there are also other
options for securing Barbican using open source software like DogTag or
SoftHSM.

I also expect Barbican adoption to increase in the future, and I was
hoping that Magnum would help drive that adoption.  There are also other
projects that are actively developing security features like Swift
Encryption, and DNSSEC support in Designate.  Eventually these features
will also require Barbican, so I agree with Adrian that we as a
community should be encouraging deployers to adopt the best security
practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's
feedback on that.  It definitely sounds to me like you're trying to put
a square peg in a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
>
>
>
> I think the Keystone approach will work. For others, please speak up if
> it doesn’t work for you.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
>
>
>
> Hongbin,
>
>
>
> I tweaked the blueprint in accordance with this approach, and approved
> it for Newton:
>
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>
>
>
> I think this is something we can all agree on as a middle ground, If
> not, I’m open to revisiting the discussion.
>
>
>
> Thanks,
>
>
>
> Adrian
>
>
>
> On Mar 17, 2016, at 6:13 PM, Adrian Otto  > wrote:
>
>
>
> Hongbin,
>
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
>
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
>
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the certificate back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
>
> Adrian
>
>
> On Mar 17, 2016, at 4:55 PM, Adrian Otto  > wrote:
>
> Hongbin,
>
>
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu  > wrote:
>
> Adrian,
>
> I think we need a broader set of inputs in this matter, so I moved
> the discussion from whiteboard back to here. Please check my replies
> inline.
>
>
> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that
>  

Re: [openstack-dev] [Neutron] RBAC: Fix port query and deletion for network owner

2016-03-19 Thread Armando M.
On 17 March 2016 at 14:00, Gary Kotton  wrote:

> Hi,
> The review https://review.openstack.org/#/c/255285/ breaks our CI. Since
> this has landed we are getting failed tests with the:
> "Details: {u'message': u"Quota exceeded for resources: ['port'].",
> u'type': u'OverQuota', u'detail': u’’}"
> When I revert the patch and run our CI without it the tests pass. Is
> anyone else hitting the same or a similar issue?
> I think that for Mitaka we need to revert this patch
>

We will not revert the patch for Mitaka, unless we have clear evidence of
where the issue lies, and why it's having a limited effect rather than a
widespread one.

It's possible that we can come up with a fix rather than a revert, if you
share more details. Can you document the issue in a bug report and tag it
as mitaka-rc-potential?

Thanks,
Armando


> Thanks
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-19 Thread Joshua Harlow

On 03/18/2016 02:34 AM, Thierry Carrez wrote:

Thomas Goirand wrote:

On 03/14/2016 03:28 PM, Davanum Srinivas wrote:

Ian,

+1 to get rid of that dependency if possible.


+1 for any action aiming toward removing *any* dependency.

We don't have enough of such actions, and we have way too many
dependencies, with many duplicate functionalities too. Just to name a
few:
- pecan vs falcon
- oslo.concurrency vs lockfile
- nose vs testr vs pytest
- pymemcache vs memcached
- you-name-it...

And this isn't even motivated by the fact that I maintain lots of packages; I
don't maintain httplib2 for example, so I'm not impacted much,
especially by this kind of package that doesn't upgrade often.


Converging dependencies is a bit of a thankless debt reduction job: you
have to push changes in a lot of projects, and those are rarely seen as
a priority. It's a bit like pushing for Python 3 compatibility... you
need to find someone who cares enough about it to persist in pushing those
changes, otherwise it just doesn't happen.

We could have a squad of "convergers" that would define a very small
list of targets every cycle and push that through.



+1 for the above. Have a few people that would come and help here on 
these kinds of tasks and overtime hopefully the list of targets shrinks 
(although likely never to zero).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] revert new gerrit

2016-03-19 Thread Jeremy Stanley
On 2016-03-18 13:34:48 -0400 (-0400), Anita Kuno wrote:
> On 03/18/2016 01:16 PM, Boris Pavlovic wrote:
> > Hi everybody,
> > 
> > What if we just created a new project for an alternative Gerrit WebUI and
> > used it?
> > I don't think that with the current set of web frameworks it would be too hard.
> > 
> > Best regards,
> > Boris Pavlovic
> 
> It's called vinz: http://git.openstack.org/cgit/openstack-infra/vinz/
> 
> Patches welcome.

Yes, we had a session on it several summits ago, a group of
contributors said they were going to work on developing it, pushed
up a skeleton repo, and then we never heard back from them after
that. Unfortunate.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] becoming third party CI

2016-03-19 Thread Paul Belanger
On Thu, Mar 17, 2016 at 11:59:22AM -0500, Ben Nemec wrote:
> On 03/10/2016 05:24 PM, Jeremy Stanley wrote:
> > On 2016-03-10 16:09:44 -0500 (-0500), Dan Prince wrote:
> >> This seems to be the week people want to pile it on TripleO. Talking
> >> about upstream is great but I suppose I'd rather debate major changes
> >> after we branch Mitaka. :/
> > [...]
> > 
> > I didn't mean to pile on TripleO, nor did I intend to imply this was
> > something which should happen ASAP (or even necessarily at all), but
> > I do want to better understand what actual benefit is currently
> > derived from this implementation vs. a more typical third-party CI
> > (which lots of projects are doing when they find their testing needs
> > are not met by the constraints of our generic test infrastructure).
> > 
> >> With regards to Jenkins restarts I think it is understood that our job
> >> times are long. How often do you find infra needs to restart Jenkins?
> > 
> > We're restarting all 8 of our production Jenkins masters weekly at a
> > minimum, but generally more often when things are busy (2-3 times a
> > week). For many months we've been struggling with a thread leak for
> > which their development team has not seen as a priority to even
> > triage our bug report effectively. At this point I think we've
> > mostly given up on expecting it to be solved by anything other than
> > our upcoming migration off of Jenkins, but that's another topic
> > altogether.
> > 
> >> And regardless of that what if we just said we didn't mind the
> >> destructiveness of losing a few jobs now and then (until our job
> >> times are under the line... say 1.5 hours or so). To be clear I'd
> >> be fine with infra pulling the rug on running jobs if this is the
> >> root cause of the long running jobs in TripleO.
> > 
> > For manual Jenkins restarts this is probably doable (if additional
> > hassle), but I don't know whether that's something we can easily
> > shoehorn into our orchestrated/automated restarts.
> > 
> >> I think the "benefits are minimal" is a bit of an overstatement. The
> >> initial vision for TripleO CI stands and I would still like to see
> >> individual projects entertain the option to use us in their gates.
> > [...]
> > 
> > This is what I'd like to delve deeper into. The current
> > implementation isn't providing you with any mechanism to prevent
> > changes which fail jobs running in the tripleo-test cloud from
> > merging to your repos, is it? You're still having to manually
> > inspect the job results posted by it? How is that particularly
> > different from relying on third-party CI integration?
> > 
> > As for other projects making use of the same jobs, right now the
> > only convenience I'm aware of is that they can add check-tripleo
> > pipeline jobs in our Zuul layout file instead of having you add it
> > to yours (which could itself reside in a Git repo under your
> > control, giving you even more flexibility over those choices). In
> > fact, with a third-party CI using its own separate Gerrit account,
> > you would be able to leave clear -1/+1 votes on check results which
> > is not possible with the present solution.
> > 
> > So anyway, I'm not saying that I definitely believe the third-party
> > CI route will be better for TripleO, but I'm not (yet) clear on what
> > tangible benefit you're receiving now that you lose by switching to
> > that model.
> > 
> 
> FWIW, I think third-party CI probably makes sense for TripleO.
> Practically speaking we are third-party CI right now - we run our own
> independent hardware infrastructure, we aren't multi-region, and we
> can't leave a vote on changes.  Since the first two aren't likely to
> change any time soon (although I believe it's still a long-term goal to
> get to a place where we can run in regular infra and just contribute our
> existing CI hardware to the general infra pool, but that's still a long
> way off), and moving to actual third-party CI would get us the ability
> to vote, I think it's worth pursuing.
> 
> As an added bit of fun, we have a forced move of our CI hardware coming
> up in the relatively near future, and if we don't want to have multiple
> days (and possibly more, depending on how the move goes) of TripleO CI
> outage we're probably going to need to stand up a new environment in
> parallel anyway.  If we're doing that it might make sense to try hooking
> it in through the third-party infra instead of the way we do it today.
> Hopefully that would allow us to work out the kinks before the old
> environment goes away.
> 
> Anyway, I'm sure we'll need a bunch more discussion about this, but I
> wanted to chime in with my two cents.
> 
Do you have any ETA on when your outage would be?  Is it before or after the
summit in Austin?

Personally, I'm going to attend a few TripleO design sessions wherever
possible in Austin. It would be great to maybe have a fishbowl session about it.

> -Ben
> 
> 

Re: [openstack-dev] [all] Maintaining httplib2 python library

2016-03-19 Thread Cory Benfield

> On 18 Mar 2016, at 13:57, Brian Haley  wrote:
> 
> On 03/17/2016 06:04 PM, Doug Wiegley wrote:
 Here is the non comprehensive list of usages based on what trees I
 happen to have checked out (which is quite a few, but not all of
 OpenStack for sure).
 
 I think before deciding to take over ownership of an upstream lib (which
 is a large commitment over space and time), we should figure out the
 migration cost. All the uses in Tempest come from usage in Glance IIRC
 (and dealing with chunked encoding).
 
 Neutron seems to use it for a couple of proxies, but that seems like
 requests/urllib3 might be sufficient.
>>> 
>>> The Neutron team should talk to Cory Benfield (CC'd) and myself more about 
>>> this if they run into problems. requests and urllib3 are a little limited 
>>> with respect to proxies due to limitations in httplib itself.
>>> 
>>> Both of us might be able to dedicate time during the day to fix this if 
>>> Neutron/OpenStack have specific requirements that requests is not currently 
>>> capable of supporting.
>> 
>> Looks like neutron is using it to do HTTP requests via unix domain sockets. 
>> Unless I’m missing something, requests doesn’t support that directly. There 
>> are a couple of other libs that do, or we could monkey patch the socket. Or 
>> modify the agents to use localhost.
> 
> We have to use Unix domain sockets in the metadata proxy because it's running 
> in a namespace, so can't use localhost to talk to the agent.  But we could 
> use some other library of course.
> 

Getting requests to talk over a Unix domain socket is not particularly tricky, 
and there are third-party libraries that hook into requests appropriately to 
make that happen. For example, the requests-unixsocket module exists and can 
do the appropriate things.
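
A minimal usage sketch; the socket path and resource below are illustrative:

    import requests_unixsocket

    session = requests_unixsocket.Session()
    # the socket path is percent-encoded into the host part of the URL
    r = session.get('http+unix://%2Fvar%2Frun%2Fexample.sock/status')
    print(r.status_code)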

Cory



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [jacket] Introduction to jacket, a new project

2016-03-19 Thread Kevin.ZhangSen
Hi Phuong,

I will mail you when the works starts. Thanks. :)

Best Regards,
Kevin (Sen Zhang)







On 2016-03-18 11:07:13, "phuon...@vn.fujitsu.com" wrote:


Hi Kevin,

 

I am interested in Jacket too, so I would like to contribute once the work 
starts.

 

Thanks,

Phuong.

 

From: Janki Chhatbar [mailto:jankihchhat...@gmail.com]
Sent: Wednesday, March 16, 2016 8:21 PM
To: zs
Subject: Re: [openstack-dev] [jacket] Introduction to jacket, a new project

 

Hi Kevin

 

I read the wiki and quite liked it. Good going. I would like to contribute to 
it once the work starts.

 Do let me know about it.





Thanks

Janki

 

Sent from my BlackBerry 10 smartphone.


From: zs

Sent: Wednesday, 16 March 2016 18:30

To: OpenStack Development Mailing List (not for usage questions)

Reply To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [jacket] Introduction to jacket, a new project


 

Hi Gordon,

Thank you for your suggestion.

I think jacket is different from tricircle: tricircle focuses on 
OpenStack deployment across multiple sites, while jacket focuses on how to manage 
different clouds just like one cloud.  There are some differences:
1. Account management and API model: Tricircle faces multiple OpenStack 
instances which can share one Keystone and have the same API model, but jacket 
will face different clouds which have their respective services and different 
API models. For example, VMware vCloud Director has no volume management like 
OpenStack and AWS, so jacket will offer a fake volume management for this kind of 
cloud.
2. Image management: One image can only run in one cloud; jacket needs to consider 
how to solve this problem.
3. Flavor management: Different clouds have different flavors which cannot be 
operated by users. Jacket will face this problem, but this problem does not 
exist in tricircle.
4. Legacy resources adoption: Because of the different API models, it will be a 
huge challenge for jacket.


I think it may be a good solution for jacket to work on unifying the API model 
for different clouds, and then to use tricircle to offer the management of a 
large number of VMs.


Best Regards,
Kevin (Sen Zhang)

 


At 2016-03-16 19:51:33, "gordon chung"  wrote:
> 
> 
>On 16/03/2016 4:03 AM, zs wrote:
>> Hi all,
>> 
>> There is a new project "jacket" to manage multiply clouds. The jacket
>> wiki is: https://wiki.openstack.org/wiki/Jacket
>>   Please review it and give your comments. Thanks.
>> 
>> Best Regards,
>> 
>> Kevin (Sen Zhang)
>> 
>> 
> 
>i don't know exact details of either project, but i suggest you 
>collaborate with tricircle project[1] because it seems you are 
>addressing the same user story (and in a very similar fashion). not sure 
>if it's a user story for OpenStack itself, but no point duplicating efforts.
> 
>[1] https://wiki.openstack.org/wiki/Tricircle
> 
>cheers,
> 
>-- 
>gord
> 
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Push Type Driver implementation

2016-03-19 Thread Masahito MUROI
Hi folks,

This [1] is the driver I mentioned at the meeting. It is used for OPNFV
Doctor [2]. I plan to push it to master in the Newton release, since the
feature freeze for Mitaka has passed and the schema of its translator is
still under discussion.

If it's worth pushing it into the current release to test the push driver, I
don't mind doing that.

[1]
https://github.com/muroi/congress/blob/doctor-poc/congress/datasources/doctor_driver.py
[2] https://wiki.opnfv.org/doctor

-- 
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-03-16 06:09:47 -0400:
> On 03/16/2016 05:46 AM, Duncan Thomas wrote:
> > On 16 March 2016 at 09:15, Tim Bell wrote:
> > 
> > Then, there were major reservations from the PTLs at the impacts in
> > terms of
> > latency, ability to reconcile and loss of control (transactions are
> > difficult, transactions
> > across services more so).
> > 
> > 
> > Not just PTLs :-)
> >  
> > 
> > 
> > I would favor a library, at least initially. If we cannot agree on a
> > library, it
> > is unlikely that we can get a service adopted (even if it is desirable).
> > 
> > A library (along the lines of 1 or 2 above) would allow consistent
> > implementation
> > of nested quotas and user quotas. Nested quotas is currently only
> > implemented
> > in Cinder and user quota implementations vary between projects which is
> > confusing.
> > 
> > 
> > It is worth noting that the cinder implementation has been found rather
> > lacking in correctness, atomicity requirements and testing - I wouldn't
> > suggest taking it as anything other than a PoC to be honest. Certainly
> > it should not be cargo-culted into another project in its present state.
> 
> I think a library approach should probably start from scratch, with
> lessons learned from Cinder, but not really copied code, for just that
> reason.
> 
> This is hard code to get right, which is why it's various degrees of
> wrong in every project in OpenStack.
> 
> A common library with it's own db tables and migration train is the only
> way I can imagine this every getting accomplished given the atomicity
> and two phase commit constraints of getting quota on long lived, async
> created resources, with sub resources that also have quota. Definitely
> think that's the nearest term path to victory.

When we talked about this in Paris (I think, all these hotel basements
are starting to look the same), the main issue with the library was how
to tie in db table management with the existing tables owned by the app.
It's not impossible to solve, but we need some thought to happen
around the tools for that. Maybe some of the lessons of incremental
on-demand table updates in nova will help there.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Douglas Mendizábal
Hongbin,

I'm looking forward to discussing this further at the Austin summit.
I'm very interested in learning more about the negative feedback you're
getting regarding Barbican, so that our team can help alleviate those
concerns where possible.

Thanks,
- Douglas

On 3/18/16 10:18 AM, Hongbin Lu wrote:
> Douglas,
> 
> I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
> Barbican). What I am opposed to is a Barbican lock-in, which already has a 
> negative impact on Magnum adoption based on our feedback. I also want to see 
> an increase in Barbican adoption in the future, with all our users having 
> Barbican installed in their clouds. If that happens, I have no problem 
> having a hard dependency on Barbican.
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
> Sent: March-18-16 9:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hongbin,
> 
> I think Adrian makes some excellent points regarding the adoption of 
> Barbican.  As the PTL for Barbican, it's frustrating to me to constantly hear 
> from other projects that securing their sensitive data is a requirement but 
> then turn around and say that deploying Barbican is a problem.
> 
> I guess I'm having a hard time understanding the operator persona that is 
> willing to deploy new services with security features but unwilling to also 
> deploy the service that is meant to secure sensitive data across all of 
> OpenStack.
> 
> I understand one barrier to entry for Barbican is the high cost of Hardware 
> Security Modules, which we recommend as the best option for the Storage and 
> Crypto backends for Barbican.  But there are also other options for securing 
> Barbican using open source software like DogTag or SoftHSM.
> 
> I also expect Barbican adoption to increase in the future, and I was hoping 
> that Magnum would help drive that adoption.  There are also other projects 
> that are actively developing security features like Swift Encryption, and 
> DNSSEC support in Designate.  Eventually these features will also require 
> Barbican, so I agree with Adrian that we as a community should be encouraging 
> deployers to adopt the best security practices.
> 
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
> 
> - Doug
> 
> On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>  
>>
>> I think the Keystone approach will work. For others, please speak up 
>> if it doesn't work for you.
>>
>>  
>>
>> Best regards,
>>
>> Hongbin
>>
>>  
>>
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: March-17-16 9:28 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] High Availability
>>
>>  
>>
>> Hongbin,
>>
>>  
>>
>> I tweaked the blueprint in accordance with this approach, and approved 
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>>  
>>
>> I think this is something we can all agree on as a middle ground, If 
>> not, I'm open to revisiting the discussion.
>>
>>  
>>
>> Thanks,
>>
>>  
>>
>> Adrian
>>
>>  
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto wrote:
>>
>>  
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that
>> have a good reason not to use Barbican, is to use Keystone.
>>
>> Keystone credentials store:
>> 
>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
>>
>> The contents are stored in plain text in the Keystone DB, so we
>> would want to generate an encryption key per bay, encrypt the
>> certificate and store it in keystone. We would then use the same key
>> to decrypt it upon reading the key back. This might be an acceptable
>> middle ground for clouds that will not or can not run Barbican. This
>> should work for any OpenStack cloud since Grizzly. The total amount
>> of code in Magnum would be small, as the API already exists. We
>> would need a library function to encrypt and decrypt the data, and
>> ideally a way to select different encryption algorithms in case one
>> is judged weak at some point in the future, justifying the use of an
>> alternate.
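
A rough sketch of the approach described above, assuming python-keystoneclient
and the cryptography library (the credential type and helper names are
illustrative, not actual Magnum code):

    from cryptography.fernet import Fernet

    def store_bay_cert(ks, user_id, project_id, cert_pem):
        # Per-bay key: Magnum keeps the key, Keystone only ever sees
        # the ciphertext. cert_pem is bytes.
        key = Fernet.generate_key()
        ciphertext = Fernet(key).encrypt(cert_pem)
        cred = ks.credentials.create(user=user_id, project=project_id,
                                     type='magnum_certificate',
                                     blob=ciphertext.decode())
        return key, cred.id

    def load_bay_cert(ks, credential_id, key):
        blob = ks.credentials.get(credential_id).blob
        return Fernet(key).decrypt(blob.encode())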
>>
>> Adrian
>>
>>
>> On Mar 17, 2016, at 4:55 PM, Adrian Otto wrote:
>>
>> Hongbin,
>>
>>
>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu wrote:
>>
>> Adrian,
>>
>> I think we need a broader set of inputs in this matter, so I moved
>> the discussion from whiteboard back to 

Re: [openstack-dev] [nova] Reminder to move implemented nova specs from mitaka

2016-03-19 Thread Michael Still
I normally do this in one big batch, but haven't had a chance yet. I'll do
that later this week.

Michael
On 17 Mar 2016 7:50 AM, "Matt Riedemann"  wrote:

> Specs are proposed to the 'approved' subdirectory and when they are
> completely implemented in launchpad (the blueprint status is
> 'Implemented'), we should move the spec from the 'approved' subdirectory to
> the 'implemented' subdirectory in the nova-specs repo.
>
> For example:
>
> https://review.openstack.org/#/c/248142/
>
> These are the mitaka series blueprints from launchpad:
>
> https://blueprints.launchpad.net/nova/mitaka
>
> If anyone is really daring they could go through and move all of the
> implemented ones in a single change.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposing Tony Breeds for stable-maint-core

2016-03-19 Thread Anita Kuno
On 03/18/2016 04:11 PM, Matt Riedemann wrote:
> I'd like to propose tonyb for stable-maint-core. Tony is pretty much my
> day to day guy on stable, he's generally in every stable team meeting
> (which is not attended well so I appreciate it), and he's as proactive
> as ever on staying on top of gate issues when they come up, so he's well
> deserving of it in my mind.
> 
> Here are review stats for stable for the last 90 days (as defined in the
> reviewstats repo):
> 
> http://paste.openstack.org/show/491155/
> 
> Tony is also the latest nova-stable-maint core and he's done a great job
> there (as expected) and is very active, which is again much appreciated.
> 
> Please respond with ack/nack.
> 
My vote probably doesn't count, but I can't pass up the opportunity to
say it is nice to see Tony's hard work being acknowledged and appreciated.

I appreciate it.

Thanks Matt,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-19 Thread Kirill Zaitsev
Is it too late to ask for a half-day Contributors Meetup for murano?

We had an extremely successful contributors meetup in Tokyo, and I guess it is
an error on our side that we did not request one for Austin.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 16 March 2016 at 12:57:30, Thierry Carrez (thie...@openstack.org) wrote:

Hi PTLs,  

Here is the proposed slot allocation for project teams at the Newton  
Design Summit in Austin. This is based on the requests the mitaka PTLs  
have made, space availability and project activity & collaboration needs.  

| fb: fishbowl 40-min slots  
| wr: workroom 40-min slots  
| cm: Friday contributors meetup  
| | full: full day, half: only morning or only afternoon  

Neutron: 9fb, cm:full  
Nova: 18fb, cm:full  
Fuel: 3fb, 11wr, cm:full  
Horizon: 1fb, 7wr, cm:half  
Cinder: 4fb, 5wr, cm:full  
Keystone: 5fb, 8wr; cm:full  
Ironic: 5fb, 5wr, cm:half  
Heat: 4fb, 8wr, cm:half  
TripleO: 2fb, 3wr, cm:half  
Kolla: 4fb, 10wr, cm:full  
Oslo: 3fb, 5wr  
Ceilometer: 2fb, 7wr, cm:half  
Manila: 2fb, 4wr, cm:half  
Murano: 1fb, 2wr  
Rally: 2fb, 2wr  
Sahara: 2fb, 6wr, cm:half  
Glance: 3fb, 5wr, cm:full  
Magnum: 5fb, 5wr, cm:full  
Swift: 2fb, 12wr, cm:full  
OpenStackClient: 1fb, 1wr, cm:half  
Senlin: 1fb, 5wr, cm:half  
Monasca: 5wr  
Trove: 3fb, 6wr, cm:half  
Dragonflow: 1fb, 4wr, cm:half*  
Mistral: 1fb, 3wr  
Zaqar: 1fb, 3wr, cm:half  
Barbican: 2fb, 6wr, cm:half  
Designate: 1fb, 5wr, cm:half  
Astara: 1fb, cm:full  
Freezer: 1fb, 2wr, cm:half  
Congress: 1fb, 3wr  
Tacker: 1fb, 3wr, cm:half  
Kuryr: 1fb, 5wr, cm:half*  
Searchlight: 1fb, 2wr  
Cue: no space request received  
Solum: 1fb, 1wr  
Winstackers: 1wr  
CloudKitty: 1fb  
EC2API: 2wr  

Infrastructure: 3fb, 4wr, cm:day**  
Documentation: 4fb, 4wr, cm:half  
Quality Assurance: 4fb, 4wr, cm:day**  
PuppetOpenStack: 2fb, 3wr, cm:half  
OpenStackAnsible: 1fb, 8wr, cm:half  
Release mgmt: 1fb, cm:half  
Security: 3fb, 2wr, cm:half  
ChefOpenstack: 1fb, 2wr  
Stable maint: 1fb  
I18n: cm:half  
Refstack: 3wr  
OpenStack UX: 2wr  
RpmPackaging: 1fb***, 1wr  
App catalog: 1fb, 2wr  
Packaging-deb: 1fb***, 1wr  

*: shared meetup between Kuryr and Dragonflow  
**: shared meetup between Infra and QA  
***: shared fishbowl between RPM packaging and DEB packaging, for  
collecting wider packaging feedback  

We'll start working on laying out those sessions over the available  
rooms and time slots. Most of you have communicated constraints together  
with their room requests (like Manila not wanting overlap with Cinder  
sessions), and we'll try to accommodate them the best we can. If you  
have extra constraints you haven't communicated yet, please reply to me  
ASAP.  

Now is time to think about the content you'd like to cover during those  
sessions and fire up those newton etherpads :)  

Cheers,  

--  
Thierry Carrez (ttx)  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Reminder to move implemented nova specs from mitaka

2016-03-19 Thread Markus Zoeller
Matt Riedemann wrote on 03/16/2016 09:49:06 PM:

> From: Matt Riedemann 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 03/16/2016 09:50 PM
> Subject: [openstack-dev] [nova] Reminder to move implemented nova 
> specs from mitaka
> 
> Specs are proposed to the 'approved' subdirectory and when they are 
> completely implemented in launchpad (the blueprint status is 
> 'Implemented'), we should move the spec from the 'approved' subdirectory 

> to the 'implemented' subdirectory in the nova-specs repo.
> 
> For example:
> 
> https://review.openstack.org/#/c/248142/
> 
> These are the mitaka series blueprints from launchpad:
> 
> https://blueprints.launchpad.net/nova/mitaka
> 
> If anyone is really daring they could go through and move all of the 
> implemented ones in a single change.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 

Is there a best practice for handling a partially implemented blueprint (with
a spec file)? For example, [1] needs additional effort during Newton to be
finished.

References:
[1] https://blueprints.launchpad.net/nova/+spec/centralize-config-options

Regards, Markus Zoeller (markus_z)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] purplerbot irc bot for logs and transclusion

2016-03-19 Thread Chris Dent

On Wed, 16 Mar 2016, Jeremy Stanley wrote:


On 2016-03-16 13:55:56 + (+), Chris Dent wrote:


I built an IRC bot

https://anticdent.org/purple-irc-bot.html

[...]

Oof, a gerritbot derivative... at least it's not based on
supybot/twisted so doesn't suffer the IPv6+SSL issue we have on
meetbot and statusbot. Still, I've been holding out hope someone
might start work on a unified replacement for all of those in a more
modern codebase like errbot.


I just started from the simple bot in the python irc package, and
then went to gerritbot when I got a bit stuck. Not wed to using
that, was just the simple way to get started.

However, I'd really like to avoid having one bot uber alles.

We should have different bots for different tasks. Microbots or
what have you.

My bot is primarily for the p!spy and transclusion features. The log just
happens to fall out as an easy result of the need to persist the data.
A bot that does everything would be much harder to maintain and the
cost of running a bot is tiny.

--
Chris Dent               (╯°□°)╯︵┻━┻            http://anticdent.org/
freenode: cdent                                   tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-19 Thread Thierry Carrez

Armando M. wrote:

It'd be nice if Neutron didn't overlap as much with Nova, Ironic, QA and
infra sessions, but I appreciate this could be a tall order.


It's difficult not to overlap with Nova, since they have a session on 
every time slot from Wednesday to Friday ;) So I can guarantee *all* 
your sessions will overlap with Nova sessions. We'll do what we can to 
reduce overlap with Ironic/QA/Infra.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] RC1 candidate

2016-03-19 Thread Armando M.
An update:

On 15 March 2016 at 21:38, Armando M.  wrote:

> Neutrinos,
>
> I believe we reached the point [1] where RC1 can be cut [2]. If I made an
> error of judgement, or any other catastrophic failure arises, please report
> a bug, and tag it as mitaka-rc-potential [3]. Please, sign off on
> postmortem [4], so that we can finalize the specs status for Mitaka and
> open up to Newton.
>
> Please, consider this the last warning to ensure that everything is in the
> right order so that you can feel proud of what you and your teammates have
> accomplished this release!
>

We bumped [2] already thanks to Cedric finding DB migration issues with
Postgres. I am about to bump [2] again to contain Kevin's fix for bug
1513765. Anything that landed in between is rather safe. At this point I
don't expect to see any other rc-potential fix that's gonna be in shape in
time for the end of the week. Salvatore mentioned something odd about
quota, but until we find out more and judge whether we need an RC2, it's
time we draw a line and pull the trigger on RC1, once the change for bug
1513765 lands.


>
> Cheers,
> Armando
>
> [1] https://launchpad.net/neutron/+milestone/mitaka-rc1
> [2] https://review.openstack.org/#/c/292445/
> [3] https://bugs.launchpad.net/neutron/+bugs?field.tag=mitaka-rc-potential
> [4] https://review.openstack.org/#/c/286413/
> [5] https://review.openstack.org/#/c/283383/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Newton Design Summit ideas kick-off

2016-03-19 Thread Armando M.
Hi folks,

It's the time of the year where we need to plan for design summit sessions.

This summit we are going for 9 fishbowl sessions, plus a full day on Friday
for team get-together.

We will break down sessions in three separate tracks as we did last
summit. Each track will have its own theme and more details will be
provided in due course. Due to the number of sessions allocated to us, most
likely we won't be having a lightning talk session this time. I apologize
in advance to Salvatore Orlando, the official Neutron Design Summit jester,
for this. I hope he'll understand.

I started etherpad [1] to collect inputs and ideas. Please start
brainstorming!

Cheers,
Armando

[1] https://etherpad.openstack.org/p/newton-neutron-summit-ideas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Ken'ichi Ohmichi
2016-03-17 5:32 GMT-07:00 Adam Young :
> On 03/16/2016 11:01 PM, Ken'ichi Ohmichi wrote:
>>
>> 2016-03-16 19:29 GMT-07:00 Adam Young :
>>>
>>> On 03/16/2016 09:20 PM, Ken'ichi Ohmichi wrote:

 Hi

 I have one proposal [1] related to negative tests in Tempest, and I am
 hoping for opinions before doing that.

 Now Tempest contains negative tests and sometimes patches are being
 posted for adding more negative tests, but I'd like to propose
 removing them from Tempest instead.

 Negative tests verify the surface of each component's REST API without
 any integration between components. That doesn't seem like integration
 testing, which is the scope of Tempest.
 In addition, we spend test operating time on a different
 component's gate when adding negative tests to Tempest. For example,
 we are running negative tests of Keystone and other
 components on the gate of Nova. That is meaningless, so we need to
 avoid adding more negative tests to Tempest now.

 If we want to add negative tests, a nice option is to implement
 these tests in each component's repo with the Tempest plugin interface. We
 can avoid running negative tests on other components' gates, and
 each component team can decide which negative tests are valuable on its
 gate.
>>>
>>>
>>> Hear hear!  For Keystone, please put them in the Keystone Client
>>> Functional
>>> tests.
>>
>> They are negative tests of the Keystone API, not keystoneclient tests.
>> These tests are implemented with Tempest's original REST clients, without
>> the official clients, because it is nice to test variable cases in Tempest.
>> So it would be nice to keep them in the Keystone repo, I feel.
>
>
> That works, too, and there is a Functional test section in Keystone Proper.
>
> Most of the Keystone APIs are fairly well covered by the Unit tests, which
> make a full HTTP call through a liteish server, so the functional tests are
> really to cover the live database (soon to include LDAP, hopefully)
> integrations.

Nice point.
Yeah, most of these cases (without a DB) are covered by each project's unit
tests, I guess.
At least Nova covers these cases with unit tests.

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][ptl] Kolla PTL Candidacy for Steven Dake

2016-03-19 Thread Steven Dake (stdake)
My Peers,

Kolla in Mitaka introduced many significant features, and in my opinion
is the first version of Kolla to offer a fully functional deployment system.
Our community has completely decoupled Kolla from various upstream releases
of Docker>=1.10.0, resolved the data container data loss problem with
named volumes, implemented a fantastic upgrade implementation, added
reconfiguration, improved security via dropping root and TLS implementations
on the external network, implemented the start of a rockin' diagnostics
system, dramatically improved our gating, and most importantly accepted
with open arms the work of various community members around kicking off
a Mesos based implementation of Kolla in addition to our existing
implementation in Ansible.  Most important to me personally is that we
have done all of this work without harming our diversity, which
remains consistently strong and protects our community and implementation.

I don't personally take credit for this work; Mitaka Kolla is the hard work
of everyone in the community working together towards a common goal.  Every
metric that can be pulled out of Stackalytics shows our project has
doubled in committers, reviewers, commits, reviews, and IC project
interaction.

A leader's job is to take the community on the trip they want to go on.  I
personally feel I've done a good job of balancing the various interests
in Kolla to maintain a high quality implementation while maintaining
diversity.

I view the role of PTL as a facilitator rather than giving directives.  My
personal growth in this area is only because of on-the-job training over
the last twenty years of development leadership, coupled with the rockin'
teams I've led and recruited, including Corosync, Heat, Magnum, and now
Kolla.

For Newton I wish to directly contribute to or facilitate the following
activities:

* Continue to deliver diversity in our Community.
* Implement reno support and obtain the release:managed tag [1].
* Obtain the vulnerability:managed tag [2].
* Obtain real-world production deployments using Kolla.
* Grow our community of developers, reviewers, and operators.
* Turn our leaky functional testing gate into an Iris [3].
* Implement plugin support for Horizon, Neutron, Nova, and Cinder, both
  from source and from binary.
* Implement BiFrost integration.
* Expand on our diagnostics system.
* Release a production-ready implementation of kolla-mesos.
* Containerize and deliver more Big Tent server projects.
* Make the image building and functional gating voting(!) by delivering
  mirrors of our upstream software dependencies internally in OpenStack
  Infrastructure.  This work was partially done in Mitaka but more work
  is required.
* Continue to provide excellent project management and improve our processes.

I am pleased to accept your vote and serve as your PTL for the Newton
release cycle.  As a Community I am certain we can make Newton as successful
as Kilo, Liberty, and Mitaka have been!

Warm regards,
-steve

[1] https://github.com/openstack/governance/blob/master/reference/tags/release_managed.rst
[2] https://github.com/openstack/governance/blob/master/reference/tags/vulnerability_managed.rst
[3] https://www.reddit.com/r/DIY/comments/3iw44k/i_made_an_iris_aperture_engagement_ring_box/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packstack] Update packstack core list

2016-03-19 Thread Ivan Chavero


- Mensaje original -
> De: "Alan Pevec" 
> Para: "OpenStack Development Mailing List (not for usage questions)" 
> 
> CC: "Javier Pena" , "David Moreau Simard" 
> 
> Enviados: Miércoles, 16 de Marzo 2016 4:35:25
> Asunto: Re: [openstack-dev] [packstack] Update packstack core list
> 
> 2016-03-16 11:23 GMT+01:00 Lukas Bezdicka :
> >> ...
> >> - Martin Mágr
> >> - Iván Chavero
> >> - Javier Peña
> >> - Alan Pevec
> >>
> >> I have a doubt about Lukas, he's contributed an awful lot to
> >> Packstack, just not over the last 90 days. Lukas, will you be
> >> contributing in the future? If so, I'd include him in the proposal as
> >> well.
> >
> > Thanks, yeah I do plan to contribute just haven't had time lately for
> > packstack.
> 
> I'm also adding David Simard who recently contributed integration tests.
> 
> Since there hasn't been -1 votes for a week, I went ahead and
> implemented group membership changes in gerrit.
> Thanks to the past core members, we will welcome you back on the next
> 
> One more topic to discuss: do we need a PTL election? I'm not sure we
> need a formal election yet, and the de-facto PTL has been Martin Magr, so if
> there are no other proposals let's just name Martin our overlord?

+1 Martin should be the PTL

> Cheers,
> Alan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint][stable/liberty][trove] stable/liberty changes in Trove that are ready for merge

2016-03-19 Thread Tony Breeds
On Sat, Mar 19, 2016 at 09:53:41AM -0400, Amrith Kumar wrote:
> On 03/19/2016 01:43 AM, Tony Breeds wrote:

> > Did I misunderstand what these reviews are doing?
 
> Tony, your understanding is correct. These requests led to a lot of
> discussion and the fact that these are non-voting experimental jobs was
> discussed at length. It was felt that these would be useful for people
> who have to support stable/liberty. But, as one of the reviewers points
> out, this request opens a can of worms. Are we implying that over time
> there are going to be more fixes to get these tests working? And I don't
> think any of us is comfortable with the answer to that being "yes".
> 
> So, I've put this discussion on the agenda for the Trove meeting on
> Wednesday and I will get back to you after that.

Cool.  I looked at attending in case I could provide helpful stable context,
but it's at 5am (for me).  I'll read over the log and minutes instead.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rabbitmq / ipv6 issue

2016-03-19 Thread Emilien Macchi
Quick update:
Sofer has patched puppetlabs-rabbitmq in follow-ups:
https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/444/
https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/445/

We are still under testing but it should work now.

Please let us know any weird thing you would see in CI related to RabbitMQ.

Thanks,

On Wed, Mar 16, 2016 at 5:48 AM, Derek Higgins  wrote:
> On 16 March 2016 at 02:41, Emilien Macchi  wrote:
>> I did some testing again and I'm still running in curl issues:
>> http://paste.openstack.org/show/BU7UY0mUrxoMUGDhXgWs/
>>
>> I'll continue investigation tomorrow.
>
> btw, tripleo-ci seems to be doing reasonably well this morning, I
> don't see any failures over the last few hours, so the problem you're
> seeing looks to be something that isn't a problem in all cases
>
>
>>
>> On Tue, Mar 15, 2016 at 8:00 PM, Emilien Macchi  wrote:
>>> Both Pull-requests got merged upstream (kudos to Puppetlabs).
>>>
>>> I rebased https://review.openstack.org/#/c/289445/ on master and
>>> abandoned the pin. Let's see how CI works now.
>>> If it still does not work, feel free to restore the pin and rebase
>>> again on the pin, so we can make progress.
>>>
>>> On Tue, Mar 15, 2016 at 6:21 PM, Emilien Macchi  wrote:
 So this is an attempt to fix everything in Puppet modules:

 * https://github.com/puppetlabs/puppetlabs-stdlib/pull/577
 * https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/443

 If we have the patches like this, there will be no need to patch TripleO.

 Please review the patches if needed,
 Thanks

 On Tue, Mar 15, 2016 at 1:57 PM, Emilien Macchi  wrote:
> So from now, we pin [5] puppetlabs-rabbitmq to the commit before [3]
> and I rebased Attila's patch to test CI again.
> This pin is a workaround, in the meantime we are working on a fix in
> puppetlabs-rabbitmq.
>
> [5] https://review.openstack.org/293074
>
> I also reported the issue in TripleO Launchpad:
> https://bugs.launchpad.net/tripleo/+bug/1557680
>
> Also a quick note:
> Puppet OpenStack CI did not detect this failure because we don't
> deploy puppetlabs-rabbitmq from master but from the latest release
> (tag).
>
> On Tue, Mar 15, 2016 at 1:17 PM, Emilien Macchi  
> wrote:
>> TL;DR;This e-mail tracks down the work done to make RabbitMQ working
>> on IPv6 deployments.
>> It's currently broken and we might need to patch different Puppet
>> modules to make it work.
>>
>> Long story:
>>
>> Attila Darazs is currently working on [1] to get IPv6 tested by
>> TripleO CI but is stuck because a RabbitMQ issue in Puppet catalog
>> [2], reported by Dan Sneddon.
>> [1] https://review.openstack.org/#/c/289445
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1317693
>>
>> [2] is caused by a patch in puppetlabs-rabbitmq [3], that change the
>> way we validate RabbitMQ is working from testing localhost to testing
>> the actual binding IP.
>> [3] 
>> https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/dac8de9d95c5771b7ef7596b73a59d4108138e3a
>>
>> The problem is that when testing the actual IPv6 address, curl fails for
>> some different reasons explained on [4] by Sofer.
>> [4] https://review.openstack.org/#/c/292664/
>>
>> So we need to investigate puppetlabs-rabbitmq and puppet-staging to
>> see if whether or not we need to change something there.
>> For now, I don't think we need to patch anything in TripleO Heat
>> Templates, but we'll see after the investigation.
>>
>> I'm currently working on this task, but any help is welcome,
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



 --
 Emilien Macchi
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] no more expand/contract for live upgrade?

2016-03-19 Thread Dulko, Michal
On Fri, 2016-03-18 at 08:27 +, Tan, Lin wrote:
> Hi,
> 
> I noticed that expand/migrate/contract was reverted in 
> https://review.openstack.org/#/c/239922/
> A new command, 'online_data_migrations', was introduced to Nova, and some 
> data-migration scripts have been added.
> So I wonder: will Nova keep expanding the DB schema at the beginning of a 
> live upgrade like before, or does Nova have some new way to handle DB schema 
> changes?
> The upgrade doc has not been updated in a long time: 
> http://docs.openstack.org/developer/nova/upgrade.html
> 
> Thanks a lot.
> 
> Best Regards,
> 
> Tan

[1] will help you understand the current way of doing live schema upgrades.

[1] http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
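
For flavor, the pattern described in [1] keeps schema changes purely additive
and moves data in small batches while services keep running. A rough sketch of
what one batched migration looks like (the helper names are invented for
illustration; this is not Nova's actual code):

    def migrate_fill_new_column(context, max_count):
        # Called repeatedly by 'nova-manage db online_data_migrations'
        # until no rows remain to convert.
        rows = _get_rows_missing_new_value(context, limit=max_count)
        for row in rows:
            row.new_value = _derive_from_legacy(row)
            row.save()
        # The (found, done) counts let the operator track remaining work.
        return len(rows), len(rows)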
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread GHANSHYAM MANN
On Fri, Mar 18, 2016 at 9:06 AM, Ken'ichi Ohmichi  wrote:
> 2016-03-17 4:05 GMT-07:00 Andrea Frittoli :
>> On Thu, Mar 17, 2016 at 2:57 AM Ken'ichi Ohmichi 
>> wrote:
>>>
>>> 2016-03-16 19:41 GMT-07:00 Jim Rollenhagen :
>>> > On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>>> >> Hi
>>> >>
>>> >> I have one proposal[1] related to negative tests in Tempest, and
>>> >> hoping opinions before doing that.
>>> >>
>>> >> Now Tempest contains negative tests and sometimes patches are being
>>> >> posted for adding more negative tests, but I'd like to propose
>>> >> removing them from Tempest instead.
>>> >>
>>> >> Negative tests verify surfaces of REST APIs for each component without
>>> >> any integrations between components. That doesn't seem integration
>>> >> tests which are scope of Tempest.
>>> >> In addition, we need to spend the test operating time on different
>>> >> component's gate if adding negative tests into Tempest. For example,
>>> >> we are operating negative tests of Keystone and more
>>> >> components on the gate of Nova. That is meaningless, so we need to
>>> >> avoid more negative tests into Tempest now.
>>> >>
>>> >> If wanting to add negative tests, it is a nice option to implement
>>> >> these tests on each component repo with Tempest plugin interface. We
>>> >> can avoid operating negative tests on different component gates and
>>> >> each component team can decide what negative tests are valuable on the
>>> >> gate.
>>> >>
>>> >> In long term, all negative tests will be migrated into each component
>>> >> repo with Tempest plugin interface. We will be able to operate
>>> >> valuable negative tests only on each gate.
>>> >
>>> > So, positive tests in tempest, negative tests as a plugin.
>>> >
>>> > Is there any longer term goal to have all tests for all projects in a
>>> > plugin for that project? Seems odd to separate them.
>>>
>>> Yeah, from an implementation viewpoint, that seems a little odd,
>>> but given the main scope of Tempest, and to avoid unnecessary gate
>>> operation time, it can be acceptable, I feel.
>>> Negative tests are corner cases in most cases; they don't seem like
>>> integration tests.
>>
>> I think it's difficult to define a single black and white criteria for
>> negative tests, as they encompass a wide range of types of tests.
>>
>> I agree that things that only testing the API level of a service (not even a
>> DB behind) do not necessarily belong in tempest - i.e. testing of input
>> validation done by an API.  We could have a guideline for such tests to be
>> implemented as unit/functional tests in tree of the service.

Yes, this is the key point here. We see that ~70% of the negative tests are
just checking the API surface level (wrong-input validation), which
definitely does not belong in Tempest's scope. Those should be in the
respective project's repo, either as functional/unit tests or via a plugin.
But in that case we have to define very clear criteria about what
level of negative testing should be in scope for Tempest.

Another key point is that, as we already have a lot of surface-level
negative testing in Tempest, should we reject new ones?
For me it is sometimes difficult to reject those, as we already have
some in Tempest.

My vote here is that we reject new surface-level negative tests and try
to move all existing (surface-level) negative tests out of Tempest
ASAP. Those can simply be moved to the projects' functional/unit tests, as
sketched below.
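
For reference, moving such tests out via the Tempest plugin interface is
mostly boilerplate: an entry point plus a small plugin class. A minimal
sketch (the project and module names are illustrative):

    # In the hosting project's setup.cfg:
    #   [entry_points]
    #   tempest.test_plugins =
    #       my_service_tests = my_service.tests.tempest.plugin:MyPlugin

    import os

    from tempest.test_discover import plugins

    class MyPlugin(plugins.TempestPlugin):
        def load_tests(self):
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "my_service/tests/tempest"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            pass  # no extra config options in this sketch

        def get_opt_lists(self):
            return []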

>
> Yeah, it is difficult to distinguish whether negative tests are corner
> cases or not using that as the criterion.
> If we need to do that, we (the QA team) need to read the implementation
> code of the core six services deeply during Tempest reviews. That is why I
> rushed to remove all of them. My first proposal is not good according
> to the feedback, but I'm happy to get feedback to see our direction :-)
>
> The guideline is a nice idea.
> If it is necessary to add more negative tests to Tempest, how about
> requiring the commit message to explain why the new tests are not corner
> cases?
> Then we can know the merit of new negative tests when reviewing.
>
>> However Tempest is also interoperability, so we should keep at least a few
>> negative API checks in tempest (for the core six services) to enforce that
>> return codes do not change inadvertently in negative cases, which could
>> break existing clients and applications.
>
> This also is a nice point.
> How to change error return codes is unclear to me at this time.
> In Nova, there are some exceptions that allow changing an error return code
> without a microversion bump, as in [1]. This kind of guideline will be
> discussed later.

This makes Tempest's scope a little unclear again. If we want to
verify all error codes in Tempest, then that leads to having all
surface-level negative testing in Tempest as well. There are lots of
scenarios where error codes can be verified, and it will be difficult to
cover them all in Tempest.

Current negative tests 

Re: [openstack-dev] [all] Newton Design Summit - Proposed slot allocation

2016-03-19 Thread Emilien Macchi
On Thu, Mar 17, 2016 at 1:29 PM, Thierry Carrez  wrote:
> Emilien Macchi wrote:
>>
>> On Thu, Mar 17, 2016 at 11:43 AM, Kirill Zaitsev 
>> wrote:
>>>
>>> Is it too late to ask for a half-day Contributors Meetup for murano?
>>>
>>> We had an extremely successful contributors meetup in Tokyo and I guess
>>> it
>>> is an error on our side, that we have not requested one for in Austin.
>>
>>
>> Puppet OpenStack team can survive without 1/2 day for community
>> meetup, 1/4 could work and we can share the room with you if Thierry
>> can't find a slot for you.
>
>
> Yeah, all the meetup spaces we have for Friday have been allocated, and some
> of them will be pretty small :) Two options: share with another team, or
> just gather in one corner of the large room full of roundtables we'll have
> for lunch...
>
> At this stage given available room sizes I'd recommend the latter.

Sharing the half day works for us, we don't need so much time.
If it works for you, you can split the half day, PuppetOpenStack the
first 1/4 and Murano for the second 1/4.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][release] Announcement of release of Mitaka rc1!

2016-03-19 Thread Steven Dake (stdake)
Thanks - although it wasn't me that made it happen, it was the team.

Regards,
-steve


From: "Daneyon Hansen (danehans)" 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, March 18, 2016 at 1:57 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [kolla][release] Announcement of release of Mitaka 
rc1!

Steve,

Congratulations on the release!!!

Regards,
Daneyon

On Mar 17, 2016, at 9:32 AM, Steven Dake (stdake) wrote:


The Kolla community is pleased to announce the release of Mitaka
milestone RC1.  This may look like a large list of features, but really
it was finishing the job on 1 or 2 services that were missing for each
of our major blueprints for Mitaka.

Mitaka RC1 Features:

  *   MariaDB lights out recovery
  *   Full upgrades of all OpenStack services
  *   Full upgrades of all OpenStack infrastructure
  *   Full reconfiguration of all OpenStack services
  *   Full reconfiguration of all OpenStack infrastructure
  *   All containers now run as non-root user
  *   Added support for Docker IPC host namespace
  *   Cleaned up false haproxy warning about resource unavailability
  *   Improved Vagrant scripts
  *   Mesos DNS container
  *   Ability to use a local archive or directory for source builds
  *   Mesos has new per-service CLIs to make deployment and upgrades more
      flexible
  *   Mesos has better constraints to deal with multi-host deployment
  *   Mesos has better dependencies for nova and neutron (inter-host
      dependencies)

For more details, check out our blueprint feature and bug tracker here:

https://launchpad.net/kolla/+milestone/mitaka-rc1


We are super excited about the release of Kolla Mitaka-rc1!  We produced some
really impressive output in 12 days, implementing solutions for 4
blueprints and 46 bugs.  This cycle our core team grew by one member,
Alicja Kwasniewska.  Our community continues to remain extremely diverse
and is growing, with 203 IC interactions and 40 corporate affiliations.  Check
out our stackalytics page at:

http://stackalytics.com/?module=kolla-group&metric=person-day

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Sean Dague
On 03/16/2016 08:27 AM, Amrith Kumar wrote:
> Nikhil, thank you for the very timely posting. This is a topic that has
> been discussed quite a bit recently within the Trove team. I've read the
> document you reference as [1] and I have not been part of earlier
> conversations on this subject so I may be missing some context here.
> 
> I feel that the conversation (in [1], in the email thread) has gone to a
> discussion of implementation details (library vs. service, quota
> enforcement engine, interface, ...) when I think there is still some
> ambiguity in my mind about the requirements. What is it that this
> capability will provide and what is the contract implied when a service
> adopts this model.
> 
> For example, consider this case that we see in Trove. In response to a
> user request to create a cluster of databases, Trove must provision
> storage (cinder), compute (nova), networks (let's say neutron), and so
> on. As stated by Boris in his email, it would be ideal if Trove had a
> confirmation from all projects that there was quota available for the
> requests that would be made before the requests actually are made. This
> implies therefore that participating projects (cinder, nova, neutron,
> ...) would have to support some reservations scheme and subsequently
> honor requests based on a reservation. So, I think there's more to this
> than just another library or project, there's an implication for
> projects that wish to participate in this scheme. Or am I wrong in this
> understanding?

I think you have to wind it back further. While Trove wants to get a
consistent lock on quotas in all the projects below it, any single one
of those is massively racy on its internal quota.

It's totally possible to have nova believe it has enough cpu, memory,
disk, security_groups, floating_ips, instances available for your user,
fail on a reschedule, and end up leaking off chunks of this, and
eventually fail you. So before asking the question about "Can Trove get
a unified quota answer" we have to solve "can the underlying projects
guaruntee consistent quota answers".

There has been a giant pile of bugs in Nova about these races forever;
until we solve this in the lower-level projects there is no
hope of solving the Trove use case.
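
A toy illustration of the check-then-act race behind many of those bugs
(illustrative Python, not Nova code):

    import threading
    import time

    quota = {'limit': 10, 'used': 9}

    def consume():
        # Unsynchronized check-then-act: both threads can pass the check
        # before either one records its usage.
        if quota['used'] + 1 <= quota['limit']:
            time.sleep(0.01)  # widen the race window
            quota['used'] += 1

    threads = [threading.Thread(target=consume) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(quota['used'])  # typically prints 11, i.e. over quota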

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-19 Thread Markus Zoeller
If you have open changes in Gerrit for this task, please be aware
that the commit message must be updated to the Newton blueprint [1]:

Implements blueprint centralize-config-options-newton

New changes should use this line in the commit message too. This was
necessary to keep a clean track record of the work done in Mitaka.
I'll find two IRC meeting slots (Europe/Asia and US friendly) shortly
after the Mitaka release where we can discuss open questions. I will
also attend the Newton summit, in case you want to reach out there.
The next weeks until the release are focused on creating a stable
and reliable product and preparation for the summit, so I have less
time to review these changes. But you can always contact me in IRC.

References:
[1] 
https://blueprints.launchpad.net/nova/+spec/centralize-config-options-newton

Regards, Markus Zoeller (markus_z)

Markus Zoeller/Germany/IBM@IBMDE wrote on 03/02/2016 06:45:45 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List" 

> Date: 03/02/2016 06:47 PM
> Subject: [openstack-dev] [nova] config options help text improvement: 
> current status
> 
> TL;DR: Of the ~600 Nova-specific config options:
> ~140 are at a central location with an improved help text
> ~220 are in open reviews (currently on hold)
> ~240 are still todo
> 
> 
> Background
> ==
> Nova has a lot of config options. Most of them weren't well
> documented and without looking in the code you probably don't
> understand what they do. That's fine for us developers but the ops
> had more problems with the interface we provide for them [1]. After
> the Mitaka summit we came to the conclusion that this should be 
> improved, which is currently in progress with blueprint [2].
> 
> 
> Current Status
> ==
> After asking on the ML for help [3] the progress improved a lot. 
> The goal is clear now and we know how to achieve it. The organization 
> is done via [4] which also has a section of "odd config options". 
> This section is important for a later step when we want do deprecate 
> config options to get rid of unnecessary ones. 
> 
> As we reached the Mitaka-3 milestone we decided to put the effort [5] 
> on hold to stabilize the project and focus the review effort on bug 
> fixes. When the Newton cycle opens, we can continue the work. The 
> current result can be seen in the sample "nova.conf" file generated 
> after each commit [6]. The appendix at the end of this post shows an
> example.
> 
> All options we have will be treated that way and moved to a central
> location at "nova/conf/". That's the central location which hosts
> now the interface to the ops. It's easier to get an overview now.
> The appendix shows how the config options were spread at the beginning
> and how they are located now.
> 
> I initially thought that we have around 800 config options in Nova
> but I learned meanwhile that we import a lot from other libs, for 
> example from "oslo.db" and expose them as Nova options. We have around
> 600 Nova specific config options, and ~140 are already treaded like
> described above and ca. 220 are in the pipeline of open reviews.
> Which leaves us ~240 which are not looked at yet.
> 
> 
> Outlook
> ===
> The numbers of the beginning of this ML post make me believe that we
> can finish the work in the upcoming Newton cycle. "Finished" means
> here: 
> * all config options we provide to our ops have proper and usable docs
> * we have an understanding which options don't make sense anymore
> * we know which options should get stronger validation to reduce errors
> 
> I'm looking forward to it :)
> 
> 
> Thanks
> ==
> I'd like to thank all the people who are working on this and making
> this possible. A special thanks goes to Ed Leafe, Esra Celik and
> Stephen Finucane. They put a tremendous amount of work in it.
> 
> 
> References:
> ===
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2016-January/009301.html
> [2] 
https://blueprints.launchpad.net/nova/+spec/centralize-config-options
> [3] 
> 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081271.html

> [4] https://etherpad.openstack.org/p/config-options
> [5] Gerrit reviews for this topic: 
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options
> [6] The sample config file which gets generated after each commit:
> http://docs.openstack.org/developer/nova/sample_config.html
> 
> 
> Appendix
> 
> 
> Example of the help text improvement
> ---
> As an example, compare the previous documentation of the scheduler 
> option "scheduler_tracks_instance_changes". 
> Before we started:
> 
> # Determines if the Scheduler tracks changes to instances to help 
> # with its filtering decisions. (boolean value)
> #scheduler_tracks_instance_changes = true
> 
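
For a sense of the improved style (representative only, not the verbatim
help text that landed in Nova):

    # The scheduler may need information about the instances on a host
    # to evaluate its filters and weighers, e.g. for the (anti-)affinity
    # filters, which pick hosts based on the instances already running
    # there. If the configured filters and weighers do not need this
    # information, disabling this option will improve performance.
    # (boolean value)
    #scheduler_tracks_instance_changes = true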

Re: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Rabi Mishra
> Hi Heaters,
> 
> The Mitaka release is close to finished, so it's a good time to review
> the results of our work.
> One of those results is an analysis of contributions for the last release
> cycle.
> According to the data [1] we have one good candidate for nomination to
> the core review team:
> Oleksii Chuprykov.
> During this release he showed a significant review metric.
> His reviews were valuable and useful, and he has a good level of
> expertise in the Heat code.
> So I think he is worthy of joining the core reviewers team.
> 
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature
> 
> [1] http://stackalytics.com/report/contribution/heat-group/120

+1
 
> --
> Regards,
> Sergey.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-19 Thread Salvatore Orlando
I'm not sure if this was mentioned already throughout the thread, however
as I've been working a bit on quotas in the past I might have some
additional information:

- Looking at quotas, it is worth distinguishing between management (e.g.
resource limits per tenant and/or user) and enforcement (e.g. can the
bakery service give me 4 cookies, or did I already eat too many?).
  While for the reasons listed throughout this thread the latter should
really happen in the same context where the request is going to be served,
quota management instead might be its own service, or at least be done in
a common endpoint for all OpenStack resources.
- As far as quota enforcement is concerned, Dims already shared all the
relevant links. You might already be aware that we had a consensus around a
library, but hit a bit of a blocker on the fact that the library would have
introduced db model changes (at the time I devised a massive hack disguised
as an abstraction around it). Considering alembic advancements (we are all
using alembic, aren't we?) this should not be an issue anymore. I really
would love to have a library that does quota enforcement.
- A good point has also been raised about securing a chunk of resources
across projects; that is also related to John's point about business
quotas... I'm not sure it is necessary, but Blazar [1] kind of achieves
this - even if it was conceived with different purposes.
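
To make the library idea concrete, a purely hypothetical sketch of the kind of
reservation-style interface such a library could expose (every name here is
invented; no such library existed at the time of writing):

    class QuotaEngine(object):
        def reserve(self, context, deltas):
            # Atomically reserve resources, e.g. {'volumes': 1,
            # 'gigabytes': 10}; raise OverQuota if any limit would be
            # exceeded.
            pass

        def commit(self, context, reservation_id):
            # Turn a reservation into recorded usage once the async
            # operation actually succeeds.
            pass

        def rollback(self, context, reservation_id):
            # Release a reservation (e.g. after a failed build) so
            # usage is not leaked.
            pass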

Salvatore

[1] https://wiki.openstack.org/wiki/Blazar


On 16 March 2016 at 18:27, John Dickinson  wrote:

> There are two types of quotas you may want to enforce in an OpenStack
> project: technical and business.
>
> Technical quotas are things that are hard limits of the system based on
> either actual resources available or protecting the system itself. For
> example, you can't provision a 2TB volume if you only have 1TB of capacity
> available. Similarly, you may want to ratelimit a user to a certain number
> of operations per second in order to keep the system usable by every user.
>
> These sort of quotas should absolutely stay in the realm of each
> individual project. And, for example, if Trove needs to provision a Cinder
> volume but that fails, it's Trove's responsibility for handling that
> elegantly.
>
> Business quotas are different. This is stuff like "a user is allowed to
> provision 1TB of Cinder per Nova compute unit that is provisioned" or "a
> user can provision 1Gb of network capacity per 200TB of data stored in
> Swift". Simpler rules that don't have cross-project dependencies are
> possible too (eg "A user can have no more than 3 compute instances" or "a
> user can have no more than 100k objects or 500TB stored in Swift").
> Oftentimes, these business quotas will be tied in to (or dependent on)
> other product-specific tools like billing or CRM systems.
>
> These business quotas should have a common rules engine in an OpenStack
> deployment. I've long thought that this sort of quota enforcement is an
> authZ decision (i.e. Keystone), but perhaps it's in some other project
> (Congress?). The hard part is that if it's in a central place, that service
> has to be enormously scalable. Specifically, it has to be able to handle
> the aggregate request rate load of every service it is enforcing quotas on.
>
> If we end up with an OpenStack project that is doing centralized business
> quotas, you've got the start of building an ERP system (
> https://en.wikipedia.org/wiki/Enterprise_resource_planning). Frankly, I
> don't think we should be doing that. It's outside of our scope of building
> cloud infrastructure software.
>
> However, we should be all about fixing any problems any individual project
> has about handling technical quotas. That work should stay within its
> respective project. There's no need to consolidate or combine
> project-specific resource management because they happen to all be called
> "quotas".
>
> --John
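
To make John's technical/business distinction concrete, a business-quota
rule of the "1TB of Cinder per Nova compute unit" kind could be expressed
roughly as below. This is purely an illustrative sketch; the rule table and
check() helper are invented for the example and are not an existing API:

    # Each rule maps a resource to a cap computed from current usage of
    # other resources -- a cross-project "business" constraint.
    BUSINESS_RULES = {
        # 1 TB (1024 GB) of Cinder volume per Nova compute unit
        'cinder.volume_gb':
            lambda usage: 1024 * usage.get('nova.compute_units', 0),
        # flat cap: no more than 3 compute instances
        'nova.instances': lambda usage: 3,
    }


    def check(usage, resource, requested):
        """Return True if `requested` more of `resource` fits the rules."""
        rule = BUSINESS_RULES.get(resource)
        if rule is None:
            return True  # no business rule -> only technical quotas apply
        return usage.get(resource, 0) + requested <= rule(usage)


    # Example: a tenant with 2 compute units and 1500 GB of volumes
    usage = {'nova.compute_units': 2, 'cinder.volume_gb': 1500}
    assert check(usage, 'cinder.volume_gb', 500)      # 2000 <= 2048
    assert not check(usage, 'cinder.volume_gb', 600)  # 2100 >  2048

The scalability worry above is visible even in this toy: every check needs
fresh cross-project usage data, which is exactly the aggregate load a
central service would have to carry.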
>
>
>
>
> On 15 Mar 2016, at 23:25, Nikhil Komawar wrote:
>
> > Hello everyone,
> >
> > tl;dr:
> > I'm writing to request some feedback on whether the cross-project Quotas
> > work should move ahead as a service or a library - or, going further,
> > whether it should even be in a common repository at all. Would
> > projects prefer to implement everything from scratch in-tree? Should we
> > limit it to a guideline spec?
> >
> > But before I ask anything more, I want to specifically thank Doug Hellmann,
> > Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and Andrew
> > Laski for the early feedback that has helped give good shape to
> > the discussions so far.
> >
> > Some more context on what is happening:
> > We have this in-progress spec [1] up to provide context and a platform
> > for such discussions. I will rephrase it to say that we plan to
> > introduce a new 'entity' in the OpenStack realm that may be a library or
> > a service. Both concepts have trade-offs and the WG wanted to get more
> > ideas around such trade-offs from the larger 

Re: [openstack-dev] [Horizon] How do we move forward with xstatic releases?

2016-03-19 Thread Thomas Goirand
On 03/17/2016 07:12 AM, Richard Jones wrote:
On 13 March 2016 at 07:11, Matthias Runge wrote:
> 
> On 10/03/16 11:48, Beth Elwell wrote:
> > If we will have potential breakage anyway, I don't understand why the
> > better solution here would not be to just use the bower and npm tools,
> > which are standardised for JavaScript and would move Horizon more
> > towards using widely recognised tooling from not just within OpenStack
> > but the wider development community. Back versions always need to be
> > supported for a time; however, I would add that in the long term this
> > could end up saving time and creating a stable longer-term solution.
> >
> 
> I have a few issues with those "package managers":
> - downloads are not verified; there is a chance of getting a "bad"
> download.
> - they point to the outside world, like to github etc. While they
> appear to work "most of the time", that might not be good enough for
> the gate.
> - how often have we been blocked by releases of software not managed by
> OpenStack? Seriously, that happens quite a few times over a release
> cycle, not to mention breakages by releases of our own tools turning out
> to block one or another sub-project.
> 
> 
> To be fair to those package managers, the issues OpenStack has had with
> releases of libraries breaking things are a result of us either:
> 
> a) not pinning releases (upper-constraints now fixes that for *things
> that use it*, which isn't everything, sadly) or
> b) the system that tests upper-constraints changes not having broad
> enough testing across OpenStack for us to notice when a new library
> release breaks things. I would like to increase the inclusion of
> Horizon's test suite in the constraints testing for this reason. At
> least, it's on my TODO :-)
> 
> Horizon, for example, currently does *not* use the upper-constraints
> pinning in its test suite or installation, so we're vulnerable to, say,
> a python-*client release that's not compatible. I have a patch in the
> works to address this, but it kinda depends on us moving over from
> run_tests.sh to tox, which is definitely something to wait until N for.
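
(As a concrete illustration of what constraint pinning buys: a few lines of
Python can report where a local environment has drifted from the pins. A
rough sketch, assuming only the 'name===version' pin format that
upper-constraints.txt uses, plus optional environment markers:)

    import pkg_resources


    def constraint_drift(path='upper-constraints.txt'):
        """Print packages whose installed version differs from the pin."""
        for line in open(path):
            line = line.split('#')[0].strip()
            if '===' not in line:
                continue
            name, pinned = line.split('===', 1)
            pinned = pinned.split(';')[0].strip()  # drop env markers
            try:
                installed = pkg_resources.get_distribution(name).version
            except pkg_resources.DistributionNotFound:
                continue  # not installed locally; nothing to compare
            if installed != pinned:
                print('%s: installed %s, pinned %s'
                      % (name, installed, pinned))


    constraint_drift()

Run inside a test venv, something like this makes a drifted dependency
obvious before the gate does.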

Thanks for this work. I'm looking forward to it. Do you also have plans in
the pipeline to stop using nose / python -m coverage and switch to testr?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Daniel Mellado


On 17/03/16 at 04:27, Assaf Muller wrote:
> On Wed, Mar 16, 2016 at 10:41 PM, Jim Rollenhagen wrote:
>> On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>>> Hi
>>>
>>> I have one proposal[1] related to negative tests in Tempest, and
>>> hoping opinions before doing that.
>>>
>>> Now Tempest contains negative tests and sometimes patches are being
>>> posted for adding more negative tests, but I'd like to propose
>>> removing them from Tempest instead.
>>>
>>> Negative tests verify the surfaces of REST APIs for each component without
>>> any integration between components. Those don't seem to be integration
>>> tests, which are the scope of Tempest.
>>> In addition, we spend test run time on other components' gates if we
>>> add negative tests to Tempest. For example, we are running negative
>>> tests of Keystone and more components on the gate of Nova. That is
>>> wasteful, so we need to avoid adding more negative tests to Tempest now.
>>>
>>> If we want to add negative tests, a nice option is to implement
>>> these tests in each component's repo with the Tempest plugin interface. We
>>> can avoid running negative tests on other components' gates, and
>>> each component team can decide which negative tests are valuable on its
>>> gate.
>>>
>>> In the long term, all negative tests will be migrated into each component's
>>> repo with the Tempest plugin interface. We will then be able to run
>>> only the valuable negative tests on each gate.
>> So, positive tests in tempest, negative tests as a plugin.
>>
>> Is there any longer term goal to have all tests for all projects in a
>> plugin for that project? Seems odd to separate them.
> I'd love to see this idea explored further. What happens if Tempest
> ends up without tests, as a library for shared code as well as a
> centralized place to run tests from via plugins?
I think this should be discussed further, as some tests, such as the
scenario ones, make use of several projects. So, for scenario tests at
least, I think that we should keep them inside the core tempest repo.
Besides that, such a change would also affect other projects, such as
Defcore/Refstack, where the plugin usage would make it complex to keep a
list of out-of-tree tests.
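
For anyone who hasn't written one yet, the plugin route is mostly
boilerplate. A minimal sketch, assuming the documented tempest plugin
interface; the myproject_tempest_plugin names are placeholders invented for
this example:

    import os

    from tempest.test_discover import plugins


    class MyProjectTempestPlugin(plugins.TempestPlugin):
        """Minimal plugin exposing a project's own tempest tests."""

        def load_tests(self):
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "myproject_tempest_plugin/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            pass  # register project-specific config options here

        def get_opt_lists(self):
            return []

The class is then advertised through a 'tempest.test_plugins' entry point
in the project's setup.cfg, and tempest discovers it at run time.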
>
>> // jim
>>
>>> Any thoughts?
>>>
>>> Thanks
>>> Ken Ohmichi
>>>
>>> ---
>>> [1]: https://review.openstack.org/#/c/293197/
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-2, Mar 21-25

2016-03-19 Thread Craig Vyvial
Trove has 3 patches in the gate that are awaiting merge.

https://review.openstack.org/#/c/281576/
https://review.openstack.org/#/c/288123/
https://review.openstack.org/#/c/273204/

I expect these will merge in the next few hours; at that time we will be
submitting the RC1 release.

Thanks,
Craig Vyvial

On Thu, Mar 17, 2016 at 9:12 PM Jim Rollenhagen 
wrote:

> Ironic and IPA should have releases coming next week.
>
> // jim
>
> > On Mar 17, 2016, at 12:23, Doug Hellmann  wrote:
> >
> > We're almost to the finish line with Mitaka!
> >
> > Focus
> > -
> >
> > Project teams following the cycle-with-milestone model should be
> > testing their release candidates and fixing release-critical bugs.
> >
> > Project teams following the cycle-with-intermediary model should
> > ensure they have at least one Mitaka release, and determine whether
> > they will need another release before the end of the Mitaka cycle.
> >
> > All ongoing feature work should be retargeted to the Newton cycle.
> >
> > All project teams should be working on release-critical bugs.
> >
> > General Notes
> > -
> >
> > The global requirements list is frozen. If you need to change a
> > dependency, for example to include a bug fix in one of our libraries
> > or an upstream library, please provide enough detail in the change
> > request to allow the requirements review team to evaluate the change.
> >
> > User-facing strings are frozen to allow the translation team time
> > to finish their work.
> >
> > Release Actions
> > ---
> >
> > We still have quite a few managed cycle-with-milestones projects
> > without a release candidate:
> >
> > aodh
> > ceilometer
> > barbican
> > designate
> > horizon
> > manila
> > sahara
> > trove
> > zaqar
> >
> > And there are a few managed cycle-with-intermediary projects without
> > a clear indication if they have cut their final release:
> >
> > ironic
> > ironic-python-agent
> > python-manilaclient
> > sahara-tests
> > swift
> >
> > Please contact the release team, or submit a release request to the
> > releases repository, to address these missing releases.
> >
> > Important Dates
> > ---
> >
> > Final release candidates: R-1, Mar 28-Apr 1
> > Mitaka final release: Apr 7
> >
> > Mitaka release schedule:
> > http://releases.openstack.org/mitaka/schedule.html
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] propose ejuaso for core

2016-03-19 Thread Dougal Matthews
On 14 March 2016 at 14:38, Dan Prince  wrote:

> http://russellbryant.net/openstack-stats/tripleo-reviewers-180.txt
>
> Our top reviewer over the last half year is ejuaso (he goes by Ozz for
> Osorio, or jaosorior on IRC). His reviews seem consistent, he
> consistently attends the meetings, and he chimes in on lots of things.
> I'd like to propose we add him to our core team (probably long overdue
> now too).
>
> If you agree please +1. If there is no negative feedback I'll add him
> next Monday.
>

+1


>
> Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Packaging CI for Fuel

2016-03-19 Thread Vladimir Kozhukalov
Hi,

> Are there any efforts on OpenStack packaging?

The short answer is yes. We are putting effort into migrating all
our packaging activities to the community RPM/DEB projects.

Long story is as follows:

At the moment Fuel is distributed as a set of RPM packages. The
Packaging CI that Aleksandra mentioned is nothing more
than a copy of the CI that we have been using at Mirantis for about
two years. At the moment we are working hard to split Fuel
into upstream (community) and downstream (part of MOS) parts, and this
CI is a part of that work.

Currently, the OpenStack RPM project is at an early development stage.
It is another project (not Fuel), and I hope it will finally make it
possible to build RPM packages using the OpenStack Infrastructure.
We (Fuel) are in contact with the OpenStack RPM team and we are planning
to move all Fuel RPM specs under this project. I hope this migration
will be finished in the Newton cycle.

Besides, packaging is not just preparing RPM/DEB specs. We also
need tools to build packages, as well as a CI strategy:
 - get spec here
 - get source code there
 - prepare build environment
 - build packages
 - publish packages to testing repository
 - test packages
 - publish packages to current public repository
Fuel Packaging CI already does all of this (a rough sketch of the core
build-and-publish steps is below). And the fact that we made a public
Packaging CI instance reflects our intention to share our experience.
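
To make that concrete, the build-and-publish part of the strategy boils
down to a few commands. The sketch below is only an outline; the mock
config, paths, and the promotion step are placeholders for this example:

    import subprocess


    def build_and_publish(srpm, mock_config='epel-7-x86_64',
                          repo_dir='/var/www/testing'):
        # build binary packages from the source RPM in a clean chroot
        subprocess.check_call(['mock', '-r', mock_config, '--rebuild',
                               srpm, '--resultdir', repo_dir])
        # regenerate the repository metadata so the result is installable
        subprocess.check_call(['createrepo', repo_dir])
        # package tests would run here, before promotion to the public repo

On the DEB side the same outline holds, with sbuild and apt-ftparchive
standing in for mock and createrepo.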

I'd also like to mention our recent initiative, Packetary [1], which is
a tool that is meant to cover the whole RPM/DEB packaging domain
(building packages, building repositories, cloning repositories).
It is not something totally new; it is to become a single convenient API to
widely used tools like createrepo, python-debian, mock, sbuild, etc.
This tool could potentially be used as part of whatever CI, or by
a regular user via the CLI.

As for DEB packaging, as Thomas wrote, he is currently working on making
it possible to build DEB packages (including Fuel) using the OpenStack
Infrastructure.

[1] https://wiki.openstack.org/wiki/Packetary


Vladimir Kozhukalov

On Fri, Mar 18, 2016 at 12:11 AM, Thomas Goirand  wrote:

> On 03/16/2016 06:22 PM, Emilien Macchi wrote:
> > Are there any efforts on OpenStack packaging?
> >
> > http://governance.openstack.org/reference/projects/packaging-deb.html
> > http://governance.openstack.org/reference/projects/packaging-rpm.html
> >
> > I would like to see packaging built & tested by OpenStack Infra, so
> > downstream CI (Fuel, Puppet OpenStack, TripleO, Kolla, etc) could use
> > it as a single place and efforts would converge.
>
> Hi Emilien,
>
> As you know, things were a bit stuck, with the Debian image patch not
> being approved. But that has changed, and we do have a debian-jessie image
> in infra now. Therefore, I've moved to the next step, which is actually
> building packages. Here's the CR:
>
> https://review.openstack.org/#/c/294022/
>
> I've been able to test the pkgdeb-install-sbuild.sh script that I'm
> proposing, to set up sbuild on a copy of the Debian image (thanks a lot to
> Pabelanger and Fungi for copying the image and giving it to me for
> download), and sbuild was set up properly. The pkgdeb-build-pkg.sh also
> worked, though I'm not 100% sure yet about the content of
> /home/jenkins/workspace/${JOB_NAME}, and whether it will have the correct
> branch, but everything else should be working to build packages.
>
> Once packages are built, we will want to publish them somewhere.
> That's the part where there are lots of unknowns. This has so far never
> been done on OpenStack infra. Hopefully, our new PTL will help
> here (or someone else from infra)! :) Also, managing a Debian repository
> isn't really hard to do: one can generate the necessary artifacts with a
> small shell script which uses apt-ftparchive (you can look at how it's done
> in src/pkgos-scan-repo in openstack-pkg-tools).
>
> Finally, we'll need a way to build backports from Sid and also publish
> them.
>
> That's where we are now. Let's go back to the first step, which is the
> CR linked above. Help and comments welcome.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Sergey Kraynev
Looks like it was a unanimous decision :)
Oleksii, my congratulations!
Good work. I will add you to the necessary groups ;)

On 17 March 2016 at 04:34, Huangtianhua  wrote:
> +1 :)
>
> -----Original Message-----
> From: Sergey Kraynev [mailto:skray...@mirantis.com]
> Sent: 16 March 2016 18:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer
>
> Hi Heaters,
>
> The Mitaka release is close to finished, so it's a good time to review
> the results of our work.
> One of those results is an analysis of contributions over the last release
> cycle.
> According to the data [1] we have one good candidate for nomination to
> the core reviewer team:
> Oleksii Chuprykov.
> During this release he showed a significant review metric.
> His reviews were valuable and useful. He also has a sufficient level of
> expertise in the Heat code.
> So I think he is worthy of joining the core reviewer team.
>
> I ask you to vote and decide his destiny.
>  +1 - if you agree with his candidature
>  -1  - if you disagree with his candidature
>
> [1] http://stackalytics.com/report/contribution/heat-group/120
>
> --
> Regards,
> Sergey.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] COE drivers spec

2016-03-19 Thread Kai Qiang Wu
Here are some of my raw points,


1. For the driver mentioned, I think we don't necessarily need a bay-driver
here; as we have network-driver and volume-driver, maybe it is not needed
to introduce a driver at the bay level (a bay is higher-level than a
network or volume).

maybe like

coes/
   swarm/
   mesos/
   kubernetes

Each CoE would include the following; take swarm as an example:

coes/
swarm/
 default/
 contrib/
Or we don't use contrib here, just like this (one is supported by default;
others are contributed by more contributors and tested in the Jenkins
pipeline):
 coes/
 swarm/
 atomic/
 ubuntu/


We would have a BaseCoE that each specific CoE inherits from. Each CoE has
related lifecycle management operations: Create, Update, Get, and Delete.



2. We need to think more about the scale manager, which involves scaling a
cluster up and down, maybe with both auto-scale and manual-scale modes.


The use cases: as a Cloud Administrator, I could easily use OpenStack
to provide CoE clusters, manage the CoE life cycle, and scale CoEs.
CoEs would do their best to use OpenStack network and volume services to
provide CoE-related network and volume support.


Another interesting case (not required): if a user just wants to deploy one
container in Magnum, we schedule it to the right CoE (if the user manually
specifies one, it would be scheduled to that specific CoE).


Or more use cases.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Jamie Hannaford 
To: "openstack-dev@lists.openstack.org"

Date:   17/03/2016 07:24 pm
Subject:[openstack-dev] [Magnum] COE drivers spec



Hi all,

I'm writing the spec for the COE drivers, and I wanted some feedback about
what it should include. I'm trying to reconstruct some of the discussion
that was had at the mid-cycle meet-up, but since I wasn't there I need to
rely on people who were :)

From my perspective, the spec should recommend the following:

1. Change the BayModel `coe` attribute to `bay_driver`, the value of which
will correspond to the name of the directory where the COE code will
reside, i.e. drivers/{driver_name}

2. Introduce a base Driver class that each COE driver extends. This would
reside in the drivers dir too. This base driver will specify the interface
for interacting with a Bay. The following operations would need to be
defined by each COE driver: Get, Create, List, List detailed, Update,
Delete. Each COE driver would implement each operation differently
depending on their needs, but would satisfy the base interface. The base
class would also contain common logic to avoid code duplication. Any
operations that fall outside this interface would not exist in the COE
driver class, but rather an extension situated elsewhere. The JSON payloads
for requests would differ from COE to COE.

Cinder already uses this approach to great effect for volume drivers:

https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/lvm.py
https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py

Question: Is this base class a feasible idea for Magnum? If so, do we need
any other operations in the base class that I haven't mentioned?
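
As a strawman, such a base class might look like the sketch below. This is
a rough sketch only - the class and method names are my own invention,
merely mirroring the Cinder pattern linked above, and six is assumed to be
available as in other OpenStack code:

    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class BaseBayDriver(object):
        """Interface every COE driver (swarm, mesos, k8s) implements."""

        @abc.abstractmethod
        def create(self, context, bay):
            """Provision a new bay, e.g. by creating its Heat stack."""

        @abc.abstractmethod
        def get(self, context, bay_id):
            """Return one bay."""

        @abc.abstractmethod
        def list(self, context, detail=False):
            """Return all bays; detail=True covers 'list detailed'."""

        @abc.abstractmethod
        def update(self, context, bay, patch):
            """Apply changes (e.g. node count) to an existing bay."""

        @abc.abstractmethod
        def delete(self, context, bay_id):
            """Tear the bay down."""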

3. Each COE driver would have its own Heat template for creating a bay
node. It would also have a template definition that lists the JSON
parameters which are fed into the Heat template.

Question: From a very top-level POV, what logic or codebase changes would
Magnum need in order to consume Heat templates in the above way?

4. Removal of all old code that does not fit the above paradigm.

---

Any custom COE operations that are not common Bay operations (i.e. the six
listed in #2) would reside in a COE extension. This is outside of the scope
of the COE drivers spec and would require an entirely different spec that
utilizes a common paradigm for extensions in OpenStack. Such a spec would
also need to cover how the conductor would link off to each COE. Is this
summary correct?

Does Magnum already have a scale manager? If not, should this be introduced
as a separate BP/spec?

Is there anything else that a COE drivers spec needs to cover which I have
not mentioned?

Jamie



[openstack-dev] [magnum] High Availability

2016-03-19 Thread Daneyon Hansen (danehans)
All,

Does anyone have experience deploying Magnum in a highly-available fashion? If 
so, I'm interested in learning from your experience. My biggest unknown is the 
Conductor service. Any insight you can provide is greatly appreciated.

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Infra] Nailgun extensions testing

2016-03-19 Thread Sylwester Brzeczkowski
Hi everyone!

I'm looking for boilerplates/good practices regarding testing
extensions with core code.

Since we unlocked the Nailgun extensions system [0], and it is now
possible to install extensions from external sources, we also
want to provide a way to test your own extensions against
Nailgun and other extensions. Here is the spec for this activity [1].

The idea is to write a python (or shell) script (a rough sketch is below)
which will:
- clone all required repos (like fuel-web and the extension repos),
  probably using zuul-cloner
- check out the appropriate stable branches / cherry-pick some
  commit / stay on master
- run the tests
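
A minimal sketch of that skeleton, to make the idea concrete (the repo
URL, refs, and tox target are placeholders; on CI, zuul-cloner would
replace the plain git clone):

    import subprocess


    def run(cmd, cwd=None):
        print('+ ' + ' '.join(cmd))
        subprocess.check_call(cmd, cwd=cwd)


    def test_extension(extension_repo, nailgun_ref='master',
                       extension_ref='master'):
        # 1. clone all required repos (zuul-cloner would do this on CI)
        run(['git', 'clone',
             'https://git.openstack.org/openstack/fuel-web'])
        run(['git', 'clone', extension_repo, 'extension'])
        # 2. check out / cherry-pick the requested versions
        run(['git', 'checkout', nailgun_ref], cwd='fuel-web')
        run(['git', 'checkout', extension_ref], cwd='extension')
        # 3. run the extension's tests against this Nailgun checkout
        run(['tox', '-e', 'py27'], cwd='extension')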

This script will be used to:
- test an extension with different Nailgun versions (to check if it's
  compatible), locally and on the extension's jenkins gate jobs
- test an extension with different Nailgun versions and with other
  extensions enabled (depending on needs)
- test Nailgun with some core extensions, locally and on fuel-web
  jenkins gate jobs

The script will be placed in the fuel-web repo, as extensions will need
to have Nailgun in their requirements anyway.

There will be a new jenkins job which will consume the names of the
extensions to test and the branches/commits/versions which
the tests should be run against. The job will basically fetch the fuel-web
repo and run the script mentioned above.

What do you think about the idea? Is it a good approach?
Am I missing some already existing solutions for this problem?

Regards

[0]
https://blueprints.launchpad.net/fuel/+spec/stevedore-extensions-discovery
[1] https://review.openstack.org/#/c/281749/


-- 
*Sylwester Brzeczkowski*
Python Software Engineer
Product Development-Core : Product Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-19 Thread Sergii Golovatiuk
Guys,

Fuel has its own implementation of a pacemaker module [1]. Its
functionality may be useful in other projects.

[1] https://github.com/fuel-infra/puppet-pacemaker

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Sat, Feb 13, 2016 at 6:20 AM, Emilien Macchi 
wrote:

>
> On Feb 12, 2016 11:06 PM, "Spencer Krum"  wrote:
> >
> > The module would also be welcome under the voxpupuli[0] namespace on
> > github. We currently have a puppet-corosync[1] module, and there is some
> > overlap there, but a pure pacemaker module would be a welcome addition.
> >
> > I'm not sure which I would prefer, just that VP is an option. For
> > greater openstack integration, gerrit is the way to go. For greater
> > participation from the wider puppet community, github is the way to go.
> > Voxpupuli provides testing and releasing infrastructure.
>
> The thing is, we might want to gate it on tripleo since it's the first
> consumer right now. Though I agree VP would be a good place too, to attract
> more puppet users.
>
> Dilemma!
> Maybe we could start using VP, with good testing and see how it works.
>
> Iterate later if needed. Thoughts?
>
> >
> > [0] https://voxpupuli.org/
> > [1] https://github.com/voxpupuli/puppet-corosync
> >
> > --
> >   Spencer Krum
> >   n...@spencerkrum.com
> >
> > On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
> > > Please look and vote:
> > > https://review.openstack.org/279698
> > >
> > >
> > > Thanks for your feedback!
> > >
> > > On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
> > > > I like the idea of moving it to use the OpenStack infrastructure.
> > > >
> > > > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec wrote:
> > > >
> > > > On 02/09/2016 08:05 AM, Emilien Macchi wrote:
> > > > > Hi,
> > > > >
> > > > > TripleO is currently using puppet-pacemaker [1], which is a module
> > > > > hosted & managed on GitHub.
> > > > > The module was created and is mainly maintained by Red Hat. It
> > > > > tends to break TripleO quite often since we don't have any gate.
> > > > >
> > > > > I propose to move the module to OpenStack so we'll get the
> > > > > OpenStack Infra benefits (Gerrit, releases, gating, etc). Another
> > > > > idea would be to gate the module with TripleO HA jobs.
> > > > >
> > > > > The question is, under which umbrella should we put the module?
> > > > > Puppet? TripleO?
> > > > >
> > > > > Or no umbrella, like puppet-ceph. <-- I like this idea
> > > >
> > > >
> > > > I think the module not being under an umbrella makes sense.
> > > >
> > > >
> > > > >
> > > > > Any feedback is welcome,
> > > > >
> > > > > [1] https://github.com/redhat-openstack/puppet-pacemaker
> > > >
> > > > Seems like a module that would be useful outside of TripleO, so it
> > > > doesn't seem like it should live under that. Other than that I don't
> > > > have enough knowledge of the organization of the puppet modules to
> > > > comment.
> > > >
> > > >
> > > >
> > > >
>  __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> > > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Juan Antonio Osorio R.
> > > > e-mail: jaosor...@gmail.com 
> > > >
> > > >
> > > >
> > > >
> __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> > > --
> > > Emilien Macchi
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > Email had 1 attachment:
> > > + signature.asc
> > >   1k (application/pgp-signature)
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
