[openstack-dev] [nova] Supporting volume_type when booting from volume

2017-06-01 Thread 한승진
Hello, stackers

I am just curious about the outcome of the many discussions on the blueprint
below.

https://blueprints.launchpad.net/nova/+spec/support-volume-type-with-bdm-parameter

Can I ask what the conclusion is?
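For context, the blueprint proposes accepting a volume_type inside the block_device_mapping_v2 entry when booting a server from a volume. A hypothetical sketch of what such a request body might look like follows; the volume_type key is the proposed addition, the other fields follow the existing os-block-device-mapping-v2 format, and the exact shape was still under discussion at the time:

```python
# Hypothetical sketch of the proposed "volume_type" field in
# block_device_mapping_v2 when booting from volume. Illustrative only;
# this is not the final API agreed in the blueprint discussion.
import json

server_create_body = {
    "server": {
        "name": "bfv-example",
        "flavorRef": "1",
        "block_device_mapping_v2": [
            {
                "boot_index": 0,
                "uuid": "IMAGE_UUID",       # source image copied into the new volume
                "source_type": "image",
                "destination_type": "volume",
                "volume_size": 20,
                "volume_type": "fast-ssd",  # the proposed new field
                "delete_on_termination": True,
            }
        ],
    }
}

print(json.dumps(server_create_body, indent=2))
```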
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Arkady.Kanevsky
Option 3 sounds reasonable if the wiki is searchable.

-Original Message-
From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com] 
Sent: Thursday, June 01, 2017 8:44 PM
To: Alexandra Settle 
Cc: OpenStack Operators ; 
openstack-d...@lists.openstack.org; OpenStack Development Mailing List (not for 
usage questions) ; George Mihaiescu 

Subject: Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide 
future

Hi Alex,

Likewise for option 3. If I recall correctly, that was also the main
preference in the room at the summit session?

On 2 June 2017 at 11:15, George Mihaiescu  wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to 
> the OpenStack wiki. I’m not taking silence as compliance. I would 
> really like to hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the 
> Administration Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then
> this will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>
>
>
> Personally, I think that option 3 is more realistic. The idea for the 
> last option is that operators are maintaining operator-specific 
> documentation and updating it as they go along and we’re not losing 
> anything by combining or deleting. I don’t want to lose what we have 
> by going with option 1, and I think option 2 is just a workaround 
> without fixing the problem – we are not getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators 
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] 
> [openstack-doc] [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot 
> point of my email:
>
>
>
> “One of the key takeaways from the summit was the session that I jointly 
> moderated with Melvin Hillsman regarding the Operations and 
> Administration Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was 
> really helpful – we were able to discuss with the operators present 
> the current situation of the documentation team, and how they could 
> help us maintain the two guides, aimed at the same audience. The 
> operators present at the session agreed that the Administration Guide 
> was important, and could be maintained upstream. However, they voted 
> and agreed that the best course of action for the Operations Guide was 
> for it to be pulled down and put into a wiki that the operators could 
> manage themselves. We will be looking at actioning this item as soon as 
> possible.”
>
>
>
> I would like to go ahead with this, but I would appreciate feedback 
> from operators who were not able to attend the summit. In the etherpad 
> you will see the three options that the operators in the room 
> recommended as being viable, and the voted option being moving the 
> Operations Guide out of docs.openstack.org into a wiki. The aim of 
> this was to empower the operations community to take more control of 
> the updates in an environment they are more familiar with (and available to 
> others).
>
>
>
> What does everyone think of the proposed options? Questions? Other thoughts?
>
>
>
> Alex
>
>
>
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] 
> [dev] What's up doc? Summit recap edition
>
>
>
>
>
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit 
> recap edition
> To: "openstack-d...@lists.openstack.org"
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> Hi everyone,
>
>
> The OpenStack manuals project had a really productive week at the 
> OpenStack summit in Boston. You can find a list of all the etherpads 
> and attendees
> here: https://etherpad.openstack.org/p/docs-summit
>
>
>
> As we all know, we are rapidly losing key contributors and core reviewers.
> We are not alone, this is happening across the board. It is making 
> things harder, but not 

Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Matthew Treinish
On Thu, Jun 01, 2017 at 11:09:56AM +0100, Chris Dent wrote:
> A lot of this results, in part, from there being no single guiding
> pattern and principle for how (and where) the tests are to be
> managed. 

It sounds like you want to write a general testing guide for openstack.
Have you started this effort anywhere? I don't think anyone would be opposed
to starting a document for that; it seems like a reasonable thing to have.
But I think you'll find there is no one-size-fits-all solution,
because every project has its own requirements and needs for testing. 

> When there's a choice between one, some and all, "some" is
> almost always the wrong way to manage something. "some" is how we do
> tempest (and fair few other OpenStack things).
> 
> If it is the case that we want some projects to not put their tests
> in the main tempest repo then the only conceivable pattern from a
> memorability, discoverability, and equality standpoint is actually
> for all the tests to be in plugins.
> 
> If that isn't possible (and it is clear there are many reasons why
> that may be the case) then we need to be extra sure that we explore
> and uncover the issues that the "some" approach presents and provide
> sufficient documentation, tooling, and guidance to help people get
> around them. And that we recognize and acknowledge the impact it has.

So have you read the documentation:

https://docs.openstack.org/developer/tempest/ (or any of the other relevant
documentation)

and filed bugs about where you think there are gaps? This is something that
really bugs me sometimes (yes, the pun is intended). Just like anything else,
this is all about iterative improvements. These broad trends are things tempest
(and hopefully every project) have been working on. But improvements don't
just magically occur overnight; it takes time to implement them.

Just compare the state of the documentation and tooling from 2 years ago (when
tempest started adding the plugin interface) to today. Things have steadily
improved over time and the situation now is much better. This will continue and
in the future things will get even better.

The thing is, this is open source collaborative development and there is an
expectation that people who have issues with something in the project will
report them or contribute a fix and communicate with the maintainers. The users
of tempest's plugin interface tend to be other openstack projects (but not
exclusively), and if there is something that's not clear we need to work
together to fix it.

Based on this paragraph I feel like you think the decision to add a tempest
plugin interface and decrease its scope was taken lightly without forethought
or careful consideration. But it's the exact opposite: there was extensive
debate and exploration of the problem space, and it took a long time to reach
a consensus.

> 
> If the answer to that is "who is going to do that?" or "who has the
> time?" then I ask you to ask yourself why we think the "non-core"
> projects have time to fiddle about with tempest plugins?

I think this is an unfair simplification; no one is required to write a tempest
plugin, it's a choice the projects made. While I won't say the interface
is perfect, things are always improving. If a project chooses to write
a plugin, the expectation is that we'll all work together to help fix
issues as they are encountered. No individual can do everything by themselves;
it's a shared group effort. But even so, there is no shortage of work for
anyone; it's all about prioritization of effort.

> 
> And finally, I actually don't have too strong of a position in the
> case of tempest and tempest plugins. What I take issue with is the
> process whereby we discuss and decide these things and characterize
> the various projects
> 
> If I have any position on tempest at all it is that we should limit
> it to gross cloud validation and maybe interop testing, and projects
> should manage their own integration testing in tree using whatever
> tooling they feel is most appropriate. If that turns out to be
> tempest, cool.

I fail to see how this is any different than how things work today. No one is
required to use a tempest plugin and they can write tests however they want.
Tempest itself has a well defined scope (which does evolve over time like any
other project) and doesn't try to be all the testing everywhere. Almost every
other project has its own in-tree testing outside of tempest or tempest
plugins. Also, projects which have in-tree tempest tests also have tempest
plugins to expand on that set of functionality.
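For readers unfamiliar with the plugin interface being discussed: a tempest plugin is registered under the "tempest.test_plugins" entry-point namespace and implements a small abstract interface (load_tests, register_opts, get_opt_lists), per tempest's plugin documentation. A minimal sketch follows; the TempestPlugin base class here is a stand-in for tempest.test_discover.plugins.TempestPlugin so the example runs without tempest installed, and the plugin/package names are made up:

```python
# Sketch of the tempest plugin interface. TempestPlugin below is a
# stand-in base class so this runs standalone; a real plugin subclasses
# tempest.test_discover.plugins.TempestPlugin and is registered in
# setup.cfg under the "tempest.test_plugins" entry-point namespace.
import abc
import os


class TempestPlugin(abc.ABC):  # stand-in for the real tempest base class
    @abc.abstractmethod
    def load_tests(self):
        """Return (full test dir, base path) for test discovery."""

    @abc.abstractmethod
    def register_opts(self, conf):
        """Register the plugin's config options on the tempest config."""

    @abc.abstractmethod
    def get_opt_lists(self):
        """Return a list of (group name, [options]) pairs."""


class MyServiceTempestPlugin(TempestPlugin):
    def load_tests(self):
        # A real plugin derives base_path from its package __file__.
        base_path = os.getcwd()
        full_test_dir = os.path.join(
            base_path, "my_service_tempest_tests", "tests")
        return full_test_dir, base_path

    def register_opts(self, conf):
        pass  # a real plugin adds its service/feature flags here

    def get_opt_lists(self):
        return []


plugin = MyServiceTempestPlugin()
test_dir, base_path = plugin.load_tests()
print(test_dir)
```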

-Matt Treinish




Re: [openstack-dev] [Openstack-operators] [Keystone][PublicCloud] Introducing Adjutant, an OpenStack service for signups, user invites, password reset and more!

2017-06-01 Thread Andy Botting
We caught up with some of the Catalyst guys at the Melbourne OpenStack
Australia day and they gave us a demo of this. It looks like a really nice
project that I think might replace some of our existing user management
workflows.

Allowing project managers to invite collaborators and self-manage their
permissions and to create nested projects (without requiring admin
intervention) would work well for us on the Nectar cloud.

Thanks for the contribution Adrian. Hoping we can find some time soon to
test this out and contribute.

cheers,
Andy

On 29 May 2017 at 17:01, Adrian Turjak  wrote:

> Hello OpenStack Community,
>
> I'd like to introduce to you all a service we have developed at Catalyst
> and are now ready to release to the OpenStack community in hopes that
> others may find it useful. As a public cloud provider we quickly ran into a
> bunch of little issues around user management, sign-ups, and other pieces
> of business logic that needed to fit into how we administer the cloud but
> didn't entirely make sense as additions to existing services. There were
> also a lot of actions we wanted to delegate to our customers but couldn't
> do without giving them too much power in Keystone, or wanted those actions
> to send emails, or extend to external non-OpenStack services.
>
> Enter Adjutant. Adjutant (previously called StackTask) was built as a
> service to allow us to create business workflows that can be exposed in
> some fashion over an API. A way for us to build reusable snippets of code
> that we can tie together, and a flexible and pluggable API layer we can
> expose those on. We needed these to be able to talk to our external
> systems, as well as our OpenStack services, and provide us some basic steps
> and in some cases the ability to require approval before an action
> completes. In many ways Adjutant also works as a layer around Keystone for
> us to build business logic around certain things we'd like our customers to
> be able to do in very limited ways.
>
> The service itself is built on Django with Django-Rest-Framework and is an
> API service with the gui component built as a ui plugin for Horizon that
> allows easy integration into an OpenStack dashboard.
>
> Adjutant, as the name implies, is a helper, not a major service, but one
> that smooths some situations and an easy place to offload some admin tasks
> that a customer or non-admin should be able to trigger in a more limited
> way. Not only that, but it stores the history of all these tasks, who asked
> for them, and when they were completed. Anything a user does through
> Adjutant is stored and able to be audited, with, in future, the ability for
> project admins to audit their own tasks and see which of their
> users did something.
>
> Out of the box it provides the following functionality:
>
>- User invitation by users with the 'project_admin' or 'project_mod'
>role.
>   - This will send out an email to the person you've invited with a
>   submission token and let them setup their password and then grants them
>   roles on your project. If their user exists already, will only require
>   confirmation and then grant roles.
>- As a 'project_admin' or 'project_mod' you can list the users with
>roles on your project and edit their roles or revoke them from your 
> project.
>- Let non-admin users request a password reset.
>   - User will be emailed a token which will let them reset their
>   password.
>- Basic signup
>   - Let a user request a new project. Requires admin approval and
>   will create a new project and user, granting default roles on the new
>   project. Will reuse existing user if present, or send an email to the 
> user
>   to setup their password.
>- Let a user update their email address.
>   - Will notify old email, and send a confirmation token to the new.
>
> Features coming in the future (most either almost done, or in prototype
> stages):
>
> - Forced password reset
>   - Users with 'project_admin' or 'project_mod' can force a password
>     reset for a given user in their projects.
>   - Cloud admins can force password resets for users on their cloud.
>   - Changes the user's password to a randomly generated value and sends
>     the user a password reset token to their email.
>   - The user must reset before they can log in again.
> - Quota management for your project
>   - As a 'project_admin' or 'project_mod' you can request a change in
>     quota to a set of predefined sizes (as set in the Adjutant conf).
>     Sizes allow you to increase multiple related quotas at the same time.
>     You can move to adjacent sizes without approval a number of times in
>     a configurable window (days), or an admin can approve your quota
>     change as well.
> - Hierarchical Multi-Tenancy in a single domain environment
>   - 'project_admin' to be able to create sub-projects off the current
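The invitation and password-reset flows described above follow a common two-step token pattern: step one records a pending task and emails a one-time token; step two consumes the token to complete the action. A minimal conceptual sketch of that pattern (this is an illustration only, not Adjutant's actual API; all names here are made up):

```python
# Conceptual sketch of the two-step token workflow described above
# (invite / password reset). Not Adjutant's actual API.
import secrets

pending_tasks = {}  # token -> task data


def start_invite(project, email, roles):
    """Step 1: record a pending task and issue a one-time token."""
    token = secrets.token_urlsafe(16)
    pending_tasks[token] = {"project": project, "email": email,
                            "roles": roles, "completed": False}
    # In the real flow the token would be emailed to the invitee.
    return token


def submit_token(token, password):
    """Step 2: the invitee submits the token to finish the workflow."""
    task = pending_tasks.get(token)
    if task is None or task["completed"]:
        raise ValueError("invalid or already-used token")
    # A real service would now create the user with the given password
    # and grant the roles via Keystone; here we just mark completion.
    task["completed"] = True
    return task


tok = start_invite("demo-project", "new.user@example.com", ["project_mod"])
done = submit_token(tok, "s3cret")
print(done["completed"])  # True: the token can be used exactly once
```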

[openstack-dev] [karbor] Karbor weekly irc meeting

2017-06-01 Thread Chen Ying
Hi guys,

  Karbor weekly meeting will be held at 1500 UTC on even weeks and
0900 UTC on odd weeks in #openstack-meeting.

  So the next karbor IRC meeting will be held at 2017-06-06 0900 UTC.


Please feel free to add your topics here.

https://wiki.openstack.org/wiki/Meetings/Karbor
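The even/odd alternation above can be checked against the week number; a small sketch, assuming the parity refers to the ISO calendar week:

```python
# Pick the Karbor meeting time for a given date, assuming the even/odd
# weeks in the announcement refer to ISO calendar week numbers.
import datetime


def karbor_meeting_time(d: datetime.date) -> str:
    week = d.isocalendar()[1]  # ISO week number
    return "1500 UTC" if week % 2 == 0 else "0900 UTC"


# 2017-06-06 falls in ISO week 23 (odd), matching the announced slot.
print(karbor_meeting_time(datetime.date(2017, 6, 6)))  # 0900 UTC
```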

Thanks very much.


Best Wishes.

   chenying


Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Blair Bethwaite
Hi Alex,

Likewise for option 3. If I recall correctly, that was also the main
preference in the room at the summit session?

On 2 June 2017 at 11:15, George Mihaiescu  wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
>
> Hi everyone,
>
>
>
> I haven’t had any feedback regarding moving the Operations Guide to the
> OpenStack wiki. I’m not taking silence as compliance. I would really like to
> hear people’s opinions on this matter.
>
>
>
> To recap:
>
>
>
> Option one: Kill the Operations Guide completely and move the Administration
> Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then this
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of
> operator-specific maintainability) and move the Administration Guide to
> project repos.
>
>
>
> Personally, I think that option 3 is more realistic. The idea for the last
> option is that operators are maintaining operator-specific documentation and
> updating it as they go along and we’re not losing anything by combining or
> deleting. I don’t want to lose what we have by going with option 1, and I
> think option 2 is just a workaround without fixing the problem – we are not
> getting contributions to the project.
>
>
>
> Thoughts?
>
>
>
> Alex
>
>
>
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc]
> [dev] What's up doc? Summit recap edition
>
>
>
> Hi everyone,
>
>
>
> Adding to this, I would like to draw your attention to the last dot point of
> my email:
>
>
>
> “One of the key takeaways from the summit was the session that I jointly
> moderated with Melvin Hillsman regarding the Operations and Administration
> Guides. You can find the etherpad with notes here:
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really
> helpful – we were able to discuss with the operators present the current
> situation of the documentation team, and how they could help us maintain the
> two guides, aimed at the same audience. The operators present at the
> session agreed that the Administration Guide was important, and could be
> maintained upstream. However, they voted and agreed that the best course of
> action for the Operations Guide was for it to be pulled down and put into a
> wiki that the operators could manage themselves. We will be looking at
> actioning this item as soon as possible.”
>
>
>
> I would like to go ahead with this, but I would appreciate feedback from
> operators who were not able to attend the summit. In the etherpad you will
> see the three options that the operators in the room recommended as being
> viable, and the voted option being moving the Operations Guide out of
> docs.openstack.org into a wiki. The aim of this was to empower the
> operations community to take more control of the updates in an environment
> they are more familiar with (and available to others).
>
>
>
> What does everyone think of the proposed options? Questions? Other thoughts?
>
>
>
> Alex
>
>
>
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev]
> What's up doc? Summit recap edition
>
>
>
>
>
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap
> edition
> To: "openstack-d...@lists.openstack.org"
> 
> Cc: "OpenStack Development Mailing List (not for usage questions)"
> 
>
>
> Hi everyone,
>
>
> The OpenStack manuals project had a really productive week at the OpenStack
> summit in Boston. You can find a list of all the etherpads and attendees
> here: https://etherpad.openstack.org/p/docs-summit
>
>
>
> As we all know, we are rapidly losing key contributors and core reviewers.
> We are not alone, this is happening across the board. It is making things
> harder, but not impossible. Since our inception in 2010, we’ve been climbing
> higher and higher trying to achieve the best documentation we could, and
> uphold our high standards. This is something to be incredibly proud of.
> However, we now need to take a step back and realise that the amount of work
> we are attempting to maintain is now out of reach for the team size that we
> have. At the moment we have 13 cores, of which none are full time
> contributors or reviewers. This includes myself.
>
>
>
> That being said! I have spent the last week at the summit talking to some of
> our leaders, including Doug 

Re: [openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread George Mihaiescu
+1 for option 3



> On Jun 1, 2017, at 11:06, Alexandra Settle  wrote:
> 
> Hi everyone,
>  
> I haven’t had any feedback regarding moving the Operations Guide to the 
> OpenStack wiki. I’m not taking silence as compliance. I would really like to 
> hear people’s opinions on this matter.
>  
> To recap:
>  
> Option one: Kill the Operations Guide completely and move the Administration 
> Guide to project repos.
> Option two: Combine the Operations and Administration Guides (and then this 
> will be moved into the project-specific repos)
> Option three: Move Operations Guide to OpenStack wiki (for ease of 
> operator-specific maintainability) and move the Administration Guide to 
> project repos.
>  
> Personally, I think that option 3 is more realistic. The idea for the last 
> option is that operators are maintaining operator-specific documentation and 
> updating it as they go along and we’re not losing anything by combining or 
> deleting. I don’t want to lose what we have by going with option 1, and I 
> think option 2 is just a workaround without fixing the problem – we are not 
> getting contributions to the project.
>  
> Thoughts?
>  
> Alex
>  
> From: Alexandra Settle 
> Date: Friday, May 19, 2017 at 1:38 PM
> To: Melvin Hillsman , OpenStack Operators 
> 
> Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
> What's up doc? Summit recap edition
>  
> Hi everyone,
>  
> Adding to this, I would like to draw your attention to the last dot point of 
> my email:
>  
> “One of the key takeaways from the summit was the session that I jointly 
> moderated with Melvin Hillsman regarding the Operations and Administration 
> Guides. You can find the etherpad with notes here: 
> https://etherpad.openstack.org/p/admin-ops-guides  The session was really 
> helpful – we were able to discuss with the operators present the current 
> situation of the documentation team, and how they could help us maintain the 
> two guides, aimed at the same audience. The operators present at the session 
> agreed that the Administration Guide was important, and could be maintained 
> upstream. However, they voted and agreed that the best course of action for 
> the Operations Guide was for it to be pulled down and put into a wiki that 
> the operators could manage themselves. We will be looking at actioning this 
> item as soon as possible.”
>  
> I would like to go ahead with this, but I would appreciate feedback from 
> operators who were not able to attend the summit. In the etherpad you will 
> see the three options that the operators in the room recommended as being 
> viable, and the voted option being moving the Operations Guide out of 
> docs.openstack.org into a wiki. The aim of this was to empower the operations 
> community to take more control of the updates in an environment they are more 
> familiar with (and available to others).
>  
> What does everyone think of the proposed options? Questions? Other thoughts?
>  
> Alex
>  
> From: Melvin Hillsman 
> Date: Friday, May 19, 2017 at 1:30 PM
> To: OpenStack Operators 
> Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
> What's up doc? Summit recap edition
>  
>  
> -- Forwarded message --
> From: Alexandra Settle 
> Date: Fri, May 19, 2017 at 6:12 AM
> Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap 
> edition
> To: "openstack-d...@lists.openstack.org" 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> 
> 
> Hi everyone,
> 
> The OpenStack manuals project had a really productive week at the OpenStack 
> summit in Boston. You can find a list of all the etherpads and attendees 
> here: https://etherpad.openstack.org/p/docs-summit
>  
> As we all know, we are rapidly losing key contributors and core reviewers. We 
> are not alone, this is happening across the board. It is making things 
> harder, but not impossible. Since our inception in 2010, we’ve been climbing  
> higher and higher trying to achieve the best documentation we could, and 
> uphold our high standards. This is something to be incredibly proud of. 
> However, we now need to take a step back and realise that the amount of work 
> we are attempting to maintain is now out of reach for the team size that we 
> have. At the moment we have 13 cores, of which none are full time 
> contributors or reviewers. This includes myself.
>  
> That being said! I have spent the last week at the summit talking to some of 
> our leaders, including Doug Hellmann (cc’d), Jonathan Bryce and Mike Perez 
> regarding the future of the project. Between myself and other community 
> members, we have been drafting plans and coming up with a new 

Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Matthew Treinish
On Thu, Jun 01, 2017 at 11:57:00AM -0400, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> > Graham Hayes wrote:
> > > On 01/06/17 01:30, Matthew Treinish wrote:
> > >> TBH, it's a bit premature to have the discussion. These additional 
> > >> programs do
> > >> not exist yet, and there is a governance road block around this. Right 
> > >> now the
> > >> set of projects that can be used defcore/interopWG is limited to the set 
> > >> of 
> > >> projects in:
> > >>
> > >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > > 
> > > Sure - but that is a solved problem, when the interop committee is
> > > ready to propose them, they can add projects into that tag. Or am I
> > > misunderstanding [1] (again)?
> > 
> > I think you understand it well. The Board/InteropWG should propose
> > additions/removals of this tag, which will then be approved by the TC:
> > 
> > https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> > 
> > > [...]
> > >> We had a forum session on it (I can't find the etherpad for the session) 
> > >> which
> > >> was pretty speculative because it was about planning the new programs. 
> > >> Part of
> > >> that discussion was around the feasibility of using tests in plugins and 
> > >> whether
> > >> that would be desirable. Personally, I was in favor of doing that for 
> > >> some of
> > >> the proposed programs because of the way they were organized it was a 
> > >> good fit.
> > >> This is because the proposed new programs were extra additions on top of 
> > >> the
> > >> base existing interop program. But it was hardly a definitive discussion.
> > > 
> > > Which will create 2 classes of testing for interop programs.
> > 
> > FWIW I would rather have a single way of doing "tests used in trademark
> > programs" without differentiating between old and new trademark programs.
> > 
> > I fear that we are discussing solutions before defining the problem. We
> > want:
> > 
> > 1- Decentralize test maintenance, through more tempest plugins, to
> > account for limited QA resources
> > 2- Additional codereview constraints and approval rules for tests that
> > happen to be used in trademark programs
> > 3- Discoverability/ease-of-install of the set of tests that happen to be
> > used in trademark programs
> > 4- A git repo layout that can be simply explained, for new teams to
> > understand
> > 
> > It feels like the current git repo layout (result of that 2016-05-04
> > resolution) optimizes for 2 and 3, which kind of works until you add
> > more trademark programs, at which point it breaks 1 and 4.
> > 
> > I feel like you could get 2 and 3 without necessarily using git repo
> > boundaries (using Gerrit approval rules and some tooling to install/run
> > subset of tests across multiple git repos), which would allow you to
> > optimize git repo layout to get 1 and 4...
> > 
> > Or am I missing something ?
> > 
> 
> Right. The point of having the trademark tests "in tempest" was not
> to have them "in the tempest repo", that was just an implementation
> detail of the policy of "put them in a repository managed by people
> who understand the expanded review rules".

There was more to it than this; a big part was duplication of effort as well.
Tempest itself is almost a perfect fit for the scope of the testing defcore is
doing. While tempest does additional testing that defcore doesn't use, a large
subset is exactly what they want.

> 
> There were a lot of unexpected issues when we started treating the
> test suite as a production tool for validating a cloud.  We have
> to be careful about how we change the behavior of tests, for example,
> even if the API responses are expected to be the same.  It's not
> fair to vendors or operators who get trademark approval with one
> release to have significant changes in behavior in the exact same
> tests for the next release.

I actually find this to be kinda misleading. Tempest has always had
running on any cloud as part of its mission. I think you're referring
to the monster defcore thread from last summer about proprietary nova extensions
adding on to API responses. This is honestly a completely separate problem
which is not something I want to dive into again, because that was a much more
nuanced problem that involved much more than just code review.

> 
> At the early stage, when the DefCore team was still figuring out
> these issues, it made sense to put all of the tests in one place
> with a review team that was actively participating in establishing
> the process. If we better understand the "rules" for these tests
> now, we can document them and distribute the work of maintaining the
> test suites.

I think you're overestimating how much work is actually being done
bidirectionally here. The interaction with defcore is more straight consumption
than you might think. They tend to just pick and choose from what tempest has
and don't 

Re: [openstack-dev] [cyborg]Nominate Rushil Chugh and Justin Kilpatrick as new core reviewers

2017-06-01 Thread Zhipeng Huang
Hi Team,

Given that there have been no objections, I would like to congratulate and
welcome Justin and Rushil to our core team :)

On Thu, May 25, 2017 at 8:27 PM, Harm Sluiman 
wrote:

> +1
> I haven't had time to actively participate these past months but have
> monitored and agree :)
>
> Thanks for your time
> Harm Sluiman
> harm.slui...@gmail.com
>
>
> On May 25, 2017, at 10:24 AM, Zhipeng Huang  wrote:
>
> Hi Team,
>
> This is an email for nomination of rushil and justin to the core team.
> They have been very active in our development and the specs they helped
> draft have been merged after several rounds of review. The statistics could
> be found at http://stackalytics.com/?project_type=all=
> cyborg=person-day .
>
> Since we are not an official project and I'm the only core reviewer at the
> moment, I think we should have a simple procedure for the first additional
> core reviewers to be added. Therefore if there are no outstanding
> oppositions by the end of the day next Wed, I will assume there is a
> consensus and add these guys to the core team to help accelerate our
> development.
>
> Please voice your support or concerns if there are any within the next 7
> days :)
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-06-01 Thread Lance Bragstad
On Thu, Jun 1, 2017 at 3:46 PM, Andrey Grebennikov <
agrebenni...@mirantis.com> wrote:

> We had a very similar conversation multiple times with Keystone cores
> (multi-site Keystone).
>
Geo-rep Galera was suggested first and it was immediately declined (one of
> the reasons was the case of complete corruption of Keystone DB everywhere
> in case of accidental table corrupt in one site) by me as well as current
> customer.
> Right after that I was told many times that federation is the only right
> way to go nowadays.
>

After doing some digging, I found the original specification [0] and the
meeting agenda [1] where we talked about the alternative.

If I recall correctly, the proposal (being able to specify project IDs at
creation time) was driven by not wanting to replicate all of keystone's
backends in multi-region deployments, but still wanting to validate tokens
across regions. Today, if you have a region in
Seattle and region in Sydney, a token obtained from a keystone in Seattle
and validated in Sydney would require both regions to share identity,
resource, and assignment backends (among others depending on what kind of
token it is). The request in the specification would allow only the
identity and role backends to be replicated but the project backend in each
region wouldn't need to be synced or replicated. Instead, operators could
create projects with matching IDs in each region in order for tokens
generated from one to be validated in the other. Most folks involved in the
meeting considered this behavior for project IDs to be a slippery-slope.

Federation was brought up because sharing identity information globally,
but not project or role information globally sounded like federation (e.g.
having all your user information in an IdP somewhere and setting up each
region's keystone to federate to the IdP). The group seemed eager to expose
gaps in the federation implementation that prevented that case and address
those.

Hopefully that helps capture some of the context (feel free to fill in gaps
if I missed any).


[0] https://review.openstack.org/#/c/323499/
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-05-31.log.html#t2016-05-31T18:05:05


>
> Is this statement still valid?
>
> On Thu, Jun 1, 2017 at 12:51 PM, Jay Pipes  wrote:
>
>> On 05/31/2017 11:06 PM, Mike Bayer wrote:
>>
>>> I'd also throw in, there's lots of versions of Galera with different
>>> bugfixes / improvements as we go along, not to mention configuration
>>> settings if Jay observes it working great on a distributed cluster and
>>> Clint observes it working terribly, it could be that these were not the
>>> same Galera versions being used.
>>>
>>
>> Agreed. The version of Galera we were using IIRC was Percona XtraDB
>> Cluster 5.6. And, remember that the wsrep_provider_options do make a big
>> difference, especially in WAN-replicated setups.
>>
>> We also increased the tolerance settings for network disruption so that
>> the cluster operated without hiccups over the WAN. I think the
>> wsrep_provider_options setting was evs.inactive_timeout=PT30S,
>> evs.suspect_timeout=PT15S, and evs.join_retrans_period=PT1S.
>>
>> Also, regardless of settings, if your network sucks, none of these
>> distributed databases are going to be fun to operate :)
>>
>> At AT&T, we jumped through a lot of hoops to ensure multiple levels of
>> redundancy and high performance for the network links inside and between
>> datacenters. It really makes a huge difference when your network rocks.
>>
>>
>> Best,
>> -jay
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Sean McGinnis
> 
> And yes, I agree with the argument that we should be fair and treat
> all projects the same way. If we're going to move tests out of the
> tempest repository, we should move all of them. The QA team can
> still help maintain the test suites for whatever projects they want,
> even if those tests are in plugins.
> 
> Doug
> 

+1



Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-06-01 Thread Andrey Grebennikov
We had a very similar conversation multiple times with Keystone cores
(multi-site Keystone).
Geo-rep Galera was suggested first and it was immediately declined (one of
the reasons was the case of complete corruption of Keystone DB everywhere
in case of accidental table corrupt in one site) by me as well as current
customer.
Right after that I was told many times that federation is the only right
way to go nowadays.

Is this statement still valid?

On Thu, Jun 1, 2017 at 12:51 PM, Jay Pipes  wrote:

> On 05/31/2017 11:06 PM, Mike Bayer wrote:
>
>> I'd also throw in, there's lots of versions of Galera with different
>> bugfixes / improvements as we go along, not to mention configuration
>> settings if Jay observes it working great on a distributed cluster and
>> Clint observes it working terribly, it could be that these were not the
>> same Galera versions being used.
>>
>
> Agreed. The version of Galera we were using IIRC was Percona XtraDB
> Cluster 5.6. And, remember that the wsrep_provider_options do make a big
> difference, especially in WAN-replicated setups.
>
> We also increased the tolerance settings for network disruption so that
> the cluster operated without hiccups over the WAN. I think the
> wsrep_provider_options setting was evs.inactive_timeout=PT30S,
> evs.suspect_timeout=PT15S, and evs.join_retrans_period=PT1S.
>
> Also, regardless of settings, if your network sucks, none of these
> distributed databases are going to be fun to operate :)
>
> At AT&T, we jumped through a lot of hoops to ensure multiple levels of
> redundancy and high performance for the network links inside and between
> datacenters. It really makes a huge difference when your network rocks.
>
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Andrey Grebennikov
Principal Deployment Engineer
Mirantis Inc, Austin TX


Re: [openstack-dev] [neutron] tempest failures when deploying neutron-server in wsgi with apache

2017-06-01 Thread Morales, Victor
Hi Emilien, 

I noticed that the configuration file was created using Puppet. I submitted a
patch [1] that aimed to include the changes in Devstack. My major concern is
with the value of WSGIScriptAlias, which should be pointing to the WSGI
script.
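For comparison, a minimal sketch of the kind of vhost being discussed. The port, paths, and process counts below are assumptions for illustration only; the point is that WSGIScriptAlias must reference the WSGI script file itself, not a directory:

```apache
# Illustrative neutron-server vhost (values are assumptions, not a tested config)
Listen 9696
<VirtualHost *:9696>
    WSGIDaemonProcess neutron processes=2 threads=4 user=neutron
    WSGIProcessGroup neutron
    # WSGIScriptAlias must point at the WSGI script, not a directory
    WSGIScriptAlias / /var/www/cgi-bin/neutron/neutron-api
    WSGIApplicationGroup %{GLOBAL}
    ErrorLog /var/log/apache2/neutron_wsgi_error.log
    CustomLog /var/log/apache2/neutron_wsgi_access.log combined
</VirtualHost>
```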

Regards/Saludos
Victor Morales

[1] https://review.openstack.org/#/c/439191

On 5/31/17, 4:40 AM, "Emilien Macchi"  wrote:

Hey folks,

I've been playing with deploying Neutron in WSGI with Apache and
Tempest tests fail on spawning Nova server when creating Neutron
ports:

http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/console.html#_2017-05-30_13_09_22_715400

I haven't found anything useful in neutron-server logs:

http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache/neutron_wsgi_access_ssl.txt.gz

Before I file a bug in neutron, can anyone look at the logs with me
and see if I missed something in the config:

http://logs.openstack.org/89/459489/4/check/gate-puppet-openstack-integration-4-scenario001-tempest-centos-7/f2ee8bf/logs/apache_config/10-neutron_wsgi.conf.txt.gz

Thanks for the help,
-- 
Emilien Macchi





Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread John Dickinson


On 1 Jun 2017, at 7:38, Thierry Carrez wrote:

> Thierry Carrez wrote:
>> In a previous thread[1] I introduced the idea of moving the PTG from a
>> purely horizontal/vertical week split to a more
>> inter-project/intra-project activities split, and the initial comments
>> were positive.
>>
>> We need to solidify how the week will look like before we open up
>> registration (first week of June), so that people can plan their
>> attendance accordingly. Based on the currently-signed-up teams and
>> projected room availability, I built a strawman proposal of how that
>> could look:
>>
>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true
>
> OK, it looks like the feedback on this strawman proposal was generally
> positive, so we'll move on with this.
>
> For teams that are placed on the Wednesday-Friday segment, please let us
> know whether you'd like to make use of the room on Friday (pick between
> 2 days or 3 days). Note that it's not a problem if you do (we have space
> booked all through Friday) and this can avoid people leaving too early
> on Thursday afternoon. We just need to know how many rooms we might be
> able to free up early.
>
> In the same vein, if your team (or workgroup, or inter-project goal) is
> not yet listed and you'd like to have a room in Denver, let us know ASAP.
>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Swift would like to go through Friday.

--John







Re: [openstack-dev] Security bug in diskimage-builder

2017-06-01 Thread Ben Nemec



On 06/01/2017 11:31 AM, Jeremy Stanley wrote:

On 2017-06-01 10:40:34 -0500 (-0500), Ben Nemec wrote:
[...]

Okay, so we're all set up, but now it appears we're all subscribed to every
tripleo bug as well.  I think oslo-coresec used to be the same way, but at
some point it changed so I only get explicitly notified of security bugs.
Does anyone know how to set up tripleo-coresec that way too?  I've poked
around the launchpad settings but I haven't found anything that looks
promising.

[...]

Go to the project's bugs page in LP and follow the "Subscribe to bug
mail" link in the right sidebar. Select the bug mail recipient as
"One of the teams you administer" and then choose the relevant team
in the drop-down there. Set the receive mail option to "are added or
changed in any way" and then you get some extra checkboxes and
submenus. Expand the "Information types" tree and you can uncheck
"Public" to stop getting notifications about normal (non-security,
non-private) bug reports.


Excellent, thank you.



Re: [openstack-dev] [horizon] Blueprint process question

2017-06-01 Thread Waines, Greg
Hey David,

Yeah I agree.
I responded to Rob with the same thinking.

I don’t mind taking a first pass at the Horizon blueprint / work to provide
an extensible space in the page header for plugins to post content.

And I will then move the content of this blueprint, as you suggest,
to the Vitrage-Dashboard repo ... which will leverage the above new
Horizon Page Header Plugin blueprint.

Greg.

From: David Lyle 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, June 1, 2017 at 12:50 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [horizon] Blueprint process question

There have been a couple of projects that would like some space in the
page header. I think the work in Horizon is to provide an extensible
space in the page header for plugins to post content. The UI plugin
for Vitrage, in this case, would then be responsible for populating
that content if desired. This specific blueprint should really be
targeted at the Vitrage UI plugin and a separate blueprint should be
added to Horizon to create the extension point in the page header.

David

On Wed, May 31, 2017 at 11:06 AM, Waines, Greg
> wrote:
Hey Rob,



Just thought I’d check in on whether Horizon team has had a chance to review
the following blueprint:

https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar



The blueprint in Vitrage which the above Horizon blueprint depends on has
been approved by Vitrage team.

i.e.   https://blueprints.launchpad.net/vitrage/+spec/alarm-counts-api



let me know if you’d like to setup a meeting to discuss,

Greg.



From: Rob Cresswell 
>
Reply-To: 
"openstack-dev@lists.openstack.org"
>
Date: Thursday, May 18, 2017 at 11:40 AM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: Re: [openstack-dev] [horizon] Blueprint process question



There isn't a specific time for blueprint review at the moment. It's usually
whenever I get time, or someone asks via email or IRC. During the weekly
meetings we always have time for open discussion of bugs/blueprints/patches
etc.



Rob



On 18 May 2017 at 16:31, Waines, Greg 
> wrote:

A blueprint question for horizon team.



I registered a new blueprint the other day.

https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar



Do I need to do anything else to get this reviewed?  I don’t think so, but
wanted to double check.

How frequently do horizon blueprints get reviewed?  once a week?



Greg.





p.s. ... the above blueprint does depend on a Vitrage blueprint which I do
have in review.






[openstack-dev] [all][api] POST /api-wg/news

2017-06-01 Thread Ed Leafe
Greetings OpenStack community,

A very lively meeting today, with all the participants in a seemingly jovial 
mood. Must be something in the water. Or maybe June just brings out 
lightheartedness.

Much of the discussion centered on Monty Taylor's chain of patches [4] 
regarding version discovery and the service catalog. No controversy about them, 
but some grumbling over the sheer volume of content. This isn't a complaint; 
rather, I think we are in awe of the size of the brain dump as Monty shares his 
accumulated knowledge. We can use some help reviewing these, so that they are 
as understandable as such esoteric material can possibly be.

We also discussed the proposed change to the microversions guideline about how 
to signal an upcoming raising of the minimum version [5]. There was agreement 
that having such a signal was a good thing, but there is still some confusion 
about how to communicate when this change will happen. The idea is that it 
isn't enough to say that it's going to be raised; it's also important to 
communicate how long a user has to update their code to handle this raising. We 
came up with adding a field that states the earliest possible date of the 
change, with the understanding that it could happen later than that. This was 
to help users get an idea of the urgency of the change. So while we can agree 
on that, as usual it is the naming of the field that is problematic. The 
current choice is 'not_raise_min_before', but we're open to improvements. Let 
the bikeshedding begin!
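To make the shape of the proposal concrete, here is a sketch of a version document carrying the discussed field and a client-side check against it. The field names echo the discussion above, but the exact document layout is an assumption for illustration, not the final guideline:

```python
import datetime
import json

# Hypothetical version document, assuming the guideline under review [5]
# adds 'next_min_version' plus an earliest-change date field.
version_doc = json.loads("""
{
    "id": "v2.1",
    "status": "CURRENT",
    "min_version": "2.1",
    "version": "2.53",
    "next_min_version": "2.10",
    "not_raise_min_before": "2018-01-01"
}
""")

def min_version_warning(doc, today):
    """Warn a client that the advertised minimum microversion will rise."""
    next_min = doc.get("next_min_version")
    if not next_min:
        return None  # no raise announced
    earliest = datetime.date.fromisoformat(doc["not_raise_min_before"])
    days_left = (earliest - today).days
    return ("minimum microversion rises from %s to %s no earlier than %s "
            "(%d days away)"
            % (doc["min_version"], next_min, earliest, days_left))

print(min_version_warning(version_doc, datetime.date(2017, 6, 1)))
```

The "earliest possible date" semantics discussed above map to a simple client decision: how urgently code must be updated before the floor moves.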

# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at this time but please check out the reviews below.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about using the service catalog and doing 
version discovery
  Start at https://review.openstack.org/#/c/462814/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your 
concerns in an email to the OpenStack developer mailing list[1] with the tag 
"[api]" in the subject. In your email, you should include any relevant reviews, 
links, and comments to help guide the discussion of the specific challenge you 
are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] Start at https://review.openstack.org/#/c/462814/
[5] https://review.openstack.org/#/c/446138/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg


-- Ed Leafe









[openstack-dev] [tc][ptls][all] Potential Queens Goal: Move policy and policy docs into code

2017-06-01 Thread Lance Bragstad
Hi all,

I've proposed a community-wide goal for Queens to move policy into code and
supply documentation for each policy [0]. I've included references to
existing documentation and specifications completed by various projects and
attempted to lay out the benefits for both developers and operators.
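As a flavor of what "policy in code" looks like in practice: projects register default rules, with documentation, in Python, and a generator renders the documented sample file operators used to maintain by hand. The sketch below is self-contained and only mimics the pattern (real projects use oslo.policy's rule classes and the oslopolicy-sample-generator tool; the names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    """Illustrative stand-in for an oslo.policy documented rule default."""
    name: str
    check_str: str          # default check, e.g. "role:admin"
    description: str        # operator-facing documentation
    operations: list = field(default_factory=list)

# Defaults registered in code, so the service works with no policy file at all.
REGISTERED_RULES = [
    PolicyRule(
        name="example:get_widget",
        check_str="role:reader",
        description="Show details for a widget.",
        operations=[{"method": "GET", "path": "/v1/widgets/{widget_id}"}],
    ),
    PolicyRule(
        name="example:delete_widget",
        check_str="role:admin",
        description="Delete a widget.",
        operations=[{"method": "DELETE", "path": "/v1/widgets/{widget_id}"}],
    ),
]

def render_sample_policy(rules):
    """Render a fully commented sample policy file from the in-code defaults."""
    lines = []
    for rule in rules:
        lines.append("# %s" % rule.description)
        for op in rule.operations:
            lines.append("# %(method)s %(path)s" % op)
        lines.append('#"%s": "%s"' % (rule.name, rule.check_str))
        lines.append("")
    return "\n".join(lines)

print(render_sample_policy(REGISTERED_RULES))
```

Operators then only override the rules they actually change, instead of carrying a complete, undocumented policy.json.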

I'd greatly appreciate any feedback or discussion.

Thanks!

Lance


[0] https://review.openstack.org/#/c/469954/


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-06-01 Thread Jay Pipes

On 05/31/2017 11:06 PM, Mike Bayer wrote:
I'd also throw in, there's lots of versions of Galera with different 
bugfixes / improvements as we go along, not to mention configuration 
settings if Jay observes it working great on a distributed cluster 
and Clint observes it working terribly, it could be that these were not 
the same Galera versions being used.


Agreed. The version of Galera we were using IIRC was Percona XtraDB 
Cluster 5.6. And, remember that the wsrep_provider_options do make a big 
difference, especially in WAN-replicated setups.


We also increased the tolerance settings for network disruption so that 
the cluster operated without hiccups over the WAN. I think the 
wsrep_provider_options setting was evs.inactive_timeout=PT30S,
evs.suspect_timeout=PT15S, and evs.join_retrans_period=PT1S.
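For readers wanting to reproduce this, those options would land in the Galera provider options, roughly like the my.cnf fragment below. This is a sketch only; option names and sensible values vary across Galera/wsrep versions, so check your version's documentation before copying:

```ini
# my.cnf fragment (illustrative) -- loosen EVS timeouts for WAN replication
[mysqld]
wsrep_provider = /usr/lib/galera/libgalera_smm.so
# ISO 8601 durations: PT30S = 30 seconds, PT15S = 15 seconds, PT1S = 1 second
wsrep_provider_options = "evs.inactive_timeout=PT30S; evs.suspect_timeout=PT15S; evs.join_retrans_period=PT1S"
```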


Also, regardless of settings, if your network sucks, none of these 
distributed databases are going to be fun to operate :)


At AT&T, we jumped through a lot of hoops to ensure multiple levels of
redundancy and high performance for the network links inside and between 
datacenters. It really makes a huge difference when your network rocks.


Best,
-jay



Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-01 Thread Amrith Kumar
OK, I'd assumed from earlier conversations about py35 being a multi-release
goal that it would have been a shoo-in. But yes, if it is still a possibility
that the community would do py1.75 in pike and not commit to the other py1.75
in queens, sure, there's only one committed goal for queens (which I still
don't understand, I've been told).

-amrith

--
Amrith Kumar


> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Thursday, June 1, 2017 12:38 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate
> Off Paste
> 
> Amrith Kumar wrote:
> > Thierry, isn't the py35 goal continuing for Queens?
> 
> That's an open question, which is discussed in its own thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html
> 
> --
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [glance] priorities for the coming week (06/02-06-08)

2017-06-01 Thread Brian Rosmaita
As discussed at today's Glance meeting, the priorities for this week
are to prepare for the P-2 milestone release.  The changes below must
be merged by 12:00 UTC Wednesday 7 June 2017.  Please give them your
full attention.

1  Image import refactor MVP
- https://review.openstack.org/#/c/443636/
- https://review.openstack.org/#/c/443633/
- https://review.openstack.org/#/c/468835/

2  WSGI community goal
- 
https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:goal-deploy-api-in-wsgi
- https://review.openstack.org/#/c/459451/

Have a productive week!



Re: [openstack-dev] [horizon] Blueprint process question

2017-06-01 Thread David Lyle
There have been a couple of projects that would like some space in the
page header. I think the work in Horizon is to provide an extensible
space in the page header for plugins to post content. The UI plugin
for Vitrage, in this case, would then be responsible for populating
that content if desired. This specific blueprint should really be
targeted at the Vitrage UI plugin and a separate blueprint should be
added to Horizon to create the extension point in the page header.

David

On Wed, May 31, 2017 at 11:06 AM, Waines, Greg
 wrote:
> Hey Rob,
>
>
>
> Just thought I’d check in on whether Horizon team has had a chance to review
> the following blueprint:
>
> https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar
>
>
>
> The blueprint in Vitrage which the above Horizon blueprint depends on has
> been approved by Vitrage team.
>
> i.e.   https://blueprints.launchpad.net/vitrage/+spec/alarm-counts-api
>
>
>
> let me know if you’d like to setup a meeting to discuss,
>
> Greg.
>
>
>
> From: Rob Cresswell 
> Reply-To: "openstack-dev@lists.openstack.org"
> 
> Date: Thursday, May 18, 2017 at 11:40 AM
> To: "openstack-dev@lists.openstack.org" 
> Subject: Re: [openstack-dev] [horizon] Blueprint process question
>
>
>
> There isn't a specific time for blueprint review at the moment. It's usually
> whenever I get time, or someone asks via email or IRC. During the weekly
> meetings we always have time for open discussion of bugs/blueprints/patches
> etc.
>
>
>
> Rob
>
>
>
> On 18 May 2017 at 16:31, Waines, Greg  wrote:
>
> A blueprint question for horizon team.
>
>
>
> I registered a new blueprint the other day.
>
> https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar
>
>
>
> Do I need to do anything else to get this reviewed?  I don’t think so, but
> wanted to double check.
>
> How frequently do horizon blueprints get reviewed?  once a week?
>
>
>
> Greg.
>
>
>
>
>
> p.s. ... the above blueprint does depend on a Vitrage blueprint which I do
> have in review.
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [puppet][release][Release-job-failures] Tag of openstack/puppet-nova failed

2017-06-01 Thread Emilien Macchi
Thanks Doug for this one!

---
Emilien Macchi

On Jun 1, 2017 6:21 PM, "Doug Hellmann"  wrote:

> Excerpts from jenkins's message of 2017-05-31 20:26:33 +:
> > Build failed.
> >
> > - puppet-nova-releasenotes http://logs.openstack.org/d9/
> d913ccd1ea88f3661c32b0fcfdac58d749cd4eb2/tag/puppet-nova-
> releasenotes/cefa30a/ : FAILURE in 2m 13s
> >
>
> This failure only prevented the release notes from being published, and
> did not block the actual release.
>
> The problem should be fixed by https://review.openstack.org/469872
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread Thierry Carrez
Michał Jastrzębski wrote:
> Looks good! What is approximate size of L room?

L can fit 30-50 people. We'll get more precise once we start mapping
those to actual rooms.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread Michał Jastrzębski
On 1 June 2017 at 09:22, Jeremy Stanley  wrote:
> On 2017-06-01 16:38:05 +0200 (+0200), Thierry Carrez wrote:
> [...]
>> For teams that are placed on the Wednesday-Friday segment, please
>> let us know whether you'd like to make use of the room on Friday
>> (pick between 2 days or 3 days).
> [...]
>
> As you didn't specify how to let you know, I'll just reply here.
>
> If at all possible, I'd like the Infrastructure room available
> through Friday.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Looks good! What is approximate size of L room?



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-01 Thread Thierry Carrez
Renat Akhmerov wrote:
> We have a weekly meeting next Monday, will it be too late?

Before Thursday EOD (when the Pike-2 deadline hits) should be OK.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-01 Thread Thierry Carrez
Amrith Kumar wrote:
> Thierry, isn't the py35 goal continuing for Queens?

That's an open question, which is discussed in its own thread:
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Security bug in diskimage-builder

2017-06-01 Thread Jeremy Stanley
On 2017-06-01 10:40:34 -0500 (-0500), Ben Nemec wrote:
[...]
> Okay, so we're all set up, but now it appears we're all subscribed to every
> tripleo bug as well.  I think oslo-coresec used to be the same way, but at
> some point it changed so I only get explicitly notified of security bugs.
> Does anyone know how to set up tripleo-coresec that way too?  I've poked
> around the launchpad settings but I haven't found anything that looks
> promising.
[...]

Go to the project's bugs page in LP and follow the "Subscribe to bug
mail" link in the right sidebar. Select the bug mail recipient as
"One of the teams you administer" and then choose the relevant team
in the drop-down there. Set the receive mail option to "are added or
changed in any way" and then you get some extra checkboxes and
submenus. Expand the "Information types" tree and you can uncheck
"Public" to stop getting notifications about normal (non-security,
non-private) bug reports.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread Jeremy Stanley
On 2017-06-01 16:38:05 +0200 (+0200), Thierry Carrez wrote:
[...]
> For teams that are placed on the Wednesday-Friday segment, please
> let us know whether you'd like to make use of the room on Friday
> (pick between 2 days or 3 days).
[...]

As you didn't specify how to let you know, I'll just reply here.

If at all possible, I'd like the Infrastructure room available
through Friday.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][release][Release-job-failures] Tag of openstack/puppet-nova failed

2017-06-01 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-31 20:26:33 +:
> Build failed.
> 
> - puppet-nova-releasenotes 
> http://logs.openstack.org/d9/d913ccd1ea88f3661c32b0fcfdac58d749cd4eb2/tag/puppet-nova-releasenotes/cefa30a/
>  : FAILURE in 2m 13s
> 

This failure only prevented the release notes from being published, and
did not block the actual release.

The problem should be fixed by https://review.openstack.org/469872

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][release][Release-job-failures] Tag of openstack/swift failed

2017-06-01 Thread Doug Hellmann
Excerpts from jenkins's message of 2017-05-31 22:46:21 +:
> Build failed.
> 
> - swift-releasenotes 
> http://logs.openstack.org/e9/e9032fbea361df790901022740ac837a2a02daa0/tag/swift-releasenotes/687d120/
>  : FAILURE in 1m 47s
> 

This failure was just with publishing the release notes after the
tag was applied and did not actually block the release.

https://review.openstack.org/469881 should fix the problem

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible] Meeting Cancelled

2017-06-01 Thread Amy Marrich
Sorry for the short notice but today's meeting has been cancelled. We'll
resume with our bug triage on Tuesday and our next meeting next Thursday
at their usual times and places.

Amy (spotz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptls][all] Potential Queens Goal: Implement collection link OR full discovery alignment

2017-06-01 Thread Monty Taylor

Hey everybody!

I have submitted two potential goals for Queens:

https://review.openstack.org/#/c/468436 - add collection links
https://review.openstack.org/#/c/468437 - full discovery alignment

One is a subset of the other, so the decision is two-fold

* do we do this at all?
* do we do the smaller or the larger of the two?

Both of these are work driven by the version-discovery API-WG specs, 
which are in turn driven by trying to improve the interop story for our 
APIs to include more SDKs/libs than just shade. So the overall story 
here is "implement things to improve life for folks consuming the 
catalog and version discovery".


Quick summaries:

Add Collection Links


This is the simpler of the two. It involves adding a "collection" link 
to the list of links in the version discovery documents. That is, this:



{
  "version": {
"id": "v2.0",
"links": [
  {
"href": "https://image.example.com/v2;,
"rel": "self"
  }
],
"status": "CURRENT"
  }
}

Becomes this:

{
  "version": {
"id": "v2.0",
"links": [
  {
"href": "https://image.example.com/v2;,
"rel": "self"
  },
  {
"href": "https://image.example.com/;,
"rel": "collection"
  }
],
"status": "CURRENT"
  }
}

The reason for this is as a path to the unversioned discovery document 
on clouds where the versioned endpoint is the thing that's in the 
catalog. The current way to do that is to take the versioned endpoint 
and pop things off of the end.


'collection' is proposed for the rel name. From the list of official 
names: 
https://www.iana.org/assignments/link-relations/link-relations.xhtml it 
seems like the best choice. (If a single-version version document is a 
"Version", then the list of those in the unversioned document seems like 
the "collection" of them)


This one should be _very_ little work per-project. I took a stab at 
implementing this for nova while sitting in the goals room in Boston and 
without any knowledge of how version discovery works in nova I got most 
of it done in about 15 minutes.
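On the producer side the change is similarly tiny. This is not nova's actual code — just a toy sketch of what "add a collection link to the version document" amounts to, assuming the document is built as a plain dict:

```python
def add_collection_link(version_doc, unversioned_href):
    """Append a 'collection' link to a single-version discovery
    document, skipping it if one is already present (idempotent)."""
    links = version_doc.setdefault('version', {}).setdefault('links', [])
    if not any(link.get('rel') == 'collection' for link in links):
        links.append({'href': unversioned_href, 'rel': 'collection'})
    return version_doc
```

In a real service this would happen wherever the version document is rendered, with the unversioned href derived from the request URL or configuration.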


Full Discovery Alignment


Full discovery alignment includes the collection link work, but also 
includes a few more things. There isn't a TON more per-project coding 
work. Most of the openstack-side work is in adding support to 
keystoneauth - and I've already written most of those patches. The other 
main bit of work is in updating SDKs and libs for other languages to 
implement the consumption support as well - but we've made good contacts 
with folks and can get that done (and will, regardless of the goal).


The main per-project additional things to do after the keystoneauth 
patches land on top of the collection link are:


* modifying devstack plugins and deployment projects to register the 
service using the official name from the service-types-authority
* modifying devstack plugins and deployment projects to register the 
unversioned endpoint in the catalog
* modifying devstack and plugins to stop using "RegionOne" as the region 
name


The goal also calls out a few specific tempest tests we need to write to 
verify discovery works as expected across the board.


I *think* the Full Alignment goal is doable and not a terrible amount of 
per-project work. But as with everything, it is work.


Thoughts?
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-06-01 11:09:56 +0100:
> On Wed, 31 May 2017, Doug Hellmann wrote:
> > Yeah, it sounds like the current organization of the repo is not
> > ideal in terms of equal playing field for all of our project teams.
> > I would be fine with all of the interop tests being in a plugin
> > together, or of saying that the tempest repo should only contain
> > those tests and that others should move to their own plugins. If we're
> > going to reorganize all of that, we should decide what new structure we
> > want and work it into the goal.
> 
> I feel like the discussion about the interop tests has strayed this
> conversation from the more general point about plugin "fairness" and
> allowed the vagueness in plans for interop to control our thinking
> and discussion about options in the bigger view.

I should have prefaced my initial response with a statement like
"For those of you who don't know or remember the history". It wasn't
meant to imply we shouldn't be making any changes, just that we
need to understand how we ended up where we are now so we don't
make a change that then no longer meets old requirements.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-06-01 11:51:50 +0200:
> Graham Hayes wrote:
> > On 01/06/17 01:30, Matthew Treinish wrote:
> >> TBH, it's a bit premature to have the discussion. These additional 
> >> programs do
> >> not exist yet, and there is a governance road block around this. Right now 
> >> the
> >> set of projects that can be used defcore/interopWG is limited to the set 
> >> of 
> >> projects in:
> >>
> >> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> > 
> > Sure - but that is a solved problem, when the interop committee is
> > ready to propose them, they can add projects into that tag. Or am I
> > misunderstanding [1] (again)?
> 
> I think you understand it well. The Board/InteropWG should propose
> additions/removals of this tag, which will then be approved by the TC:
> 
> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process
> 
> > [...]
> >> We had a forum session on it (I can't find the etherpad for the session) 
> >> which
> >> was pretty speculative because it was about planning the new programs. 
> >> Part of
> >> that discussion was around the feasibility of using tests in plugins and 
> >> whether
> >> that would be desirable. Personally, I was in favor of doing that for some 
> >> of
> >> the proposed programs because of the way they were organized it was a good 
> >> fit.
> >> This is because the proposed new programs were extra additions on top of 
> >> the
> >> base existing interop program. But it was hardly a definitive discussion.
> > 
> > Which will create 2 classes of testing for interop programs.
> 
> FWIW I would rather have a single way of doing "tests used in trademark
> programs" without differentiating between old and new trademark programs.
> 
> I fear that we are discussing solutions before defining the problem. We
> want:
> 
> 1- Decentralize test maintenance, through more tempest plugins, to
> account for limited QA resources
> 2- Additional codereview constraints and approval rules for tests that
> happen to be used in trademark programs
> 3- Discoverability/ease-of-install of the set of tests that happen to be
> used in trademark programs
> 4- A git repo layout that can be simply explained, for new teams to
> understand
> 
> It feels like the current git repo layout (result of that 2016-05-04
> resolution) optimizes for 2 and 3, which kind of works until you add
> more trademark programs, at which point it breaks 1 and 4.
> 
> I feel like you could get 2 and 3 without necessarily using git repo
> boundaries (using Gerrit approval rules and some tooling to install/run
> subset of tests across multiple git repos), which would allow you to
> optimize git repo layout to get 1 and 4...
> 
> Or am I missing something ?
> 

Right. The point of having the trademark tests "in tempest" was not
to have them "in the tempest repo", that was just an implementation
detail of the policy of "put them in a repository managed by people
who understand the expanded review rules".

There were a lot of unexpected issues when we started treating the
test suite as a production tool for validating a cloud.  We have
to be careful about how we change the behavior of tests, for example,
even if the API responses are expected to be the same.  It's not
fair to vendors or operators who get trademark approval with one
release to have significant changes in behavior in the exact same
tests for the next release.

At the early stage, when the DefCore team was still figuring out
these issues, it made sense to put all of the tests in one place
with a review team that was actively participating in establishing
the process. If we better understand the "rules" for these tests
now, we can document them and distribute the work of maintaining the
test suites.

And yes, I agree with the argument that we should be fair and treat
all projects the same way. If we're going to move tests out of the
tempest repository, we should move all of them. The QA team can
still help maintain the test suites for whatever projects they want,
even if those tests are in plugins.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] No drivers meeting 6/1

2017-06-01 Thread Kevin Benton
Hi,

Due to a conflict today I am canceling the drivers meeting.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security bug in diskimage-builder

2017-06-01 Thread Ben Nemec



On 05/30/2017 10:05 AM, Emilien Macchi wrote:

On Tue, May 30, 2017 at 3:43 PM, Ben Nemec  wrote:



On 05/30/2017 08:00 AM, Emilien Macchi wrote:


On Mon, May 29, 2017 at 9:02 PM, Jeremy Stanley  wrote:


On 2017-05-29 15:43:43 +0200 (+0200), Emilien Macchi wrote:


On Wed, May 24, 2017 at 7:45 PM, Ben Nemec 
wrote:


[...]


Emilien, I think we should create a tripleo-coresec group in
launchpad that can be used for this. We have had
tripleo-affecting security bugs in the past and I imagine we
will again. I'm happy to help out with that, although I will
admit my launchpad-fu is kind of weak so I don't know off the
top of my head how to do it.



That or re-use an existing Launchpad group used by OpenStack VMT?



The OpenStack VMT doesn't triage bugs for deliverables aside from
those tagged with vulnerability:managed in governance. For those we
recommend private security bugs only be automatically shared with
the openstack-vuln-mgmt team in LP, and then we manually subscribe
something-coresec to the report once we're sure it was reported
against the correct project. For deliverables without VMT oversight,
it makes sense to have private security bugs automatically shared
with those something-coresec teams directly.


https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html



I created https://launchpad.net/~tripleo-coresec

With me (Pacific Time soon), shardy (Europe), bnemec (East coast) and



If by "coast" you mean the Great Lakes then yes, but I'm in the central time
zone. ;-)


lol.
I added James to cover (real) East coast, so we cover most of our TZs.

Thanks,


Okay, so we're all set up, but now it appears we're all subscribed to 
every tripleo bug as well.  I think oslo-coresec used to be the same 
way, but at some point it changed so I only get explicitly notified of 
security bugs.  Does anyone know how to set up tripleo-coresec that way 
too?  I've poked around the launchpad settings but I haven't found 
anything that looks promising.





Thanks for getting this set up guys.



fungi (East coast) for now. If we feel like we need more people we'll
think about it.
I'll explore Launchpad to see how we can use this group to handle Security
bugs.

Thanks,


--
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][all] Potential Queens Goal: Continuing Python 3.5+ Support

2017-06-01 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-06-01 15:31:10 +0200:
> On Wed, May 31, 2017 at 10:38 PM, Mike  wrote:
> > Hello everyone,
> >
> > For this thread we will be discussing continuing Python 3.5+ support.
> > Emilien who has been helping with coordinating our efforts here with
> > Pike can probably add more here, but glancing at our goals document
> > [1] it looks like we have a lot of unanswered projects’ status, but
> > mostly we have python 3.5 unit test voting jobs done thanks to this
> > effort! I have no idea how to use the graphite dashboard, but here’s a
> > graph [2] showing success vs failure with python-35 jobs across all
> > projects.
> 
> Indeed, nice work from the community to make progress on this effort.
> 
> > Glancing at that I think it’s safe to say we can start discussions on
> > moving forward with having our functional tests support python 3.5.
> > Some projects are already ahead in this. Let the discussions begin so
> > we can aid the decision in the  TC deciding our community wide goals
> > for Queens [3].
> 
> +1 - making progress on functional tests looks like the next thing and
> Queens cycle could be used. I'm happy to keep helping on coordination
> if needed.

Unit tests were optional, according to the goal. The functional and
integration tests are much more important.

I know we have the integrated gate running on python 3, so that
covers cinder, glance, keystone, neutron, nova, as well as devstack
and tempest. How are other projects doing with getting their similar
jobs set up and running?

Doug

> 
> >
> > [1] - https://governance.openstack.org/tc/goals/pike/python35.html
> > [2] - 
> > http://graphite.openstack.org/render/?width=1273&height=554&_salt=1496261911.56&from=00%3A00_20170401&until=23%3A59_20170531&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.SUCCESS)&target=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.FAILURE)
> > [3] - https://governance.openstack.org/tc/goals/index.html
> >
> > —
> > Mike Perez
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread Mehdi Abaakouk

On Thu, Jun 01, 2017 at 01:46:21PM +0200, Julien Danjou wrote:

On Wed, May 31 2017, gordon chung wrote:


[…]


i'm not entirely sure this is an issue, just thought i'd raise it to
discuss.


It's a really interesting point you raise. I never thought we could do
that but indeed, we could. Maybe we built a great architecture after
all. ;-)

Easy solution: disable refresh. Problem solved.


I have never liked this refresh feature on API side.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] error handling

2017-06-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)


From: "Yujun Zhang (ZTE)" 
Date: Thursday, 1 June 2017 at 18:10


On Thu, Jun 1, 2017 at 10:49 PM Afek, Ifat (Nokia - IL/Kfar Sava) 
> wrote:

So for now we agree that we need to add a UI for configuration information and 
datasources status.

Sounds good. In order to implement it in the UI, we shall also need an API to expose 
them, right?


Of course ☺

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Gate issues

2017-06-01 Thread Yujun Zhang (ZTE)
We have encountered similar issue also in OPNFV.

It seems to be a problem with setuptools 36.0.0, which has now been removed
from PyPI. Hopefully that resolves the Vitrage gate tests as well.

See discussion in https://github.com/pypa/setuptools/issues/1042


On Thu, Jun 1, 2017 at 11:08 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi,
>
>
>
> Note that we are currently having problems with the Vitrage gate tests,
> related to python-setuptools. Other projects experience similar problems.
> We hope to fix it by the beginning of next week.
>
>
>
> Best Regards,
>
> Ifat.
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [EXTERNAL] Re: [Tripleo] deploy software on Openstack controller on the Overcloud

2017-06-01 Thread Emilien Macchi
On Thu, Jun 1, 2017 at 3:47 PM, Abhishek Kane  wrote:
> Hi Emilien,
>
> The bin does following things on controller:
> 1. Install core HyperScale packages.

Should be done by Puppet, with Package resource.

> 2. Start HyperScale API server

Should be done by Puppet, with Service resource.

> 3. Install UI packages. This will add new files to and modify some existing 
> files of Horizon.

Should be done by Puppet, with Package resource and also some changes
in puppet-horizon maybe if you need to change Horizon config.

> 4. Create HyperScale user in mysql db. Create database and dump config. Add 
> permissions of nova and cinder DB to HyperScale user.

We have puppet-openstacklib which already manages DBs, you could
easily re-use it. Please look at puppet-nova for example to see how
things works in nova::db::mysql, etc.

> 5. Add ceilometer pollsters for additional stats and modify ceilometer files.

puppet-ceilometer I guess. What do you mean by "files"? Config files?

> 6. Change OpenStack configuration:
> a. Create rabbitmq exchanges

puppet-* modules already does it.

> b. Create keystone user

puppet-keystone already does it.

> c. Define new flavors

puppet-nova can manage flavors.

> d. Create HyperScale volume type and set default volume type to HyperScale in 
> cinder.conf.

we already support multi backends in tripleo, HyperScale would just be
a new addition. Re-use the bits please: puppet-cinder and
puppet-tripleo will need to be patched.

> e. Restart openstack’s services

Already done by openstack/puppet-* modules.

> 7. Configure HyperScale services

Should be done by your module, (you can either write a _config
provider if it's ini standard otherwise just do a template that you
ship in the module, like puppet-horizon).

> Once the controller is configured, we use HyperScale’s CLI to configure data 
> and compute nodes-
>
> On data node (cinder):
> 1. Install HyperScale data node packages.

Should be done by Puppet, with Package resource.

> 2. Change cinder.conf to add backend and change rpc_backend.

puppet-cinder

> 3. Give the raw data disks and meta disks to HyperScale storage layer for 
> usage.

what does it mean? Do you run a CLI for that?

> 4. Configure HyperScale services.

Should be done by Puppet, with Service resource.

> 5. Dump config in the HyperScale database.

can be done by a script maybe?

>
> On compute (nova):
> 1. Install HyperScale compute packages.

Should be done by Puppet, with Package resource.

> 2. Configure cgroup.

we don't manage cgroups in TripleO AFAIK yet, but it's something we
could add, maybe with a puppet module.

> 3. Disable selinux.

Please don't do that. Disabling SElinux is a NOGO when adding new
features (sorry to care about Security).

> 4. Add ceilometer pollsters for additional stats and modify ceilometer files.

puppet-ceilometer

> 5. Modify qemu.conf to relax ACS checks.

puppet-nova maybe, but not sure we really want to do that:
https://vfio.blogspot.fr/2014/08/iommu-groups-inside-and-out.html

Any details on why you're doing it?

> 6. Modify libvirt.conf and libvirtd to allow live migration.

It's already supported by puppet-nova.

> 7. Change network settings.

Should be done by os-net-config in TripleO.

> 8. Configure HyperScale services.

Done by your module (again).

> 9. Dump config in the HyperScale database.

same as before.

>
> We assume that we will not require steps to install packages if we put 
> packages in the overcloud image. We have started to convert the bin and the 
> CLI into puppet modules.
>
>
> Regards,
> Abhishek

Hope it helped.

> On 6/1/17, 4:24 AM, "Emilien Macchi"  wrote:
>
> On Wed, May 31, 2017 at 6:29 PM, Dnyaneshwar Pawar
>  wrote:
> > Hi Alex,
> >
> > Currently we have puppet modules[0] to configure our software which has
> > components on Openstack Controller, Cinder node and Nova node.
> > As per document[1] we successfully tried out role specific 
> configuration[2].
> >
> > So, does it mean that if we have an overcloud image with our packages
> > inbuilt and we call our configuration scripts using role specific
> > configuration, we may not need puppet modules[0] ? Is it acceptable
> > deployment method?
>
> So running a binary from Puppet, to make configuration management is
> not something we recommend.
> Puppet has been good at managing configuration files and services for
> example. In your module, you just manage a file and execute it. The
    > problem with that workflow is we have no idea what happens in the backend.
> Also we have no way to make Puppet run idempotent, which is one
> important aspect in TripleO.
>
> Please tell us what does the binary, and maybe we can convert the
> tasks into Puppet resources that could be managed by your module. Also
> make the resources by class (service), so we can plug it into the
> composable 

Re: [openstack-dev] [vitrage] error handling

2017-06-01 Thread Yujun Zhang (ZTE)
On Thu, Jun 1, 2017 at 10:49 PM Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

So for now we agree that we need to add a UI for configuration information
> and datasources status.
>
>
Sounds good. In order to implement it in the UI, we shall also need an API to
expose them, right?

-- 
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators] [dev] [doc] Operations Guide future

2017-06-01 Thread Alexandra Settle
Hi everyone,

I haven’t had any feedback regarding moving the Operations Guide to the 
OpenStack wiki. I’m not taking silence as compliance. I would really like to 
hear people’s opinions on this matter.

To recap:


  1.  Option one: Kill the Operations Guide completely and move the 
Administration Guide to project repos.
  2.  Option two: Combine the Operations and Administration Guides (and then 
this will be moved into the project-specific repos)
  3.  Option three: Move Operations Guide to OpenStack wiki (for ease of 
operator-specific maintainability) and move the Administration Guide to project 
repos.

Personally, I think that option 3 is more realistic. The idea for the last 
option is that operators are maintaining operator-specific documentation and 
updating it as they go along and we’re not losing anything by combining or 
deleting. I don’t want to lose what we have by going with option 1, and I think 
option 2 is just a workaround without fixing the problem – we are not getting 
contributions to the project.

Thoughts?

Alex

From: Alexandra Settle 
Date: Friday, May 19, 2017 at 1:38 PM
To: Melvin Hillsman , OpenStack Operators 

Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
What's up doc? Summit recap edition

Hi everyone,

Adding to this, I would like to draw your attention to the last dot point of my 
email:

“One of the key takeaways from the summit was the session that I jointly 
moderated with Melvin Hillsman regarding the Operations and Administration 
Guides. You can find the etherpad with notes here: 
https://etherpad.openstack.org/p/admin-ops-guides  The session was really 
helpful – we were able to discuss with the operators present the current 
situation of the documentation team, and how they could help us maintain the 
two guides, aimed at the same audience. The operator’s present at the session 
agreed that the Administration Guide was important, and could be maintained 
upstream. However, they voted and agreed that the best course of action for the 
Operations Guide was for it to be pulled down and put into a wiki that the 
operators could manage themselves. We will be looking at actioning this item as 
soon as possible.”

I would like to go ahead with this, but I would appreciate feedback from 
operators who were not able to attend the summit. In the etherpad you will see 
the three options that the operators in the room recommended as being viable, 
and the voted option being moving the Operations Guide out of 
docs.openstack.org into a wiki. The aim of this was to empower the operations 
community to take more control of the updates in an environment they are more 
familiar with (and available to others).

What does everyone think of the proposed options? Questions? Other thoughts?

Alex

From: Melvin Hillsman 
Date: Friday, May 19, 2017 at 1:30 PM
To: OpenStack Operators 
Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
What's up doc? Summit recap edition


-- Forwarded message --
From: Alexandra Settle
Date: Fri, May 19, 2017 at 6:12 AM
Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap 
edition
To: "openstack-d...@lists.openstack.org"
Cc: "OpenStack Development Mailing List (not for usage questions)"


Hi everyone,

The OpenStack manuals project had a really productive week at the OpenStack 
summit in Boston. You can find a list of all the etherpads and attendees here: 
https://etherpad.openstack.org/p/docs-summit

As we all know, we are rapidly losing key contributors and core reviewers. We 
are not alone, this is happening across the board. It is making things harder, 
but not impossible. Since our inception in 2010, we’ve been climbing higher and 
higher trying to achieve the best documentation we could, and uphold our high 
standards. This is something to be incredibly proud of. However, we now need to 
take a step back and realise that the amount of work we are attempting to 
maintain is now out of reach for the team size that we have. At the moment we 
have 13 cores, of which none are full time contributors or reviewers. This 
includes myself.

That being said! I have spent the last week at the summit talking to some of 
our leaders, including Doug Hellmann (cc’d), Jonathan Bryce and Mike Perez 
regarding the future of the project. Between myself and other community 
members, we have been drafting plans and coming up with a new direction that 
will hopefully be sustainable in the long-term.

I am interested to hear your thoughts. I want to make sure that everyone feels 

[openstack-dev] [vitrage] Gate issues

2017-06-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

Note that we are currently having problems with the Vitrage gate tests, related 
to python-setuptools. Other projects experience similar problems. We hope to 
fix it by the beginning of next week.

Best Regards,
Ifat.




Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread gordon chung


On 01/06/17 07:46 AM, Julien Danjou wrote:
> Yes, write doc or log an issue at least. It's best way to keep a public
> track now on ideas and what's going on since it's what people are going
> to read and search into.

added here: https://github.com/gnocchixyz/gnocchi/issues/60

-- 
gord


Re: [openstack-dev] [vitrage] error handling

2017-06-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Yujun,

Indeed, during the initialization phase it might be beneficial to make sure the 
user is aware of configuration problems (although I’m not sure that crashing is 
the solution). The problem is that the same code is executed both in 
initialization and later on, so telling the difference is not trivial.

So for now we agree that we need to add a UI for configuration information and 
datasources status.

Best Regards,
Ifat.

From: "Yujun Zhang (ZTE)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 30 May 2017 at 11:50
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vitrage] error handling

On Tue, May 30, 2017 at 3:59 PM Afek, Ifat (Nokia - IL/Kfar Sava) 
> wrote:
Hi Yujun,

You started an interesting discussion. I think that the distinction between an 
operational error and a programmer error is correct and we should always keep 
that in mind.

I agree that having an overall design for error handling in Vitrage is a good 
idea; but I disagree that until then we better let it crash.

I think that Vitrage is made out of many pieces that don’t necessarily depend 
on one another. For example, if one datasource fails, everything else can work 
as usual – so why crash? Similarly, if one template fails to load, all other 
templates can still be activated.
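The isolation described here (log and skip one invalid template while activating the rest) could look roughly like this stdlib-only Python sketch; the loader and validator names are illustrative, not Vitrage's actual code:

```python
import logging

LOG = logging.getLogger(__name__)

def load_templates(templates, validate):
    """Activate every template that validates; log and skip the rest."""
    activated, failed = [], []
    for name, definition in templates.items():
        try:
            validate(definition)         # may raise on an invalid template
        except ValueError as exc:        # operational error: isolate, don't crash
            LOG.error("template %s is invalid, skipping: %s", name, exc)
            failed.append(name)
        else:
            activated.append(name)
    return activated, failed

def validate(definition):
    # stand-in for a real template validator
    if not isinstance(definition, dict):
        raise ValueError("template definition must be a mapping")

templates = {"good": {"scenarios": []}, "bad": None}
print(load_templates(templates, validate))  # (['good'], ['bad'])
```

The point of the sketch is only that the failure is recorded per template instead of aborting the whole load.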

This usually happens during the initialization phase, doesn't it? That is a 
phase with a human inspecting, and such problems should be detected during 
deployment or user acceptance testing. So if something fails, it is better to 
isolate it before continuing to run, e.g. correct the invalid template or 
invalid data source configuration, or remove the template and disable the data 
source. This is because such errors are permanent and won't recover 
automatically.

Here we need to distinguish the case where the data source is temporarily 
unavailable, due to a network connection issue or the data source not being up 
yet. In this case, I agree we'd better start the rest of the components and 
retry periodically until it recovers.

Another aspect is that the main purpose of Vitrage is to provide insights. In 
case of a failure in one datasource/template, some of the insights might be 
missing. But this will not lead to inaccurate behavior or to wrong actions 
being executed in the system. IMO, we should give the user as much information 
as possible given that we have only part of the input.

I agree, if enough insight can be provided by the running system. We can 
improve the handling of permanent errors. What would be even better is support 
for hot-loading the components and templates.

What I don't like much is that sometimes errors are handled but without enough 
detail. In that case, a crash with a stack trace is more useful than a user 
"friendly" message like "failed to start xxx component" or "invalid 
configuration file" (I'm not talking about Vitrage; it is quite common in many 
projects).

My preference is "good error handling" > "no error handling" > "bad error 
handling". Though it is difficult to distinguish good error handling from 
bad...

Regarding the use cases that you mentioned:


  1.  invalid configuration file
[Ifat] This should depend on the specific configuration. If keystone is 
misconfigured, nothing will work of course. But if for example Zabbix is 
misconfigured, Vitrage should work and show the topology and the non-Zabbix 
alarms.

Agree. It should be handled differently depending on what kind of error it is 
and how critical it is.


  2.  failed to communicate with data source
[Ifat] I think that the error should be logged, and all other datasources 
should work as usual.

Yes, and it would be good to have a retry mechanism
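A periodic retry for a temporarily unreachable data source could be sketched as follows; the names are illustrative, not Vitrage's real API, and a real implementation would likely use exponential backoff and a scheduler rather than a blocking loop:

```python
import time

def poll_with_retry(fetch, retries=3, delay=0.01):
    """Call fetch(); on a transient ConnectionError, wait and try again."""
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries:
                raise                # still failing after the last attempt
            time.sleep(delay)        # back off, then retry

calls = {"n": 0}

def flaky_fetch():
    # stand-in for a data source that is "not up yet" on the first calls
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("data source not up yet")
    return {"alarms": []}

print(poll_with_retry(flaky_fetch))  # {'alarms': []} on the third attempt
```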


  3.  malformed data from data source

[Ifat] I think that the error should be logged, and all other datasources 
should work as usual. This problem means we must modify the code in the 
datasource itself, but until then Vitrage should work, right?
Yes, I think this can happen when the data source version changes; we should 
discard the data and indicate the error. The other parts should not be affected.


  4.  failed to execute an action
[Ifat] Again, that’s a problem that requires code changes; but why fail other 
actions?

What I meant here is a temporary failure, e.g. when you try to mark a host down 
but are not able to reach it due to a network connection issue or other reasons.


  5.  ...
BTW, it might be a good idea to add API/UI for showing the configuration and 
the status of the datasources. We all know that errors in the log files are 
often ignored…

Sure, the errors I mentioned above are what system operators could encounter 
even with a correct configuration, and they are not related to software bugs. 
Displaying them in the UI would be very helpful. The log files are more for the 
engineers

Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread Thierry Carrez
Thierry Carrez wrote:
> In a previous thread[1] I introduced the idea of moving the PTG from a
> purely horizontal/vertical week split to a more
> inter-project/intra-project activities split, and the initial comments
> were positive.
> 
> We need to solidify what the week will look like before we open up
> registration (first week of June), so that people can plan their
> attendance accordingly. Based on the currently-signed-up teams and
> projected room availability, I built a strawman proposal of how that
> could look:
> 
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true

OK, it looks like the feedback on this strawman proposal was generally
positive, so we'll move on with this.

For teams that are placed on the Wednesday-Friday segment, please let us
know whether you'd like to make use of the room on Friday (pick between
2 days or 3 days). Note that it's not a problem if you do (we have space
booked all through Friday) and this can avoid people leaving too early
on Thursday afternoon. We just need to know how many rooms we might be
able to free up early.

In the same vein, if your team (or workgroup, or inter-project goal) is
not yet listed and you'd like to have a room in Denver, let us know ASAP.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-01 Thread Renat Akhmerov
We have a weekly meeting next Monday, will it be too late?

Renat Akhmerov
@Nokia

On 1 Jun 2017, 20:10 +0700, Thierry Carrez , wrote:
> Note that it's technically too late to change the release model
> (milestone-1 is the deadline), but since that kills two birds with one
> stone, I'd be willing to grant mistral an exception (as long as it's
> done before milestone-2, which is next week).
>
> Renat Akhmerov wrote:
> > Thanks Thierry.
> >
> > To me it sounds like even a better release model for us. We can discuss
> > it with a team at the next team meeting and make a decision.
> >
> > Renat Akhmerov
> > @Nokia
> >
> > On 1 Jun 2017, 17:06 +0700, Thierry Carrez , wrote:
> > > Renat Akhmerov wrote:
> > > > On 31 May 2017, 15:08 +0700, Thierry Carrez ,
> > > > wrote:
> > > > > > [mistral]
> > > > > > mistral - blocking sqlalchemy - milestones
> > > > >
> > > > > I wonder why mistral is in requirements. Looks like tripleo-common is
> > > > > depending on it ? Could someone shine some light on this ? It might 
> > > > > just
> > > > > mean mistral-lib is missing a few functions, and switching the release
> > > > > model of mistral itself might be overkill ?
> > > >
> > > > This dependency is currently needed to create custom Mistral actions. It
> > > > was originally not the best architecture and one of the reasons to
> > > > create 'mistral-lib' was in getting rid of dependency on ‘mistral’ by
> > > > moving all that’s needed for creating actions into a lib (plus something
> > > > else). The thing is that the transition is not over and APIs that we put
> > > > into ‘mistral-lib’ are still experimental. The plan is to complete this
> > > > initiative, including docs and needed refactoring, till the end of Pike.
> > > >
> > > > What possible negative consequences may we have if we switch release
> > > > model to "cycle-with-intermediary”?
> > >
> > > There are no "negative" consequences. There are just consequences in
> > > choosing a new release model, so I don't want mistral to switch to that
> > > model *only* because it didn't complete moving some code out of mistral
> > > proper into a more consumable mistral-lib. It feels like we wouldn't be
> > > having that discussion if the code was more adequately split :)
> > >
> > > First, the cycle-with-intermediary model means that every tag is a
> > > "release", which is expected to be consumed by users. You have to be
> > > pretty sure that it works -- there won't be any release candidates to
> > > protect you. This means your automated testing coverage needs to be
> > > pretty good.
> > >
> > > Second, the cycle-with-intermediary model is less "driven" by the
> > > release team -- you won't have as many reminders (like milestones), or
> > > best-practice deadlines (like feature freeze) to help you. Your team is
> > > basically doing release management internally, deciding when to release,
> > > when to slow down, etc.
> > >
> > > As such, this model appeals either to very young projects (which need a
> > > lot of flexibility and need to put things out fast), and very mature
> > > projects (where automated testing coverage is pretty complete, release
> > > liaisons take up much of the release management, and things don't change
> > > that often). Projects in the middle usually prefer the
> > > cycle-with-milestones model.
> > >
> > > > Practically, all our releases, even
> > > > those made after milestones, are considered stable and I don’t see
> > > > issues if we’ll be producing full releases every time.
> > >
> > > Yes, it sounds like you could switch to that model without too much pain.
> > >
> > > > Btw, how does
> > > > stable branch maintenance work in this case? I guess it should be the
> > > > same, one stable branch per cycle. I’d appreciate if you could
> > > > clarify this.
> > >
> > > There is no change in terms of stable releases, you still maintain only
> > > one branch per cycle. The last intermediary release in a given cycle is
> > > where the stable branch for the cycle is cut.
> > >
> > > --
> > > Thierry Carrez (ttx)
> > >
> >
> >
> >
>
>
> --
> Thierry Carrez (ttx)
>

Re: [openstack-dev] [EXTERNAL] Re: [Tripleo] deploy software on Openstack controller on the Overcloud

2017-06-01 Thread Abhishek Kane
Hi Emilien,

The bin does following things on controller:
1. Install core HyperScale packages.
2. Start HyperScale API server
3. Install UI packages. This will add new files to, and modify some existing 
files of, Horizon.
4. Create HyperScale user in mysql db. Create database and dump config. Add 
permissions of nova and cinder DB to HyperScale user.
5. Add ceilometer pollsters for additional stats and modify ceilometer files.
6. Change OpenStack configuration:
a. Create rabbitmq exchanges
b. Create keystone user
c. Define new flavors
d. Create HyperScale volume type and set default volume type to HyperScale in 
cinder.conf.
e. Restart openstack’s services
7. Configure HyperScale services

Once the controller is configured, we use HyperScale’s CLI to configure data 
and compute nodes-

On data node (cinder):
1. Install HyperScale data node packages.
2. Change cinder.conf to add backend and change rpc_backend.
3. Give the raw data disks and meta disks to HyperScale storage layer for usage.
4. Configure HyperScale services.
5. Dump config in the HyperScale database.

On compute (nova):
1. Install HyperScale compute packages.
2. Configure cgroup.
3. Disable selinux.
4. Add ceilometer pollsters for additional stats and modify ceilometer files.
5. Modify qemu.conf to relax ACS checks.
6. Modify libvirt.conf and libvirtd to allow live migration.
7. Change network settings.
8. Configure HyperScale services.
9. Dump config in the HyperScale database.

We assume that we will not require steps to install packages if we put packages 
in the overcloud image. We have started to convert the bin and the CLI into 
puppet modules.


Regards,
Abhishek

On 6/1/17, 4:24 AM, "Emilien Macchi"  wrote:

On Wed, May 31, 2017 at 6:29 PM, Dnyaneshwar Pawar
 wrote:
> Hi Alex,
>
> Currently we have puppet modules[0] to configure our software which has
> components on Openstack Controller, Cinder node and Nova node.
> As per document[1] we successfully tried out role specific 
configuration[2].
>
> So, does it mean that if we have an overcloud image with our packages
> inbuilt and we call our configuration scripts using role specific
> configuration, we may not need puppet modules[0] ? Is it acceptable
> deployment method?

So running a binary from Puppet to do configuration management is
not something we recommend.
Puppet is good at managing configuration files and services, for
example. In your module, you just manage a file and execute it. The
problem with that workflow is that we have no idea what happens in the
backend. We also have no way to make the Puppet run idempotent, which is
an important aspect of TripleO.

Please tell us what the binary does, and maybe we can convert the
tasks into Puppet resources that could be managed by your module. Also
group the resources by class (one per service), so we can plug them into
the composable services in TripleO.

Thanks,

> [0] https://github.com/abhishek-kane/puppet-veritas-hyperscale
> [1]
> 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_config.html
> [2] http://paste.openstack.org/show/66/
>
> Thanks,
> Dnyaneshwar
>
> On 5/30/17, 6:52 PM, "Alex Schultz"  wrote:
>
> On Mon, May 29, 2017 at 5:05 AM, Dnyaneshwar Pawar
>  wrote:
>
> Hi,
>
> I am tying to deploy a software on openstack controller on the overcloud.
> One way to do this is by modifying ‘overcloud image’ so that all packages 
of
> our software are added to image and then run overcloud deploy.
> Other way is to write heat template and puppet module which will deploy 
the
> required packages.
>
> Question: Which of above two approaches is better?
>
> Note: Configuration part of the software will be done via separate heat
> template and puppet module.
>
>
> Usually you do both.  Depending on how the end user is expected to
> deploy, if they are using the TripleoPackages service[0] in their
> role, the puppet installation of the package won't actually work (we
> override the package provider to noop) so it needs to be in the
> images.  That being said, usually there is also a bit of puppet that
> needs to be written to configure the end service and as a best
> practice (and for development purposes), it's a good idea to also
> capture the package in the manifest.
>
> Thanks,
> -Alex
>
> [0]
> 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml
>
>
> Thanks and Regards,
> Dnyaneshwar
>

Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-01 Thread Amrith Kumar
Thierry, isn't the py35 goal continuing for Queens?

--
Amrith Kumar
amrith.ku...@gmail.com


> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Thursday, June 1, 2017 4:35 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate
> Off Paste
> 
> Amrith Kumar wrote:
> > I agree, this would be a good thing to do and something which will
> definitely improve the overall ease of upgrades. We already have two Queens
> goals though; do we want to add a third?
> 
> Hmm, we only have one so far ?
> 
> https://governance.openstack.org/tc/goals/queens/index.html
> 
> --
> Thierry Carrez (ttx)
> 




Re: [openstack-dev] [tc][ptl][all] Potential Queens Goal: Continuing Python 3.5+ Support

2017-06-01 Thread Emilien Macchi
On Wed, May 31, 2017 at 10:38 PM, Mike  wrote:
> Hello everyone,
>
> For this thread we will be discussing continuing Python 3.5+ support.
> Emilien who has been helping with coordinating our efforts here with
> Pike can probably add more here, but glancing at our goals document
> [1] it looks like we have a lot of unanswered projects’ status, but
> mostly we have python 3.5 unit test voting jobs done thanks to this
> effort! I have no idea how to use the graphite dashboard, but here’s a
> graph [2] showing success vs failure with python-35 jobs across all
> projects.

Indeed, nice work from the community to make progress on this effort.

> Glancing at that, I think it's safe to say we can start discussions on
> moving forward with having our functional tests support python 3.5.
> Some projects are already ahead in this. Let the discussions begin so
> we can aid the TC in deciding our community-wide goals
> for Queens [3].

+1 - making progress on functional tests looks like the next thing and
Queens cycle could be used. I'm happy to keep helping on coordination
if needed.

>
> [1] - https://governance.openstack.org/tc/goals/pike/python35.html
> [2] - 
> http://graphite.openstack.org/render/?width=1273=554&_salt=1496261911.56=00%3A00_20170401=23%3A59_20170531=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.SUCCESS)=sumSeries(stats.zuul.pipeline.gate.job.gate-*-python35.FAILURE)
> [3] - https://governance.openstack.org/tc/goals/index.html
>
> —
> Mike Perez



-- 
Emilien Macchi



Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-01 Thread Thierry Carrez
Note that it's technically too late to change the release model
(milestone-1 is the deadline), but since that kills two birds with one
stone, I'd be willing to grant mistral an exception (as long as it's
done before milestone-2, which is next week).

Renat Akhmerov wrote:
> Thanks Thierry.
> 
> To me it sounds like even a better release model for us. We can discuss
> it with a team at the next team meeting and make a decision.
> 
> Renat Akhmerov
> @Nokia
> 
> On 1 Jun 2017, 17:06 +0700, Thierry Carrez , wrote:
>> Renat Akhmerov wrote:
>>> On 31 May 2017, 15:08 +0700, Thierry Carrez ,
>>> wrote:
>>>>> [mistral]
>>>>> mistral - blocking sqlalchemy - milestones
>>>>
>>>> I wonder why mistral is in requirements. Looks like tripleo-common is
>>>> depending on it ? Could someone shine some light on this ? It might just
>>>> mean mistral-lib is missing a few functions, and switching the release
>>>> model of mistral itself might be overkill ?
>>>
>>> This dependency is currently needed to create custom Mistral actions. It
>>> was originally not the best architecture and one of the reasons to
>>> create 'mistral-lib' was in getting rid of dependency on ‘mistral’ by
>>> moving all that’s needed for creating actions into a lib (plus something
>>> else). The thing is that the transition is not over and APIs that we put
>>> into ‘mistral-lib’ are still experimental. The plan is to complete this
>>> initiative, including docs and needed refactoring, till the end of Pike.
>>>
>>> What possible negative consequences may we have if we switch release
>>> model to "cycle-with-intermediary”?
>>
>> There are no "negative" consequences. There are just consequences in
>> choosing a new release model, so I don't want mistral to switch to that
>> model *only* because it didn't complete moving some code out of mistral
>> proper into a more consumable mistral-lib. It feels like we wouldn't be
>> having that discussion if the code was more adequately split :)
>>
>> First, the cycle-with-intermediary model means that every tag is a
>> "release", which is expected to be consumed by users. You have to be
>> pretty sure that it works -- there won't be any release candidates to
>> protect you. This means your automated testing coverage needs to be
>> pretty good.
>>
>> Second, the cycle-with-intermediary model is less "driven" by the
>> release team -- you won't have as many reminders (like milestones), or
>> best-practice deadlines (like feature freeze) to help you. Your team is
>> basically doing release management internally, deciding when to release,
>> when to slow down, etc.
>>
>> As such, this model appeals either to very young projects (which need a
>> lot of flexibility and need to put things out fast), and very mature
>> projects (where automated testing coverage is pretty complete, release
>> liaisons take up much of the release management, and things don't change
>> that often). Projects in the middle usually prefer the
>> cycle-with-milestones model.
>>
>>> Practically, all our releases, even
>>> those made after milestones, are considered stable and I don’t see
>>> issues if we’ll be producing full releases every time.
>>
>> Yes, it sounds like you could switch to that model without too much pain.
>>
>>> Btw, how does
>>> stable branch maintenance work in this case? I guess it should be the
>>> same, one stable branch per cycle. I’d appreciate if you could
>>> clarify this.
>>
>> There is no change in terms of stable releases, you still maintain only
>> one branch per cycle. The last intermediary release in a given cycle is
>> where the stable branch for the cycle is cut.
>>
>> --
>> Thierry Carrez (ttx)
>>
> 
> 
> 


-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ironic] using keystone right - catalog, endpoints, tokens and noauth

2017-06-01 Thread Pavlo Shchelokovskyy
Hi all,

thanks Monty for the feedback. I've started this set of patches in ironic
[0] (very WIP currently), and would really like some eyes on them (not
right now, as I plan to rewrite them based on this conversation :) but
soon, I hope). Also I have some questions/comments inline:

On Thu, May 25, 2017 at 12:36 AM, Monty Taylor  wrote:

> On 05/24/2017 12:51 PM, Eric Fried wrote:
>
>> Pavlo-
>>
>> There's a blueprint [1] whereby we're trying to address a bunch of
>> these same concerns in nova.  You can see the first part in action here
>> [2].  However, it has become clear that nova is just one of the many
>> services that would benefit from get_service_url().  With the full
>> support of mordred (let's call it The Full Monty), we've got our sights
>> on moving that method into ksa itself for that purpose.
>>
>
> Yes - this has started with documenting how to consume Keystone Catalog
> and discovery properly.
>
> https://review.openstack.org/#/q/topic:version-discovery
>
> (it's a big stack)
>
> Once we're good with that - the next step is getting ksa updated to be
> able to handle the end-to-end. It does most of it today, but there are
> enough edgecases it doesn't that you wind up having to do something else,
> like efried just did in nova. The goal is to make that not necessary - and
> so that it's both possible and EASY for everyone to CORRECTLY consume
> catalog and version discovery.
>
> (more comments inline below)
>
>
> Please have a look at this blueprint and change set.  Let us know
>> if
>> your concerns would be addressed if this were available to you from ksa.
>>
>> [1]
>> https://specs.openstack.org/openstack/nova-specs/specs/pike/
>> approved/use-service-catalog-for-endpoints.html
>> [2] https://review.openstack.org/#/c/458257/
>>
>> Thanks,
>> efried
>>
>> On 05/24/2017 04:46 AM, Pavlo Shchelokovskyy wrote:
>>
>>> Hi all,
>>>
>>> There are several problems or inefficiencies in how we are dealing with
>>> auth to other services. Although it became much better in Newton, some
>>> things are still to be improved and I like to discuss how to tackle
>>> those and my ideas for that.
>>>
>>> Keystone endpoints
>>> ===
>>>
>>> Apparently since February-ish DevStack no longer sets up 'internal'
>>> endpoints for most of the services in core devstack [0].
>>> Luckily we were not broken by that right away - although when
>>> discovering a service endpoint from keystone catalog we default to
>>> 'internal' endpoint [1], for most services our devstack plugin still
>>> configures explicit service URL in the corresponding config section, and
>>> thus the service discovery from keystone never takes place (or that code
>>> path is not tested by functional/integration testing).
>>>
>>> AFAIK different endpoint types (internal vs public) are still quite used
>>> by deployments (and IMO rightfully so), so we have to continue
>>> supporting that. I propose to take the following actions:
>>>
>>
> I agree you should continue supporting it.
>
> I'm not sure it's important for you to change your defaults ... as long at
> it's possible to consistently set "interface=public" or
> "interface=internal" and have the results be correct, I think that's the
> big win.
>
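Consistently honouring the interface setting boils down to a catalog lookup along these lines; this is a stdlib-only sketch whose catalog layout mirrors what keystone returns, and in practice keystoneauth performs this lookup for you:

```python
def get_endpoint(catalog, service_type, interface="internal"):
    """Return the first endpoint URL matching service_type and interface."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == interface:
                return ep["url"]
    raise LookupError(
        "no %s endpoint for service %s in catalog" % (interface, service_type))

# Illustrative catalog entry for a baremetal (ironic) service.
catalog = [{
    "type": "baremetal",
    "endpoints": [
        {"interface": "public", "url": "https://example.com:6385"},
        {"interface": "internal", "url": "http://10.0.0.5:6385"},
    ],
}]

print(get_endpoint(catalog, "baremetal"))                      # internal URL
print(get_endpoint(catalog, "baremetal", interface="public"))  # public URL
```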
>>> - in our devstack plugin, stop setting up the direct service URLs in
>>> config, always use keystone catalog for discovery
>>>
>>
> YES
>
>>> - in every conf section related to external service add
>>> 'endpoint_type=[internal|public]' option, defaulting to 'internal', with
>>> a warning in option description (and validated on conductor start) that
>>> it will be changed to 'public' in the next release
>>>
>>
> efried just added a call to keystoneauth which will register all of the
> appropriate CONF options that are needed to request a service endpoint from
> the catalog - register_adapter_conf_options:
>
> http://git.openstack.org/cgit/openstack/keystoneauth/tree/ke
> ystoneauth1/loading/__init__.py#n39
>
> The word "adapter" in this case isn't directly important - but there are
> three general concepts in keystoneauth that relate to how you connect:
>
> auth
>  - how you authenticate - auth_type, username, password, etc.
> session
>  - how the transport layer connects - certs, timeouts, etc.
> adapter
>  - what base endpoint to mount from the catalog - service_type, interface,
> endpoint_override, api_version


Currently I'm trying to understand how to use adapters for creating
clients. It seems that not all of the clients of interest to me support this:
some ignore the 'interface', 'endpoint_override', etc. of the adapter
instance if I pass it instead of a session, and always rely on the
same/similar options passed to the client. Are there any examples of how to use
them? Also, some clients (e.g. neutron [1]) already base their
SessionClient on keystoneauth1.adapter.Adapter, so how could one create
such a client instance with adapter options loaded from 

Re: [openstack-dev] [collectd-ceilometer-plugin] dpdkstat related meters are not displayed under "ceilometer meter-list"

2017-06-01 Thread gordon chung


On 01/06/17 03:51 AM, rajeev.satyanaray...@wipro.com wrote:
> I am working on bringing up Newton version of Openstack on 3
> Nodes(Controller, Compute and Network). I am using OVS with DPDK on my
> Compute Node and to get dpdk port related statistics on my Ceilometer, I
> have configured collectd to use DPDKSTAT plugin and also enabled the
> collectd-ceilometer-plugin as mentioned in their docs. I have used
> mongodb as the database for ceilometer service. I have observed that
> “ceilometer meter-list” doesn’t display any of the dpdkstat related
> meters, but when I issue “ceilometer sample-list –m
> dpdkstat.if_rx_packets” I get a table populated with resource-id and
> other details. I am not sure why “ceilometer meter-list” is not able to
> list my new dpdkstat meters.
>

you see them in sample-list and not meter-list? if that's the case, it's 
probably because the api doesn't return the full dataset. i believe it's 
limited to 100 or 1000 results.
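Fetching everything despite such a cap means following pages client-side. A generic marker/limit sketch, where fetch_page stands in for the real HTTP call and all names are illustrative:

```python
def list_all(fetch_page, limit=100):
    """Collect every item by following marker-based pages."""
    items, marker = [], None
    while True:
        page = fetch_page(limit=limit, marker=marker)
        items.extend(page)
        if len(page) < limit:       # a short page means we reached the end
            return items
        marker = page[-1]["id"]     # continue after the last item seen

# Fake backing data and page fetcher to exercise the loop.
DATA = [{"id": i, "name": "meter-%d" % i} for i in range(250)]

def fetch_page(limit, marker):
    start = 0 if marker is None else marker + 1
    return DATA[start:start + limit]

print(len(list_all(fetch_page)))  # 250
```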

regardless, i'm going to add the obligatory note that ceilometer storage is 
unsupported and deprecated. use Gnocchi[1] or another time-series 
optimised solution. you're going to run into massive scaling issues 
(each datapoint in the mongodb driver is over 1KB), especially if you're 
collecting collectd stats.

[1] http://gnocchi.xyz

cheers,
-- 
gord


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-01 Thread Renat Akhmerov
Thanks Thierry.

To me it sounds like even a better release model for us. We can discuss it with 
a team at the next team meeting and make a decision.

Renat Akhmerov
@Nokia

On 1 Jun 2017, 17:06 +0700, Thierry Carrez , wrote:
> Renat Akhmerov wrote:
> > On 31 May 2017, 15:08 +0700, Thierry Carrez , wrote:
> > > > [mistral]
> > > > mistral - blocking sqlalchemy - milestones
> > >
> > > I wonder why mistral is in requirements. Looks like tripleo-common is
> > > depending on it ? Could someone shine some light on this ? It might just
> > > mean mistral-lib is missing a few functions, and switching the release
> > > model of mistral itself might be overkill ?
> >
> > This dependency is currently needed to create custom Mistral actions. It
> > was originally not the best architecture, and one of the reasons to
> > create 'mistral-lib' was to get rid of the dependency on ‘mistral’ by
> > moving all that’s needed for creating actions into a lib (plus something
> > else). The thing is that the transition is not over and the APIs that we
> > put into ‘mistral-lib’ are still experimental. The plan is to complete
> > this initiative, including docs and needed refactoring, by the end of Pike.
> >
> > What possible negative consequences may we have if we switch release
> > model to "cycle-with-intermediary"?
>
> There are no "negative" consequences. There are just consequences in
> choosing a new release model, so I don't want mistral to switch to that
> model *only* because it didn't complete moving some code out of mistral
> proper into a more consumable mistral-lib. It feels like we wouldn't be
> having that discussion if the code was more adequately split :)
>
> First, the cycle-with-intermediary model means that every tag is a
> "release", which is expected to be consumed by users. You have to be
> pretty sure that it works -- there won't be any release candidates to
> protect you. This means your automated testing coverage needs to be
> pretty good.
>
> Second, the cycle-with-intermediary model is less "driven" by the
> release team -- you won't have as many reminders (like milestones), or
> best-practice deadlines (like feature freeze) to help you. Your team is
> basically doing release management internally, deciding when to release,
> when to slow down, etc.
>
> As such, this model appeals either to very young projects (which need a
> lot of flexibility and need to put things out fast), and very mature
> projects (where automated testing coverage is pretty complete, release
> liaisons take up much of the release management, and things don't change
> that often). Projects in the middle usually prefer the
> cycle-with-milestones model.
>
> > Practically, all our releases, even
> > those made after milestones, are considered stable and I don’t see
> > issues if we’ll be producing full releases every time.
>
> Yes, it sounds like you could switch to that model without too much pain.
>
> > Btw, how does
> > stable branch maintenance work in this case? I guess it should be the
> > same, one stable branch per cycle. I’d appreciate if you could clarify this.
>
> There is no change in terms of stable releases, you still maintain only
> one branch per cycle. The last intermediary release in a given cycle is
> where the stable branch for the cycle is cut.
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [EXTERNAL] Re: [TripleO] custom configuration to overcloud fails second time

2017-06-01 Thread Jiří Stránský

On 31.5.2017 17:40, Dnyaneshwar Pawar wrote:

Hi Ben,

On 5/31/17, 8:06 PM, "Ben Nemec" wrote:

I think we would need to see what your custom config templates look like
as well.

Custom config templates: http://paste.openstack.org/show/64/


Hello Dnyaneshwar,

From a brief scan of that paste, I think that:

  OS::TripleO::ControllerExtraConfig: /home/stack/example_2.yaml

should rather be:

  OS::TripleO::ControllerExtraConfigPre: /home/stack/example_2.yaml


The 'Pre' hook gets a `server` parameter (not `servers`) - it's 
instantiated per server [1], not per role. There are docs [2] that 
describe the interface and cover some alternative options as well.
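As a hedged sketch of Jirka's correction, a minimal environment file for the per-server hook might look like this (the template path is taken from the paste and shown purely for illustration):

```yaml
# Registers the per-server 'Pre' hook instead of the per-role
# ExtraConfig mapping.  The hook template receives a `server`
# parameter, as described in the extra_config docs.
resource_registry:
  OS::TripleO::ControllerExtraConfigPre: /home/stack/example_2.yaml
```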


(Please ask such questions on IRC channel #tripleo on freenode, as the 
openstack-dev list is meant mainly for development discussion.)



Have a good day,

Jirka

[1] 
https://github.com/openstack/tripleo-heat-templates/blob/b344f5994fcd16e562d55e6e00ad0980c5b32621/puppet/role.role.j2.yaml#L475-L479

[2] http://tripleo.org/advanced_deployment/extra_config.html




Also note that it's generally not recommended to drop environment files
from your deploy command unless you explicitly want to stop applying
them.  So if you applied myconfig_1.yaml and then later want to apply
myconfig_2.yaml your deploy command should look like: openstack
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Yes, I agree. But in my case, even after I dropped myconfig_1.yaml while 
applying myconfig_2.yaml, the config from step 1 remained unchanged.

On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:
Hi TripleO Experts,
I performed following steps -

   1. openstack overcloud deploy --templates -e myconfig_1.yaml
   2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1  Successfully applied custom configuration to the overcloud.
Step 2 completed successfully but custom configuration is not applied to
the overcloud. And configuration applied by step 1 remains unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar




Re: [openstack-dev] [glance] nominating Mike Fedosin for glance core

2017-06-01 Thread Brian Rosmaita
Having heard only affirmative responses, I've added Mikhail Fedosin to
the Glance core group, with all the rights and privileges pertaining
thereto.

Welcome back to the Glance core team, Mike!

I'd also like to express my personal thanks to you for stepping up to
help out the Glance project in these difficult times.

cheers,
brian

On Mon, May 29, 2017 at 10:01 AM, Mikhail Fedosin  wrote:
> Thank you very much for your trust! I will try not to let you down and to
> stay with the project through these difficult times.
>
> Despite the fact that most of the time I'm involved in the Glare project, I
> agree that they have much in common. And at least they both share
> glance_store library. For this reason, I'm thinking of implementing the
> multi-storage support, where operators can create various settings for
> different stores. For instance, having multiple connected ceph data stores.
> The rest of the time I plan to review the code, write tests and fix minor
> bugs.
>
> I'm glad to be a part of the community!
>
> Best,
> Mike
>
> On Fri, May 26, 2017 at 7:28 AM, feilong  wrote:
>>
>> Welcome back, Mike.
>>
>>
>> On 26/05/17 16:21, Kekane, Abhishek wrote:
>>
>> +1, agree with Nikhil.
>>
>>
>>
>> Abhishek
>>
>>
>>
>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>> Sent: Friday, May 26, 2017 6:04 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [glance] nominating Mike Fedosin for glance
>> core
>>
>>
>>
>> This is great news. Always +2 for Mike; he's been great (dev, glancer,
>> stacker...) all these years.  Let's not wait so long for reinstatement if
>> folks are on-board, as having another core will only help.
>>
>>
>>
>> On Thu, May 25, 2017 at 11:53 AM, Brian Rosmaita
>>  wrote:
>>
>> As you've no doubt read elsewhere on the ML, we've lost several glance
>> cores recently due to employment changes.  Luckily, Mike Fedosin
>> informed me at today's Glance weekly meeting that he will have time
>> for the next few months to devote some time to Glance reviewing.
>>
>> For those who don't know Mike (mfedosin on IRC), he was a Glance core
>> for several years.  He provided a lot of notes that were used to write
>> the Glance architecture documentation that is so helpful to new
>> contributors, so he's extremely knowledgeable about the design
>> patterns used in Glance.
>>
>> Most recently, Mike's been working on the Glare project, which has a
>> lot in common with Glance.  While Mike says he can't commit much time
>> to Glance development, he has proposed porting some of the Glare tests
>> over to Glance, which will certainly help with our code coverage, and
>> would be a helpful addition to Glance.
>>
>> (Mike agreed at today's Glance meeting not to propose re-integrating
>> Glare into the Glance project until the Queens PTG (if at all), so I'm
>> not worried about that being a distraction during the Pike cycle when
>> we are so short-handed.)
>>
>> I'd like to reinstate Mike as a Glance core contributor at the next
>> Glance weekly meeting.  Please reply to this message with any comments
>> or concerns before 23:59 UTC on Wednesday 31 May 2017.
>>
>> cheers,
>> brian
>>
>>
>>
>>
>>
>> --
>>
>> --
>>
>> Thanks,
>>
>> Nikhil
>>
>>
>>
>>
>>
>>
>> --
>> Cheers & Best regards,
>> Feilong Wang (王飞龙)
>> --
>> Senior Cloud Software Engineer
>> Tel: +64-48032246
>> Email: flw...@catalyst.net.nz
>> Catalyst IT Limited
>> Level 6, Catalyst House, 150 Willis Street, Wellington
>> --
>>
>>

Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread Julien Danjou
On Wed, May 31 2017, gordon chung wrote:


[…]

> i'm not entirely sure this is an issue, just thought i'd raise it to 
> discuss.

It's a really interesting point you raise. I never thought we could do
that but indeed, we could. Maybe we built a great architecture after
all. ;-)

Easy solution: disable refresh. Problem solved.

Also, that means you could not push measures to the central API endpoint,
or that would be a problem. There might be a lot of little problems like
that we need to solve.

> regardless, thoughts on maybe writing up deployment strategies like 
> this? or make everyone who reads this to erase their minds and use this 
> for 'consulting' fees :P

Yes, write docs or log an issue at least. It's the best way to keep a
public track of ideas and what's going on, since it's what people are
going to read and search.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [EXTERNAL] Re: [TripleO] custom configuration to overcloud fails second time

2017-06-01 Thread Dnyaneshwar Pawar
Another observation -
Steps:
1. openstack overcloud deploy --templates -e myconfig_1.yaml
2. openstack overcloud update overcloud
3. openstack overcloud deploy --templates -e myconfig_2.yaml

With these steps in sequence, the configuration from myconfig_2.yaml is 
applied successfully (though the config from step 1 remains unchanged).
I am not sure why we need step 2 above.


Thanks,
Dnyaneshwar

From: "dnyaneshwar.pa...@veritas.com"
Date: Wednesday, May 31, 2017 at 9:10 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [EXTERNAL] Re: [openstack-dev] [TripleO] custom configuration to 
overcloud fails second time

Hi Ben,

On 5/31/17, 8:06 PM, "Ben Nemec" wrote:

I think we would need to see what your custom config templates look like
as well.

Custom config templates: http://paste.openstack.org/show/64/


Also note that it's generally not recommended to drop environment files
from your deploy command unless you explicitly want to stop applying
them.  So if you applied myconfig_1.yaml and then later want to apply
myconfig_2.yaml your deploy command should look like: openstack
overcloud deploy --templates -e myconfig_1.yaml -e myconfig_2.yaml

Yes, I agree. But in my case, even after I dropped myconfig_1.yaml while 
applying myconfig_2.yaml, the config from step 1 remained unchanged.

On 05/31/2017 07:53 AM, Dnyaneshwar Pawar wrote:
Hi TripleO Experts,
I performed following steps -

  1. openstack overcloud deploy --templates -e myconfig_1.yaml
  2. openstack overcloud deploy --templates -e myconfig_2.yaml

Step 1  Successfully applied custom configuration to the overcloud.
Step 2 completed successfully but custom configuration is not applied to
the overcloud. And configuration applied by step 1 remains unchanged.

*Do I need to do anything before performing step 2?*


Thanks and Regards,
Dnyaneshwar




Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Rabi Mishra
On Thu, Jun 1, 2017 at 3:39 PM, Chris Dent  wrote:

> On Wed, 31 May 2017, Doug Hellmann wrote:
>
>> Yeah, it sounds like the current organization of the repo is not
>> ideal in terms of equal playing field for all of our project teams.
>> I would be fine with all of the interop tests being in a plugin
>> together, or of saying that the tempest repo should only contain
>> those tests and that others should move to their own plugins. If we're
>> going to reorganize all of that, we should decide what new structure we
>> want and work it into the goal.
>>
>
> I feel like the discussion about the interop tests has strayed this
> conversation from the more general point about plugin "fairness" and
> allowed the vagueness in plans for interop to control our thinking
> and discussion about options in the bigger view.
>
> 
> This is pretty standard for an OpenStack conversation:
>
> * introduce a general idea or critique
> * someone latches on to one small aspect of that idea that presents
>   some challenges, narrowing the context
> * that latching and those challenges are used to kill the introspection
>   that the general idea was pursuing, effectively killing any
>   opportunities for learning and discovery that could lead to
>   improvement or innovation
>
> This _has_ to stop. We're at my three year anniversary in the
> community and this has been and still is my main concern with the
> how we collaborate. There is so much stop energy and chilling effect
> in the way we discuss things in OpenStack. So much fatigue over
> issues being brought up "over and over again" or things being
> discussed without immediate solutions in mind. So what! Time moves
> forward which means the context for issues is always changing.
> Discussion is how we identify problems! Discussion is how we
> get good solutions! 
>
> It's clear from this thread and other conversations that the
> management of tempest plugins is creating a multiplicity of issues
> and confusions:
>
> * Some projects are required to use plugins and some are not. This
>   creates classes of projects.
>
> * Those second class projects now need to move their plugins to
>   other repos because rules.
>
> * Projects with plugins need to put their tests in their new repos,
>   except for some special tests which will be identified by a vague
>   process.
>
> * Review of changes is intermittent and hard to track because
>   stakeholders need to think about multiple locations, without
>   guidance.
>
> * People who want to do validation with tempest need to gather stuff
>   from a variety of locations.
>
> * Tempest is a hammer used for lots of different nails, but the
>   type of nail varies over time and with the whimsy of policy.
>
> * Discussion of using something other than tempest for interop is
>   curtailed by policy which appears to be based in "that's the way
>   it is done".
>
> A lot of this results, in part, from there being no single guiding
> pattern and principle for how (and where) the tests are to be
> managed. When there's a choice between one, some and all, "some" is
> almost always the wrong way to manage something. "some" is how we do
> tempest (and a fair few other OpenStack things).
>
> If it is the case that we want some projects to not put their tests
> in the main tempest repo then the only conceivable pattern from a
> memorability, discoverability, and equality standpoint is actually
> for all the tests to be in plugins.
>
> If that isn't possible (and it is clear there are many reasons why
> that may be the case) then we need to be extra sure that we explore
> and uncover the issues that the "some" approach presents and provide
> sufficient documentation, tooling, and guidance to help people get
> around them. And that we recognize and acknowledge the impact it has.
>
> If the answer to that is "who is going to do that?" or "who has the
> time?" then I ask you to ask yourself why we think the "non-core"
> projects have time to fiddle about with tempest plugins?
>
> +1

> And finally, I actually don't have too strong of a position in the
> case of tempest and tempest plugins. What I take issue with is the
> process whereby we discuss and decide these things and characterize
> the various projects.
>
> If I have any position on tempest at all it is that we should limit
> it to gross cloud validation and maybe interop testing, and projects
> should manage their own integration testing in tree using whatever
> tooling they feel is most appropriate. If that turns out to be
> tempest, cool.
>
>  I think it's a fair position and IMO should be the way forward.

>
> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
>

Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Chris Dent

On Wed, 31 May 2017, Doug Hellmann wrote:

Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.


I feel like the discussion about the interop tests has strayed this
conversation from the more general point about plugin "fairness" and
allowed the vagueness in plans for interop to control our thinking
and discussion about options in the bigger view.


This is pretty standard for an OpenStack conversation:

* introduce a general idea or critique
* someone latches on to one small aspect of that idea that presents
  some challenges, narrowing the context
* that latching and those challenges are used to kill the introspection
  that the general idea was pursuing, effectively killing any
  opportunities for learning and discovery that could lead to
  improvement or innovation

This _has_ to stop. We're at my three year anniversary in the
community and this has been and still is my main concern with how
we collaborate. There is so much stop energy and chilling effect
in the way we discuss things in OpenStack. So much fatigue over
issues being brought up "over and over again" or things being
discussed without immediate solutions in mind. So what! Time moves
forward which means the context for issues is always changing.
Discussion is how we identify problems! Discussion is how we
get good solutions! 



It's clear from this thread and other conversations that the
management of tempest plugins is creating a multiplicity of issues
and confusions:

* Some projects are required to use plugins and some are not. This
  creates classes of projects.

* Those second class projects now need to move their plugins to
  other repos because rules.

* Projects with plugins need to put their tests in their new repos,
  except for some special tests which will be identified by a vague
  process.

* Review of changes is intermittent and hard to track because
  stakeholders need to think about multiple locations, without
  guidance.

* People who want to do validation with tempest need to gather stuff
  from a variety of locations.

* Tempest is a hammer used for lots of different nails, but the
  type of nail varies over time and with the whimsy of policy.

* Discussion of using something other than tempest for interop is
  curtailed by policy which appears to be based in "that's the way
  it is done".

A lot of this results, in part, from there being no single guiding
pattern and principle for how (and where) the tests are to be
managed. When there's a choice between one, some and all, "some" is
almost always the wrong way to manage something. "some" is how we do
tempest (and a fair few other OpenStack things).

If it is the case that we want some projects to not put their tests
in the main tempest repo then the only conceivable pattern from a
memorability, discoverability, and equality standpoint is actually
for all the tests to be in plugins.
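(For readers less familiar with the mechanics being debated: a tempest plugin is just a package that registers itself through an entry point. A rough sketch, with the package and class names entirely made up for illustration:)

```ini
# Hypothetical setup.cfg fragment for a project shipping its tempest
# tests as a plugin.  The entry-point group 'tempest.test_plugins' is
# what tempest scans for; the names on the right are invented.
[entry_points]
tempest.test_plugins =
    my_service_tests = my_service.tests.tempest.plugin:MyServiceTempestPlugin
```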

If that isn't possible (and it is clear there are many reasons why
that may be the case) then we need to be extra sure that we explore
and uncover the issues that the "some" approach presents and provide
sufficient documentation, tooling, and guidance to help people get
around them. And that we recognize and acknowledge the impact it has.

If the answer to that is "who is going to do that?" or "who has the
time?" then I ask you to ask yourself why we think the "non-core"
projects have time to fiddle about with tempest plugins?

And finally, I actually don't have too strong of a position in the
case of tempest and tempest plugins. What I take issue with is the
process whereby we discuss and decide these things and characterize
the various projects.

If I have any position on tempest at all it is that we should limit
it to gross cloud validation and maybe interop testing, and projects
should manage their own integration testing in tree using whatever
tooling they feel is most appropriate. If that turns out to be
tempest, cool.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [murano][barbican] Encrypting sensitive properties

2017-06-01 Thread Paul Bourke
Thanks for that, Kirill. Optional sounds good. Right now I'm leaning 
towards encrypting the full object model in the database rather than 
selective attributes; I can't think of a reason not to do this, and it 
makes things more transparent and straightforward for the user. I've 
added a spec for this at https://review.openstack.org/#/c/469467/ if you 
have a chance to review.
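As a sketch of what the lightweight, Glance-style "roll your own" alternative amounts to for full-object-model encryption: the helper names, JSON serialization, and key handling below are my assumptions for illustration, not Murano code, and they assume the `cryptography` package.

```python
import json

from cryptography.fernet import Fernet


def encrypt_object_model(model, key):
    """Serialize the whole object model and encrypt it as one opaque blob."""
    return Fernet(key).encrypt(json.dumps(model).encode("utf-8"))


def decrypt_object_model(token, key):
    """Decrypt and deserialize a previously encrypted object model."""
    return json.loads(Fernet(key).decrypt(token).decode("utf-8"))


if __name__ == "__main__":
    # In practice the key would come from Castellan/Barbican rather than
    # being generated and held next to the data.
    key = Fernet.generate_key()
    model = {"appPassword": "s3cret", "name": "myApp"}
    blob = encrypt_object_model(model, key)
    assert decrypt_object_model(blob, key) == model
```

The trade-off discussed above is exactly where `key` lives: handing key management to Castellan/Barbican versus managing it ourselves alongside the database.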


Regards,
-Paul

On 31/05/17 17:59, Kirill Zaitsev wrote:
As long as this integration is optional (i.e. no Barbican, no 
encryption), it feels OK to me. We have a very similar integration with 
Congress, yet you can deploy Murano with or without it.


As for the way to convey this, I believe metadata attributes were 
designed to answer use cases like this one; see 
https://docs.openstack.org/developer/murano/appdev-guide/murano_pl/metadata.html for 
more info.


Regards, Kirill

On 25 May 2017 at 18:49, Paul Bourke wrote:



Hi all,

I've been looking at a blueprint[0] logged for Murano which involves 
encrypting parts of the object model stored in the database that may 
contain passwords or sensitive information.


I wanted to see if people had any thoughts or preferences on how this 
should be done. On the face of it, it seems Barbican is a good choice 
for solving this, and have read a lengthy discussion around this on 
the mailing list from earlier this year[1]. Overall the benefits of 
Barbican seem to be that we can handle the encryption and management 
of secrets in a common and standard way, and avoid having to implement 
and maintain this ourselves. The main drawback for Barbican seems to 
be that we impose another service dependency on the operator, though 
this complaint seems to be in some way appeased by Castellan, which 
offers alternative backends to just Barbican (though unsure right now 
what those are?). The alternative to integrating Barbican/Castellan is 
to use a more lightweight "roll your own" encryption such as what 
Glance is using[2].


After we decide on how we want to implement the encryption there is 
also the question of how best to expose this feature to users. My 
current thought is that we can use Murano attributes, so application 
authors can do something like this:


- name: appPassword
 type: password
 encrypt: true

This would of course be transparent to the end user of the 
application. Any thoughts on both issues are very welcome, I hope to 
have a prototype in the next few days which may help solidify this also.


Regards,
-Paul.

[0] 
https://blueprints.launchpad.net/murano/+spec/allow-encrypting-of-muranopl-properties
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110192.html
[2] 
https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/glance/common/crypt.py







Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-01 Thread Thierry Carrez
Renat Akhmerov wrote:
> On 31 May 2017, 15:08 +0700, Thierry Carrez wrote:
>>> [mistral]
>>> mistral - blocking sqlalchemy - milestones
>>
>> I wonder why mistral is in requirements. Looks like tripleo-common is
>> depending on it ? Could someone shine some light on this ? It might just
>> mean mistral-lib is missing a few functions, and switching the release
>> model of mistral itself might be overkill ?
> 
> This dependency is currently needed to create custom Mistral actions. It
> was originally not the best architecture, and one of the reasons to
> create 'mistral-lib' was to get rid of the dependency on ‘mistral’ by
> moving all that’s needed for creating actions into a lib (plus something
> else). The thing is that the transition is not over and the APIs that we
> put into ‘mistral-lib’ are still experimental. The plan is to complete
> this initiative, including docs and needed refactoring, by the end of Pike.
> 
> What possible negative consequences may we have if we switch release
> model to "cycle-with-intermediary"?

There are no "negative" consequences. There are just consequences in
choosing a new release model, so I don't want mistral to switch to that
model *only* because it didn't complete moving some code out of mistral
proper into a more consumable mistral-lib. It feels like we wouldn't be
having that discussion if the code was more adequately split :)
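For context on the split being discussed, a custom action written against the mistral-lib interface looks roughly like the following. The action itself is a made-up example; the `Action` base class and `run(context)` signature follow mistral-lib's documented interface, with a local stand-in so the sketch is self-contained.

```python
try:
    from mistral_lib import actions
except ImportError:
    # Stand-in mirroring mistral-lib's base class, only so this sketch
    # runs without mistral-lib installed; real plugins import the library.
    class actions(object):
        class Action(object):
            def run(self, context):
                raise NotImplementedError


class EchoAction(actions.Action):
    """Toy custom action: returns its input unchanged."""

    def __init__(self, output):
        self.output = output

    def run(self, context):
        # Real actions receive a context carrying auth/security information.
        return self.output
```

Before mistral-lib, the equivalent base class lived in mistral proper, which is the dependency tripleo-common is currently carrying.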

First, the cycle-with-intermediary model means that every tag is a
"release", which is expected to be consumed by users. You have to be
pretty sure that it works -- there won't be any release candidates to
protect you. This means your automated testing coverage needs to be
pretty good.

Second, the cycle-with-intermediary model is less "driven" by the
release team -- you won't have as many reminders (like milestones), or
best-practice deadlines (like feature freeze) to help you. Your team is
basically doing release management internally, deciding when to release,
when to slow down, etc.

As such, this model appeals either to very young projects (which need a
lot of flexibility and need to put things out fast), and very mature
projects (where automated testing coverage is pretty complete, release
liaisons take up much of the release management, and things don't change
that often). Projects in the middle usually prefer the
cycle-with-milestones model.

> Practically, all our releases, even
> those made after milestones, are considered stable and I don’t see
> issues if we’ll be producing full releases every time.

Yes, it sounds like you could switch to that model without too much pain.

> Btw, how does
> stable branch maintenance work in this case? I guess it should be the
> same, one stable branch per cycle. I’d appreciate if you could clarify this.

There is no change in terms of stable releases, you still maintain only
one branch per cycle. The last intermediary release in a given cycle is
where the stable branch for the cycle is cut.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [cinder][nova][os-brick] Testing for proposed iSCSI OS-Brick code

2017-06-01 Thread Gorka Eguileor
On 31/05, Matt Riedemann wrote:
> On 5/31/2017 6:58 AM, Gorka Eguileor wrote:
> > Hi,
> >
> > As some of you may know I've been working on improving iSCSI connections
> > on OpenStack to make them more robust and prevent them from leaving
> > leftovers on attach/detach operations.
> >
> > There are a couple of posts [1][2] going in more detail, but a good
> > summary would be that to fix this issue we require a considerable rework
> > in OS-Brick, changes in Open iSCSI, Cinder, Nova and specific tests.
> >
> > Relevant changes for those projects are:
> >
> > - Open iSCSI: iscsid behavior is not a perfect fit for the OpenStack use
> >case, so a new feature was added to disable automatic scans that added
> >unintended devices to the systems.  Done and merged [3][4], it will be
> >available on RHEL with iscsi-initiator-utils-6.2.0.874-2.el7
> >
> > - OS-Brick: rework iSCSI to make it robust on unreliable networks, to
> >add a `force` detach option that prioritizes leaving a clean system
> >over possible data loss, and to support the new Open iSCSI feature.
> >Done and pending review [5][6][7]
> >
> > - Cinder: Handle some attach/detach errors a little better and add
> >support to the force detach option for some operations where data loss
> >on error is acceptable, ie: create volume from image, restore backup,
> >etc. Done and pending review [8][9]
> >
> > - Nova: I haven't looked into the code here, but I'm sure there will be
> >cases where using the force detach operation will be useful.
> >
> > - Tests: While we do have tempest tests that verify that attach/detach
> >operations work both in Nova and in cinder volume creation operations,
> >they are not meant to test the robustness of the system, so new tests
> >will be required to validate the code.  Done [10]
> >
> > Proposed tests are simplified versions of the ones I used to validate
> > the code; but hey, at least these are somewhat readable ;-)
> > Unfortunately they are not in line with the tempest mission since they
> > are not meant to be run in a production environment due to their
> > disruptive nature while injecting errors.  They need to be run
> > sequentially and without any other operations running on the deployment.
> > They also run sudo commands via local bash or SSH for the verification
> > and error generation bits.
> >
> > We are testing create volume from image and attaching a volume to an
> > instance under the following networking error scenarios:
> >
> >   - No errors
> >   - All paths have 10% incoming packets dropped
> >   - All paths have 20% incoming packets dropped
> >   - All paths have 100% incoming packets dropped
> >   - Half the paths have 20% incoming packets dropped
> >   - The other half of the paths have 20% incoming packets dropped
> >   - Half the paths have 100% incoming packets dropped
> >   - The other half of the paths have 100% incoming packets dropped
> >
> > There are single execution versions as well as 10 consecutive operations
> > variants.
> >
> > Since these are big changes I'm sure we would all feel a lot more
> > confident to merge them if storage vendors would run the new tests to
> > confirm that there are no issues with their backends.
> >
> > Unfortunately to fully test the solution you may need to build the
> > latest Open-iSCSI package and install it in the system, then you can
> > just use an all-in-one DevStack with a couple of changes in the local.conf:
> >
> > enable_service tempest
> >
> > CINDER_REPO=https://review.openstack.org/p/openstack/cinder
> > CINDER_BRANCH=refs/changes/45/469445/1
> >
> > LIBS_FROM_GIT=os-brick
> >
> > OS_BRICK_REPO=https://review.openstack.org/p/openstack/os-brick
> > OS_BRICK_BRANCH=refs/changes/94/455394/11
> >
> > [[post-config|$CINDER_CONF]]
> > [multipath-backend]
> > use_multipath_for_image_xfer=true
> >
> > [[post-config|$NOVA_CONF]]
> > [libvirt]
> > volume_use_multipath = True
> >
> > [[post-config|$KEYSTONE_CONF]]
> > [token]
> > expiration = 14400
> >
> > [[test-config|$TEMPEST_CONFIG]]
> > [volume-feature-enabled]
> > multipath = True
> > [volume]
> > build_interval = 10
> > multipath_type = $MULTIPATH_VOLUME_TYPE
> > backend_protocol_tcp_port = 3260
> > multipath_backend_addresses = $STORAGE_BACKEND_IP1,$STORAGE_BACKEND_IP2
> >
> > Multinode configurations are also supported using SSH with user/password or
> > private key to introduce the errors or check that the systems didn't leave 
> > any
> > leftovers, the tests can also run a cleanup command between tests, etc., but
> > that's beyond the scope of this email.
> >
> > Then you can run them all from /opt/stack/tempest with:
> >
> >   $ cd /opt/stack/tempest
> >   $ OS_TEST_TIMEOUT=7200 ostestr -r cinder.tests.tempest.scenario.test_multipath.*
> >
> > But I would recommend first running the simplest one without errors and
> > manually checking that the 

Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Thierry Carrez
Graham Hayes wrote:
> On 01/06/17 01:30, Matthew Treinish wrote:
>> TBH, it's a bit premature to have the discussion. These additional programs 
>> do
>> not exist yet, and there is a governance road block around this. Right now 
>> the
>> set of projects that can be used defcore/interopWG is limited to the set of 
>> projects in:
>>
>> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
> 
> Sure - but that is a solved problem, when the interop committee is
> ready to propose them, they can add projects into that tag. Or am I
> misunderstanding [1] (again)?

I think you understand it well. The Board/InteropWG should propose
additions/removals of this tag, which will then be approved by the TC:

https://governance.openstack.org/tc/reference/tags/tc_approved-release.html#tag-application-process

> [...]
>> We had a forum session on it (I can't find the etherpad for the session) 
>> which
>> was pretty speculative because it was about planning the new programs. Part 
>> of
>> that discussion was around the feasibility of using tests in plugins and 
>> whether
>> that would be desirable. Personally, I was in favor of doing that for some of
>> the proposed programs because of the way they were organized it was a good 
>> fit.
>> This is because the proposed new programs were extra additions on top of the
>> base existing interop program. But it was hardly a definitive discussion.
> 
> Which will create 2 classes of testing for interop programs.

FWIW I would rather have a single way of doing "tests used in trademark
programs" without differentiating between old and new trademark programs.

I fear that we are discussing solutions before defining the problem. We
want:

1- Decentralize test maintenance, through more tempest plugins, to
account for limited QA resources
2- Additional codereview constraints and approval rules for tests that
happen to be used in trademark programs
3- Discoverability/ease-of-install of the set of tests that happen to be
used in trademark programs
4- A git repo layout that can be simply explained, for new teams to
understand

It feels like the current git repo layout (result of that 2016-05-04
resolution) optimizes for 2 and 3, which kind of works until you add
more trademark programs, at which point it breaks 1 and 4.

I feel like you could get 2 and 3 without necessarily using git repo
boundaries (using Gerrit approval rules and some tooling to install/run
subset of tests across multiple git repos), which would allow you to
optimize git repo layout to get 1 and 4...

Or am I missing something ?

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [doc] Docs team meeting

2017-06-01 Thread Alexandra Settle
Hey everyone,

The docs meeting will continue today in #openstack-meeting as scheduled 
(Thursday at 16:00 UTC). For more details, and the agenda, see the meeting 
page:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

The meeting chair will be me! Hope you can all make it ☺

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Graham Hayes
On 01/06/17 01:30, Matthew Treinish wrote:
> On Wed, May 31, 2017 at 03:45:52PM +, Jeremy Stanley wrote:
>> On 2017-05-31 15:22:59 + (+), Jeremy Stanley wrote:
>>> On 2017-05-31 09:43:11 -0400 (-0400), Doug Hellmann wrote:
>>> [...]
 it's news to me that they're considering reversing course. If the
 QA team isn't going to continue, we'll need to figure out what
 that means and potentially find another group to do it.
>>>
>>> I wasn't there for the discussion, but it sounds likely to be a
>>> mischaracterization. I'm going to assume it's not true (or much more
>>> nuanced) at least until someone responds on behalf of the QA team.
>>> This particular subthread is only going to go further into the weeds
>>> until it is grounded in some authoritative details.
>>
>> Apologies for replying to myself, but per discussion[*] with Chris
>> in #openstack-dev I'm adjusting the subject header to make it more
>> clear which particular line of speculation I consider weedy.
>>
>> Also in that brief discussion, Graham made it slightly clearer that
>> he was talking about pushback on the tempest repo getting tests for
>> new trademark programs beyond "OpenStack Powered Platform,"
>> "OpenStack Powered Compute" and "OpenStack Powered Object Storage."
> 
> TBH, it's a bit premature to have the discussion. These additional programs do
> not exist yet, and there is a governance road block around this. Right now the
> set of projects that can be used defcore/interopWG is limited to the set of 
> projects in:
> 
> https://governance.openstack.org/tc/reference/tags/tc_approved-release.html

Sure - but that is a solved problem, when the interop committee is
ready to propose them, they can add projects into that tag. Or am I
misunderstanding [1] (again)?

This *is* the time to discuss it, as these programs are aimed as
advisory for the 2018 spec - which means we need to solve these
problems, and soon.

> We had a forum session on it (I can't find the etherpad for the session) which
> was pretty speculative because it was about planning the new programs. Part of
> that discussion was around the feasibility of using tests in plugins and 
> whether
> that would be desirable. Personally, I was in favor of doing that for some of
> the proposed programs because of the way they were organized it was a good 
> fit.
> This is because the proposed new programs were extra additions on top of the
> base existing interop program. But it was hardly a definitive discussion.

Which will create 2 classes of testing for interop programs.

> We will have to have discussions about how we're going to actually implement
> the additional programs when we start to create them, but that's not happening
> yet.
> 

Except it is - at least one team has submitted capabilities, and others
are doing so at the moment.

1 - https://review.openstack.org/#/c/368240/

> -Matt Treinish
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-01 Thread Thierry Carrez
Amrith Kumar wrote:
> I agree, this would be a good thing to do and something which will definitely 
> improve the overall ease of upgrades. We already have two Queens goals 
> though; do we want to add a third?

Hmm, we only have one so far ?

https://governance.openstack.org/tc/goals/queens/index.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Onboarding rooms postmortem, what did you do, what worked, lessons learned

2017-06-01 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2017-05-19 09:22:07 -0400 (-0400), Sean Dague wrote:
> [...]
>> the project,
> 
> I hosted the onboarding session for the Infrastructure team. For
> various logistical reasons discussed on the planning thread before
> the PTG, it was a shared session with many other "horizontal" teams
> (QA, Requirements, Stable, Release). We carved the 90-minute block
> up into individual subsessions for each team, though due to
> scheduling conflicts I was only able to attend the second half
> (Release and Infra). Attendance was also difficult to gauge; we had
> several other regulars from the Infra team present in the audience,
> people associated with other teams with which we shared the room,
> and an assortment of new faces but hard to tell which session(s)
> they were mainly there to see.

Doug and I ran the "Release management" segment of that shared slot.

>> what you did in the room,
> 
> I prepared a quick (5-10 minute) "help wanted" intro slide deck to
> set the stage, then transitioned to a less formal mix of Q and
> open discussion of some of the exciting things we're working on
> currently. I felt like we didn't really get as many solid questions
> as I was hoping, but the back-and-forth with other team members in
> the room about our priority efforts was definitely a good way to
> fill in the gaps between.

We had a quick slidedeck to introduce what the release team actually
does (not that much), what are the necessary skills (not really ninjas)
and a base intro on our process. The idea was to inspire others to join
the team by making it more approachable, and stating that new faces were
definitely needed.

>> what you think worked,
> 
> The format wasn't bad. Given the constraints we were under for this,
> sharing seems to have worked out pretty well for us and possibly
> seeded the audience with people who were interested in what those
> other teams had to say and stuck around to see me ramble.

I liked the room setup (classroom style) which is conducive to learning.

>> what you would have done differently
> [...]
> 
> The goal I had was to drum up some additional solid contributors to
> our team, though the upshot (not necessarily negative, just not what
> I expected) was that we seemed to get more interest from "adjacent
> technologies" representatives interested in what we were doing and
> how to replicate it in their ecosystems. If that ends up being a
> significant portion of the audience going forward, it's possible we
> could make some adjustments to our approach in an attempt to entice
> them to collaborate further on co-development of our tools and
> processes.

Attracting the right set of people in the room is definitely a
challenge. I don't know if regrouping several teams into the same slot
was a good idea in that respect. Maybe have shorter slots for smaller
teams, but still give them their own slot in the schedule ?

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-06-01 Thread Itxaka Serrano Garcia


On 31/05/17 15:12, Akihiro Motoki wrote:

Hi all,

As discussed last month [1], we agree that each neutron-related
dashboard has its own repository.
I would like to move this forward on FWaaS and VPNaaS
as the horizon team plans to split them out as horizon plugins.

A couple of questions hit me.

(1) launchpad project
Do we create a new launchpad project for each dashboard?
Right now, FWaaS and VPNaaS projects use 'neutron' for their bug tracking
for historical reasons, and it sometimes causes confusion. There are two
choices: one is to accept dashboard bugs in the 'neutron' launchpad,
and the other is to have a separate launchpad project.

My vote is to create a separate launchpad project.
It allows users to search and file bugs easily.

+1


(2) repository name

Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
names for you?
Most horizon related projects use -dashboard or -ui as their repo names.
I personally prefer -dashboard as it is consistent with the
OpenStack dashboard (the official name of horizon).
On the other hand, I know some folks prefer -ui
as the name is shorter.
Any preference?


+1 for dashboard, it goes with the openstack-dashboard theme.


(3) governance
neutron-fwaas project is under the neutron project.
Does it sound okay to have neutron-fwaas-dashboard under the neutron project?
This is what the neutron team does for neutron-lbaas-dashboard before
and this model is adopted in most horizon plugins (like trove, sahara
or others).

+1

(4) initial core team

My thought is to have neutron-fwaas/vpnaas-core and horizon-core as
the initial core team.
The release team and the stable team follow what we have for
neutron-fwaas/vpnaas projects.
Sounds reasonable?

+1


Finally, I already prepare the split out version of FWaaS and VPNaaS
dashboards in my personal github repos.
Once we agree in the questions above, I will create the repositories
under git.openstack.org.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html#115200


Thank for doing this work!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][horizon] FWaaS/VPNaaS dashboard split out from horizon

2017-06-01 Thread Takashi Yamamoto
On Wed, May 31, 2017 at 10:12 PM, Akihiro Motoki  wrote:
> Hi all,
>
> As discussed last month [1], we agree that each neutron-related
> dashboard has its own repository.
> I would like to move this forward on FWaaS and VPNaaS
> as the horizon team plans to split them out as horizon plugins.
>
> A couple of questions hit me.
>
> (1) launchpad project
> Do we create a new launchpad project for each dashboard?
> Right now, FWaaS and VPNaaS projects use 'neutron' for their bug tracking
> for historical reasons, and it sometimes causes confusion. There are two
> choices: one is to accept dashboard bugs in the 'neutron' launchpad,
> and the other is to have a separate launchpad project.
>
> My vote is to create a separate launchpad project.
> It allows users to search and file bugs easily.

+1

>
> (2) repository name
>
> Are neutron-fwaas-dashboard / neutron-vpnaas-dashboard good repository
> names for you?
> Most horizon related projects use -dashboard or -ui as their repo 
> names.
> I personally prefer -dashboard as it is consistent with the
> OpenStack dashboard
> (the official name of horizon). On the other hand, I know some folks
> prefer -ui
> as the name is shorter.
> Any preference?

+1 for -dashboard.
-ui sounds too generic to me.

>
> (3) governance
> neutron-fwaas project is under the neutron project.
> Does it sound okay to have neutron-fwaas-dashboard under the neutron project?
> This is what the neutron team does for neutron-lbaas-dashboard before
> and this model is adopted in most horizon plugins (like trove, sahara
> or others).

+1

>
> (4) initial core team
>
> My thought is to have neutron-fwaas/vpnaas-core and horizon-core as
> the initial core team.
> The release team and the stable team follow what we have for
> neutron-fwaas/vpnaas projects.
> Sounds reasonable?

+1

>
>
> Finally, I already prepare the split out version of FWaaS and VPNaaS
> dashboards in my personal github repos.
> Once we agree in the questions above, I will create the repositories
> under git.openstack.org.

great, thank you.

>
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-April/thread.html#115200
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [collectd-ceilometer-plugin] dpdkstat related meters are not displayed under "ceilometer meter-list"

2017-06-01 Thread rajeev.satyanaray...@wipro.com
Hi All,

I am working on bringing up the Newton version of OpenStack on 3 nodes (Controller, 
Compute and Network). I am using OVS with DPDK on my Compute Node and to get 
dpdk port related statistics on my Ceilometer, I have configured collectd to 
use DPDKSTAT plugin and also enabled the collectd-ceilometer-plugin as 
mentioned in their docs. I have used mongodb as the database for ceilometer 
service. I have observed that "ceilometer meter-list" doesn't display any of 
the dpdkstat related meters, but when I issue "ceilometer sample-list -m 
dpdkstat.if_rx_packets" I get a table populated with resource-id and other 
details. I am not sure why "ceilometer meter-list" is not able to list my new 
dpdkstat meters.

Please find below my setup details:

Node 1: Controller
All the controller based services are running (mysqld, rabbitmq-server, 
mongodb, keystone, glance, dashboard, 
ceilometer-[notification/central/collector])

Node 2: Compute
All compute based services are running (nova-compute, ovs-agent, 
openstack-ceilometer-compute.service)

When I enable csv based write plugin in collectd, I could see all the csv files 
getting generated for all the dpdkstat counters and it also has data in it.
One observation: I see that dpdkstat counters like 
rx_size_1024_to_max_packets etc. are getting populated as resource-ids for the 
meter dpdkstat.if_rx_packets. Is this behavior correct? Or should 
rx_size_1024_to_max_packets be considered a meter?

After enabling both collectd and the collectd-ceilometer-plugin, should I 
modify or update the meters.yaml or pipeline.yaml to specify dpdkstat related 
meters?

Thanks in advance!

Regards,
Rajeev.

The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain proprietary, confidential or privileged information. If you are not the 
intended recipient, you should not disseminate, distribute or copy this e-mail. 
Please notify the sender immediately and destroy all copies of this message and 
any attachments. WARNING: Computer viruses can be transmitted via email. The 
recipient should check this email and any attachments for the presence of 
viruses. The company accepts no liability for any damage caused by any virus 
transmitted by this email. www.wipro.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSC][ironic][mogan][nova] mogan and nova co-existing

2017-06-01 Thread Zhenguo Niu
On Wed, May 31, 2017 at 10:01 PM, Jay Pipes  wrote:

> On 05/31/2017 01:31 AM, Zhenguo Niu wrote:
>
>> On Wed, May 31, 2017 at 12:20 PM, Ed Leafe wrote:
>>
>> > On May 30, 2017, at 9:36 PM, Zhenguo Niu wrote:
>>
>> > as placement is not splitted out from nova now, and there would be
>> users who only want a baremetal cloud, so we don't add resources to
>> placement yet, but it's easy for us to turn to placement to match the node
>> type with mogan flavors.
>>
>> Placement is a separate service, independent of Nova. It tracks
>> Ironic nodes as individual resources, not as a "pretend" VM. The
>> Nova integration for selecting an Ironic node as a resource is still
>> being developed, as we need to update our view of the mess that is
>> "flavors", but the goal is to have a single flavor for each Ironic
>> machine type, rather than the current state of flavors pretending
>> that an Ironic node is a VM with certain RAM/CPU/disk quantities.
>>
>>
>> Yes, I understand the current efforts of improving the baremetal nodes
>> scheduling. It's not conflict with mogan's goal, and when it is done, we
>> can share the same scheduling strategy with placement :)
>>
>> Mogan is a service for a specific group of users who really want a
>> baremetal resource instead of a generic compute resource, on API side, we
>> can expose RAID, advanced partitions, nics bonding, firmware management,
>> and other baremetal specific capabilities to users. And unlike nova's host
>> based availability zone, host aggregates, server groups (ironic nodes share
>> the same host), mogan can make it possible to divide baremetal nodes into
>> such groups, and make Rack aware for affinity and anti-affinity when
>> scheduling.
>>
> Zhenguo Niu brings up a very good point here. Currently, all Ironic nodes
> are associated with a single host aggregate in Nova, because of the
> vestigial notion that a compute *service* (ala the nova-compute worker) was
> equal to the compute *node*.
>
> In the placement API, of course, there's no such coupling. A placement
> aggregate != a Nova host aggregate.
>
> So, technically Ironic (or Mogan) can call the placement service to create
> aggregates that match *its* definition of what an aggregate is (rack, row,
> cage, zone, DC, whatever). Furthermore, Ironic (or Mogan) can associate
> Ironic baremetal nodes to one or more of those placement aggregates to get
> around Nova host aggregate to compute service coupling.
>
> That said, there's still lots of missing pieces before placement gets
> affinity/anti-affinity support...
>

Thanks Jay, we are also considering how to leverage the placement
aggregates, and if possible, we would like to contribute in this part to
make placement work well for mogan :)


>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-06-01 Thread Shinobu Kinjo
Hi Carlos,

Since I thought that this is a kind of documentation bug, I filed it
as a Doc bug.
Now I'm seeing that you were assigned to it...

Regards,
Shinobu Kinjo


On Tue, May 30, 2017 at 5:48 PM, Carlos Camacho Gonzalez
 wrote:
> Hi Shinobu,
>
> It's really helpful to get feedback from customers, please, can you give me
> details about the failures you are having?. If so, sending me directly some
> logs would be great.
>
> Thanks,
> Carlos.
>
>
> On Mon, May 29, 2017 at 9:07 AM, Shinobu Kinjo  wrote:
>>
>> Here is feedback from the customer.
>>
>> Following the guide [1], undercloud restoration did not succeed.
>>
>> Swift objects could not be downloaded after restoration, even
>> though they followed all of the backup and restore procedures for
>> their system described in [1].
>>
>> Because of that, I'm not 100% sure whether `tar -czf` is good enough to
>> take a backup of the system.
>>
>> It would be a great help to do a dry-run against the backed-up data so
>> that we can make sure it is completely fine.
>>
>> [1]
>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/back_up_and_restore_the_undercloud
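A dry-run check like the one requested above could be as simple as walking the archive and reading every member; this is only a sketch, assuming a gzipped tarball produced by `tar -czf`, and it proves the archive is intact, not that its contents are sufficient for a restore.

```python
# Sketch: verify that an undercloud backup tarball is readable end to
# end, without extracting it to disk.
import tarfile

def verify_backup(path):
    """Return the member count if every entry in the archive is readable."""
    count = 0
    with tarfile.open(path, "r:gz") as tar:
        for member in tar:
            if member.isfile():
                # Reading each file forces the gzip/tar checksums for
                # that entry to be validated.
                f = tar.extractfile(member)
                while f.read(65536):
                    pass
            count += 1
    return count
```

Running this right after taking the backup would catch a truncated or corrupted archive before it is ever needed for a restore.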
>>
>>
>> On Wed, May 24, 2017 at 4:26 PM, Carlos Camacho Gonzalez
>>  wrote:
>> > Hey folks,
>> >
>> > Based on what we discussed yesterday in the TripleO weekly team meeting,
>> > I'll like to propose a blueprint to create 2 features, basically to
>> > backup
>> > and restore the Undercloud.
>> >
>> > I'll like to follow in the first iteration the available docs for this
>> > purpose [1][2].
>> >
>> > With the addition of backing up the config files on /etc/ specifically
>> > to be
>> > able to recover from a failed Undercloud upgrade, i.e. recover the repos
>> > info removed in [3].
>> >
>> > I'll like to target this for P as I think I have enough time for
>> > coding/testing these features.
>> >
>> > I already have created a blueprint to track this effort
>> > https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>> >
>> > What do you think about it?
>> >
>> > Thanks,
>> > Carlos.
>> >
>> > [1]:
>> >
>> > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_openstack_platform/7/html/back_up_and_restore_red_hat_enterprise_linux_openstack_platform/restore
>> >
>> > [2]:
>> >
>> > https://docs.openstack.org/developer/tripleo-docs/post_deployment/backup_restore_undercloud.html
>> >
>> > [3]:
>> >
>> > https://docs.openstack.org/developer/tripleo-docs/installation/updating.html
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] revised structure of the heat-templates repository. Suggestions

2017-06-01 Thread Lance Haig

Hi,

One question I have not asked on this thread is:

What would you like to see changed within the repository, and do you have 
a suggestion on how to fix it?



Regards

Lance


On 31.05.17 16:03, Lance Haig wrote:

Hi,


On 24.05.17 18:43, Zane Bitter wrote:

On 19/05/17 11:00, Lance Haig wrote:

Hi,

As we know the heat-templates repository has become out of date in some
respects and also has been difficult to maintain from a community
perspective.

For me the repository is quite confusing, with different styles used to
show certain aspects and other styles for older template 
examples.


This, I think, leads to confusion and perhaps causes many people to give
up on heat, as things are not that clear.

From discussions in other threads and on the IRC channel I have seen
that there is a need to change things a bit.


This is why I would like to start the discussion that we rethink the
template example repository.

I would like to open the discussion with my suggestions.

  * We need to differentiate templates that work on earlier versions of
heat from those that work on the currently supported versions.


I typically use the heat_template_version for this. Technically this 
is entirely independent of what resource types are available in Heat. 
Nevertheless, if I submit e.g. a template that uses new resources 
only available in Ocata, I'll set 'heat_template_version: ocata' even 
if the template doesn't contain any Ocata-only intrinsic functions. 
We could make that a convention.

That is one way to achieve this.
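As a sketch of the convention described above, a template that needs Ocata-only resource types would declare the newer version even without Ocata-only intrinsic functions; this is a hypothetical fragment, not from the repository:

```yaml
# Hypothetical fragment illustrating the convention discussed above:
# the version line advertises the newest release the template needs,
# even when no version-specific intrinsic functions are used.
heat_template_version: ocata

resources:
  example:
    # A resource type only available from Ocata onwards would go here;
    # OS::Heat::None is just a stand-in to keep the fragment valid.
    type: OS::Heat::None
```

Readers could then filter templates by heat_template_version to find examples that run on their cloud.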



  o I have suggested that we create directories that relate to
different versions, so that you can create a stable set of
examples for each heat version. They should always remain
stable for that version, and once it goes out of support they
can remain there.


I'm reluctant to move existing things around unless its absolutely 
necessary, because there are a lot of links out in the wild to 
templates that will break. And they're links directly to the Git 
repo, it's not like we publish them somewhere and could add redirects.


Although that gives me an idea: what if we published them somewhere? 
We could make templates actually discoverable by publishing a list of 
descriptions (instead of just the names like you get through browsing 
the Git repo). And we could even add some metadata to indicate what 
versions of Heat they run on.
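A rough sketch of what such a published index entry could look like (every field name here is invented for illustration; no such schema exists today):

```yaml
# Hypothetical catalogue entry for published example templates,
# carrying the description and version metadata discussed above.
templates:
  - path: hot/autoscaling.yaml
    description: Autoscaling group of web servers behind a load balancer
    minimum_heat_version: newton
    tested_releases: [newton, ocata]
```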


It would be better to do something like this. One of the biggest 
learning curves for our users has been understanding what is 
available in which version of heat and then finding example 
templates that match their version.
I wanted to create the heat-lib library so that people could easily 
find working examples for their version of heat, and also use the 
library intact as-is so that they can get up to speed really quickly.

This has enabled people to become productive much faster with heat.


      o This would mean people can find their version of heat and know
        that these templates all work on that version.


This would mean keeping multiple copies of each template and 
maintaining them all. I don't think this is the right way to do this 
- to maintain old stuff what you need is a stable branch. That's also 
how you're going to be able to test against old versions of OpenStack 
in the gate.
Well, I am not sure that this would be needed, unless there are many 
backports of new resources to older versions of the templates.
E.g. would the project backport the Newton conditionals to the Liberty 
version of heat? I am assuming not.


That means that once a new version of heat is released, the template set 
becomes locked; you create a copy with the new template version, run 
regression tests, and once that is complete you start adding the 
changes that are specific to the new version of heat.


I know that initially it would be quite a bit of work to set up and to 
test the versions, but once they are locked you don't touch them 
again.


As I suggested in the other thread, I'd be OK with moving deprecated 
stuff to a 'deprecated' directory and then eventually deleting it. 
Stable branches would then correctly reflect the status of those 
templates at each previous release.
That makes sense. I would like to clarify the above discussion first 
before we look at how to deprecate unsupported versions. I say that 
because many of our customers are still running Liberty :-)




  * We should consider adding a docs section that includes training
    for new users.
      o I know that there are documents hosted in the developer area and
        these could be utilized, but I would think having a documentation
        section in the repository would be a good way to keep the
        examples and the documents in the same place.
      o This docs directory could also host some training for new users
        and old ones on new features etc., in a similar line to what is
        here in this repo

Re: [openstack-dev] [qa][tc][all] Tempest to reject trademark tests

2017-06-01 Thread Matthew Treinish
On Thu, Jun 01, 2017 at 12:32:03PM +0900, Ghanshyam Mann wrote:
> On Thu, Jun 1, 2017 at 9:46 AM, Matthew Treinish  wrote:
> > On Wed, May 31, 2017 at 04:24:14PM +, Jeremy Stanley wrote:
> >> On 2017-05-31 17:18:54 +0100 (+0100), Graham Hayes wrote:
> >> [...]
> >> > Trademark programs are trademark programs - we should have a unified
> >> > process for all of them. Let's not make the same mistakes again by
> >> > creating classes of projects / programs. I do not want this to be
> >> > a distinction as we move forward.
> >>
> >> This I agree with. However I'll be surprised if a majority of the QA
> >> team disagree on this point (logistic concerns with how to curate
> >> this over time I can understand, but that just means they need to
> >> interest some people in working on a manageable solution).
> >
> > +1 I don't think anyone disagrees with this. There is a logistical concern
> > with the way the new proposed programs are going to be introduced. Quite
> > frankly it's too varied and broad and I don't think we'll have enough people
> > working on this space to help maintain it in the same manner.
> >
> > It's the same reason we worked on the plugin decomposition in the first 
> > place.
> > You can easily look at the numbers of tests to see this:
> >
> > https://raw.githubusercontent.com/mtreinish/qa-in-the-open/lca2017/tests_per_proj.png
> >
> > Which shows things before the plugin decomposition (and before the big 
> > tent) Just
> > because we said we'd support all the incubated and integrated projects in 
> > tempest
> > didn't mean people were contributing and/or the tests were well maintained.
> >
> > But, as I said elsewhere in this thread this is a bit too early to have the
> > conversation because the new interop programs don't actually exist yet.
> 
> Yes, there is no question on goal to have a unified process for all.
> As Jeremy, Matthew mentioned, key things here is manageability issues.
> 
> We know contributors in QA are reducing cycle by cycle. I might be
> overthinking, but I thought about the QA team's situation when we have
> around 30-40 trademark projects and all tests in the Tempest repo.
> Personally I am ok to have tests in the Tempest repo or a dedicated
> interop plugin repo which can be controlled by QA at some level. But we

I actually don't think a dedicated interop plugin is a good idea. It doesn't
actually solve anything, because the tests are going to be the same and the
same people are going to be maintaining them. All you did was move it into a
different repo, which solves none of the problems. What I was referring to was
exploring a more distributed approach to handling the tests (like what we did
with the plugin decomposition for higher level services). That is the only way
I see us addressing the work overload problem. But, as I said before, this is
still too early to talk about because there aren't any defined new programs
yet, just the idea for them and a rough plan. We're still talking very much in
the abstract about everything...

-Matt Treinish

> need dedicated participation from interop + project liaisons (I am not
> sure that worked well in the past, but with TC help it might work :)).
> 
> I can recall that the QA team has many patches on the plugin side to
> improve or fix them, but many of them get no active reviews or much
> attention from the project team. I am afraid of the same happening for
> trademark projects also.
> 
> Maybe a broad direction on the trademark program and its scope can
> help to estimate the quantity of programs and tests which QA teams
> need to maintain.


