[openstack-dev] [daisycloud-core] Cancelled: IRC meeting 0800UTC Jan. 6 2017

2017-01-05 Thread hu . zhijiang
Hi Team,

Since there are no topics to discuss, I suggest we cancel the meeting.

B.R.,
Zhijiang



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] glance v2 support?

2017-01-05 Thread Tim Bell

On 6 Jan 2017, at 05:04, Rabi Mishra wrote:

On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi wrote:
Greetings Heat folks!

My question is simple:
When do you plan to support Glance v2?
https://review.openstack.org/#/c/240450/

The spec looks stale, while Glance v1 was deprecated in Newton (and v2
was introduced in Kilo!).


Hi Emilien,

I think we've not been able to move to v2 due to v1/v2 incompatibility[1] with 
respect to the location[2] property. Moving to v2 would break all existing 
templates using that property.

I've seen several discussions around that without any conclusion.  I think we 
can support a separate v2 image resource and deprecate the current one, unless 
there is a better path available.


[1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
[2] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/glance/image.py#L107-L112


Would this be backwards compatible (i.e. the old image resource would still 
work without taking advantage of the new functions) or would heat users have to 
change their templates?

It would be good if there is a way to minimise the user impact.

Tim


As a user, I need Glance v2 support so I can remove Glance Registry
from my deployment and run pure v2 everywhere in my cloud.

Thanks for your help,
--
Emilien Macchi




--
Regards,
Rabi Mishra




Re: [openstack-dev] [heat] project specific question for the next user survey

2017-01-05 Thread Rico Lin
Dear Team,
Based on your suggestions, we have opened a simple doodle to collect opinions:
http://doodle.com/poll/fns89s3ee4za3saw
Please help pick the question you think is better.
Let's vote until the end of this Friday (everyone is welcome!)

2017-01-05 1:06 GMT+08:00 Rico Lin :

> Dear Team
> Let's put some thoughts here[1] in the next 24h.
> Any ideas/suggestions for this heat adoption question are welcome.
> It would be great if we can get some really useful information from users
> by asking the right question.
>
> [1] https://etherpad.openstack.org/p/ocata-heat-user-survey
>
>
> 2017-01-02 12:18 GMT+08:00 Rabi Mishra :
>
>> Hi All,
>>
>> We have an opportunity to submit a heat adoption related question (for
>> those who are USING, TESTING, or INTERESTED in heat) to be included in the
>> User Survey.
>>
>> Please provide your suggestions/questions. *The deadline for this is 9th
>> Jan.*
>>
>> --
>> Regards,
>> Rabi Mishra
>>
>>
>>
>
>
> --
> May The Force of OpenStack Be With You,
>
>
>
> *Rico Lin*, Chief OpenStack Technologist, inwinSTACK (irc: ricolin)
>
>


-- 
May The Force of OpenStack Be With You,



*Rico Lin*, Chief OpenStack Technologist, inwinSTACK (irc: ricolin)


Re: [openstack-dev] [heat] Rolling upgrades vs. duplication of prop data

2017-01-05 Thread Rabi Mishra
On Thu, Jan 5, 2017 at 10:28 PM, Zane Bitter  wrote:

> On 05/01/17 11:41, Crag Wolfe wrote:
>
>> Hi,
>>
>> I have a patch[1] to support the de-duplication of resource properties
>> data between events and resources. In the ideal rolling-upgrade world,
>> we would be writing data to the old and new db locations, only reading
>> from the old in the first release (let's assume Ocata). The problem is
>> that in this particular case, we would be duplicating a lot of data, so
>> [1] for now does not take that approach. I.e., it is not rolling-upgrade
>> friendly.
>>
>> So, we need to decide what to do for Ocata:
>>
>> A. Support assert:supports-upgrade[2] and resign ourselves to writing
>> duplicated resource prop. data through Pike (following the standard
>> strategy of write to old/new and read from old, write to old/new and
>> read from new, write/read from new over O,P,Q).
>>
>> B. Push assert:supports-upgrade back until Pike, and avoid writing
>> resource prop. data in multiple locations in Ocata.
>>
>
> +1
>
> Rabi mentioned that we don't yet have tests in place to claim the tag in
> Ocata anyway, so I vote for making it easy on ourselves until we have to.
> Anything that involves shifting stuff between tables like this inevitably
> gets pretty gnarly.
>
>
Yeah, as per governance requirements to claim the tag we would need gate
tests to validate that mixed-version services work together properly[1]. We
would probably need a multi-node grenade job running services of n-1/n
releases.

I could not find one for any other project to refer to, though there are
a few projects that already have this tag.


[1]
https://governance.openstack.org/tc/reference/tags/assert_supports-rolling-upgrade.html#requirements
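For anyone unfamiliar with option A, the write-old/new, read-old progression over O/P/Q can be sketched as a tiny data-access shim. This is a hedged illustration only, not Heat's actual code: `PropDataStore` and the dict-backed "tables" are stand-ins for the legacy and de-duplicated DB locations.

```python
# Illustrative sketch of the standard three-release rolling-upgrade
# strategy for moving data between tables (option A above).
class PropDataStore:
    """phase is "ocata" (write both, read old), "pike" (write both,
    read new) or "queens" (write/read new only)."""

    def __init__(self, phase):
        self.phase = phase
        self.old = {}  # stand-in for the legacy table
        self.new = {}  # stand-in for the de-duplicated table

    def write(self, key, value):
        if self.phase in ("ocata", "pike"):
            self.old[key] = value  # data stays duplicated through Pike
        self.new[key] = value

    def read(self, key):
        table = self.old if self.phase == "ocata" else self.new
        return table[key]
```

The duplication Crag is worried about is visible here: through Pike every write lands in both stores, which is exactly the cost option B avoids by deferring the tag.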

C. DB triggers.

>
> -2! -2!
>
>
> I vote for B. I'm pretty sure there is not much support for C (count me
>> in that group :), but throwing it out there just in case.
>>
>> Thanks,
>>
>> --Crag
>>
>> [1] https://review.openstack.org/#/c/363415/
>>
>> [2] https://review.openstack.org/#/c/407989/
>>
>>
>>
>>
>>
>>
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [heat] glance v2 support?

2017-01-05 Thread Rabi Mishra
On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi  wrote:

> Greetings Heat folks!
>
> My question is simple:
> When do you plan to support Glance v2?
> https://review.openstack.org/#/c/240450/
>
> The spec looks stale, while Glance v1 was deprecated in Newton (and v2
> was introduced in Kilo!).
>
>
Hi Emilien,

I think we've not been able to move to v2 due to v1/v2 incompatibility[1]
with respect to the location[2] property. Moving to v2 would break all
existing templates using that property.

I've seen several discussions around that without any conclusion.  I think
we can support a separate v2 image resource and deprecate the current one,
unless there is a better path available.


[1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
[2] https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/glance/image.py#L107-L112
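To make the incompatibility concrete, here is a rough sketch of how the same "image with a location" intent maps onto the two client APIs. This is illustrative only: the helper names are mine, the client arguments are duck-typed stand-ins for already-authenticated glanceclient v1/v2 instances, and the v2 `add_location` call assumes the deployment enables multiple image locations (`show_multiple_locations=True`), which is worth verifying per cloud.

```python
# Hypothetical helpers contrasting the v1 and v2 flows for an image
# backed by an external location.

def create_image_v1(client, name, url):
    """v1: 'location' is accepted directly as a create-time property."""
    return client.images.create(
        name=name,
        disk_format="qcow2",
        container_format="bare",
        location=url,
    )

def create_image_v2(client, name, url):
    """v2: create the image first, then attach the location in a
    separate call (assumes multiple locations are enabled server-side)."""
    image = client.images.create(
        name=name,
        disk_format="qcow2",
        container_format="bare",
    )
    client.images.add_location(image.id, url, {})
    return image
```

Any v1-era template relying on the create-time `location` property would need something like the two-step v2 flow instead, which is why a straight switch would break existing templates.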


> As a user, I need Glance v2 support so I can remove Glance Registry
> from my deployment and run pure v2 everywhere in my cloud.
>
> Thanks for your help,
> --
> Emilien Macchi
>
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Britt Houser (bhouser)
I think you’re giving a great example of my point that we’re not yet at the 
stage where we can say, “Any tool should be able to deploy kolla containers”.  
Right?

From: Pete Birley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, January 5, 2017 at 9:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

I'll reply to Britt's comments, and then duck out, unless explicitly asked back, 
as I don't want to (totally) railroad this conversation:

The Kolla containers entry-point is a great example of how the field has moved 
on. While it was initially required, in the Kubernetes world the Kolla ABI is 
actually more of a hindrance than a help, as it makes the containers much more of 
a 'black box' to use. In the other OpenStack-on-Kubernetes projects I 
contribute to, and my own independent work, we actually just define the 
entry point to the container directly in the k8s manifest and make no use of 
Kolla's entry point and config mechanisms, either running another 'init' 
container to build and bind-mount the configuration (Harbor), or using a Kubernetes 
ConfigMap object to achieve the same result (OpenStack Helm). It would be 
perfectly possible for Kolla Ansible (and indeed Salt) to take a similar 
approach - meaning that rather than maintaining an ABI that works for all platforms, 
Kolla would be free to just ensure that the required binaries were present in 
images.

I agree that this cannot happen overnight, but think that when appropriate we 
should take stock of where we are and how to plot a course that lets all of our 
projects flourish without competing for resources, or being so entwined that we 
become technically paralyzed and overloaded.

Sorry, Sam and Michal! You can have your thread back now :)

On Fri, Jan 6, 2017 at 1:17 AM, Britt Houser (bhouser) 
> wrote:
I think both Pete and Steve make great points and it should be our long term 
vision.  However, I lean more with Michael that we should make that a separate 
discussion, and it’s probably better done further down the road.  Yes, Kolla 
containers have come a long way, and the ABI has been stable for a while, but 
the vast majority of that “a while” was with a single deployment tool: 
ansible.  Now we have kolla-k8s and kolla-salt.  Neither one is yet as fully 
featured as ansible, which to me means I don’t think we can say for sure that the ABI 
won’t need to change as we try to support many deployment tools.  (Someone 
remind me, didn’t kolla-mesos change the ABI?)  Anyway, the point is I don’t 
think we’re at a point of maturity to be certain the ABI won’t need changing.  
When we have 2-3 deployment tools with enough feature parity to say, “Any tool 
should be able to deploy kolla containers”, then I think it makes sense to have 
that discussion.  I just don’t think we’re there yet.  And until that point, 
changes to the ABI will be quite painful if each project is outside of the 
kolla umbrella, IMHO.

Thx,
britt

From: Pete Birley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, January 5, 2017 at 6:47 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Also coming from the perspective of a Kolla-Kubernetes contributor, I am 
worried about the scope that Kolla is extending itself to.

Moving from a single repo to multiple repos has made the situation much 
better, but by operating under a single umbrella I feel that we may potentially 
be significantly limiting the potential for each deliverable. Alex Schultz, 
Steve and Sam raise some good points here.

The interdependency between the projects is causing issues, the current 
reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and 
unsustainable in my opinion. This is both because it limits the flexibility 
that we have as Kolla-Kubernetes developers, but also because it places burden 
and rigidity on Kolla-Ansible. This will ultimately prevent both projects from 
being able to take advantage of the capabilities offered to them by the 
deployment mechanism they use.

Like Steve, I don't think the addition of Kolla-Salt should affect me, and as a 
result don't feel I should have any say in the project. That said, I'd really 
like to see it happen in one form or another - as having a wide variety of 
complementary projects and tooling for OpenStack deployment can only be a good 
thing for the community if correctly managed.

When Kolla started it was very experimental, containers (In their modern form) 
were a relatively 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Pete Birley
I'll reply to Britt's comments, and then duck out, unless explicitly asked
back, as I don't want to (totally) railroad this conversation:

The Kolla containers entry-point is a great example of how the field has
moved on. While it was initially required, in the Kubernetes world the
Kolla ABI is actually more of a hindrance than a help, as it makes the
containers much more of a 'black box' to use. In the other OpenStack-on-
Kubernetes projects I contribute to, and my own independent work, we
actually just define the entry point to the container directly in the k8s
manifest and make no use of Kolla's entry point and config mechanisms,
either running another 'init' container to build and bind-mount the
configuration (Harbor), or using a Kubernetes ConfigMap object to achieve the
same result (OpenStack Helm). It would be perfectly possible for Kolla
Ansible (and indeed Salt) to take a similar approach - meaning that rather
than maintaining an ABI that works for all platforms, Kolla would be free to
just ensure that the required binaries were present in images.

I agree that this cannot happen overnight, but think that when appropriate
we should take stock of where we are and how to plot a course that lets all
of our projects flourish without competing for resources, or being so
entwined that we become technically paralyzed and overloaded.

Sorry, Sam and Michal! You can have your thread back now :)

On Fri, Jan 6, 2017 at 1:17 AM, Britt Houser (bhouser) 
wrote:

> I think both Pete and Steve make great points and it should be our long
> term vision.  However, I lean more with Michael that we should make that a
> separate discussion, and it’s probably better done further down the road.
> Yes, Kolla containers have come a long way, and the ABI has been stable for
> a while, but the vast majority of that “a while” was with a single
> deployment tool: ansible.  Now we have kolla-k8s and kolla-salt.  Neither
> one is yet as fully featured as ansible, which to me means I don’t think we
> can say for sure that the ABI won’t need to change as we try to support many
> deployment tools.  (Someone remind me, didn’t kolla-mesos change the ABI?)
> Anyway, the point is I don’t think we’re at a point of maturity to be
> certain the ABI won’t need changing.  When we have 2-3 deployment tools
> with enough feature parity to say, “Any tool should be able to deploy kolla
> containers”, then I think it makes sense to have that discussion.  I just
> don’t think we’re there yet.  And until that point, changes to the ABI will
> be quite painful if each project is outside of the kolla umbrella, IMHO.
>
>
>
> Thx,
>
> britt
>
>
>
> *From: *Pete Birley 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Thursday, January 5, 2017 at 6:47 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [tc][kolla] Adding new deliverables
>
>
>
> Also coming from the perspective of a Kolla-Kubernetes contributor, I am
> worried about the scope that Kolla is extending itself to.
>
>
>
> Moving from a single repo to multiple repos has made the situation much
> better, but by operating under a single umbrella I feel that we may
> potentially be significantly limiting the potential for each deliverable.
> Alex Schultz, Steve and Sam raise some good points here.
>
>
>
> The interdependency between the projects is causing issues, the current
> reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and
> unsustainable in my opinion. This is both because it limits the flexibility
> that we have as Kolla-Kubernetes developers, but also because it places
> burden and rigidity on Kolla-Ansible. This will ultimately prevent both
> projects from being able to take advantage of the capabilities offered to
> them by the deployment mechanism they use.
>
>
>
> Like Steve, I don't think the addition of Kolla-Salt should affect me, and
> as a result don't feel I should have any say in the project. That said, I'd
> really like to see it happen in one form or another - as having a wide
> variety of complementary projects and tooling for OpenStack deployment can
> only be a good thing for the community if correctly managed.
>
>
>
> When Kolla started it was very experimental, containers (in their modern
> form) were a relatively new construct, and it took on the audacious task of
> trying to package and deploy OpenStack using the tooling that was available
> at the time. I really feel that this effort has succeeded admirably, and
> conversations like this are a result of that. Kolla is one of the most
> active projects in OpenStack, with two deployment mechanisms being
> developed currently, and hopefully to increase soon with a salt based
> deployment and potentially even more on the horizon.
>
>
>
> With this in mind, I return to my original point and wonder if we 

[openstack-dev] [Vitrage] About alarms reported by datasources and alarms generated by the vitrage evaluator

2017-01-05 Thread yinliyin
Hi all, 


   Vitrage generates alarms according to the templates. All the alarms raised by 
vitrage have the type "vitrage". Suppose Nagios has an alarm A. If alarm A is 
raised by the vitrage evaluator according to the action part of a scenario, the 
type of alarm A is "vitrage". If Nagios later reported alarm A, a new alarm A 
with type "Nagios" would be generated in the entity graph. There would then be 
two vertices for the same alarm in the graph. And we have to define two alarm 
entities, two relationships, and two scenarios in the template file to make the 
alarm propagation procedure work.

   This makes it inconvenient to describe the fault model of a system with a 
lot of alarms. How can we solve this problem?
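One possible direction, purely as a sketch (the field names and the `merge_alarm` helper are made up for illustration and do not reflect Vitrage's actual graph schema), would be to key alarm vertices by alarm name plus resource, so that a datasource report confirms the deduced vertex instead of adding a twin:

```python
# Hypothetical de-duplication: one vertex per (alarm name, resource),
# with a datasource report upgrading a vitrage-deduced vertex in place.

def merge_alarm(graph, alarm):
    key = (alarm["name"], alarm["resource_id"])  # identity of the alarm
    existing = graph.get(key)
    if existing is None:
        graph[key] = alarm
    elif existing["type"] == "vitrage" and alarm["type"] != "vitrage":
        # A real datasource report confirms the deduced alarm: keep one
        # vertex and record its true origin instead of adding a second.
        existing["type"] = alarm["type"]
        existing["confirmed"] = True
    return graph[key]
```

With something like this, one alarm entity, one relationship, and one scenario would suffice in the template, since both origins collapse onto the same vertex.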

殷力殷 YinLiYin

上海市浦东新区碧波路889号中兴研发大楼D502 
D502, ZTE Corporation R&D Center, 889# Bibo Road, 
Zhangjiang Hi-tech Park, Shanghai, P.R.China, 201203 
T: +86 21 68896229
M: +86 13641895907 
E: yinli...@zte.com.cn
www.zte.com.cn


[openstack-dev] [Vitrage] About introducing a suspect state for alarms

2017-01-05 Thread yinliyin
Hi all, 



I have a question while learning vitrage: 


A, B, C, D, E are alarms. Consider the following alarm propagation model:

A --> B
C --> D
B or D --> E

When alarm E is reported by the system, from the above model we know that 
one or both of the following conditions may be true:

   1. A is triggered
   2. C is triggered

That is, alarms A and C are suspected to have been triggered. A suspect 
state for an alarm is valuable to the system administrator, because one could 
check the system to find out whether a suspected alarm has really been 
triggered.

In the current vitrage template, we cannot describe this situation. An 
alarm has only two states: triggered or not triggered.

Could we introduce a suspect state for alarms?
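The suspect computation itself is straightforward to sketch: walk the propagation model backwards from the observed alarm. This is illustrative Python, not Vitrage code; `CAUSES` encodes the A --> B, C --> D, B or D --> E model above.

```python
# Backward walk over a propagation model: which alarms are suspects
# given one observed alarm?

CAUSES = {            # effect -> possible direct triggers
    "B": {"A"},
    "D": {"C"},
    "E": {"B", "D"},
}

def suspects(observed):
    """All alarms that may be active given that `observed` was reported."""
    frontier, seen = {observed}, set()
    while frontier:
        alarm = frontier.pop()
        for trigger in CAUSES.get(alarm, ()):
            if trigger not in seen:
                seen.add(trigger)
                frontier.add(trigger)
    return seen
```

Here `suspects("E")` yields {"A", "B", "C", "D"}: B and D may themselves be unreported intermediates, and A and C are the suspect root causes the administrator would go and check.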

殷力殷 YinLiYin

上海市浦东新区碧波路889号中兴研发大楼D502 
D502, ZTE Corporation R&D Center, 889# Bibo Road, 
Zhangjiang Hi-tech Park, Shanghai, P.R.China, 201203 
T: +86 21 68896229
M: +86 13641895907 
E: yinli...@zte.com.cn
www.zte.com.cn


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Britt Houser (bhouser)
I think both Pete and Steve make great points and it should be our long term 
vision.  However, I lean more with Michael that we should make that a separate 
discussion, and it’s probably better done further down the road.  Yes, Kolla 
containers have come a long way, and the ABI has been stable for a while, but 
the vast majority of that “a while” was with a single deployment tool: 
ansible.  Now we have kolla-k8s and kolla-salt.  Neither one is yet as fully 
featured as ansible, which to me means I don’t think we can say for sure that 
the ABI won’t need to change as we try to support many deployment tools.  
(Someone remind me, didn’t kolla-mesos change the ABI?)  Anyway, the point is 
I don’t think we’re at a point of maturity to be certain the ABI won’t need 
changing.  When we have 2-3 deployment tools with enough feature parity to say, 
“Any tool should be able to deploy kolla containers”, then I think it makes 
sense to have that discussion.  I just don’t think we’re there yet.  And until 
that point, changes to the ABI will be quite painful if each project is 
outside of the kolla umbrella, IMHO.

Thx,
britt

From: Pete Birley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, January 5, 2017 at 6:47 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tc][kolla] Adding new deliverables

Also coming from the perspective of a Kolla-Kubernetes contributor, I am 
worried about the scope that Kolla is extending itself to.

Moving from a single repo to multiple repos has made the situation much 
better, but by operating under a single umbrella I feel that we may potentially 
be significantly limiting the potential for each deliverable. Alex Schultz, 
Steve and Sam raise some good points here.

The interdependency between the projects is causing issues, the current 
reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and 
unsustainable in my opinion. This is both because it limits the flexibility 
that we have as Kolla-Kubernetes developers, but also because it places burden 
and rigidity on Kolla-Ansible. This will ultimately prevent both projects from 
being able to take advantage of the capabilities offered to them by the 
deployment mechanism they use.

Like Steve, I don't think the addition of Kolla-Salt should affect me, and as a 
result don't feel I should have any say in the project. That said, I'd really 
like to see it happen in one form or another - as having a wide variety of 
complementary projects and tooling for OpenStack deployment can only be a good 
thing for the community if correctly managed.

When Kolla started it was very experimental, containers (in their modern form) 
were a relatively new construct, and it took on the audacious task of trying to 
package and deploy OpenStack using the tooling that was available at the time. 
I really feel that this effort has succeeded admirably, and conversations like 
this are a result of that. Kolla is one of the most active projects in 
OpenStack, with two deployment mechanisms being developed currently, and 
hopefully to increase soon with a salt based deployment and potentially even 
more on the horizon.

With this in mind, I return to my original point and wonder if we may be better 
moving from our current structure of Kolla-deploy(x) to deploy(x)-Kolla and 
redefine the governance of these deliverables, turning them into freestanding 
projects. I think this would offer several potential advantages, as it would 
allow teams to form tighter bonds with the tools and communities they use (i.e. 
Kubernetes/Helm, Ansible or Salt). This would also make it easier for these 
projects to use upstream components where available (e.g. Ceph, RabbitMQ, and 
MariaDB) which are (and should be) in many cases better than the artifacts we 
can produce. To this end, I have been working with the Ceph community to get 
their Kubernetes Helm implementation to the point where we can use it for our 
own work, and would love to see more of this. It benefits not only us by 
offloading support to the upstream project, but gives them a vested interest in 
supporting us and also helps provide better quality tooling for the entire open 
source ecosystem.

This should also allow Kolla itself to become much more streamlined, and 
focused simply on producing docker containers for consumption by the community, 
and make the artifacts produced potentially much less opinionated and more 
attractive to other projects. And being honest, I have a real desire for this 
activity to eventually be taken on by the relevant OpenStack projects 
themselves - and would love to see Kolla help develop a framework that would 
allow projects to take ownership of the containerisation of their output.

Sorry for such a long email - but this seems like a good opportunity to raise 
some of these issues 

[openstack-dev] [nova] Feature review sprint on Wednesday 1/11

2017-01-05 Thread Matt Riedemann
We agreed in the nova meeting today to hold a feature review sprint next 
Wednesday 1/11.


We'll be going through the unmerged changes in 
https://etherpad.openstack.org/p/nova-ocata-feature-freeze and try to 
push some of those through, or get them close.


If you have a blueprint series in that etherpad please try to be in the 
#openstack-nova IRC channel on Wednesday to answer any questions, talk 
through any issues and just in general quickly respond to feedback.


I'll send another reminder as we get closer to the date.

--

Thanks,

Matt Riedemann




[openstack-dev] [tripleo] removing glance-registry & api v1

2017-01-05 Thread Emilien Macchi
Just some heads-up for those who were interested in removing Glance
Registry from TripleO (yes Flavio, I'm looking at you!).

Because TripleO relies on Heat for the pingtest (validation that
OpenStack cloud is up and running) and because Heat doesn't support
Glance v2 API [1], we strongly rely on Glance v1 API and therefore
Glance Registry service now.
I've started a thread [2] to ask Heat folks when they plan to support
Glance v2 API.

The pingtest is a heat template used to manage the images, volumes and
servers. Because our scenarios test boot from volume, created
from an image, the Glance API is used.
A (dirty) workaround would be to stop testing boot from volume and
manage the glance image creation by just using osc in tripleo-ci
scripts, and find a way to give the image ID to the heat template. I
personally don't like it, as I consider it a test regression and I'm
not sure we want that at this point.

Here's the work done to remove Glance Registry in both Puppet & TripleO CI:
https://review.openstack.org/#/q/topic:g-registry/removal

Any feedback or suggestion is welcome. On my side, I won't work on the
workaround as I don't like the idea, and I'll just wait for Heat to
support Glance v2 API.

[1] https://review.openstack.org/#/c/240450/
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-January/109722.html
-- 
Emilien Macchi



Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Pete Birley
Also coming from the perspective of a Kolla-Kubernetes contributor, I am
worried about the scope that Kolla is extending itself to.

Moving from a single repo to multiple repos has made the situation much
better, but by operating under a single umbrella I feel that we may
potentially be significantly limiting the potential for each deliverable.
Alex Schultz, Steve and Sam raise some good points here.

The interdependency between the projects is causing issues, the current
reliance that Kolla-Kubernetes has on Kolla-ansible is both undesirable and
unsustainable in my opinion. This is both because it limits the flexibility
that we have as Kolla-Kubernetes developers, but also because it places
burden and rigidity on Kolla-Ansible. This will ultimately prevent both
projects from being able to take advantage of the capabilities offered to
them by the deployment mechanism they use.

Like Steve, I don't think the addition of Kolla-Salt should affect me, and
as a result don't feel I should have any say in the project. That said, I'd
really like to see it happen in one form or another - as having a wide
variety of complementary projects and tooling for OpenStack deployment can
only be a good thing for the community if correctly managed.

When Kolla started it was very experimental, containers (in their modern
form) were a relatively new construct, and it took on the audacious task of
trying to package and deploy OpenStack using the tooling that was available
at the time. I really feel that this effort has succeeded admirably, and
conversations like this are a result of that. Kolla is one of the most
active projects in OpenStack, with two deployment mechanisms being
developed currently, and hopefully to increase soon with a salt based
deployment and potentially even more on the horizon.

With this in mind, I return to my original point and wonder if we may be
better moving from our current structure of Kolla-deploy(x) to
deploy(x)-Kolla and redefine the governance of these deliverables, turning
them into freestanding projects. I think this would offer several potential
advantages, as it would allow teams to form tighter bonds with the tools
and communities they use (i.e. Kubernetes/Helm, Ansible or Salt). This would
also make it easier for these projects to use upstream components where
available (e.g. Ceph, RabbitMQ, and MariaDB), which are (and should be) in
many cases better than the artifacts we can produce. To this end, I have
been working with the Ceph community to get their Kubernetes Helm
implementation to the point where we can use it for our own work, and would
love to see more of this. It benefits not only us by offloading support to
the upstream project, but gives them a vested interest in supporting us and
also helps provide better quality tooling for the entire open source
ecosystem.

This should also allow Kolla itself to become much more streamlined, and
focused simply on producing docker containers for consumption by the
community, and make the artifacts produced potentially much less
opinionated and more attractive to other projects. And being honest, I have
a real desire for this activity to eventually be taken on by the relevant
OpenStack projects themselves - and would love to see Kolla help develop a
framework that would allow projects to take ownership of the
containerisation of their output.

Sorry for such a long email - but this seems like a good opportunity to
raise some of these issues that have been on my mind. In summary, if it
doesn't affect me, then I wish a Salt-based Kolla deployment the best of
success and hope to see the project prosper so that we as OpenStack
developers can all share from the increased experience and opportunities
growing the community offers.


On Thu, Jan 5, 2017 at 9:43 PM, Steve Wilkerson 
wrote:

> There are some interesting points in this topic.  I agree entirely with
> Sam Yaple.  It does not make sense to me to have kolla-ansible and
> kolla-kubernetes cores involved with the introduction of a new deliverable
> under the kolla umbrella.  A new deliverable (read: project, really) should
> not rely on a separate project to ratify its existence.  I feel this is
> dangerous.  I also feel looking at the different deployment methodologies
> scoped under the kolla project as competition or rivalry is folly.  I'm
> honestly a bit concerned about how broad the scope of the project kolla has
> become.  I think the conversation of separating the deployment projects
> from the kolla umbrella is a conversation worth having at some point.
>
> The repo split was a step in the right direction, but currently the
> deliverables (4, if kolla-salt becomes a thing) are sharing a single PTL, a
> single IRC channel, and a single IRC weekly meeting.  This has the
> potential of introducing a significant amount of overhead for the
> overarching project as a whole.  What happens if kolla-puppet becomes a
> thing?  What if kolla-mesos was still about?  I think 

[openstack-dev] [heat] glance v2 support?

2017-01-05 Thread Emilien Macchi
Greetings Heat folks!

My question is simple:
When do you plan to support Glance v2?
https://review.openstack.org/#/c/240450/

The spec looks stale, while Glance v1 was deprecated in Newton (and v2
was started in Kilo!).

As a user, I need Glance v2 support so I can remove the Glance Registry
from my deployment and run pure v2 everywhere in my cloud.

Thanks for your help,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] office hours starting January 6th

2017-01-05 Thread Lance Bragstad
Heads up folks! Just sending out a friendly reminder that tomorrow will be
our first office hour session of the new year.

See everyone in #openstack-keystone tomorrow!

On Wed, Dec 21, 2016 at 5:32 PM, Rodrigo Duarte 
wrote:

> Thanks for the initiative! This is something that both keystone and the
> community will benefit! :)
>
> On Wed, Dec 21, 2016 at 4:22 PM, Steve Martinelli 
> wrote:
>
>> Thanks for setting this up Lance!
>>
>> You can count on me to join and smash some bugs.
>>
>> On Wed, Dec 21, 2016 at 1:06 PM, Lance Bragstad 
>> wrote:
>>
>>> Hi folks!
>>>
>>> If you remember, last year we started a weekly bug day [0]. The idea was
>>> to dedicate one day a week to managing keystone's bug queue by triaging,
>>> fixing, and reviewing bugs. This was otherwise known as keystone's office
>>> hours.
>>>
>>> I'd like to remind everyone that we are starting up this initiative
>>> again, right after the New Year. Our first bug day this year will be
>>> Friday, January 6th, and it will be recurring every Friday after that.
>>>
>>> Previously, we used the etherpad [1] to track the status of patches and
>>> bugs through the day. This time around, I'd like to see if we can keep
>>> state out of the etherpad in favor of Gerrit dashboards and IRC (which are
>>> easier to log and track). The etherpad now consists of information about
>>> the event, which should eventually be moved into a wiki somewhere.
>>>
>>> I wanted to get this out the door before the holidays so that people can
>>> get it on their calendar. We can also use this thread to air out any
>>> questions about office hours before the January 6th.
>>>
>>> Thanks and have a safe holiday season!
>>>
>>> Lance
>>>
>>>
>>> [0] http://lists.openstack.org/pipermail/openstack-dev/2015-
>>> October/076649.html
>>> [1] https://etherpad.openstack.org/p/keystone-office-hours
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Steve Wilkerson
There are some interesting points in this topic.  I agree entirely with Sam
Yaple.  It does not make sense to me to have kolla-ansible and
kolla-kubernetes cores involved with the introduction of a new deliverable
under the kolla umbrella.  A new deliverable (read: project, really) should
not rely on a separate project to ratify its existence.  I feel this is
dangerous.  I also feel looking at the different deployment methodologies
scoped under the kolla project as competition or rivalry is folly.  I'm
honestly a bit concerned about how broad the scope of the project kolla has
become.  I think the conversation of separating the deployment projects
from the kolla umbrella is a conversation worth having at some point.

The repo split was a step in the right direction, but currently the
deliverables (4, if kolla-salt becomes a thing) are sharing a single PTL, a
single IRC channel, and a single IRC weekly meeting.  This has the
potential of introducing a significant amount of overhead for the
overarching project as a whole.  What happens if kolla-puppet becomes a
thing?  What if kolla-mesos was still about?  I think we can all agree this
gets out of hand quickly.

Yes, people are religious about the tools they use, and deployment tools
are no different.  I think scoping them all under the same umbrella project
is a mistake in the long term.  The folks that want to focus on Ansible
should be able to focus wholly on Ansible with like-minded folks, same for
Salt, same for whatever.  Having them remain together for the sake of
sharing a name isn't sustainable in the long term -- let each do what they
do well.  As far as being able to talk and share experiences in deployments
or whatever, let's not act as if IRC channels have walls we can't reach
across.  As part of the kolla-kubernetes community, it's imperative that I
can reach across the gap to work with people in the Helm and Kubernetes
community.  If the deployment tools existed separately, there's nothing
stopping them from asking either.

But in regards to the question, if kolla-salt is to be a thing, I think the
PTL and the kolla team proper can decide that.  As a contributor for
kolla-kubernetes, it does not and should not affect me.

On Thu, Jan 5, 2017 at 3:14 PM, Doug Hellmann  wrote:

> Excerpts from Michał Jastrzębski's message of 2017-01-05 11:45:49 -0800:
> > I think total separation of projects would require much larger
> > discussion in community. Currently we agreed on having kolla-ansible
> > and kolla-k8s to be deliverables under kolla umbrella from historical
> > reasons. Also I don't agree that there is "little or no overlap" in
> > teams, in fact there is ton of overlap, just not 100%. Many
> > contributors (myself included) jump between deliverables today.
>
> OK, that's good to know. It wasn't clear from some of the initial
> messages in this thread, which seemed to imply otherwise.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [release] Release countdown for week R-6, 2-6 Jan

2017-01-05 Thread Doug Hellmann
Happy New Year, everyone!

Focus
-----

Feature work and major refactoring should be well under way as we
approach the third milestone and various feature and release freeze
dates.

The deadline for non-client library releases is R-5 (19 Jan). We do not
grant Feature Freeze Extensions for any libraries, so that is a hard
freeze date. Any feature work that requires updates to non-client
libraries should be prioritized so it can be completed by that time.

Release Tasks
-------------

As we did at the end of Newton, when the time comes to create
stable/ocata branches they will be configured so that members of the
$project-release group in gerrit have permission to approve patches.
This group should be a small subset of the core review team, aware of
the priorities and criteria for patches to be approved as we work toward
release candidates. Release liaisons should ensure that these groups
exist in gerrit and that their membership is correct for this cycle.
Please coordinate with the release management team if you have any
questions.

General Notes
-------------

We will start the soft string freeze during R-4 (23-27 Jan). See
https://releases.openstack.org/ocata/schedule.html#o-soft-sf for details.

Important Dates
---------------

Final release of non-client libraries: 19 Jan
Ocata 3 Milestone, with Feature and Requirements Freezes: 26 Jan
Ocata release schedule:
http://releases.openstack.org/ocata/schedule.html



Re: [openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-05 Thread Matthew Thode
On 01/05/2017 02:23 PM, Paul Belanger wrote:
> Did my click bait work? :)
> 
> So, over the holiday break I had some time to hack on diskimage-builder and
> create a new (container) element.  I opted to call it ubuntu-rootfs. The basic
> idea, is the most minimal debootstrap chroot for ubuntu.  Everything in, we 
> can
> get a 42MB tarball of ubuntu-xenial.  In turn, with the tarball we can then
> import into docker, lxc and do container things.
> 
> The stack is about 9 deep right now, and would love some feedback. I'm 
> planning
> on giving a talk in devconf.cz in a few weeks on this too. I hope to return 
> with
> some additional feedback on using DIB for building container things.
> 
I'd be interested in extending this to Gentoo at least.  Need to reduce
the stage size somewhat.  I wonder if stage2 is small enough...


-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-01-05 11:45:49 -0800:
> I think total separation of projects would require much larger
> discussion in community. Currently we agreed on having kolla-ansible
> and kolla-k8s to be deliverables under kolla umbrella from historical
> reasons. Also I don't agree that there is "little or no overlap" in
> teams, in fact there is ton of overlap, just not 100%. Many
> contributors (myself included) jump between deliverables today.

OK, that's good to know. It wasn't clear from some of the initial
messages in this thread, which seemed to imply otherwise.

Doug



[openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-05 Thread Paul Belanger
Did my click bait work? :)

So, over the holiday break I had some time to hack on diskimage-builder and
create a new (container) element.  I opted to call it ubuntu-rootfs. The basic
idea is the most minimal debootstrap chroot for ubuntu.  Everything in, we can
get a 42MB tarball of ubuntu-xenial.  In turn, with the tarball we can then
import it into docker, lxc, and do container things.
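For anyone who wants to try the result, the flow described above might look roughly like this - a sketch only, where the `-t tar` output type and the file names are my assumptions, not taken from the patch series:

```shell
# Sketch (flags/names are assumptions): build the minimal ubuntu-xenial
# rootfs tarball with diskimage-builder's ubuntu-rootfs element...
export DIB_RELEASE=xenial
disk-image-create -t tar -o ubuntu-rootfs ubuntu-rootfs

# ...then import the tarball as a Docker base image and try it out.
docker import ubuntu-rootfs.tar ubuntu-rootfs:xenial
docker run --rm ubuntu-rootfs:xenial cat /etc/lsb-release
```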

The stack is about 9 patches deep right now, and I would love some feedback.
I'm planning on giving a talk at devconf.cz in a few weeks on this too. I hope
to return with some additional feedback on using DIB for building container
things.

Side note, I also included the simple-playbook element, used for running
ansible-playbook with a chroot connection. As mentioned to people in the past,
this gives users an additional way to hand off configuration management
outside of elements.
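The chroot connection mentioned here is stock Ansible behaviour, so a minimal sketch of what such an element presumably wraps could be (the chroot path and playbook name are illustrative):

```shell
# Sketch: run a playbook against a chroot instead of over SSH.
# The inventory "host" is the chroot path (note the trailing comma),
# and -c chroot tells Ansible to execute modules inside that tree.
ansible-playbook -i "/opt/dib/chroot," -c chroot site.yml
```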

I've been using it with great success recently to build a nodepool-builder
docker container.

https://review.openstack.org/#/q/topic:ubuntu-container+status:open



Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 7:42 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 18:22:54 +0000:
> > On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann 
> wrote:
> >
> > > Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +0000:
> > > > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> > > wrote:
> > > >
> > > > > On 2017-01-05 16:46:36 +0000 (+0000), Sam Yaple wrote:
> > > > > [...]
> > > > > > I do feel this is slightly different than whats described. Since
> it
> > > is
> > > > > not
> > > > > > unrelated services, but rather, for lack of a better word,
> competing
> > > > > > services. To my knowledge infra doesn't have several service
> doing
> > > the
> > > > > same
> > > > > > job with different core teams (though I could be wrong).
> > > > >
> > > > > True, though I do find it an interesting point of view that helping
> > > > > Kolla support multiple and diverse configuration management and
> > > > > automation ecosystems is a "competition" rather than merely
> > > > > extending the breadth of the project as a whole.
> > > > >
> > > >
> > > > Yea I computer good, but I am no wordsmith. Perhaps 'friendly
> rivalry'? I
> > > > expect these different deploy tools to bring new techniques that can
> then
> > > > be reapplied to kolla-ansible and kolla-kubernetes to help out
> everyone.
> > >
> > > I'm still curious to understand why, if the teams building those
> > > different things have little or no overlap in membership, they need to
> > > be part of "kolla" and not just part of the larger OpenStack? Why build
> > > a separate project hierarchy instead of keeping things flat?
> > >
> > > Do I misunderstand the situation?
> > >
> > > You absolutely do not misunderstand the situation. It is a very valid
> > question, one to which I do not have a satisfying answer. I can say that
> it
> > has been the intention since I started work on the ansible bits of kolla
> to
> > have separate repos for the deployment parts. That grew to having several
> > different deployment tools in the future and I don't think anyone really
> > stopped to think that building this hierarchy isn't necessarily the right
> > thing to do. It certainly isn't a required thing to do.
> >
> > With the separation of ansible from the main kolla repo, the kolla repo
> now
> > becomes a consumable much like the relationship keystone and glance.
> >
> > The only advantage I can really think of at the moment is to reuse the
> > Kolla name and community when starting a new project, but that may not be
> > as advantageous as I initially thought. By my own admission, why do these
> > other projects care about a different orchestration tool.
> >
> > So in your estimation Doug, do you feel kolla-salt would be better served
> > as a new project in it's own right? As well as future orchestration tools
> > using Kolla container images?
>
> I don't know enough about the situation to say for sure, and I'll
> leave it up to the people involved, but I thought I should raise
> the option as a way to ease up some of the friction.
>
> Our team structure is really supposed to be organized around groups
> of people rather than groups of things.  The fact that there's some
> negotiation going on to decide who needs to (or gets to) have a say
> in when new deliverables are added, with some people saying they
> don't want to have to vote or that others shouldn't have a vote,
> just makes it seem to me that we're trying to force a fit where it
> would be simpler to establish separate teams.
>
> There may be some common space for shared tools, and it sounds like
> that's how things started out. But now maybe it's time to rethink
> that?
>
This is definitely the case. Shared tooling. In the case of kolla-k8s and
kolla-ansible sharing the configs, this has broken kolla-k8s many times.
Perhaps the right decision long term is identifying all the needed pieces
that Kolla would be sharing and centralizing them, rather than building this
Kolla hierarchy of projects, which forces Kolla more into what resembles a
benevolent dictator model for all underlying deployment projects.

Thanks,
SamYaple

> Doug
>
> >
> > Thanks,
> > SamYaple
> >
> > > Doug
> > >
> > > >
> > > > Thanks,
> > > > SamYaple
> > > >
> > > > > --
> > > > > Jeremy Stanley
> > > > >
> > > > > 
> > > __
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > >
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 7:45 PM, Michał Jastrzębski  wrote:

> I think total separation of projects would require much larger
> discussion in community. Currently we agreed on having kolla-ansible
> and kolla-k8s to be deliverables under kolla umbrella from historical
> reasons. Also I don't agree that there is "little or no overlap" in
> teams, in fact there is ton of overlap, just not 100%. Many
> contributors (myself included) jump between deliverables today.
>
> Having single Kolla umbrella has practical benefits which I would hate
> to lose quite frankly. One of which would be that Kolla is being
> evaluated by lot of different companies, and having full separation
> between projects would make navigation of a landscape harder. Another
> reason is single community which we value - there is no full
> separation even between kolla-ansible and kolla-k8s (ansible still
> generates config files for k8s for example), and further separation of
> projects would hurt cooperation, and I don't think we've hit situation
> when it's necessary. I'm not ready to have this discussion yet, and
> I'm personally quite opposed to this.
>
> If kolla-salt would like to be first completely separate project,
> there is nothing we can (or want) to do to stop it, but I wouldn't
> like to see this being pushed. Having special beast isn't great, and
> moving kolla-ansible and kolla-k8s out of kolla umbrella is revolution
> I don't want to rush. I'd rather figure out process to accept
> kolla-salt (and following projects) to kolla umbrella and have this
> discussion later, when we actually hit community scale issues.
>

I don't think moving kolla-ansible or kolla-k8s out of the kolla namespace
was being suggested. If I implied that, it was not intended. That said,
with Doug's comments, I am not sure it makes sense to continue building a
Kolla deployment hierarchy. I would ask what the benefit of having
kolla-salt or kolla-puppet would be?

It is just a point that hasn't been discussed or considered up until now.
We had all just assumed kolla-salt and kolla-puppet and kolla-chef would be
a thing, but would there be a benefit to sitting under the kolla namespace?
I am not sure what those benefits are.

Thanks,
SamYaple


> Cheers,
> Michal
>
>
> On 5 January 2017 at 10:22, Sam Yaple  wrote:
> > On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann 
> wrote:
> >>
> >> Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +0000:
> >> > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> >> > wrote:
> >> >
> >> > > On 2017-01-05 16:46:36 +0000 (+0000), Sam Yaple wrote:
> >> > > [...]
> >> > > > I do feel this is slightly different than whats described. Since
> it
> >> > > > is
> >> > > not
> >> > > > unrelated services, but rather, for lack of a better word,
> competing
> >> > > > services. To my knowledge infra doesn't have several service doing
> >> > > > the
> >> > > same
> >> > > > job with different core teams (though I could be wrong).
> >> > >
> >> > > True, though I do find it an interesting point of view that helping
> >> > > Kolla support multiple and diverse configuration management and
> >> > > automation ecosystems is a "competition" rather than merely
> >> > > extending the breadth of the project as a whole.
> >> > >
> >> >
> >> > Yea I computer good, but I am no wordsmith. Perhaps 'friendly
> rivalry'?
> >> > I
> >> > expect these different deploy tools to bring new techniques that can
> >> > then
> >> > be reapplied to kolla-ansible and kolla-kubernetes to help out
> everyone.
> >>
> >> I'm still curious to understand why, if the teams building those
> >> different things have little or no overlap in membership, they need to
> >> be part of "kolla" and not just part of the larger OpenStack? Why build
> >> a separate project hierarchy instead of keeping things flat?
> >>
> >> Do I misunderstand the situation?
> >>
> > You absolutely do not misunderstand the situation. It is a very valid
> > question, one to which I do not have a satisfying answer. I can say that
> it
> > has been the intention since I started work on the ansible bits of kolla
> to
> > have separate repos for the deployment parts. That grew to having several
> > different deployment tools in the future and I don't think anyone really
> > stopped to think that building this hierarchy isn't necessarily the right
> > thing to do. It certainly isn't a required thing to do.
> >
> > With the separation of ansible from the main kolla repo, the kolla repo
> now
> > becomes a consumable much like the relationship keystone and glance.
> >
> > The only advantage I can really think of at the moment is to reuse the
> Kolla
> > name and community when starting a new project, but that may not be as
> > advantageous as I initially thought. By my own admission, why do these
> other
> > projects care about a different orchestration tool.
> >
> > So in your estimation Doug, do you feel kolla-salt would 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Michał Jastrzębski
I think total separation of projects would require a much larger
discussion in the community. Currently we agreed on having kolla-ansible
and kolla-k8s be deliverables under the kolla umbrella for historical
reasons. Also, I don't agree that there is "little or no overlap" in
teams; in fact there is a ton of overlap, just not 100%. Many
contributors (myself included) jump between deliverables today.

Having a single Kolla umbrella has practical benefits which, quite
frankly, I would hate to lose. One is that Kolla is being evaluated by a
lot of different companies, and full separation between projects would
make navigating the landscape harder. Another reason is the single
community, which we value - there is no full separation even between
kolla-ansible and kolla-k8s (ansible still generates config files for
k8s, for example), and further separation of projects would hurt
cooperation. I don't think we've hit a situation where it's necessary.
I'm not ready to have this discussion yet, and I'm personally quite
opposed to it.

If kolla-salt would like to be the first completely separate project,
there is nothing we can (or want to) do to stop it, but I wouldn't
like to see it being pushed. Creating a special beast isn't great, and
moving kolla-ansible and kolla-k8s out of the kolla umbrella is a
revolution I don't want to rush. I'd rather figure out a process to
accept kolla-salt (and following projects) into the kolla umbrella and
have this discussion later, when we actually hit community-scale issues.

Cheers,
Michal


On 5 January 2017 at 10:22, Sam Yaple  wrote:
> On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann  wrote:
>>
>> Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +0000:
>> > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
>> > wrote:
>> >
>> > > On 2017-01-05 16:46:36 +0000 (+0000), Sam Yaple wrote:
>> > > [...]
>> > > > I do feel this is slightly different than whats described. Since it
>> > > > is
>> > > not
>> > > > unrelated services, but rather, for lack of a better word, competing
>> > > > services. To my knowledge infra doesn't have several service doing
>> > > > the
>> > > same
>> > > > job with different core teams (though I could be wrong).
>> > >
>> > > True, though I do find it an interesting point of view that helping
>> > > Kolla support multiple and diverse configuration management and
>> > > automation ecosystems is a "competition" rather than merely
>> > > extending the breadth of the project as a whole.
>> > >
>> >
>> > Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'?
>> > I
>> > expect these different deploy tools to bring new techniques that can
>> > then
>> > be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>>
>> I'm still curious to understand why, if the teams building those
>> different things have little or no overlap in membership, they need to
>> be part of "kolla" and not just part of the larger OpenStack? Why build
>> a separate project hierarchy instead of keeping things flat?
>>
>> Do I misunderstand the situation?
>>
> You absolutely do not misunderstand the situation. It is a very valid
> question, one to which I do not have a satisfying answer. I can say that it
> has been the intention since I started work on the ansible bits of kolla to
> have separate repos for the deployment parts. That grew to having several
> different deployment tools in the future and I don't think anyone really
> stopped to think that building this hierarchy isn't necessarily the right
> thing to do. It certainly isn't a required thing to do.
>
> With the separation of ansible from the main kolla repo, the kolla repo now
> becomes a consumable much like the relationship keystone and glance.
>
> The only advantage I can really think of at the moment is to reuse the Kolla
> name and community when starting a new project, but that may not be as
> advantageous as I initially thought. By my own admission, why do these other
> projects care about a different orchestration tool.
>
> So in your estimation Doug, do you feel kolla-salt would be better served as
> a new project in it's own right? As well as future orchestration tools using
> Kolla container images?
>
> Thanks,
> SamYaple
>>
>> Doug
>>
>> >
>> > Thanks,
>> > SamYaple
>> >
>> > > --
>> > > Jeremy Stanley
>> > >
>> > >
>> > > __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-01-05 18:22:54 +0000:
> On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann  wrote:
> 
> > Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +0000:
> > > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> > wrote:
> > >
> > > > On 2017-01-05 16:46:36 +0000 (+0000), Sam Yaple wrote:
> > > > [...]
> > > > > I do feel this is slightly different than whats described. Since it
> > is
> > > > not
> > > > > unrelated services, but rather, for lack of a better word, competing
> > > > > services. To my knowledge infra doesn't have several service doing
> > the
> > > > same
> > > > > job with different core teams (though I could be wrong).
> > > >
> > > > True, though I do find it an interesting point of view that helping
> > > > Kolla support multiple and diverse configuration management and
> > > > automation ecosystems is a "competition" rather than merely
> > > > extending the breadth of the project as a whole.
> > > >
> > >
> > > Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
> > > expect these different deploy tools to bring new techniques that can then
> > > be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
> >
> > I'm still curious to understand why, if the teams building those
> > different things have little or no overlap in membership, they need to
> > be part of "kolla" and not just part of the larger OpenStack? Why build
> > a separate project hierarchy instead of keeping things flat?
> >
> > Do I misunderstand the situation?
> >
> > You absolutely do not misunderstand the situation. It is a very valid
> question, one to which I do not have a satisfying answer. I can say that it
> has been the intention since I started work on the ansible bits of kolla to
> have separate repos for the deployment parts. That grew to having several
> different deployment tools in the future and I don't think anyone really
> stopped to think that building this hierarchy isn't necessarily the right
> thing to do. It certainly isn't a required thing to do.
> 
> With the separation of ansible from the main kolla repo, the kolla repo now
> becomes a consumable much like the relationship keystone and glance.
> 
> The only advantage I can really think of at the moment is to reuse the
> Kolla name and community when starting a new project, but that may not be
> as advantageous as I initially thought. By my own admission, why do these
> other projects care about a different orchestration tool.
> 
> So in your estimation Doug, do you feel kolla-salt would be better served
> as a new project in it's own right? As well as future orchestration tools
> using Kolla container images?

I don't know enough about the situation to say for sure, and I'll
leave it up to the people involved, but I thought I should raise
the option as a way to ease up some of the friction.

Our team structure is really supposed to be organized around groups
of people rather than groups of things.  The fact that there's some
negotiation going on to decide who needs to (or gets to) have a say
in when new deliverables are added, with some people saying they
don't want to have to vote or that others shouldn't have a vote,
just makes it seem to me that we're trying to force a fit where it
would be simpler to establish separate teams.

There may be some common space for shared tools, and it sounds like
that's how things started out. But now maybe it's time to rethink
that?

Doug

> 
> Thanks,
> SamYaple
> 
> > Doug
> >
> > >
> > > Thanks,
> > > SamYaple
> > >
> > > > --
> > > > Jeremy Stanley
> > > >
> > > > 
> > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-05 Thread Christian Schwede
On 05.01.2017 17:03, Steven Hardy wrote:
> On Thu, Jan 05, 2017 at 02:56:15PM +, arkady.kanev...@dell.com wrote:
>> I have concern to rely on undercloud for overcloud swift.
>> Undercloud is not HA (yet) so it may not be operational when disk failed or 
>> swift overcloud node is added/deleted.
> 
> I think the proposal is only for a deploy-time dependency, after the
> overcloud is deployed there should be no dependency on the undercloud
> swift, because the ring data will have been copied to all the nodes.

Yes, exactly - there is no runtime dependency. The overcloud will
continue to work even if the undercloud is gone.

If you "lose" the undercloud (or more precisely, the overcloud rings
that are stored on the undercloud Swift) you can copy them from any
overcloud node and run an update.

Even if one deletes the rings from the undercloud, the deployment will
continue to work after an update - puppet-swift will simply continue to
use the already existing .builder files on the nodes.

Only if one deletes the rings on the undercloud and runs an update with
new/replaced nodes it will fail - the swift-recon check will raise an
error in step 5 because rings are inconsistent on the new/replaced
nodes. But the inconsistency is already the case today (in fact it's the
same way as it works today), except that there is no check and no
warning to the operator.
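
For reference, the temporary GET/PUT URLs used by the proposed Mistral
action follow Swift's standard TempURL scheme. A minimal sketch of how such
signed URLs are computed - the account, container, object and key names
below are made up for illustration, not the ones the actual action uses:

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(method, path, key, duration=3600):
    """Build a Swift TempURL query string for one HTTP method and object path.

    `path` must have the form /v1/AUTH_<account>/<container>/<object>.
    """
    expires = int(time.time()) + duration
    # TempURL signs "METHOD\nexpires\npath" with the account's temp-url key.
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)

# Hypothetical undercloud Swift object holding the overcloud ring tarball.
get_url = make_temp_url('GET', '/v1/AUTH_overcloud/rings/rings.tar.gz', 'secret-key')
put_url = make_temp_url('PUT', '/v1/AUTH_overcloud/rings/rings.tar.gz', 'secret-key')
```

The GET URL would be handed to the nodes for step 2 and the PUT URL for
step 4; the signing key is whatever X-Account-Meta-Temp-URL-Key is set on
the undercloud Swift account.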

-- Christian

> During create/update operations you need the undercloud operational by
> definition, so I think this is probably OK?
> 
> Steve
>>
>> -Original Message-
>> From: Christian Schwede [mailto:cschw...@redhat.com] 
>> Sent: Thursday, January 05, 2017 6:14 AM
>> To: OpenStack Development Mailing List 
>> Subject: [openstack-dev] [TripleO] Fixing Swift rings when 
>> upscaling/replacing nodes in TripleO deployments
>>
>> Hello everyone,
>>
>> there was an earlier discussion on $subject last year [1] regarding a bug 
>> when upscaling or replacing nodes in TripleO [2].
>>
>> Shortly summarized: Swift rings are built on each node separately, and if 
>> adding or replacing nodes (or disks) this will break the rings because they 
>> are no longer consistent across the nodes. What's needed are the previous 
>> ring builder files on each node before changing the rings.
>>
>> My former idea in [1] was to build the rings in advance on the undercloud, 
>> and also using introspection data to gather a set of disks on each node for 
>> the rings.
>>
>> However, this changes the current way of deploying significantly, and also 
>> requires more work in TripleO and Mistral (for example to trigger a ring 
>> build on the undercloud after the nodes have been started, but before the 
>> deployment triggers the Puppet run).
>>
>> I prefer smaller steps to keep everything stable for now, and therefore I 
>> changed my patches quite a bit. This is my updated proposal:
>>
>> 1. Two temporary undercloud Swift URLs (one PUT, one GET) will be computed 
>> before Mistral starts the deployments. A new Mistral action to create such 
>> URLs is required for this [3].
>> 2. Each overcloud node will try to fetch rings from the undercloud Swift
>> deployment before updating its set of rings locally using the temporary GET
>> url. This guarantees that each node uses the same source set of builder
>> files. This happens in step 2. [4]
>> 3. puppet-swift runs like today, updating the rings if required.
>> 4. Finally, at the end of the deployment (in step 5) the nodes will upload 
>> their modified rings to the undercloud using the temporary PUT urls. 
>> swift-recon will run before this, ensuring that all rings across all nodes 
>> are consistent.
>>
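
The swift-recon check mentioned in step 4 boils down to comparing ring
checksums across nodes. A rough, hypothetical sketch of that comparison in
plain Python (not the actual swift-recon implementation):

```python
import hashlib

def ring_checksums(ring_files):
    """Map each ring file path to its MD5 checksum, as swift-recon --md5 does."""
    sums = {}
    for path in ring_files:
        with open(path, 'rb') as f:
            sums[path] = hashlib.md5(f.read()).hexdigest()
    return sums

def rings_consistent(per_node_sums):
    """True when every node reports the same checksum for every ring file."""
    reference = per_node_sums[0]
    return all(node == reference for node in per_node_sums[1:])

# Example: two nodes agreeing on the object ring, then a stale third node.
ok = rings_consistent([{'object.ring.gz': 'aaa'}, {'object.ring.gz': 'aaa'}])
stale = rings_consistent([{'object.ring.gz': 'aaa'}, {'object.ring.gz': 'bbb'}])
```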
>> The two required patches [3][4] are not overly complex IMO, but they solve 
>> the problem of adding or replacing nodes without changing the current 
>> workflow significantly. It should be even easy to backport them if needed.
>>
>> I'll continue working on an improved way of deploying Swift rings (using 
>> introspection data), but using this approach it could be even done using 
>> todays workflow, feeding data into puppet-swift (probably with some updates 
>> to puppet-swift/tripleo-heat-templates to allow support for regions, zones, 
>> different disk layouts and the like). However, all of this could be built on 
>> top of these two patches.
>>
>> I'm curious about your thoughts and welcome any feedback or reviews!
>>
>> Thanks,
>>
>> -- Christian
>>
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720.html
>> [2] https://bugs.launchpad.net/tripleo/+bug/1609421
>> [3] https://review.openstack.org/#/c/413229/
>> [4] https://review.openstack.org/#/c/414460/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

Re: [openstack-dev] [Release-job-failures] Release of openstack/glance failed

2017-01-05 Thread Brian Rosmaita
On 1/5/17 3:17 AM, Erno Kuvaja wrote:
> On Wed, Jan 4, 2017 at 9:22 PM, Tony Breeds  wrote:
>> On Wed, Jan 04, 2017 at 01:31:42PM -0500, Ian Cordasco wrote:
>>
>>> I believe you asked in another thread (that I cannot locate) if it was
>>> acceptable to the Glance team to not have an 11.0.3 tarball on
>>> openstack.org. With Brian on vacation, I'm hoping the other stable
>>> maintenance cores will chime in. I, for one, (as Release CPL and a
>>> Stable branch core reviewer) don't think the tarballs are critical for
>>> Glance. I'm fairly certain that most of the deployment projects use
>>> the Git repository directly or Distro provided packages (which are
>>> built from git tags). With that in mind, I don't think this should
>>> block Glance being EOL'd.
>>
>> Sounds good.  We can always generate and manually upload signed tarballs and
>> wheels, we can't do it with our automated tools.
>>
>> I'll include glance projects in the next round of EOL requests to infra.
>>
>>> I'm sorry for the delay in my reply. I took a little over a week of
>>> time off myself.
>>
>> No problem.  It's that time of year.
>>
>> Yours Tony.
>>
>>
> 
> ++ I think this is a reasonable way forward. Thanks for your efforts!
> 
> - jokke
> 

I agree with this proposal.

thanks,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Retiring fairy-slipper, the conversion tool for API docs

2017-01-05 Thread Anne Gentle
Hi all,

I believe that with the migration from WADL to RST complete, the use case
for fairy-slipper has also come to a close. I plan to retire it.

Share any concerns or questions you have, and I'll use the standard process
[1] to retire the repo.

Anne

1. http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 6:12 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +:
> > On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> wrote:
> >
> > > On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> > > [...]
> > > > I do feel this is slightly different than whats described. Since it
> is
> > > not
> > > > unrelated services, but rather, for lack of a better word, competing
> > > > services. To my knowledge infra doesn't have several service doing
> the
> > > same
> > > > job with different core teams (though I could be wrong).
> > >
> > > True, though I do find it an interesting point of view that helping
> > > Kolla support multiple and diverse configuration management and
> > > automation ecosystems is a "competition" rather than merely
> > > extending the breadth of the project as a whole.
> > >
> >
> > Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
> > expect these different deploy tools to bring new techniques that can then
> > be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>
> I'm still curious to understand why, if the teams building those
> different things have little or no overlap in membership, they need to
> be part of "kolla" and not just part of the larger OpenStack? Why build
> a separate project hierarchy instead of keeping things flat?
>
> Do I misunderstand the situation?
>
> You absolutely do not misunderstand the situation. It is a very valid
question, one to which I do not have a satisfying answer. I can say that it
has been the intention since I started work on the ansible bits of kolla to
have separate repos for the deployment parts. That grew to having several
different deployment tools in the future and I don't think anyone really
stopped to think that building this hierarchy isn't necessarily the right
thing to do. It certainly isn't a required thing to do.

With the separation of ansible from the main kolla repo, the kolla repo now
becomes a consumable, much like the relationship between keystone and glance.

The only advantage I can really think of at the moment is to reuse the
Kolla name and community when starting a new project, but that may not be
as advantageous as I initially thought. By my own admission, why do these
other projects care about a different orchestration tool?

So in your estimation Doug, do you feel kolla-salt would be better served
as a new project in its own right? As well as future orchestration tools
using Kolla container images?

Thanks,
SamYaple

> Doug
>
> >
> > Thanks,
> > SamYaple
> >
> > > --
> > > Jeremy Stanley
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Shorter Meetings

2017-01-05 Thread Ian Cordasco
-Original Message-
From: Rob C 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 5, 2017 at 12:10:43
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [Security] Shorter Meetings

> Hi All,
>
> As per our IRC meeting today[1] we've decided to try shortening the
> Security IRC meetings to 30 minutes per week. The other option was to have
> meetings every two weeks but we all agreed that would lead to missed
> meetings, confusion around holidays etc.
>
> The main reason for shortening our meetings is that many of our members
> are finding that the amount of time they have to dedicate to OpenStack is
> shrinking. In response, we're going to try to shorten our meetings by
> being more disciplined in following our weekly agenda[2].
>
> I'd encourage everyone participating in the project to ensure they can make
> the 30 minutes for the meeting which is every week, at 1700 UTC in the
> #openstack-meeting-alt room on Freenode.
>
> Cheers
> -Rob
>
> [1]
> http://eavesdrop.openstack.org/meetings/security/2017/security.2017-01-05-16.59.html
> [2] https://etherpad.openstack.org/p/security-agenda

Also, if you have something to bring up at the meeting, it doesn't
hurt to mention it here first. The [Security] tag should be watched by
the OSSP and having the lion's share of the conversation here should
allow for shorter, concise, and conclusive discussions in the meetings
(if they're necessary at all).

Cheers!
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-01-05 17:02:35 +:
> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:
> 
> > On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> > [...]
> > > I do feel this is slightly different than whats described. Since it is
> > not
> > > unrelated services, but rather, for lack of a better word, competing
> > > services. To my knowledge infra doesn't have several service doing the
> > same
> > > job with different core teams (though I could be wrong).
> >
> > True, though I do find it an interesting point of view that helping
> > Kolla support multiple and diverse configuration management and
> > automation ecosystems is a "competition" rather than merely
> > extending the breadth of the project as a whole.
> >
> 
> Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
> expect these different deploy tools to bring new techniques that can then
> be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.

I'm still curious to understand why, if the teams building those
different things have little or no overlap in membership, they need to
be part of "kolla" and not just part of the larger OpenStack? Why build
a separate project hierarchy instead of keeping things flat?

Do I misunderstand the situation?

Doug

> 
> Thanks,
> SamYaple
> 
> > --
> > Jeremy Stanley
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 5:56 PM, Alex Schultz  wrote:

> On Thu, Jan 5, 2017 at 10:25 AM, Michał Jastrzębski 
> wrote:
> > Ad kolla-ansible core team +2ing new deliverable,I agree with Sam,
> > there is no reason in my mind for kolla-ansible/k8s core team to vote
> > on accepting new deliverable. I think this should be lightweight and
> > easy, we should allow experimentation (after all, kolla itself went
> > through a few failed iterations before ansible).
> >
> > Ad disconnect, I think we all are interested in other orch tools more
> > or less, but it's more about "who should allow new one to be added",
> > and that requires more than interest as might potentially block adding
> > new deliverable. Avoiding this disconnect is exactly why I'd like to
> > keep all deliverable teams under one Kolla umbrella, keep everyone in
> > same community so they can make use of each others experience (that
> > would also mean, that kolla-puppet is what I'd like to see rather than
> > puppet-kolla:)).
> >
>
> I mean it depends on what a proposed 'kolla-puppet' does.  If it's
> like puppet-tripleo, which falls under the TripleO umbrella and not
> Puppet OpenStack because it configures more than just a single
> 'openstack' related service then that would make sense.  Since
> puppet-tripleo is a way of deploying all things OpenStack, it lives in the
> TripleO world.  But I don't necessarily agree that kolla-puppet should
> exist over puppet-kolla if it just configures 'kolla'.  I would like
> to see more cross group collaboration with the deployment tool groups
> and not keeping things to themselves.  As for the intricacies of the
> specific deployment tooling, because we already have patterns and
> plenty of tooling for deploying OpenStack related services in our
> other 40 or so modules I think the Puppet OpenStack community might be
> better suited to provide review feedback than say the Kolla group when
> it comes to puppet specific questions and best practices.  And
> speaking as the Puppet PTL there would not be anything stopping us
> from having Kolla cores also be cores on puppet-kolla.
>

These are good points. I don't know which way I lean on this subject at the
moment. But I would mention that kolla-ansible doesn't exist under
openstack-ansible-kolla. And kolla-salt (should that become a thing) isn't
openstack-salt-kolla.

Just because there is an openstack project that uses a orchestration tool
doesn't mean only one such project can exist. Nor that all approaches would
be the same, even if the end goal is the same (deploy openstack).

So I am going to remain neutral on this point and say what you are saying
is reasonable, though on the other hand it may not be compatible in some
situations.

Thanks,
SamYaple

> I think it's important to focus on the sense of OpenStack community
> building (not just the Kolla community) and spreading knowledge. I think it
> would be better not to try and keep everything to yourself if there's
> already a group of people in the community who specialize in a
> specific thing.  As an aside, I'd honestly like to see more
> contribution by the upstream projects into the puppet-* world because
> I think it's important for people to understand how the software they
> write actually gets consumed.
>
> Thanks,
> -Alex
>
> > Ad multi-deployment-tool friendly rivalry, it is meant to extend
> > breadth of the project indeed, but let's face it, religious wars are
> > real (and vim is better than emacs.);) I don't think the problem would be
> > ill intent tho, I could easily predict problem being rather in "I
> > don't have time to look at this review queue" sort. Stalling reviews
> > can kill lots of potentially great changes.
> >
> >
> > On 5 January 2017 at 09:02, Sam Yaple  wrote:
> >> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley 
> wrote:
> >>>
> >>> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> >>> [...]
> >>> > I do feel this is slightly different than whats described. Since it
> is
> >>> > not
> >>> > unrelated services, but rather, for lack of a better word, competing
> >>> > services. To my knowledge infra doesn't have several service doing
> the
> >>> > same
> >>> > job with different core teams (though I could be wrong).
> >>>
> >>> True, though I do find it an interesting point of view that helping
> >>> Kolla support multiple and diverse configuration management and
> >>> automation ecosystems is a "competition" rather than merely
> >>> extending the breadth of the project as a whole.
> >>
> >>
> >> Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'?
> I
> >> expect these different deploy tools to bring new techniques that can
> then be
> >> reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
> >>
> >> Thanks,
> >> SamYaple
> >>>
> >>> --
> >>> Jeremy Stanley
> >>>
> >>> 
> __
> >>> OpenStack 

[openstack-dev] [Security] Shorter Meetings

2017-01-05 Thread Rob C
Hi All,

As per our IRC meeting today[1] we've decided to try shortening the
Security IRC meetings to 30 minutes per week. The other option was to have
meetings every two weeks but we all agreed that would lead to missed
meetings, confusion around holidays etc.

The main reason for shortening our meetings is that many of our members
are finding that the amount of time they have to dedicate to OpenStack is
shrinking. In response, we're going to try to shorten our meetings by
being more disciplined in following our weekly agenda[2].

I'd encourage everyone participating in the project to ensure they can make
the 30 minutes for the meeting which is every week, at 1700 UTC in the
#openstack-meeting-alt room on Freenode.

Cheers
-Rob

[1]
http://eavesdrop.openstack.org/meetings/security/2017/security.2017-01-05-16.59.html
[2] https://etherpad.openstack.org/p/security-agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Michał Jastrzębski
Oh you misunderstood good sir;) kolla-puppet is similar to tripleo -
it would set up a whole OpenStack using kolla containers deployed by
puppet manifests. I agree, if it would only install kolla, that
should go to Puppet OpenStack, but kolla-puppet is a deployment tool.

On 5 January 2017 at 09:56, Alex Schultz  wrote:
> On Thu, Jan 5, 2017 at 10:25 AM, Michał Jastrzębski  wrote:
>> Ad kolla-ansible core team +2ing new deliverable,I agree with Sam,
>> there is no reason in my mind for kolla-ansible/k8s core team to vote
>> on accepting new deliverable. I think this should be lightweight and
>> easy, we should allow experimentation (after all, kolla itself went
>> through a few failed iterations before ansible).
>>
>> Ad disconnect, I think we all are interested in other orch tools more
>> or less, but it's more about "who should allow new one to be added",
>> and that requires more than interest as might potentially block adding
>> new deliverable. Avoiding this disconnect is exactly why I'd like to
>> keep all deliverable teams under one Kolla umbrella, keep everyone in
>> same community so they can make use of each others experience (that
>> would also mean, that kolla-puppet is what I'd like to see rather than
>> puppet-kolla:)).
>>
>
> I mean it depends on what a proposed 'kolla-puppet' does.  If it's
> like puppet-tripleo, which falls under the TripleO umbrella and not
> Puppet OpenStack because it configures more than just a single
> 'openstack' related service then that would make sense.  Since
> puppet-tripleo is a way of deploying all things OpenStack, it lives in the
> TripleO world.  But I don't necessarily agree that kolla-puppet should
> exist over puppet-kolla if it just configures 'kolla'.  I would like
> to see more cross group collaboration with the deployment tool groups
> and not keeping things to themselves.  As for the intricacies of the
> specific deployment tooling, because we already have patterns and
> plenty of tooling for deploying OpenStack related services in our
> other 40 or so modules I think the Puppet OpenStack community might be
> better suited to provide review feedback than say the Kolla group when
> it comes to puppet specific questions and best practices.  And
> speaking as the Puppet PTL there would not be anything stopping us
> from having Kolla cores also be cores on puppet-kolla.
>
> I think it's important to focus on the sense of OpenStack community
> building (not just the Kolla community) and spreading knowledge. I think it
> would be better not to try and keep everything to yourself if there's
> already a group of people in the community who specialize in a
> specific thing.  As an aside, I'd honestly like to see more
> contribution by the upstream projects into the puppet-* world because
> I think it's important for people to understand how the software they
> write actually gets consumed.
>
> Thanks,
> -Alex
>
>> Ad multi-deployment-tool friendly rivalry, it is meant to extend
>> breadth of the project indeed, but let's face it, religious wars are
>> real (and vim is better than emacs.);) I don't think the problem would be
>> ill intent tho, I could easily predict problem being rather in "I
>> don't have time to look at this review queue" sort. Stalling reviews
>> can kill lots of potentially great changes.
>>
>>
>> On 5 January 2017 at 09:02, Sam Yaple  wrote:
>>> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:

 On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
 [...]
 > I do feel this is slightly different than whats described. Since it is
 > not
 > unrelated services, but rather, for lack of a better word, competing
 > services. To my knowledge infra doesn't have several service doing the
 > same
 > job with different core teams (though I could be wrong).

 True, though I do find it an interesting point of view that helping
 Kolla support multiple and diverse configuration management and
 automation ecosystems is a "competition" rather than merely
 extending the breadth of the project as a whole.
>>>
>>>
>>> Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
>>> expect these different deploy tools to bring new techniques that can then be
>>> reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>>>
>>> Thanks,
>>> SamYaple

 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 

Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 2:12 PM, Ronan-Alexandre Cherrueau <
ronan-alexandre.cherru...@inria.fr> wrote:

> Hello,
>
>
> TL;DR: We make a multi-regions deployment with Kolla. It requires to
> patch the code a little bit, and you can find the diff on our
> GitHub[1]. This patch is just a first attempt to support multi-regions
> in Kolla and it raises questions. Some modifications are not done in
> an idiomatic way and we do not expect this to be merged in Kolla. The
> reminder of this mail explains our patch and states our questions.
>
>
> At Inria/Discovery[2], we evaluate OpenStack at scale for the
> Performance Working Group. So far, we focus on one single OpenStack
> region deployment with hundreds of computes and we always go with
> Kolla for our deployment. Over the last few days, we tried to achieve
> a multi-regions OpenStack deployment with Kolla. We want to share with
> you our current deployment workflow, patches we had to apply on Kolla
> to support multi-regions, and also ask you if we do things correctly.
>
> First of all, our multi-regions deployment follows the one described
> by the OpenStack documentation[3]. Concretely, the deployment
> considers /one/ Administrative Region (AR) that contains Keystone and
> Horizon. This is a Kolla-based deployment, so Keystone is hidden
> behind an HAProxy, and has MariaDB and memcached as backend. At the
> same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
> OpenStack, except Keystone. We got something as follows at the end of
> the deployment:
>
>
> Admin Region (AR):
> - control:
>   * Horizon
>   * HAProxy
>   * Keystone
>   * MariaDB
>   * memcached
>
> OpenStack Region x (OSRx):
> - control:
>   * HAProxy
>   * nova-api/conductor/scheduler
>   * neutron-server/l3/dhcp/...
>   * glance-api/registry
>   * MariaDB
>   * RabbitMQ
>
> - compute1:
>   * nova-compute
>   * neutron-agent
>
> - compute2: ...
>
>
> We do the deployment by running Kolla n+1 times. The first run deploys
> the Administrative Region (AR) and the other runs deploy OpenStack
> Regions (OSR). For each run, we fix the value of `openstack_region_name'
> variable to the name of the current region.
>
> In the context of multi-regions, Keystone (in the AR) should be
> available to all OSRs. This means, there are as many Keystone
> endpoints as regions. For instance, if we consider two OSRs, the
> result of listing endpoints at the end of the AR deployment looks like
> this:
>
>
>  $ openstack endpoint list
>
>  | Region | Serv Name | Serv Type | Interface | URL                          |
>  |--------+-----------+-----------+-----------+------------------------------|
>  | AR     | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR1   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR2   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>
>
> This requires patching the `keystone/tasks/register.yml' play[4] to
> re-execute the `Creating admin project, user, role, service, and
> endpoint' task for all regions we consider. An example of such a patch
> is given on our GitHub[5]. In this example, the `openstack_regions'
> variable is a list that contains the name of all regions (see [6]). As
> a drawback, the patch requires knowing all OSRs in advance. A better
> implementation would execute the `Creating admin project, user, role,
> service, and endpoint' task every time a new OSR is going to be
> deployed. But this requires to move the task somewhere else in the
> Kolla workflow and we have no idea where this should be.
>
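>
> To make the re-executed registration concrete, here is a small
> hypothetical sketch (plain Python rather than the actual Ansible task)
> that expands the single Keystone service into per-region endpoint
> definitions; the region names and URLs follow the example deployment:

```python
def keystone_endpoints(regions, public_url, internal_url, admin_url):
    """One endpoint definition per (region, interface) pair.

    Mirrors what the patched 'Creating admin project, user, role, service,
    and endpoint' task has to do: register the same three Keystone URLs
    once for every region listed in `openstack_regions`.
    """
    interfaces = (('public', public_url),
                  ('internal', internal_url),
                  ('admin', admin_url))
    return [{'region': region, 'service': 'keystone',
             'interface': interface, 'url': url}
            for region in regions
            for interface, url in interfaces]

# The AR plus every OSR known in advance, as in the patch.
specs = keystone_endpoints(['AR', 'OSR1', 'OSR2'],
                           'http://10.24.63.248:5000/v3',
                           'http://10.24.63.248:5000/v3',
                           'http://10.24.63.248:35357/v3')
```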
> In the AR, we also have to change the Horizon configuration file to
> handle multi-regions[7]. The modification could be done easily and
> idiomatically by setting the `node_custom_config' variable to the
> `multi-regions' directory[8] and benefiting from Kolla's config merging
> system.
>
> Also, deploying OSRs requires patching the kolla-toolbox, as it does not
> seem to be region-aware. In particular, patch the `kolla_keystone_service.py'
> module[9] that is responsible for contacting Keystone and creating a
> new endpoint when we register a new OpenStack service.
>
>
>  73  for _endpoint in cloud.keystone_client.endpoints.list():
>  74  if _endpoint.service_id == service.id and \
>  75 _endpoint.interface == interface:
>  76endpoint = _endpoint
>  77if endpoint.url != url:
>  78  changed = True
>  79  cloud.keystone_client.endpoints.update(
>  80endpoint, url=url)
>  
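
The loop quoted above matches endpoints only on service and interface,
which is why it misbehaves once several regions exist; a region-aware
lookup would also have to filter on the endpoint's region. A
self-contained sketch (plain dicts standing in for the real
keystoneclient endpoint objects, so this is not the actual module code):

```python
def find_endpoint(endpoints, service_id, interface, region):
    """Return the endpoint matching service, interface *and* region, or None."""
    for ep in endpoints:
        if (ep['service_id'] == service_id
                and ep['interface'] == interface
                and ep['region'] == region):  # the check the quoted code lacks
            return ep
    return None

# Two admin endpoints for the same service, one per region.
catalog = [
    {'service_id': 'ks', 'interface': 'admin', 'region': 'OSR1',
     'url': 'http://10.24.63.248:35357/v3'},
    {'service_id': 'ks', 'interface': 'admin', 'region': 'OSR2',
     'url': 'http://10.24.63.248:35357/v3'},
]

hit = find_endpoint(catalog, 'ks', 'admin', 'OSR2')
miss = find_endpoint(catalog, 'ks', 'admin', 'OSR3')
```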

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Alex Schultz
On Thu, Jan 5, 2017 at 10:25 AM, Michał Jastrzębski  wrote:
> Ad kolla-ansible core team +2ing new deliverable,I agree with Sam,
> there is no reason in my mind for kolla-ansible/k8s core team to vote
> on accepting new deliverable. I think this should be lightweight and
> easy, we should allow experimentation (after all, kolla itself went
> through a few failed iterations before ansible).
>
> Ad disconnect, I think we all are interested in other orch tools more
> or less, but it's more about "who should allow new one to be added",
> and that requires more than interest as might potentially block adding
> new deliverable. Avoiding this disconnect is exactly why I'd like to
> keep all deliverable teams under one Kolla umbrella, keep everyone in
> same community so they can make use of each others experience (that
> would also mean, that kolla-puppet is what I'd like to see rather than
> puppet-kolla:)).
>

I mean, it depends on what a proposed 'kolla-puppet' does.  If it's
like puppet-tripleo, which falls under the TripleO umbrella and not
Puppet OpenStack because it configures more than just a single
'openstack'-related service, then that would make sense.  Since
puppet-tripleo is a way of deploying all things OpenStack, it lives in
the TripleO world.  But I don't necessarily agree that kolla-puppet
should exist over puppet-kolla if it just configures 'kolla'.  I would
like to see more cross-group collaboration with the deployment tool
groups and not keeping things to themselves.  As for the intricacies
of the specific deployment tooling, because we already have patterns
and plenty of tooling for deploying OpenStack-related services in our
other 40 or so modules, I think the Puppet OpenStack community might be
better suited to provide review feedback than, say, the Kolla group
when it comes to puppet-specific questions and best practices.  And
speaking as the Puppet PTL, there would not be anything stopping us
from having Kolla cores also be cores on puppet-kolla.

I think it's important to focus on building a sense of OpenStack
community (not just the Kolla community) and spreading knowledge, so
it would be better not to try to keep everything to yourself if there's
already a group of people in the community who specialize in a
specific thing.  As an aside, I'd honestly like to see more
contribution by the upstream projects into the puppet-* world because
I think it's important for people to understand how the software they
write actually gets consumed.

Thanks,
-Alex

> Ad multi-deployment-tool friendly rivalry, it is meant to extend the
> breadth of the project indeed, but let's face it, religious wars are
> real (and vim is better than emacs ;)). I don't think the problem
> would be ill intent though; I could easily see the problem being more
> of the "I don't have time to look at this review queue" sort. Stalling
> reviews can kill lots of potentially great changes.
>
>
> On 5 January 2017 at 09:02, Sam Yaple  wrote:
>> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:
>>>
>>> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
>>> [...]
>>> > I do feel this is slightly different than what's described, since
>>> > it is not unrelated services but rather, for lack of a better word,
>>> > competing services. To my knowledge infra doesn't have several
>>> > services doing the same job with different core teams (though I
>>> > could be wrong).
>>>
>>> True, though I do find it an interesting point of view that helping
>>> Kolla support multiple and diverse configuration management and
>>> automation ecosystems is a "competition" rather than merely
>>> extending the breadth of the project as a whole.
>>
>>
>> Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
>> expect these different deploy tools to bring new techniques that can then be
>> reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>>
>> Thanks,
>> SamYaple
>>>
>>> --
>>> Jeremy Stanley
>>>

__
OpenStack Development Mailing List 

Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-05 Thread Michał Jastrzębski
Great stuff! Thank you Ronan. I'd love to see this guide refactored
and submitted to our docs (also take a longer look how to make full
fledged support in kolla tree). Looking for volunteers:)

On 5 January 2017 at 07:59, Jeffrey Zhang  wrote:
> Thanks Ronan,
>
> Great approach.
>
> I am always expecting multi-region support in Kolla. I hope what you did can
> be merged into Kolla.
>
>
>
> On Thu, Jan 5, 2017 at 10:12 PM, Ronan-Alexandre Cherrueau
>  wrote:
>>
>> Hello,
>>
>>
>> TL;DR: We make a multi-regions deployment with Kolla. It requires to
>> patch the code a little bit, and you can find the diff on our
>> GitHub[1]. This patch is just a first attempt to support multi-regions
>> in Kolla and it raises questions. Some modifications are not done in
>> an idiomatic way and we do not expect this to be merged in Kolla. The
>> reminder of this mail explains our patch and states our questions.
>>
>>
>> At Inria/Discovery[2], we evaluate OpenStack at scale for the
>> Performance Working Group. So far, we focus on one single OpenStack
>> region deployment with hundreds of computes and we always go with
>> Kolla for our deployment. Over the last few days, we tried to achieve
>> a multi-regions OpenStack deployment with Kolla. We want to share with
>> you our current deployment workflow, patches we had to apply on Kolla
>> to support multi-regions, and also ask you if we do things correctly.
>>
>> First of all, our multi-regions deployment follows the one described
>> by the OpenStack documentation[3]. Concretely, the deployment
>> considers /one/ Administrative Region (AR) that contains Keystone and
>> Horizon. This is a Kolla-based deployment, so Keystone is hidden
>> behind an HAProxy, and has MariaDB and memcached as backend. At the
>> same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
>> OpenStack, except Keystone. We got something as follows at the end of
>> the deployment:
>>
>>
>> Admin Region (AR):
>> - control:
>>   * Horizon
>>   * HAProxy
>>   * Keystone
>>   * MariaDB
>>   * memcached
>>
>> OpenStack Region x (OSRx):
>> - control:
>>   * HAProxy
>>   * nova-api/conductor/scheduler
>>   * neutron-server/l3/dhcp/...
>>   * glance-api/registry
>>   * MariaDB
>>   * RabbitMQ
>>
>> - compute1:
>>   * nova-compute
>>   * neutron-agent
>>
>> - compute2: ...
>>
>>
>> We do the deployment by running Kolla n+1 times. The first run deploys
>> the Administrative Region (AR) and the other runs deploy OpenStack
>> Regions (OSR). For each run, we fix the value of `openstack_region_name'
>> variable to the name of the current region.
>>
>> In the context of multi-regions, Keystone (in the AR) should be
>> available to all OSRs. This means, there are as many Keystone
>> endpoints as regions. For instance, if we consider two OSRs, the
>> result of listing endpoints at the end of the AR deployment looks like
>> this:
>>
>>
>>  $ openstack endpoint list
>>
>>  | Region | Serv Name | Serv Type | Interface | URL                          |
>>  |--------+-----------+-----------+-----------+------------------------------|
>>  | AR     | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>>  | AR     | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>>  | AR     | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>>  | OSR1   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>>  | OSR1   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>>  | OSR1   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>>  | OSR2   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>>  | OSR2   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>>  | OSR2   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>>
>>
>> This requires patching the `keystone/tasks/register.yml' play[4] to
>> re-execute the `Creating admin project, user, role, service, and
>> endpoint' task for all regions we consider. An example of such a patch
>> is given on our GitHub[5]. In this example, the `openstack_regions'
>> variable is a list that contains the names of all regions (see [6]).
>> As a drawback, the patch requires knowing all OSRs in advance. A
>> better implementation would execute the `Creating admin project, user,
>> role, service, and endpoint' task every time a new OSR is about to be
>> deployed. But this requires moving the task somewhere else in the
>> Kolla workflow, and we have no idea where this should be.
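Stripped of the Ansible machinery, the effect of re-running the registration task per region amounts to a loop like the following sketch (plain Python with an injected `create_endpoint` callable; the names here are illustrative, not the actual Kolla interface):

```python
def register_identity_endpoints(create_endpoint, regions, urls):
    # One identity endpoint per (region, interface) pair -- what the
    # patched register.yml achieves by re-executing the registration
    # task for every entry of the `openstack_regions' list.
    for region in regions:
        for interface, url in urls.items():
            create_endpoint(region=region, service='keystone',
                            interface=interface, url=url)

# Record the calls instead of talking to a real Keystone:
created = []
register_identity_endpoints(
    lambda **kwargs: created.append(kwargs),
    regions=['AR', 'OSR1', 'OSR2'],
    urls={'public':   'http://10.24.63.248:5000/v3',
          'internal': 'http://10.24.63.248:5000/v3',
          'admin':    'http://10.24.63.248:35357/v3'},
)
```

Three regions times three interfaces yields the nine endpoints shown in the listing above.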
>>
>> In the AR, we also have to change the Horizon configuration file to
>> handle multi-regions[7]. The modification can be done easily and
>> idiomatically by setting the `node_custom_config' variable to the
>> `multi-regions' directory[8], benefiting from Kolla's config-merging
>> system.
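For reference, the Horizon knob involved is `AVAILABLE_REGIONS` in `local_settings`; a sketch of what the override dropped under the custom-config directory might contain, assuming the two-OSR endpoint listing above (the region names and URLs are taken from that listing):

```python
# Horizon local_settings override for a two-OSR deployment: a list of
# (keystone_url, region_display_name) tuples drives Horizon's region
# selector in the login screen and header.
AVAILABLE_REGIONS = [
    ('http://10.24.63.248:5000/v3', 'OSR1'),
    ('http://10.24.63.248:5000/v3', 'OSR2'),
]

# Default to the first region's Keystone endpoint.
OPENSTACK_KEYSTONE_URL = AVAILABLE_REGIONS[0][0]
```

Both regions point at the single AR Keystone here; the region chosen in the selector determines which service catalog Horizon uses.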
>>
>> Also, deploying OSRs requires patching kolla-toolbox, as it does not
>> seem to be region-aware. In particular, patch 

Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Michał Jastrzębski
Ad kolla-ansible core team +2ing new deliverable, I agree with Sam,
there is no reason in my mind for kolla-ansible/k8s core team to vote
on accepting new deliverable. I think this should be lightweight and
easy, we should allow experimentation (after all, kolla itself went
through a few failed iterations before ansible).

Ad disconnect, I think we are all interested in other orch tools more
or less, but it's more about "who should allow a new one to be added",
and that requires more than interest, as it might potentially block
adding a new deliverable. Avoiding this disconnect is exactly why I'd
like to keep all deliverable teams under one Kolla umbrella and keep
everyone in the same community so they can make use of each other's
experience (that would also mean that kolla-puppet is what I'd like to
see rather than puppet-kolla :)).

Ad multi-deployment-tool friendly rivalry, it is meant to extend the
breadth of the project indeed, but let's face it, religious wars are
real (and vim is better than emacs ;)). I don't think the problem
would be ill intent though; I could easily see the problem being more
of the "I don't have time to look at this review queue" sort. Stalling
reviews can kill lots of potentially great changes.


On 5 January 2017 at 09:02, Sam Yaple  wrote:
> On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:
>>
>> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
>> [...]
>> > I do feel this is slightly different than what's described, since
>> > it is not unrelated services but rather, for lack of a better word,
>> > competing services. To my knowledge infra doesn't have several
>> > services doing the same job with different core teams (though I
>> > could be wrong).
>>
>> True, though I do find it an interesting point of view that helping
>> Kolla support multiple and diverse configuration management and
>> automation ecosystems is a "competition" rather than merely
>> extending the breadth of the project as a whole.
>
>
> Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
> expect these different deploy tools to bring new techniques that can then be
> reapplied to kolla-ansible and kolla-kubernetes to help out everyone.
>
> Thanks,
> SamYaple
>>
>> --
>> Jeremy Stanley
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-01-05 Thread Chris Dent


Greetings OpenStack community,

Our first meeting for 2017 opened with a bang. Lots of people in attendance to 
talk about the ongoing adventure of trying to improve the handling of the 
concept of visibility in Glance. If we had had more time we could have 
continued talking about it for a long time. We touched on issues of backwards 
compatibility in the face of semantic confusion, microversions, the value and 
strength of guidelines, and the impact of API changes on users. The current 
blocking issue is described at 
https://etherpad.openstack.org/p/glance-ocata-community-images-api-stability . 
We mostly agreed that the long term satisfaction of all users, existing and 
future, trumps anything else. Further discussion will continue on the review 
mentioned in the etherpad: https://review.openstack.org/#/c/412731/

We also had some pretty strenuous discussion about the pagination review at 
https://review.openstack.org/#/c/390973/ but ran out of time before getting 
into any true discussion of the capabilities review 
https://review.openstack.org/#/c/386555/ .

All of this suggests we've got some useful work to look forward to.

# Newly Published Guidelines

Nothing new

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at the moment, but you are always very welcome to review stuff in the next 
section.

# Guidelines Currently Under Review [3]

* Add guidelines on usage of state vs. status
  https://review.openstack.org/#/c/411528/

* Add guidelines for boolean names
  https://review.openstack.org/#/c/411529/

* Clarify the status values in versions
  https://review.openstack.org/#/c/411849/

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:54 PM, Jeremy Stanley  wrote:

> On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
> [...]
> > I do feel this is slightly different than what's described, since it
> > is not unrelated services but rather, for lack of a better word,
> > competing services. To my knowledge infra doesn't have several
> > services doing the same job with different core teams (though I could
> > be wrong).
>
> True, though I do find it an interesting point of view that helping
> Kolla support multiple and diverse configuration management and
> automation ecosystems is a "competition" rather than merely
> extending the breadth of the project as a whole.
>

Yea I computer good, but I am no wordsmith. Perhaps 'friendly rivalry'? I
expect these different deploy tools to bring new techniques that can then
be reapplied to kolla-ansible and kolla-kubernetes to help out everyone.

Thanks,
SamYaple

> --
> Jeremy Stanley
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Alex Schultz
On Thu, Jan 5, 2017 at 8:58 AM, Sam Yaple  wrote:

> Involving kolla-ansible and kolla-kubernetes in a decision about
> kolla-salt (or kolla-puppet, or kolla-chef) is silly since the projects are
> unrelated. That would be like involving glance when cinder has a new
> service because they both use keystone. The kolla-core team is reasonable
> since those are the images being consumed.
>
>
Technically, I think if there was going to be a puppet module for kolla, it
should fall under the Puppet OpenStack namespace (as puppet-kolla).  So in
this case kolla-salt is only falling under the kolla namespace because
there's no longer a salt group in OpenStack right?  I don't have any skin
in the kolla game, but I would encourage a puppet-kolla if someone wanted
to contribute :)  Just to add my thoughts on the original question posed by
this thread as an outside observer, I would echo what Michal said since the
PTL should at the very least have a say.

Thanks,
-Alex


> Thanks,
> SamYaple
>
>
>
> Sam Yaple
>
> On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
> wrote:
>
>> Michal,
>>
>> Another option is 2 individuals from each core review team + PTL.  That
>> is lighter weight than 3 and 4, yet more constrained than 1 and 2, and would
>> be my preferred choice (or alternatively 3 or 4).  Adding a deliverable is
>> serious business ☺
>>
>> FWIW I don’t think we are at an impasse, it just requires a policy vote
>> as we do today.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Michał Jastrzębski 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: Wednesday, January 4, 2017 at 3:38 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: [openstack-dev] [tc][kolla] Adding new deliverables
>>
>> Hello,
>>
>> New deliverable to Kolla was proposed, and we found ourselves in a bit
>> of an impasse regarding process of accepting new deliverables. Kolla
>> community grew a lot since we were singular project, and now we have 3
>> deliverables already (kolla, kolla-ansible and kolla-kubernetes). 4th
>> one was proposed, kolla-salt, all of them having separate core teams
>> today. How to we proceed with this and following deliverables? How to
>> we accept them to kolla namespace? I can think of several ways.
>>
>> 1) Open door policy - whoever wants to create new deliverable, is just
>> free to do so.
>> 2) Lightweight agreement - just 2*+2 from Kolla core team to some list
>> of deliverables that will sit in the kolla repo, potentially 2*+2 + PTL
>> vote (it would be good for the PTL to know what he/she is PTL of ;))
>> 3) Majority vote from Kolla core team - much like we do with policy
>> changes today
>> 4) Majority vote from all Kolla deliverables core teams
>>
>> My personal favorite is option 2+PTL vote. We want to encourage
>> experiments and new contributors to use our namespace, for both larger
>> community and ease of navigation for users.
>>
>> One caveat to this would be to note that pre-1.0 projects are
>> considered dev/experimental.
>>
>> Thoughts?
>>
>> Cheers,
>> Michal
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Rolling upgrades vs. duplication of prop data

2017-01-05 Thread Zane Bitter

On 05/01/17 11:41, Crag Wolfe wrote:

Hi,

I have a patch[1] to support the de-duplication of resource properties
data between events and resources. In the ideal rolling-upgrade world,
we would be writing data to the old and new db locations, only reading
from the old in the first release (let's assume Ocata). The problem is
that in this particular case, we would be duplicating a lot of data, so
[1] for now does not take that approach. I.e., it is not rolling-upgrade
friendly.

So, we need to decide what to do for Ocata:

A. Support assert:supports-upgrade[2] and resign ourselves to writing
duplicated resource prop. data through Pike (following the standard
strategy of write to old/new and read from old, write to old/new and
read from new, write/read from new over O,P,Q).

B. Push assert:supports-upgrade back until Pike, and avoid writing
resource prop. data in multiple locations in Ocata.


+1

Rabi mentioned that we don't yet have tests in place to claim the tag in 
Ocata anyway, so I vote for making it easy on ourselves until we have 
to. Anything that involves shifting stuff between tables like this 
inevitably gets pretty gnarly.



C. DB triggers.


-2! -2!


I vote for B. I'm pretty sure there is not much support for C (count me
in that group :), but throwing it out there just in case.

Thanks,

--Crag

[1] https://review.openstack.org/#/c/363415/

[2] https://review.openstack.org/#/c/407989/








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Jeremy Stanley
On 2017-01-05 16:46:36 + (+), Sam Yaple wrote:
[...]
> I do feel this is slightly different than what's described, since it is
> not unrelated services but rather, for lack of a better word, competing
> services. To my knowledge infra doesn't have several services doing the
> same job with different core teams (though I could be wrong).

True, though I do find it an interesting point of view that helping
Kolla support multiple and diverse configuration management and
automation ecosystems is a "competition" rather than merely
extending the breadth of the project as a whole.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:34 PM, Jeremy Stanley  wrote:

> On 2017-01-05 15:58:45 + (+), Sam Yaple wrote:
> > Involving kolla-ansible and kolla-kubernetes in a decision about
> > kolla-salt (or kolla-puppet, or kolla-chef) is silly since the
> > projects are unrelated. That would be like involving glance when
> > cinder has a new service because they both use keystone. The
> > kolla-core team is reasonable since those are the images being
> > consumed.
>
> In contrast, the Infra team also has a vast number of fairly
> unrelated deliverables with their own dedicated core review teams.
> In our case, we refer to them as our "Infra Council" and ask them to
> weigh in with Roll-Call votes on proposals to the infra-specs repo.
> In short, just because people work primarily on one particular
> service doesn't mean they're incapable of providing useful feedback
> on and help prioritize proposals to add other (possibly unrelated)
> services.
>

I do feel this is slightly different than what's described, since it is
not unrelated services but rather, for lack of a better word, competing
services. To my knowledge infra doesn't have several services doing the
same job with different core teams (though I could be wrong).

Thanks,
SamYaple

> --
> Jeremy Stanley
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] question for OpenStack User Survey

2017-01-05 Thread Brian Rosmaita
At today's Glance meeting, we decided to go with a version of the
question Erno posed below.  I've posted an etherpad where we can refine
the question in order to increase the probability of receiving useful
responses:

https://etherpad.openstack.org/p/glance-user-survey-question-feb2017

Please add your refinements before 23:59 UTC 8 January 2017.

thanks,
brian

On 1/3/17 6:11 AM, Erno Kuvaja wrote:
> On Thu, Dec 22, 2016 at 11:05 PM, Brian Rosmaita
>  wrote:
>> Glancers,
>>
>> We have the opportunity to submit one question for the upcoming User
>> Survey, which launches on or before February 1.  We'd receive responses
>> in advance of the February PTG, so this would be a good opportunity for
>> glancers who are thinking of organizing design sessions at the PTG to
>> get some user input to discuss at the PTG.
>>
>> The question is due on January 9, so I'll put an item on the January 5
>> meeting agenda, and if there are multiple contenders, we can discuss and
>> vote to select the question likely to have the most impact.
>>
>> As far as the question format goes, you can have one of:
>> * multiple choice, pick one of up to 6 items
>> * multiple choice, pick all that apply of up to 6 items
>> * short answer
>> (Both multiple choice formats allow an "other" option for a write-in
>> candidate.)
>>
>> You also get to specify who you want the question aimed at:
>> * people using Glance in a production/test cloud
>> * people testing Glance in a production/test cloud
>> * people interested in Glance
>> (You can pick more than one group.)
>>
>> Add your question to the agenda for the January 5 meeting:
>> https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>
>> cheers,
>> brian
>>
> 
> Can it have "If X, why?" box?
> 
> I'd like to have question "Which Images Api Version are you using?
> Pick any that applies. a) v1 b) v2; If v1, Why?"
> 
> Could help us prioritize the work needed to get everybody off v1 and
> get it out of support.
> 
> - jokke
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
On Thu, Jan 5, 2017 at 4:06 PM, Doug Hellmann  wrote:

> Excerpts from Sam Yaple's message of 2017-01-05 15:58:45 +:
> > Involving kolla-ansible and kolla-kubernetes in a decision about
> > kolla-salt (or kolla-puppet, or kolla-chef) is silly since the
> > projects are unrelated. That would be like involving glance when
> > cinder has a new service because they both use keystone. The
> > kolla-core team is reasonable since those are the images being
> > consumed.
>
> If those teams are so disconnected as to not have an interest in the
> work the other is doing, why are they part of the same umbrella project
> team?
>
The disconnection is rather new, though it has been a long time coming. In
Newton, all of the kolla-ansible code existed in the kolla repo. It has
since been split to kolla-ansible to separate interests and allow for new
projects like kolla-kubernetes (and even kolla-salt) to have the same
advantages as kolla-ansible. Be a first-class citizen. Frankly, the kolla
namespace is _not_ needed for these projects anymore. Kolla-salt does not
need to be called kolla-salt at all. It could be called some other name
without much ado. However, the primary goal of the split was to encourage
projects like this to pop up under the kolla namespace, and keep different
deployment tools from interfering with each other.

As someone who works with Ansible and Salt, I don't personally think I
should be voting on the acceptance of a new deployment tool I have no
interest in that won't affect anything I am working on. Of course, this is
just my opinion.

Thanks,
SamYaple

Doug
>
> >
> > Thanks,
> > SamYaple
> >
> >
> >
> > Sam Yaple
> >
> > On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
> > wrote:
> >
> > > Michal,
> > >
> > > Another option is 2 individuals from each core review team + PTL.
> > > That is lighter weight than 3 and 4, yet more constrained than 1
> > > and 2, and would be my preferred choice (or alternatively 3 or 4).
> > > Adding a deliverable is serious business ☺
> > >
> > > FWIW I don’t think we are at an impasse, it just requires a policy
> > > vote as we do today.
> > >
> > > Regards
> > > -steve
> > >
> > > -Original Message-
> > > From: Michał Jastrzębski 
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" <
> > > openstack-dev@lists.openstack.org>
> > > Date: Wednesday, January 4, 2017 at 3:38 PM
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > openstack-dev@lists.openstack.org>
> > > Subject: [openstack-dev] [tc][kolla] Adding new deliverables
> > >
> > > Hello,
> > >
> > > A new deliverable to Kolla was proposed, and we found ourselves in
> > > a bit of an impasse regarding the process of accepting new
> > > deliverables. The Kolla community grew a lot since we were a
> > > singular project, and we now have 3 deliverables already (kolla,
> > > kolla-ansible and kolla-kubernetes). A 4th one was proposed,
> > > kolla-salt, all of them having separate core teams today. How do we
> > > proceed with this and the following deliverables? How do we accept
> > > them into the kolla namespace? I can think of several ways.
> > >
> > > 1) Open door policy - whoever wants to create a new deliverable is
> > > just free to do so.
> > > 2) Lightweight agreement - just 2*+2 from Kolla core team to some
> > > list of deliverables that will sit in the kolla repo, potentially
> > > 2*+2 + PTL vote (it would be good for the PTL to know what he/she
> > > is PTL of ;))
> > > 3) Majority vote from Kolla core team - much like we do with policy
> > > changes today
> > > 4) Majority vote from all Kolla deliverables core teams
> > >
> > > My personal favorite is option 2 + PTL vote. We want to encourage
> > > experiments and new contributors to use our namespace, for both a
> > > larger community and ease of navigation for users.
> > >
> > > One caveat to this would be to note that pre-1.0 projects are
> > > considered dev/experimental.
> > >
> > > Thoughts?
> > >
> > > Cheers,
> > > Michal
> > >
> > > 
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [heat] Rolling upgrades vs. duplication of prop data

2017-01-05 Thread Crag Wolfe
Hi,

I have a patch[1] to support the de-duplication of resource properties
data between events and resources. In the ideal rolling-upgrade world,
we would be writing data to the old and new db locations, only reading
from the old in the first release (let's assume Ocata). The problem is
that in this particular case, we would be duplicating a lot of data, so
[1] for now does not take that approach. I.e., it is not rolling-upgrade
friendly.

So, we need to decide what to do for Ocata:

A. Support assert:supports-upgrade[2] and resign ourselves to writing
duplicated resource prop. data through Pike (following the standard
strategy of write to old/new and read from old, write to old/new and
read from new, write/read from new over O,P,Q).

B. Push assert:supports-upgrade back until Pike, and avoid writing
resource prop. data in multiple locations in Ocata.

C. DB triggers.

I vote for B. I'm pretty sure there is not much support for C (count me
in that group :), but throwing it out there just in case.
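
For reference, the write-old/new, read-old progression in option A can be sketched abstractly. This is only an illustration of the general three-release pattern under discussion; the location names and release gating below are hypothetical, not Heat's actual code.

```python
# Sketch of the standard rolling-upgrade pattern for relocating data:
# writers touch both locations first, readers switch over one release
# later, and the old location is dropped in the third release.
OLD, NEW = "resource.properties_data", "resource_properties_data"

def writer(release, db, value):
    # Ocata/Pike: duplicate writes so old and new services can coexist.
    if release in ("ocata", "pike"):
        db[OLD] = value
        db[NEW] = value
    else:  # Queens onwards: the old location is gone.
        db[NEW] = value

def reader(release, db):
    # Ocata still reads the old location; Pike onwards reads the new one.
    return db[OLD] if release == "ocata" else db[NEW]

db = {}
writer("ocata", db, {"foo": "bar"})          # an old-release writer...
assert reader("pike", db) == {"foo": "bar"}  # ...is visible to a new reader
```

The cost option A objects to is visible in the sketch: every write during Ocata and Pike stores the (potentially large) property data twice.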

Thanks,

--Crag

[1] https://review.openstack.org/#/c/363415/

[2] https://review.openstack.org/#/c/407989/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] RFC: deprecating "set IPMI credentials" feature in ironic-inspector

2017-01-05 Thread Dmitry Tantsur

On 12/13/2016 01:40 PM, Dmitry Tantsur wrote:

Hi folks!

Since nearly its beginning, ironic-inspector has had a controversial feature: we
allow a user to request changing IPMI credentials of the node after
introspection. The new credentials are passed back from inspector to the
ramdisk, and the ramdisk calls "ipmitool" to set them.

Now I realize that the feature has quite a few substantial drawbacks:
1. It's a special case in ironic-inspector. It's the only thing that runs after
introspection, and it requires special state machine states and actions.
2. There is no way to signal errors back from the ramdisk. We can only poll the
nodes to see if the new credentials match.
3. This is the only place where ironic-inspector modifies physical nodes (as
opposed to modifying the ironic database). This feels like a violation of our 
goal.
4. It depends on ipmitool actually being able to update credentials from within
the node without knowing the current ones. I'm not sure how widely it's
supported. I'm pretty sure some hardware does not support it.
5. It's not and never will be tested by any CI. It's not possible to test on VMs
at all.
6. Due to its dangerous nature, this feature is hidden behind a configuration
option, and is disabled by default.

The upside I see is that it may play nicely with node autodiscovery. I'm not
sure they work together today, though. We didn't end up using this feature in
our products, and I don't recall being approached by people using it.

I suggest deprecating this feature and removing it in Pike. The rough plan is as
follows:

I. Ocata:
 * Deprecate the configuration option enabling this feature.
 * Create an API version that returns HTTP 400 when this feature is requested.


A review is posted for this part: https://review.openstack.org/#/c/417041/


 * Deprecate the associated arguments in CLI.
 * Issue a deprecation warning in IPA when this feature is used.

II. Pike:
 * Remove the feature from IPA and ironic-inspector.
 * Remove the feature from CLI.

Please respond with your comments and/or objections to this thread. I'll soon
prepare a patch on which you'll also be able to comment.
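
The Ocata step of returning HTTP 400 from a new API version can be sketched as a simple version gate. This is an illustration only: the microversion number and request field name below are hypothetical, not ironic-inspector's actual API.

```python
# Sketch: from a given (hypothetical) API microversion onward, a request
# that asks to set IPMI credentials is rejected with HTTP 400.
DEPRECATION_VERSION = (1, 9)  # illustrative microversion, not inspector's

def validate_introspection_request(api_version, body):
    """Return (status, message) for an introspection request body."""
    if body.get("new_ipmi_credentials") and api_version >= DEPRECATION_VERSION:
        return 400, ("setting IPMI credentials is deprecated and not "
                     "available after API version %d.%d" % DEPRECATION_VERSION)
    return 202, "accepted"

status, _ = validate_introspection_request((1, 9), {"new_ipmi_credentials": True})
assert status == 400  # new API version: feature refused
status, _ = validate_introspection_request((1, 8), {"new_ipmi_credentials": True})
assert status == 202  # older API version: still accepted during deprecation
```

Gating on the requested version (rather than removing the code outright) keeps older clients working through the deprecation period described above.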

Dmitry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Jeremy Stanley
On 2017-01-05 15:58:45 + (+), Sam Yaple wrote:
> Involving kolla-ansible and kolla-kubernetes in a decision about kolla-salt
> (or kolla-puppet, or kolla-chef) is silly since the projects are unrelated.
> That would be like involving glance when cinder has a new service because
> they both use keystone. The kolla-core team is reasonable since those are
> the images being consumed.

In contrast, the Infra team also has a vast number of fairly
unrelated deliverables with their own dedicated core review teams.
In our case, we refer to them as our "Infra Council" and ask them to
weigh in with Roll-Call votes on proposals to the infra-specs repo.
In short, just because people work primarily on one particular
service doesn't mean they're incapable of providing useful feedback
on, and helping prioritize, proposals to add other (possibly
unrelated) services.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-01-05 15:58:45 +:
> Involving kolla-ansible and kolla-kubernetes in a decision about kolla-salt
> (or kolla-puppet, or kolla-chef) is silly since the projects are unrelated.
> That would be like involving glance when cinder has a new service because
> they both use keystone. The kolla-core team is reasonable since those are
> the images being consumed.

If those teams are so disconnected as to not have an interest in the
work the other is doing, why are they part of the same umbrella project
team?

Doug

> 
> Thanks,
> SamYaple
> 
> 
> 
> Sam Yaple
> 
> On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
> wrote:
> 
> > Michal,
> >
> > Another option is 2 individuals from each core review team + PTL.  That is
> > lighter weight than 3 and 4, yet more constrained than 1 and 2, and would be
> > my preferred choice (or alternatively 3 or 4).  Adding a deliverable is
> > serious business ☺
> >
> > FWIW I don’t think we are at an impasse, it just requires a policy vote
> > as we do today.
> >
> > Regards
> > -steve
> >
> > -Original Message-
> > From: Michał Jastrzębski 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > Date: Wednesday, January 4, 2017 at 3:38 PM
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > Subject: [openstack-dev] [tc][kolla] Adding new deliverables
> >
> > Hello,
> >
> > New deliverable to Kolla was proposed, and we found ourselves in a bit
> > of an impasse regarding process of accepting new deliverables. Kolla
> > community grew a lot since we were a singular project, and now we have 3
> > deliverables already (kolla, kolla-ansible and kolla-kubernetes). 4th
> > one was proposed, kolla-salt, all of them having separate core teams
> > today. How do we proceed with this and following deliverables? How do
> > we accept them into the kolla namespace? I can think of several ways.
> >
> > 1) Open door policy - whoever wants to create new deliverable, is just
> > free to do so.
> > 2) Lightweight agreement - just 2*+2 from Kolla core team to some list
> > of deliverables that will sit in kolla repo, potentially 2*+2 + PTL
> > vote; it would be good for the PTL to know what he/she is PTL of ;)
> > 3) Majority vote from Kolla core team - much like we do with policy
> > changes today
> > 4) Majority vote from all Kolla deliverables core teams
> >
> > My personal favorite is option 2+PTL vote. We want to encourage
> > experiments and new contributors to use our namespace, for both larger
> > community and ease of navigation for users.
> >
> > One caveat to this would be to note that pre-1.0 projects are
> > considered dev/experimental.
> >
> > Thoughts?
> >
> > Cheers,
> > Michal
> >
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Multi-Regions Support

2017-01-05 Thread Jeffrey Zhang
Thanks Ronan,

Great approach.

I have long been hoping for multi-region support in Kolla. I hope what you
did can be merged into Kolla.



On Thu, Jan 5, 2017 at 10:12 PM, Ronan-Alexandre Cherrueau <
ronan-alexandre.cherru...@inria.fr> wrote:

> Hello,
>
>
> TL;DR: We made a multi-region deployment with Kolla. It requires
> patching the code a little, and you can find the diff on our
> GitHub[1]. This patch is just a first attempt to support multi-regions
> in Kolla and it raises questions. Some modifications are not done in
> an idiomatic way and we do not expect this to be merged into Kolla. The
> remainder of this mail explains our patch and states our questions.
>
>
> At Inria/Discovery[2], we evaluate OpenStack at scale for the
> Performance Working Group. So far, we focus on a single OpenStack
> region deployment with hundreds of computes and we always go with
> Kolla for our deployment. Over the last few days, we tried to achieve
> a multi-regions OpenStack deployment with Kolla. We want to share with
> you our current deployment workflow, patches we had to apply on Kolla
> to support multi-regions, and also ask you if we do things correctly.
>
> First of all, our multi-regions deployment follows the one described
> by the OpenStack documentation[3]. Concretely, the deployment
> considers /one/ Administrative Region (AR) that contains Keystone and
> Horizon. This is a Kolla-based deployment, so Keystone is hidden
> behind an HAProxy, and has MariaDB and memcached as backend. At the
> same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
> OpenStack, except Keystone. We got something as follows at the end of
> the deployment:
>
>
> Admin Region (AR):
> - control:
>   * Horizon
>   * HAProxy
>   * Keystone
>   * MariaDB
>   * memcached
>
> OpenStack Region x (OSRx):
> - control:
>   * HAProxy
>   * nova-api/conductor/scheduler
>   * neutron-server/l3/dhcp/...
>   * glance-api/registry
>   * MariaDB
>   * RabbitMQ
>
> - compute1:
>   * nova-compute
>   * neutron-agent
>
> - compute2: ...
>
>
> We do the deployment by running Kolla n+1 times. The first run deploys
> the Administrative Region (AR) and the other runs deploy OpenStack
> Regions (OSR). For each run, we fix the value of `openstack_region_name'
> variable to the name of the current region.
>
> In the context of multi-regions, Keystone (in the AR) should be
> available to all OSRs. This means, there are as many Keystone
> endpoints as regions. For instance, if we consider two OSRs, the
> result of listing endpoints at the end of the AR deployment looks like
> this:
>
>
>  $ openstack endpoint list
>
>  | Region | Serv Name | Serv Type | Interface | URL                          |
>  |--------+-----------+-----------+-----------+------------------------------|
>  | AR     | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | AR     | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR1   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR1   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>  | OSR2   | keystone  | identity  | public    | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
>  | OSR2   | keystone  | identity  | admin     | http://10.24.63.248:35357/v3 |
>
>
> This requires patching the `keystone/tasks/register.yml' play[4] to
> re-execute the `Creating admin project, user, role, service, and
> endpoint' task for all regions we consider. An example of such a patch
> is given on our GitHub[5]. In this example, the `openstack_regions'
> variable is a list that contains the name of all regions (see [6]). As
> a drawback, the patch implies to know in advance all OSR. A better
> implementation would execute the `Creating admin project, user, role,
> service, and endpoint' task every time a new OSR is going to be
> deployed. But this requires to move the task somewhere else in the
> Kolla workflow and we have no idea where this should be.
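
The "register the same identity endpoints under every region" idea above can be sketched with plain data structures standing in for the real Keystone API calls. Region names and URLs follow the listing earlier in the mail; the function itself is illustrative, not Kolla code.

```python
# Sketch: the single Admin Region Keystone is registered as the identity
# endpoint under every region name, so each OpenStack Region resolves
# identity to the same service.
def register_identity_endpoints(regions, public_url, internal_url, admin_url):
    endpoints = []
    for region in regions:
        for interface, url in (("public", public_url),
                               ("internal", internal_url),
                               ("admin", admin_url)):
            endpoints.append({"region": region, "service": "keystone",
                              "interface": interface, "url": url})
    return endpoints

eps = register_identity_endpoints(
    ["AR", "OSR1", "OSR2"],
    "http://10.24.63.248:5000/v3",
    "http://10.24.63.248:5000/v3",
    "http://10.24.63.248:35357/v3")
assert len(eps) == 9  # 3 regions x 3 interfaces, matching the listing above
```

The drawback Ronan describes is visible here too: the loop needs the full region list up front, whereas re-running the registration task per newly deployed OSR would not.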
>
> In the AR, we also have to change the Horizon configuration file to
> handle multi-regions[7]. The modification could be done easily and
> idiomatically by setting the `node_custom_config' variable to the
> `multi-regions' directory[8] and benefits from Kolla merging config
> system.
>
> Also, deploying OSRs requires patching the kolla-toolbox as it seems
> not region-aware. In particular, patch the `kolla_keystone_service.py'
> module[9] that is responsible for contacting Keystone and creating a
> new endpoint when we register a new OpenStack service.
>
>
>  73  for _endpoint in cloud.keystone_client.endpoints.list():
>  74      if _endpoint.service_id == service.id and \
>  75         _endpoint.interface == interface:
>  76          endpoint = _endpoint
>  77          if 

Re: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-05 Thread Steven Hardy
On Thu, Jan 05, 2017 at 02:56:15PM +, arkady.kanev...@dell.com wrote:
> I have a concern about relying on the undercloud for overcloud Swift.
> The undercloud is not HA (yet), so it may not be operational when a disk fails or 
> an overcloud Swift node is added/deleted.

I think the proposal is only for a deploy-time dependency, after the
overcloud is deployed there should be no dependency on the undercloud
swift, because the ring data will have been copied to all the nodes.

During create/update operations you need the undercloud operational by
definition, so I think this is probably OK?

Steve
> 
> -Original Message-
> From: Christian Schwede [mailto:cschw...@redhat.com] 
> Sent: Thursday, January 05, 2017 6:14 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [TripleO] Fixing Swift rings when 
> upscaling/replacing nodes in TripleO deployments
> 
> Hello everyone,
> 
> there was an earlier discussion on $subject last year [1] regarding a bug 
> when upscaling or replacing nodes in TripleO [2].
> 
> Briefly summarized: Swift rings are built on each node separately, and if 
> adding or replacing nodes (or disks) this will break the rings because they 
> are no longer consistent across the nodes. What's needed are the previous 
> ring builder files on each node before changing the rings.
> 
> My former idea in [1] was to build the rings in advance on the undercloud, 
> and also using introspection data to gather a set of disks on each node for 
> the rings.
> 
> However, this changes the current way of deploying significantly, and also 
> requires more work in TripleO and Mistral (for example to trigger a ring 
> build on the undercloud after the nodes have been started, but before the 
> deployment triggers the Puppet run).
> 
> I prefer smaller steps to keep everything stable for now, and therefore I 
> changed my patches quite a bit. This is my updated proposal:
> 
> 1. Two temporary undercloud Swift URLs (one PUT, one GET) will be computed 
> before Mistral starts the deployments. A new Mistral action to create such 
> URLs is required for this [3].
> 2. Each overcloud node will try to fetch rings from the undercloud Swift 
> deployment before updating its set of rings locally, using the temporary GET 
> url. This guarantees that each node uses the same source set of builder 
> files. This happens in step 2 [4].
> 3. puppet-swift runs like today, updating the rings if required.
> 4. Finally, at the end of the deployment (in step 5) the nodes will upload 
> their modified rings to the undercloud using the temporary PUT urls. 
> swift-recon will run before this, ensuring that all rings across all nodes 
> are consistent.
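
The proposed fetch/rebuild/upload flow can be sketched end to end. An in-memory dict stands in for the tempurl-addressable undercloud Swift container; object names and function boundaries are illustrative, not the actual patches.

```python
# Sketch of the deploy-time flow: each node seeds its ring builder files
# from a shared undercloud copy before rebuilding, then publishes its
# updated rings for the next node or deployment to consume.
undercloud_swift = {}  # stand-in for the tempurl-accessible container

def fetch_rings(node_state):
    # Step 2: start from the shared builder files, if any exist yet.
    if "rings.tar.gz" in undercloud_swift:
        node_state["builders"] = dict(undercloud_swift["rings.tar.gz"])

def rebuild_rings(node_state, change):
    # Step 3: puppet-swift updates the rings from a consistent base.
    node_state.setdefault("builders", {}).update(change)

def upload_rings(node_state):
    # Step 4: publish the modified rings back to the undercloud.
    undercloud_swift["rings.tar.gz"] = dict(node_state["builders"])

node_a, node_b = {}, {}
fetch_rings(node_a); rebuild_rings(node_a, {"object.builder": "v1"}); upload_rings(node_a)
fetch_rings(node_b); rebuild_rings(node_b, {"container.builder": "v1"}); upload_rings(node_b)
# node_b started from node_a's upload, so the shared copy now has both files.
assert undercloud_swift["rings.tar.gz"] == {"object.builder": "v1",
                                            "container.builder": "v1"}
```

This also illustrates Steven's point above: the undercloud copy is only touched at deploy time; once the rings are on every node, the overcloud has no runtime dependency on it.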
> 
> The two required patches [3][4] are not overly complex IMO, but they solve 
> the problem of adding or replacing nodes without changing the current 
> workflow significantly. It should be even easy to backport them if needed.
> 
> I'll continue working on an improved way of deploying Swift rings (using 
> introspection data), but using this approach it could be even done using 
> todays workflow, feeding data into puppet-swift (probably with some updates 
> to puppet-swift/tripleo-heat-templates to allow support for regions, zones, 
> different disk layouts and the like). However, all of this could be built on 
> top of these two patches.
> 
> I'm curious about your thoughts and welcome any feedback or reviews!
> 
> Thanks,
> 
> -- Christian
> 
> 
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720.html
> [2] https://bugs.launchpad.net/tripleo/+bug/1609421
> [3] https://review.openstack.org/#/c/413229/
> [4] https://review.openstack.org/#/c/414460/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Steve Hardy
Red Hat Engineering, Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][kolla] Adding new deliverables

2017-01-05 Thread Sam Yaple
Involving kolla-ansible and kolla-kubernetes in a decision about kolla-salt
(or kolla-puppet, or kolla-chef) is silly since the projects are unrelated.
That would be like involving glance when cinder has a new service because
they both use keystone. The kolla-core team is reasonable since those are
the images being consumed.

Thanks,
SamYaple



Sam Yaple

On Thu, Jan 5, 2017 at 12:31 AM, Steven Dake (stdake) 
wrote:

> Michal,
>
> Another option is 2 individuals from each core review team + PTL.  That is
> lighter weight than 3 and 4, yet more constrained than 1 and 2, and would be
> my preferred choice (or alternatively 3 or 4).  Adding a deliverable is
> serious business ☺
>
> FWIW I don’t think we are at an impasse, it just requires a policy vote
> as we do today.
>
> Regards
> -steve
>
> -Original Message-
> From: Michał Jastrzębski 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, January 4, 2017 at 3:38 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [tc][kolla] Adding new deliverables
>
> Hello,
>
> New deliverable to Kolla was proposed, and we found ourselves in a bit
> of an impasse regarding process of accepting new deliverables. Kolla
> community grew a lot since we were a singular project, and now we have 3
> deliverables already (kolla, kolla-ansible and kolla-kubernetes). 4th
> one was proposed, kolla-salt, all of them having separate core teams
> today. How do we proceed with this and following deliverables? How do
> we accept them into the kolla namespace? I can think of several ways.
>
> 1) Open door policy - whoever wants to create new deliverable, is just
> free to do so.
> 2) Lightweight agreement - just 2*+2 from Kolla core team to some list
> of deliverables that will sit in kolla repo, potentially 2*+2 + PTL
> vote; it would be good for the PTL to know what he/she is PTL of ;)
> 3) Majority vote from Kolla core team - much like we do with policy
> changes today
> 4) Majority vote from all Kolla deliverables core teams
>
> My personal favorite is option 2+PTL vote. We want to encourage
> experiments and new contributors to use our namespace, for both larger
> community and ease of navigation for users.
>
> One caveat to this would be to note that pre-1.0 projects are
> considered dev/experimental.
>
> Thoughts?
>
> Cheers,
> Michal
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Monasca devstack installation is failing

2017-01-05 Thread Brandt, Ryan
I think this is going to require more time and information. Perhaps you could 
open a launchpad bug to track this?

I just brought up a new vagrant successfully yesterday, would you mind checking 
if you have the latest code and trying again? Also, if there is anything else 
relating to storm or monasca-thresh in that last log you provided, could you 
include that as well?

Thanks,
Ryan

From: Pradeep Singh
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, January 4, 2017 at 11:33 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Monasca] Monasca devstack installation is failing


Hello,

I am trying to install Monasca devstack using vagrant file given in monasca-ui 
repo.

And it's failing again and again with the below error.

Could you please help me? I would really appreciate your help.


==> default: 2017-01-05 06:28:06.125 | ++ functions-common:start_service:2306   
   :   sudo /bin/systemctl start monasca-thresh

==> default: 2017-01-05 06:28:27.440 | Job for monasca-thresh.service failed 
because the control process exited with error code. See "systemctl status 
monasca-thresh.service" and "journalctl -xe" for details.

==> default: 2017-01-05 06:28:27.444 | ++ 
/opt/stack/monasca-api/devstack/plugin.sh:start_monasca_services:235 :   
restart_service monasca-thresh

==> default: 2017-01-05 06:28:27.447 | ++ functions-common:restart_service:2282 
   :   '[' -x /bin/systemctl ']'

==> default: 2017-01-05 06:28:27.450 | ++ functions-common:restart_service:2283 
   :   sudo /bin/systemctl restart monasca-thresh

==> default: 2017-01-05 06:28:47.368 | Job for monasca-thresh.service failed 
because the control process exited with error code. See "systemctl status 
monasca-thresh.service" and "journalctl -xe" for details.

vagrant@devstack:~$ systemctl status monasca-thresh.service

● monasca-thresh.service - LSB: Monitoring threshold engine running under storm

   Loaded: loaded (/etc/init.d/monasca-thresh; bad; vendor preset: enabled)

   Active: failed (Result: exit-code) since Thu 2017-01-05 06:28:47 UTC; 27s ago

 Docs: man:systemd-sysv-generator(8)

  Process: 28638 ExecStart=/etc/init.d/monasca-thresh start (code=exited, 
status=1/FAILURE)


Jan 05 06:28:47 devstack monasca-thresh[28638]: main()

Jan 05 06:28:47 devstack monasca-thresh[28638]:   File 
"/opt/storm/current/bin/storm.py", line 766, in main

Jan 05 06:28:47 devstack monasca-thresh[28638]: (COMMANDS.get(COMMAND, 
unknown_command))(*ARGS)

Jan 05 06:28:47 devstack monasca-thresh[28638]:   File 
"/opt/storm/current/bin/storm.py", line 248, in jar

Jan 05 06:28:47 devstack monasca-thresh[28638]: os.remove(tmpjar)

Jan 05 06:28:47 devstack monasca-thresh[28638]: OSError: [Errno 2] No such file 
or directory: '/tmp/30c1980ed31011e68cd3080027b55b5e.jar'

Jan 05 06:28:47 devstack systemd[1]: monasca-thresh.service: Control process 
exited, code=exited status=1

Jan 05 06:28:47 devstack systemd[1]: Failed to start LSB: Monitoring threshold 
engine running under storm.

Jan 05 06:28:47 devstack systemd[1]: monasca-thresh.service: Unit entered 
failed state.

Jan 05 06:28:47 devstack systemd[1]: monasca-thresh.service: Failed with result 
'exit-code'.


Thanks,

Pradeep



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] Intermittent database transaction issues, affecting the tempest gate

2017-01-05 Thread Bernard Cafarelli
After some research, this review fixes the tempest failures:
https://review.openstack.org/#/c/416503/1 (newer patchset has an
unrelated fix for the functional tests gate)

Multiple local tempest runs and gate rechecks all turned green with
this fix. That is the good news part.

The bad news is that I am still not sure on the root cause. The code
that triggers the problems is:
https://github.com/openstack/networking-sfc/blob/f5b52d5304796e44431b3874117aa0be91ed13d8/networking_sfc/services/sfc/drivers/ovs/db.py#L292
_get_port_detail() is just a wrapper on CommonDbMixin._get_by_id()
from neutron, so is it triggered by two _model_query() calls in a row?

Hoping someone can shed some light here; next time it may not be as easy
a fix as removing an unused line.


On 22 December 2016 at 20:48, Mike Bayer  wrote:
>
> On 12/20/2016 06:50 PM, Cathy Zhang wrote:
>>
>> Hi Bernard,
>>
>> Thanks for the email. I will take a look at this. Xiaodong has been
>> working on tempest test scripts.
>> I will work with Xiaodong on this issue.
>
>
> I've added a comment to the issue which refers to upstream SQLAlchemy issue
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3803 as a potential
> contributor, though looking at the logs linked from the issue it appears
> that database deadlocks are also occurring which may also be a precursor
> here.   There are many improvements in SQLAlchemy 1.1 such that the
> "rollback()" state should not be as susceptible to a corrupted database
> connection as seems to be the case here.
>
>
>
>
>
>>
>> Cathy
>>
>>
>> -Original Message-
>> From: Bernard Cafarelli [mailto:bcafa...@redhat.com]
>> Sent: Tuesday, December 20, 2016 3:00 AM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [networking-sfc] Intermittent database
>> transaction issues, affecting the tempest gate
>>
>> Hi everyone,
>>
>> we have an open bug (thanks Igor for the report) on DB transaction issues:
>> https://bugs.launchpad.net/networking-sfc/+bug/1630503
>>
>> The thing is, I am seeing  quite a few tempest gate failures that follow
>> the same pattern: at some point in the test suite, the service gets
>> warnings/errors from the DB layer (reentrant call, closed transaction,
>> nested rollback, …), and all following tests fail.
>>
>> This affects both master and stable/newton branches (not many changes for
>> now in the DB parts between these branches)
>>
>> Some examples:
>> * https://review.openstack.org/#/c/400396/ failed with console log
>>
>> http://logs.openstack.org/96/400396/2/check/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/c27920b/console.html#_2016-12-16_12_44_47_564544
>> and service log
>>
>> http://logs.openstack.org/96/400396/2/check/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/c27920b/logs/screen-q-svc.txt.gz?level=WARNING#_2016-12-16_12_44_32_301
>> * https://review.openstack.org/#/c/405391/ failed,
>>
>> http://logs.openstack.org/91/405391/2/check/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/7e2b1de/console.html.gz#_2016-12-16_13_05_17_384323
>> and
>> http://logs.openstack.org/91/405391/2/check/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/7e2b1de/logs/screen-q-svc.txt.gz?level=WARNING#_2016-12-16_13_04_11_840
>> * another on master branch: https://review.openstack.org/#/c/411194/
>> with
>> http://logs.openstack.org/94/411194/1/gate/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/90633de/console.html.gz#_2016-12-15_22_36_15_216260
>> and
>> http://logs.openstack.org/94/411194/1/gate/gate-tempest-dsvm-networking-sfc-ubuntu-xenial/90633de/logs/screen-q-svc.txt.gz?level=WARNING#_2016-12-15_22_35_53_310
>>
>> I took a look at the errors, but only found old-and-apparently-fixed
>> pymysql bugs, and suggestions like:
>> *
>> http://docs.sqlalchemy.org/en/latest/faq/sessions.html#this-session-s-transaction-has-been-rolled-back-due-to-a-previous-exception-during-flush-or-similar
>> *  https://review.openstack.org/#/c/230481/
>> Not really my forte, so if someone could take a look at these logs and fix
>> the problem, it would be great! Especially with the upcoming multinode
>> tempest gate
>>
>> Thanks,
>> --
>> Bernard Cafarelli
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [ironic] User survey question

2017-01-05 Thread Galyna Zholtkevych
Hello,

I would like to use this upcoming opportunity to decide on things related
to one of the features that I am implementing.

To prevent the lost-update problem, the user will, in upcoming Ironic API
versions, have the ability to use an entity tag (ETag) for each resource.
Thus, in order to send a conditional request, the client has to have at
least some representation of the resource on its side.

So there is a question of how best to implement storing the resource
representation (with its ETag) in the ironic CLI (see
https://review.openstack.org/#/c/381991/).

The proposals are as follows (I will specify some in the spec):

- During an update, send a GET first and then update with the current
representation (decreases performance and changes the general behaviour,
but hides the behaviour from the user)

- Force the user to do ``ironic node-show `` first; afterwards they do
the update, putting the entity tag in manually

- Or the user may be advised to do something like

``node=$(ironic node-show )``

and then send something like

``ironic node-update $node``, and the resource will call the manager and
update itself (going through the resource is similar to the way
python-novaclient handles some requests).

In these last two cases the user is again forced to take additional
actions to perform a simple command. It needs to be discussed whether
changing the behaviour like that is a good way to go.


- Cache resource representation at client side using e.g. requests-cache

- Cache resource representation at ironic api middleware (or contribute
caching to pecan to make it available for other projects)

There are many conflicting opinions on this question, so we need to decide
which direction to take and what the most reasonable, standard, and logical
approach is.
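To make the lost-update problem concrete, here is a minimal sketch of
ETag-guarded updates. The ``NodeStore`` class and its method names are
invented for illustration; they are not the real Ironic API or client:

```python
import hashlib
import json


class NodeStore:
    """Toy in-memory store illustrating ETag-based conditional updates."""

    def __init__(self):
        self._nodes = {}

    @staticmethod
    def _etag(node):
        # Strong ETag derived from the canonical JSON representation.
        return hashlib.sha1(
            json.dumps(node, sort_keys=True).encode()).hexdigest()

    def create(self, uuid, **attrs):
        self._nodes[uuid] = dict(attrs)
        return self._etag(self._nodes[uuid])

    def show(self, uuid):
        # Equivalent of ``ironic node-show``: returns the representation
        # plus the entity tag needed for a later conditional update.
        node = self._nodes[uuid]
        return dict(node), self._etag(node)

    def update(self, uuid, attrs, if_match):
        # Conditional update: rejected when the client-side tag is stale,
        # which is exactly the lost-update protection being discussed.
        node = self._nodes[uuid]
        if if_match != self._etag(node):
            raise ValueError("412 Precondition Failed: stale ETag")
        node.update(attrs)
        return self._etag(node)
```

With this model, option one above corresponds to the client calling
``show()`` right before ``update()``, while the other options amount to the
user fetching and supplying the tag themselves.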

Thank you for comments in advance.



Best regards,
Galyna Zholtkevych
Mirantis Inc

IRC nickname:galyna
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] quick reminder on review policy

2017-01-05 Thread Emilien Macchi
On Wed, Jan 4, 2017 at 8:57 AM, John Trowbridge  wrote:
>
>
> On 01/03/2017 07:30 PM, Emilien Macchi wrote:
>> I've noticed some TripleO core reviewers self-approving patches without
>> respecting our review policy, especially in tripleo-quickstart.
>>
>
> This is slightly misleading. To me, self-approving is +2/+A on your own
> patch.
>
> What has been going in tripleo-quickstart is different though. We have
> allowed cores to +A a patch from another core with only a single +2.
> That is against the policies laid out in tripleo-specs[1,2]. However,
> following those policies will effectively make it impossible for cores
> of tripleo-quickstart to get their own work merged in anything
> approaching a reasonable amount of time.
>
> This is because there are currently only 3 cores reviewing
> tripleo-quickstart with any regularity. So the policies below mean that
> any patch submitted by a core must be reviewed by every other core. I
> think it has actually been a full month since we even had all 3 cores
> working at the same time due to holidays and PTO (currently we only have 2).
>
> If we want to apply the policies below to quickstart, I get it... they
> are after all the agreed on policies. I think this puts moving CI to
> quickstart this cycle at a very high risk to complete though, which also
> means getting container CI is also at risk.

I'm "ok" with the current state, as long as we work together to scale out
the number of contributors and reviewers in oooq.
If we want this project to be the reference way to deploy TripleO in CI &
dev envs, we need more adoption & reviewers.

We could probably organize some deep-dive sessions, hold regular meetings,
and send notes over the ML. That will, I think, improve communication and
involvement from other TripleO folks.

> [1]
> http://specs.openstack.org/openstack/tripleo-specs/specs/policy/expedited-approvals.html#single-2-approvals
> [2]
> http://specs.openstack.org/openstack/tripleo-specs/specs/policy/expedited-approvals.html#self-approval
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ci] TripleO-Quickstart Transition to TripleO-CI Update and Invite:

2017-01-05 Thread Emilien Macchi
On Wed, Jan 4, 2017 at 11:22 AM, Attila Darazs  wrote:
> On 01/04/2017 10:34 AM, Steven Hardy wrote:
>>
>> Hi Harry,
>>
>> On Tue, Jan 03, 2017 at 04:04:51PM -0500, Harry Rybacki wrote:
>>>
>>> Greetings All,
>>>
>>> Folks have been diligently working on the blueprint[1] to prepare
>>> TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-Extras[3] for
>>> their transition into TripleO-CI. Presently, our aim is to begin the
>>> actual transition to OOOQ on 4-Feb-2017. We are tracking our work on
>>> the RDO-Infra Trello board[4] and holding public discussion of key
>>> blockers on the team’s scrum etherpad[5].
>>
>>
>> Thanks for the update - can you please describe what "transition into
>> TripleO-CI" means?
>
>
> Hello Steve,
>
> This means we're trying to run all the gate jobs with Quickstart and make
> sure we have the same features enabled and the same results for each
> existing gate job.
>
>> I'm happy to see this work proceeding, but we have to be mindful that the
>> end of the development cycle (around the time you're proposing) is always
>> a crazy-busy time where folks are trying to land features and fixes.
>>
>> So, we absolutely must avoid any CI outages around this time, thus I get
>> nervous talking about major CI transitions around the Release-candidate
>> weeks ;)
>>
>> https://releases.openstack.org/ocata/schedule.html
>>
>> If we're talking about getting the jobs ready, then switching over to
>> primarily oooq jobs in early Pike, that's great, but please let's ensure
>> we don't make any disruptive changes before the end of this (very short
>> and really busy) cycle.
>
>
> As I see it, early Pike is only 2 weeks away from our planned switch, so it
> might indeed be wiser to delay. The end-of-cycle stability might even be
> useful for us, letting us run some new jobs in parallel for a while if we
> have enough resources.

Yes. The transition won't happen before we release Ocata GA, so not
before March 10th.

Enjoy this time to stabilize things, run experimental jobs, and do deep
testing :-)

>>> We are hosting weekly transition update meetings (1600-1700 UTC) and
>>> would like to invite folks to participate. Specifically, we are
>>> looking for at least one stakeholder in the existing TripleO-CI to
>>> join us as we prepare to migrate OOOQ. Attend and map out job/feature
>>> coverage to identify any holes so we can begin plugging them. Please
>>> reply off-list or reach out to me (hrybacki) on IRC to be added to the
>>> transition meeting calendar invite.
>>
>>
>> Why can't we discuss this in the weekly TripleO IRC meeting?
>>
>> I think folks would be fine with having a standing item where we discuss
>> this transition (there is already a CI item, but I've rarely seen this
>> topic raised there).
>
>
> I agree that we should have a standing item about this in the TripleO
> meeting, however this transition meeting usually takes an hour a week in
> itself, so we cannot really fit it into the TripleO meeting.
>
> Also, the reason we ask for somebody well versed in TripleO CI to join us
> is that we might get answers to questions we didn't even know we had. There
> are probably shortcuts and known workarounds in the upstream system,
> relevant to what we're trying to achieve, that we're not familiar with.
>
> Also the discussion is focused on Quickstart (for example how to develop
> some roles that unify different workloads like OVB and nodepool), so it
> wouldn't be relevant for the TripleO meeting entirely.
>
> Thus the request still stands, I think we could get a big help with somebody
> familiar with the CI system. This should be a once a week meeting for only
> the following 3-6 weeks.
>
> We will give a short status update on the current state of the transition
> in the TripleO meetings from now on, though.

My hope when I introduced TripleO squads was to have this kind of effort
discussed in squad meetings.

The TripleO Upgrade squad already works together, and they have started to
hold meetings. It's probably a good opportunity for the TripleO CI squad to
start as well.
I would propose that we start thinking about a (weekly?) meeting dedicated
to TripleO CI topics; we would probably do it over IRC
(using one of the meeting channels if possible).
Another alternative would be BJ, as long as we keep the meeting open
to anyone, take notes afterward, and communicate them well to the
community.

> Thank you for your thoughts,
> Attila
>
>
>> https://wiki.openstack.org/wiki/Meetings/TripleO
>>
>> Thanks!
>>
>> Steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [glance][tempest][api] community images, tempest tests, and API stability

2017-01-05 Thread Brian Rosmaita
To anyone interested in this discussion: I put it on the agenda for
today's API-WG meeting (16:00 UTC):

https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda


On 12/25/16 12:04 PM, GHANSHYAM MANN wrote:
> Thanks, Brian, for bringing this up; we have been discussing the same thing
> over the last week on the QA channel and on this patch[1], but I completely
> agree with Matthew's opinion here. There is no doubt that this change
> (4-valued) is much better and clearer than the old one. It gives a
> well-defined and clear way of specifying image visibility by/for operators
> and users.
> 
> The upgrade procedure defined in the referenced ML threads/specs looks fine
> for redefining the visibility of images, with or without members, to
> shared/private. Operator feedback/acceptance makes this change acceptable.
> 
> But for operators/users who create images with visibility *explicitly* set
> to "private", this change is going to break them:
> 
> - Images with members already added would no longer work,
> as the Tempest tests show [2].
> 
> - They cannot add members the way they used to.
> 
> The first point is something tested by Tempest, and it is a valid test as
> per the current behaviour of the API.
> 
> There may be a lot of operators doing the same thing who are going to be
> broken after this. We really need to think about this change from an API
> backward-incompatibility point of view: upgrading a cloud to the new
> visibility definitions is fine, but it breaks an existing usage pattern
> (images explicitly created with 'private' visibility and with members
> added).
> 
> After looking into the Glance API versioning mechanism, it seems /v2 points
> to the latest version, irrespective of whether that version includes
> backward compatible or incompatible changes. I mean users cannot pin the
> API to an old version (say they want only v2.2). How has Glance introduced
> backward incompatible changes before?
> 
> I am not sure why it is not done the microversion way suggested by the
> api-wg (like Nova). I know Glance API versioning is much older, but when
> better improvements come along, like this community images change, I feel
> they should be done in a backward compatible way with microversions.
> 
> Tempest testing the old behaviour is perfectly valid here, and we should
> not change that in order to introduce backward incompatible changes which
> are going to break clouds.
> 
> .. [1] https://review.openstack.org/#/c/412731/
> 
> .. [2]
> https://github.com/openstack/tempest/blob/master/tempest/api/image/v2/test_images.py#L339
> 
> ​-gmann
> 
> On Fri, Dec 23, 2016 at 5:51 AM, Matthew Treinish 
> wrote:
> 
>> On Thu, Dec 22, 2016 at 02:57:20PM -0500, Brian Rosmaita wrote:
>>> Something has come up with a tempest test for Glance and the community
>>> images implementation, and I think it could use some mailing list
>>> discussion, as everyone might not be aware of the patch where the
>>> discussion is happening now [1].
>>>
>>> First, the Glance background, to get everyone on the same page:
>>>
>>> As part of implementing community images [2], the 'visibility' field of
>>> an image is going from being 2-valued to being 4-valued.  Up to now, the
>>> only values have been 'private' and 'public', which meant that shared
>>> images were 'private', which was inaccurate and confusing (and bugs were
>>> filed with Glance about shared images not having visibility 'shared'
>>> [3a,b]).
>>>
>>> With the new visibility enum, the Images API v2 will behave as follows:
>>>
>>> * An image with visibility == 'private' is not shared, and is not
>>> shareable until its visibility is changed to 'shared'.
>>>
>>> * An image must have visibility == 'shared' in order to do member
>>> operations or be accessible to image members.
>>>
>>> * The default visibility of a freshly-created image is 'shared'.  This
>>> may seem weird, but a freshly-created image has no members, so it's
>>> effectively private, behaving exactly as a freshly-created image does,
>>> pre-Ocata.  It's also ready to immediately accept a member-create call,
>>> as freshly-created images are pre-Ocata.  So from a workflow
>>> perspective, this change is backward compatible.
>>>
>>> * After much discussion [4], including discussion with operators and an
>>> operator's survey [5], we decided that the correct migration of
>>> 'visibility' values for existing images when a cloud is updated would
>>> be: public images stay 'public', private images with members become
>>> 'shared', and private images without members stay 'private'.  (Thus, if
>>> you have a 'private' image, you'll have to change it to 'shared' before
>>> you can add members.  On the other hand, now it's *really* private.)
>>>
>>> * You can specify a visibility at the time of image-creation, as you can
>>> now.  But if you specify 'private', what you get is *really* private.
>>> This either introduces a minor backward incompatibility, or it fixes a
>>> bug, depending on how you look at it.  The key thing is, if you *don't*
>>> specify a visibility, an image with the default visibility will behave
>>> exactly as 
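For clarity, the migration rule described above can be sketched as a small
function. This is only an illustration of the stated rule, not Glance's
actual migration code:

```python
def migrate_visibility(visibility, has_members):
    """Ocata migration of the 2-valued enum to the 4-valued one (sketch).

    Rule from the discussion: public stays public, private images with
    members become 'shared', private images without members stay private.
    """
    if visibility == 'public':
        return 'public'
    if visibility == 'private' and has_members:
        return 'shared'
    return 'private'


def can_add_member(visibility):
    # Post-migration, member operations require 'shared' visibility.
    return visibility == 'shared'
```

This also makes the incompatibility visible: an image explicitly created as
'private' with members migrates to 'shared', while a freshly created
'private' image can no longer accept members directly.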

Re: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-05 Thread Arkady.Kanevsky
I have a concern about relying on the undercloud for overcloud Swift.
The undercloud is not HA (yet), so it may not be operational when a disk
fails or a Swift overcloud node is added/deleted.

-Original Message-
From: Christian Schwede [mailto:cschw...@redhat.com] 
Sent: Thursday, January 05, 2017 6:14 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing 
nodes in TripleO deployments

Hello everyone,

there was an earlier discussion on $subject last year [1] regarding a bug
when upscaling or replacing nodes in TripleO [2].

Briefly summarized: Swift rings are built on each node separately, and
adding or replacing nodes (or disks) breaks the rings because they are no
longer consistent across the nodes. What's needed are the previous ring
builder files on each node before changing the rings.

My former idea in [1] was to build the rings in advance on the undercloud,
and also to use introspection data to gather a set of disks on each node for
the rings.

However, this changes the current way of deploying significantly, and also
requires more work in TripleO and Mistral (for example, to trigger a ring
build on the undercloud after the nodes have been started, but before the
deployment triggers the Puppet run).

I prefer smaller steps to keep everything stable for now, so I changed my
patches quite a bit. This is my updated proposal:

1. Two temporary undercloud Swift URLs (one PUT, one GET) will be computed
before Mistral starts the deployments. A new Mistral action to create such
URLs is required for this [3].
2. Each overcloud node will try to fetch rings from the undercloud Swift
deployment before updating its set of rings locally, using the temporary GET
URL. This guarantees that each node uses the same source set of builder
files. This happens in step 2 [4].
3. puppet-swift runs as it does today, updating the rings if required.
4. Finally, at the end of the deployment (in step 5), the nodes will upload
their modified rings to the undercloud using the temporary PUT URLs.
swift-recon will run before this, ensuring that all rings across all nodes
are consistent.
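For reference, temporary Swift URLs like the ones in steps 1, 2 and 4 are
signed with the scheme used by Swift's standard tempurl middleware. Below is
a hedged sketch of that signing; the account/container/object path and the
key are made up for illustration:

```python
import hmac
from hashlib import sha1
from time import time


def make_temp_url(method, path, key, seconds):
    """Build a Swift temp URL query string (tempurl middleware scheme).

    method  -- 'GET' or 'PUT'
    path    -- e.g. '/v1/AUTH_undercloud/overcloud/swift-rings.tar.gz'
    key     -- the account's X-Account-Meta-Temp-URL-Key secret
    seconds -- validity window for the signed URL
    """
    expires = int(time()) + seconds
    # The HMAC-SHA1 body is exactly: METHOD \n expires \n path
    body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)
```

One URL would be signed for GET (fetching the current builder files) and one
for PUT (uploading the modified rings), so overcloud nodes never need the
undercloud's Swift credentials themselves.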

The two required patches [3][4] are not overly complex IMO, but they solve
the problem of adding or replacing nodes without changing the current
workflow significantly. It should even be easy to backport them if needed.

I'll continue working on an improved way of deploying Swift rings (using
introspection data), but with this approach it could even be done using
today's workflow, feeding data into puppet-swift (probably with some updates
to puppet-swift/tripleo-heat-templates to allow support for regions, zones,
different disk layouts and the like). However, all of this could be built on
top of these two patches.

I'm curious about your thoughts and welcome any feedback or reviews!

Thanks,

-- Christian


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720.html
[2] https://bugs.launchpad.net/tripleo/+bug/1609421
[3] https://review.openstack.org/#/c/413229/
[4] https://review.openstack.org/#/c/414460/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALU] Re: [ALU] Re: [ALU] [vitrage] how to useplaceholder vertex

2017-01-05 Thread Yujun Zhang
A follow up question on relationships.

On Thu, Jan 5, 2017 at 9:59 PM Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi Yujun,
>
> Lets see.
>
> 1. There is no need for the transformer to handle this duplication. What
> will happen at the moment is that we will receive twice every neighbor, and
> it is fine by us, because it is a quite small datasource, and 99.999% of
> the time it won't be changed.
>

It's fine for neighbors, because a vertex can be identified by its id and
there won't be duplication.

But what about relationships: how do we model *redundant* links between two
entities? There seems to be no id for relationships.
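One possible workaround, purely as a sketch (Vitrage may well solve this
differently), is to derive a synthetic key for each edge from its endpoints
and type, with an explicit index to keep genuinely redundant parallel links
apart; the function names here are invented:

```python
def relationship_key(source_id, target_id, relation_type, index=0):
    """Derive a stable identifier for an edge that has no id of its own.

    `index` distinguishes redundant (parallel) links between the same
    pair of entities -- an assumption, since the static datasource format
    carries no native edge id.
    """
    return '{}:{}:{}:{}'.format(source_id, target_id, relation_type, index)


def dedup_relationships(relationships):
    """Drop duplicate edges reported by both endpoints (e.g. s1->s2
    appearing in both s1's and s2's entity events)."""
    seen = set()
    unique = []
    for rel in relationships:
        key = relationship_key(rel['source'], rel['target'],
                               rel['relation_type'])
        if key not in seen:
            seen.add(key)
            unique.append(rel)
    return unique
```

With such a key, the transformer (or graph utils) could deduplicate the
doubly-reported edges while still allowing deliberate parallel links via the
index.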

> 2. It should be 2 events. We want to make it as simple as possible, and at
> the same time as flexible as possible. So you should create 2 events and
> each one will have the neighbor connection.
>
> Hope it answers everything.
>
> BR,
> Alexey
>
>
>
> From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> Sent: Thursday, January 05, 2017 2:32 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [ALU] Re: [openstack-dev] [ALU] Re: [ALU] [vitrage] how to
> useplaceholder vertex
>
> Alexey,
>
> I have to dig this old thread to clarify some issues I met during static
> datasource implementation. Hope that you can still recall the context :-)
>
> I'll try to simplify this question with an example. The following
> configuration are snippet from static datasource
>
> 1. suppose we have three switches linked in a ring. What would be the
> expected entity events emit by the driver?
>
> In my proposed driver, there will be three entities. And each relationship
> will appear both in source entity and target entity, e.g. s1->s2 will be
> included in both s1 and s2. Should the transformer handle this duplication
> or the graph utils will?
> entities:
>   - config_id: s1
>     type: switch
>     name: switch-1
>     id: 12345
>     state: available
>   - config_id: s2
>     type: switch
>     name: switch-2
>     id: 23456
>     state: available
>   - config_id: s3
>     type: switch
>     name: switch-3
>     id: 34567
>     state: available
> relationships:
>   - source: s1
>     target: s2
>     relation_type: linked
>   - source: s2
>     target: s3
>     relation_type: linked
>   - source: s3
>     target: s1
>     relation_type: linked
> 2. suppose we created a link between a switch and a nova.host. What will be
> the expected entity events? Should it be one entity event for s1 with h1
> embedded as a neighbor? Or two entity events, s1 and h1?
> entities:
>   - config_id: s1
>     type: switch
>     name: switch-1
>     id: 12345
>     state: available
>   - config_id: h1
>     type: nova.host
>     id: 1
> relationships:
>   - source: s1
>     target: h1
>     relation_type: attached
>
> On Wed, Dec 14, 2016 at 11:54 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
> 1. That is correct.
>
> 2. That is not quite correct.
> In static we only define the main properties of each entity (type, id,
> category), and thus it is fine that for each main entity we create its
> neighbors and connect them. There is no need for any distinction because
> of that.
>
>
> From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> Sent: Wednesday, December 14, 2016 5:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [ALU] Re: [openstack-dev] [ALU] [vitrage] how to use placeholder
> vertex
>
> Hi, Alexey,
>
> Thanks for the detail example. It explains the existing mechanism of
> vertex creation well.
>
> So it looks like each resource type will have a primary datasource, e.g.
> nova.host for nova.host, nova.intance for nova.instance, that holds full
> details. Is that correct?
>
> Not sure that you remember the long discussion in static driver review[1]
> or not. At last, we agreed on a unified entity definition for both
> `nova.host` and `switch`, no extra key to indicate it is "external" (should
> create a placeholder).
>
> If I understand it correctly, no placeholder will be created in this case.
> Because we can not distinguish them from the static configuration. And the
> properties of `nova.host` resource shall to be merged from `static` and
> nova.host` datasources. Is that so?
>
> [1]: https://review.openstack.org/#/c/405354/
>
> On Wed, Dec 14, 2016 at 5:40 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
> Hi Yujun,
>
> This is a good question, so let me explain how it works.
> Let's say we are supposed to get 2 entities from nova: a nova.host called
> host1 and a nova.instance called vm1, and vm1 is supposed to be connected
> to host1.
> The nova.host driver and nova.instance driver work simultaneously, and
> thus we don't know the order in which those events will arrive.
> We have 2 use cases:
> 1.   Host1 event arrives before vm1.
> In this case the processor will call the transformer of nova.host and will
> create a vertex for host1 in the graph with the full details of host1.
> Then, 

Re: [openstack-dev] [tripleo] TripleO-Quickstart Transition to TripleO-CI Update and Invite:

2017-01-05 Thread James Slagle
On Tue, Jan 3, 2017 at 4:04 PM, Harry Rybacki  wrote:
> Greetings All,
>
> Folks have been diligently working on the blueprint[1] to prepare
> TripleO-Quickstart (OOOQ)[2] and TripleO-Quickstart-Extras[3] for
> their transition into TripleO-CI. Presently, our aim is to begin the
> actual transition to OOOQ on 4-Feb-2017. We are tracking our work on
> the RDO-Infra Trello board[4] and holding public discussion of key
> blockers on the team’s scrum etherpad[5].
>
> We are hosting weekly transition update meetings (1600-1700 UTC) and
> would like to invite folks to participate. Specifically, we are
> looking for at least one stakeholder in the existing TripleO-CI to
> join us as we prepare to migrate OOOQ. Attend and map out job/feature
> coverage to identify any holes so we can begin plugging them. Please
> reply off-list or reach out to me (hrybacki) on IRC to be added to the
> transition meeting calendar invite.

Is there still an ongoing effort to move OVB based jobs to be 3rd party CI?

Based on the previous email threads about 3rd party CI[1] and the
discussions at summit, I was under the impression that we'd move to
3rd party CI first, as there seemed to be more urgency around that.
Then, we'd transition to quickstart. It sounds like we're going to
transition to quickstart first though.

That is fine, but as others have mentioned, I'd like to see the rate
of change controlled in the CI system, especially around release time.
I'm wondering if there are still plans to move to 3rd party and how
those plans might line up with this proposed schedule.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-October/105248.html

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/glance failed

2017-01-05 Thread Flavio Percoco

On 04/01/17 13:31 -0500, Ian Cordasco wrote:

-Original Message-
From: Tony Breeds 
Reply: OpenStack Development Mailing List (not for usage questions)
, OpenStack Development Mailing
List (not for usage questions) 
Date: December 14, 2016 at 00:18:38
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [Release-job-failures] Release of
openstack/glance failed


On Mon, Dec 12, 2016 at 09:46:54AM -0600, Ian Cordasco wrote:
>
>
> -Original Message-
> From: Andreas Jaeger
> Reply: OpenStack Development Mailing List (not for usage questions)
> Date: December 12, 2016 at 01:39:17
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Release-job-failures] Release of 
openstack/glance
failed
>
> > On 2016-12-12 08:34, Andreas Jaeger wrote:
> > > On 2016-12-12 06:20, Tony Breeds wrote:
> > >> On Mon, Dec 12, 2016 at 04:44:18AM +, jenk...@openstack.org wrote:
> > >>> Build failed.
> > >>>
> > >>> - glance-docs-ubuntu-xenial 
http://logs.openstack.org/38/38f199507aff8bfcaf81ad9ea58ea326224faf5f/release/glance-docs-ubuntu-xenial/de7d73e/
> > : FAILURE in 1m 44s
> > >>
> > >> This boils down to [1] which is a known problem with newer cryptography 
(and
> > >> the interaction with openssl). What I don't understand is how we got 
there
> > >> with constratints working[2]. Perhaps it's the openssl on the release 
sigining
> > >> node is "newer" than general nodepool nodes?
> > >
> > > glance does not use constraints in venv environment.
> > >
> > > It can be used since a few months. I'll send a change for master,
> >
> > I expect this needs backporting to stable branches - stable or glance
> > team, please review and backport yourself:
> >
> > https://review.openstack.org/409642
>
>
> Thank you Andreas!

https://review.openstack.org/#/c/410536 is the backport but it's still failing
with the same
issue with cryptography and openssl[1] :(

Yours Tony.
[1] 
http://logs.openstack.org/36/410536/1/check/gate-glance-releasenotes/46c2615/console.html#_2016-12-14_05_13_53_002878


Hi Tony,

So if I understand correctly, presently:

- There is no 11.0.3 tag for glance, which is what we planned to use to
tag liberty-eol
- There is no 11.0.3 tarball for glance
- There is no good way to generate an 11.0.3 tarball because of the
cryptography & openssl conflict
- There is also no good way to generate a liberty-eol tarball because
of that issue.

I believe you asked in another thread (that I cannot locate) if it was
acceptable to the Glance team to not have an 11.0.3 tarball on
openstack.org. With Brian on vacation, I'm hoping the other stable
maintenance cores will chime in. I, for one, (as Release CPL and a
Stable branch core reviewer) don't think the tarballs are critical for
Glance. I'm fairly certain that most of the deployment projects use
the Git repository directly or Distro provided packages (which are
built from git tags). With that in mind, I don't think this should
block Glance being EOL'd.

I'm sorry for the delay in my reply. I took a little over a week of
time off myself.



Yeah, the above sounds reasonable to me, fwiw.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Not running for Oslo PTL for Pike

2017-01-05 Thread Ken Giusti
On Tue, Jan 3, 2017 at 3:03 PM, Joshua Harlow  wrote:
> Hi Oslo folks (and others),
>
> Happy new year!
>
> After serving for about a year I think it's a good opportunity for myself to
> let another qualified individual run for Oslo PTL (seems common to only go
> for two terms and hand-off to another).
>
> So I just wanted to let folks know that I will be doing this, so that we can
> grow others in the community that wish to try out being a PTL.
>
> I don't plan on leaving the Oslo community btw, just want to make sure we
> spread the knowledge (and the fun!) of being a PTL.
>
> Hopefully I've been a decent PTL (with  room to improve) during this
> time :-)
>

Dude - you've been a most excellent PTL!

Thanks for all the help (and laughs :) you've provided in the past year.

> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Multi-Regions Support

2017-01-05 Thread Ronan-Alexandre Cherrueau
Hello,


TL;DR: We made a multi-region deployment with Kolla. It requires patching
the code a little bit, and you can find the diff on our
GitHub[1]. This patch is just a first attempt to support multiple regions
in Kolla, and it raises questions. Some modifications are not done in
an idiomatic way, and we do not expect this to be merged into Kolla. The
remainder of this mail explains our patch and states our questions.


At Inria/Discovery[2], we evaluate OpenStack at scale for the
Performance Working Group. So far, we have focused on single-region
OpenStack deployments with hundreds of computes, and we always use
Kolla for our deployments. Over the last few days, we tried to achieve
a multi-region OpenStack deployment with Kolla. We want to share with
you our current deployment workflow and the patches we had to apply to
Kolla to support multiple regions, and also ask whether we are doing
things correctly.

First of all, our multi-regions deployment follows the one described
by the OpenStack documentation[3]. Concretely, the deployment
considers /one/ Administrative Region (AR) that contains Keystone and
Horizon. This is a Kolla-based deployment, so Keystone is hidden
behind an HAProxy, and has MariaDB and memcached as backend. At the
same time, /n/ OpenStack Regions (OSR1, ..., OSRn) contain a full
OpenStack, except Keystone. We got something as follows at the end of
the deployment:


Admin Region (AR):
- control:
  * Horizon
  * HAProxy
  * Keystone
  * MariaDB
  * memcached

OpenStack Region x (OSRx):
- control:
  * HAProxy
  * nova-api/conductor/scheduler
  * neutron-server/l3/dhcp/...
  * glance-api/registry
  * MariaDB
  * RabbitMQ

- compute1:
  * nova-compute
  * neutron-agent

- compute2: ...


We do the deployment by running Kolla n+1 times. The first run deploys
the Administrative Region (AR) and the other runs deploy the OpenStack
Regions (OSRs). For each run, we set the `openstack_region_name'
variable to the name of the current region.

In the context of multiple regions, Keystone (in the AR) should be
available to all OSRs. This means there are as many Keystone
endpoints as regions. For instance, if we consider two OSRs, listing
the endpoints at the end of the AR deployment looks like
this:


 $ openstack endpoint list

 | Region | Serv Name | Serv Type | Interface | URL  |
 |--------+-----------+-----------+-----------+------------------------------|
 | AR | keystone  | identity  | public| http://10.24.63.248:5000/v3  |
 | AR | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
 | AR | keystone  | identity  | admin | http://10.24.63.248:35357/v3 |
 | OSR1   | keystone  | identity  | public| http://10.24.63.248:5000/v3  |
 | OSR1   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
 | OSR1   | keystone  | identity  | admin | http://10.24.63.248:35357/v3 |
 | OSR2   | keystone  | identity  | public| http://10.24.63.248:5000/v3  |
 | OSR2   | keystone  | identity  | internal  | http://10.24.63.248:5000/v3  |
 | OSR2   | keystone  | identity  | admin | http://10.24.63.248:35357/v3 |


This requires patching the `keystone/tasks/register.yml' play[4] to
re-execute the `Creating admin project, user, role, service, and
endpoint' task for every region we consider. An example of such a patch
is given on our GitHub[5]. In this example, the `openstack_regions'
variable is a list that contains the names of all regions (see [6]). As
a drawback, the patch requires knowing all OSRs in advance. A better
implementation would execute the `Creating admin project, user, role,
service, and endpoint' task every time a new OSR is about to be
deployed, but this requires moving the task somewhere else in the
Kolla workflow and we have no idea where this should be.
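
To illustrate, the loop the patched task performs amounts to registering the
same endpoint URLs once per region name. The following is a sketch with a
stubbed creation callback, not the actual Ansible task:

```python
# Sketch only: `create_endpoint` stands in for the real Keystone call.
def register_keystone_endpoints(create_endpoint, regions, urls_by_interface):
    """Register identical Keystone endpoint URLs under every region name."""
    for region in regions:
        for interface, url in urls_by_interface.items():
            create_endpoint(region=region, service='keystone',
                            interface=interface, url=url)

calls = []
register_keystone_endpoints(
    lambda **kw: calls.append(kw),
    regions=['AR', 'OSR1', 'OSR2'],
    urls_by_interface={
        'public': 'http://10.24.63.248:5000/v3',
        'internal': 'http://10.24.63.248:5000/v3',
        'admin': 'http://10.24.63.248:35357/v3',
    })
# 3 regions x 3 interfaces -> 9 endpoint registrations, as in the table above
```

This mirrors the endpoint listing above: the same three URLs end up
registered under AR, OSR1 and OSR2.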

In the AR, we also have to change the Horizon configuration file to
handle multi-regions[7]. The modification can be done easily and
idiomatically by setting the `node_custom_config' variable to the
`multi-regions' directory[8], benefiting from Kolla's config merging
system.

Also, deploying OSRs requires patching kolla-toolbox, as it seems not
to be region-aware. In particular, we patch the `kolla_keystone_service.py'
module[9], which is responsible for contacting Keystone and creating a
new endpoint when a new OpenStack service is registered.


for _endpoint in cloud.keystone_client.endpoints.list():
    if _endpoint.service_id == service.id and \
            _endpoint.interface == interface:
        endpoint = _endpoint
        if endpoint.url != url:
            changed = True
            cloud.keystone_client.endpoints.update(
                endpoint, url=url)
        break
else:
    changed = True
    cloud.keystone_client.endpoints.create(
        service=service.id,
        url=url,
        interface=interface,
        region=endpoint_region)


At some point, this module /creates/ or /updates/ a service endpoint. It
first tests if the service
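
For illustration, the loop above matches endpoints by service and interface
only; a region-aware variant (our assumption of what a fix would need, with
stand-in objects instead of the real keystoneclient) would also compare the
endpoint's region:

```python
# Stub standing in for keystoneclient endpoint records (sketch only).
class Endpoint:
    def __init__(self, service_id, interface, region, url):
        self.service_id = service_id
        self.interface = interface
        self.region = region
        self.url = url

def find_endpoint(endpoints, service_id, interface, region):
    """Return the endpoint matching service, interface *and* region."""
    for ep in endpoints:
        if (ep.service_id == service_id
                and ep.interface == interface
                and ep.region == region):
            return ep
    return None

eps = [Endpoint('svc1', 'public', 'OSR1', 'http://a'),
       Endpoint('svc1', 'public', 'OSR2', 'http://b')]
match = find_endpoint(eps, 'svc1', 'public', 'OSR2')  # picks the OSR2 entry
```

Without the region comparison, registering the same service in a second
region would overwrite the first region's endpoint instead of creating one.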

Re: [openstack-dev] 答复: RE: Re: [ALU] [Vitrage] vitrage tempest job config

2017-01-05 Thread Jeremy Stanley
On 2017-01-05 01:13:20 + (+), Yujun Zhang wrote:
> It seems the file is truncated on Github preview. It stops at
> openstack/security-specs
> 
> If you check the raw file[1], you will find entry of openstack/vitrage
> 
> [1]:
> https://raw.githubusercontent.com/openstack-infra/project-config/master/zuul/layout.yaml
[...]

Add that to the many other reasons we don't actually recommend those
(unofficial) read-only GitHub repository mirrors. Instead we
maintain a farm of CGit servers at git.openstack.org, and
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml
renders the full (currently 18069 lines of) content you're looking
for.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALU] Re: [ALU] Re: [ALU] [vitrage] how to useplaceholder vertex

2017-01-05 Thread Weyl, Alexey (Nokia - IL)
Hi Yujun,

Let's see.

1. There is no need for the transformer to handle this duplication. What will 
happen at the moment is that we will receive every neighbor twice, and that is 
fine by us, because it is quite a small datasource and 99.999% of the time it 
won't change.

2. It should be 2 events. We want to make it as simple as possible and, at the 
same time, as flexible as possible. So you should create 2 events, and each one 
will have the neighbor connection.

Hope this answers everything.

BR,
Alexey



From: Yujun Zhang [mailto:zhangyujun+...@gmail.com] 
Sent: Thursday, January 05, 2017 2:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [ALU] Re: [openstack-dev] [ALU] Re: [ALU] [vitrage] how to 
useplaceholder vertex

Alexey,

I have to dig this old thread to clarify some issues I met during static 
datasource implementation. Hope that you can still recall the context :-)

I'll try to simplify this question with an example. The following 
configuration snippets are from the static datasource.

1. suppose we have three switches linked in a ring. What would be the expected 
entity events emitted by the driver?

In my proposed driver, there will be three entities, and each relationship will 
appear in both the source entity and the target entity, e.g. s1->s2 will be 
included in both s1 and s2. Should the transformer handle this duplication, or 
will the graph utils?
entities:
  - config_id: s1
    type: switch
    name: switch-1
    id: 12345
    state: available
  - config_id: s2
    type: switch
    name: switch-2
    id: 23456
    state: available
  - config_id: s3
    type: switch
    name: switch-3
    id: 34567
    state: available
relationships:
  - source: s1
    target: s2
    relation_type: linked
  - source: s2
    target: s3
    relation_type: linked
  - source: s3
    target: s1
    relation_type: linked
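If the graph layer treats `linked` as undirected, the duplication from
question 1 could be collapsed with a small helper like this sketch (the names
are made up and not Vitrage APIs):

```python
def dedup_relationships(rels):
    """Drop relationships already seen in the opposite direction."""
    seen = set()
    unique = []
    for rel in rels:
        # frozenset makes s1->s2 and s2->s1 hash identically
        key = (frozenset((rel['source'], rel['target'])),
               rel['relation_type'])
        if key not in seen:
            seen.add(key)
            unique.append(rel)
    return unique

ring = [
    {'source': 's1', 'target': 's2', 'relation_type': 'linked'},
    {'source': 's2', 'target': 's1', 'relation_type': 'linked'},  # reversed dup
    {'source': 's2', 'target': 's3', 'relation_type': 'linked'},
    {'source': 's3', 'target': 's1', 'relation_type': 'linked'},
]
unique = dedup_relationships(ring)  # keeps 3 of the 4 entries
```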
2. suppose we created a link between a switch and a nova.host. What will be the 
expected entity events? Should it be one entity event for s1 with h1 embedded as 
a neighbor, or two entity events, s1 and h1?
entities:
  - config_id: s1
    type: switch
    name: switch-1
    id: 12345
    state: available
  - config_id: h1
    type: nova.host
    id: 1
relationships:
  - source: s1
    target: h1
    relation_type: attached

On Wed, Dec 14, 2016 at 11:54 PM Weyl, Alexey (Nokia - IL) 
 wrote:
1. That is correct.

2. That is not quite correct.
In static we only define the main properties of each entity (type, id, 
category), and thus it is OK that for each main entity we will create its 
neighbors and connect them. There is no need for any distinction because 
of that.


From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Wednesday, December 14, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [ALU] Re: [openstack-dev] [ALU] [vitrage] how to use placeholder vertex

Hi, Alexey,

Thanks for the detail example. It explains the existing mechanism of vertex 
creation well.

So it looks like each resource type will have a primary datasource, e.g. 
nova.host for nova.host, nova.instance for nova.instance, that holds the full 
details. Is that correct?

Not sure that you remember the long discussion in static driver review[1] or 
not. At last, we agreed on a unified entity definition for both `nova.host` and 
`switch`, no extra key to indicate it is "external" (should create a 
placeholder).

If I understand it correctly, no placeholder will be created in this case, 
because we cannot distinguish them from the static configuration. And the 
properties of the `nova.host` resource shall be merged from the `static` and 
`nova.host` datasources. Is that so?

[1]: https://review.openstack.org/#/c/405354/  

On Wed, Dec 14, 2016 at 5:40 PM Weyl, Alexey (Nokia - IL) 
 wrote:
Hi Yujun,
 
This is a good question, so let me explain how it works.
Let's say we are supposed to get 2 entities from nova: a nova.host called host1 
and a nova.instance called vm1, and vm1 is supposed to be connected to host1.
The nova.host driver and nova.instance driver work simultaneously, and thus we 
don't know the order in which those events will arrive.
We have 2 use cases:
1.   Host1 event arrives before vm1.
In this case the processor will call the transformer of nova.host and will 
create a vertex for host1 in the graph with the full details of host1.
Then, when the vm1 event arrives, the processor will create the vm1 vertex in 
the graph and update the host1 vertex (the host1 details carried by the 
nova.instance event are only the basic ones: category, type, id, is_deleted, 
is_placeholder, so host1's existing properties in the graph won't actually 
change), and then it will create an edge between vm1 and host1 (only the 
nova.instance knows to which nova.host it is connected, and not vice versa).
2.   Vm1 event arrives before host1.
In 
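
The two arrival orders described here can be modelled with a toy graph; this
only illustrates the placeholder idea, as the real Vitrage processor and
transformer APIs differ:

```python
graph = {}

def process_event(vertex_id, properties, neighbor_id=None):
    """Create/update a vertex; create a placeholder for unseen neighbors."""
    v = graph.setdefault(vertex_id, {'is_placeholder': True})
    v.update(properties)
    v['is_placeholder'] = False
    if neighbor_id is not None and neighbor_id not in graph:
        graph[neighbor_id] = {'is_placeholder': True}  # placeholder vertex

# use case 2: vm1 arrives first, so host1 starts life as a placeholder
process_event('vm1', {'type': 'nova.instance'}, neighbor_id='host1')
placeholder_before = graph['host1']['is_placeholder']  # True at this point
# later, host1's own event fills in its details and clears the flag
process_event('host1', {'type': 'nova.host', 'state': 'up'})
```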

Re: [openstack-dev] [ironic] User survey question

2017-01-05 Thread Don maillist
One thought I had. And it may not be appropriate:

Do you use, or have you considered using, Ironic for non-x86-based systems
(ARM, PowerPC, etc.)?
Are you in the telecom industry? Should Ironic support ATCA-based systems
to make use of older hardware?

I am facing a potential project in the next few months where I basically
have to do both and would be interested in working with others who might do
the same.

//Don

On Wed, Jan 4, 2017 at 6:07 AM, Jim Rollenhagen 
wrote:

> Hey all,
>
> We have an opportunity to ask a question on the upcoming User Survey,
> which will launch by February 1st. We can choose the audience to direct
> the question to, choosing from those USING, TESTING, or INTERESTED
> in Ironic (or some combination of these).
>
> The hope is that the Foundation folks can get us the raw answers before
> the PTG.
>
> So, Ironicers, what question would you like to ask users, and which group
> of
> users would you like to ask?
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALU] Re: [ALU] [vitrage] how to use placeholder vertex

2017-01-05 Thread Yujun Zhang
Alexey,

I have to dig this old thread to clarify some issues I met during static
datasource implementation. Hope that you can still recall the context :-)

I'll try to simplify this question with an example. The following
configuration snippets are from the static datasource.

1. suppose we have three switches linked in a ring. What would be the
expected entity events emitted by the driver?

In my proposed driver, there will be three entities. And each relationship
will appear in both the source entity and the target entity, e.g. s1->s2 will
be included in both s1 and s2. *Should the transformer handle this duplication
or will the graph utils?*

entities:
  - config_id: s1
    type: switch
    name: switch-1
    id: 12345
    state: available
  - config_id: s2
    type: switch
    name: switch-2
    id: 23456
    state: available
  - config_id: s3
    type: switch
    name: switch-3
    id: 34567
    state: available
relationships:
  - source: s1
    target: s2
    relation_type: linked
  - source: s2
    target: s3
    relation_type: linked
  - source: s3
    target: s1
    relation_type: linked

2. suppose we created a link between a switch and a nova.host. What will be the
expected entity events? *Should it be one entity event for s1 with h1
embedded as a neighbor, or two entity events, s1 and h1?*

entities:
  - config_id: s1
    type: switch
    name: switch-1
    id: 12345
    state: available
  - config_id: h1
    type: nova.host
    id: 1
relationships:
  - source: s1
    target: h1
    relation_type: attached


On Wed, Dec 14, 2016 at 11:54 PM Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> 1. That is correct.
>
> 2. That is not quite correct.
> In static we only define the main properties of each entity (type, id,
> category), and thus it is OK that for each main entity we will create its
> neighbors and connect them. There is no need for any distinction
> because of that.
>
>
> From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> Sent: Wednesday, December 14, 2016 5:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [ALU] Re: [openstack-dev] [ALU] [vitrage] how to use placeholder
> vertex
>
> Hi, Alexey,
>
> Thanks for the detail example. It explains the existing mechanism of
> vertex creation well.
>
> So it looks like each resource type will have a primary datasource, e.g.
> nova.host for nova.host, nova.instance for nova.instance, that holds the
> full details. Is that correct?
>
> Not sure that you remember the long discussion in static driver review[1]
> or not. At last, we agreed on a unified entity definition for both
> `nova.host` and `switch`, no extra key to indicate it is "external" (should
> create a placeholder).
>
> If I understand it correctly, no placeholder will be created in this case,
> because we cannot distinguish them from the static configuration. And the
> properties of the `nova.host` resource shall be merged from the `static` and
> `nova.host` datasources. Is that so?
>
> [1]: https://review.openstack.org/#/c/405354/
>
> On Wed, Dec 14, 2016 at 5:40 PM Weyl, Alexey (Nokia - IL) <
> alexey.w...@nokia.com> wrote:
> Hi Yujun,
>
> This is a good question, and let me explain for you how it works.
> Lets say we are supposed to get 2 entities from nova, nova.host called
> host1 and nova.instance called vm1 and vm1 is supposed to be connected to
> host1.
> The nova.host driver and nova.instance driver are working simultaneously
> and thus we don’t know the order in which those events will arrive.
> We have 2 use cases:
> 1.   Host1 event arrives before vm1.
> In this case the processor will call the transformer of nova.host and will
> create a vertex for host1 in the graph with the full details of host1.
> Then, vm1 event will arrive, the processor will create the vm1 vertex in
> the graph, it will update the host1 properties in the graph (because the
> host1 details that are created in nova.instance are only its basic details
> such as: category, type, id, is_deleted, is_placeholder they host1
> properties won’t be changed in the graph because those details are basic
> and can’t be changed), and then it will create an edge between vm1 and
> host1 (only the nova.instance knows to which nova.host it is connected and
> not vice versa).
> 2.   Vm1 event arrives before host1.
> In this case the processor will add vm1 to the graph, then it will add the
> host1 placeholder to the graph (so we will know to which host vm1 is
> connected) and then add the edge between them.
> Then when the processor will handle with the host1 event, it will just add
> some properties of the host1 vertex, and of course will change the
> is_placeholder property of host1 to false.
> We also has the consistency service that runs every 10 minutes (its
> configurable with the snapshot_interval) and checks if there are vertices
> that are is_placeholder=True and are in the graph more then
> 2*snapshot_interval time then it means that such a vertex of host1 for
> example doesn’t 

[openstack-dev] [infra][nodepool][ci][keystone]

2017-01-05 Thread Lenny Verkhovsky
Hi Devs,
We are bringing up a new CI using the latest CI solution[1],
and we are facing some issues[2] connecting nodepool[3] to the devstack
provider[4], due to a keystone authentication error[5]. It looks like some
info is missing in nodepool.yaml.

Please advise.

P.S. The provider was set up by bringing up a master devstack.

[1] http://docs.openstack.org/infra/openstackci/third_party_ci.html
[2] http://paste.openstack.org/show/593963/
[3] http://paste.openstack.org/show/593959/
[4] http://paste.openstack.org/show/593960/
[5] http://paste.openstack.org/show/593964/
2017-01-05 00:17:00.034 185101 DEBUG keystone.middleware.auth 
[req-ba4a1c04-5ac1-47b6-ae11-641f3feb4209 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. fill_context /opt/stack/keystone/keystone/middleware/auth.py:188
2017-01-05 00:17:00.035 185101 INFO keystone.common.wsgi 
[req-ba4a1c04-5ac1-47b6-ae11-641f3feb4209 - - - - -] POST 
http://10.224.33.32/identity/v3/auth/tokens
2017-01-05 00:17:00.046 185101 ERROR keystone.common.wsgi 
[req-ba4a1c04-5ac1-47b6-ae11-641f3feb4209 - - - - -] object of type 'NoneType' 
has no len()


2017-01-05 00:17:00.046 185101 TRACE keystone.common.wsgi TypeError: object 
of type 'NoneType' has no len()
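
For what it's worth, one common cause of such a failure on a v3 token request
is missing project/domain scoping in the provider credentials. A hypothetical
clouds.yaml entry (field names follow os-client-config; all values here are
placeholders, not your deployment's) would look like:

```yaml
# Placeholder values -- substitute your own cloud's credentials.
clouds:
  devstack:
    auth:
      auth_url: http://10.224.33.32/identity/v3
      username: nodepool
      password: secret
      project_name: nodepool
      user_domain_name: Default
      project_domain_name: Default
    identity_api_version: 3
    region_name: RegionOne
```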


Thanks.
Lenny  (lennyb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Fixing Swift rings when upscaling/replacing nodes in TripleO deployments

2017-01-05 Thread Christian Schwede
Hello everyone,

there was an earlier discussion on $subject last year [1] regarding a
bug when upscaling or replacing nodes in TripleO [2].

Shortly summarized: Swift rings are built on each node separately, and
adding or replacing nodes (or disks) will break the rings because they
are no longer consistent across the nodes. What's needed are the
previous ring builder files on each node before changing the rings.

My former idea in [1] was to build the rings in advance on the
undercloud, and also using introspection data to gather a set of disks
on each node for the rings.

However, this changes the current way of deploying significantly, and
also requires more work in TripleO and Mistral (for example to trigger a
ring build on the undercloud after the nodes have been started, but
before the deployment triggers the Puppet run).

I prefer smaller steps to keep everything stable for now, and therefore
I changed my patches quite a bit. This is my updated proposal:

1. Two temporary undercloud Swift URLs (one PUT, one GET) will be
computed before Mistral starts the deployments. A new Mistral action to
create such URLs is required for this [3].
2. Each overcloud node will try to fetch rings from the undercloud Swift
deployment before updating its set of rings locally, using the temporary
GET url. This guarantees that each node uses the same source set of
builder files. This happens in step 2. [4]
3. puppet-swift runs like today, updating the rings if required.
4. Finally, at the end of the deployment (in step 5) the nodes will
upload their modified rings to the undercloud using the temporary PUT
urls. swift-recon will run before this, ensuring that all rings across
all nodes are consistent.
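
For context, the temporary GET/PUT URLs in steps 1 and 4 would presumably
follow Swift's documented TempURL scheme, where the signature is an HMAC-SHA1
over the method, expiry and path. A minimal sketch (the actual Mistral action
is in review [3]; the container and key names below are made up):

```python
import hmac
from hashlib import sha1

def temp_url(method, path, key, expires):
    """Build a Swift TempURL query string for `method` on `path`."""
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)

# Example: a GET url for fetching the ring builder archive (made-up names).
url = temp_url('GET', '/v1/AUTH_uc/overcloud/rings.tar.gz',
               'secret-temp-url-key', 1500000000)
```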

The two required patches [3][4] are not overly complex IMO, but they
solve the problem of adding or replacing nodes without changing the
current workflow significantly. It should even be easy to backport them
if needed.

I'll continue working on an improved way of deploying Swift rings (using
introspection data), but using this approach it could be even done using
todays workflow, feeding data into puppet-swift (probably with some
updates to puppet-swift/tripleo-heat-templates to allow support for
regions, zones, different disk layouts and the like). However, all of
this could be built on top of these two patches.

I'm curious about your thoughts and welcome any feedback or reviews!

Thanks,

-- Christian


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/100720.html
[2] https://bugs.launchpad.net/tripleo/+bug/1609421
[3] https://review.openstack.org/#/c/413229/
[4] https://review.openstack.org/#/c/414460/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

2017-01-05 Thread slawek

Hello,

In a case like the one you described, ports will be bound by the openvswitch 
mechanism driver, because that agent will be found alive on the host. So the 
linuxbridge mechanism driver will do nothing to bind such ports.


--
Slawek Kaplonski
sla...@kaplonski.pl

W dniu 05.01.2017 04:51, zhi napisał(a):

Hi, Kevin. If I load the openvswitch and linuxbridge mechanism drivers in 
the neutron server, and run ovs-agent on the compute nodes, what does the 
openvswitch mechanism driver do? What does the linuxbridge mechanism driver 
do? I think there must be some differences between the openvswitch and the 
linuxbridge mechanism drivers, but I can't get the exact point of the two 
mechanism drivers when running ovs-agent on compute nodes.


2017-01-04 16:16 GMT+08:00 Kevin Benton :

Note that with the openvswitch and linuxbridge mechanism drivers, it 
will be safe to have both loaded on the Neutron server at the same time 
since each driver will only bind a port if it has an agent of that type 
running on the host.


On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński  
wrote:

Hello,

I don't know what is hierarchical port binding but about mechanism
drivers, You should use this mechanism driver which L2 agent You are
using on compute/network nodes. If You have OVS L2 agent then You 
should

have enabled openvswitch mechanism driver.
In general both of those drivers are doing similar work on
neutron-server side because they are checking if proper agent type is
working on host and if other conditions required to bind port are 
valid.

Mechanism drivers can have also some additional informations about
backend driver, e.g. there is info about supported QoS rule types for
each backend driver (OVS, Linuxbridge and SR-IOV).

BTW. IMHO You should send such questions to 
openst...@lists.openstack.org


--
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Fri, 30 Dec 2016, zhi wrote:


Hi, all

First of all. Happy New year for everyone!

I have a question about mechanism drivers when using ML2 driver.

When should I use openvswitch mechanism driver ?

When should I use linuxbridge mechanism driver ?

And, when should I use openvswitch and linuxbridge mechanism drivers ?

In my opinion, ML2 driver has supported hierarchical port binding. By 
using

hierarchical port binding,
neutron will know every binding info in network topology, isn't it? If 
yes,
where I can found the every binding info. And what the relationship 
between

hierarchical port binding and mechanism drivers?


Hope for your reply.

Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mechanism drivers ! OpenvSwich or Linuxbridge or both of them?

2017-01-05 Thread Kevin Benton
The mechanism drivers populate the vif details that tell nova how it's
supposed to set up the VM port. So the linux bridge driver tells it the port
type is linux bridge[1] and the OVS driver tells it that the type is OVS.

So if you have both loaded and ovs is running on the compute node. The
following steps will happen:

* nova sends a port update populating the host_id of the compute node the
port will be on
* ML2 processes the update and starts the port binding operation and calls
each driver
* The linux bridge mech driver will see that it has no active agents on
that host so it will not bind the port
* The openvswitch mech driver will see that it does have an active agent,
so it will bind the port and populate the details indicating it's an OVS
port
* The updated port with the vif details indicating that it's an OVS port
will be returned to Nova and nova will wire up the port for OVS




1.
https://github.com/openstack/neutron/blob/bcd6fddb127f4fe3f7ce3415f5b5e0da910e0e0b/neutron/plugins/ml2/drivers/linuxbridge/mech_driver/mech_linuxbridge.py#L40-L43
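
The per-driver decision in the steps above can be sketched as a toy model
(real ML2 drivers subclass AgentMechanismDriverBase and receive a PortContext
rather than plain dicts):

```python
# Toy model of ML2 port binding: each driver binds only if its own agent
# type is alive on the target host.
class ToyMechDriver:
    def __init__(self, agent_type, vif_type, alive_agents_by_host):
        self.agent_type = agent_type
        self.vif_type = vif_type
        self.alive = alive_agents_by_host

    def try_bind(self, host):
        if self.agent_type in self.alive.get(host, set()):
            return {'vif_type': self.vif_type}
        return None  # decline, so another driver may bind the port

# Only an OVS agent reports in on compute1.
agents = {'compute1': {'Open vSwitch agent'}}
drivers = [ToyMechDriver('Linux bridge agent', 'bridge', agents),
           ToyMechDriver('Open vSwitch agent', 'ovs', agents)]

binding = None
for driver in drivers:          # linuxbridge declines, OVS binds
    binding = driver.try_bind('compute1')
    if binding:
        break
```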

On Wed, Jan 4, 2017 at 7:51 PM, zhi  wrote:

> Hi, Kevin. If I load openvswitch and linuxbridge mechanism drivers in
> neutron server, and running ovs-agent in compute nodes. What does
> openvswitch mechanism driver do? What does linuxbridge mechanism do? I think
> there must have some differences between the openvswitch and the
> linuxbridge mechanism driver. But I can't get the exact point about the two
> mechanism drivers when running ovs-agent in compute nodes now.
>
> 2017-01-04 16:16 GMT+08:00 Kevin Benton :
>
>> Note that with the openvswitch and linuxbridge mechanism drivers, it will
>> be safe to have both loaded on the Neutron server at the same time since
>> each driver will only bind a port if it has an agent of that type running
>> on the host.
>>
>> On Fri, Dec 30, 2016 at 1:24 PM, Sławek Kapłoński 
>> wrote:
>>
>>> Hello,
>>>
>>> I don't know what is hierarchical port binding but about mechanism
>>> drivers, You should use this mechanism driver which L2 agent You are
>>> using on compute/network nodes. If You have OVS L2 agent then You should
>>> have enabled openvswitch mechanism driver.
>>> In general both of those drivers are doing similar work on
>>> neutron-server side because they are checking if proper agent type is
>>> working on host and if other conditions required to bind port are valid.
>>> Mechanism drivers can have also some additional informations about
>>> backend driver, e.g. there is info about supported QoS rule types for
>>> each backend driver (OVS, Linuxbridge and SR-IOV).
>>>
>>> BTW. IMHO You should send such questions to
>>> openst...@lists.openstack.org
>>>
>>> --
>>> Best regards / Pozdrawiam
>>> Sławek Kapłoński
>>> sla...@kaplonski.pl
>>>
>>> On Fri, 30 Dec 2016, zhi wrote:
>>>
>>> > Hi, all
>>> >
>>> > First of all. Happy New year for everyone!
>>> >
>>> > I have a question about mechanism drivers when using ML2 driver.
>>> >
>>> > When should I use openvswitch mechanism driver ?
>>> >
>>> > When should I use linuxbridge mechanism driver ?
>>> >
>>> > And, when should I use openvswitch and linuxbridge mechanism drivers ?
>>> >
>>> > In my opinion, ML2 driver has supported hierarchical port binding. By
>>> using
>>> > hierarchical port binding,
>>> > neutron will know every binding info in network topology, isn't it? If
>>> yes,
>>> > where I can found the every binding info. And what the relationship
>>> between
>>> > hierarchical port binding and mechanism drivers?
>>> >
>>> >
>>> > Hope for your reply.
>>> >
>>> > Thanks
>>> > Zhi Chang
>>>
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List 

Re: [openstack-dev] [oslo] Not running for Oslo PTL for Pike

2017-01-05 Thread Amrith Kumar
Josh,

It has been great working with you as the PTL of Oslo. Thanks for your
leadership.

-amrith

> -Original Message-
> From: Joshua Harlow [mailto:harlo...@fastmail.com]
> Sent: Tuesday, January 3, 2017 3:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
 d...@lists.openstack.org>
> Subject: [openstack-dev] [oslo] Not running for Oslo PTL for Pike
> 
> Hi Oslo folks (and others),
> 
> Happy new year!
> 
> After serving for about a year I think it's a good opportunity for myself
> to let another qualified individual run for Oslo PTL (seems common to only
> go for two terms and hand-off to another).
> 
> So I just wanted to let folks know that I will be doing this, so that we
> can grow others in the community that wish to try out being a PTL.
> 
> I don't plan on leaving the Oslo community btw, just want to make sure we
> spread the knowledge (and the fun!) of being a PTL.
> 
> Hopefully I've been a decent PTL (with room to improve) during this
> time :-)
> 
> -Josh
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Pike PTG sessions

2017-01-05 Thread Rabi Mishra
HI All,

I've started an etherpad[1] to collect topic ideas for the PTG. We will
have a meeting room for 3 days (Wednesday-Friday). Feel free to add whatever
you think we should discuss/implement.

Basic information about the PTG (schedule, layout etc) is available at
https://www.openstack.org/ptg/ .

[1] https://etherpad.openstack.org/p/heat-pike-ptg-sessions

---
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/glance failed

2017-01-05 Thread Erno Kuvaja
On Wed, Jan 4, 2017 at 9:22 PM, Tony Breeds  wrote:
> On Wed, Jan 04, 2017 at 01:31:42PM -0500, Ian Cordasco wrote:
>
>> I believe you asked in another thread (that I cannot locate) if it was
>> acceptable to the Glance team to not have an 11.0.3 tarball on
>> openstack.org. With Brian on vacation, I'm hoping the other stable
>> maintenance cores will chime in. I, for one, (as Release CPL and a
>> Stable branch core reviewer) don't think the tarballs are critical for
>> Glance. I'm fairly certain that most of the deployment projects use
>> the Git repository directly or Distro provided packages (which are
>> built from git tags). With that in mind, I don't think this should
>> block Glance being EOL'd.
>
> Sounds good.  We can always generate and manually upload signed tarballs and
> wheels, we can't do it with our automated tools.
>
> I'll include glance projects in the next round of EOL requests to infra.
>
>> I'm sorry for the delay in my reply. I took a little over a week of
>> time off myself.
>
> No problem.  It's that time of year.
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

++ I think this is a reasonable way forward. Thanks for your efforts!

- jokke

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev