Re: [openstack-dev] [Interop-wg] [tc][qa][all]Potential New Interoperability Programs: The Current Thinking

2017-06-13 Thread Egle Sigler
Thank you Mark for writing this up! If you are interested in new interop
programs, please comment on https://review.openstack.org/#/c/472785/.
In addition, we will be discussing this during our weekly meeting on
Wednesday, 11:00 AM CST / 16:00 UTC in #openstack-meeting-3. If you cannot
join us for the meeting at that time, find us in the #openstack-interop
channel.

Thank you,

Egle

On 6/9/17, 3:55 PM, "Mark Voelker"  wrote:

>Hi Everyone,
>
>Happy Friday!  There have been a number of discussions (at the PTG, at
>OpenStack Summit, in Interop WG and Board of Directors meetings, etc)
>over the past several months about the possibility of creating new
>interoperability programs in addition to the existing OpenStack Powered
>program administered by the Interop Working Group (formerly the DefCore
>Committee).  In particular, lately there have been a lot of discussions
>recently [1] about where to put tests associated with trademark programs
>with respect to some existing TC guidance [2] and community goals for
>Queens [3].  Although these potential new programs have been discussed in
>a number of places, it's a little hard to keep tabs on where we're at
>with them unless you're actively following the Interop WG.  Given the
>recent discussions on openstack-dev, I thought it might be useful to try
>and brain dump our current thinking on what these new programs might look
>like into a document somewhere that people could point at in discussions
>rather than discussing abstracts and working off memories from prior
>meetings.  To that end, I took a first stab at it this week which you can
>find here:
>
>https://review.openstack.org/#/c/472785/
>
>Needless to say this is just a draft to try to get some of the ideas out
>of neurons and on to electrons, so please don't take it to be firm
>consensus; rather, consider it a draft of what we're currently thinking and
>an invitation to collaborate.  I expect that other members of the Interop
>Working Group will be leaving comments in Gerrit as we hash through this,
>and we'd love to have input from other folks in the community as well.
>These programs potentially touch a lot of you (in fact, almost all of
>you) in some way or another, so we're happy to hear your input as we work
>on evolving the interop programs.  Quite a lot has happened over the past
>couple of years, so we hope this will help folks understand where we came
>from and think about whether we want to make changes going forward.
>
>By the way, for those of you who might find an HTML-rendered document
>easier to read, click on the "gate-interop-docs-ubuntu-xenial" link in
>the comments left by Jenkins and then on "Extension Programs - Current
>Direction".  Thanks, and have a great weekend!
>
>[1] 
>http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#117657
>[2] 
>https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
>[3] 
>https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
>
>At Your Service,
>
>Mark T. Voelker
>___
>Interop-wg mailing list
>interop...@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/interop-wg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Kris G. Lindgren
I am fine with #2, and I am also fine with calling it a bug, since the 
enabled/disabled state for the other services didn’t actually do anything.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 6/13/17, 8:46 PM, "Dan Smith"  wrote:

> Are we allowed to cheat and say auto-disabling non-nova-compute services
> on startup is a bug and just fix it that way for #2? :) Because (1) it
> doesn't make sense, as far as we know, and (2) it forces the operator to
> use the API to enable them later just to fix their nova
> service-list output.

Yes, definitely.

--Dan

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Product] PTG attendance

2017-06-13 Thread T. Nichole Williams
Tentative yes on attending Denver PTG. If the mid-cycle meeting is also in 
Denver, I have a better chance of my job endorsing the trip. 

<3 Trilliams

Sent from my iPhone

> On Jun 13, 2017, at 9:05 PM,  wrote:
> 
> Fellow Product WG members,
> We are taking an informal poll: how many of us plan to attend
> the PTG meeting in Denver?
>  
> Second question: should we have the mid-cycle meeting co-located with the
> PTG or with the operator summit in Mexico City?
>  
> Please respond to this email so Shamail and Leong can tally the results.
> Thanks,
> Arkady
>  
> Arkady Kanevsky, Ph.D.
> Director of SW Development
> Dell EMC CPSD
> Dell Inc. One Dell Way, MS PS2-91
> Round Rock, TX 78682, USA
> Phone: 512 723 5264
>  
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Dan Smith

Are we allowed to cheat and say auto-disabling non-nova-compute services
on startup is a bug and just fix it that way for #2? :) Because (1) it
doesn't make sense, as far as we know, and (2) it forces the operator to
use the API to enable them later just to fix their nova
service-list output.


Yes, definitely.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] PTG attendance

2017-06-13 Thread Arkady.Kanevsky
Fellow Product WG members,
We are taking an informal poll: how many of us plan to attend
the PTG meeting in Denver?

Second question: should we have the mid-cycle meeting co-located with the
PTG or with the operator summit in Mexico City?

Please respond to this email so Shamail and Leong can tally the results.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

On 6/13/2017 8:17 PM, Dan Smith wrote:

So it seems our options are:

1. Allow PUT /os-services/{service_uuid} on any type of service, even if
it doesn't make sense for non-nova-compute services.

2. Change the behavior of [1] to only disable new "nova-compute" 
services.


Please, #2. Please.

--Dan

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Are we allowed to cheat and say auto-disabling non-nova-compute services 
on startup is a bug and just fix it that way for #2? :) Because (1) it 
doesn't make sense, as far as we know, and (2) it forces the operator to 
use the API to enable them later just to fix their nova 
service-list output.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Emilien Macchi
On Tue, Jun 13, 2017 at 3:11 PM, Ben Nemec  wrote:
>
>
> On 06/13/2017 12:28 PM, Paul Belanger wrote:
>>
>> On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:
>>>
>>>
>>>
>>> On 06/12/2017 06:19 PM, Ronelle Landy wrote:

 Greetings,

 TripleO OVB check gates are managed by upstream Zuul and executed on
 nodes provided by test cloud RH1. RDO Cloud is now available as a test
 cloud to be used when running CI jobs. To utilize RDO Cloud, we could
 either:

 - continue to run from upstream Zuul (and spin up nodes to deploy
 the overcloud from RDO Cloud)
 - switch the TripleO OVB check gates to run as third party and
 manage these jobs from the Zuul instance used by Software Factory

 The openstack infra team advocates moving to third party.
 The CI team is meeting with Frederic Lepied, Alan Pevec, and other
 members of the Software Factory/RDO project infra team to discuss how
 this move could be managed.

 Note: multinode jobs are not impacted - and will continue to run from
 upstream Zuul on nodes provided by nodepool.

 Since a move to third party could have significant impact, we are
 posting this out to gather feedback and/or concerns that TripleO
 developers may have.
>>>
>>>
>>> I'm +1 on moving to third-party...eventually.  I don't think it should be
>>> done at the same time as we move to a new cloud, which is a major change
>>> in and of itself.  I suppose we could do the third-party transition in
>>> parallel with the existing rh1 jobs, but as one of the people who will
>>> probably have to debug problems in RDO cloud I'd rather keep the number
>>> of variables to a minimum.  Once we're reasonably confident that RDO
>>> cloud is stable and handling our workload well we can transition to
>>> third-party and deal with the problems that it will no doubt cause on
>>> their own.
>>>
>> This was a goal for tripleo-test-cloud-rh2, to move that to thirdparty CI,
>> ensure jobs work, then migrate. As you can see, we never actually did
>> that.
>>
>> My preference would be to make the move to thirdparty now, with
>> tripleo-test-cloud-rh1.  We now have all the pieces in place for the RDO
>> project to support this and in parallel set up RDO cloud to run jobs
>> from RDO.
>>
>> If RDO stability is a concern, the move to thirdparty first seems to make
>> the most sense. This avoids the need to bring RDO cloud online, ensure it
>> works, then move it again, and re-ensure it works.
>>
>> Again, the move can be made seamless by turning down some of the capacity
>> in nodepool.o.o and increasing capacity in nodepool.rdoproject.org. And I
>> am happy to help work with RDO on making this happen.
>
>
> I'm good with doing the third-party migration first too.  I'm only looking
> to avoid two concurrent major changes.

+1, I do agree with Ben here.

Go for it!

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Dan Smith

So it seems our options are:

1. Allow PUT /os-services/{service_uuid} on any type of service, even if
it doesn't make sense for non-nova-compute services.

2. Change the behavior of [1] to only disable new "nova-compute" services.


Please, #2. Please.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

On 6/13/2017 12:19 PM, Matt Riedemann wrote:

With this change in Pike:

https://review.openstack.org/#/c/442162/

The PUT /os-services/* APIs to enable/disable/force-down a service will 
now only work with nova-compute services. If you're using those to try 
and disable a non-compute service, like nova-scheduler or 
nova-conductor, those APIs will result in a 404 response because there 
won't be host mappings for non-compute services.


There really never was a good reason to disable/enable non-compute 
services anyway since it wouldn't do anything. The scheduler and API are 
checking the status and forced_down fields to see if instance builds can 
be scheduled to a compute host or if instances can be evacuated from a 
downed compute host. There is nothing that relies on a disabled or 
downed conductor or scheduler service.


I realize the docs aren't justification for API behavior, but the API 
reference has always pointed out that these PUT operations are for 
*compute* services:


https://developer.openstack.org/api-ref/compute/#compute-services-os-services 



This has come up while working on an API microversion [1] where we'll 
now expose service uuids in GET calls and take a service uuid in PUT and 
DELETE calls to the os-services API. The uuid is needed to uniquely 
identify a service across cells. I plan on restricting PUT 
/os-services/{service_id} calls to only nova-compute services, and 
returning a 400 on any other service like nova-conductor or nova-scheduler, 
since it doesn't make sense to enable/disable/force-down non-compute 
services.
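
For illustration, the new-style call would look roughly like this (the 
exact request body fields are my assumption based on the existing 
enable/disable semantics; see [1] for the actual proposal):

  PUT /os-services/{service_uuid}
  {"status": "disabled", "disabled_reason": "maintenance"}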


This email is to provide awareness of this change and to also see if 
there are any corner cases in which people are relying on any of this 
behavior that we don't know about - this is your chance to speak up 
before we make the change.


[1] 
https://review.openstack.org/#/c/464280/11/nova/api/openstack/compute/services.py@288 





Kris Lindgren brought up a good point in IRC today about this.

If you configure enable_new_services=False, when new services are 
created they will be automatically disabled [1].


As noted earlier, a disabled nova-conductor, nova-scheduler, etc. doesn't 
really mean anything. However, if we don't allow you to enable them via 
the API (the new PUT /os-services/{service_uuid} microversion), then 
those are going to be listed as disabled until you tweak them in the 
database directly, which isn't good.


And trying to get around this by using "PUT /os-services/enable" with 
microversion 2.1 won't work in Pike because of the host mapping issue I 
mentioned before.


So it seems our options are:

1. Allow PUT /os-services/{service_uuid} on any type of service, even if 
it doesn't make sense for non-nova-compute services.


2. Change the behavior of [1] to only disable new "nova-compute" services.

[1] 
https://github.com/openstack/nova/blob/d26b3e7051a89160ad26c38548fcf0c08c06dc33/nova/db/sqlalchemy/api.py#L588
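
For reference, option #2 would be a small tweak in the service_create()
path linked in [1]: only auto-disable newly created nova-compute
services. A rough sketch, not the actual patch:

  if not CONF.enable_new_services and \
          values.get('binary') == 'nova-compute':
      # Only compute services are scheduled to, so only they benefit
      # from starting out disabled; leave conductor/scheduler alone.
      service_ref.disabled = True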


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-13 Thread Joe Talerico
On Fri, Jun 9, 2017 at 7:28 AM, Justin Kilpatrick  wrote:
> On Fri, Jun 9, 2017 at 5:25 AM, Dmitry Tantsur  wrote:
>> This number of "300", does it come from your testing or from other sources?
>> If the former, which driver were you using? What exactly problems have you
>> seen approaching this number?
>
> I haven't encountered this issue personally, but from talking to Joe
> Talerico and some operators at summit, around this number a single
> conductor begins to fall behind polling all of the out-of-band
> interfaces for the machines that it's responsible for. You start to
> see what you would expect from polling running behind, like incorrect
> power states listed for machines and a general inability to perform
> machine operations in a timely manner.
>
> Having spent some time at the Ironic operators forum, this is pretty
> normal and the correct response is just to scale out conductors; this
> is a problem with TripleO because we don't really have a scale-out
> option with a single-machine design. Fortunately just increasing the
> time between interface polling acts as a pretty good stopgap for this
> and lets Ironic catch up.
>
> I may get some time on a cloud of that scale in the future, at which
> point I will have hard numbers to give you. One of the reasons I made
> YODA was the frustrating prevalence of anecdotes instead of hard data
> when it came to one of the most important parts of the user
> experience. If it doesn't deploy people don't use it, full stop.
>
>> Could you please elaborate? (a bug could also help). What exactly were you
>> doing?
>
> https://bugs.launchpad.net/ironic/+bug/1680725

Additionally, I would like to see more verbose output from the
cleaning : https://bugs.launchpad.net/ironic/+bug/1670893

>
> Describes exactly what I'm experiencing. Essentially the problem is
> that nodes can and do fail to pxe, then cleaning fails and you just
> lose the nodes. Users have to spend time going back and babysitting
> these nodes and there are no good instructions on what to do with failed
> nodes anyway. The answer is to move them to manageable and then to
> available at which point they go back into cleaning until it finally
> works.
>
> Like introspection a year ago, this is a cavalcade of documentation
> problems and software issues. I mean really everything *works*
> technically but the documentation acts like cleaning will work all the
> time and so does the software, leaving the user to figure out how to
> accommodate the realities of the situation without so much as a
> warning that it might happen.
>
> This comes out as more of a ux issue than a software one, but we can't
> just ignore these.
>
> - Justin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][tripleo] Add ganesha puppet module

2017-06-13 Thread Alex Schultz
On Mon, Jun 12, 2017 at 4:27 AM, Jan Provaznik  wrote:
> Hi,
> we would like to use nfs-ganesha for accessing shares on a ceph storage
> cluster [1]. There is not yet a puppet module which would install and
> configure the nfs-ganesha service. So to be able to set up nfs-ganesha with
> TripleO, I'd like to create a new ganesha puppet module under the
> openstack-puppet umbrella, unless there is a disagreement?
>

I don't have any particular issue with it.  Feel free to follow the guide [0].

Thanks,
-Alex

[0] https://docs.openstack.org/developer/puppet-openstack-guide/new-module.html

> Thanks, Jan
>
> [1] https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-13 Thread Joe Talerico
On Fri, Jun 9, 2017 at 5:25 AM, Dmitry Tantsur  wrote:
> On 06/08/2017 02:21 PM, Justin Kilpatrick wrote:
>>
>> Morning everyone,
>>
>> I've been working on a performance testing tool for TripleO hardware
>> provisioning operations off and on for about a year now and I've been
>> using it to try and collect more detailed data about how TripleO
>> performs in scale and production use cases. Perhaps more importantly
>> YODA (Yet Openstack Deployment Tool, Another) automates the task
>> enough that days of deployment testing is a set it and forget it
>> operation.
>> You can find my testing tool here [0] and the test report [1] has
>> links to raw data and visualization. Just scroll down, click the
>> captcha and click "go to kibana". I still need to port that machine
>> from my own solution over to search guard.
>>
>> If you have too much email to consider clicking links I'll copy the
>> results summary here.
>>
>> TripleO inspection workflows have seen massive improvements from
>> Newton with a failure rate for 50 nodes with the default workflow
>> falling from 100% to <15%. Using patches slated for Pike that spurious
>> failure rate reaches zero.
>
>
> \o/
>
>>
>> Overcloud deployments show a significant improvement of deployment
>> speed in HA and stack update tests.
>>
>> Ironic deployments in the overcloud allow the use of Ironic for bare
>> metal scale out alongside more traditional VM compute. Considering a
>> single conductor starts to struggle around 300 nodes, it will be
>> difficult to push a multi-conductor setup to its limits.
>
>
> This number of "300", does it come from your testing or from other sources?

Dmitry - The "300" comes from my testing on different environments.

Most recently, here is what I saw at CNCF -
https://snapshot.raintank.io/dashboard/snapshot/Sp2wuk2M5adTpqfXMJenMXcSlCav2PiZ

The undercloud was "idle" during this period.

> If the former, which driver were you using?

pxe_ipmitool.

> What problems exactly have you seen approaching this number?

I would have to restart ironic-conductor before every scale-up; here is
what ironic-conductor looks like after a restart:
https://snapshot.raintank.io/dashboard/snapshot/Im3AxP6qUfMnTeB97kryUcQV6otY0bHP
Without restarting Ironic, the scale-up would fail due to Ironic (I
do not have the exact error we would encounter documented).
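
A stopgap that has helped here is stretching the conductor's power state
sync in ironic.conf; the option below is from ironic's [conductor]
section, the value is illustrative and should be tuned per environment:

  [conductor]
  # default is 60 seconds; raising it reduces out-of-band polling load
  sync_power_state_interval = 120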

>
>>
>> Finally, Ironic node cleaning shows a similar failure rate to
>> inspection and will require similar attention in TripleO workflows to
>> become painless.
>
>
> Could you please elaborate? (a bug could also help). What exactly were you
> doing?
>
>>
>> [0] https://review.openstack.org/#/c/384530/
>> [1]
>> https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/
>>
>> Thanks for your time!
>
>
> Thanks for YOUR time, this work is extremely valuable!
>
>
>>
>> - Justin
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-13 Thread Matt Riedemann

On 6/8/2017 7:45 AM, Jim Rollenhagen wrote:

Hey friends,

I've been mostly missing for the past six weeks while looking for a new 
job, so maybe you've forgotten me already, maybe not. I'm happy to tell 
you I've found one that I think is a great opportunity for me. But, I'm 
sad to tell you that it's totally outside of the OpenStack community.


The last 3.5 years have been amazing. I'm extremely grateful that I've 
been able to work in this community - I've learned so much and met so 
many awesome people. I'm going to miss the insane(ly awesome) level of 
collaboration, the summits, the PTGs, and even some of the bikeshedding. 
We've built amazing things together, and I'm sure y'all will continue to 
do so without me.


I'll still be lurking in #openstack-dev and #openstack-ironic for a 
while, if people need me to drop a -2 or dictate old knowledge or 
whatever, feel free to ping me. Or if you just want to chat. :)


<3 jroll

P.S. obviously my core permissions should be dropped now :P


How can you drop a -2 if you don't have core anymore Jim?!

Good luck on the new position. We'll miss you around the nova channel. 
We were just talking today about how much better you made the 
nova/ironic interaction for users and operators, and developers by 
bridging the gap on both sides.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-13 Thread Sai Sindhur Malleni
On Thu, Jun 8, 2017 at 11:10 AM, Emilien Macchi  wrote:

> On Thu, Jun 8, 2017 at 2:21 PM, Justin Kilpatrick 
> wrote:
> > Morning everyone,
> >
> > I've been working on a performance testing tool for TripleO hardware
> > provisioning operations off and on for about a year now and I've been
> > using it to try and collect more detailed data about how TripleO
> > performs in scale and production use cases. Perhaps more importantly
> > YODA (Yet Openstack Deployment Tool, Another) automates the task
> > enough that days of deployment testing is a set it and forget it
> > operation.
> >
> > You can find my testing tool here [0] and the test report [1] has
> > links to raw data and visualization. Just scroll down, click the
> > captcha and click "go to kibana". I still need to port that machine
> > from my own solution over to search guard.
> >
> > If you have too much email to consider clicking links I'll copy the
> > results summary here.
> >
> > TripleO inspection workflows have seen massive improvements from
> > Newton with a failure rate for 50 nodes with the default workflow
> > falling from 100% to <15%. Using patches slated for Pike that spurious
> > failure rate reaches zero.
> >
> > Overcloud deployments show a significant improvement of deployment
> > speed in HA and stack update tests.
> >
> > Ironic deployments in the overcloud allow the use of Ironic for bare
> > metal scale out alongside more traditional VM compute. Considering a
> > single conductor starts to struggle around 300 nodes, it will be
> > difficult to push a multi-conductor setup to its limits.
> >
> > Finally, Ironic node cleaning shows a similar failure rate to
> > inspection and will require similar attention in TripleO workflows to
> > become painless.
> >
> > [0] https://review.openstack.org/#/c/384530/
> > [1] https://docs.google.com/document/d/194ww0Pi2J-dRG3-
> X75mphzwUZVPC2S1Gsy1V0K0PqBo/
> >
> > Thanks for your time!
>
> Hey Justin,
>
> All of this is really cool. I was wondering if you had a list of bugs
> that you've faced or reported yourself regarding performance
> issues in TripleO.
> As you might have seen in a separate thread on openstack-dev, we're
> planning a sprint on June 21/22th to improve performances in TripleO.
>

Is this an IRC thing, or a video call? I work on the OpenStack Performance
and Scale team and would love to participate.

> We would love your participation or someone from your team and if you
> have time before, please add the deployment-time tag to the Launchpad
> bugs that you know are related to performance.
>
> Thanks a lot,
>
> > - Justin
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sai Sindhur Malleni

Software Engineer
Red Hat Inc.
100 East Davie Street
Raleigh, NC, USA
Work: (919) 754-4557 | Cell: (919) 985-1055
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-13 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 8:45 AM, Jim Rollenhagen  wrote:
> Hey friends,
>
> I've been mostly missing for the past six weeks while looking for a new job,
> so maybe you've forgotten me already, maybe not. I'm happy to tell you I've
> found one that I think is a great opportunity for me. But, I'm sad to tell
> you that it's totally outside of the OpenStack community.
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've been
> able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
> <3 jroll

I confirm what others said: you'll be missed for sure.

I wanted to personally thank you for your help and involvement on many
topics that make OpenStack better.
Enjoy the next things ;-)

> P.S. obviously my core permissions should be dropped now :P
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for Ceilometer

2017-06-13 Thread gordon chung


On 13/06/17 08:22 AM, Deepthi V V wrote:
> Gordon, the driver would leverage ceilometer's polling framework. It will run 
> as part of the ceilometer-agent-central process and will be added to the 
> "network.statistics.drivers" namespace.
> I guess then we are good to add it to networking-odl?

yep, you can also see how we add powervm in a separate repo here: 
https://github.com/openstack/ceilometer-powervm. it basically leverages 
ceilometer's interface but they manage the powervm specific stuff.

it's worked ok so far (i think); we just need to sync up on what the 
interface is in case other drivers want to do the same.
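
for illustration, hooking a driver into that namespace is just a
setuptools entry point in the driver's own repo; the module path below is
hypothetical:

  [entry_points]
  network.statistics.drivers =
      opendaylight.v2 = networking_odl.ceilometer.network.statistics.driver:ODLDriver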

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev]  [horizon] Approach for bugs in xstatic packages

2017-06-13 Thread Mateusz Kowalski
Hi everyone,

I would like to raise a question about our approach to xstatic packages, more 
specifically xstatic-roboto-fontface, and its updates/bugfixes. What we have 
encountered is https://bugs.launchpad.net/horizon/+bug/1671004, which, to our 
knowledge, affects everyone using stable/ocata with Material design (though I 
get there may not be many operators using this setup).

My first step was to submit a patch, 
https://review.openstack.org/#/c/443025/, but unfortunately the policy is to 
take xstatic packages from upstream in their unchanged form. As I totally 
understand this, I raised the issue with the upstream package provider in 
March: https://github.com/choffmeister/roboto-fontface-bower/issues/41. 
Unfortunately, since there has been no response since then, I submitted a 
merge request for the issue: 
https://github.com/choffmeister/roboto-fontface-bower/pull/47. It is complete 
OpenStack-wise, but it does not solve the issue in general (long story short, 
more than just one file requires patching, but Horizon uses only the one I 
have patched).

Anyway, seeing not much response to my initial upstream report since March, I 
don't have much hope for the merge request I have just submitted. On the other 
hand, any advice on how I can proceed in this particular case would be 
helpful, as I have no experience with this kind of workflow (bugs in xstatic 
packages).

Thanks for any suggestions,
Mateusz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Paul Belanger
On Tue, Jun 13, 2017 at 02:11:53PM -0500, Ben Nemec wrote:
> 
> 
> On 06/13/2017 12:28 PM, Paul Belanger wrote:
> > On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:
> > > 
> > > 
> > > On 06/12/2017 06:19 PM, Ronelle Landy wrote:
> > > > Greetings,
> > > > 
> > > > TripleO OVB check gates are managed by upstream Zuul and executed on
> > > > nodes provided by test cloud RH1. RDO Cloud is now available as a test
> > > > cloud to be used when running CI jobs. To utilize RDO Cloud, we could
> > > > either:
> > > > 
> > > > - continue to run from upstream Zuul (and spin up nodes to deploy
> > > > the overcloud from RDO Cloud)
> > > > - switch the TripleO OVB check gates to run as third party and
> > > > manage these jobs from the Zuul instance used by Software Factory
> > > > 
> > > > The openstack infra team advocates moving to third party.
> > > > The CI team is meeting with Frederic Lepied, Alan Pevec, and other
> > > > members of the Software Factory/RDO project infra team to discuss how
> > > > this move could be managed.
> > > > 
> > > > Note: multinode jobs are not impacted - and will continue to run from
> > > > upstream Zuul on nodes provided by nodepool.
> > > > 
> > > > Since a move to third party could have significant impact, we are
> > > > posting this out to gather feedback and/or concerns that TripleO
> > > > developers may have.
> > > 
> > > I'm +1 on moving to third-party...eventually.  I don't think it should be
> > > done at the same time as we move to a new cloud, which is a major change
> > > in and of itself.  I suppose we could do the third-party transition in
> > > parallel with the existing rh1 jobs, but as one of the people who will
> > > probably have to debug problems in RDO cloud I'd rather keep the number
> > > of variables to a minimum.  Once we're reasonably confident that RDO
> > > cloud is stable and handling our workload well we can transition to
> > > third-party and deal with the problems that it will no doubt cause on
> > > their own.
> > > 
> > This was a goal for tripleo-test-cloud-rh2, to move that to thirdparty CI,
> > ensure jobs work, then migrate. As you can see, we never actually did that.
> > 
> > My preference would be to make the move to thirdparty now, with
> > tripleo-test-cloud-rh1.  We now have all the pieces in place for the RDO
> > project to support this and in parallel set up RDO cloud to run jobs
> > from RDO.
> > 
> > If RDO stability is a concern, the move to thirdparty first seems to make
> > the most sense. This avoids the need to bring RDO cloud online, ensure it
> > works, then move it again, and re-ensure it works.
> > 
> > Again, the move can be made seamless by turning down some of the capacity
> > in nodepool.o.o and increasing capacity in nodepool.rdoproject.org. And I
> > am happy to help work with RDO on making this happen.
> 
> I'm good with doing the third-party migration first too.  I'm only looking
> to avoid two concurrent major changes.
> 
Great, I am happy to hear that :D

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-13 Thread Emilien Macchi
On Mon, Jun 12, 2017 at 8:02 AM, Jiří Stránský  wrote:
> On 9.6.2017 18:51, Flavio Percoco wrote:
>>
>> A-ha, ok! I figured this was another option. In this case I guess we would
>> have 2 options:
>>
>> 1. Run confd + openstack service inside the container. My concern in this
>> case
>> would be that we'd have to run 2 services inside the container and
>> structure
>> things in a way we can monitor both services and make sure they are both
>> running. Nothing impossible but one more thing to do.
>
>
> I see several cons with this option:
>
> * Even if we do this in a sidecar container like Bogdan mentioned (which is
> better than running 2 "top-level" processes in a single container IMO), we
> still have to figure out when to restart the main service, IIUC. I see confd
> in daemon mode listens on the backend change and updates the conf files, but
> i can't find a mention that it would be able to restart services. Even if we
> implemented this auto-restarting in OpenStack services, we need to deal with
> services like MariaDB, Redis, ..., so additional wrappers might be needed to
> make this a generic solution.
>
> * Assuming we've solved the above, if we push a config change to etcd, all
> services get restarted at roughly the same time, possibly creating downtime
> or capacity issues.

I'm not sure the galera1 container would share the same namespace for the
key/values as the galera2 container (for example); I think we would separate
namespaces by container name or something unique.

> * It complicates the reasoning about container lifecycle, as we have to
> start distinguishing between changes that don't require a new container
> (config change only) vs. changes which do require it (image content change).
> Mutable container config also hides this lifecycle from the operator -- the
> container changes on the inside without COE knowing about it, so any
> operator's queries to COE would look like no changes happened.
>
> I think ideally container config would be immutable, and every time we want
> to change anything, we'd do that via a roll out of a new set of containers.
> This way we have a single way of making changes to reason about, and when
> we're doing rolling updates, it shouldn't result in a downtime or tangible
> performance drop. (Not talking about migrating to a new major OpenStack
> release, which will still remain a special case in foreseeable future.)
>
>>
>> 2. Run confd `-onetime` and then run the openstack service.
>
>
> This sounds simpler both in terms of reasoning and technical complexity, so
> if we go with confd, i'd lean towards this option. We'd have to
> rolling-replace the containers from outside, but that's what k8s can take
> care of, and at least the operator can see what's happening on high level.
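
For concreteness, the `-onetime` flow would make the container entrypoint
something like this sketch (flags per confd's documented CLI; the service
and paths are just examples):

  #!/bin/sh
  # Render config templates from etcd once, then hand off to the service.
  confd -onetime -backend etcd -node http://127.0.0.1:2379 && \
      exec nova-api --config-file /etc/nova/nova.conf
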
>
> The issues that Michał mentioned earlier still remain to be solved -- config
> versioning ("accidentally" picking up latest config), and how to supply
> config elements that differ per host.
>
> Also, it's probably worth diving a bit deeper into comparing `confd
> -onetime` and ConfigMaps...
>
>
> Jirka
>
>>
>>
>> Either would work but #2 means we won't have config files monitored and
>> the
>> container would have to be restarted to update the config files.
>>
>> Thanks, Doug.
>> Flavio
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo] Making stack outputs static

2017-06-13 Thread Zane Bitter

On 13/06/17 13:00, Ben Nemec wrote:



On 06/12/2017 03:53 PM, Zane Bitter wrote:

On 12/06/17 16:21, Steven Hardy wrote:
I think we wanted to move to convergence anyway so I don't see a problem
with this.  I know there was some discussion about starting to test with
convergence in tripleo-ci, does anyone know what, if anything, happened
with that?

There's an experimental job that runs only on the heat repo
(gate-tripleo-ci-centos-7-ovb-nonha-convergence)

But yeah now seems like a good time to get something running more
regularly in tripleo-ci.


+1, there's no reason not to run a non-voting job against tripleo itself
at this point IMHO. That would allow me to start tracking the memory use
over time.


Do you have a strong preference between multinode and ovb?  I would tend to 
think we want this to be ovb since multinode stubs out a bunch of stuff, 
but at the same time ovb capacity is limited.  It's better* now that 
we've basically halved our ovb ci coverage so we could probably get away 
with adding a job to a specific repo though (t-h-t seems logical for 
this purpose).


The job I've been tracking memory usage on against t-h-t was 
tripleo-ci-centos-7-ovb-nonha... which I see no longer exists (doh!). So 
I guess we can pick whatever it makes sense to track over the long term.


We want something fairly representative, so that we can be confident to 
flip the switch early in Queens development. I don't think it especially 
matters whether we're using baremetal servers or VMs though... that 
should be fairly transparent to Heat.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally][no-admin] Finally Rally can be run without admin user

2017-06-13 Thread Morgan Fainberg
On Tue, Jun 13, 2017 at 1:04 PM, Boris Pavlovic  wrote:
> Hi stackers,
>
> Intro
>
> Initially Rally was targeted at developers, which meant running it as
> admin was OK.
> Admin was basically used to simplify preparing the environment for testing:
> creating and setting up users/tenants, networks, quotas and other resources
> that require the admin role.
> It was also used to clean up all resources after a test was executed.
>
> Problem
>
> More and more operators were running Rally against their production
> environments, and they were not happy that they had to provide admin;
> they would rather prepare the environment by hand and provide already
> existing users than let Rally mess with admin rights =)
>
> Solution
>
> After years of refactoring we changed almost everything ;) and we managed to
> keep Rally as simple as it was while supporting both operators' and
> developers' needs.
>
> Now Rally supports 3 different modes:
>
> admin mode -> Rally manages users that are used for testing
> admin + existing users mode -> Rally uses existing users for testing (if no
> user context)
> [new one] existing users mode -> Rally uses existing users for testing
>
> In every mode the input task will look the same; however, in
> existing-users-only mode you won't be able to use plugins that require the
> admin role.
>
> This patch finishes the work: https://review.openstack.org/#/c/465495/
>
> Thanks to everybody that was involved in this huge effort!
>
>
> Best regards,
> Boris Pavlovic
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Good work, and fantastic news. This will make rally a more interesting
tool to use against real-world deployments.

Congrats on a job well done.
--Morgan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-13 Thread Emilien Macchi
On Fri, Jun 9, 2017 at 1:38 PM, Doug Hellmann  wrote:
> Excerpts from Alex Schultz's message of 2017-06-09 10:54:16 -0600:
>> I ran into a case where I wanted to add python-tripleoclient to
>> test-requirements for tripleo-heat-templates but it's not in the
>> global requirements. In looking into adding this, I noticed that
>> python-tripleoclient and tripleo-common are not
>> cycle-with-intermediary either. Should/can we update these as well?
>> tripleo-common is already in the global requirements but I guess since
>> we've been releasing non-prerelease versions fairly regularly with the
>> milestones it hasn't been a problem.
>
> Yes, let's get all of the tripleo team's libraries onto the
> cycle-with-intermediary release model.

Done: https://review.openstack.org/473974

Please review and let me know if I missed something.

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 23) - images, devmode and the RDO Cloud

2017-06-13 Thread Emilien Macchi
On Fri, Jun 9, 2017 at 10:12 AM, Attila Darazs  wrote:
> If the topics below interest you and you want to contribute to the
> discussion, feel free to join the next meeting:
>
> Time: Thursdays, 14:30-15:30 UTC
> Place: https://bluejeans.com/4113567798/
>
> Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
>
> We had a packed agenda and intense discussion as always! Let's start with an
> announcement:
>
> The smoothly named "TripleO deploy time optimization hackathlon" will be
> held on 21st and 22nd of June. It would be great to have the cooperation of
> multiple teams here. See the etherpad[1] for details.
>
> = Extending our image building =
>
> It seems that multiple teams would like to utilize the upstream/RDO image
> building process and produce images just like we do upstream. Unfortunately
> our current image storage systems are not having enough bandwidth (either
> upstream or on the RDO level) to increase the amount of images served.
>
> Paul Belanger joined us and explained the longer term plans of OpenStack
> infra, which would provide a proper image/binary blob hosting solution in
> the 6 months time frame.
>
> In the short term, we will recreate both the upstream and RDO image hosting
> instances on the new RDO Cloud and will test the throughput.

Also, you might want to read the future OpenStack guidelines for
managing releases of binary artifacts:
https://review.openstack.org/#/c/469265/

> = Transitioning the promotion jobs =
>
> This task still needs some further work. We're missing feature parity on the
> ovb-updates job. As the CI Squad is not able to take responsibility for the
> update functionality, we will probably migrate the job with everything else
> but the update part and make that the new promotion job.

I don't think we need to wait on the conversion to switch.
We could just configure the promotion pipeline to run ovb-oooq-ha and
ovb-updates, and put the conversion in a parallel effort, no?

> We will also extend the number of jobs voting on a promotion, probably with
> the scenario jobs.

+1000 for having scenarios. Let's start with classic deployment, and
then later we'll probably add upgrades.

> = Devmode =
>
> Quickstart's devmode.sh seems to be picking up popularity among the TripleO
> developers. Meanwhile we're starting to realize the limitations of the
> interface it provides for Quickstart. We're going to have a design session
> next week on Tuesday (13th) at 1pm UTC where we will try to come up with
> some ideas to improve this.
>
> Ian Main suggested to default devmode.sh to deploy a containerized system so
> that developers get more familiar with that. We agreed on this being a good
> idea and will follow it up with some changes.
>
> = RDO Cloud =
>
> The RDO cloud transition is continuing, however Paul requested that we don't
> add the new cloud to the tripleo queue upstream but rather use the
> rdoproject's own zuul and nodepool to be a bit more independent and run it
> like a third party CI system. This will require further cooperation with RDO
> Infra folks.
>
> Meanwhile Sagi is setting up the infrastructure needed on the RDO Cloud
> instance to run CI jobs.
>
> Thank you for reading the summary. Have a great weekend!

Thanks for the report, very useful as usual.

> Best regards,
> Attila
>
> [1] https://etherpad.openstack.org/p/tripleo-deploy-time-hack
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally][no-admin] Finally Rally can be run without admin user

2017-06-13 Thread Boris Pavlovic
Hi stackers,

*Intro*

Initially Rally was targeted at developers, which meant running it as
admin was OK.
Admin was basically used to simplify preparing the environment for testing:
creating and setting up users/tenants, networks, quotas and other resources
that require the admin role.
It was also used to clean up all resources after a test was executed.

*Problem*

More and more operators were running Rally against their production
environments, and they were not happy that they had to provide admin;
they would rather prepare the environment by hand and provide already
existing users than let Rally mess with admin rights =)

*Solution*

After years of refactoring we changed almost everything ;) and we managed to
keep Rally as simple as it was while supporting both operators' and
developers' needs.

Now Rally supports 3 different modes:

   - admin mode -> Rally manages users that are used for testing
   - admin + existing users mode -> Rally uses existing users for testing
   (if no user context)
   - *[new one] existing users mode* -> Rally uses existing users for
   testing

In every mode the input task will look the same; however, in
existing-users-only mode you won't be able to use plugins that require the
admin role.
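
For the existing-users mode, the deployment config is just a list of the
users you prepared by hand, roughly like this (field names as in Rally's
ExistingCloud deployment plugin; values are examples):

  {
      "type": "ExistingCloud",
      "auth_url": "http://example.net:5000/v2.0/",
      "region_name": "RegionOne",
      "users": [
          {
              "username": "not_an_admin",
              "password": "password",
              "tenant_name": "some_tenant"
          }
      ]
  }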

This patch finishes the work: https://review.openstack.org/#/c/465495/

Thanks to everybody that was involved in this huge effort!


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 24

2017-06-13 Thread Chris Dent


No meeting this week, but some motion on a variety of proposals and
other changes. As usual, this document doesn't report on everything
going on with the Technical Committee. Instead it tries to focus on
those thing which I subjectively believe may have impact on
community members.

I will be taking some time off between now and the first week of
July so there won't be another of these until July 11th unless
someone else chooses to do one.

# New Things

No recently merged changes in policy, plans, or behavior. The office
hours announced in [last
week's](https://anticdent.org/tc-report-23.html) report are
happening and the associated IRC channel, `#openstack-tc` is gaining
members and increased chatter.

# Pending Stuff

## Queens Community Goals

Progress continues on the discussion surrounding community goals for
the Queens cycle. There are enough well defined goals that we'll
have to pick from amongst the several that are available to narrow
it down. I would guess that at some point in the not too distant future
there will be some kind of aggregated presentation to help us all
decide. I would guess that since I just said that, it will likely be me.

## Managing Binary Artifacts

With the addition of a requirement to include architecture in the
metadata associated with the artifact the [Guidelines for managing
releases of binary artifacts](https://review.openstack.org/#/c/469265/)
appears to be close to making everyone happy. This change will be
especially useful for those projects that want to produce containers.

## PostgreSQL

There was some difference of opinion on the next steps on
documenting the state of PostgreSQL, but just in the last couple of
hours today we seem to have reached some agreement to do only those
things on which everyone agrees. [Last
week's](https://anticdent.org/tc-report-23.html) report has a
summary of the discussion that was held in a meeting that week. Dirk
Mueller has taken on the probably not entirely pleasant task of
consolidating the feedback. His latest work can be found at [Declare
plainly the current state of PostgreSQL in
OpenStack](https://review.openstack.org/#/c/427880/). The briefest
of summaries of the difference of opinion is that for a while the
title of that review had "MySQL" where "PostgreSQL" is currently.

## Integrating Feedback on the 2019 TC Vision

The agreed next step on the [Draft technical committee vision for
public feedback](https://review.openstack.org/#/c/453262/) has been
to create a version which integrates the most unambiguous feedback
and edits the content to have more consistent tense, structure and
style. That's now in progress at [Begin integrating vision feedback
and editing for style](https://review.openstack.org/#/c/473620/).
The new version includes a few TODO markers for adding things like a
preamble that explains what's going on. As the document evolves
we'll be simultaneously discussing the ambiguous feedback and
determining what we can use and how that should change the document.

## Top 5 Help Wanted List

The vision document mentions a top ten hit list that will be used in
2019 to help orient contributors to stuff that matters. Here in 2017
the plan is to start smaller with a top 5 list of areas where new
individuals and organizations can make contributions that will have
immediate impact. The hope is that by having a concrete and highly
visible list of stuff that matters people will be encouraged to
participate in the most productive ways available. [Introduce Top 5
help wanted list](https://review.openstack.org/#/c/466684/) provides
the framework for the concept. Once that framework merges anyone is
empowered to propose an item for the list. That's the best part.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [EXTERNAL] Re: [Tripleo] deploy software on Openstack controller on the Overcloud

2017-06-13 Thread Emilien Macchi
On Wed, Jun 7, 2017 at 10:38 AM, Abhishek Kane
 wrote:
> Hi,
>
> On the cinder node, we need to modify cinder.conf. We don’t change any config 
> apart from this. We want to keep the config changes in the heat template, 
> package installation in puppet, and trigger the rest of the operations via 
> Horizon (as it’s done today). We are also trying to get rid of the nova.conf 
> file changes. Once the approach for cinder is sorted, we will get back on this.
>
> If this is the correct approach for cinder, I will raise review requests for the 
> following projects:
> puppet-tripleo: http://paste.openstack.org/show/611697/
> puppet-cinder: http://paste.openstack.org/show/611698/
> tripleo-heat-templates: http://paste.openstack.org/show/611700/
>
> Also, I am not sure which TripleO repos need to be patched for the controller 
> components.
>
> We have decomposed the controller bin installer into idempotent 
> modules/scripts. Now, the installer is not a black box operation:
> https://github.com/abhishek-kane/puppet-veritas-hyperscale
> The inline replies below are w.r.t. this project. The product installer bin 
> currently works in an atomic fashion. One issue we see with puppet is 
> error handling and rollback operations.
>
> Thanks,
> Abhishek
>
> On 6/1/17, 8:41 PM, "Emilien Macchi"  wrote:
>
> On Thu, Jun 1, 2017 at 3:47 PM, Abhishek Kane  
> wrote:
> > Hi Emilien,
> >
> > The bin does following things on controller:
> > 1. Install core HyperScale packages.
>
> Should be done by Puppet, with Package resource.
> Ak> It’s done.
>
> > 2. Start HyperScale API server
>
> Should be done by Puppet, with Service resource.
> AK> It’s done.
>
> > 3. Install UI packages. This will add new files to and modify some 
> existing files of Horizon.
>
> Should be done by Puppet, with Package resource, and maybe also some changes
> in puppet-horizon if you need to change Horizon config.
> Ak> We have got rid of the horizon dependency right now. Our GUI components 
> get installed via separate package.
>
> > 4. Create HyperScale user in mysql db. Create database and dump config. 
> Add permissions of nova and cinder DB to HyperScale user.
>
> We have puppet-openstacklib which already manages DBs, you could
> easily re-use it. Please look at puppet-nova for example to see how
> things work in nova::db::mysql, etc.
> AK> TBD
>
> > 5. Add ceilometer pollsters for additional stats and modify ceilometer 
> files.
>
> puppet-ceilometer I guess. What do you mean by "files"? Config files?
> Ak> We are trying to get rid of this dependency as well. TBD.
>
> > 6. Change OpenStack configuration:
> > a. Create rabbitmq exchanges
>
> puppet-* modules already do it.
> AK> It’s done via script. Do we need to patch any module?

Everything that touches *.conf files of OpenStack services needs to be
done by existing openstack/puppet-* modules.

>
> > b. Create keystone user
>
> puppet-keystone already does it.
> AK> It’s done via script. Do we need to patch keystone module?

No, you'll need to create the right user/roles/endpoints/... in
puppet-tripleo, in a new profile, most probably.
You'll probably need to read a bit about:
https://github.com/openstack/puppet-keystone#setup
Let me know if you need more help on this thing.

>
> > c. Define new flavors
>
> puppet-nova can manage flavors.
> AK> It’s done via script. Do we need to patch nova module?

Same as Keystone.
See: 
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/provision.pp#L7-L13
You'll probably need that thing in your module or in puppet-tripleo
(the composition layer for TripleO).

>
> > d. Create HyperScale volume type and set default volume type to 
> HyperScale in cinder.conf.
>
> we already support multiple backends in tripleo; HyperScale would just be
> a new addition. Please re-use the bits: puppet-cinder and
> puppet-tripleo will need to be patched.
> AK> It’s done via script. Do we need to patch cinder module?

Same here: please look at how other backends are done in TripleO; we have a
bunch of examples in puppet-tripleo.

> > e. Restart openstack’s services
>
> Already done by openstack/puppet-* modules.
> AK> We are trying to get rid of all OpenStack config file changes that we 
> used to do. TBD.
>
> > 7. Configure HyperScale services
>
> Should be done by your module (you can either write a _config
> provider if it's standard ini, or otherwise just do a template that you
> ship in the module, like puppet-horizon).
> AK> It’s done.
>
> > Once the controller is configured, we use HyperScale’s CLI to configure 
> data and compute nodes-
> >
> > On data node (cinder):
> > 1. Install HyperScale data node packages.
>
> Should be done by Puppet, with Package resource.
>
> > 2. Change cinder.conf to add backend and change rpc_backend.
>
> puppet-cinder
>
> > 3. Give the raw data disks and meta disks to HyperScale.

Re: [openstack-dev] [tripleo] Ansible roles repo and how to inject them into the overcloud

2017-06-13 Thread Emilien Macchi
On Wed, Jun 7, 2017 at 10:25 AM, Juan Antonio Osorio
 wrote:
> Hi folks!
>
> I would like to know if there are thoughts about where to put
> tripleo-specific ansible roles.
>
> I've been working lately on a role that would deploy ipsec tunnels for most
> networks in an overcloud [1]. And I think that would be quite useful for
> folks as an alternative to TLS everywhere. However, I don't know in which
> TripleO repository I could put that role. Any ideas?
>
> Also, I know I could call that from a composable service (although I would
> need that to be run after the puppet steps so maybe I'll need an extra
> hook). However, is there any recommended way right now on how to inject
> extra ansible roles into the overcloud nodes? If not, maybe a dedicated hook
> to do this kind of thing would be something useful for others as well.
>
> Any thoughts?

General answer (not only for your module):

- If the module can be used by anyone in the Ansible community (and not
only in TripleO), push it to Ansible Modules Extras:
https://github.com/ansible/ansible-modules-extras
- If it's rejected from Ansible Modules Extras, you can host it under
your own namespace or use redhat-openstack. Example with
https://github.com/redhat-openstack/ansible-pacemaker.
- If it's something TripleO-specific (which means you can run the roles /
module only in a TripleO environment): I would suggest moving it
under the OpenStack namespace, under the TripleO umbrella, to have
consistent governance, CI and release management.

I hope this short answer helped. Please give any feedback.

Thanks,

> [1] https://github.com/JAORMX/tripleo-ipsec
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-06-13 Thread Emilien Macchi
On Tue, Jun 6, 2017 at 8:12 AM, Raoul Scarazzini  wrote:
> On 17/05/2017 04:01, Emilien Macchi wrote:
>> Hey Raoul,
>> Thanks for putting this up in the ML. Replying inline:
>
> Sorry for the long delay between the answers, a lot of things are ongoing.
>
> [...]
>> I've looked at 
>> https://github.com/redhat-openstack/tripleo-quickstart-utils/blob/master/roles/validate-ha/tasks/main.yml
>> and I see it's basically a set of tasks that validates that HA is
>> working well on the overcloud.
>> Despite little things that might be adjusted (calling bash scripts
>> from Ansible), I think this role would be a good fit with
>> tripleo-validations projects, which is "a collection of Ansible
>> playbooks to detect and report potential issues during TripleO
>> deployments".
>
> Moving this stuff into the tripleo-validations project would impose a
> massive change in how HA validation is done today.
> The bash script approach was chosen so that anyone can run the
> validation even without ansible. Anyone can write their own test by
> just adding a script inside the test (and recovery) dir.
> This is the technical reason behind the choice, and today it is doing great
> as it is.
> So I think that until I can reserve a slot to make this "port", it can
> stay where it is today.

It's unclear to me whether or not you're willing to move this bash
script into tripleo-validations.

>>> 2 - stonith-config: to configure STONITH inside an HA env;
> [...]
>> Great, it means we could easily re-use the bits, modulo some technical
>> adjustments.
>
> Since we're moving toward integrating stonith and (hopefully) instance HA
> directly inside tripleo, this can stay where it is today; it would
> be useless to put effort into moving it since soon we will have the
> same directly inside tripleo.

Ok, so forget this one if the problem is solved within TripleO.

>>> There's also a docs related to the Multi Virtual Undercloud project [4]
>>> that explains how to have more than one virtual Undercloud on a physical
>>> machine to manage more environments from the same place.
>> I would suggest moving it to tripleo-docs, so we have a single place for 
>> docs.
>
> Action item for me here: move this document under tripleo-docs. I'm
> already preparing a review for this.
>
> [...]
>> IIRC, everything in this repo could be moved to existing projects in
>> TripleO that are already productized, so little effort would be needed.
> [...]
>> Thanks for bringing this up!
>
> Agreed.
>
> Bye,
>
> --
> Raoul Scarazzini
> ra...@redhat.com



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Ben Nemec



On 06/13/2017 12:28 PM, Paul Belanger wrote:

On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:



On 06/12/2017 06:19 PM, Ronelle Landy wrote:

Greetings,

TripleO OVB check gates are managed by upstream Zuul and executed on
nodes provided by test cloud RH1. RDO Cloud is now available as a test
cloud to be used when running CI jobs. To utilize RDO Cloud, we could
either:

- continue to run from upstream Zuul (and spin up nodes to deploy
the overcloud from RDO Cloud)
- switch the TripleO OVB check gates to run as third party and
manage these jobs from the Zuul instance used by Software Factory

The openstack infra team advocates moving to third party.
The CI team is meeting with Frederic Lepied, Alan Pevec, and other
members of the Software Factory/RDO project infra team to discuss how
this move could be managed.

Note: multinode jobs are not impacted - and will continue to run from
upstream Zuul on nodes provided by nodepool.

Since a move to third party could have significant impact, we are
posting this out to gather feedback and/or concerns that TripleO
developers may have.


I'm +1 on moving to third-party...eventually.  I don't think it should be
done at the same time as we move to a new cloud, which is a major change in
and of itself.  I suppose we could do the third-party transition in parallel
with the existing rh1 jobs, but as one of the people who will probably have
to debug problems in RDO cloud I'd rather keep the number of variables to a
minimum.  Once we're reasonably confident that RDO cloud is stable and
handling our workload well we can transition to third-party and deal with
the problems that will no doubt cause on their own.


This was a goal for tripleo-test-cloud-rh2: to move that to thirdparty CI,
ensure jobs work, then migrate. As you can see, we never actually did that.

My preference would be to make the move to thirdparty now, with
tripleo-test-cloud-rh1.  We now have all the pieces in place for the RDO project to
support this and, in parallel, set up RDO cloud to run jobs from RDO.

If RDO stability is a concern, the move to thirdparty first seems to make the
most sense. This avoids the need to bring RDO cloud online, ensure it works, then
move it again, and re-ensure it works.

Again, the move can be made seamless by turning down some of the capacity in
nodepool.o.o and increasing capacity in nodepool.rdoproject.org. And I am happy to
help work with RDO on making this happen.


I'm good with doing the third-party migration first too.  I'm only 
looking to avoid two concurrent major changes.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Paul Belanger
On Tue, Jun 13, 2017 at 11:12:08AM -0500, Ben Nemec wrote:
> 
> 
> On 06/12/2017 06:19 PM, Ronelle Landy wrote:
> > Greetings,
> > 
> > TripleO OVB check gates are managed by upstream Zuul and executed on
> > nodes provided by test cloud RH1. RDO Cloud is now available as a test
> > cloud to be used when running CI jobs. To utilize RDO Cloud, we could
> > either:
> > 
> > - continue to run from upstream Zuul (and spin up nodes to deploy
> > the overcloud from RDO Cloud)
> > - switch the TripleO OVB check gates to run as third party and
> > manage these jobs from the Zuul instance used by Software Factory
> > 
> > The openstack infra team advocates moving to third party.
> > The CI team is meeting with Frederic Lepied, Alan Pevec, and other
> > members of the Software Factory/RDO project infra team to discuss how
> > this move could be managed.
> > 
> > Note: multinode jobs are not impacted - and will continue to run from
> > upstream Zuul on nodes provided by nodepool.
> > 
> > Since a move to third party could have significant impact, we are
> > posting this out to gather feedback and/or concerns that TripleO
> > developers may have.
> 
> I'm +1 on moving to third-party...eventually.  I don't think it should be
> done at the same time as we move to a new cloud, which is a major change in
> and of itself.  I suppose we could do the third-party transition in parallel
> with the existing rh1 jobs, but as one of the people who will probably have
> to debug problems in RDO cloud I'd rather keep the number of variables to a
> minimum.  Once we're reasonably confident that RDO cloud is stable and
> handling our workload well we can transition to third-party and deal with
> the problems that will no doubt cause on their own.
> 
This was a goal for tripleo-test-cloud-rh2: to move that to thirdparty CI,
ensure jobs work, then migrate. As you can see, we never actually did that.

My preference would be to make the move to thirdparty now, with
tripleo-test-cloud-rh1.  We now have all the pieces in place for the RDO project to
support this and, in parallel, set up RDO cloud to run jobs from RDO.

If RDO stability is a concern, the move to thirdparty first seems to make the
most sense. This avoids the need to bring RDO cloud online, ensure it works, then
move it again, and re-ensure it works.

Again, the move can be made seamless by turning down some of the capacity in
nodepool.o.o and increasing capacity in nodepool.rdoproject.org. And I am happy to
help work with RDO on making this happen.

PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Does anyone rely on PUT /os-services/disable for non-compute services?

2017-06-13 Thread Matt Riedemann

With this change in Pike:

https://review.openstack.org/#/c/442162/

The PUT /os-services/* APIs to enable/disable/force-down a service will 
now only work with nova-compute services. If you're using those to try 
and disable a non-compute service, like nova-scheduler or 
nova-conductor, those APIs will result in a 404 response because there 
won't be host mappings for non-compute services.
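
To make the affected call concrete, here is a minimal sketch of the call in 
question using python-requests; the endpoint and token are placeholders for 
illustration, not values from this thread:

import requests

NOVA = "http://controller:8774/v2.1"  # placeholder compute endpoint
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

# Disable a compute service. With the change described above, pointing
# this at a non-compute binary such as nova-scheduler now yields a 404.
resp = requests.put(
    NOVA + "/os-services/disable",
    headers=HEADERS,
    json={"host": "compute-01", "binary": "nova-compute"})
print(resp.status_code)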


There really never was a good reason to disable/enable non-compute 
services anyway since it wouldn't do anything. The scheduler and API are 
checking the status and forced_down fields to see if instance builds can 
be scheduled to a compute host or if instances can be evacuated from a 
downed compute host. There is nothing that relies on a disabled or 
downed conductor or scheduler service.


I realize the docs aren't justification for API behavior, but the API 
reference has always pointed out that these PUT operations are for 
*compute* services:


https://developer.openstack.org/api-ref/compute/#compute-services-os-services

This has come up while working on an API microversion [1] where we'll 
now expose service uuids in GET calls and take a service uuid in PUT and 
DELETE calls to the os-services API. The uuid is needed to uniquely 
identify a service across cells. I plan on restricting PUT 
/os-services/{service_id} calls to only nova-compute services, and 
return a 400 on any other service like nova-conductor or nova-scheduler, 
since it doesn't make sense to enable/disable/force-down non-compute 
services.


This email is to provide awareness of this change and to also see if 
there are any corner cases in which people are relying on any of this 
behavior that we don't know about - this is your chance to speak up 
before we make the change.


[1] 
https://review.openstack.org/#/c/464280/11/nova/api/openstack/compute/services.py@288


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard][release][help] Anyone interested in adding bug-closing logic ?

2017-06-13 Thread Jeremy Stanley
On 2017-06-13 15:59:16 +0200 (+0200), Thierry Carrez wrote:
[...]
> This functionality is missing for StoryBoard-driven projects. We'd
> need something similar, that detects "Closes-Story:" stanzas and
> calls a new storyboard_add_comment.py script.
[...]

Probably not quite like that. StoryBoard itself considers any story
with only merged or invalid tasks to be closed, but stories simply
have an inferred active/closed status which can't be directly
manipulated.

The Gerrit its-storyboard plugin recognizes two commit message
footers:

Story: #12345
Task: #6789

The behavior it has is that it will leave a story comment for the
first and will auto-adjust the status of the second. The reason for
this design is that the "story" footer acts like our current
"related-bug" footer does with LP, while the "task" footer addresses
the additional steps we get from "closes-bug" today (though since
tasks are globally unique, a future optimization may be to make
"story" optional when "task" is provided and then infer stories from
tasks). The existence of "partial-bug" is seen as an LP-specific
workaround which SB doesn't need since stories can be made up of
multiple tasks.

Since a story can have tasks whose corresponding commits merge in
different releases, what I would personally expect from the release
tooling is to look at task footers and then add a "task note"
indicating the release in which that task reached merged status. If
we want to add a "released" task state in SB (that should be a
fairly trivial addition I think), then it could switch the state on
the task to that at the same time.
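
For illustration, a rough sketch of what that release-side scanning could 
look like. The commit-footer parsing is straightforward; the StoryBoard 
endpoint and payload below are assumptions that would need to be checked 
against the StoryBoard API reference:

import re
import subprocess

import requests

SB_API = "https://storyboard.openstack.org/api/v1"  # assumed base URL
TASK_RE = re.compile(r"^Task:\s*#?(\d+)\s*$", re.MULTILINE)

def tasks_between(prev_tag, new_tag):
    """Collect Task: footer IDs from commit messages between two tags."""
    log = subprocess.check_output(
        ["git", "log", "--format=%B", "%s..%s" % (prev_tag, new_tag)],
        universal_newlines=True)
    return sorted(set(TASK_RE.findall(log)))

def note_release(task_id, version, token):
    # Hypothetical call recording the release on the task; the actual
    # resource path and payload must be verified against the API docs.
    requests.post(
        "%s/tasks/%s/comments" % (SB_API, task_id),
        headers={"Authorization": "Bearer %s" % token},
        json={"content": "This task was released in %s." % version})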
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-13 Thread Alex Schultz
On Tue, Jun 13, 2017 at 6:58 AM, Dan Prince  wrote:
> On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
>> Hey folks,
>>
>> I wanted to bring to your attention that we've merged the change[0]
>> to
>> add a basic set of roles that can be combined to create your own
>> roles_data.yaml as needed.  With this change the roles_data.yaml and
>> roles_data_undercloud.yaml files in THT should not be changed by
>> hand.
>
> In general I like the feature.
>
> I added some comments to your validations [1] patch below. We need
> those validations, but I think we need to carefully consider adding a
> hard dependency on python-tripleoclient simply to have validations in
> tree. Wondering if perhaps a t-h-t-utils library project might be in
> order here to contain routines we use in t-h-t and in higher level
> workflow tools in Mistral and on the CLI? This might also make the
> tools/process-templates.py stuff cleaner as well.
>
> Thoughts?

So my original implementation of the roles stuff included a standalone
script in THT to generate the roles_data.yaml files.  This was -1'd as
realistically the actions for managing this should probably live
within python-tripleoclient.  This made sense to me as that's how the
end user really should be interacting with these things.  Given that
tripleoclient and the UI are the two ways an operator is going to
consume THT, I think there is already an undocumented requirement
that should be there.
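
For readers who have not looked at the change, the generation step itself is 
conceptually simple. A minimal sketch follows (not the actual tripleoclient 
code; it assumes each roles/*.yaml holds a YAML list containing a single 
role definition):

import sys

import yaml

def generate_roles_data(role_files):
    """Concatenate single-role files into one roles_data.yaml document."""
    roles = []
    for path in role_files:
        with open(path) as f:
            roles.extend(yaml.safe_load(f))
    return yaml.safe_dump(roles, default_flow_style=False)

if __name__ == "__main__":
    # e.g. python gen_roles.py roles/Controller.yaml roles/Compute.yaml
    sys.stdout.write(generate_roles_data(sys.argv[1:]))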

An alternative would be to move the roles generation items into
tripleo-common, but then we would have to write two distinct ways of
executing this code: one being tripleoclient and the other being
a standalone script, which basically would have to reinvent the
interface provided by tripleoclient/openstackclient.  Since we're not
allowing folks to dynamically construct the roles_data.yaml as part of
the overcloud deployment yet, I'm not sure we should try and move this
around further unless there's an agreed upon way we want to handle
this.

I think the better work would be to split the
tripleoclient/instack-undercloud dependency which is really where the
problem lies.  We shouldn't be pulling in the world for tripleoclient
if we are just going to operate on only the overcloud.

Thanks,
-Alex

>
> Dan
>
>> Instead, if you have an update to a role, please update the
>> appropriate
>> roles/*.yaml file. I have proposed a change[1] to THT with additional
>> tools to validate that the roles/*.yaml files are updated and that
>> there are no unaccounted for roles_data.yaml changes.  Additionally
>> this change adds in a new tox target to assist in the generation of
>> these basic roles data files that we provide.
>>
>> Ideally I would like to get rid of the roles_data.yaml and
>> roles_data_undercloud.yaml so that the end user doesn't have to
>> generate this file at all but that won't happen this cycle.  In the
>> mean time, additional documentation around how to work with roles has
>> been added to the roles README[2].
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/445687/
>> [1] https://review.openstack.org/#/c/472731/
>> [2] https://github.com/openstack/tripleo-heat-templates/blob/master/r
>> oles/README.rst
>>
>> _
>> _
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
>> cribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Making stack outputs static

2017-06-13 Thread Ben Nemec



On 06/12/2017 03:53 PM, Zane Bitter wrote:

On 12/06/17 16:21, Steven Hardy wrote:

I think we wanted to move to convergence anyway so I don't see a problem
with this.  I know there was some discussion about starting to test with
convergence in tripleo-ci, does anyone know what, if anything,
happened with
that?

There's an experimental job that runs only on the heat repo
(gate-tripleo-ci-centos-7-ovb-nonha-convergence)

But yeah now seems like a good time to get something running more
regularly in tripleo-ci.


+1, there's no reason not to run a non-voting job against tripleo itself
at this point IMHO. That would allow me to start tracking the memory use
over time.


Do you have a strong preference between multinode and ovb?  I would tend to 
think we want this to be ovb since multinode stubs out a bunch of stuff, 
but at the same time ovb capacity is limited.  It's better* now that 
we've basically halved our ovb ci coverage, so we could probably get away 
with adding a job to a specific repo though (t-h-t seems logical for 
this purpose).


*for some definition of "better"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Journey to running functional job with Python 3

2017-06-13 Thread Jakub Libosvar
Hi folks,

we've been tracking the OpenStack common goal for Python 3 in our
Neutron CI meetings. As an outcome we created a list of categorized
failures in the current non-voting job. There are 250 failures that we split
into 14 categories. The list can be found here:

https://etherpad.openstack.org/p/py3-neutron-pike

Consider this email a call to action in case you'd like to
participate in this goal. If you decide to work on one failure category,
please write down your name to the etherpad above.

Thanks to all who will join this effort! :)

Jakub

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Ben Nemec



On 06/12/2017 06:19 PM, Ronelle Landy wrote:

Greetings,

TripleO OVB check gates are managed by upstream Zuul and executed on
nodes provided by test cloud RH1. RDO Cloud is now available as a test
cloud to be used when running CI jobs. To utilize to RDO Cloud, we could
either:

- continue to run from upstream Zuul (and spin up nodes to deploy
the overcloud from RDO Cloud)
- switch the TripleO OVB check gates to run as third party and
manage these jobs from the Zuul instance used by Software Factory

The openstack infra team advocates moving to third party.
The CI team is meeting with Frederic Lepied, Alan Pevec, and other
members of the Software Factory/RDO project infra tream to discuss how
this move could be managed.

Note: multinode jobs are not impacted - and will continue to run from
upstream Zuul on nodes provided by nodepool.

Since a move to third party could have significant impact, we are
posting this out to gather feedback and/or concerns that TripleO
developers may have.


I'm +1 on moving to third-party...eventually.  I don't think it should 
be done at the same time as we move to a new cloud, which is a major 
change in and of itself.  I suppose we could do the third-party 
transition in parallel with the existing rh1 jobs, but as one of the 
people who will probably have to debug problems in RDO cloud I'd rather 
keep the number of variables to a minimum.  Once we're reasonably 
confident that RDO cloud is stable and handling our workload well we can 
transition to third-party and deal with the problems that will no doubt 
cause on their own.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pike m2 has been released

2017-06-13 Thread Ben Nemec



On 06/12/2017 02:10 PM, Emilien Macchi wrote:

On Mon, Jun 12, 2017 at 7:20 PM, Ben Nemec  wrote:



On 06/09/2017 05:39 PM, Emilien Macchi wrote:


On Fri, Jun 9, 2017 at 5:01 PM, Ben Nemec  wrote:


Hmm, I was expecting an instack-undercloud release as part of m2.  Is there
a reason we didn't do that?



You just released a new tag: https://review.openstack.org/#/c/471066/
with a new release model, why would we release m2? In case you want
it, I think we can still do it on Monday.



It was a new tag, but the same commit as m1 so it isn't really a new
release, just a re-tag of the same release we already had.  Part of my
reasoning for doing that was that it would get a new release for m2.


Sorry, I was confused, my bad. Done: https://review.openstack.org/473561


No worries.  At some point we'll get all of the release models synced up 
and things will be less confusing. :-)









On 06/08/2017 03:47 PM, Emilien Macchi wrote:



We have a new release of TripleO, pike milestone 2.
All bugs targeted on Pike-2 have been moved into Pike-3.

I'll take care of moving the blueprints into Pike-3.

Some numbers:
Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
Good progress, 9 Needs Code Review, 7 Implemented
Bugs: 197 Fix Released

Thanks everyone!




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] no ironic-ui meeting today

2017-06-13 Thread Julia Kreger
Greetings my ironic cohorts!

It seems everyone is busy this week and we have nothing really to
discuss during today's ironic-ui meeting. As such, today's meeting is
cancelled.

Talk to everyone next week or on #openstack-ironic.

Thanks!

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] tripleo periodic jobs moving to RDO's software factory and RDO Cloud

2017-06-13 Thread Javier Pena


- Original Message -
> On Mon, Jun 12, 2017 at 05:01:26PM -0400, Wesley Hayutin wrote:
> > Greetings,
> > 
> > I wanted to send out a summary email regarding some work that is still
> > developing and being planned to give interested parties time to comment and
> > prepare for change.
> > 
> > Project:
> > Move tripleo periodic promotion jobs
> > 
> > Goal:
> > Increase the cadence of tripleo-ci periodic promotion jobs in a way
> > that does not impact upstream OpenStack zuul queues and infrastructure.
> > 
> > Next Steps:
> > The dependencies in RDO's instance of software factory are now complete
> > and we should be able to create a new a net new zuul queue in RDO infra for
> > tripleo-periodic jobs.  These jobs will have to run both multinode nodepool
> > and ovb style jobs and utilize RDO-Cloud as the host cloud provider.  The
> > TripleO CI team is looking into moving the TripleO periodic jobs running
> > upstream to run from RDO's software factory instance. This move will allow
> > the CI team more flexibility in managing the periodic jobs and resources to
> > run the jobs more frequently.
> > 
> > TLDR:
> > There is no set date as to when the periodic jobs will move. The move
> > will depend on tenant resource allocation and how easily the periodic jobs
> > can be modified.  This email is to inform the group that changes are being
> > planned to the tripleo periodic workflow and allow time for comment and
> > preparation.
> > 
> > Completed Background Work:
> > After a long discussion with Paul Belanger about increasing the cadence
> > of the promotion jobs [1], Paul explained infra's position: if he doesn't
> > -1/-2 a new pipeline that has the same priority as check jobs, someone else
> > will. To summarize the point, the new pipeline would compete and slow down
> > non-tripleo projects in the gate even when the hardware resources are our
> > own.
> > To avoid slowing down non-tripleo projects Paul has volunteered to help
> > set up the infrastructure in rdoproject to manage the queue (zuul etc.). We
> > would still use rh-openstack-1 / rdocloud for ovb, and could also trigger
> > multinode nodepool jobs.
> > There is one hitch though: currently, rdo-project does not have all the
> > pieces of the puzzle in place to move off of openstack zuul and onto
> > rdoproject zuul. Paul mentioned that nodepool-builder [2] is a hard
> > requirement to be setup in rdoproject before we can proceed here. He
> > mentioned working with the software factory guys to get this setup and
> > running.
> > At this time, I think this issue is blocked until further discussion.
> > [1] https://review.openstack.org/#/c/443964/
> > [2]
> > https://github.com/openstack-infra/nodepool/blob/master/nodepool/builder.py
> > 
> > Thanks
> 
> The first step is landing the nodepool elements in nodepool.rdoproject.org,
> and
> building a centos-7 DIB.  I believe number80 is currently working on this and
> hopefully that could be landed in the next day or so.  Once images have been
> built, it won't be much work to then run a job. RDO already has 3rdparty jobs
> running, we'd to the same with tripleo-ci.
> 

I'm familiar with the 3rd party CI setup in review.rdoproject.org, since I 
maintain it for the rpm-packaging project. Please feel free to ping me if you 
need any help with the setup.

Javier

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-13 Thread Flavio Percoco

On 13/06/17 10:49 +0200, Thierry Carrez wrote:

Quick attempt at a summary of the discussion so far, with my questions:

* Short-term, Glance needs help to stay afloat
 - Sean volunteered to help
 - but glance needs to add core reviewers to get stuff flowing
-> could the VM/BM workgroup also help ? Any progress there ?


+1

Given the current situation, I think we'll take any help we can get. I'd be happy to
add Sean and a couple of other volunteers to the core team until the end of the
cycle. When Pike is out, we can do a status check and see how to proceed.


* Long-term, is Glance still our best bet for the future ?
 - The code base is way more complicated than it should be
 - Difficult to work on necessary refactoring with current resources
 - Glare is a sane base, but achieves more than just image catalog
 - Disk images may be special enough to require their own service
-> Elaborate on "optimizing for their specialness is really important"


I'd like to start working on a more formal proposal for this. The email threads
have covered some interesting points and there have been a good number of
sessions at various summits about this same topic.

There could be another session in Denver but I'd like to see a more formal
document, etherpad, whatever, that explains the different features that would
make the migration worth it and a set of different paths we could explore to
make this migration happen. With this info, I think we will be able to make a
thoughtful decision.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][upgrades] no meeting today

2017-06-13 Thread Ihar Hrachyshka
And sorry for late notice.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-13 Thread Jiří Stránský

On 9.6.2017 16:49, Jiří Stránský wrote:

Hello,

as discussed previously on the list and at the weekly meeting, we'll do
a deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.

This time it may be more of a "broad dive" :) as that's what containers
in TripleO mostly are -- they add new bits into many TripleO
areas/topics (composable services/upgrades, Quickstart/CI, etc.). So
i'll be trying to bring light to the container-specific parts of the
mix, and assume some familiarity with the generic TripleO
concepts/features (e.g. via docs and previous deep dives). Given this
pattern, i'll have slides with links into code. I'll post them online,
so that you can reiterate or examine some code more closely later, in
case you want to.


For folks who haven't had any prior exposure to Docker containers 
whatsoever, i'd recommend giving these links a scan beforehand:


* Docker Overview
  https://docs.docker.com/engine/docker-overview/

* Docker - Get Started pt.1: Orientation and Setup
  https://docs.docker.com/get-started/

* Docker - Get Started pt.2: Containers
  https://docs.docker.com/get-started/part2/

(I'd like us to spend the majority of the time talking about how we use 
containers in TripleO, rather than what containers are.)



Thanks, looking forward to seeing you at the deep dive!

Jirka




Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-13 Thread Juan Antonio Osorio
I would really appreciate it if this is done once we finish moving the
TLS-everywhere job to run over oooq. This is in the works currently.

On Tue, Jun 13, 2017 at 2:19 AM, Ronelle Landy  wrote:

> Greetings,
>
> TripleO OVB check gates are managed by upstream Zuul and executed on nodes
> provided by test cloud RH1. RDO Cloud is now available as a test cloud to
> be used when running CI jobs. To utilize RDO Cloud, we could either:
>
> - continue to run from upstream Zuul (and spin up nodes to deploy the
> overcloud from RDO Cloud)
> - switch the TripleO OVB check gates to run as third party and manage
> these jobs from the Zuul instance used by Software Factory
>
> The openstack infra team advocates moving to third party.
> The CI team is meeting with Frederic Lepied, Alan Pevec, and other members
> of the Software Factory/RDO project infra team to discuss how this move
> could be managed.
>
> Note: multinode jobs are not impacted - and will continue to run from
> upstream Zuul on nodes provided by nodepool.
>
> Since a move to third party could have significant impact, we are posting
> this out to gather feedback and/or concerns that TripleO developers may
> have.
>
>
> Thanks!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] tripleo periodic jobs moving to RDO's software factory and RDO Cloud

2017-06-13 Thread Juan Antonio Osorio
Currently the TLS-everywhere job (fakeha-caserver) runs as a periodic job.
If there's going to be a move, I would really appreciate that it is done
after we move that job to run over oooq, so we don't lose that job.

On Tue, Jun 13, 2017 at 12:01 AM, Wesley Hayutin 
wrote:

> Greetings,
>
> I wanted to send out a summary email regarding some work that is still
> developing and being planned to give interested parties time to comment and
> prepare for change.
>
> Project:
> Move tripleo periodic promotion jobs
>
> Goal:
> Increase the cadence of tripleo-ci periodic promotion jobs in a way
> that does not impact upstream OpenStack zuul queues and infrastructure.
>
> Next Steps:
> The dependencies in RDO's instance of software factory are now
> complete and we should be able to create a net new zuul queue in RDO
> infra for tripleo-periodic jobs.  These jobs will have to run both
> multinode nodepool and ovb style jobs and utilize RDO-Cloud as the host
> cloud provider.  The TripleO CI team is looking into moving the TripleO
> periodic jobs running upstream to run from RDO's software factory instance.
> This move will allow the CI team more flexibility in managing the periodic
> jobs and resources to run the jobs more frequently.
>
> TLDR:
> There is no set date as to when the periodic jobs will move. The move
> will depend on tenant resource allocation and how easily the periodic jobs
> can be modified.  This email is to inform the group that changes are being
> planned to the tripleo periodic workflow and allow time for comment and
> preparation.
>
> Completed Background Work:
> After a long discussion with Paul Belanger about increasing the cadence
> of the promotion jobs [1], Paul explained infra's position: if he doesn't
> -1/-2 a new pipeline that has the same priority as check jobs, someone else
> will. To summarize the point, the new pipeline would compete and slow down
> non-tripleo projects in the gate even when the hardware resources are our
> own.
> To avoid slowing down non-tripleo projects Paul has volunteered to help
> set up the infrastructure in rdoproject to manage the queue (zuul etc.). We
> would still use rh-openstack-1 / rdocloud for ovb, and could also trigger
> multinode nodepool jobs.
> There is one hitch though: currently, rdo-project does not have all the
> pieces of the puzzle in place to move off of openstack zuul and onto
> rdoproject zuul. Paul mentioned that nodepool-builder [2] is a hard
> requirement to be setup in rdoproject before we can proceed here. He
> mentioned working with the software factory guys to get this setup and
> running.
> At this time, I think this issue is blocked until further discussion.
> [1] https://review.openstack.org/#/c/443964/
> [2] https://github.com/openstack-infra/nodepool/blob/master/
> nodepool/builder.py
>
> Thanks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storyboard][release][help] Anyone interested in adding bug-closing logic ?

2017-06-13 Thread Thierry Carrez
Hi everyone,

A long time ago, I wrote some Launchpad/release code so that when a
release is tagged, automation goes through the commit messages of the
changes corresponding to the release and posts a message on any Launchpad
bug mentioned in a "Closes-Bug:" stanza.

This logic lives in:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/release-tools/release.sh#n94

which then calls:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/release-tools/launchpad_add_comment.py

This functionality is missing for StoryBoard-driven projects. We'd need
something similar, that detects "Closes-Story:" stanzas and calls a new
storyboard_add_comment.py script.

With more and more projects being migrated to StoryBoard, would there be
anyone interested in writing that missing link? It should be mostly
straightforward, the only obvious difficulty being storing StoryBoard
credentials for a release comment bot on the infra side.

It sounds like a great way to experiment with the StoryBoard API :)
So... Anyone interested? Please ask me (or dhellmann) on
#openstack-release if you have questions...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-13 Thread Jiří Stránský

On 13.6.2017 09:58, Or Idgar wrote:

Hi,
Can you please send me the meeting invitation?


Hi Or,

i've just added the bluejeans link to the etherpads. I'm not posting it 
here as i want the "sources of truth" for that link to be editable, in 
case we hit some issues with the setup / joining etc.


Jirka




Thanks in advance!

On Fri, Jun 9, 2017 at 5:49 PM, Jiří Stránský  wrote:


Hello,

as discussed previously on the list and at the weekly meeting, we'll do a
deep dive about containers. The time:

Thursday 15th June, 14:00 UTC (the usual time)

Link for attending will be at the deep dives etherpad [1], preliminary
agenda is in another etherpad [2], and i hope i'll be able to record it too.

This time it may be more of a "broad dive" :) as that's what containers in
TripleO mostly are -- they add new bits into many TripleO areas/topics
(composable services/upgrades, Quickstart/CI, etc.). So i'll be trying to
bring light to the container-specific parts of the mix, and assume some
familiarity with the generic TripleO concepts/features (e.g. via docs and
previous deep dives). Given this pattern, i'll have slides with links into
code. I'll post them online, so that you can reiterate or examine some code
more closely later, in case you want to.


Have a good day!

Jirka

[1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
[2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-06-13 Thread Robert Li (baoli)
A quick update on this. As suggested by members of the community, I created a 
nova blueprint 
https://blueprints.launchpad.net/nova/+spec/expose-vlan-trunking, and posted a 
spec for Queens here: https://review.openstack.org/471815. Sean Mooney 
suggested in his review that automatic vlan subinterface configuration in the 
guest should be enabled/disabled on a per-trunk basis. I think that it’s a good 
idea, but doing that requires API and database schema changes. If it’s 
something that the community would like to go with, then I’d think it requires 
an RFE from the neutron side. We need reviews and feedback to move this forward.
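
To make the guest-side consumption concrete, here is a speculative sketch. 
The network_data.json document is served by the metadata service today, but 
the "trunks" key below is purely hypothetical, only showing where per-trunk 
details could surface if the spec lands in this form:

import requests  # assumes this runs inside a guest with metadata access

md = requests.get(
    "http://169.254.169.254/openstack/latest/network_data.json",
    timeout=5).json()
for trunk in md.get("trunks", []):  # hypothetical key, not in today's schema
    print(trunk)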

Thanks,
Robert

On 6/7/17, 12:36 PM, "Robert Li (baoli)"  wrote:

Hi Bence,

Thanks for the pointers. I was aware of this 
https://bugs.launchpad.net/neutron/+bug/1631371, but not the blueprint you 
wrote. 

As suggested by Matt in https://bugs.launchpad.net/nova/+bug/1693535, I 
wrote a blueprint 
https://blueprints.launchpad.net/nova/+spec/expose-vlan-trunking, trying to 
tackle it in a simple manner.

--Robert


On 6/6/17, 7:35 AM, "Bence Romsics"  wrote:

Hi Robert,

I'm late to this thread, but let me add a bit. There was an attempt
at trunk support in nova metadata at the Pike PTG:

https://review.openstack.org/399076

But that was abandoned right after the PTG, because the agreement
seemed to be in favor of putting the trunk details into the upcoming
os-vif object. The os-vif object was supposed to be described in a new
patch set to this change:

https://review.openstack.org/390513

Unfortunately, not much has happened there since. Looking back now
it seems to me that turning the os-vif object into a prerequisite made
this work too big to ever happen. I definitely didn't have the time to
take that on.

But anyway I hope the abandoned spec may provide relevant input to you.

Cheers,
Bence


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [classifier] CCF Meeting

2017-06-13 Thread Duarte Cardoso, Igor
Hi all,

A reminder that we have the Common Classification Framework meeting today, in 
about an hour, at #openstack-meeting.

We'll talk about the status of the spec and implementation.

Agenda: 
https://wiki.openstack.org/wiki/Neutron/CommonClassificationFramework#Discussion_Topic_13_June_2017

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] [barbican] Re: Need your opinion on another issue...

2017-06-13 Thread Paul Bourke

Hi Stan,

Thanks for your input on this.

I'm finding the problem with doing the encryption/decryption on the 
engine side only is that at that point it is too late: the object model 
has already been written into the database at the API layer. I can't 
see how we can change this without moving the data persistence logic into 
the engine, which would be quite a large change.


If we encrypt just user-specified properties on session create, this may 
still work as you outline, but this leads to the problem of how to 
signal to the API which property values are to be encrypted, as metadata 
and contracts are again only parsed once the object model reaches the 
engine.


The only other things I can think of to improve this would be to either use 
some form of caching such as memcached, or drop barbican and do some 
form of local crypto that avoids the round-trip time to barbican. 
Neither seems ideal.


If you have any other ideas on this or if some of the assumptions I've 
made above wrt the engine are incorrect I'd appreciate your thoughts. 
The current approach where we encrypt the entire object model is 
available as a work in progress at https://review.openstack.org/#/c/471772/.
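
For reference, the Castellan round trip involved looks roughly like the 
sketch below; this is a minimal illustration assuming a configured 
[key_manager]/[barbican] section and a valid request context, not the exact 
code from the review:

from castellan import key_manager
from castellan.common.objects import opaque_data
from oslo_config import cfg
from oslo_context import context

manager = key_manager.API(cfg.CONF)
ctxt = context.RequestContext(auth_token="<token>", project_id="<project>")

# Store: one round trip to the key manager per environment/session payload.
secret_id = manager.store(
    ctxt, opaque_data.OpaqueData(b'{"object": "model"}'))

# Retrieve: another round trip. Listing N environments costs N of these,
# which is the performance problem described above.
model = manager.get(ctxt, secret_id).get_encoded()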


Thanks again,
-Paul

On 08/06/17 04:09, Stan Lagun wrote:

Hi Ellen,

If you want my opinion, I wouldn't recommend encrypting the whole object model 
as it can create a lot of issues like this. What I suggest instead is to 
have a special contract
(say $.secureString()) which does the decryption/encryption right in 
the engine, while all calls to the API would result in an object model with some 
encrypted fields. This way
it would be possible to have this contract only for password and similar 
properties. I'd also introduce encryptString()/decryptString() yaql 
functions so that it would be possible
to do it manually (for example, to store sensitive values in attributes, 
which do not have contracts). But this can be done later. With contracts, 
encryption would be completely
transparent to the rest of the code. Also, AFAIK MySQL Enterprise has 
encryption capabilities, so you could make the DB holding the object model 
encrypted as well.


Regards,
Stan


On June 7, 2017 at 4:33:54 PM, Ellen Batbouta (ellen.batbo...@oracle.com) wrote:




Hi Stan,

Thanks for your last mail. Today, I am trying it out. It looks
promising, but I haven't gotten too far due to many interruptions.

Onto another issue that Paul and I would like your opinion on. Paul is
a co-worker of mine and he is working on the murano blueprint,

https://blueprints.launchpad.net/murano/+spec/allow-encrypting-of-muranopl-properties 



and has posted a spec and code review. The reason we need to encrypt
the object model (or at least some of the attributes) is that our
database application contains passwords, and we cannot have these
passwords stored in the database in clear text. We must absolutely fix
this, and for the Pike release.


A short summary (from Paul's code review) is:

This commit introduces optional integration of Murano with Barbican 
(or any other key manager supported via Castellan). When enabled, the 
whole object model will first be encrypted into Barbican, and a 'secret key' 
will be written to the Murano database in its place. The code is 
compatible with mixed (encrypted and unencrypted) databases; however, 
environments/sessions created when encrypt_data is on cannot be read 
if encrypt_data is subsequently turned off. The complete configuration 
required in the api murano.conf to enable this change is as follows:


[murano]

encrypt_data = True

[barbican]
auth_endpoint = :

However, he is running into a performance problem: listing the
environments is slow. The Murano code looks up the object model
multiple times, which results in multiple calls to barbican.


Is it possible to reduce the number of lookups for the object model? 
We will be investigating further.

Just wondering if you have an opinion on this.

Thank you.

Ellen.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][deb-packaging] Stop using track-upstream for deb projects

2017-06-13 Thread Paul Belanger
Greetings,

I'd like to propose we stop using track-upstream in project-config for the
deb-packaging projects. It seems there is no active development on these
projects currently, and by using track-upstream we are wasting both CI
resources and HDD space keeping these projects in sync with their upstream
openstack projects.

Long term, we don't actually want to support this behavior. I propose we stop
doing it today; if somebody steps up to continue the effort of packaging
our releases, we can then progress forward without the need for track-upstream.

Effectively, track-upstream duplicates the size of a project's git repo. For
example, if deb-nova is set up to track upstream nova, we copy all commits and
import them into deb-nova. This puts unneeded pressure on our infrastructure
moving forward; the git overlay option for gbp is likely the solution we could
use.

-- Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-13 Thread Dan Prince
On Fri, 2017-06-09 at 09:24 -0600, Alex Schultz wrote:
> Hey folks,
> 
> I wanted to bring to your attention that we've merged the change[0]
> to
> add a basic set of roles that can be combined to create your own
> roles_data.yaml as needed.  With this change the roles_data.yaml and
> roles_data_undercloud.yaml files in THT should not be changed by
> hand.

In general I like the feature.

I added some comments to your validations patch [1] below. We need
those validations, but I think we need to carefully consider adding a
hard dependency on python-tripleoclient simply to have validations in
tree. I wonder whether a t-h-t-utils library project might be in order
here, to contain routines we use in t-h-t and in higher-level workflow
tools in Mistral and on the CLI. This might also make the
tools/process-templates.py stuff cleaner as well.

Thoughts?

Dan

> Instead if you have an update to a role, please update the
> appropriate
> roles/*.yaml file. I have proposed a change[1] to THT with additional
> tools to validate that the roles/*.yaml files are updated and that
> there are no unaccounted-for roles_data.yaml changes.  Additionally
> this change adds a new tox target to assist in the generation of
> these basic roles data files that we provide.
> 
> Ideally I would like to get rid of the roles_data.yaml and
> roles_data_undercloud.yaml so that the end user doesn't have to
> generate this file at all but that won't happen this cycle.  In the
> mean time, additional documentation around how to work with roles has
> been added to the roles README[2].
> 
> Thanks,
> -Alex
> 
> [0] https://review.openstack.org/#/c/445687/
> [1] https://review.openstack.org/#/c/472731/
> [2] https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for Ceilometer

2017-06-13 Thread Deepthi V V
Adding Isaku's reply here:
" What's the policy of telemetry or ceilometer?

As long as it follows their policy, networking-odl is fine to include 
such drivers."


Gordon, the driver would leverage ceilometer's polling framework. It will run 
as part of the ceilometer-agent-central process and will be added to the 
"network.statistics.drivers" namespace.
I guess we are then good to add it to networking-odl?
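
For reference, the registration would look roughly like this (a hypothetical
setup.cfg fragment for networking-odl; the module path is made up):

  [entry_points]
  network.statistics.drivers =
      opendaylight = networking_odl.ceilometer.statistics.driver:OpenDaylightDriver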

Thanks,
Deepthi

-Original Message-
From: gordon chung [mailto:g...@live.ca] 
Sent: Monday, June 12, 2017 8:36 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] 
[telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for 
Ceilometer



On 12/06/17 04:25 AM, Deepthi V V wrote:
> Hi,
>
>
>
> We plan to propose a ceilometer driver for collecting network 
> statistics information from OpenDaylight. We were wondering whether we could 
> have the driver code reside in the networking-odl project instead of the 
> Ceilometer project. The thought is to keep OpenDaylight-dependent code 
> restricted to the n-odl repo. Please let us know your thoughts on the same.
>

will this run as its own periodic service or do you need to leverage the 
ceilometer polling framework?

ideally, all this code will exist outside of ceilometer and have ceilometer 
consume it. the ceilometer team is far from experts on ODL so i don't think you 
want us reviewing ODL code. we'll be glad to help with integration though.

cheers,

--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-06-13 Thread Harald Jensås
On Thu, 2017-04-20 at 16:11 -0700, Dan Sneddon wrote:
> On 04/20/2017 12:37 AM, Steven Hardy wrote:
> > On Wed, Apr 19, 2017 at 02:51:28PM -0700, Dan Sneddon wrote:
> > > On 04/13/2017 12:01 AM, Rabi Mishra wrote:
> > > > On Thu, Apr 13, 2017 at 2:14 AM, Dan Sneddon <dsned...@redhat.com>
> > > > wrote:
> > > > 
> > > > On 04/12/2017 01:22 PM, Thomas Herve wrote:
> > > > > On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon <dsned...@redhat.com>
> > > > > wrote:
> > > > >> I'm implementing predictable control plane IPs for spine/leaf,
> > > > >> and I'm running into a problem implementing this in the TripleO
> > > > >> Heat templates.
> > > > >>
> > > > >> I have a review in progress [1] that works, but fails on
> > > > >> upgrade, so I'm looking for an alternative approach. I'm trying
> > > > >> to influence the IP address that is selected for overcloud
> > > > >> nodes' Control Plane IP. Here is the current construct:
> > > > >>
> > > > >>   Controller:
> > > > >>     type: OS::TripleO::Server
> > > > >>     metadata:
> > > > >>       os-collect-config:
> > > > >>         command: {get_param: ConfigCommand}
> > > > >>     properties:
> > > > >>       image: {get_param: controllerImage}
> > > > >>       image_update_policy: {get_param: ImageUpdatePolicy}
> > > > >>       flavor: {get_param: OvercloudControlFlavor}
> > > > >>       key_name: {get_param: KeyName}
> > > > >>       networks:
> > > > >>         - network: ctlplane  # <- Here's where the port is created
> > > > >>
> > > > >> If I add fixed_ip: to the networks element at the end of the
> > > > >> above, I can select an IP address from the 'ctlplane' network,
> > > > >> like this:
> > > > >>
> > > > >>   networks:
> > > > >>     - network: ctlplane
> > > > >>       fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> > > > >>
> > > > >> But the problem is that if I pass a blank string to fixed_ip, I
> > > > >> get an error on deployment. This means that the old behavior of
> > > > >> automatically selecting an IP doesn't work.
> > > > >>
> > > > >> I thought I had solved this by passing an external Neutron
> > > > >> port, like this:
> > > > >>
> > > > >>   networks:
> > > > >>     - network: ctlplane
> > > > >>       port: {get_attr: [ControlPlanePort, port_id]}
> > > > >>
> > > > >> Which works for deployments, but that fails on upgrades, since
> > > > >> the original port was created as part of the Nova::Server
> > > > >> resource, instead of being an external resource.
> > > > >
> > > > > Can you detail how it fails? I was under the impression we never
> > > > > replaced servers no matter what (or we try to do that, at least).
> > > > > Is the issue that your new port is not the correct one?
> > > > >
> > > > >> I'm now looking for a way to use Heat conditionals to apply the
> > > > >> fixed_ip only if the value is not unset. Looking at the
> > > > >> intrinsic functions [2], I don't see a way to do this. Is what
> > > > >> I'm trying to do with Heat possible?
> > > > >
> > > > > You should be able to write something like that (not tested):
> > > > >
> > > > > networks:
> > > > >   if:
> > > > >     - <condition>
> > > > >     - network: ctlplane
> > > > >       fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> > > > >     - network: ctlplane
> > > > >
> > > > > The question is how to define your condition. Maybe:
> > > > >
> > > > > conditions:
> > > > >   fixed_ip_condition:
> > > > >     not:
> > > > >       equals:
> > > > >         - {get_attr: [ControlPlanePort, ip_address]}
> > > > >         - ''
> > > > >
> > > > > To get back to the problem you stated first.
> > > > >
> > > > >
> > > > >> Another option I'm exploring is conditionally applying
> > > > >> resources. It appears that would require duplicating the entire
> > > > >> TripleO::Server stanza in *-role.yaml so that there is one that
> > > > >> uses fixed_ip and one that does not. Which one is applied would
> > > > >> be based on a condition that tested whether fixed_ip was blank
> > > > >> or not. The downside of that is that it would make the role
> > > > >> definition confusing because there would be a large resource
> > > > >> that was implemented twice, with only one line difference
> > > > >> between them.
> > > > >
> > > > > You can define properties with conditions
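
To put Thomas's two fragments together: note that Heat condition definitions
can only reference parameters (get_param), not resource attributes, so an
untested sketch of this approach would key the condition off a parameter
(ControlPlaneFixedIP is a made-up name):

  parameters:
    ControlPlaneFixedIP:
      type: string
      default: ''

  conditions:
    fixed_ip_condition:
      not:
        equals: [{get_param: ControlPlaneFixedIP}, '']

  resources:
    Controller:
      type: OS::TripleO::Server
      properties:
        networks:
          if:
            - fixed_ip_condition
            - - network: ctlplane
                fixed_ip: {get_param: ControlPlaneFixedIP}
            - - network: ctlplane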

Re: [openstack-dev] [ironic] the driver composition and breaking changes to the supported interfaces

2017-06-13 Thread tie...@vn.fujitsu.com
Hi,

Dmitry: Thanks for bringing this issue into discussion.

For the iRMC patch, I would vote for the first option, as it is commonly used. 
Overall, though, I think it would be great if ironic could provide a mechanism 
like the second one; but as you said, that is technically challenging.

Regards
TienDC

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com] 
Sent: Monday, June 12, 2017 20:44
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic] the driver composition and breaking changes 
to the supported interfaces

Hi folks!

I want to raise something we apparently haven't thought about when working on 
the driver composition reform.

For example, an iRMC patch [0] replaces 'pxe' boot with 'irmc-pxe'. This is the 
correct thing to do in this case: they're extending PXE boot, and need a new 
class and a new entry point. We can expect more changes like this coming.

However, this change is breaking for users. Imagine a node explicitly created 
with:

  openstack baremetal node create --driver irmc --boot-interface pxe

On upgrade to Pike, such nodes will break and will require manual intervention 
to get them working again:

  openstack baremetal node set <node> --boot-interface irmc-pxe

What can we do about it? I see the following possibilities:

1. Keep the "pxe" interface supported and issue a deprecation warning (see the 
sketch at the end of this message). This is relatively easy, but I'm not sure 
it's always possible to keep the old interface working.

2. Change the driver composition reform to somehow allow the same names for 
different interfaces. e.g. "pxe" would point to PXEBoot for IPMI, but to 
IRMCPXEBoot for iRMC. This is technically challenging.

3. Only do a release note, and allow the breaking change to happen.

WDYT?

[0] https://review.openstack.org/#/c/416403
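
For option 1, the entry-point side is simple; the real question is whether the
generic implementation keeps working on iRMC hardware. An illustrative
setup.cfg fragment (the class paths are approximate):

  [entry_points]
  ironic.hardware.interfaces.boot =
      pxe = ironic.drivers.modules.pxe:PXEBoot
      irmc-pxe = ironic.drivers.modules.irmc.boot:IRMCPXEBoot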

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-13 Thread Dmitry Tantsur

On 06/13/2017 12:00 AM, Alex Schultz wrote:

On Mon, Jun 12, 2017 at 2:55 AM, Dmitry Tantsur  wrote:

On 06/09/2017 05:24 PM, Alex Schultz wrote:


Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted-for roles_data.yaml changes.  Additionally
this change adds a new tox target to assist in the generation of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all but that won't happen this cycle.  In the
mean time, additional documentation around how to work with roles has
been added to the roles README[2].



Hi, this is awesome! Do we expect more example roles to be added? E.g. I
could add a role for a reference Ironic Conductor node.



Yes. My expectation is that as we come up with new roles for supported
deployment types, we add them to the THT/roles directory so end users
can also use them.  The base set came from some work we did during the
Ocata cycle to define 3 base architectures:

3 controller, 3 compute, 1 ceph (ha)
1 controller, 1 compute, 1 ceph (nonha)
3 controller, 3 database, 3 messaging, 2 networker, 1 compute, 1 ceph (advanced)

Feel free to propose additional roles if you have architectures you'd
like to make reusable; one way to assemble them is sketched below.
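
For illustration: each roles/*.yaml file holds a single role definition as a
one-item list, so a custom roles_data.yaml (e.g. for the nonha layout above)
can be assembled by simple concatenation. An untested sketch, with file names
as they appear in the THT roles directory:

  cat roles/Controller.yaml roles/Compute.yaml roles/CephStorage.yaml \
      > roles_data.yaml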


Ok, here we go: https://review.openstack.org/473788.

I guess it's expected that such deployments should still be done with `-e 
environments/services/ironic.yaml`, right?




Thanks,
-Alex




Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2]
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Bug Triage and Integrated gate issue categorisation

2017-06-13 Thread Andrea Frittoli
Hello folks,

The QA team triage duty [0] has no one assigned for the rest
of the cycle (except for one week); we could really use some volunteers.

I'm also introducing a new rota for integrated-gate issue categorisation [1].
The idea is to stay on top of categorising issues, so that bugs are
filed and identified in reviews, and can be triaged and solved by the
relevant teams.

If you're new to OpenStack QA / CI this is a great opportunity to get
started, I'm happy to offer mentorship to anyone who would like to give
this a try.

Just add your IRC nick (and name if you want) to one of the etherpads
and reach out to the QA team in #openstack-qa if you have any questions
and/or need guidance.

Thank you!

Andrea (andreaf)

[0] https://etherpad.openstack.org/p/pike-qa-bug-triage
[1] https://etherpad.openstack.org/p/pike-gate-issue-categotisation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] migration to storyboard

2017-06-13 Thread Alexander Chadin
Hi Watchers,

I’ve prepared an etherpad doc [1] to collect your opinions about the Storyboard 
migration.
Feel free to fill it in.

[1]: https://etherpad.openstack.org/p/watcher-storyboard

Best Regards,
_
Alexander Chadin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-13 Thread Thierry Carrez
Quick attempt at a summary of the discussion so far, with my questions:

* Short-term, Glance needs help to stay afloat
  - Sean volunteered to help
  - but glance needs to add core reviewers to get stuff flowing
-> could the VM/BM workgroup also help? Any progress there?

* Long-term, is Glance still our best bet for the future?
  - The code base is way more complicated than it should be
  - Difficult to work on necessary refactoring with current resources
  - Glare is a sane base, but achieves more than just image catalog
  - Disk images may be special enough to require their own service
-> Elaborate on "optimizing for their specialness is really important"

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Containers Deep Dive - 15th June

2017-06-13 Thread Or Idgar
Hi,
Can you please send me the meeting invitation?


Thanks in advance!

On Fri, Jun 9, 2017 at 5:49 PM, Jiří Stránský  wrote:

> Hello,
>
> as discussed previously on the list and at the weekly meeting, we'll do a
> deep dive about containers. The time:
>
> Thursday 15th June, 14:00 UTC (the usual time)
>
> Link for attending will be at the deep dives etherpad [1], preliminary
> agenda is in another etherpad [2], and i hope i'll be able to record it too.
>
> This time it may be more of a "broad dive" :) as that's what containers in
> TripleO mostly are -- they add new bits into many TripleO areas/topics
> (composable services/upgrades, Quickstart/CI, etc.). So i'll be trying to
> bring light to the container-specific parts of the mix, and assume some
> familiarity with the generic TripleO concepts/features (e.g. via docs and
> previous deep dives). Given this pattern, i'll have slides with links into
> code. I'll post them online, so that you can reiterate or examine some code
> more closely later, in case you want to.
>
>
> Have a good day!
>
> Jirka
>
> [1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
> [2] https://etherpad.openstack.org/p/tripleo-deep-dive-containers
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Or Idgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] swift3 Plugin Development

2017-06-13 Thread Venkata R Edara
Thanks for your reply. The reason we are looking into swift3 is that we have 
already integrated Gluster with openstack-swift, which provides an object 
store (https://github.com/gluster/gluster-swift). Integration with RGW could 
be a long-term solution, but in the short term we would like to have ACL 
support.


-Venkat

On 06/10/2017 01:23 AM, Pete Zaitcev wrote:

On Fri, 9 Jun 2017 10:37:15 +0530
Niels de Vos  wrote:


we are looking for an S3 plugin with ACLs so that we can integrate gluster
with that.

Did you look into porting Ceph RGW on top of Gluster?

This is one of the longer term options that we have under consideration.
I am very interested in your reasons to suggest it, care to elaborate a
little?

RGW seems like the least worst starting point in terms of the end
result you're likely to get.

swift3 does a good job for us in OpenStack Swift, providing a degree
of compatibility with S3. When Kota et al. took over from Tomo, they revived
the development successfully. However, it remains fundamentally limited in
what it does, and its main function is to massage S3 to fit it on top
of Swift. If you place it in front of Gluster, you're saddled with
this fundamental incompatibility, unless you fork swift3 and rework it
beyond recognition.

In addition, surely you realize that swift3 is only a shim and you need
to have an object store to back it. Do you even have one in Gluster?

Fedora used to ship a self-contained S3 store, "tabled", which unlike swift3
is complete. It's written in C, so it may be better compatible with Gluster's
development environment. However, it has been out of development for years and
it only supports canned ACLs. You aren't getting the full ACLs that you're
after with it.

The RGW gives you all that. It's well-compatible with S3, because S3 is
its native API (with the Swift API grafted on). Yehuda and crew maintain
good compatibility. Yes, it's in C++, but the dialect is reasonable.
The worst downside is, yes, it's wedded to Ceph's RADOS and you need
a major surgery to place it on top of Gluster. Nonetheless, it seems like
a better defined task to me than trying to maintain your own webserver,
which you must do if you select swift3.

There are still some parts of RGW which will give you trouble. In particular,
it uses loadable classes, which run in the context of Ceph OSD. There's no
place in Gluster to run them. You may have to drag parts of OSD into the
project. But I didn't look closely enough to determine the feasibility.

In your shoes, I'd talk to Yehuda about this. He knows the problem domain
exceptionally well and will give you good advice, even though you're a
competitor in Open Source in general. Kinda like I do now :-)

Cheers,
-- Pete



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Mehdi Abaakouk

Hi,

On Tue, Jun 13, 2017 at 01:53:02PM +0700, Renat Akhmerov wrote:

Can you please clarify for how long you plan to keep ‘blocking executor’ 
deprecated before complete removal?


Like all deprecations: we just did it, so you have two cycles; we will
remove it in Rocky.

But as I said, this executor has never really been tested. Even though it's
currently the default, that default was chosen so as not to default to
'eventlet' or 'threading', because this is an application choice and not a
lib one. But this (bad) default and the poor log message haven't helped
to ensure applications make the choice. That's why the blocking executor is
now deprecated and all 'executor' parameters in oslo.messaging will become
mandatory.
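
For application authors, making the choice explicit is a one-line change at
server creation time. A minimal sketch (the topic, server name and endpoint
are made up):

  from oslo_config import cfg
  import oslo_messaging

  class DemoEndpoint(object):
      def ping(self, ctxt, arg):
          return arg

  transport = oslo_messaging.get_transport(cfg.CONF)
  target = oslo_messaging.Target(topic='demo', server='server-1')
  # pass the executor explicitly instead of relying on the deprecated
  # 'blocking' default
  server = oslo_messaging.get_rpc_server(
      transport, target, [DemoEndpoint()], executor='threading')
  server.start()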

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Renat Akhmerov
Ok, I think I already got my question answered: 
https://docs.openstack.org/releasenotes/oslo.messaging/unreleased.html#deprecation-notes

Thanks

Renat Akhmerov
@Nokia

On 13 Jun 2017, 13:59 +0700, Renat Akhmerov , wrote:
> Hi Oslo team,
>
> Can you please clarify for how long you plan to keep ‘blocking executor’ 
> deprecated before complete removal?
>
> We have to use it in Mistral for the time being. We plan to move away from 
> using it but the transition may take significant time, not this cycle for 
> sure. So we got worried when we heard the news that it’s now been deprecated.
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][mistral] For how long is blocking executor deprecated?

2017-06-13 Thread Renat Akhmerov
Hi Oslo team,

Can you please clarify for how long you plan to keep ‘blocking executor’ 
deprecated before complete removal?

We have to use it in Mistral for the time being. We plan to move away from 
using it but the transition may take significant time, not this cycle for sure. 
So we got worried when we heard the news that it’s now been deprecated.


Thanks

Renat Akhmerov
@Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev