[openstack-dev] [tripleo] need help with tempest failures for Bug 1731063

2017-11-17 Thread Alex Schultz
Hello everyone,

Bug 1731063[0] has been kicking around for almost 10 days now. We're
now seeing something similar to it on scenario003 and will be
switching it to non-voting[1] as soon as the v3 cut over finishes.
This removes additional test coverage, and unless we start seeing
some movement on these critical bugs, I do not think we should continue
merging additional features until they get resolved.  Since we
do not see this bug in Pike, this appears to be a regression and the
most recent review of the logs seems to point to neutron. If some
folks from the networking squad could take a look at the logs and help
that would be great.

Between this one and Bug 1731032[2], CI is randomly unhappy which is
not helping anyone get stuff landed.

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1731063
[1] https://review.openstack.org/521205
[2] https://bugs.launchpad.net/tripleo/+bug/1731032

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][placement] resource providers update 41.75

2017-11-17 Thread Eric Fried
Folks are still trickling back from the summit, so things haven't ground
fully back into action yet.  But some progress has been made:

GET /allocation_candidates
==
The refactor series starting at [1] has started to merge.  The series
has included a number of useful tests, revealed a number of juicy bugs,
and even fixed some of them.

The same series is also starting to accommodate traits, which can now be
fed into AllocationCandidates.get_by_requests() since [2].

Once nested is accounted for (code still not started), we'll activate
these new goodies in the placement API via [3].

[1] https://review.openstack.org/#/c/516778/
[2] https://review.openstack.org/#/c/514092/
[3] https://review.openstack.org/#/c/517757/

Nested Resource Providers
=
The bottom of this series [4] hasn't moved, other than rebases.  Please
let's get some core eyes on this.

However, the top of the series is seeing some new action starting at
[5]: we're in the process of implementing a
ComputeDriver.update_provider_tree method that will allow a virt driver
to manage the nested provider structure and inventory associated with
its compute node(s) - something like [6] and [7].

[4] https://review.openstack.org/#/c/377138/
[5] https://review.openstack.org/#/c/520243/
[6] https://review.openstack.org/#/c/520313/
[7] https://review.openstack.org/#/c/521041/

Alternate Hosts
===
A little progress has been made on this series [8] but it's still pinned
pending full buy-in.

[8] https://review.openstack.org/#/c/499239/

=
Otherwise not much has changed in the last couple of weeks; see update
41 [9] for a 'fresher.

[9]
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124233.html

EOM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [Openstack-operators] [QA] Proposal for a QA SIG

2017-11-17 Thread Rochelle Grober
First off, let me say I think this is a tremendous idea.  And, it's perfect for 
the SIG concept.

Next, see inline:

Thierry Carrez wrote:
> Andrea Frittoli wrote:
> > [...]
> > during the last summit in Sydney we discussed the possibility of
> > creating an OpenStack quality assurance special interest group (OpenStack
> QA SIG).
> > The proposal was discussed during the QA feedback session [0] and it
> > received positive feedback there; I would like to bring now the
> > proposal to a larger audience via the SIG, dev and operators mailing
> > lists.
> > [...]
> 
> I think this goes with the current trends of re-centering upstream "project
> teams" on the production of software, while using SIGs as communities of
> practice (beyond the governance boundaries), even if they happen to
> produce (some) software as the result of their work.
> 
> One question I have is whether we'd need to keep the "QA" project team at
> all. Personally I think it would create confusion to keep it around, for no 
> gain.
> SIGs code contributors get voting rights for the TC anyway, and SIGs are free
> to ask for space at the PTG... so there is really no reason (imho) to keep a
> "QA" project team in parallel to the SIG ?

Well, you can get rid of the "QA Project Team" but you would then need to 
replace it with something like the Tempest Project, or perhaps the Test 
Project.  You still need a PTL and cores to write, review and merge tempest 
fixes and upgrades, along with some of the tests.  The Interop Guideline tests 
are part of Tempest because being there provides oversight on the style and 
quality of the code of those tests.  We still need that.

--Rocky

> In the same vein we are looking into turning the Security project team into a
> SIG, and could consider turning other non-purely-upstream teams (like I18n)
> in the future.
> 
> --
> Thierry Carrez (ttx)
> 
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Monty Taylor

On 11/17/2017 10:51 AM, Andreas Jaeger wrote:

On 2017-11-17 17:27, Monty Taylor wrote:

Hey everybody!

tl;dr - We'd like to start publishing the server projects to PyPI

Background
==

The move to Zuul v3 has highlighted an odd situation we're in with some
of our projects, most notably neutron and horizon plugin projects.
Namely, those plugins need to depend on neutron and horizon, but we do
not publish neutron and horizon releases to PyPI.

There are a couple of reasons why we haven't historically published
server projects. We were concerned that doing so would 'encourage'
installation of the services from pip. Also that, because it is pip and
not dpkg/rpm/emerge there's no chance that 'pip install nova' will
result in a functioning service. (It's worth noting that pip and PyPI in
general were in a much different state 6 years ago than today - thanks
dstufft for all the great work!)

I think it's safe to say that the 'ship has sailed' on those two issues
... which is to say we're already well past the point of no return on
either issue. pip is used or not used by deployers as they see fit, and
I don't think 'pip install nova' not producing a working nova is
going to actually surprise anyone.

Moving Forward
==

Before we can do anything, we need to update some of the release
validation scripts to allow this (they currently double-check that we're
doing the right things with the right type of project):
https://review.openstack.org/#/c/521115/

Once that's done, rather than doing a big-bang transition, the plan is
to move server projects from using the release-openstack-server template
to using the publish-to-pypi template as they are ready.

This should simplify a great many things for horizon and neutron - and
allow us to get rid of the horizon and neutron specific jobs and
templates. There are a few gotchas we'll need to work through - notably
there is already another project on PyPI named "Keystone" - although it
seems not to be widely used. I've reached out to the author to see if he's
willing to relinquish the name. If he's not, we'll have to get creative.


One question on this: right now the dashboard and neutron plugins test
against current git head. Wouldn't installing from pypi mean that they
test against an older stable version?


Yes - by default - so we actually probably can't get rid of all of the 
variants. However, the tox-siblings support in the tox jobs would allow 
us to remove the zuul-cloner and other magic from the install_tox.sh 
scripts in the repos and just let things work normally.


Incidentally, it would also allow for the plugin projects to decide if 
they wanted to test against latest stable or master - or both.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Jeremy Stanley
On 2017-11-17 17:51:36 +0100 (+0100), Andreas Jaeger wrote:
[...]
> One question on this: right now the dashboard and neutron plugins test
> against current git head. Wouldn't installing from pypi mean that they
> test against an older stable version?

I brought this up in #openstack-release as well, and Monty reminded
me that the tox-siblings role attempts to address that for any
repository declared as a required-project for a given job. That
said, it's only a solution in CI automation so until we have a good
story around running job definitions under Ansible in local
development environments, I don't think we can completely get rid of
the associated pip wrapper scripts in those various projects (but we
can at least find a way to stop running them in the CI system in the
meantime).
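
To make the tox-siblings behaviour just described concrete: it keys off the
repositories listed under a job's required-projects, installing those from the
checked-out source instead of the PyPI release. A rough, hypothetical sketch of
what a plugin's job definition might look like (job and parent names are made
up for illustration):

  - job:
      name: networking-foo-tox-py27   # hypothetical plugin job
      parent: openstack-tox-py27      # assumed parent tox job
      required-projects:
        # tox-siblings installs this from source rather than from PyPI
        # whenever it also appears in the tox environment's dependencies
        - openstack/neutron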
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Jeremy Stanley
On 2017-11-17 15:55:33 + (+), Tristan Cacqueray wrote:
[...]
> We had similar issues[0][1] in the past where we already drew the line
> that it is the client's responsibility to filter the API response.
> 
> Thus I agree with Jeremy: perhaps it is not ideal, but at least it
> doesn't give a false sense of security if^Wwhen the server-side
> filtering lets unpredicted malicious content through.
[...]

To be clear, I don't object to making whatever developers and API
SIG members feel are sane filtering choices service-side, it's just
that I think the VMT will consider those security hardening patches
and not vulnerability fixes. If Horizon or any other consuming
application fails to properly sanitize data before performing
potentially unsafe actions with it, that's a vulnerability and would
generally warrant an official security advisory.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] priorities for the week (11/17-11/23)

2017-11-17 Thread Brian Rosmaita
Hello Glancers,

Due to the Thanksgiving holidays in the USA next week, we are
tentatively cancelling the meeting on November 23.  However, most of
our developers these days are outside the USA, so if someone has a
pressing issue and puts it on the meeting agenda before the
usual deadline (24 hours before the meeting, so before 14:00 on
Wednesday, Nov 22), we will hold the meeting.  Erno has volunteered to
keep an eye on the agenda and send out an email to the dev list if the
November 23 meeting will be held.  (So the default setting is: no
meeting next week.)

Patches needing review:

https://review.openstack.org/#/c/510449/
https://review.openstack.org/#/c/520945/
https://review.openstack.org/#/c/510424/
https://review.openstack.org/#/c/519514/
https://review.openstack.org/#/c/520644/

Except for the first patch, they're small changes that will be quick
to review and good to get merged.

cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Andreas Jaeger
On 2017-11-17 17:27, Monty Taylor wrote:
> Hey everybody!
> 
> tl;dr - We'd like to start publishing the server projects to PyPI
> 
> Background
> ==
> 
> The move to Zuul v3 has highlighted an odd situation we're in with some
> of our projects, most notably neutron and horizon plugin projects.
> Namely, those plugins need to depend on neutron and horizon, but we do
> not publish neutron and horizon releases to PyPI.
> 
> There are a couple of reasons why we haven't historically published
> server projects. We were concerned that doing so would 'encourage'
> installation of the services from pip. Also that, because it is pip and
> not dpkg/rpm/emerge there's no chance that 'pip install nova' will
> result in a functioning service. (It's worth noting that pip and PyPI in
> general were in a much different state 6 years ago than today - thanks
> dstufft for all the great work!)
> 
> I think it's safe to say that the 'ship has sailed' on those two issues
> ... which is to say we're already well past the point of no return on
> either issue. pip is used or not used by deployers as they see fit, and
> I don't think 'pip install nova' not producing a working nova is
> going to actually surprise anyone.
> 
> Moving Forward
> ==
> 
> Before we can do anything, we need to update some of the release
> validation scripts to allow this (they currently double-check that we're
> doing the right things with the right type of project):
> https://review.openstack.org/#/c/521115/
> 
> Once that's done, rather than doing a big-bang transition, the plan is
> to move server projects from using the release-openstack-server template
> to using the publish-to-pypi template as they are ready.
> 
> This should simplify a great many things for horizon and neutron - and
> allow us to get rid of the horizon and neutron specific jobs and
> templates. There are a few gotchas we'll need to work through - notably
> there is already another project on PyPI named "Keystone" - although it
> seems not to be widely used. I've reached out to the author to see if he's
> willing to relinquish the name. If he's not, we'll have to get creative.

One question on this: right now the dashboard and neutron plugins test
against current git head. Wouldn't installing from pypi mean that they
test against an older stable version?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release][neutron][horizon] Publishing server projects to PyPI

2017-11-17 Thread Monty Taylor

Hey everybody!

tl;dr - We'd like to start publishing the server projects to PyPI

Background
==

The move to Zuul v3 has highlighted an odd situation we're in with some 
of our projects, most notably neutron and horizon plugin projects. 
Namely, those plugins need to depend on neutron and horizon, but we do 
not publish neutron and horizon releases to PyPI.


There are a couple of reasons why we haven't historically published 
server projects. We were concerned that doing so would 'encourage' 
installation of the services from pip. Also that, because it is pip and 
not dpkg/rpm/emerge there's no chance that 'pip install nova' will 
result in a functioning service. (It's worth noting that pip and PyPI in 
general were in a much different state 6 years ago than today - thanks 
dstufft for all the great work!)


I think it's safe to say that the 'ship has sailed' on those two issues
... which is to say we're already well past the point of no return on
either issue. pip is used or not used by deployers as they see fit, and
I don't think 'pip install nova' not producing a working nova is
going to actually surprise anyone.


Moving Forward
==

Before we can do anything, we need to update some of the release 
validation scripts to allow this (they currently double-check that we're 
doing the right things with the right type of project): 
https://review.openstack.org/#/c/521115/


Once that's done, rather than doing a big-bang transition, the plan is 
to move server projects from using the release-openstack-server template 
to using the publish-to-pypi template as they are ready.
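
For anyone unfamiliar with the mechanics, the switch amounts to swapping
templates in the project's entry in the Zuul configuration. A rough,
hypothetical sketch (the project name is only an example, and any other
templates the project carries are omitted):

  - project:
      name: openstack/nova        # example project; each server project
      templates:                  # would be switched over individually
        # previously: release-openstack-server
        - publish-to-pypi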


This should simplify a great many things for horizon and neutron - and 
allow us to get rid of the horizon and neutron specific jobs and 
templates. There are a few gotchas we'll need to work through - notably
there is already another project on PyPI named "Keystone" - although it
seems not to be widely used. I've reached out to the author to see if he's
willing to relinquish the name. If he's not, we'll have to get creative.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nominate akrivoka for tripleo-validations core

2017-11-17 Thread Julie Pichon
On 6 November 2017 at 14:32, Honza Pokorny  wrote:
> I would like to nominate Ana Krivokapić (akrivoka) for the core team for
> tripleo-validations.  She has really stepped up her game on that project
> in terms of helpful reviews, and great patches.
>
> With Ana's help as a core, we can get more done, and innovate faster.
>
> If there are no objections within a week, we'll proceed with adding Ana
> to the team.

It's been over a week, with no objections. Ana has now been added to
the tripleo-core group - welcome!

Julie

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Tristan Cacqueray

On November 17, 2017 1:56 pm, Jeremy Stanley wrote:

On 2017-11-17 12:47:34 + (+), Luke Hinds wrote:

This will need the VMT's attention, so please raise it as an issue on
Launchpad and we can tag it for the VMT members as a possible OSSA.

[...]

Ugh, looks like someone split this thread, and I already replied to
the original thread. In short, I don't think it's safe to assume we
know what's going to be safe for different frontends and consuming
applications, so trying to play whack-a-mole with various unsafe
sequences at the API side puts the responsibility for safe filtering
in the wrong place and can lead to lax measures in the software
which should actually be taking on that responsibility.

Of course, I'm just one voice. Others on the VMT certainly might
disagree with my opinion on this.


We had similar issues[0][1] in the past where we already drew the line
that it is the client's responsibility to filter the API response.

Thus I agree with Jeremy: perhaps it is not ideal, but at least it
doesn't give a false sense of security if^Wwhen the server-side
filtering lets unpredicted malicious content through.

-Tristan

[0] https://launchpad.net/bugs/1486565
[1] https://launchpad.net/bugs/1649248


pgpn3umvdd5Hj.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [Openstack-operators] [QA] Proposal for a QA SIG

2017-11-17 Thread Andrea Frittoli
On Fri, Nov 17, 2017 at 12:33 PM Thierry Carrez 
wrote:

> Andrea Frittoli wrote:
> > [...]
> > during the last summit in Sydney we discussed the possibility of
> creating an
> > OpenStack quality assurance special interest group (OpenStack QA SIG).
> > The proposal was discussed during the QA feedback session [0] and it
> > received
> > positive feedback there; I would like to bring now the proposal to a
> larger
> > audience via the SIG, dev and operators mailing lists.
> > [...]
>
> I think this goes with the current trends of re-centering upstream
> "project teams" on the production of software, while using SIGs as
> communities of practice (beyond the governance boundaries), even if they
> happen to produce (some) software as the result of their work.
>
> One question I have is whether we'd need to keep the "QA" project team
> at all. Personally I think it would create confusion to keep it around,
> for no gain. SIGs code contributors get voting rights for the TC anyway,
> and SIGs are free to ask for space at the PTG... so there is really no
> reason (imho) to keep a "QA" project team in parallel to the SIG ?
>

That is a possibility indeed, but I think co-existence will be the case for
a bit at least - we may decide to drop the QA program eventually depending
on how the experience with the SIG goes.


>
> In the same vein we are looking into turning the Security project team
> into a SIG, and could consider turning other non-purely-upstream teams
> (like I18n) in the future.
>
> --
> Thierry Carrez (ttx)
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] IPSEC integration

2017-11-17 Thread James Slagle
On Fri, Nov 17, 2017 at 10:27 AM, Bogdan Dobrelya  wrote:
> On 11/16/17 8:01 AM, Juan Antonio Osorio wrote:
>>
>> Hello folks!
>>
>> A few months ago Dan Sneddon and I worked on an ansible role that would
>> enable IPSEC for the overcloud [1]. Currently, one would run it as an extra
>> step after the overcloud deployment. But, I would like to start integrating
>> it to TripleO itself, making it another option, probably as a composable
>> service.
>>
>> For this, I'm planning to move the tripleo-ipsec ansible role repository
>> under the TripleO umbrella. Would that be fine with everyone? Or should I
>> add this ansible role as part of another repository? After that's
>
>
> This looks very similar to the Kubespray [0] integration case. I hope that
> external deployment bits can be added without a hard requirement of being
> under the umbrella and packaged in RDO.

I don't have a strong opinion on it being under the TripleO umbrella
or not, but I agree with Bogdan that this could be a good fit
for the external_deploy_tasks interface that kubespray is currently
also consuming. You may find that is an easier way of consuming the
standalone Ansible roles you've already written, as opposed to trying to
make those fit into the composable services framework that uses in-tree
t-h-t Ansible tasks.
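
For reference, consuming the role standalone today amounts to a small Ansible
play along these lines; a minimal, hypothetical sketch (the role name is taken
from the tripleo-ipsec repository, the inventory group is assumed, and all
role variables are omitted):

  # site.yml - hypothetical standalone invocation of the IPSEC role
  - hosts: overcloud        # assumed inventory group for overcloud nodes
    become: true
    roles:
      - tripleo-ipsec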


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] IPSEC integration

2017-11-17 Thread Bogdan Dobrelya

On 11/16/17 8:01 AM, Juan Antonio Osorio wrote:

Hello folks!

A few months ago Dan Sneddon and I worked on an ansible role that would
enable IPSEC for the overcloud [1]. Currently, one would run it as an 
extra step after the overcloud deployment. But, I would like to start 
integrating it to TripleO itself, making it another option, probably as 
a composable service.


For this, I'm planning to move the tripleo-ipsec ansible role repository 
under the TripleO umbrella. Would that be fine with everyone? Or should 
I add this ansible role as part of another repository? After that's 


This looks very similar to the Kubespray [0] integration case. I hope that
external deployment bits can be added without a hard requirement of
being under the umbrella and packaged in RDO.



I've tried to follow the guide [1] for adding RDO packages and the
package review [2] and didn't succeed. There should be a simpler
solution for hosting a package somewhere outside of RDO and being able to
add it to an external deployment managed by tripleo. My 2c.


[0] https://github.com/kubernetes-incubator/kubespray
[1] https://www.rdoproject.org/documentation/add-packages/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1482524

available and packaged in RDO. I'll then look into the actual TripleO 
composable service.


Any input and contributions are welcome!

[1] https://github.com/JAORMX/tripleo-ipsec

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Migrating TripleO CI in-tree tomorrow - please README

2017-11-17 Thread Alex Schultz
On Thu, Nov 16, 2017 at 11:20 AM, Emilien Macchi  wrote:
> TL;DR: don't approve or recheck any tripleo patch from now, until
> further notice on this thread.
>
> Some good progress has been made on migrating legacy tripleo CI jobs
> to be in-tree:
> https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3
>
> The next steps:
> - Let the current gate finish running its jobs.
> - Stop approving patches from now on, and wait for the gate to be done and cleared.
> - Alex and I will approve the migration patches tomorrow and we hope
> to have them in the gate by Friday afternoon (US time) when the gate isn't
> busy anymore. We'll also have to backport them all.

They have been pushed to the gate, with a few patches in front of
them before they will hit. Please do not approve anything until the v3
cutover lands, as you'll end up with double the number of jobs running
on your gate patches until the project-config change lands.

Thanks,
-Alex


> - When these patches are merged (it might take the weekend to
> land, depending on how busy the gate is), we'll run duplicated jobs until
> https://review.openstack.org/514778 is merged. I'll try to ping
> someone from Infra over the weekend to see if we can land it; that would be
> great.
> - Once https://review.openstack.org/514778 is merged, people are free
> to recheck or approve any patches. We hope this will happen over
> the weekend.
> - I'll continue to migrate all other tripleo projects to have in-tree
> layout. On the list: t-p-e, t-i-e, paunch, os-*-config,
> tripleo-validations.
>
> Thanks for your help,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposal for a QA SIG

2017-11-17 Thread MCCASLAND, TREVOR
Just going to keep it short,
I think this is a great idea and would like to participate/chair.

From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: Friday, November 17, 2017 5:54 AM
To: OpenStack Development Mailing List (not for usage questions) 
; openstack-operat...@lists.openstack.org; 
openstack-s...@lists.openstack.org
Subject: [openstack-dev] [QA] Proposal for a QA SIG

Dear all,

during the last summit in Sydney we discussed the possibility of creating an
OpenStack quality assurance special interest group (OpenStack QA SIG).
The proposal was discussed during the QA feedback session [0] and it received
positive feedback there; I would like to bring now the proposal to a larger
audience via the SIG, dev and operators mailing lists.

The mission of the existing QA Program in OpenStack is to “develop, maintain,
and initiate tools and plans to ensure the upstream stability and quality of
OpenStack, and its release readiness at any point during the release cycle” [1].
While the mission statement is quite wide, the only QA that we have visibility
on is what happens upstream, which is mostly limited to pre-merge testing, with
the only exception of periodic tests.

There’s a lot of engineering that goes into OpenStack QA downstream, and there
have been several attempts in the past to share this work and let everyone in
the community benefit from it. The QA SIG could be a forum to make this happen:
share use cases, tests, tools, best practices and ideas beyond what we test
today upstream; enable everyone doing QA in the OpenStack community to benefit
from the QA work that happens today.

Adjacent communities may be interested in participating in a QA SIG as well.
The opnfv community [2] performs QA on OpenStack releases today and they
are actively looking for opportunities to share tools and test cases.

Please reply to this email thread to express interest in participating in /
chairing a QA SIG. If I get enough positive feedback I’ll set up the SIG as
forming [4], and we can then continue the conversation on the SIG mailing list
and eventually set up an initial meeting.

Thank you,

Andrea Frittoli (andreaf)


[0] https://etherpad.openstack.org/p/SYD-forum-qa-tools-plugins
[1] https://wiki.openstack.org/wiki/QA
[2] https://www.opnfv.org
[3] https://wiki.openstack.org/wiki/Performance_Team
[4] https://wiki.openstack.org/wiki/OpenStack_SIGs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Jeremy Stanley
On 2017-11-17 12:47:34 + (+), Luke Hinds wrote:
> This will need the VMT's attention, so please raise it as an issue on
> Launchpad and we can tag it for the VMT members as a possible OSSA.
[...]

Ugh, looks like someone split this thread, and I already replied to
the original thread. In short, I don't think it's safe to assume we
know what's going to be safe for different frontends and consuming
applications, so trying to play whack-a-mole with various unsafe
sequences at the API side puts the responsibility for safe filtering
in the wrong place and can lead to lax measures in the software
which should actually be taking on that responsibility.

Of course, I'm just one voice. Others on the VMT certainly might
disagree with my opinion on this.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-11-17 Thread James Slagle
On Fri, Nov 17, 2017 at 4:43 AM, Steven Hardy  wrote:
> On Thu, Nov 16, 2017 at 4:56 PM, James Slagle  wrote:
>> On Thu, Nov 16, 2017 at 8:44 AM, Flavio Percoco  wrote:
>> What I'm trying to propose is a path towards deprecating the Heat
>> parameter/environment driven and hieradata driven approach to
>> configuring the services. The ansible-role-k8s-* roles should offer a
>> new interface, so I don't think we have to remain tied to Heat
>> forever, so we should consider what we want the long term goal to be
>> in an ideal world, and take some iterative steps to get there.
>
> I agree this is a good time to discuss ways to rationalize the
> toolchain, but I do suspect it may be premature to consider
> deprecating puppet/hiera as AFAIK this doesn't provide any drop-in
> replacement for the config file generation?

I'm not proposing deprecating without a replacement. It's instead
about a way to use the apb roles without having to be locked into
puppet/hieradata/docker-puppet.py. That would be a path towards
deprecation, or we could choose to never deprecate.

I actually don't think the existing roles do lock you into
puppet/hieradata after reviewing:
https://github.com/openstack/ansible-role-k8s-keystone/blob/master/tasks/provision.yml
But, the demos I've seen are all driven by t-h-t and puppet via the
undercloud deploy mechanism. Perhaps an example of using the roles
standalone would be beneficial.

Showing things like:
  - write config files manually and inject them
  - make a manual change directly to a puppet generated (or not)
config file and inject that

Are we planning for an interface and framework that supports these
types of behaviors? Do the existing apb roles offer such an interface?

I think this is something we could consider now so we don't develop a
framework that locks us into a given implementation. If that's already
the case, that's great. I'm trying to get more of a feel of where we
want to go with this work in the long term and how it would integrate
into a more pure Ansible approach. config-download gets us to where we
can treat Heat as ephemeral for the overcloud (if using
deployed-server), where Heat is only used for config and task
generation. A flexible Ansible role architecture can move us further
towards not having to rely on the generated pieces, which are still
pretty complex for a lot of devs and users.

All of the architecture changes we've made over the years in t-h-t
have allowed us to mostly move forward while maintaining some basic
backwards compatibility which is both great and necessary. We need to
continue to do that, but this is also an opportunity to develop
something more flexible that could allow for different tooling
choices.

> I was thinking we'd probably maintain the current docker-puppet.py
> model for this first pass, to reduce the risk of migrating containers
> to k8s, and we could probably refactor things such that this config
> generation via puppet+docker is orchestrated via the ansible roles and
> kubernetes?
>
> The current model is something like:
>
> 1. Run temporary docker container, run puppet, write config files to
> host file system
> 2. Start service container, config files bind mounted into container
> from host filesystem
> 3. Run temporary bootstrapping container (runs puppet, optional step)
>
> (this is simplified for clarity as there are opportunities for some
> other bootstrapping steps)
>
> In the ansible/kubernetes model, it could work like:
>
> 1. Ansible role makes k8s API call creating pod with multiple containers
> 2. Pod starts temporary container that runs puppet, config files
> written out to shared volume
> 3. Service container starts, config consumed from shared volume
> 4. Optionally run temporary bootstrapping container inside pod
>
> This sort of pattern is documented here:
>
> https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
>
> The main advantage is we don't have to reimplement config management
> for every single service, but obviously we'd want this to be pluggable
> in the ansible roles so other config management strategies/tools could
> be used instead of our puppet model.
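
As a rough illustration of the pod pattern quoted above (all names and images
here are purely hypothetical, and the puppet config-generation step is
modelled as an init container writing to a shared emptyDir volume):

  apiVersion: v1
  kind: Pod
  metadata:
    name: keystone                  # hypothetical service pod
  spec:
    volumes:
      - name: config-data           # shared volume for generated config
        emptyDir: {}
    initContainers:
      - name: generate-config       # step 2: run config generation (puppet)
        image: keystone-config-gen  # hypothetical image
        volumeMounts:
          - name: config-data
            mountPath: /etc/keystone
    containers:
      - name: keystone-api          # step 3: service consumes the config
        image: keystone-api         # hypothetical image
        volumeMounts:
          - name: config-data
            mountPath: /etc/keystone
            readOnly: true

The optional bootstrapping step (4) would be another temporary container in
the same pod.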

The pattern is fine, and using all the existing tools for config
management is fine. My point is entirely about the last bit. Let's not
force the existing tools as the defined interface for the apb roles.
It's an important first step because once the patches start to get
pushed upstream and landed, we start adopting them as the supported
framework for better or worse. Some of the changes we've had to make
over the years have required lots of refactoring in our
frameworks, and others have not.

To avoid that, we should consider one aspect of flexibility now around
the config management tooling and the interface. Let's make sure
that's the case and part of the agreed long term vision. Historically
(for many reasons), we've found that in TripleO, we may consider a
framework flexible but it doesn't

Re: [openstack-dev] [security] Script injection issue

2017-11-17 Thread Jeremy Stanley
On 2017-11-17 08:22:31 + (+), TommyLike Hu wrote:
> Recently, while integrating and testing OpenStack services, we
> found there is a potential script injection issue: some of our
> services accept input with special characters [1] [2], for
> instance we can create an instance or a volume with the name of
> 'script inside'. One of the possible solutions is to
> add HTML encode/decode support in Horizon, but it's not guaranteed
> every OpenStack user is using Horizon. So should we apply a more
> strict restriction on user input?

Just my opinion, but I think it's up to frontends to know what
strings are safe to present. Web-based interfaces are not the only
possible place those strings may end up, and if we consider it the
API's responsibility to strip out every possible sequence that might
cause trouble for every kind of frontend or consuming application
then we'll eventually be left accepting only ASCII alphanumerics.

> Also, I found Google Cloud has a strict and explicit restriction in
> their instance insert API document [3].
[...]

To my knowledge, Google Cloud is proprietary software and can afford
to make decisions tightly coupling the security of their Web
frontend to their APIs. OpenStack can't easily make the same sorts
of assumptions.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Luke Hinds
This will need the VMT's attention, so please raise it as an issue on
Launchpad and we can tag it for the VMT members as a possible OSSA.

Apologies for top post, replying from phone.

On 17 Nov 2017 12:34 pm, "Adam Heczko"  wrote:

> Thanks TommyLike for this bug report. Sounds like Stored XSS [1].
> Could you please share more details, e.g. branch / release, APIs tested
> etc.?
>
> [1] https://www.owasp.org/index.php/Types_of_Cross-Site_Scripting
>
> On Fri, Nov 17, 2017 at 12:36 PM, Davanum Srinivas 
> wrote:
>
>> Adding [api] to make sure the API (SIG?) sees this too
>>
>> On Fri, Nov 17, 2017 at 3:22 AM, TommyLike Hu 
>> wrote:
>> > Hey all,
>> >  Recently, while integrating and testing OpenStack services, we found
>> > there is a potential script injection issue: some of our services accept
>> > input with special characters [1] [2], for instance we can create an
>> > instance or a volume with the name of 'script inside'. One
>> > of the possible solutions is to add HTML encode/decode support in Horizon,
>> > but it's not guaranteed every OpenStack user is using Horizon. So should we
>> > apply a more strict restriction on user input?
>> >  Also, I found Google Cloud has a strict and explicit restriction in
>> > their instance insert API document [3].
>> >
>> > [1]: Nova:
>> > https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L148
>> > [2]: Cinder:
>> > https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L1253
>> > [3]: Google Cloud:
>> > https://cloud.google.com/compute/docs/reference/latest/instances/insert
>> >
>> > Thanks
>> > TommyLike.Hu
>> >
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [QA] Proposal for a QA SIG

2017-11-17 Thread Thierry Carrez
Andrea Frittoli wrote:
> [...]
> during the last summit in Sydney we discussed the possibility of creating an
> OpenStack quality assurance special interest group (OpenStack QA SIG). 
> The proposal was discussed during the QA feedback session [0] and it
> received
> positive feedback there; I would like to bring now the proposal to a larger
> audience via the SIG, dev and operators mailing lists.
> [...]

I think this goes with the current trends of re-centering upstream
"project teams" on the production of software, while using SIGs as
communities of practice (beyond the governance boundaries), even if they
happen to produce (some) software as the result of their work.

One question I have is whether we'd need to keep the "QA" project team
at all. Personally I think it would create confusion to keep it around,
for no gain. SIGs code contributors get voting rights for the TC anyway,
and SIGs are free to ask for space at the PTG... so there is really no
reason (imho) to keep a "QA" project team in parallel to the SIG ?

In the same vein we are looking into turning the Security project team
into a SIG, and could consider turning other non-purely-upstream teams
(like I18n) in the future.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Adam Heczko
Thanks TommyLike for this bug report. Sounds like Stored XSS [1].
Could you please share more details, e.g. branch / release, APIs tested
etc.?

[1] https://www.owasp.org/index.php/Types_of_Cross-Site_Scripting

On Fri, Nov 17, 2017 at 12:36 PM, Davanum Srinivas 
wrote:

> Adding [api] to make sure the API (SIG?) sees this too
>
> On Fri, Nov 17, 2017 at 3:22 AM, TommyLike Hu 
> wrote:
> > Hey all,
> >  Recently, while integrating and testing OpenStack services, we found
> > there is a potential script injection issue: some of our services accept
> > input with special characters [1] [2], for instance we can create an
> > instance or a volume with the name of 'script inside'. One
> > of the possible solutions is to add HTML encode/decode support in Horizon,
> > but it's not guaranteed every OpenStack user is using Horizon. So should we
> > apply a more strict restriction on user input?
> >  Also, I found Google Cloud has a strict and explicit restriction in
> > their instance insert API document [3].
> >
> > [1]: Nova:
> > https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L148
> > [2]: Cinder:
> > https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L1253
> > [3]: Google Cloud:
> > https://cloud.google.com/compute/docs/reference/latest/instances/insert
> >
> > Thanks
> > TommyLike.Hu
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Proposal for a QA SIG

2017-11-17 Thread Andrea Frittoli
Dear all,

during the last summit in Sydney we discussed the possibility of creating an
OpenStack quality assurance special interest group (OpenStack QA SIG).
The proposal was discussed during the QA feedback session [0] and it
received
positive feedback there; I would like to bring now the proposal to a larger
audience via the SIG, dev and operators mailing lists.

The mission of the existing QA Program in OpenStack is to “develop, maintain,
and initiate tools and plans to ensure the upstream stability and quality of
OpenStack, and its release readiness at any point during the release cycle” [1].
While the mission statement is quite wide, the only QA that we have visibility
on is what happens upstream, which is mostly limited to pre-merge testing, with
the only exception of periodic tests.

There’s a lot of engineering that goes into OpenStack QA downstream, and there
have been several attempts in the past to share this work and let everyone in
the community benefit from it. The QA SIG could be a forum to make this happen:
share use cases, tests, tools, best practices and ideas beyond what we test
today upstream; enable everyone doing QA in the OpenStack community to benefit
from the QA work that happens today.

Adjacent communities may be interested in participating in a QA SIG as well.
The opnfv community [2] performs QA on OpenStack releases today and they
are actively looking for opportunities to share tools and test cases.

Please reply to this email thread to express interest in participating in /
chairing a QA SIG. If I get enough positive feedback I’ll set up the SIG as
forming [4], and we can then continue the conversation on the SIG mailing list
and eventually set up an initial meeting.

Thank you,

Andrea Frittoli (andreaf)


[0] https://etherpad.openstack.org/p/SYD-forum-qa-tools-plugins
[1] https://wiki.openstack.org/wiki/QA
[2] https://www.opnfv.org
[3] https://wiki.openstack.org/wiki/Performance_Team
[4] https://wiki.openstack.org/wiki/OpenStack_SIGs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [api] Script injection issue

2017-11-17 Thread Davanum Srinivas
Adding [api] to make sure the API (SIG?) sees this too

On Fri, Nov 17, 2017 at 3:22 AM, TommyLike Hu  wrote:
> Hey all,
>  Recently, while integrating and testing OpenStack services, we found
> there is a potential script injection issue: some of our services accept
> input with special characters [1] [2], for instance we can create an
> instance or a volume with the name of 'script inside'. One
> of the possible solutions is to add HTML encode/decode support in Horizon, but
> it's not guaranteed every OpenStack user is using Horizon. So should we
> apply a more strict restriction on user input?
>  Also, I found Google Cloud has a strict and explicit restriction in
> their instance insert API document [3].
>
> [1]: Nova:
> https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L148
> [2]: Cinder:
> https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L1253
> [3]: Google Cloud:
> https://cloud.google.com/compute/docs/reference/latest/instances/insert
>
> Thanks
> TommyLike.Hu
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Developer Mailing List Digest November 11-17

2017-11-17 Thread Mike Perez
Contribute to the Dev Digest by summarizing OpenStack Dev List thread:

* https://etherpad.openstack.org/p/devdigest
* http://lists.openstack.org/pipermail/openstack-dev/

HTML version: 
https://www.openstack.org/blog/2017/11/developer-mailing-list-digest-november-11-17

Summaries
=
* POST /api-sig/news [0]
* Release countdown for week R-14, November 18-24 [1]
[0] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124633.html
[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124631.html


Upstream Long Term Support Releases
===
The Sydney Summit had a very well attended and productive session [0] about how
to go about keeping a selection of past releases available and maintained for long
term support (LTS).

In the past the community has asked for people who are interested in old
branches being maintained for a long time to join the Stable Maintenance team
with the premise that if the stable team grew, it could support more branches
for longer periods. This has been repeated for years and is not working.

This discussion is about allowing collaboration on patches beyond end of life
(EOL) and enabling whoever steps up to maintain longer-lived branches to come up
with a set of tests that actually match their needs and would be less likely to
bitrot due to the changing OS/PyPI substrate. We need to lower expectations, as
whatever we're likely to produce will get more brittle the older the branch
gets. Any burden created by taking on more work is absorbed by the people doing
the work, and does not unduly impact the folks not interested in doing the work.

The idea is to continue the current stable policy more or less as it is.
Development teams take responsibility for a couple of stable branches. At the
point where a branch would now reach EOL, instead of deleting it we would leave
it open and establish a new team of people who want to continue to maintain that
branch. It's anticipated that the members of those new teams will come mostly
from users and distributors. Not all branches are going to attract teams to maintain
them, and that's OK.

We will stop tagging these branches so the level of support they provide is
understood. Backports and other fixes can be shared, but to consume them, a
user will have to build their own packages.

Test jobs will run as they are, and the team that maintains the branch could
decide how to deal with them. Fixing the jobs upstream where possible is
preferred, but depending on who is maintaining the branch, the level of support
they are actually providing and the nature of the breakage, removing individual
tests or whole jobs is another option. Using third-party testing came up but is
not required.

Policies for the new teams being formed to maintain these older branches are
being discussed in an etherpad [2].

Some feedback in the room suggested doing one release a year whose
branch isn't deleted after a year: do one release a year and still keep N-2
stable releases around, still do backports to all open stable branches, and
basically do what we're doing now, but once a year.

Discussion on this suggestion extended to the OpenStack SIG mailing list [1],
suggesting that skip-release upgrades are a much better way to deal with
upgrade pain than extending cycles. Releasing every year instead of every 6 months
means our releases will contain more changes, and the upgrade could become more
painful. We should release as often as we can and make upgrades less
painful so that versions can be skipped.

We have so far been able to find people to maintain stable branches for 12-18
months. Keeping N-2 branches open for annual releases would mean extending that
support period to 2+ years. If we're going to do that, we need to address how
we are going to retain contributors.

When you don't release often enough, the pressure to get a patch "in"
increases. Missing the boat and waiting for another year is not bearable.


[0] - https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
[1] - 
http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000149.html
[2] - https://etherpad.openstack.org/p/LTS-proposal

-- 
Mike Perez (thingee)


pgpQn5z3KGpek.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, November 17th

2017-11-17 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it !


== Recently-approved changes ==

* Add the supports-rolling-upgrade tag to Ironic [1]
* Select Thierry Carrez as TC chair [2]
* Update house rules to reflect absence of meetings [3]
* Split up sahara deliverables [4] [5]
* Goal updates: sahara, glance, senlin, designate, zun, zaqar
* New repos: os_congress, ansible-role-k8s-(keystone|mariadb|tripleo),
tripleo-common-tempest-plugin, cloudkitty-tempest-plugin,
cinder-tempest-plugin

[1] https://review.openstack.org/#/c/514448/
[2] https://review.openstack.org/#/c/514553/
[3] https://review.openstack.org/#/c/514582/
[4] https://review.openstack.org/#/c/515513/
[5] https://review.openstack.org/#/c/515780/

Low activity over the past 3 weeks, as travel to Sydney, Summit week and
travel back from Sydney ate most of the TC members' time. The most
interesting change is the addition of the supports-rolling-upgrade tag
to Ironic, to reflect that an Ironic installation can now be upgraded in
a rolling fashion. You can find more details about this assertion at:

https://governance.openstack.org/tc/reference/tags/assert_supports-rolling-upgrade.html


== Voting in progress ==

The LOCI team (packaging OCI container images for OpenStack
deliverables) is being proposed as an official OpenStack project. The
addition (and the followup typo fix) seem consensual, but are still
missing a couple of votes:

https://review.openstack.org/513851
https://review.openstack.org/516005


== Under review ==

It's time to propose and review community-wide goals for the Rocky
cycle. Kendall Nelson posted a proposal around Storyboard Migration.
Please review it at:

https://review.openstack.org/513875

Matt Treinish proposed an update to the Python PTI for tests to be
specific and explicit. Please review at:

https://review.openstack.org/519751

As a result of the discussion around stable policy in Sydney, it was
proposed to recenter the Stable policy on OpenStack cloud components
rather than expect it from all sorts of deliverables. Following that,
Emilien proposed the removal of the tag from Kolla [6], while I proposed
wording changes to the tag definition [7].

[6] https://review.openstack.org/#/c/519685/
[7] https://review.openstack.org/521049

It should not be necessary to wait until we have 5 items in our "help
wanted" list, nor require the presence of 5 elements at all times.
Please review the list rename at:

https://review.openstack.org/520619

The Mogan team application is still up for review. General feedback from
the Summit forum session was that the overlap and complementarity
between Nova, Ironic and Mogan makes for a complex landscape, and the
strategy going forward needs to be clarified before we can approve this
application. It is therefore likely that it will be delayed until Rocky.
Please comment at:

https://review.openstack.org/#/c/508400/

Finally, you might be interested in reviewing the addition of the
supports-accessible-upgrade tag to ironic, before it gets approved by
lazy consensus:

https://review.openstack.org/#/c/516671/


== TC member actions for the coming week(s) ==

TC members should prepare for Sydney activities (see below).

Monty should answer the feedback on the supported database version
resolution (https://review.openstack.org/493932) so that we can make
progress there -- or abandon it.

Doug should update or abandon the "champions and stewards" addition to
the top help wanted list (https://review.openstack.org/510656)

Thierry should clarify the situation of the Stackube application
(https://review.openstack.org/#/c/462460/), in light of the refocus of
OpenStack on cloud infrastructure and potential creation in the future
of a separate strategic area around container infrastructure.


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not their primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect the main topic of discussion to be actions
coming out of the Forum sessions.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-11-17 Thread Steven Hardy
On Thu, Nov 16, 2017 at 4:56 PM, James Slagle  wrote:
> On Thu, Nov 16, 2017 at 8:44 AM, Flavio Percoco  wrote:
>> Integration with TripleO Heat Templates
>> ===
>>
>> This work is on-going and you should eventually see some patches
>> popping up on the reviews list. One of the goals, besides consuming
>> these ansible roles from t-h-t, is to be able to create a PoC for
>> upgrades and have an end-to-end test/demo of this work.
>>
>> As we progress, we are trying to nail down an end-to-end deployment
>> before creating roles for all the services that are currently
>> supported by TripleO. We will be adding projects as needed with a
>> focus on the end-to-end goal.
>
> When we consume these ansible-role-k8s-* roles from t-h-t, I think
> that should be a stepping stone towards migrating away from having to
> use Heat to deploy and configure those services. We know that these
> new ansible roles will be deployable standalone, and the interface to
> do that should be typical ansible best practices (role defaults, vars,
> etc).
>
> We can offer a mechanism such that one can migrate from a
> tripleo-heat-templates/docker/services/database/mysql.yaml deployed
> mariadb to one deployed via
> ansible-role-k8s-mariadb. The config-download mechanism could be
> updated to generate or pull from Heat the necessary ansible vars files
> for configuring the roles. We should make sure that the integration
> with tripleo-heat-templates results in the same inputs/outputs that
> someone would consume if using the roles standalone. Future iterations
> would then not have to require Heat for that service at all, unless
> the operator wanted to continue to configure the service via Heat
> parameters/environments.
>
> What I'm trying to propose is a path towards deprecating the Heat
> parameter/environment driven and hieradata driven approach to
> configuring the services. The ansible-role-k8s-* roles should offer a
> new interface, so I don't think we have to remain tied to Heat
> forever, so we should consider what we want the long term goal to be
> in an ideal world, and take some iterative steps to get there.

I agree this is a good time to discuss ways to rationalize the
toolchain, but I suspect it may be premature to consider deprecating
puppet/hiera, as AFAIK the new ansible/kubernetes approach doesn't yet
provide a drop-in replacement for the config file generation?

I was thinking we'd maintain the current docker-puppet.py model for
this first pass, to reduce the risk of migrating containers to k8s,
and then refactor things so that this puppet+docker config generation
is orchestrated by the ansible roles and kubernetes?

The current model is something like:

1. Run temporary docker container, run puppet, write config files to
the host filesystem
2. Start service container, config files bind-mounted into container
from the host filesystem
3. Run temporary bootstrapping container (runs puppet, optional step)

(this is simplified for clarity as there are opportunities for some
other bootstrapping steps)
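
In rough code terms, that flow is something like the sketch below (a
heavily simplified illustration, not the actual docker-puppet.py logic;
the image names, paths and commands are made up for the example):

import subprocess

# Hypothetical host path where the generated config lands
CONFIG_DIR = "/var/lib/config-data/mariadb"

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Temporary container runs puppet, writes config to the host filesystem
run(["docker", "run", "--rm",
     "-v", CONFIG_DIR + ":/etc/my.cnf.d",
     "example/mariadb-puppet:latest",
     "puppet", "apply", "/etc/puppet/manifests/mariadb.pp"])

# 2. Service container starts, generated config bind-mounted read-only
run(["docker", "run", "-d", "--name", "mariadb",
     "-v", CONFIG_DIR + ":/etc/my.cnf.d:ro",
     "example/mariadb:latest"])

# 3. Optional one-shot bootstrapping container (e.g. initial db setup)
run(["docker", "run", "--rm",
     "-v", CONFIG_DIR + ":/etc/my.cnf.d:ro",
     "example/mariadb:latest", "mysql_install_db"])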

In the ansible/kubernetes model, it could work like:

1. Ansible role makes k8s API call creating pod with multiple containers
2. Pod starts temporary container that runs puppet, config files
written out to shared volume
3. Service container starts, config consumed from shared volume
4. Optionally run temporary bootstrapping container inside pod

This sort of pattern is documented here:

https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
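
To make that a bit more concrete, here is a rough sketch of such a pod
built with the kubernetes python client -- the image names, mount paths
and namespace are hypothetical, the optional bootstrapping container is
left out, and the same thing could equally be expressed as plain pod
YAML applied by the ansible role:

from kubernetes import client, config

config.load_kube_config()

# Shared emptyDir volume the config-generation step writes into
shared = client.V1Volume(
    name="config-data",
    empty_dir=client.V1EmptyDirVolumeSource())

# Init container: runs puppet, writes config into the shared volume
config_gen = client.V1Container(
    name="mariadb-config",
    image="example/mariadb-puppet:latest",
    command=["puppet", "apply", "/etc/puppet/manifests/mariadb.pp"],
    volume_mounts=[client.V1VolumeMount(
        name="config-data", mount_path="/etc/my.cnf.d")])

# Service container: consumes the generated config read-only
service = client.V1Container(
    name="mariadb",
    image="example/mariadb:latest",
    volume_mounts=[client.V1VolumeMount(
        name="config-data", mount_path="/etc/my.cnf.d", read_only=True)])

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="mariadb"),
    spec=client.V1PodSpec(
        init_containers=[config_gen],
        containers=[service],
        volumes=[shared]))

client.CoreV1Api().create_namespaced_pod(namespace="tripleo", body=pod)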

The main advantage is that we don't have to reimplement config
management for every single service, but obviously we'd want this to be
pluggable in the ansible roles so that other config management
strategies/tools could be used instead of our puppet model.

> It's probably worthwhile as a thought experiment to update this
> diagram[0] as to how it might look at different future stages. The
> first stage might just be t-h-t driven ansible-role-k8s-* , followed
> by a migration to ansible-role-k8s-* as the primary interface, and
> then finally perhaps no Heat[1].

Agreed, this is definitely a good time to discuss moving the service
configuration workflow to pure ansible, but as noted above I'm not
convinced we're ready to take puppet out of the mix yet, so it may be
safer to leave that pattern (by now quite well proven in our
heat+ansible container architecture) in place, at least initially?

Thanks!

Steve

> [0] https://slagle.fedorapeople.org/tripleo-ansible-arch.png
> [1] Except for perhaps deployment of baremetal resources, but even
> then I'm personally of the opinion that would be better serviced by
> Mistral->Ansible->Ironic directly.
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.or

Re: [openstack-dev] [tripleo] Migrating TripleO CI in-tree tomorrow - please README

2017-11-17 Thread Bogdan Dobrelya

On 11/16/17 7:20 PM, Emilien Macchi wrote:

TL;DR: don't approve or recheck any tripleo patch from now on, until
further notice on this thread.

Some good progress has been made on migrating legacy tripleo CI jobs
to be in-tree:
https://review.openstack.org/#/q/topic:tripleo/migrate-to-zuulv3

The next steps:
- Let the current gate finish running its jobs.
- Stop approving patches from now on, and wait for the gate to be done
and cleared.
- Alex and I will approve the migration patches tomorrow and we hope
to have them in the gate by Friday afternoon (US time) when the gate
isn't busy anymore. We'll also have to backport them all.
- Once these patches are merged (it might take the weekend to land,
depending on how busy the gate is), we'll run duplicated jobs until
https://review.openstack.org/514778 is merged. I'll try to ping
someone from Infra over the weekend to see if we can land it; that
would be great.
- Once https://review.openstack.org/514778 is merged, people are free
to recheck or approve any patches. We hope this will happen over the
weekend.
- I'll continue migrating all the other tripleo projects to the
in-tree layout. On the list: t-p-e, t-i-e, paunch, os-*-config,
tripleo-validations.

Thanks for your help,



Thank you for working on this, Emilien! That's well done, and it
provides a good example for other projects to follow in the future.


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security] Script injection issue

2017-11-17 Thread TommyLike Hu
Hey all,
 Recently, while integrating and testing OpenStack services, we found
a potential script injection issue: some of our services accept input
containing special characters [1][2]. For instance, we can create an
instance or a volume whose name has a <script> tag embedded inside it.
One possible solution is to add HTML encoding/decoding support in
Horizon, but there is no guarantee that every OpenStack user is using
Horizon. So should we apply stricter restrictions on user input?
 Also, I found that Google Cloud has a strict and explicit restriction
in their instance insert API documentation [3].
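
To illustrate the two options being discussed -- just a minimal sketch
in python, not an actual Nova/Cinder/Horizon patch -- escaping on output
versus rejecting suspicious input at the API layer could look roughly
like this:

import html
import re

volume_name = "<script>alert('owned')</script>"

# Option 1: encode on output, i.e. what HTML escaping in Horizon (or any
# other web UI) would do right before rendering the name in a page.
print(html.escape(volume_name))
# -> &lt;script&gt;alert(&#x27;owned&#x27;)&lt;/script&gt;

# Option 2: a hypothetical stricter check at the API layer, e.g. a
# jsonschema-style pattern that rejects angle brackets outright (the
# current validation referenced in [1][2] is much more permissive).
SAFE_NAME_RE = re.compile(r'^[^<>]+$')

def is_acceptable_name(name):
    # True only when the name contains no '<' or '>' characters
    return bool(SAFE_NAME_RE.match(name))

print(is_acceptable_name(volume_name))       # False
print(is_acceptable_name("my-volume-01"))    # True

Option 1 keeps the APIs permissive but puts the burden on every consumer
that renders names; option 2 risks breaking existing resources whose
names already contain such characters.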

[1]: Nova:
https://github.com/openstack/nova/blob/master/nova/api/validation/parameter_types.py#L148
[2]: Cinder:
https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L1253
[3]: Google Cloud:
https://cloud.google.com/compute/docs/reference/latest/instances/insert

Thanks
TommyLike.Hu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev