Re: [openstack-dev] [Heat] Heat template example repository

2017-05-16 Thread Lance Haig

On 15.05.17 18:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.

Backwards compatibility is not always maintained, as I have seen when
developing our library of templates on Liberty and then trying to deploy
them on Mitaka, for example.

Yeah, I guess it's true that there are sometimes deprecated resource
interfaces that get removed on upgrade to a new OpenStack version, and that
is independent of the HOT version.

As we've proven, maintaining these templates has been a challenge given the
available resources, so I guess I'm still in favor of not duplicating a bunch
of templates; e.g. perhaps we could focus on a target of CI-testing the
templates on the current stable release as a first step?

I think this is a good way to go.
If we get the tests running, we will soon see what is broken and what is
not on the stable release, and can then make a call as to how to go about
fixing it.

As you guys mentioned in our discussions, the networking example I quoted is
not something you can deal with directly, as the issue lies in the source
project.

If we can use this exercise to test and fix these templates, I would be
happier.

My vision would be to have a set of templates and examples that are tested
regularly against a running OpenStack deployment, so that we can make sure
the combinations still run. I am sure we can agree on a way to do this with
CI/CD so that we test the feature set.

Agreed, settling on the approach to testing seems like the first step.
FYI, we already have automated scenario tests in the main heat tree that
consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario

So, in theory, getting a similar test running on heat_templates should be
fairly simple, but getting all the existing templates working is likely to
be a bigger challenge.
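To sketch what a cheap first CI pass over the templates repository could look like (the function and check here are my own illustration, not the existing heat_integrationtests framework): every valid HOT template must declare `heat_template_version`, so a pre-gate lint can flag files missing it before any real Heat validation call is made.

```python
import os

def find_untagged_templates(root):
    """Return template files under `root` that never mention
    heat_template_version -- a cheap first-pass lint, since every
    valid HOT template must declare it."""
    missing = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(('.yaml', '.yml')):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                if 'heat_template_version' not in f.read():
                    missing.append(path)
    return sorted(missing)
```

A real gate would go further and call Heat's template-validation API on each file against a live deployment, but even a lint like this catches bit-rotted examples cheaply.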

I am happy to work on getting the existing templates tested if that helps.
As a newbie contributor I would need some help getting started and then 
I would do the work.


If you can assist with the places to start I would appreciate it.

Lance

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 09:45 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:

On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
>> Sorry for the top post, Michal, Can you please clarify a couple of things:
>>
>> 1) Can folks install just one or two services for their specific scenario?
>
>Yes, that's more of a kolla-ansible feature and requires a little bit
>of ansible know-how, but it is entirely possible. Kolla-k8s is built to
>allow maximum flexibility in that space.
>
>> 2) Can the container images from kolla be run on bare docker daemon?
>
>Yes, but they need to either override our default CMD (kolla_start) or
>provide the ENVs required by it; not a huge deal.
>
>> 3) Can someone take the kolla container images from say dockerhub and
>> use it without the Kolla framework?
>
>Yes, there is no such thing as a kolla framework, really. Our images
>follow a stable ABI and can be deployed by any deploy mechanism
>that follows it. We have several users who wrote their own deploy
>mechanisms from scratch.
>
>Containers are just blobs with binaries in them. The little things we
>add are the kolla_start script, to allow our config file management, and
>some custom startup scripts for things like mariadb to help with
>bootstrapping; both are entirely optional.

Just as a bonus example, TripleO is currently using kolla images. They used
to be vanilla and no longer are, but only because TripleO depends on puppet
being in the image, which has nothing to do with kolla.

Flavio



When you say "using kolla images," what do you mean? In upstream
CI tests? On contributors' dev/test systems? Production deployments?


All of them. Note that TripleO now builds its own "kolla images" (using the
kolla Dockerfiles and kolla-build) because of the puppet dependency. When I
said TripleO uses kolla images, it was intended to answer Dims' question on
whether these images (or Dockerfiles) can be consumed by other projects.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sam Yaple
I would like to bring up a subject that hasn't really been discussed in
this thread yet, forgive me if I missed an email mentioning this.

What I personally would like to see is publishing infrastructure that
allows pushing built images to an internal infra mirror/repo/registry for
consumption by internal infra jobs (deployment tools like kolla-ansible and
openstack-ansible). The images built from infra mirrors, with security
checks turned off, are perfect for testing internally in infra.

If you build images properly in infra, you will have an image that is
not security checked (no GPG verification of packages) and completely
unverifiable. These are absolutely not images we want to push to
DockerHub/Quay, for obvious reasons, with security and verification
chief among them. They are absolutely not images that should ever be run
in production; they are only suited for testing. These are the only types
of images that can come out of infra.

Thanks,
SamYaple

On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski wrote:

> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> >> Flavio Percoco wrote:
> >> > From a release perspective, as Doug mentioned, we've avoided
> releasing projects
> >> > in any kind of built form. This was also one of the concerns I raised
> when
> >> > working on the proposal to support other programming languages. The
> problem of
> >> > releasing built images goes beyond the infrastructure requirements.
> It's the
> >> > message and the guarantees implied with the built product itself that
> are the
> >> > concern here. And I tend to agree with Doug that this might be a
> problem for us
> >> > as a community. Unfortunately, putting your name, Michal, as contact
> point is
> >> > not enough. Kolla is not the only project producing container images
> and we need
> >> > to be consistent in the way we release these images.
> >> >
> >> > Nothing prevents people from building their own images and uploading
> them to
> >> > dockerhub. Having this as part of OpenStack's pipeline is a
> problem.
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >>
> >
> > At the forum we talked about putting test images on a "private"
> > repository hosted on openstack.org somewhere. I think that's option
> > 3 from your list?
> >
> > Paul may be able to shed more light on the details of the technology
> > (maybe it's just an Apache-served repo, rather than a full blown
> > instance of Docker's service, for example).
>
> Issue with that is:
>
> 1. Apache-served is harder to use, because we want to follow the docker API
> and we'd have to reimplement it
> 2. Running a registry is a single command
> 3. If we host it in infra and someone actually uses it (there
> will be people like that), it could potentially eat up a lot of network
> traffic
> 4. With local caching of images in nodepools (already working), we
> lose the complexity of mirroring registries across nodepools
>
> So bottom line, having dockerhub/quay.io is simply easier.
>
> > Doug
> >

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> Flavio Percoco wrote:
>> > From a release perspective, as Doug mentioned, we've avoided releasing 
>> > projects
>> > in any kind of built form. This was also one of the concerns I raised when
>> > working on the proposal to support other programming languages. The 
>> > problem of
>> > releasing built images goes beyond the infrastructure requirements. It's 
>> > the
>> > message and the guarantees implied with the built product itself that are 
>> > the
>> > concern here. And I tend to agree with Doug that this might be a problem 
>> > for us
>> > as a community. Unfortunately, putting your name, Michal, as contact point 
>> > is
>> > not enough. Kolla is not the only project producing container images and 
>> > we need
>> > to be consistent in the way we release these images.
>> >
>> > Nothing prevents people from building their own images and uploading them to
>> > dockerhub. Having this as part of OpenStack's pipeline is a problem.
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>>
>
> At the forum we talked about putting test images on a "private"
> repository hosted on openstack.org somewhere. I think that's option
> 3 from your list?
>
> Paul may be able to shed more light on the details of the technology
> (maybe it's just an Apache-served repo, rather than a full blown
> instance of Docker's service, for example).

Issue with that is:

1. Apache-served is harder to use, because we want to follow the docker API
and we'd have to reimplement it
2. Running a registry is a single command
3. If we host it in infra and someone actually uses it (there
will be people like that), it could potentially eat up a lot of network
traffic
4. With local caching of images in nodepools (already working), we
lose the complexity of mirroring registries across nodepools

So bottom line, having dockerhub/quay.io is simply easier.
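Point 1 above is about protocol, not file serving: `docker pull` speaks the Docker Registry HTTP API v2, whose endpoints a static Apache tree would have to reimplement. A minimal sketch of the two listing endpoints (the URL shapes come from the v2 spec; the helper functions and names are my own):

```python
import json
from urllib.parse import quote

def catalog_url(registry):
    # Repository listing endpoint from the Registry HTTP API v2.
    return '{}/v2/_catalog'.format(registry.rstrip('/'))

def tags_url(registry, repository):
    # Tag listing endpoint for a single repository.
    return '{}/v2/{}/tags/list'.format(registry.rstrip('/'), quote(repository))

def parse_tags(body):
    # v2 response shape: {"name": "<repo>", "tags": ["<tag>", ...]}
    data = json.loads(body)
    return data['name'], sorted(data.get('tags') or [])
```

Point 2 is the flip side: `docker run -d -p 5000:5000 registry:2` stands up a service that already implements these endpoints, which is what makes running a real registry the easier option.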

> Doug
>



Re: [openstack-dev] [keystone] [Pile] Need Exemption On Submitted Spec for the Keystone

2017-05-16 Thread Lance Bragstad
That sounds good - I'll review the spec before today's meeting [0]. Will
someone be around to answer questions about the spec if there are any?


[0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting

On Mon, May 15, 2017 at 11:24 PM, Mh Raies  wrote:

> Hi Lance,
>
>
>
> We had submitted one blueprint and its Spec last week.
>
> Blueprint - https://blueprints.launchpad.net/keystone/+spec/api-implemetation-required-to-download-identity-policies
>
> Spec - https://review.openstack.org/#/c/463547/
>
>
>
> As the Keystone Pike proposal freeze was already completed on April 14th,
> 2017, we need your help to proceed with this Spec.
>
> Implementation of this Spec has also started and is being addressed by
> https://review.openstack.org/#/c/463543/
>
>
>
> So, if we can get an exemption to proceed with the Spec review and
> approval process, it will be a great help for us.
>
>
>
>
>
>
>
>
>
> *Mh Raies*
>
> *Senior Solution Integrator*
> *Ericsson** Consulting and Systems Integration*
>
> *Gurgaon, India | Mobile **+91 9901555661*
>
>
>
>
>


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Monty Taylor

On 05/16/2017 05:39 AM, Sean Dague wrote:

On 05/15/2017 10:00 PM, Adrian Turjak wrote:



On 16/05/17 13:29, Lance Bragstad wrote:



On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak wrote:



Based on the specs that are currently up in keystone-specs, I
would highly recommend not doing this per user.

The scenario I imagine is a sysadmin at a company who created a ton of
these for various jobs and then leaves. The company then needs to keep
his user account around, or create tons of new API keys and then disable
his user once all the scripts he had keys for are replaced. Or, more
often than not, they disable his user and then cry as everything breaks,
because no one really knows why, no one fully documented it all, or no
one read the docs. Keeping the keys per project and unrelated to the
user makes more sense, as then someone else on your team can regenerate
the secrets for specific keys as they want. Sure, we can advise people
to use generic user accounts within which to create these API keys, but
that implies password sharing, which is bad.


That said, I'm curious why we would make these a thing separate from
users. In reality, if you can create users, you can create API-specific
users. Would this be a different authentication mechanism? Why? Why not
just continue the work on better access control and let people create
users for this? Because let's be honest, isn't a user already an API
key? The issue (and Ron's spec mentions this) is a user having too much
access; how would this fix that, when the real problem is that we don't
have fine-grained policy in the first place? How does a new auth
mechanism fix that? Both specs mention roles, so I assume it really
doesn't. If we had fine-grained policy, we could just create users
specific to a service with only the roles they need, and the same
problem would be solved without any special API, new auth, or different
'user-lite' object model. It feels like this is trying to solve an
issue that is better solved by fixing the existing problems.

I like the idea behind these specs, but I'm curious what exactly they
are trying to solve. Not to mention that if you wanted to automate
anything larger, such as creating sub-projects and setting up a basic
network for each new developer joining your team, this wouldn't work
unless your API key could inherit to subprojects or something more
complex, at which point the keys may as well be users. Users already
work for all of this; why reinvent the wheel when the issue isn't the
wheel itself but the steering mechanism (access control/policy in this
case)?


All valid points, but IMO the discussions around API keys didn't set
out to fix deep-rooted issues with policy. We have several specs in
flight across projects to help mitigate the real issues with policy
[0] [1] [2] [3] [4].

I see an API key implementation as something that provides a cleaner
fit and finish once we've addressed the policy bits. It's also a
familiar concept for application developers, which was the use case
the session was targeting.

I probably should have laid out the related policy work before jumping
into API keys. We've already committed a bunch of keystone resources to
policy improvements this cycle, but I'm hoping we can work on API keys
and policy improvements in parallel.

[0] https://review.openstack.org/#/c/460344/
[1] https://review.openstack.org/#/c/462733/
[2] https://review.openstack.org/#/c/464763/
[3] https://review.openstack.org/#/c/433037/
[4] https://review.openstack.org/#/c/427872/


I'm well aware of the policy work, and it is fantastic to see it
progressing! I can't wait to actually be able to play with that stuff!
We've been painstakingly tweaking the JSON policy files, which is a
giant mess.

I'm just concerned that this feels like a feature we don't really need,
when really it's just a slight variant of a user with a new auth model
(one that is really just another flavour of username/password). The sole
reason most other cloud services have API keys is that a user can't talk
to the API directly. OpenStack does not have that problem: users are API
keys. So I think what we really need to consider is what exact benefit
API keys would give us that won't be solved with users and better
policy.


The benefit of API keys comes if they work the same across all
deployments, so your applications can depend on them working. That means
the application has to be able to:

1. provision an API key with normal user credentials
2. set/reduce permissions on it with those same user credentials
3. operate with those credentials at the project level (so that when you
leave, someone else in your dept can take over)
4. have all its resources built in the same project that you are in, so
API-key-created resources could interact with 
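For context on the "users are API keys" argument: the payload below is the standard Keystone v3 password-authentication request body (POST /v3/auth/tokens). A dedicated service user driving the API this way already behaves much like an API key; the helper function, its names, and its defaults are just my illustration of that existing mechanism.

```python
def password_auth_body(username, password, project, domain='Default'):
    """Build a Keystone v3 password-auth request body, scoped to a
    project -- the mechanism an automation 'API key user' uses today."""
    return {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': username,
                        'domain': {'name': domain},
                        'password': password,
                    },
                },
            },
            # Project scoping is what lets a teammate take over the
            # account's resources when the original owner leaves.
            'scope': {
                'project': {
                    'name': project,
                    'domain': {'name': domain},
                },
            },
        },
    }

# Hypothetical service account for a deployment script.
body = password_auth_body('deploy-bot', 's3cret', 'team-project')
```

The open question in the thread is not this mechanism but the scoping and permission-reduction steps layered on top of it.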


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>>
>> Flavio Percoco wrote:
>>>
>>> From a release perspective, as Doug mentioned, we've avoided releasing
>>> projects
>>> in any kind of built form. This was also one of the concerns I raised
>>> when
>>> working on the proposal to support other programming languages. The
>>> problem of
>>> releasing built images goes beyond the infrastructure requirements. It's
>>> the
>>> message and the guarantees implied with the built product itself that are
>>> the
>>> concern here. And I tend to agree with Doug that this might be a problem
>>> for us
>>> as a community. Unfortunately, putting your name, Michal, as contact
>>> point is
>>> not enough. Kolla is not the only project producing container images and
>>> we need
>>> to be consistent in the way we release these images.
>>>
>>> Nothing prevents people from building their own images and uploading them
>>> to
>>> dockerhub. Having this as part of OpenStack's pipeline is a problem.
>>
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>
>
> Agreed #4 is a bit unrealistic.
>
> Not sure I understand the difference between #2 and #3. Is it just the
> cadence?
>
> I'd prefer for these builds to have a daily cadence because it sets the
> expectations w.r.t maintenance right: "These images are daily builds and not
> certified releases. For stable builds you're better off building it
> yourself"

And daily builds are exactly what I wanted in the first place :) We
will probably keep publishing release packages too, but we can be a
so-called third party. I also agree [4] is completely unrealistic and I
would be against putting such a heavy burden of responsibility on any
community, including Kolla.

While a daily cadence will send the message that these images are not
stable, the truth is that they will be more stable than what people
would normally build locally (again, they pass more gates). But I'm
totally fine not saying that and letting people decide how they want to
use them.

So, can we move on with implementation?

Thanks!
Michal

>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> Sorry for the top post, Michal, Can you please clarify a couple of things:
> >>
> >> 1) Can folks install just one or two services for their specific scenario?
> >
> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >of ansible know-how, but entirely possible. Kolla-k8s is built to
> >allow maximum flexibility in that space.
> >
> >> 2) Can the container images from kolla be run on bare docker daemon?
> >
> >Yes, but they need to either override our default CMD (kolla_start) or
> >provide the ENVs required by it, not a huge deal
> >
> >> 3) Can someone take the kolla container images from say dockerhub and
> >> use it without the Kolla framework?
> >
> >Yes, there is no such thing as kolla framework really. Our images
> >follow stable ABI and they can be deployed by any deploy mechanism
> >that will follow it. We have several users who wrote their own deploy
> >mechanism from scratch.
> >
> >Containers are just blobs with binaries in it. Little things that we
> >add are kolla_start script to allow our config file management and
> >some custom startup scripts for things like mariadb to help with
> >bootstrapping, both are entirely optional.
> 
> Just as a bonus example, TripleO is currently using kolla images. They used to
> be vanilla and they are not anymore but only because TripleO depends on puppet
> being in the image, which has nothing to do with kolla.
> 
> Flavio
> 

When you say "using kolla images," what do you mean? In upstream
CI tests? On contributors' dev/test systems? Production deployments?

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
This is one of those areas where there was shared understanding for a
long time, and it seems less "shared" now that we've grown and added new
projects to the community.  I intended to prepare a governance
resolution *after* having some public discussion, so that we can
restore that common understanding through documentation. I didn't
prepare the resolution as a first step, because if the consensus
is that we've changed our collective minds about whether publishing
binary artifacts is a good idea then the wording of the resolution
needs to reflect that.

Doug

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-16 09:25:56 -0400:
> Steve,
> 
> We should not always ask "if this is a ruling from the TC", the
> default is that it's a discussion/exploration. If it is a "ruling", it
> won't be on a ML thread.
> 
> Thanks,
> Dims
> 
> On Tue, May 16, 2017 at 9:22 AM, Steven Dake (stdake)  
> wrote:
> > Dims,
> >
> > The [tc] was in the subject tag, and the message was represented as 
> > indicating some TC directive and has had several tc members comment on the 
> > thread.  I did nothing wrong.
> >
> > Regards
> > -steve
> >
> >
> > -Original Message-
> > From: Davanum Srinivas 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Date: Tuesday, May 16, 2017 at 4:34 AM
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Subject: Re: [openstack-dev] 
> > [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
> >  do we want to be publishing binary container images?
> >
> > Why drag TC into this discussion Steven? If the TC has something to
> > say, it will be in the form of a resolution with topic "formal-vote".
> > So please Stop!
> >
> > Thanks,
> > Dims
> >
> > On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake) 
> >  wrote:
> > > Flavio,
> > >
> > > Forgive the top post – outlook ftw.
> > >
> > > I understand the concerns raised in this thread.  It is unclear if 
> > this thread is the feeling of two TC members or enough TC members care 
> > deeply about this issue to permanently limit OpenStack big tent projects’ 
> > ability to generate container images in various external artifact storage 
> > systems.  The point of discussion I see effectively raised in this thread 
> > is “OpenStack infra will not push images to dockerhub”.
> > >
> > > I’d like clarification if this is a ruling from the TC, or simply an 
> > exploratory discussion.
> > >
> > > If it is exploratory, it is prudent that OpenStack projects not be 
> > blocked by debate on this issue until the TC has made such ruling as to 
> > banning the creation of container images via OpenStack infrastructure.
> > >
> > > Regards
> > > -steve
> > >
> > > -Original Message-
> > > From: Flavio Percoco 
> > > Reply-To: "OpenStack Development Mailing List (not for usage 
> > questions)" 
> > > Date: Monday, May 15, 2017 at 7:00 PM
> > > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > > Subject: Re: [openstack-dev] 
> > [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
> >  do we want to be publishing binary container images?
> > >
> > > On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> > > >On 15 May 2017 at 12:12, Doug Hellmann  
> > wrote:
> > >
> > > [huge snip]
> > >
> > > >>> > I'm raising the issue here to get some more input into how 
> > to
> > > >>> > proceed. Do other people think this concern is overblown? 
> > Can we
> > > >>> > mitigate the risk by communicating through metadata for the 
> > images?
> > > >>> > Should we stick to publishing build instructions 
> > (Dockerfiles, or
> > > >>> > whatever) instead of binary images? Are there other options 
> > I haven't
> > > >>> > mentioned?
> > > >>>
> > > >>> Today we do publish build instructions, that's what Kolla is. 
> > We also
> > > >>> publish built containers already, just we do it manually on 
> > release
> > > >>> today. If we decide to block it, I assume we should stop 
> > doing that
> > > >>> too? That will hurt users who uses this piece of Kolla, and 
> > I'd hate
> > > >>> to hurt our users:(
> > > >>
> > > >> Well, that's the question. Today we have teams publishing those
> > > >> images themselves, right? And the proposal is to have infra do 
> > it?
> > > >> That change could be construed to imply that there is more of a
> > > >> relationship with the images and the rest of the community 
> > (remember,
> > >  

[openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Davanum Srinivas
Folks,

See $TITLE :)

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Waines, Greg
Sam,

Two other, higher-level points I wanted to discuss with you about Masakari.


First,
I notice that you are doing monitoring, auto-recovery and even
host-maintenance type functionality as part of the Masakari architecture.

Are you open to some configurability (enabling/disabling) of these
capabilities?

e.g. the OPNFV folks would NOT want auto-recovery; they would prefer that
fault events get reported to Vitrage ... and eventually filter up to Aodh
alarms that get received by VNF managers, which would be responsible for
the recovery.

e.g. some deployers of OpenStack might want to disable parts or all of your
monitoring, if using other mechanisms such as Zabbix or Nagios for the host
monitoring (say).


Second,
are you open to configurably having fault events reported to Vitrage?


Greg.


From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 15, 2017 at 9:36 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Hi Greg,

In Masakari [0] for VMHA, we have already implemented somewhat
similar functionality in masakari-monitors.
Masakari-monitors runs on the nova-compute node and monitors host,
process and instance failures.
The Masakari instance monitor has functionality similar to what you
have described.
Please see [1] for more details on instance monitoring.
[0] https://wiki.openstack.org/wiki/Masakari
[1] https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

Once masakari-monitors detects a failure, it sends a notification to
masakari-api, which takes the appropriate recovery actions to recover the
VM from the failure.
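The monitor -> notify flow Sam describes can be sketched in a few lines. This is a hedged illustration only: the class and event names below (RecoveryApi, InstanceMonitor, "LIFECYCLE_STOPPED") are hypothetical stand-ins, not the real masakari-monitors or masakari-api interfaces.

```python
class RecoveryApi:
    """Stand-in for masakari-api: records the notifications it receives."""
    def __init__(self):
        self.notifications = []

    def notify(self, host, instance_id, event):
        # In Masakari this would be an authenticated REST call to
        # masakari-api; here we just record the payload.
        self.notifications.append(
            {"host": host, "instance": instance_id, "event": event})


class InstanceMonitor:
    """Polls instance states on a compute node and reports failures."""
    def __init__(self, host, api, get_states):
        self.host = host              # compute host this monitor runs on
        self.api = api                # where failures are reported
        self.get_states = get_states  # callable returning {instance_id: state}

    def poll_once(self):
        for instance_id, state in self.get_states().items():
            if state != "active":
                self.api.notify(self.host, instance_id, "LIFECYCLE_STOPPED")


# Simulate one polling pass against a fake state source.
api = RecoveryApi()
monitor = InstanceMonitor("compute-1", api,
                          lambda: {"vm-1": "active", "vm-2": "stopped"})
monitor.poll_once()
print(api.notifications)
# -> [{'host': 'compute-1', 'instance': 'vm-2', 'event': 'LIFECYCLE_STOPPED'}]
```

In the real architecture the monitor and the API are separate services on separate nodes; the point here is only the division of labour Sam describes: detection on the compute node, recovery decisions behind the API.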

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Davanum Srinivas
Steve,

We should not always ask "if this is a ruling from the TC"; the
default is that it's a discussion/exploration. If it were a "ruling", it
wouldn't be on a ML thread.

Thanks,
Dims

On Tue, May 16, 2017 at 9:22 AM, Steven Dake (stdake)  wrote:
> Dims,
>
> The [tc] was in the subject tag, and the message was represented as 
> indicating some TC directive and has had several tc members comment on the 
> thread.  I did nothing wrong.
>
> Regards
> -steve
>
>
> -Original Message-
> From: Davanum Srinivas 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Tuesday, May 16, 2017 at 4:34 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
>
> Why drag TC into this discussion Steven? If the TC has something to
> say, it will be in the form of a resolution with topic "formal-vote".
> So please Stop!
>
> Thanks,
> Dims
>
> On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake)  
> wrote:
> > Flavio,
> >
> > Forgive the top post – outlook ftw.
> >
> > I understand the concerns raised in this thread.  It is unclear if this 
> thread is the feeling of two TC members or enough TC members care deeply 
> about this issue to permanently limit OpenStack big tent projects’ ability to 
> generate container images in various external artifact storage systems.  The 
> point of discussion I see effectively raised in this thread is “OpenStack 
> infra will not push images to dockerhub”.
> >
> > I’d like clarification if this is a ruling from the TC, or simply an 
> exploratory discussion.
> >
> > If it is exploratory, it is prudent that OpenStack projects not be 
> blocked by debate on this issue until the TC has made such ruling as to 
> banning the creation of container images via OpenStack infrastructure.
> >
> > Regards
> > -steve
> >
> > -Original Message-
> > From: Flavio Percoco 
> > Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> > Date: Monday, May 15, 2017 at 7:00 PM
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> > Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
> >
> > On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> > >On 15 May 2017 at 12:12, Doug Hellmann  
> wrote:
> >
> > [huge snip]
> >
> > >>> > I'm raising the issue here to get some more input into how to
> > >>> > proceed. Do other people think this concern is overblown? Can 
> we
> > >>> > mitigate the risk by communicating through metadata for the 
> images?
> > >>> > Should we stick to publishing build instructions 
> (Dockerfiles, or
> > >>> > whatever) instead of binary images? Are there other options I 
> haven't
> > >>> > mentioned?
> > >>>
> > >>> Today we do publish build instructions, that's what Kolla is. 
> We also
> > >>> publish built containers already, just we do it manually on 
> release
> > >>> today. If we decide to block it, I assume we should stop doing 
> that
> > >>> too? That will hurt users who uses this piece of Kolla, and I'd 
> hate
> > >>> to hurt our users:(
> > >>
> > >> Well, that's the question. Today we have teams publishing those
> > >> images themselves, right? And the proposal is to have infra do 
> it?
> > >> That change could be construed to imply that there is more of a
> > >> relationship with the images and the rest of the community 
> (remember,
> > >> folks outside of the main community activities do not always make
> > >> the same distinctions we do about teams). So, before we go ahead
> > >> with that, I want to make sure that we all have a chance to 
> discuss
> > >> the policy change and its implications.
> > >
> > >Infra as vm running with infra, but team to publish it can be Kolla
> > >team. I assume we'll be responsible to keep these images healthy...
> >
> > I think this is the gist of the concern and I'd like us to focus on 
> it.
> >
> > As someone that used to consume these images from kolla's dockerhub 
> account
> > directly, I can confirm they are useful. However, I do share Doug's 
> concern and
> > the impact this may have on the community.
> >
> > From a release 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean Dague
On 05/16/2017 09:24 AM, Doug Hellmann wrote:
> Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
>> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
>>> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>>>
 On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

 Today we do publish build instructions, that's what Kolla is. We also
 publish built containers already, just we do it manually on release
 today. If we decide to block it, I assume we should stop doing that
 too? That will hurt users who uses this piece of Kolla, and I'd hate
 to hurt our users:(
>>>
>>> Well, that's the question. Today we have teams publishing those
>>> images themselves, right? And the proposal is to have infra do it?
>>> That change could be construed to imply that there is more of a
>>> relationship with the images and the rest of the community (remember,
>>> folks outside of the main community activities do not always make
>>> the same distinctions we do about teams). So, before we go ahead
>>> with that, I want to make sure that we all have a chance to discuss
>>> the policy change and its implications.
>>
>> Sorry for hijacking the thread, but we have a similar scenario, for example,
>> in Sahara. It is about full VM images containing Hadoop/Spark/other_big_data
>> stuff, and not containers, but it looks really the same.
>> So far ready-made images have been published under
>> http://sahara-files.mirantis.com/images/upstream/, but we are looking to
>> have them hosted on openstack.org, just like other artifacts.
>>
>> We asked about this few days ago on openstack-infra@, but no answer so far 
>> (the Summit didn't help):
>>
>> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
>>
>> I think that the answer to the question raised in this thread is definitely 
>> going to be relevant for our use case.
>>
>> Ciao
> 
> Thanks for raising this. I think the same concerns apply to VM images.

Agreed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-16 Thread Florian Fuchs
On Mon, May 15, 2017 at 6:27 PM, Steven Hardy  wrote:
> On Mon, May 08, 2017 at 02:45:08PM +0300, Marios Andreou wrote:
>>Hi folks, after some discussion locally with colleagues about improving
>>the upgrades experience, one of the items that came up was pre-upgrade and
>>update validations. I took an AI to look at the current status of
>>tripleo-validations [0] and posted a simple WIP [1] intended to be run
>>before an undercloud update/upgrade and which just checks service status.
>>It was pointed out by shardy that for such checks it is better to instead
>>continue to use the per-service manifests where possible like [2] for
>>example where we check status before N..O major upgrade. There may still
>>be some undercloud specific validations that we can land into the
>>tripleo-validations repo (thinking about things like the neutron
>>networks/ports, validating the current nova nodes state etc?).
>>So do folks have any thoughts about this subject - for example the kinds
>>of things we should be checking - Steve said he had some reviews in
>>progress for collecting the overcloud ansible puppet/docker config into an
>>ansible playbook that the operator can invoke for upgrade of the 'manual'
>>nodes (for example compute in the N..O workflow) - the point being that we
>>can add more per-service ansible validation tasks into the service
>>manifests for execution when the play is run by the operator - but I'll
>>let Steve point at and talk about those.
>
> Thanks for starting this thread Marios, sorry for the slow reply due to
> Summit etc.
>
> As we discussed, I think adding validations is great, but I'd prefer we
> kept any overcloud validations specific to services in t-h-t instead of
> trying to manage service specific things over multiple repos.
>
> This would also help with the idea of per-step validations I think, where
> e.g you could have a "is service active" test and run it after the step
> where we expect the service to start, a blueprint was raised a while back
> asking for exactly that:
>
> https://blueprints.launchpad.net/tripleo/+spec/step-by-step-validation
>
> One way we could achieve this is to add ansible tasks that perform some
> validation after each step, where we combine the tasks for all services,
> similar to how we already do upgrade_tasks and host_prep_tasks:
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/database/redis.yaml#L92
>
> With the benefit of hindsight using ansible tags for upgrade_tasks wasn't
> the best approach, because you can't change the tags via SoftwareDeployment
> (e.g you need a SoftwareConfig per step), it's better if we either generate
> the list of tasks by merging maps e.g
>
>   validation_tasks:
> step3:
>   - sometask
>
> Or via ansible conditionals where we pass a step value in to each run of
> the tasks:
>
>   validation_tasks:
> - sometask
>   when: step == 3
>
> The latter approach is probably my preference, because it'll require less
> complex merging in the heat layer.
>
> As you mentioned, I've been working on ways to make the deployment steps
> more ansible driven, so having these tasks integrated with the t-h-t model
> would be well aligned with that I think:
>
> https://review.openstack.org/#/c/454816/
>
> https://review.openstack.org/#/c/462211/
>
> Happy to discuss further when you're ready to start integrating some
> overcloud validations.

Maybe these are two kinds of pre-upgrade validations that serve
different purposes.

The more general validations (like checking connectivity, making sure
the stack is in good shape, repos are available, etc.) should give
operators a fair amount of confidence that all basic prerequisites to
start an update are met *before* the upgrade is started. They could be
run from the UI or CLI and would fit well into the tripleo-validations
repo. As with the existing tripleo-validations, failures would warn
operators without preventing them from proceeding.

The service-specific validations, on the other hand, are closely tied
to the upgrade process and stop further progress when they fail. They
are fundamentally different from the tripleo-validations and could
therefore live in t-h-t.

I personally don't see why we shouldn't have pre-upgrade validations
both in tripleo-validations and in t-h-t, as long as we know which
ones go where. If everything that's tied to a specific overcloud
service or upgrade step goes into t-h-t, I could see these two groups
(using the validations suggested earlier in this thread):

tripleo-validations:
- Undercloud service check
- Verify that the stack is in a *_COMPLETE state
- Verify undercloud disk space. For node replacement we recommend a
minimum of 10 GB free.
- Network/repo availability check (undercloud and overcloud)
- Verify we're at the latest version of the current release
- ...

tripleo-heat-templates:
- Pacemaker cluster health
- Ceph health
- APIs 

Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Matthew Treinish

On Tue, May 16, 2017 at 08:22:44AM +, Andrea Frittoli wrote:
> Hello team,
> 
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
> 
> Over the past two cycles Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and debug, fixing bugs and removing cruft. Both her code and
> her reviews demonstrate a very good understanding of Tempest internals and
> of the project's future direction.
> I believe Fanglei will make an excellent addition to the team.
> 
> As per the usual, if the current Tempest core team members would please
> vote +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.

+1

-Matt Treinish

> 
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 14:08 +0200, Thierry Carrez wrote:

Flavio Percoco wrote:

From a release perspective, as Doug mentioned, we've avoided releasing projects
in any kind of built form. This was also one of the concerns I raised when
working on the proposal to support other programming languages. The problem of
releasing built images goes beyond the infrastructure requirements. It's the
message and the guarantees implied with the built product itself that are the
concern here. And I tend to agree with Doug that this might be a problem for us
as a community. Unfortunately, putting your name, Michal, as contact point is
not enough. Kolla is not the only project producing container images and we need
to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading them to
dockerhub. Having this as part of OpenStack's pipeline is a problem.


I totally subscribe to the concerns around publishing binaries (under
any form), and the expectations in terms of security maintenance that it
would set on the publisher. At the same time, we need to have images
available, for convenience and testing. So what is the best way to
achieve that without setting strong security maintenance expectations
for the OpenStack community ? We have several options:

1/ Have third-parties publish images
It is the current situation. The issue is that the Kolla team (and
likely others) would rather automate the process and use OpenStack
infrastructure for it.

2/ Have third-parties publish images, but through OpenStack infra
This would allow to automate the process, but it would be a bit weird to
use common infra resources to publish in a private repo.

3/ Publish transient (per-commit or daily) images
A "daily build" (especially if you replace it every day) would set
relatively-limited expectations in terms of maintenance. It would end up
picking up security updates in upstream layers, even if not immediately.

4/ Publish images and own them
Staff release / VMT / stable team in a way that lets us properly own
those images and publish them officially.

Personally I think (4) is not realistic. I think we could make (3) work,
and I prefer it to (2). If all else fails, we should keep (1).


Agreed #4 is a bit unrealistic.

Not sure I understand the difference between #2 and #3. Is it just the cadence?

I'd prefer these builds to have a daily cadence because it sets the
expectations w.r.t. maintenance right: "These images are daily builds, not
certified releases. For stable builds you're better off building them yourself."

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> > 
> > > On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > > > I'm raising the issue here to get some more input into how to
> > > > proceed. Do other people think this concern is overblown? Can we
> > > > mitigate the risk by communicating through metadata for the images?
> > > > Should we stick to publishing build instructions (Dockerfiles, or
> > > > whatever) instead of binary images? Are there other options I haven't
> > > > mentioned?
> > > 
> > > Today we do publish build instructions, that's what Kolla is. We also
> > > publish built containers already, just we do it manually on release
> > > today. If we decide to block it, I assume we should stop doing that
> > > too? That will hurt users who uses this piece of Kolla, and I'd hate
> > > to hurt our users:(
> > 
> > Well, that's the question. Today we have teams publishing those
> > images themselves, right? And the proposal is to have infra do it?
> > That change could be construed to imply that there is more of a
> > relationship with the images and the rest of the community (remember,
> > folks outside of the main community activities do not always make
> > the same distinctions we do about teams). So, before we go ahead
> > with that, I want to make sure that we all have a chance to discuss
> > the policy change and its implications.
> 
> Sorry for hijacking the thread, but we have a similar scenario, for example,
> in Sahara. It is about full VM images containing Hadoop/Spark/other_big_data
> stuff, and not containers, but it looks really the same.
> So far ready-made images have been published under
> http://sahara-files.mirantis.com/images/upstream/, but we are looking to
> have them hosted on openstack.org, just like other artifacts.
> 
> We asked about this few days ago on openstack-infra@, but no answer so far 
> (the Summit didn't help):
> 
> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
> 
> I think that the answer to the question raised in this thread is definitely 
> going to be relevant for our use case.
> 
> Ciao

Thanks for raising this. I think the same concerns apply to VM images.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> Flavio Percoco wrote:
> > From a release perspective, as Doug mentioned, we've avoided releasing 
> > projects
> > in any kind of built form. This was also one of the concerns I raised when
> > working on the proposal to support other programming languages. The problem 
> > of
> > releasing built images goes beyond the infrastructure requirements. It's the
> > message and the guarantees implied with the built product itself that are 
> > the
> > concern here. And I tend to agree with Doug that this might be a problem 
> > for us
> > as a community. Unfortunately, putting your name, Michal, as contact point 
> > is
> > not enough. Kolla is not the only project producing container images and we 
> > need
> > to be consistent in the way we release these images.
> > 
> > Nothing prevents people from building their own images and uploading them to
> > dockerhub. Having this as part of OpenStack's pipeline is a problem.
> 
> I totally subscribe to the concerns around publishing binaries (under
> any form), and the expectations in terms of security maintenance that it
> would set on the publisher. At the same time, we need to have images
> available, for convenience and testing. So what is the best way to
> achieve that without setting strong security maintenance expectations
> for the OpenStack community ? We have several options:
> 
> 1/ Have third-parties publish images
> It is the current situation. The issue is that the Kolla team (and
> likely others) would rather automate the process and use OpenStack
> infrastructure for it.
> 
> 2/ Have third-parties publish images, but through OpenStack infra
> This would allow to automate the process, but it would be a bit weird to
> use common infra resources to publish in a private repo.
> 
> 3/ Publish transient (per-commit or daily) images
> A "daily build" (especially if you replace it every day) would set
> relatively-limited expectations in terms of maintenance. It would end up
> picking up security updates in upstream layers, even if not immediately.
> 
> 4/ Publish images and own them
> Staff release / VMT / stable team in a way that lets us properly own
> those images and publish them officially.
> 
> Personally I think (4) is not realistic. I think we could make (3) work,
> and I prefer it to (2). If all else fails, we should keep (1).
> 

At the forum we talked about putting test images on a "private"
repository hosted on openstack.org somewhere. I think that's option
3 from your list?

Paul may be able to shed more light on the details of the technology
(maybe it's just an Apache-served repo, rather than a full-blown
instance of Docker's registry service, for example).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Steven Dake (stdake)
Dims,

The [tc] tag was in the subject, the message was represented as indicating
some TC directive, and several TC members have commented on the
thread.  I did nothing wrong.

Regards
-steve


-Original Message-
From: Davanum Srinivas 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 16, 2017 at 4:34 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

Why drag TC into this discussion Steven? If the TC has something to
say, it will be in the form of a resolution with topic "formal-vote".
So please Stop!

Thanks,
Dims

On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake)  
wrote:
> Flavio,
>
> Forgive the top post – outlook ftw.
>
> I understand the concerns raised in this thread.  It is unclear if this 
thread is the feeling of two TC members or enough TC members care deeply about 
this issue to permanently limit OpenStack big tent projects’ ability to 
generate container images in various external artifact storage systems.  The 
point of discussion I see effectively raised in this thread is “OpenStack infra 
will not push images to dockerhub”.
>
> I’d like clarification if this is a ruling from the TC, or simply an 
exploratory discussion.
>
> If it is exploratory, it is prudent that OpenStack projects not be 
blocked by debate on this issue until the TC has made such ruling as to banning 
the creation of container images via OpenStack infrastructure.
>
> Regards
> -steve
>
> -Original Message-
> From: Flavio Percoco 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

> Date: Monday, May 15, 2017 at 7:00 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 

> Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?
>
> On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 04:22 +, Steven Dake (stdake) wrote:

Flavio,

Forgive the top post – outlook ftw.

I understand the concerns raised in this thread.  It is unclear whether this
thread reflects the feeling of two TC members, or whether enough TC members
care deeply about this issue to permanently limit OpenStack big tent projects’
ability to generate container images in various external artifact storage
systems.  The point of discussion I see effectively raised in this thread is
“OpenStack infra will not push images to dockerhub”.

I’d like clarification if this is a ruling from the TC, or simply an 
exploratory discussion.

If it is exploratory, it is prudent that OpenStack projects not be blocked by 
debate on this issue until the TC has made such ruling as to banning the 
creation of container images via OpenStack infrastructure.


Hey Steven,

It's nothing to do with the TC. It's a release management concern and I just
happen to have an opinion on it. :)

As Doug mentioned, OpenStack has (almost) never released binaries in any form.
This doesn't mean we can't revisit this "rule" but until that happens, the
concern stands.

Flavio


Regards
-steve

-Original Message-
From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 7:00 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

   On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
   >On 15 May 2017 at 12:12, Doug Hellmann  wrote:

   [huge snip]

   >>> > I'm raising the issue here to get some more input into how to
   >>> > proceed. Do other people think this concern is overblown? Can we
   >>> > mitigate the risk by communicating through metadata for the images?
   >>> > Should we stick to publishing build instructions (Dockerfiles, or
   >>> > whatever) instead of binary images? Are there other options I haven't
   >>> > mentioned?
   >>>
   >>> Today we do publish build instructions, that's what Kolla is. We also
   >>> publish built containers already, just we do it manually on release
   >>> today. If we decide to block it, I assume we should stop doing that
   >>> too? That will hurt users who use this piece of Kolla, and I'd hate
   >>> to hurt our users:(
   >>
   >> Well, that's the question. Today we have teams publishing those
   >> images themselves, right? And the proposal is to have infra do it?
   >> That change could be construed to imply that there is more of a
   >> relationship with the images and the rest of the community (remember,
   >> folks outside of the main community activities do not always make
   >> the same distinctions we do about teams). So, before we go ahead
   >> with that, I want to make sure that we all have a chance to discuss
   >> the policy change and its implications.
   >
   >Infra as in a VM running within infra, but the team publishing can be the
   >Kolla team. I assume we'll be responsible for keeping these images healthy...

   I think this is the gist of the concern and I'd like us to focus on it.

   As someone that used to consume these images from kolla's dockerhub account
   directly, I can confirm they are useful. However, I do share Doug's concern 
and
   the impact this may have on the community.

   From a release perspective, as Doug mentioned, we've avoided releasing 
projects
   in any kind of built form. This was also one of the concerns I raised when
   working on the proposal to support other programming languages. The problem 
of
   releasing built images goes beyond the infrastructure requirements. It's the
   message and the guarantees implied with the built product itself that are the
   concern here. And I tend to agree with Doug that this might be a problem for 
us
   as a community. Unfortunately, putting your name, Michal, as contact point is
   not enough. Kolla is not the only project producing container images and we 
need
   to be consistent in the way we release these images.

   Nothing prevents people from building their own images and uploading them to
   dockerhub. Having this as part of OpenStack's pipeline is a problem.

   Flavio

   P.S: note this goes against my container(ish) interests but it's a
   community-wide problem.

   --
   @flaper87
   Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco



Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Waines, Greg
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box type 
monitoring thru libvirt events.
Is that correct ?
i.e. you do not do any intrusive type monitoring of the instance thru the QEMU 
Guest Agent facility
   correct ?

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.

Greg.

From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 15, 2017 at 9:36 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Hi Greg,

In Masakari [0] for VMHA, we have already implemented a somewhat
similar function in masakari-monitors.
Masakari-monitors runs on the nova-compute node, and monitors host,
process or instance failures.
The Masakari instance monitor has similar functionality to what you
have described.
Please see [1] for more details on instance monitoring.
[0] https://wiki.openstack.org/wiki/Masakari
[1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

Once masakari-monitors detects failures, it sends notifications to
masakari-api to take appropriate recovery actions to recover that VM
from failures.
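Conceptually, the instance-monitor half of this is just heartbeat bookkeeping: record when each VM was last seen healthy, and report the ones that go silent so a notifier can ask the API (masakari-api in the real system) to trigger recovery. A minimal, purely illustrative Python sketch — all names here are hypothetical, this is not Masakari code:

```python
import time


class HeartbeatMonitor:
    """Toy sketch of heartbeat-based instance monitoring (not Masakari code).

    Instances report heartbeats; anything silent for longer than
    `timeout` seconds is reported as failed.
    """

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, instance_id, now=None):
        # Record a heartbeat; `now` is injectable so the logic is testable.
        self.last_seen[instance_id] = time.monotonic() if now is None else now

    def failed(self, now=None):
        # Return the instances whose last heartbeat is older than the timeout.
        now = time.monotonic() if now is None else now
        return sorted(i for i, t in self.last_seen.items()
                      if now - t > self.timeout)


monitor = HeartbeatMonitor(timeout=30)
monitor.beat('vm-1', now=0)
monitor.beat('vm-2', now=20)
print(monitor.failed(now=40))  # vm-1 missed its window -> ['vm-1']
```

The real monitors are of course event-driven (libvirt lifecycle events) rather than polling a dictionary, but the detect-then-notify split is the same.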




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Tue, May 16 2017, Andreas Jaeger wrote:

> It is needed to generate the translations, but can't we move it for
> oslo-i18n into test-requirements?

I've pushed this and it seems to work, pretty sure it's safe.

  https://review.openstack.org/#/c/465014/

If we can merge this today and then release quickly after that'd be a
great help -_-

> But os-testr does not need Babel at all - let's remove it,
> https://review.openstack.org/465023

Arf, sure!
I can only +1 though :(

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Adam Spiers

Afek, Ifat (Nokia - IL/Kfar Sava)  wrote:

On 16/05/2017, 4:36, "Sam P"  wrote:

   Hi Greg,

In Masakari [0] for VMHA, we have already implemented a somewhat
   similar function in masakari-monitors.
Masakari-monitors runs on the nova-compute node, and monitors host,
   process or instance failures.
The Masakari instance monitor has similar functionality to what you
   have described.
Please see [1] for more details on instance monitoring.
[0] https://wiki.openstack.org/wiki/Masakari
[1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

Once masakari-monitors detects failures, it sends notifications to
   masakari-api to take appropriate recovery actions to recover that VM
   from failures.


You can also find out more about our architectural plans by watching
this talk which Sampath and I gave in Boston:

  
https://www.openstack.org/videos/boston-2017/high-availability-for-instances-moving-to-a-converged-upstream-solution

The slides are here:

  https://aspiers.github.io/openstack-summit-2017-boston-compute-ha/

We didn't go into much depth on monitoring and recovery of individual
VMs, but as Sampath explained, Masakari already handles both of these.


Hi Greg, Sam,

As Vitrage is about correlating alarms that come from different
sources, and is not a monitor by itself – I think that it can benefit
from information retrieved by both Masakari and Zabbix monitors.

Zabbix is already integrated into Vitrage. I don’t know if there are
specific tests for VM heartbeat, but I think it is very likely that
there are.  Regarding Masakari – looking at your documents, I believe
that integrating your monitoring information into Vitrage could be
quite straightforward.


Yes, this makes sense.  Masakari already cleanly decouples
monitoring/alerting from automated recovery, so it could support this
quite nicely.  And the modular converged architecture we explained in
the presentation will maintain that clean separation of
responsibilities whilst integrating Masakari together with other
components such as Pacemaker, Mistral, and maybe Vitrage too.

For example whilst so far this thread has been about VM instance
monitoring, another area where Vitrage could integrate with Masakari
is compute host monitoring.

If you watch this part of our presentation where we explained the next
generation architecture, you'll see that we propose a new
"nova-host-alerter" component which has a driver-based mechanism for
alerting different services when a compute host experiences a failure:

   https://youtu.be/YPKE1guti8E?t=32m43s

So one obvious possibility would be to add a driver for Vitrage, so
that Vitrage can be alerted when Pacemaker spots a host failure.

Similarly, we could extend Pacemaker configurations to alert Vitrage
when individual processes such as nova-compute or libvirtd fail.

If you would like to discuss any of this further or have any more
questions, in addition to this mailing list we are also available to
talk on the #openstack-ha IRC channel!

Cheers,
Adam

P.S. I've added the [HA] badge to this thread since this discussion is
definitely related to high availability.



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Flavio Percoco wrote:
> From a release perspective, as Doug mentioned, we've avoided releasing 
> projects
> in any kind of built form. This was also one of the concerns I raised when
> working on the proposal to support other programming languages. The problem of
> releasing built images goes beyond the infrastructure requirements. It's the
> message and the guarantees implied with the built product itself that are the
> concern here. And I tend to agree with Doug that this might be a problem for 
> us
> as a community. Unfortunately, putting your name, Michal, as contact point is
> not enough. Kolla is not the only project producing container images and we 
> need
> to be consistent in the way we release these images.
> 
> Nothing prevents people from building their own images and uploading them to
> dockerhub. Having this as part of OpenStack's pipeline is a problem.

I totally subscribe to the concerns around publishing binaries (under
any form), and the expectations in terms of security maintenance that it
would set on the publisher. At the same time, we need to have images
available, for convenience and testing. So what is the best way to
achieve that without setting strong security maintenance expectations
for the OpenStack community ? We have several options:

1/ Have third-parties publish images
It is the current situation. The issue is that the Kolla team (and
likely others) would rather automate the process and use OpenStack
infrastructure for it.

2/ Have third-parties publish images, but through OpenStack infra
This would allow us to automate the process, but it would be a bit weird to
use common infra resources to publish in a private repo.

3/ Publish transient (per-commit or daily) images
A "daily build" (especially if you replace it every day) would set
relatively-limited expectations in terms of maintenance. It would end up
picking up security updates in upstream layers, even if not immediately.

4/ Publish images and own them
Staff the release / VMT / stable teams in a way that lets us properly own
those images and publish them officially.

Personally I think (4) is not realistic. I think we could make (3) work,
and I prefer it to (2). If all else fails, we should keep (1).

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Afek, Ifat (Nokia - IL/Kfar Sava)


On 16/05/2017, 4:36, "Sam P"  wrote:

Hi Greg,

 In Masakari [0] for VMHA, we have already implemented a somewhat
similar function in masakari-monitors.
 Masakari-monitors runs on the nova-compute node, and monitors host,
process or instance failures.
 The Masakari instance monitor has similar functionality to what you
have described.
 Please see [1] for more details on instance monitoring.
 [0] https://wiki.openstack.org/wiki/Masakari
 [1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

 Once masakari-monitors detects failures, it sends notifications to
masakari-api to take appropriate recovery actions to recover that VM
from failures.

 
Hi Greg, Sam,

As Vitrage is about correlating alarms that come from different sources, and is 
not a monitor by itself – I think that it can benefit from information 
retrieved by both Masakari and Zabbix monitors. 

Zabbix is already integrated into Vitrage. I don’t know if there are specific 
tests for VM heartbeat, but I think it is very likely that there are. 
Regarding Masakari – looking at your documents, I believe that integrating your 
monitoring information into Vitrage could be quite straightforward. 

Best Regards,
Ifat.




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Davanum Srinivas
Why drag TC into this discussion Steven? If the TC has something to
say, it will be in the form of a resolution with topic "formal-vote".
So please Stop!

Thanks,
Dims

On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake)  wrote:
> Flavio,
>
> Forgive the top post – outlook ftw.
>
> I understand the concerns raised in this thread.  It is unclear whether this 
> thread reflects the feeling of two TC members, or whether enough TC members 
> care deeply about this issue to permanently limit OpenStack big tent 
> projects’ ability to generate container images in various external artifact 
> storage systems.  The point of discussion I see effectively raised in this 
> thread is “OpenStack infra will not push images to dockerhub”.
>
> I’d like clarification if this is a ruling from the TC, or simply an 
> exploratory discussion.
>
> If it is exploratory, it is prudent that OpenStack projects not be blocked by 
> debate on this issue until the TC has made such ruling as to banning the 
> creation of container images via OpenStack infrastructure.
>
> Regards
> -steve
>
> -Original Message-
> From: Flavio Percoco 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Monday, May 15, 2017 at 7:00 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
>
> On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> >On 15 May 2017 at 12:12, Doug Hellmann  wrote:
>
> [huge snip]
>
> >>> > I'm raising the issue here to get some more input into how to
> >>> > proceed. Do other people think this concern is overblown? Can we
> >>> > mitigate the risk by communicating through metadata for the images?
> >>> > Should we stick to publishing build instructions (Dockerfiles, or
> >>> > whatever) instead of binary images? Are there other options I 
> haven't
> >>> > mentioned?
> >>>
> >>> Today we do publish build instructions, that's what Kolla is. We also
> >>> publish built containers already, just we do it manually on release
> >>> today. If we decide to block it, I assume we should stop doing that
> >>> too? That will hurt users who use this piece of Kolla, and I'd hate
> >>> to hurt our users:(
> >>
> >> Well, that's the question. Today we have teams publishing those
> >> images themselves, right? And the proposal is to have infra do it?
> >> That change could be construed to imply that there is more of a
> >> relationship with the images and the rest of the community (remember,
> >> folks outside of the main community activities do not always make
> >> the same distinctions we do about teams). So, before we go ahead
> >> with that, I want to make sure that we all have a chance to discuss
> >> the policy change and its implications.
> >
> >Infra as in a VM running within infra, but the team publishing can be the
> >Kolla team. I assume we'll be responsible for keeping these images healthy...
>
> I think this is the gist of the concern and I'd like us to focus on it.
>
> As someone that used to consume these images from kolla's dockerhub 
> account
> directly, I can confirm they are useful. However, I do share Doug's 
> concern and
> the impact this may have on the community.
>
> From a release perspective, as Doug mentioned, we've avoided releasing 
> projects
> in any kind of built form. This was also one of the concerns I raised when
> working on the proposal to support other programming languages. The 
> problem of
> releasing built images goes beyond the infrastructure requirements. It's 
> the
> message and the guarantees implied with the built product itself that are 
> the
> concern here. And I tend to agree with Doug that this might be a problem 
> for us
> as a community. Unfortunately, putting your name, Michal, as contact 
> point is
> not enough. Kolla is not the only project producing container images and 
> we need
> to be consistent in the way we release these images.
>
> Nothing prevents people from building their own images and uploading them
> to dockerhub. Having this as part of OpenStack's pipeline is a problem.
>
> Flavio
>
> P.S: note this goes against my container(ish) interests but it's a
> community-wide problem.
>
> --
> @flaper87
> Flavio Percoco
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Steven Hardy
On Tue, May 16, 2017 at 04:33:33AM +, Dnyaneshwar Pawar wrote:
> Hi TripleO team,
> 
> I am trying to apply custom configuration to an existing overcloud. (using 
> openstack overcloud deploy command)
> Though there is no error, the configuration is not applied to the overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/

In your paste you have the resource_registry like this:

OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

The problem is OS::TripleO::ControllerServer isn't a resource type we use,
e.g. it's not a valid hook to enable additional node configuration.

Instead try something like this:

OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml

Which will run the script on all nodes, as documented here:

https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

Also in your template the type: OS::Heat::SoftwareDeployment should be
either type: OS::Heat::SoftwareDeployments (as in the docs) or type:
OS::Heat::SoftwareDeploymentGroup (the newer name for SoftwareDeployments,
we should switch the docs to that..).
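For reference, a minimal version of what such an extra-config template could look like — paths and the script body are illustrative placeholders, not the original poster's template:

```yaml
# Illustrative NodeExtraConfigPost template sketch; adapt to your deployment.
heat_template_version: ocata

parameters:
  servers:
    type: json   # TripleO passes the deployed server IDs in here

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "extra post-deploy configuration ran" > /tmp/extra_config_ran

  ExtraDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
```

It would then be registered via `OS::TripleO::NodeExtraConfigPost` in the resource_registry, as described above.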

Hope that helps!

-- 
Steve Hardy
Red Hat Engineering, Cloud



Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Sean Dague
On 05/15/2017 10:00 PM, Adrian Turjak wrote:
> 
> 
> On 16/05/17 13:29, Lance Bragstad wrote:
>>
>>
>> On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak
>> > wrote:

>> Based on the specs that are currently up in Keystone-specs, I
>> would highly recommend not doing this per user.
>>
>> The scenario I imagine is you have a sysadmin at a company who
>> created a ton of these for various jobs and then leaves. The
>> company then needs to keep his user account around, or create tons
>> of new API keys, and then disable his user once all the scripts he
>> had keys for are replaced. Or more often than not, disable his
>> user and then cry as everything breaks and no one really knows why
>> or no one fully documented it all, or didn't read the docs.
>> Keeping them per project and unrelated to the user makes more
>> sense, as then someone else on your team can regenerate the
>> secrets for the specific Keys as they want. Sure we can advise
>> them to use generic user accounts within which to create these API
>> keys but that implies password sharing which is bad.
>>
>>
>> That said, I'm curious why we would make these as a thing separate
>> to users. In reality, if you can create users, you can create API
>> specific users. Would this be a different authentication
>> mechanism? Why? Why not just continue the work on better access
>> control and let people create users for this. Because let's be
>> honest, isn't a user already an API key? The issue (and the Ron's
>> spec mentions this) is a user having too much access, how would
>> this fix that when the issue is that we don't have fine grained
>> policy in the first place? How does a new auth mechanism fix that?
>> Both specs mention roles so I assume it really doesn't. If we had
>> fine grained policy we could just create users specific to a
>> service with only the roles it needs, and the same problem is
>> solved without any special API, new auth, or different 'user-lite'
>> object model. It feels like this is trying to solve an issue that
>> is better solved by fixing the existing problems.
>>
>> I like the idea behind these specs, but... I'm curious what
>> exactly they are trying to solve. Not to mention if you wanted to
>> automate anything larger such as creating sub-projects and setting
>> up a basic network for each new developer to get access to your
>> team, this wouldn't work unless you could have your API key
>> inherit to subprojects or something more complex, at which point
>> they may as well be users. Users already work for all of this, why
>> reinvent the wheel when really the issue isn't the wheel itself,
>> but the steering mechanism (access control/policy in this case)?
>>
>>
>> All valid points, but IMO the discussions around API keys didn't set
>> out to fix deep-rooted issues with policy. We have several specs in
>> flight across projects to help mitigate the real issues with policy
>> [0] [1] [2] [3] [4].
>>
>> I see an API key implementation as something that provides a cleaner
>> fit and finish once we've addressed the policy bits. It's also a
>> familiar concept for application developers, which was the use case
>> the session was targeting.
>>
>> I probably should have laid out the related policy work before jumping
>> into API keys. We've already committed a bunch of keystone resource to
>> policy improvements this cycle, but I'm hoping we can work API keys
>> and policy improvements in parallel.
>>
>> [0] https://review.openstack.org/#/c/460344/
>> [1] https://review.openstack.org/#/c/462733/
>> [2] https://review.openstack.org/#/c/464763/
>> [3] https://review.openstack.org/#/c/433037/
>> [4] https://review.openstack.org/#/c/427872/
>>
> I'm well aware of the policy work, and it is fantastic to see it
> progressing! I can't wait to actually be able to play with that stuff!
> We've been painstakingly tweaking the json policy files which is a giant
> mess.
> 
> I'm just concerned that this feels like a feature we don't really need
> when really it's just a slight variant of a user with a new auth model
> (that is really just another flavour of username/password). The sole
> reason most of the other cloud services have API keys is because a user
> can't talk to the API directly. OpenStack does not have that problem,
> users are API keys. So I think what we really need to consider is what
> exact benefit does API keys actually give us that won't be solved with
> users and better policy?

The benefit of API keys comes when they work the same across all deployments,
so your applications can depend on them working. That means the application
has to be able to:

1. provision an API Key with normal user credentials
2. set/reduce permissions on it with those same user credentials
3. operate with those credentials at the project level (so that 
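To make the shape of that flow concrete, here is a deliberately simplified sketch of a project-scoped key store covering steps 1 and 2. Every name here is hypothetical — this is not a proposed Keystone API, just an illustration of keys living at the project level rather than on a user, as argued above:

```python
import secrets
from dataclasses import dataclass, field


@dataclass
class APIKey:
    # Hypothetical model: keys belong to a project, not a user, so a
    # departing sysadmin's account can be disabled without breaking jobs.
    project_id: str
    name: str
    roles: set = field(default_factory=set)
    secret: str = field(default_factory=lambda: secrets.token_urlsafe(32))


class KeyStore:
    def __init__(self):
        self._keys = {}

    def provision(self, project_id, name, roles):        # step 1
        key = APIKey(project_id, name, set(roles))
        self._keys[(project_id, name)] = key
        return key

    def reduce_roles(self, project_id, name, roles):     # step 2
        key = self._keys[(project_id, name)]
        key.roles &= set(roles)                          # can only shrink
        return key

    def rotate(self, project_id, name):
        # Anyone on the team can regenerate the secret, no password sharing.
        key = self._keys[(project_id, name)]
        key.secret = secrets.token_urlsafe(32)
        return key


store = KeyStore()
k = store.provision('proj-1', 'ci-job', {'member', 'reader'})
store.reduce_roles('proj-1', 'ci-job', {'reader'})
print(sorted(k.roles))  # ['reader']
```

Whether such keys deserve their own auth mechanism, or are "just users" under better policy, is exactly the open question in this thread.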

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 12:10, Julien Danjou wrote:
> On Tue, May 16 2017, Andreas Jaeger wrote:
> 
>> what exactly happened with Babel?
>>
>> I see in global-requirements the following:
>> Babel>=2.3.4,!=2.4.0  # BSD
>>
>> that shouldn't cause a problem - or does it? Or what's the problem?
> 
> Damn, at the moment I pressed the `Sent' button I thought "You just
> complained without including much detail idiot". Sorry about that!

no worries.

> One of the log that fails:
> 
>  
> http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html
> 
> 
> Basically oslo.policy pulls oslo.i18n which pulls Babel!=2.4.0
> But Babel is already pulled by os-testr which depends on >=2.3.4.

and os-testr is not importing global-requirements:
https://review.openstack.org/#/c/454511/

> So pip does not solve that (unfortunately) and then the failure is:
> 
> 2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
> ContextualVersionConflict: (Babel 2.4.0
> (/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
> Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))
> 
> I'm pretty sure Babel should not even be in the requirements list of
> oslo.i18n since it's not a runtime dependency AFAIU.

It is needed to generate the translations, but can't we move it for
oslo-i18n into test-requirements?

But os-testr does not need Babel at all - let's remove it,
https://review.openstack.org/465023

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [infra][all] etcd tarballs for CI use

2017-05-16 Thread Davanum Srinivas
Jesse,

Great question :) We need the version that has the grpc gateway v3alpha API:
https://github.com/coreos/etcd/pull/5669

We want to standardize on the etcd v3 API (to avoid migrating data
from /v2 to /v3). Unfortunately the v3 API is gRPC based and has
trouble with eventlet based processes. So we need the /v3alpha HTTP
API. You can see the prior discussion and list of bugs from Jay in
https://review.openstack.org/#/c/446983/

the etcd in xenial is 2.x which does not have either the gRPC v3 or
the gRPC+gateway HTTP API.
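For context, the gRPC gateway exposes the v3 KV API over plain HTTP with JSON bodies whose keys and values are base64-encoded (the /v3alpha endpoints mentioned above — treat the exact path as version-specific). That encoding is what lets an eventlet-based client avoid the gRPC stack entirely; a small sketch of building such a request body:

```python
import base64
import json


def kv_put_payload(key, value):
    """Build the JSON body an etcd v3 HTTP gateway put expects.

    The gateway takes base64-encoded key/value bytes, so a plain HTTP
    client can talk v3 without any gRPC dependency.
    """
    def b64(s):
        return base64.b64encode(s.encode('utf-8')).decode('ascii')

    return json.dumps({'key': b64(key), 'value': b64(value)}, sort_keys=True)


print(kv_put_payload('foo', 'bar'))
# {"key": "Zm9v", "value": "YmFy"}
```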

Thanks,
Dims

On Tue, May 16, 2017 at 5:02 AM, Jesse Pretorius
 wrote:
> On 5/15/17, 11:20 PM, "Davanum Srinivas"  wrote:
>
>> At this moment, though Fedora has 3.1.7 [1], Xenial is way too old, So
>> we will need to pull down tar balls from either [2] or [3]. proposing
>> backports is a possibility, but then we need some flexibility if we
>> end up picking up some specific version (say 3.0.17 vs 3.1.7). So a
>> download location would be good to have so we can request infra to
>> push versions we can experiment with.
>
> Hi Dims,
>
> I can’t help but ask - how old is too old? By what measure are we saying
> something is too old?
>
> I think we need to be careful with what we do here and ensure that the
> distribution partners we have are on board with the criteria and whether
> they’re ready to include more recent package versions in their extra
> archives (eg: RDO / UCA).
>
> As developers we want the most recent things because reasons… but
> distributions and operators are then stuck with increased complexity in
> their release and quality management processes.
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Tue, May 16 2017, Andreas Jaeger wrote:

> what exactly happened with Babel?
>
> I see in global-requirements the following:
> Babel>=2.3.4,!=2.4.0  # BSD
>
> that shouldn't cause a problem - or does it? Or what's the problem?

Damn, the moment I pressed the `Send' button I thought "You just
complained without including much detail, idiot". Sorry about that!

One of the logs that fail:

 
http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html


Basically oslo.policy pulls in oslo.i18n, which pins Babel!=2.4.0.
But Babel is already pulled in by os-testr, which only requires >=2.3.4.
pip does not resolve that conflict (unfortunately), and the failure is:

2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
ContextualVersionConflict: (Babel 2.4.0
(/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))

I'm pretty sure Babel should not even be in the requirements list of
oslo.i18n since it's not a runtime dependency AFAIU.
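The conflict is mechanical enough to sketch. Below is a toy, stdlib-only illustration (the helper name and the string-equality check are mine, not pip's internals); the point is that pip's 2017-era installer keeps whichever Babel it installed first and only notices the stricter pin at import time:

```python
# Toy illustration of the failure mode: os-testr's loose pin (Babel>=2.3.4)
# lets pip install Babel 2.4.0 first; oslo.i18n's stricter pin
# (Babel!=2.4.0,>=2.3.4) is then violated, raising ContextualVersionConflict
# at runtime rather than at install time.

def excluded(installed, pins):
    """Return the pins the installed version violates.

    Only handles exact-exclusion (!=) pins via string equality; a real
    resolver compares parsed versions against every operator.
    """
    return [p for p in pins if p.startswith("!=") and installed == p[2:]]


installed = "2.4.0"  # what os-testr's loose pin allowed pip to pick
conflicts = excluded(installed, ["!=2.4.0", ">=2.3.4"])
print(conflicts)  # ['!=2.4.0'] -> the ContextualVersionConflict seen in the log
```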

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




[openstack-dev] [networking-sfc] pep8 failing

2017-05-16 Thread Vikash Kumar
Hi Team,

  pep8 is failing on master. *Translation hint helpers* are removed
from LOG messages. Was this done on purpose? Let me know if it was not,
and I will change it.

./networking_sfc/db/flowclassifier_db.py:342:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing flow classifier.")
^
./networking_sfc/db/sfc_db.py:383:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port chain.")
^
./networking_sfc/db/sfc_db.py:526:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port pair.")
^
./networking_sfc/db/sfc_db.py:658:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port pair group.")
^
./networking_sfc/services/flowclassifier/driver_manager.py:38:9: N531  Log
messages require translation hints!
LOG.info("Configured Flow Classifier drivers: %s", names)
^
./networking_sfc/services/flowclassifier/driver_manager.py:44:9: N531  Log
messages require translation hints!
LOG.info("Loaded Flow Classifier drivers: %s",
^
./networking_sfc/services/flowclassifier/driver_manager.py:80:9: N531  Log
messages require translation hints!
LOG.info("Registered Flow Classifier drivers: %s",
^
./networking_sfc/services/flowclassifier/driver_manager.py:87:13: N531  Log
messages require translation hints!
LOG.info("Initializing Flow Classifier driver '%s'",
^
./networking_sfc/services/flowclassifier/driver_manager.py:107:17: N531
Log messages require translation hints!
LOG.error(
^
./networking_sfc/services/flowclassifier/plugin.py:63:17: N531  Log
messages require translation hints!
LOG.error("Create flow classifier failed, "
^
./networking_sfc/services/flowclassifier/plugin.py:87:17: N531  Log
messages require translation hints!
LOG.error("Update flow classifier failed, "
^
./networking_sfc/services/flowclassifier/plugin.py:102:17: N531  Log
messages require translation hints!
LOG.error("Delete flow classifier failed, "
^
./networking_sfc/services/sfc/driver_manager.py:38:9: N531  Log messages
require translation hints!
LOG.info("Configured SFC drivers: %s", names)
^
./networking_sfc/services/sfc/driver_manager.py:43:9: N531  Log messages
require translation hints!
LOG.info("Loaded SFC drivers: %s", self.names())
^
./networking_sfc/services/sfc/driver_manager.py:78:9: N531  Log messages
require translation hints!
LOG.info("Registered SFC drivers: %s",
^
./networking_sfc/services/sfc/driver_manager.py:85:13: N531  Log messages
require translation hints!
LOG.info("Initializing SFC driver '%s'", driver.name)
^
./networking_sfc/services/sfc/driver_manager.py:104:17: N531  Log messages
require translation hints!
LOG.error(
^
./networking_sfc/services/sfc/plugin.py:57:17: N531  Log messages require
translation hints!
LOG.error("Create port chain failed, "
^
./networking_sfc/services/sfc/plugin.py:82:17: N531  Log messages require
translation hints!
LOG.error("Update port chain failed, port_chain '%s'",
^
./networking_sfc/services/sfc/plugin.py:97:17: N531  Log messages require
translation hints!
LOG.error("Delete port chain failed, portchain '%s'",
^
./networking_sfc/services/sfc/plugin.py:122:17: N531  Log messages require
translation hints!
LOG.error("Create port pair failed, "
^
./networking_sfc/services/sfc/plugin.py:144:17: N531  Log messages require
translation hints!
LOG.error("Update port pair failed, port_pair '%s'",
^
./networking_sfc/services/sfc/plugin.py:159:17: N531  Log messages require
translation hints!
LOG.error("Delete port pair failed, port_pair '%s'",
^
./networking_sfc/services/sfc/plugin.py:185:17: N531  Log messages require
translation hints!
LOG.error("Create port pair group failed, "
^
./networking_sfc/services/sfc/plugin.py:213:17: N531  Log messages require
translation hints!
LOG.error("Update port pair group failed, "
^
./networking_sfc/services/sfc/plugin.py:229:17: N531  Log messages require
translation hints!
LOG.error("Delete port pair group failed, "
^
./networking_sfc/services/sfc/agent/extensions/sfc.py:111:13: N531  Log
messages require translation hints!
LOG.error("SFC L2 extension handle_port failed")
^
./networking_sfc/services/sfc/agent/extensions/sfc.py:124:9: N531  Log
messages require translation hints!
LOG.info("a device %s is removed", port_id)
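If the removal was unintentional, the conventional fix is to wrap each log literal in a translation marker such as oslo.i18n's `_()`. A minimal stdlib sketch of the idea (gettext stands in for oslo.i18n here; the function and message are illustrative, not from networking-sfc):

```python
import gettext

# Stand-in for oslo.i18n's _() marker: NullTranslations returns the message
# unchanged, which is exactly what _() does when no catalog is installed.
_ = gettext.NullTranslations().gettext


def log_message(resource):
    # Before (flagged by N531): LOG.info("Deleting a non-existing %s.")
    # After: wrap the literal so the hacking check sees a translation hint.
    return _("Deleting a non-existing %s.") % resource


print(log_message("flow classifier"))  # Deleting a non-existing flow classifier.
```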
 

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 11:42, Julien Danjou wrote:
> On Wed, Apr 19 2017, Julien Danjou wrote:
> 
>> So the Gnocchi gate is all broken (again) because it depends on "pbr" and
>> some new release of oslo.* depends on pbr!=2.1.0.
> 
> The same thing happened today with Babel. As far as Gnocchi is concerned,
> we're going to take the easiest route and remove all our oslo
> dependencies over the next months in favor of sanely maintained
> alternatives.

what exactly happened with Babel?

I see in global-requirements the following:
Babel>=2.3.4,!=2.4.0  # BSD

that shouldn't cause a problem - or does it? Or what's the problem?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Luigi Toscano
On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> 
> > On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > > I'm raising the issue here to get some more input into how to
> > > proceed. Do other people think this concern is overblown? Can we
> > > mitigate the risk by communicating through metadata for the images?
> > > Should we stick to publishing build instructions (Dockerfiles, or
> > > whatever) instead of binary images? Are there other options I haven't
> > > mentioned?
> > 
> > Today we do publish build instructions, that's what Kolla is. We also
> > publish built containers already, just we do it manually on release
> > today. If we decide to block it, I assume we should stop doing that
> > too? That will hurt users who use this piece of Kolla, and I'd hate
> > to hurt our users:(
> 
> Well, that's the question. Today we have teams publishing those
> images themselves, right? And the proposal is to have infra do it?
> That change could be construed to imply that there is more of a
> relationship with the images and the rest of the community (remember,
> folks outside of the main community activities do not always make
> the same distinctions we do about teams). So, before we go ahead
> with that, I want to make sure that we all have a chance to discuss
> the policy change and its implications.

Sorry for hijacking the thread, but we have a similar scenario in Sahara,
for example. It is about full VM images containing Hadoop/Spark/other_big_data
stuff, and not containers, but it looks really much the same.
So far, ready-made images have been published under
http://sahara-files.mirantis.com/images/upstream/, but we are looking to have
them hosted on openstack.org, just like other artifacts.

We asked about this a few days ago on openstack-infra@, but no answer so far 
(the Summit didn't help):

http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html

I think that the answer to the question raised in this thread is definitely 
going to be relevant for our use case.

Ciao
-- 
Luigi



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Wed, Apr 19 2017, Julien Danjou wrote:

> So the Gnocchi gate is all broken (again) because it depends on "pbr" and
> some new release of oslo.* depends on pbr!=2.1.0.

The same thing happened today with Babel. As far as Gnocchi is concerned,
we're going to take the easiest route and remove all our oslo
dependencies over the next months in favor of sanely maintained
alternatives.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Dnyaneshwar Pawar
Hi Marios,
Thanks for your reply.
I referred to the example mentioned at
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html,
but it is failing with the error shown at http://paste.openstack.org/show/609644/


Regards,
Dnyaneshwar

From: Marios Andreou 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 16, 2017 at 11:59 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tripleo] Issue while applying customs 
configuration to overcloud.



On Tue, May 16, 2017 at 7:33 AM, Dnyaneshwar Pawar 
> wrote:
Hi TripleO team,

I am trying to apply custom configuration to an existing overcloud (using 
the openstack overcloud deploy command).
Though there is no error, the configuration is not applied to the overcloud.
Am I missing anything here?
http://paste.openstack.org/show/609619/




[stack@h-uc test]$ cat tripleo_ocata.yaml

resource_registry:

  OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml



^^^ this bit won't work for you. The 'normal' ControllerServer points to 
'OS::TripleO::Server' and then 'OS::Nova::Server' 
https://github.com/openstack/tripleo-heat-templates/blob/66b39c2c21b6629222c0d212642156437119e977/overcloud-resource-registry-puppet.j2.yaml#L44-L47

You're overriding it with something that defines a 'normal' SoftwareConfig 
(afaics it is 'correct' heat template syntax fwiw), but I don't think it is 
going to run on any servers, and I'm surprised you don't get an error for the 
properties being passed in here: 
https://github.com/openstack/tripleo-heat-templates/blob/ef82c3a010cf6161f1da1020698dbd38257f5a12/puppet/controller-role.yaml#L168-L175

[stack@h-uc test]$ openstack overcloud deploy --templates -e  
tripleo_ocata.yaml 2>&1 |tee dny4.log



^^^ here be aware that you should re-specify all the environment files you used 
on the original deploy in addition to your customization environments at the 
end (tripleo_ocata.yaml). Otherwise you'll be getting all the defaults 
specified by the /usr/share/openstack-tripleo-heat-templates



Have you seen 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html?
There are some examples there that do what you want.

Instead of overriding the ControllerServer try "OS::TripleO::NodeUserData" for 
example



hope it helps




Thanks and Regards,
Dnyaneshwar



Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Ghanshyam Mann
+1. Nice work done by Fanglei, and good to have her on the team.

-gmann


On Tue, May 16, 2017 at 5:22 PM, Andrea Frittoli
 wrote:
> Hello team,
>
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
>
> Over the past two cycles Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and
> debug, fixing bugs and removing cruft. Both her code as well as her reviews
> demonstrate a
> very good understanding of Tempest internals and of the project future
> direction.
> I believe Fanglei will make an excellent addition to the team.
>
> As per the usual, if the current Tempest core team members would please vote
> +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.
>
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
>
> Thank you,
>
> Andrea (andreaf)
>


Re: [openstack-dev] [infra][all] etcd tarballs for CI use

2017-05-16 Thread Jesse Pretorius
On 5/15/17, 11:20 PM, "Davanum Srinivas"  wrote:

> At this moment, though Fedora has 3.1.7 [1], Xenial is way too old, so
> we will need to pull down tarballs from either [2] or [3]. Proposing
> backports is a possibility, but then we need some flexibility if we
> end up picking up some specific version (say 3.0.17 vs 3.1.7). So a
> download location would be good to have so we can request infra to
> push versions we can experiment with.

Hi Dims,

I can’t help but ask - how old is too old? By what measure are we saying
something is too old?

I think we need to be careful with what we do here and ensure that the
distribution partners we have are on board with the criteria and whether
they're ready to include more recent package versions in their extra
archives (e.g. RDO / UCA).

As developers we want the most recent things because reasons… but
distributions and operators are then stuck with increased complexity in
their release and quality management processes.
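The flexibility Dims asks for is essentially version pinning of a tarball URL. A small sketch of what an infra mirror job might parameterize (the GitHub release URL layout is the one etcd used circa 2017; treating it as an assumption here, since the actual download location is exactly what is under discussion):

```python
# Build the download URL for a pinned etcd release tarball.
# Assumed URL layout of etcd's GitHub releases (circa 2017); the version is
# the knob infra would flip when experimenting with e.g. 3.0.17 vs 3.1.7.
ETCD_VER = "v3.1.7"

url = ("https://github.com/coreos/etcd/releases/download/"
       "{v}/etcd-{v}-linux-amd64.tar.gz".format(v=ETCD_VER))
print(url)
```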






[openstack-dev] [tricircle]cancellation of weekly meeting(May 17) due to bug smash

2017-05-16 Thread joehuang
Hello, team,

The bug smash will be held May 17-19, so the weekly meeting of May 17 will be 
cancelled.

Best Regards
Chaoyi Huang (joehuang)


Re: [openstack-dev] [all] Consolidating web themes

2017-05-16 Thread Alexandra Settle
This all sounds really great ☺ thanks for taking it on board, Anne!

No questions at present ☺ looking forward to seeing the new design!

From: Anne Gentle 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 2:33 PM
To: "openstack-d...@lists.openstack.org" , 
OpenStack Development Mailing List 
Subject: [openstack-dev] [all] Consolidating web themes

Hi all,

I wanted to make you all aware of some consolidation efforts I'll be working on 
this release. You may have noticed a new logo for OpenStack, and perhaps you 
saw the update to the web design and headers on 
docs.openstack.org as well.

To continue these efforts, I'll also be working on having all docs pages use 
one theme, the openstackdocstheme, that has these latest updates. Currently we 
are using version 1.8.0, and I'll do more releases as we complete the UI 
consolidation.

I did an analysis to compare oslosphinx to openstackdocstheme, and I wanted to 
let this group know about the upcoming changes so you can keep an eye out for 
reviews. This effort will take a while, and I'd welcome help, of course.

There are a few UI items that I don't plan to port from oslosphinx to 
openstackdocstheme:

Quick search form in bottom of left-hand navigation bar (though I'd welcome a 
way to unify that UI and UX across the themes).
Previous topic and Next topic shown in left-hand navigation bar (these are 
available in the openstackdocstheme in a different location).
Return to project home page link in left-hand navigation bar. (also would 
welcome a design that fits well in the openstackdocstheme left-hand nav)
Customized list of links in header. For example, the page 
at https://docs.openstack.org/infra/system-config/ contains a custom header.
When a landing page like https://docs.openstack.org/infra/ uses oslosphinx, the 
page should be redesigned with the new theme in mind.

I welcome input on these changes, as I'm sure I haven't caught every scenario, 
and this is my first wider communication about the theme changes. The spec for 
this work is detailed here: 
http://specs.openstack.org/openstack/docs-specs/specs/pike/consolidating-themes.html

Let me know what I've missed, what you cannot live without, and please reach 
out if you'd like to help.

Thanks,
Anne

--
Technical Product Manager, Cisco Metacloud
annegen...@justwriteclick.com
@annegentle




[openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Andrea Frittoli
Hello team,

I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.

Over the past two cycles Fanglei has been steadily contributing to Tempest
and its community.
She's done a great deal of work in making Tempest code cleaner, easier to
read, maintain and
debug, fixing bugs and removing cruft. Both her code as well as her reviews
demonstrate a
very good understanding of Tempest internals and of the project future
direction.
I believe Fanglei will make an excellent addition to the team.

As per the usual, if the current Tempest core team members would please
vote +1
or -1(veto) to the nomination when you get a chance. We'll keep the polls
open
for 5 days or until everyone has voted.

References:
https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
https://review.openstack.org/#/q/reviewer:zhufl

Thank you,

Andrea (andreaf)


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Mehdi Abaakouk

+1 too, I haven't seen its contributors in a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco








--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




Re: [openstack-dev] [POC] Introduce an auto-converge policy to speedup migration

2017-05-16 Thread Chao Fan
Hi Chris,

Sorry for not Cc'ing you; I thought I had added the Cc.

Thanks,
Chao Fan

On Mon, May 15, 2017 at 01:30:39PM +0800, Chao Fan wrote:
>On Thu, May 11, 2017 at 02:34:16PM -0400, Chris Friesen wrote:
>>On 05/11/2017 05:58 AM, Chao Fan wrote:
>>> Hi all,
>>> 
>>> We plan to develop a policy about auto-converge, which can set cpu
>>> throttle value automatically according to the workload
>>> (dirty-pages-rate). It uses the API of libvirt to set the
>>> cpu-throttle-initial and cpu-throttle-increment.
>>> But the spec file of nova shows the dependent API is not accepted
>>> by OpenStack:
>>> 
>>> The initial decrease and increment size can be adjusted during
>>> the live migration process via the libvirt API. However these API calls
>>> are experimental so nova will not be using them.
>>> 
>>> So I am wondering if OpenStack is willing to use this API and accept
>>> the policy mentioned above.
>>
>>Just to clarify, as I understand it:
>>
>>1) You are pointing out that the auto-live-migration spec from Newton[1] says
>>that the libvirt APIs to set the initial and increment throttle values are
>>experimental and thus won't be using them.
>
>Hi Chris,
>
>Thank you for your reply, and really sorry for the delay. I did not
>notice this mail because I was not Cc'd.
>
>The spec file is cloned from https://github.com/openstack/nova-specs.git. 
>It looks same to your link.
>
>>
>>2) You are asking whether these APIs are now stable enough to be used in
>>nova, since you want to propose some mechanism to allow them to be changed.
>>
>>Is that accurate?
>
>Yes, your understanding is right.
>
>Thanks,
>Chao Fan
>
>>
>>Chris
>>
>>
>>[1] 
>>https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/auto-live-migration-completion.html
>>
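The policy being proposed, raising the CPU throttle as a function of the dirty-pages-rate, can be sketched in a few lines. This is a toy sketch only: the function name and the dirty-rate trigger are assumptions on my part, while the 20% initial and 10% increment values are QEMU's auto-converge defaults, not the libvirt API the spec declined to use:

```python
def next_throttle(current, dirty_rate, transfer_rate,
                  initial=20, increment=10):
    """Pick the next CPU throttle percentage for a migrating guest.

    Mirrors QEMU auto-converge behaviour: start at `initial`, and step up
    by `increment` while the guest dirties memory faster than migration
    can copy it, capped below 100%.
    """
    if current == 0:
        return initial                       # first throttle step
    if dirty_rate > transfer_rate:
        # Guest outpaces the copy: squeeze the vCPUs harder.
        return min(current + increment, 99)
    return current                           # converging; hold steady
```

A policy driver would feed this from live migration stats each iteration and apply the result through whatever throttle interface is available.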





[openstack-dev] [storlets] No team meeting today

2017-05-16 Thread Eran Rom
Hi All,
There will be no team meeting today.
As usual, if you have something please ping at #openstack-storlets

Thanks, 
Eran



Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Marios Andreou
On Tue, May 16, 2017 at 7:33 AM, Dnyaneshwar Pawar <
dnyaneshwar.pa...@veritas.com> wrote:

> Hi TripleO team,
>
> I am trying to apply custom configuration to an existing overcloud. (using
> openstack overcloud deploy command)
> Though there is no error, the configuration is not applied to the overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/
>
>
>

[stack@h-uc test]$ cat tripleo_ocata.yaml
resource_registry:
  OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

^^^ this bit won't work for you. The 'normal' ControllerServer points
to 'OS::TripleO::Server' and then 'OS::Nova::Server'
https://github.com/openstack/tripleo-heat-templates/blob/66b39c2c21b6629222c0d212642156437119e977/overcloud-resource-registry-puppet.j2.yaml#L44-L47

You're overriding it with something that defines a 'normal'
SoftwareConfig (afaics it is 'correct' heat template syntax fwiw), but
I don't think it is going to run on any servers, and I'm surprised you
don't get an error for the properties being passed in here:
https://github.com/openstack/tripleo-heat-templates/blob/ef82c3a010cf6161f1da1020698dbd38257f5a12/puppet/controller-role.yaml#L168-L175

[stack@h-uc test]$ openstack overcloud deploy --templates -e
tripleo_ocata.yaml 2>&1 |tee dny4.log


^^^ here be aware that you should re-specify all the environment files
you used on the original deploy in addition to your customization
environments at the end (tripleo_ocata.yaml). Otherwise you'll be
getting all the defaults specified by the
/usr/share/openstack-tripleo-heat-templates


Have you seen
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html?
There are some examples there that do what you want.

Instead of overriding the ControllerServer try
"OS::TripleO::NodeUserData" for example



hope it helps
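A minimal sketch of that suggestion, following the firstboot pattern in the extra_config docs. The file paths and script contents here are illustrative, not taken from the thread:

```yaml
# firstboot-env.yaml (hypothetical) -- map NodeUserData to a custom template
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/test/firstboot.yaml

# /home/stack/test/firstboot.yaml -- runs once on first boot of every node
heat_template_version: 2014-10-16
resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: one_config}
  one_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "hello from firstboot" > /tmp/firstboot.log
outputs:
  # NodeUserData templates must expose the userdata via OS::stack_id
  OS::stack_id:
    value: {get_resource: userdata}
```

Then pass `-e firstboot-env.yaml` (after the environment files from the original deploy) to `openstack overcloud deploy`.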





> Thanks and Regards,
> Dnyaneshwar
>

