Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Colleen Murphy
On Sun, May 14, 2017 at 6:59 PM, Monty Taylor  wrote:

> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>
>> Hey all,
>>
>> One of the Baremetal/VM sessions at the summit focused on what we need
>> to do to make OpenStack more consumable for application developers [0].
>> As a group we recognized the need for application specific passwords or
>> API keys and nearly everyone (above 85% is my best guess) in the session
>> thought it was an important thing to pursue. The API
>> key/application-specific password specification is up for review [1].
>>
>> The problem is that with all the recent churn in the keystone project,
>> we don't really have the capacity to commit to this for the cycle. As a
>> project, we're still working through what we've committed to for Pike
>> before the OSIC fallout. It was suggested that we reach out to the PWG
>> to see if this is something we can get some help on from a keystone
>> development perspective. Let's use this thread to see if there is any way
>> we can better enable the community through API keys/application-specific
>> passwords by seeing if anyone can contribute resources to this effort.
>>
>
> In the session, I signed up to help get the spec across the finish line.
> I'm also going to do my best to write up something resembling a user story
> so that we're all on the same page about what this is, what it isn't and
> what comes next.
>
> I probably will not have the time to actually implement the code - but if
> the PWG can help us get resources allocated to this I'll be happy to help
> them.
>
If anyone's counting, here are the current open specs (that I've found)
that attempt to address, in slightly different ways, the slightly different
use cases for API keys (not including the open specs to overhaul policy):

 - https://review.openstack.org/#/c/186979 - Subset tokens
 - https://review.openstack.org/#/c/389870 - Adding user credentials and
delegating role assignments to credential types
 - https://review.openstack.org/#/c/396634 - Standalone trusts
 - https://review.openstack.org/#/c/440593 - API keys
 - https://review.openstack.org/#/c/450415 - Application keys

Additionally, I think OAuth - either extending the existing OAuth1.0 plugin
or implementing OAuth2.0 - should probably be on the table.

Colleen


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Adrian Turjak


On 16/05/17 16:13, Colleen Murphy wrote:
> On Tue, May 16, 2017 at 2:07 AM, Adrian Turjak
> > wrote:
> 
>
>
> Tangentially related to this (because my reasons are different),
> on our cloud I'm actually working on something like this, but
> under the hood all I'm doing is creating a user with a generated
> password and enforcing a username convention. I ask them for a
> name and what roles they want for the user and I spit out:
> username: "service_user_for_web_app_1@"
> password: ""
>
>  
> On Tue, May 16, 2017 at 4:10 AM, Adrian
> Turjak > wrote:
>
>
> On 16/05/17 14:00, Adrian Turjak wrote:
>
>  
>
>> I'm just concerned that this feels like a feature we don't really
>> need when really it's just a slight variant of a user with a new
>> auth model (that is really just another flavour of
>> username/password). The sole reason most of the other cloud
>> services have API keys is because a user can't talk to the API
>> directly. OpenStack does not have that problem; users are API
>> keys. So I think what we really need to consider is: what exact
>> benefit do API keys actually give us that won't be solved with
>> users and better policy?
>>
>> From my look at the specs the only feature difference compared to
>> users is optional expiry of the API keys. Why make something
>> entirely different for just that one feature when, as David says
>> in his spec, there is debate if that feature is even a good idea.
>>
>> As an application developer, I don't see why I can't just create
>> a user and limit the roles. I feel as if this is better addressed
>> with documentation because it almost sounds like people are
>> asking for something that already exists, but just doesn't have
>> as nice an API as they would like. Another option, make a better
>> API in Keystone for user creation/management alongside the old
>> one? That's pretty much what we did, except we wrote a service to
>> act as a proxy/wrapper around Keystone for some customer actions.
> If expiry is the killer feature, why not just add it to users?
> Temporary user accounts could solve that, and probably be useful
> beyond the scope of just API keys.
>
>  
> It's not just expiry. I think your proposal is missing one of the
> major use cases: empowerment of non-admin users. A non-admin can't
> create new users themselves, they have to (as you've pointed out) ask
> an admin to do it for them. As an application developer, I want to be
> able to delegate a subset of my own roles to a programmatic entity
> without being dependent on some other human. One of the (numerous)
> specs proposed seeks to address that use
> case: https://review.openstack.org/#/c/396634
>
> Colleen
>

That still doesn't seem like justification enough to make an entirely
new auth type and 'user-lite' model. The problem is that you have to be
admin to create users; that shouldn't have to be the case. You should be
able to create users, but ONLY give them roles for your projects or your
project tree. Keystone doesn't do that, and there is no way with policy
to do that currently. Hell, we wrote a service just to do that on behalf of
our customers since keystone didn't give us that level of control,
and because we really didn't want them needing an admin to do it for them.

So the features we are after are:
- the ability as a non-admin to create users or user-like access objects.
- the ability to maybe expire those

This is still sounding like a feature to get around flaws in the current
system rather than fix those flaws. Are we saying it is easier and
better to introduce more models and complexity than fix the existing
system to make it useful? We only did it in an external service because
we had additional requirements that didn't fit into core keystone,
but then ended up with a nice place to do some wrapper logic around the
limited keystone user management.


[openstack-dev] [tripleo] Issue while applying custom configuration to overcloud.

2017-05-15 Thread Dnyaneshwar Pawar
Hi TripleO team,

I am trying to apply custom configuration to an existing overcloud (using the
openstack overcloud deploy command).
Though there is no error, the configuration is not applied to the overcloud.
Am I missing anything here?
http://paste.openstack.org/show/609619/
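
For reference, my understanding is that custom configuration normally has to
be passed as an environment file on every deploy run, roughly like this (the
file path below is only an example, not the actual file from the paste):

  # Rough sketch; --templates uses the default tripleo-heat-templates and -e
  # passes the custom environment file (the path is illustrative).
  openstack overcloud deploy --templates \
    -e /home/stack/templates/my-custom-config.yaml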


Thanks and Regards,
Dnyaneshwar



[openstack-dev] [keystone] [Pike] Need Exemption On Submitted Spec for Keystone

2017-05-15 Thread Mh Raies
Hi Lance,



We submitted a blueprint and its spec last week.

Blueprint - 
https://blueprints.launchpad.net/keystone/+spec/api-implemetation-required-to-download-identity-policies

Spec - https://review.openstack.org/#/c/463547/



As the Keystone Pike proposal freeze was already completed on April 14th, 2017, we
need your help to proceed with this spec.

Implementation of this spec has also started and is being addressed by
https://review.openstack.org/#/c/463543/



So, if we can get an exemption to proceed with the Spec review and approval 
process, it will be a great help for us.



[Ericsson]

Mh Raies
Senior Solution Integrator
Ericsson Consulting and Systems Integration
Gurgaon, India | Mobile +91 9901555661





Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Steven Dake (stdake)
Flavio,

Forgive the top post – outlook ftw.

I understand the concerns raised in this thread.  It is unclear whether this thread 
reflects the feeling of two TC members or whether enough TC members care deeply about this 
issue to permanently limit OpenStack big tent projects’ ability to generate 
container images in various external artifact storage systems.  The point of 
discussion I see effectively raised in this thread is “OpenStack infra will not 
push images to dockerhub”.

I’d like clarification on whether this is a ruling from the TC or simply an 
exploratory discussion.

If it is exploratory, it is prudent that OpenStack projects not be blocked by 
debate on this issue until the TC has actually ruled on banning the 
creation of container images via OpenStack infrastructure.

Regards
-steve

-Original Message-
From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 7:00 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
>On 15 May 2017 at 12:12, Doug Hellmann  wrote:

[huge snip]

>>> > I'm raising the issue here to get some more input into how to
>>> > proceed. Do other people think this concern is overblown? Can we
>>> > mitigate the risk by communicating through metadata for the images?
>>> > Should we stick to publishing build instructions (Dockerfiles, or
>>> > whatever) instead of binary images? Are there other options I haven't
>>> > mentioned?
>>>
>>> Today we do publish build instructions, that's what Kolla is. We also
>>> publish built containers already, just we do it manually on release
>>> today. If we decide to block it, I assume we should stop doing that
>>> too? That will hurt users who use this piece of Kolla, and I'd hate
>>> to hurt our users:(
>>
>> Well, that's the question. Today we have teams publishing those
>> images themselves, right? And the proposal is to have infra do it?
>> That change could be construed to imply that there is more of a
>> relationship with the images and the rest of the community (remember,
>> folks outside of the main community activities do not always make
>> the same distinctions we do about teams). So, before we go ahead
>> with that, I want to make sure that we all have a chance to discuss
>> the policy change and its implications.
>
>Infra as a VM running with infra, but the team to publish it can be the Kolla
>team. I assume we'll be responsible for keeping these images healthy...

I think this is the gist of the concern and I'd like us to focus on it.

As someone that used to consume these images from kolla's dockerhub account
directly, I can confirm they are useful. However, I do share Doug's concern
about the impact this may have on the community.

From a release perspective, as Doug mentioned, we've avoided releasing
projects in any kind of built form. This was also one of the concerns I raised
when working on the proposal to support other programming languages. The
problem of releasing built images goes beyond the infrastructure requirements.
It's the message and the guarantees implied with the built product itself that
are the concern here. And I tend to agree with Doug that this might be a
problem for us as a community. Unfortunately, putting your name, Michal, as
contact point is not enough. Kolla is not the only project producing container
images and we need to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading them to
dockerhub. Having this as part of OpenStack's pipeline is a problem.

Flavio

P.S: note this goes against my container(ish) interests but it's a
community-wide problem.

-- 
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Colleen Murphy
On Tue, May 16, 2017 at 2:07 AM, Adrian Turjak 
wrote:

>
>
> Tangentially related to this (because my reasons are different), on our
> cloud I'm actually working on something like this, but under the hood all
> I'm doing is creating a user with a generated password and enforcing a
> username convention. I ask them for a name and what roles they want for the
> user and I spit out:
> username: "service_user_for_web_app_1@"
> password: ""
>
>
On Tue, May 16, 2017 at 4:10 AM, Adrian Turjak 
 wrote:
>
>
> On 16/05/17 14:00, Adrian Turjak wrote:
>


> I'm just concerned that this feels like a feature we don't really need
> when really it's just a slight variant of a user with a new auth model
> (that is really just another flavour of username/password). The sole reason
> most of the other cloud services have API keys is because a user can't talk
> to the API directly. OpenStack does not have that problem; users are API
> keys. So I think what we really need to consider is: what exact benefit do
> API keys actually give us that won't be solved with users and better policy?
>
> From my look at the specs the only feature difference compared to users is
> optional expiry of the API keys. Why make something entirely different for
> just that one feature when, as David says in his spec, there is debate if
> that feature is even a good idea.
>
> As an application developer, I don't see why I can't just create a user
> and limit the roles. I feel as if this is better addressed with
> documentation because it almost sounds like people are asking for something
> that already exists, but just doesn't have as nice an API as they would
> like. Another option, make a better API in Keystone for user
> creation/management alongside the old one? That's pretty much what we did,
> except we wrote a service to act as a proxy/wrapper around Keystone for
> some customer actions.
>
> If expiry is the killer feature, why not just add it to users? Temporary
> user accounts could solve that, and probably be useful beyond the scope of
> just API keys.
>

It's not just expiry. I think your proposal is missing one of the major use
cases: empowerment of non-admin users. A non-admin can't create new users
themselves, they have to (as you've pointed out) ask an admin to do it for
them. As an application developer, I want to be able to delegate a subset
of my own roles to a programmatic entity without being dependent on some
other human. One of the (numerous) specs proposed seeks to address that use
case: https://review.openstack.org/#/c/396634

Colleen


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Lingxian Kong
Hi, Rob,

Appreciate your suggestions. Please see my inline comments.

On Mon, May 15, 2017 at 11:17 PM, Robert Putt 
wrote:

> For me the important things are:
>
>
>
> a)   Sandboxed code in some container solution
>
Yeah, it's on the roadmap (it may happen within the next several days).

> b)   Pluggable backends for said sandbox to remove vendor lock in
>
> c)   Pluggable storage for function packages, the default probably
> being Swift
>
Qinling already supports pluggable storage. In order to make it easy to
test, the default is the local file system. But it's up to the deployer to decide
which storage solution to use.

> d)   Integration with Keystone for auth and role based access control
> e.g. sharing functions with other tenants but maybe with different
> permissions, e.g. dev tenant in a domain can publish functions but prod
> tenant can only execute the functions.
>
Qinling supports Keystone for authentication. RBAC is on the roadmap.

> e)   Integration with Neutron so functions can access tenant networks.
>
This needs to be discussed further. Currently, the code is executed inside a
container that lives in the orchestration system. I'm not sure it's easy to make
that container access the tenant network.

> f)A web services gateway to create RESTful APIs and map URIs /
> verbs / API requests to functions.
>
Currently, users can invoke functions by calling Qinling's REST API, but I
agree with you that an API Gateway service is indeed necessary to provide
more flexibility to end users.

> g)   It would also be nice to have some meta data service like what
> we see in Nova so functions can have an auto injected context relating to
> the tenant running it rather than having to inject all parameters via the
> API.
>

Cheers,
Lingxian Kong (Larry)


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Adrian Turjak


On 16/05/17 14:00, Adrian Turjak wrote:
>
> On 16/05/17 13:29, Lance Bragstad wrote:
>>
>>
>> On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak
>> > wrote:
>>
>>
>> On 16/05/17 01:09, Lance Bragstad wrote:
>>>
>>>
>>> On Sun, May 14, 2017 at 11:59 AM, Monty Taylor
>>> > wrote:
>>>
>>> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>>>
>>> Hey all,
>>>
>>> One of the Baremetal/VM sessions at the summit focused
>>> on what we need
>>> to do to make OpenStack more consumable for application
>>> developers [0].
>>> As a group we recognized the need for application
>>> specific passwords or
>>> API keys and nearly everyone (above 85% is my best
>>> guess) in the session
>>> thought it was an important thing to pursue. The API
>>> key/application-specific password specification is up
>>> for review [1].
>>>
>>> The problem is that with all the recent churn in the
>>> keystone project,
>>> we don't really have the capacity to commit to this for
>>> the cycle. As a
>>> project, we're still working through what we've
>>> committed to for Pike
>>> before the OSIC fallout. It was suggested that we reach
>>> out to the PWG
>>> to see if this is something we can get some help on from
>>> a keystone
>>> development perspective. Let's use this thread to see if
>>> there is any way
>>> we can better enable the community through API
>>> keys/application-specific
>>> passwords by seeing if anyone can contribute resources
>>> to this effort.
>>>
>>>
>>> In the session, I signed up to help get the spec across the
>>> finish line. I'm also going to do my best to write up
>>> something resembling a user story so that we're all on the
>>> same page about what this is, what it isn't and what comes next.
>>>
>>>
>>> Thanks Monty. If you have questions about the current proposal,
>>> Ron might be lingering in IRC (rderose). David (dstanek) was
>>> also documenting his perspective in another spec [0].
>>>
>>>
>>> [0] https://review.openstack.org/#/c/440593/
>>> 
>>>  
>>>
>>
>> Based on the specs that are currently up in Keystone-specs, I
>> would highly recommend not doing this per user.
>>
>> The scenario I imagine is you have a sysadmin at a company who
>> created a ton of these for various jobs and then leaves. The
>> company then needs to keep his user account around, or create
>> tons of new API keys, and then disable his user once all the
>> scripts he had keys for are replaced. Or more often than not,
>> disable his user and then cry as everything breaks and no one
>> really knows why or no one fully documented it all, or didn't
>> read the docs. Keeping them per project and unrelated to the user
>> makes more sense, as then someone else on your team can
>> regenerate the secrets for the specific Keys as they want. Sure
>> we can advise them to use generic user accounts within which to
>> create these API keys but that implies password sharing which is bad.
>>
>>
>> That said, I'm curious why we would make these as a thing
>> separate to users. In reality, if you can create users, you can
>> create API specific users. Would this be a different
>> authentication mechanism? Why? Why not just continue the work on
>> better access control and let people create users for this.
>> Because let's be honest, isn't a user already an API key? The
>> issue (and Ron's spec mentions this) is a user having too
>> much access, how would this fix that when the issue is that we
>> don't have fine grained policy in the first place? How does a new
>> auth mechanism fix that? Both specs mention roles so I assume it
>> really doesn't. If we had fine grained policy we could just
>> create users specific to a service with only the roles it needs,
>> and the same problem is solved without any special API, new auth,
>> or different 'user-lite' object model. It feels like this is
>> trying to solve an issue that is better solved by fixing the
>> existing problems.
>>
>> I like the idea behind these specs, but... I'm curious what
>> exactly they are trying to solve. Not to mention if you wanted to
>> automate anything larger such as creating sub-projects and
>> setting up a basic network for each new developer to get access
>> to your team, this wouldn't work unless you could have your API
>> key inherit to subprojects or something more complex, at which
>>   

Re: [openstack-dev] [tripleo][ci] Upgrade CI job for O->P (containerization)

2017-05-15 Thread Flavio Percoco

On 12/05/17 09:30 -0400, Emilien Macchi wrote:

On Wed, May 10, 2017 at 9:26 AM, Jiří Stránský  wrote:

Hi all,

the upgrade job which tests Ocata -> Pike/master upgrade (from bare-metal to
containers) just got a green flag from the CI [1].

I've listed the remaining patches we need to land at the very top of the
container CI etherpad [2], please let's get them reviewed and landed as soon
as we can. The sooner we get the job going, the fewer upgrade regressions
will get merged in the meantime (e.g. we have one from last week).

The CI job utilizes mixed release deployment (master undercloud, overcloud
deployed as Ocata and upgraded to latest). It tests the main overcloud
upgrade phase (no separate compute role upgrades, no converge phase). This
means the testing isn't exhaustive to the full expected "production
scenario", but it covers the most important part where we're likely to see
the most churn and potential breakages. We'll see how much spare wall time
we have to add more things once we get the job to run on patches regularly.


The work you and the team did to make that happen is amazing and outstanding.
Once the jobs are considered stable, I would move them to the gate so
we don't break them. Wdyt?


Couldn't agree more! Thanks to everyone involved...

/me bows and walks away

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Flavio Percoco

On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:

On 15 May 2017 at 12:12, Doug Hellmann  wrote:


[huge snip]


> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

Today we do publish build instructions, that's what Kolla is. We also
publish built containers already, just we do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That will hurt users who use this piece of Kolla, and I'd hate
to hurt our users:(


Well, that's the question. Today we have teams publishing those
images themselves, right? And the proposal is to have infra do it?
That change could be construed to imply that there is more of a
relationship with the images and the rest of the community (remember,
folks outside of the main community activities do not always make
the same distinctions we do about teams). So, before we go ahead
with that, I want to make sure that we all have a chance to discuss
the policy change and its implications.


Infra as a VM running with infra, but the team to publish it can be the Kolla
team. I assume we'll be responsible for keeping these images healthy...


I think this is the gist of the concern and I'd like us to focus on it.

As someone that used to consume these images from kolla's dockerhub account
directly, I can confirm they are useful. However, I do share Doug's concern about
the impact this may have on the community.

From a release perspective, as Doug mentioned, we've avoided releasing projects
in any kind of built form. This was also one of the concerns I raised when
working on the proposal to support other programming languages. The problem of
releasing built images goes beyond the infrastructure requirements. It's the
message and the guarantees implied with the built product itself that are the
concern here. And I tend to agree with Doug that this might be a problem for us
as a community. Unfortunately, putting your name, Michal, as contact point is
not enough. Kolla is not the only project producing container images and we need
to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading them to
dockerhub. Having this as part of OpenStack's pipeline is a problem.

Flavio

P.S: note this goes against my container(ish) interests but it's a
community-wide problem.

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Adrian Turjak


On 16/05/17 13:29, Lance Bragstad wrote:
>
>
> On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak
> > wrote:
>
>
> On 16/05/17 01:09, Lance Bragstad wrote:
>>
>>
>> On Sun, May 14, 2017 at 11:59 AM, Monty Taylor
>> > wrote:
>>
>> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>>
>> Hey all,
>>
>> One of the Baremetal/VM sessions at the summit focused on
>> what we need
>> to do to make OpenStack more consumable for application
>> developers [0].
>> As a group we recognized the need for application
>> specific passwords or
>> API keys and nearly everyone (above 85% is my best guess)
>> in the session
>> thought it was an important thing to pursue. The API
>> key/application-specific password specification is up for
>> review [1].
>>
>> The problem is that with all the recent churn in the
>> keystone project,
>> we don't really have the capacity to commit to this for
>> the cycle. As a
>> project, we're still working through what we've committed
>> to for Pike
>> before the OSIC fallout. It was suggested that we reach
>> out to the PWG
>> to see if this is something we can get some help on from
>> a keystone
>> development perspective. Let's use this thread to see if
>> there is any way
>> we can better enable the community through API
>> keys/application-specific
>> passwords by seeing if anyone can contribute resources to
>> this effort.
>>
>>
>> In the session, I signed up to help get the spec across the
>> finish line. I'm also going to do my best to write up
>> something resembling a user story so that we're all on the
>> same page about what this is, what it isn't and what comes next.
>>
>>
>> Thanks Monty. If you have questions about the current proposal,
>> Ron might be lingering in IRC (rderose). David (dstanek) was also
>> documenting his perspective in another spec [0].
>>
>>
>> [0] https://review.openstack.org/#/c/440593/
>> 
>>  
>>
>
> Based on the specs that are currently up in Keystone-specs, I
> would highly recommend not doing this per user.
>
> The scenario I imagine is you have a sysadmin at a company who
> created a ton of these for various jobs and then leaves. The
> company then needs to keep his user account around, or create tons
> of new API keys, and then disable his user once all the scripts he
> had keys for are replaced. Or more often than not, disable his
> user and then cry as everything breaks and no one really knows why
> or no one fully documented it all, or didn't read the docs.
> Keeping them per project and unrelated to the user makes more
> sense, as then someone else on your team can regenerate the
> secrets for the specific Keys as they want. Sure we can advise
> them to use generic user accounts within which to create these API
> keys but that implies password sharing which is bad.
>
>
> That said, I'm curious why we would make these as a thing separate
> to users. In reality, if you can create users, you can create API
> specific users. Would this be a different authentication
> mechanism? Why? Why not just continue the work on better access
> control and let people create users for this. Because let's be
> honest, isn't a user already an API key? The issue (and Ron's
> spec mentions this) is a user having too much access, how would
> this fix that when the issue is that we don't have fine grained
> policy in the first place? How does a new auth mechanism fix that?
> Both specs mention roles so I assume it really doesn't. If we had
> fine grained policy we could just create users specific to a
> service with only the roles it needs, and the same problem is
> solved without any special API, new auth, or different 'user-lite'
> object model. It feels like this is trying to solve an issue that
> is better solved by fixing the existing problems.
>
> I like the idea behind these specs, but... I'm curious what
> exactly they are trying to solve. Not to mention if you wanted to
> automate anything larger such as creating sub-projects and setting
> up a basic network for each new developer to get access to your
> team, this wouldn't work unless you could have your API key
> inherit to subprojects or something more complex, at which point
> they may as well be users. Users already work for all of this, why
> reinvent the wheel when really the issue isn't the wheel itself,
> 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Flavio Percoco

On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:

On 15 May 2017 at 11:19, Davanum Srinivas  wrote:

Sorry for the top post. Michal, can you please clarify a couple of things:

1) Can folks install just one or two services for their specific scenario?


Yes, that's more of a kolla-ansible feature and requires a little bit
of Ansible know-how, but it is entirely possible. Kolla-k8s is built to
allow maximum flexibility in that space.


2) Can the container images from kolla be run on bare docker daemon?


Yes, but they need to either override our default CMD (kolla_start) or
provide the ENVs required by it, which is not a huge deal.


3) Can someone take the kolla container images from say dockerhub and
use it without the Kolla framework?


Yes, there is no such thing as a kolla framework really. Our images
follow a stable ABI and they can be deployed by any deploy mechanism
that follows it. We have several users who wrote their own deploy
mechanisms from scratch.

Containers are just blobs with binaries in them. The little things that we
add are the kolla_start script, which allows our config file management, and
some custom startup scripts for things like mariadb to help with
bootstrapping; both are entirely optional.
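
To make that concrete, running one of the kolla images on a bare docker daemon
can be as simple as overriding that CMD (the image name and tag below are only
an example):

  # Rough sketch; the image name/tag are illustrative. Overriding CMD bypasses
  # kolla_start, so none of the Kolla-specific ENVs are needed here.
  docker pull kolla/centos-binary-keystone:4.0.0
  docker run --rm -it kolla/centos-binary-keystone:4.0.0 /bin/bash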


Just as a bonus example, TripleO is currently using kolla images. They used to
be vanilla and they are not anymore but only because TripleO depends on puppet
being in the image, which has nothing to do with kolla.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Flavio Percoco

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims] +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-15 Thread Sam P
Hi Greg,

 In Masakari [0] for VMHA, we have already implemented a somewhat
similar function in masakari-monitors.
 Masakari-monitors runs on the nova-compute node and monitors host,
process, or instance failures.
 The Masakari instance monitor has similar functionality to what you
have described.
 Please see [1] for more details on instance monitoring.
 [0] https://wiki.openstack.org/wiki/Masakari
 [1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

 Once masakari-monitors detects a failure, it will send a notification to
masakari-api, which takes the appropriate recovery actions to recover that VM
from the failure.



Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Lance Bragstad
On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak 
wrote:

>
> On 16/05/17 01:09, Lance Bragstad wrote:
>
>
>
> On Sun, May 14, 2017 at 11:59 AM, Monty Taylor 
> wrote:
>
>> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>>
>>> Hey all,
>>>
>>> One of the Baremetal/VM sessions at the summit focused on what we need
>>> to do to make OpenStack more consumable for application developers [0].
>>> As a group we recognized the need for application specific passwords or
>>> API keys and nearly everyone (above 85% is my best guess) in the session
>>> thought it was an important thing to pursue. The API
>>> key/application-specific password specification is up for review [1].
>>>
>>> The problem is that with all the recent churn in the keystone project,
>>> we don't really have the capacity to commit to this for the cycle. As a
>>> project, we're still working through what we've committed to for Pike
>>> before the OSIC fallout. It was suggested that we reach out to the PWG
>>> to see if this is something we can get some help on from a keystone
>>> development perspective. Let's use this thread to see if there is any way
>>> we can better enable the community through API keys/application-specific
>>> passwords by seeing if anyone can contribute resources to this effort.
>>>
>>
>> In the session, I signed up to help get the spec across the finish line.
>> I'm also going to do my best to write up something resembling a user story
>> so that we're all on the same page about what this is, what it isn't and
>> what comes next.
>>
>
> Thanks Monty. If you have questions about the current proposal, Ron might
> be lingering in IRC (rderose). David (dstanek) was also documenting his
> perspective in another spec [0].
>
>
> [0] https://review.openstack.org/#/c/440593/
>
>
>
> Based on the specs that are currently up in Keystone-specs, I would highly
> recommend not doing this per user.
>
> The scenario I imagine is you have a sysadmin at a company who created a
> ton of these for various jobs and then leaves. The company then needs to
> keep his user account around, or create tons of new API keys, and then
> disable his user once all the scripts he had keys for are replaced. Or more
> often than not, disable his user and then cry as everything breaks and no
> one really knows why or no one fully documented it all, or didn't read the
> docs. Keeping them per project and unrelated to the user makes more sense,
> as then someone else on your team can regenerate the secrets for the
> specific Keys as they want. Sure we can advise them to use generic user
> accounts within which to create these API keys but that implies password
> sharing which is bad.
>
>
> That said, I'm curious why we would make these as a thing separate to
> users. In reality, if you can create users, you can create API specific
> users. Would this be a different authentication mechanism? Why? Why not
> just continue the work on better access control and let people create users
> for this. Because let's be honest, isn't a user already an API key? The
> issue (and Ron's spec mentions this) is a user having too much access,
> how would this fix that when the issue is that we don't have fine grained
> policy in the first place? How does a new auth mechanism fix that? Both
> specs mention roles so I assume it really doesn't. If we had fine grained
> policy we could just create users specific to a service with only the roles
> it needs, and the same problem is solved without any special API, new auth,
> or different 'user-lite' object model. It feels like this is trying to
> solve an issue that is better solved by fixing the existing problems.
>
> I like the idea behind these specs, but... I'm curious what exactly they
> are trying to solve. Not to mention if you wanted to automate anything
> larger such as creating sub-projects and setting up a basic network for
> each new developer to get access to your team, this wouldn't work unless
> you could have your API key inherit to subprojects or something more
> complex, at which point they may as well be users. Users already work for
> all of this, why reinvent the wheel when really the issue isn't the wheel
> itself, but the steering mechanism (access control/policy in this case)?
>
>
All valid points, but IMO the discussions around API keys didn't set out to
fix deep-rooted issues with policy. We have several specs in flight across
projects to help mitigate the real issues with policy [0] [1] [2] [3] [4].

I see an API key implementation as something that provides a cleaner fit
and finish once we've addressed the policy bits. It's also a familiar
concept for application developers, which was the use case the session was
targeting.

I probably should have laid out the related policy work before jumping into
API keys. We've already committed a bunch of keystone resources to policy
improvements this cycle, but I'm hoping we can work API keys and policy
improvements in 

Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Adrian Turjak

On 16/05/17 01:09, Lance Bragstad wrote:
>
>
> On Sun, May 14, 2017 at 11:59 AM, Monty Taylor  > wrote:
>
> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>
> Hey all,
>
> One of the Baremetal/VM sessions at the summit focused on what
> we need
> to do to make OpenStack more consumable for application
> developers [0].
> As a group we recognized the need for application specific
> passwords or
> API keys and nearly everyone (above 85% is my best guess) in
> the session
> thought it was an important thing to pursue. The API
> key/application-specific password specification is up for
> review [1].
>
> The problem is that with all the recent churn in the keystone
> project,
> we don't really have the capacity to commit to this for the
> cycle. As a
> project, we're still working through what we've committed to
> for Pike
> before the OSIC fallout. It was suggested that we reach out to
> the PWG
> to see if this is something we can get some help on from a
> keystone
> development perspective. Let's use this thread to see if there
> is any way
> we can better enable the community through API
> keys/application-specific
> passwords by seeing if anyone can contribute resources to this
> effort.
>
>
> In the session, I signed up to help get the spec across the finish
> line. I'm also going to do my best to write up something
> resembling a user story so that we're all on the same page about
> what this is, what it isn't and what comes next.
>
>
> Thanks Monty. If you have questions about the current proposal, Ron
> might be lingering in IRC (rderose). David (dstanek) was also
> documenting his perspective in another spec [0].
>
>
> [0] https://review.openstack.org/#/c/440593/
>  
>

Based on the specs that are currently up in Keystone-specs, I would
highly recommend not doing this per user.

The scenario I imagine is you have a sysadmin at a company who created a
ton of these for various jobs and then leaves. The company then needs to
keep his user account around, or create tons of new API keys, and then
disable his user once all the scripts he had keys for are replaced. Or
more often than not, disable his user and then cry as everything breaks
and no one really knows why or no one fully documented it all, or didn't
read the docs. Keeping them per project and unrelated to the user makes
more sense, as then someone else on your team can regenerate the secrets
for the specific keys as they want. Sure, we can advise them to use
generic user accounts within which to create these API keys but that
implies password sharing which is bad.


That said, I'm curious why we would make these a thing separate from
users. In reality, if you can create users, you can create API-specific
users. Would this be a different authentication mechanism? Why? Why not
just continue the work on better access control and let people create
users for this? Because let's be honest, isn't a user already an API key?
The issue (and Ron's spec mentions this) is a user having too much
access; how would this fix that when the issue is that we don't have
fine-grained policy in the first place? How does a new auth mechanism
fix that? Both specs mention roles, so I assume it really doesn't. If we
had fine-grained policy we could just create users specific to a service
with only the roles it needs, and the same problem is solved without any
special API, new auth, or different 'user-lite' object model. It feels
like this is trying to solve an issue that is better solved by fixing
the existing problems.

I like the idea behind these specs, but... I'm curious what exactly they
are trying to solve. Not to mention if you wanted to automate anything
larger such as creating sub-projects and setting up a basic network for
each new developer to get access to your team, this wouldn't work unless
you could have your API key inherit to subprojects or something more
complex, at which point they may as well be users. Users already work
for all of this; why reinvent the wheel when really the issue isn't the
wheel itself, but the steering mechanism (access control/policy in this
case)?


Tangentially related to this (because my reasons are different), on our
cloud I'm actually working on something like this, but under the hood
all I'm doing is creating a user with a generated password and enforcing
a username convention. I ask them for a name and what roles they want
for the user and I spit out:
username: "service_user_for_web_app_1@"
password: ""
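
Roughly, what I do boils down to something like the following (the names and
the role here are only illustrative, and under stock keystone policy these
calls still need admin credentials):

  # Rough sketch of the workflow described above; names and role are examples.
  PASSWORD=$(openssl rand -hex 32)
  openstack user create --password "$PASSWORD" service_user_for_web_app_1@example.com
  openstack role add --project my_project \
    --user service_user_for_web_app_1@example.com _member_
  echo "username: service_user_for_web_app_1@example.com"
  echo "password: $PASSWORD"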

I'll always generate/regenerate that password for them, and once it is shown
to them, they can't ever see it again since the plaintext secret is
never stored. Sure, I can't stop them from logging into the dashboard
with that user or 

Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Matt Riedemann

On 5/15/2017 5:34 PM, Matt Riedemann wrote:

On 5/15/2017 5:23 PM, Eric Fried wrote:

How about aggregates?
https://www.youtube.com/watch?v=fu6jdGGdYU4=youtu.be=1784


Aggregates aren't exposed to the user directly; they are exposed to the
user via availability zones. The explosion of 1:1 AZs is what
we want to avoid for this use case.



I guess another thing with aggregates, as noted by someone in the 
etherpad, is you can tie host aggregates to certain flavors for 
scheduling. So if you have a set of hosts with local SSD and 
nova-compute and cinder-volume running on them, then you could put those 
in a host aggregate and tie them to your HPC flavors.
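
Roughly, that looks like the following with the CLI (the names and the
'local_ssd' property are made up, and it assumes the
AggregateInstanceExtraSpecsFilter is enabled in the scheduler):

  # Rough sketch: pin an HPC flavor to hosts with local SSD via an aggregate.
  openstack aggregate create local-ssd
  openstack aggregate add host local-ssd compute-01
  openstack aggregate set --property local_ssd=true local-ssd
  openstack flavor set --property aggregate_instance_extra_specs:local_ssd=true hpc.large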


--

Thanks,

Matt



Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Matt Riedemann

On 5/15/2017 5:23 PM, Eric Fried wrote:

How about aggregates?
https://www.youtube.com/watch?v=fu6jdGGdYU4=youtu.be=1784


Aggregates aren't exposed to the user directly; they are exposed to the
user via availability zones. The explosion of 1:1 AZs is what
we want to avoid for this use case.


--

Thanks,

Matt



Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Eric Fried
> If there are alternative ideas on how to design or model this, I'm all
> ears.

How about aggregates?
https://www.youtube.com/watch?v=fu6jdGGdYU4=youtu.be=1784

On 05/15/2017 05:04 PM, Matt Riedemann wrote:
> On 5/15/2017 2:28 PM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:
>> Hi all,
>>
>> I'd like to follow up on a few discussions that took place last week in
>> Boston, specifically in the Compute Instance/Volume Affinity for HPC
>> session
>> (https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc).
>>
>>
>> In this session, the discussions all trended towards adding more
>> complexity to the Nova UX, like adding --near and --distance flags to
>> the nova boot command to have the scheduler figure out how to place an
>> instance near some other resource, adding more fields to flavors or
>> flavor extra specs, etc.
>>
>> My question is: is it the right question to ask how to add more
>> fine-grained complications to the OpenStack user experience to support
>> what seemed like a pretty narrow use case?
> 
> I think we can all agree we don't want to complicate the user experience.
> 
>>
>> The only use case that I remember hearing was an operator not wanting it
>> to be possible for a user to launch an instance in a particular Nova AZ
>> and then not be able to attach a volume from a different Cinder AZ, or
>> they try to boot an instance from a volume in the wrong place and get a
>> failure to launch. This seems okay to me, though - either the user has
>> to rebuild their instance in the right place or Nova will just return an
>> error during instance build. Is it worth adding all sorts of
>> convolutions to Nova to avoid the possibility that somebody might have
>> to build instances a second time?
> 
> We might have gone down this path but it's not the intention or the use
> case as I thought I had presented it, and is in the etherpad. For what
> you're describing, we already have the CONF.cinder.cross_az_attach
> option in nova which prevents you from booting or attaching a volume to
> an instance in a different AZ from the instance. That's not what we're
> talking about though.
> 
> The use case, as I got from the mailing list discussion linked in the
> etherpad, is a user wants their volume attached as close to local
> storage for the instance as possible for performance reasons. If this
> could be on the same physical server, great. But there is the case where
> the operator doesn't want to use any local disk on the compute and wants
> to send everything to Cinder, and the backing storage might not be on
> the same physical server, so that's where we started talking about
> --near or --distance (host, rack, row, data center, etc).
> 
>>
>> The feedback I get from my cloud-experienced users most frequently is
>> that they want to know why the OpenStack user experience in the storage
>> area is so radically different from AWS, which is what they all have
>> experience with. I don't really have a great answer for them, except to
>> admit that in our clouds they just have to know what combination of
>> flavors and Horizon options or BDM structure is going to get them the
>> right tradeoff between storage durability and speed. I was pleased with
>> how the session on expanding Cinder's role for Nova ephemeral storage
>> went because of the suggestion of reducing Nova imagebackend's role to
>> just the file driver and having Cinder take over for everything else.
>> That, to me, is the kind of simplification that's a win-win for both
>> devs and ops: devs get to radically simplify a thorny part of the Nova
>> codebase, storage driver development only has to happen in Cinder,
>> operators get a storage workflow that's easier to explain to users.
>>
>> Am I off base in the view of not wanting to add more options to nova
>> boot and more logic to the scheduler? I know the AWS comparison is a
>> little North America-centric (this came up at the summit a few times
>> that EMEA/APAC operators may have very different ideas of a normal cloud
>> workflow), but I am striving to give my users a private cloud that I can
>> define for them in terms of AWS workflows and vocabulary. AWS by design
>> restricts where your volumes can live (you can use instance store
>> volumes and that data is gone on reboot or terminate, or you can put EBS
>> volumes in a particular AZ and mount them on instances in that AZ), and
>> I don't think that's a bad thing, because it makes it easy for the users
>> to understand the contract they're getting from the platform when it
>> comes to where their data is stored and what instances they can attach
>> it to.
>>
> 
> Again, we don't want to make the UX more complicated, but as noted in
> the etherpad, the solution we have today is if you want the same
> instance and volume on the same host for performance reasons, then you
> need to have a 1:1 relationship for AZs and hosts since AZs are exposed
> to the user. In a public cloud where you've got hundreds of thousands 

[openstack-dev] [infra][all] etcd tarballs for CI use

2017-05-15 Thread Davanum Srinivas
Folks,

In the Boston Summit, there was a session about introducing etcd as a
base service[0]. One question that we have to figure out is how to
ensure that our CI infra does not depend on tarballs from github.

At this moment, though Fedora has 3.1.7 [1], Xenial is way too old, so
we will need to pull down tarballs from either [2] or [3]. Proposing
backports is a possibility, but then we need some flexibility if we
end up picking some specific version (say 3.0.17 vs 3.1.7). So a
download location would be good to have so we can request infra to
push versions we can experiment with.

Thoughts please... oh, the review I have for etcd as a base service in
devstack is here [4]; the question was raised even before the summit
by Paul.

Thanks,
Dims

[0] https://etherpad.openstack.org/p/BOS-etcd-base-service
[1] 
https://www.rpmfind.net/linux/RPM/fedora/devel/rawhide/x86_64/e/etcd-3.1.7-1.fc27.x86_64.html
[2] https://storage.googleapis.com/etcd
[3] https://github.com/coreos/etcd/releases/download
[4] https://review.openstack.org/#/c/445432/
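
For example, pulling a tarball from [3] would look roughly like this (the
version is pinned here purely as an example, matching the Fedora package in [1]):

  # Rough sketch; the release asset naming follows the upstream convention at [3].
  ETCD_VERSION=v3.1.7
  curl -sSL -o etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
  tar xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
  ./etcd-${ETCD_VERSION}-linux-amd64/etcd --version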
-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Matt Riedemann

On 5/15/2017 2:28 PM, Edmund Rhudy (BLOOMBERG/ 120 PARK) wrote:

Hi all,

I'd like to follow up on a few discussions that took place last week in
Boston, specifically in the Compute Instance/Volume Affinity for HPC
session
(https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc).

In this session, the discussions all trended towards adding more
complexity to the Nova UX, like adding --near and --distance flags to
the nova boot command to have the scheduler figure out how to place an
instance near some other resource, adding more fields to flavors or
flavor extra specs, etc.

My question is: is it the right question to ask how to add more
fine-grained complications to the OpenStack user experience to support
what seemed like a pretty narrow use case?


I think we can all agree we don't want to complicate the user experience.



The only use case that I remember hearing was an operator not wanting it
to be possible for a user to launch an instance in a particular Nova AZ
and then not be able to attach a volume from a different Cinder AZ, or
they try to boot an instance from a volume in the wrong place and get a
failure to launch. This seems okay to me, though - either the user has
to rebuild their instance in the right place or Nova will just return an
error during instance build. Is it worth adding all sorts of
convolutions to Nova to avoid the possibility that somebody might have
to build instances a second time?


We might have gone down this path, but it's not the intention or the use 
case as I thought I had presented it and as it is in the etherpad. For what 
you're describing, we already have the CONF.cinder.cross_az_attach 
option in nova which prevents you from booting or attaching a volume to 
an instance in a different AZ from the instance. That's not what we're 
talking about though.
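
For reference, if I remember right that option lives in the [cinder] section
of nova.conf and defaults to True; a minimal excerpt to disallow cross-AZ
attach would be:

  [cinder]
  cross_az_attach = False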


The use case, as I got from the mailing list discussion linked in the 
etherpad, is a user wants their volume attached as close to local 
storage for the instance as possible for performance reasons. If this 
could be on the same physical server, great. But there is the case where 
the operator doesn't want to use any local disk on the compute and wants 
to send everything to Cinder, and the backing storage might not be on 
the same physical server, so that's where we started talking about 
--near or --distance (host, rack, row, data center, etc).




The feedback I get from my cloud-experienced users most frequently is
that they want to know why the OpenStack user experience in the storage
area is so radically different from AWS, which is what they all have
experience with. I don't really have a great answer for them, except to
admit that in our clouds they just have to know what combination of
flavors and Horizon options or BDM structure is going to get them the
right tradeoff between storage durability and speed. I was pleased with
how the session on expanding Cinder's role for Nova ephemeral storage
went because of the suggestion of reducing Nova imagebackend's role to
just the file driver and having Cinder take over for everything else.
That, to me, is the kind of simplification that's a win-win for both
devs and ops: devs get to radically simplify a thorny part of the Nova
codebase, storage driver development only has to happen in Cinder,
operators get a storage workflow that's easier to explain to users.

Am I off base in the view of not wanting to add more options to nova
boot and more logic to the scheduler? I know the AWS comparison is a
little North America-centric (this came up at the summit a few times
that EMEA/APAC operators may have very different ideas of a normal cloud
workflow), but I am striving to give my users a private cloud that I can
define for them in terms of AWS workflows and vocabulary. AWS by design
restricts where your volumes can live (you can use instance store
volumes and that data is gone on reboot or terminate, or you can put EBS
volumes in a particular AZ and mount them on instances in that AZ), and
I don't think that's a bad thing, because it makes it easy for the users
to understand the contract they're getting from the platform when it
comes to where their data is stored and what instances they can attach
it to.



Again, we don't want to make the UX more complicated, but as noted in 
the etherpad, the solution we have today is that if you want the same 
instance and volume on the same host for performance reasons, then you 
need a 1:1 relationship between AZs and hosts, since AZs are exposed 
to the user. In a public cloud where you've got hundreds of thousands of 
compute hosts, 1:1 AZs aren't going to be realistic for either the 
admin or the user. Plus, AZs are really supposed to be about fault domains, 
not performance domains, as Jay Pipes pointed out in the session.


That's where the idea of a --near or --distance=0 came in. I agree that 
having non-standard definitions of 'distance' is going to be confusing 
and not interoperable, so that's a whole 

[openstack-dev] [HA] follow-up from HA discussion at Boston Forum

2017-05-15 Thread Adam Spiers

Hi all,

Sam P  wrote:

This is a quick reminder for HA Forum session at Boston Summit.
Thank you all for your comments and effort to make this happen in Boston Summit.

Time: Thu 11, 11:00am-11:40am
Location: Hynes Convention Center - Level One - MR 103
Etherpad: https://etherpad.openstack.org/p/BOS-forum-HA-in-openstack

Please join and let's discuss the HA issues in OpenStack...

--- Regards,
Sampath


Thanks to everyone who came to the High Availability Forum session in
Boston last week!  To me, the great turn-out proved that there is
enough general interest in HA within OpenStack to justify allocating
space for discussion on those topics not only at each summit, but in
between the summits too.

To that end, I'd like to a) remind everyone of the weekly HA IRC
meetings:

   https://wiki.openstack.org/wiki/Meetings/HATeamMeeting

and also b) highlight an issue that we most likely need to solve:
currently these weekly IRC meetings are held at 0900 UTC on Wednesday:

   http://eavesdrop.openstack.org/#High_Availability_Meeting

which is pretty much useless for anyone in the Americas.  This time
was previously chosen because the most regular attendees were based in
Europe or Asia, but I'm now looking for suggestions on how to make
this fairer for all continents.  Some options:

- Split the 60 minutes in half, and hold two 30 minute meetings
 each week at different times, so that every timezone has convenient
 access to at least one of them.

- Alternate the timezone every other week.  This might make it hard to
 build any kind of momentum.

- Hold two meetings each week.  I'm not sure we'd have enough traffic
 to justify this, but we could try.

Any opinions, or better ideas?  Thanks!

Adam

P.S. Big thanks to Sampath for organising the Boston Forum session
and managing to attract such a healthy audience :-)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2017-05-15 15:48:33 -0500:
> 
> On 05/15/2017 03:24 PM, Doug Hellmann wrote:
> > Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:
> >>> -Original Message-
> >>> From: Doug Hellmann [mailto:d...@doughellmann.com]
> >>> Sent: Monday, May 15, 2017 2:55 PM
> >> <...>
> >>>
> >>> Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
>  import eventlet
>  eventlet.monkey_patch
> >>>
> >>> That's not calling monkey_patch -- there are no '()'. Is that a typo?
> >>
> >> Yes, sorry, that was a typo when I put it in to the email.  It did have ()
> >> at the end.
> >>
> >>>
> >>> lock() claims to work differently when monkey_patch() has been called.
> >>> Without doing the monkey patching, I would expect the thread to have to
> >>> explicitly yield control.
> >>>
> >>> Did you see the problem you describe in production code, or just in this
> >>> sample program?
> >>
> >> We see this in production code.   I included the example to boil this down 
> >> to
> >> a simple enough scenario to be understood in this forum without the
> >> distraction of superfluous code.
> >>
> >
> > OK. I think from the Oslo team's perspective, this is likely to be
> > considered a bug in the application. The concurrency library is not
> > aware that it is running in an eventlet thread, so it relies on the
> > application to call the monkey patching function to inject the right
> > sort of lock class.  If that was done in the wrong order, or not
> > at all, that would cause this issue.
> 
> Does oslo.concurrency make any fairness promises?  I don't recall that 
> it does, so it's not clear to me that this is a bug.  I thought fair 
> locking was one of the motivations behind the DLM discussion.  My view 
> of the oslo.concurrency locking was that it is solely concerned with 
> preventing concurrent access to a resource.  There's no queuing or 
> anything that would ensure a given consumer can't grab the same lock 
> repeatedly.
> 

DLM is more about fairness between machines, not threads.

However, I'd agree that oslo.concurrency isn't making fairness
guarantees. It does claim to return a threading.Semaphore or
semaphore.Semaphore, neither of which facilitates fairness (nor would a
full-fledged mutex).

In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So lockers add themselves to the queue and wait on their condition
variable, and a dispatcher thread runs all the time, reading the queue and
signalling each condition in turn so that only one locker is activated at a
time (or that one thread can simply do all the work itself, if the arguments
are simple enough to put in a queue).
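
For illustration only, here is a rough sketch of that FIFO idea -- not the
oslo.concurrency implementation, the class name is made up, the hand-off
happens directly in release() rather than via a separate dispatcher thread,
and per-waiter events stand in for condition variables for brevity. Under
eventlet monkey patching the threading primitives below become
greenthread-aware:

    import collections
    import threading

    class FairLock(object):
        """Hand the lock to waiters strictly in arrival (FIFO) order."""

        def __init__(self):
            self._mutex = threading.Lock()        # protects the waiter queue
            self._waiters = collections.deque()
            self._locked = False

        def acquire(self):
            with self._mutex:
                if not self._locked and not self._waiters:
                    self._locked = True
                    return
                event = threading.Event()
                self._waiters.append(event)
            event.wait()    # parked until release() hands the lock over

        def release(self):
            with self._mutex:
                if self._waiters:
                    # Ownership passes directly to the oldest waiter, so a
                    # thread that immediately re-acquires goes to the back
                    # of the line instead of starving everyone else.
                    self._waiters.popleft().set()
                else:
                    self._locked = False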

> I'm also not really surprised that this example serializes all the 
> workers.  The operation being performed in each thread is simple enough 
> that it probably completes before a context switch could reasonably 
> occur, greenthreads or not.  Unfortunately one of the hard parts of 
> concurrency is that the "extraneous" details of a use case can end up 
> being important.
> 

It also gets hardware-sensitive when you have true multi-threading,
since a user on a 2-core box will see different results than one on a 4-core box.

> >
> > The next step is to look at which application had the problem, and under
> > what circumstances. Can you provide more detail there?
> 
> +1, although as I noted above I'm not sure this is actually a "bug".  It 
> would be interesting to know what real world use case is causing a 
> pathologically bad lock pattern though.
> 

I think it makes sense, especially in the greenthread example where
you're immediately seeing activity on the recently active socket and
thus just stay in that greenthread.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Sean Dague
On 05/14/2017 07:04 AM, Sean Dague wrote:
> One of the things that came up in a logging Forum session is how much
> effort operators are having to put into reconstructing flows for things
> like server boot when they go wrong, as every time we jump a service
> barrier the request-id is reset to something new. The back and forth
> between Nova / Neutron and Nova / Glance would be definitely well served
> by this. Especially if this is something that's easy to query in elastic
> search.

FYI the oslo.spec for this is now up here for review -
https://review.openstack.org/#/c/464746/ - it has additional details in it.
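
Purely as an illustration of the idea (the real mechanism, header names and
helpers are defined by the spec above -- the names below are assumptions on
my part), the gist is that a service reuses the request id it received
rather than minting a fresh one when it calls the next service:

    import uuid

    REQUEST_ID_HEADER = 'X-Openstack-Request-ID'   # assumed header name

    def get_or_create_request_id(incoming_headers):
        # Reuse the caller's id if one was supplied; otherwise mint one.
        req_id = incoming_headers.get(REQUEST_ID_HEADER)
        return req_id or 'req-' + str(uuid.uuid4())

    def call_next_service(session, url, request_id):
        # 'session' is assumed to be a requests-style session object.
        # Propagate the same id on the outgoing call (e.g. Nova -> Neutron)
        # instead of letting it reset at the service boundary, so operators
        # can grep one id across all the logs.
        return session.get(url, headers={REQUEST_ID_HEADER: request_id})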

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Ben Nemec



On 05/15/2017 03:24 PM, Doug Hellmann wrote:

Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com]
Sent: Monday, May 15, 2017 2:55 PM

<...>


Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:

import eventlet
eventlet.monkey_patch


That's not calling monkey_patch -- there are no '()'. Is that a typo?


Yes, sorry, that was a typo when I put it in to the email.  It did have ()
at the end.



lock() claims to work differently when monkey_patch() has been called.
Without doing the monkey patching, I would expect the thread to have to
explicitly yield control.

Did you see the problem you describe in production code, or just in this
sample program?


We see this in production code.   I included the example to boil this down to
a simple enough scenario to be understood in this forum without the
distraction of superfluous code.



OK. I think from the Oslo team's perspective, this is likely to be
considered a bug in the application. The concurrency library is not
aware that it is running in an eventlet thread, so it relies on the
application to call the monkey patching function to inject the right
sort of lock class.  If that was done in the wrong order, or not
at all, that would cause this issue.


Does oslo.concurrency make any fairness promises?  I don't recall that 
it does, so it's not clear to me that this is a bug.  I thought fair 
locking was one of the motivations behind the DLM discussion.  My view 
of the oslo.concurrency locking was that it is solely concerned with 
preventing concurrent access to a resource.  There's no queuing or 
anything that would ensure a given consumer can't grab the same lock 
repeatedly.


I'm also not really surprised that this example serializes all the 
workers.  The operation being performed in each thread is simple enough 
that it probably completes before a context switch could reasonably 
occur, greenthreads or not.  Unfortunately one of the hard parts of 
concurrency is that the "extraneous" details of a use case can end up 
being important.




The next step is to look at which application had the problem, and under
what circumstances. Can you provide more detail there?


+1, although as I noted above I'm not sure this is actually a "bug".  It 
would be interesting to know what real world use case is causing a 
pathologically bad lock pattern though.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Ben Nemec



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1







The pika driver is another rabbitmq-based driver.  It was developed as
a replacement for the current rabbit driver (rabbit://).  The pika
driver is based on the 'pika' rabbitmq client library [2], rather than
the kombu library [3] of the current rabbitmq driver.  The pika
library was recommended by the rabbitmq community a couple of summits
ago as a better client than the kombu client.

However, testing done against this driver did not show "appreciable
difference in performance or reliability" over the existing rabbitmq
driver.

Given this, and the recent departure of some very talented
contributors, the consensus is to deprecate pika and recommend users
stay with the original rabbitmq driver.

The plan is to mark the driver as deprecated in Pike, removal in Rocky.

thanks,


[1] 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
  (~ line 80)
[2] https://github.com/pika/pika
[3] https://github.com/celery/kombu

--
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 19:20:46 +:
> > -Original Message-
> > From: Doug Hellmann [mailto:d...@doughellmann.com]
> > Sent: Monday, May 15, 2017 2:55 PM
> <...>
> > 
> > Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> > > import eventlet
> > > eventlet.monkey_patch
> > 
> > That's not calling monkey_patch -- there are no '()'. Is that a typo?
> 
> Yes, sorry, that was a typo when I put it in to the email.  It did have () 
> at the end.
> 
> > 
> > lock() claims to work differently when monkey_patch() has been called.
> > Without doing the monkey patching, I would expect the thread to have to
> > explicitly yield control.
> > 
> > Did you see the problem you describe in production code, or just in this
> > sample program?
> 
> We see this in production code.   I included the example to boil this down to 
> a simple enough scenario to be understood in this forum without the 
> distraction of superfluous code. 
> 

OK. I think from the Oslo team's perspective, this is likely to be
considered a bug in the application. The concurrency library is not
aware that it is running in an eventlet thread, so it relies on the
application to call the monkey patching function to inject the right
sort of lock class.  If that was done in the wrong order, or not
at all, that would cause this issue.
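
A hypothetical minimal sketch (not the reporter's production code) of the
application-side fix -- monkey_patch() has to actually be called, with
parentheses, before anything else creates locks:

    import eventlet
    eventlet.monkey_patch()   # note the parentheses; without the call this
                              # line is a no-op and nothing gets patched

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('foo')

    @synchronized('bar')
    def do_work():
        # With the patching in place, waiting on this lock yields to other
        # greenthreads instead of blocking the whole process.
        pass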

The next step is to look at which application had the problem, and under
what circumstances. Can you provide more detail there?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-05-15 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. rolling upgrades
1.1. the next patch is ready for reviews: 
https://review.openstack.org/#/c/412397/
2. review next BFV patch:
2.1. next: https://review.openstack.org/#/c/366197/
3. install guide updates related to the driver composition:
3.1. configuration https://review.openstack.org/462151
3.2. enrollment: https://review.openstack.org/463609
4. review e-tags spec: https://review.openstack.org/#/c/381991/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 01 May 2017 and 15 May 2017)
- Ironic: 252 bugs + 251 wishlist items. 21 new (-1), 200 in progress (-1), 0 
critical, 26 high and 32 incomplete (+1)
- Inspector: 12 bugs (-2) + 28 wishlist items (-2). 1 new (-2), 14 in progress 
(-1), 0 critical, 2 high (+1) and 3 incomplete
- Nova bugs with Ironic tag: 11. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- Updated driver patch to address hshiina's findings [mjturek].
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration
- Getting back to this this week. Setting up environment and seeing how 
far I can get with the current patches.
- hshiina is looking in Nova side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/366197/ Cinder Driver
https://review.openstack.org/#/c/406290 Wiring in attach/detach 
operations
https://review.openstack.org/#/c/413324 iPXE template
https://review.openstack.org/#/c/454243/ - WIP logic changes for 
deployment process.  Tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- patches ready for reviews. Next one: 'Add version column': 
https://review.openstack.org/#/c/412397/
- Testing work:
- 27-Mar-2017: Grenade multi-node is non-voting
- https://review.openstack.org/456166 MERGED

Python 3.5 compatibility (Nisha, Ankit)
---
- Topic: 
https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
- this include all projects, not only ironic
- please tag all reviews with topic "goal-python35"
- Nisha will be taking over this work(Nisha on leave from May 5 to May 22)
- Status as of May 5. Raised patches in openstack-infra/project-config for 
adding experimental gates for the ironic governed modules
- https://review.openstack.org/462487 - python-ironicclient
- https://review.openstack.org/462511 - IPA (has one +2)
- https://review.openstack.org/462695 - ironic-inspector
- https://review.openstack.org/462701 - ironic-lib
- https://review.openstack.org/#/c/462706/ - python-ironic-inspector-client
- Not sure if we want to do the same for the ironic-staging-drivers module or 

[openstack-dev] [os-upstream-institute] Meeting reminder

2017-05-15 Thread Ildiko Vancsa
Hi Training Team,

Just a quick reminder that we have our meeting in ten minutes on 
#openstack-meeting-3.

We will mainly focus on retrospectives from the training in Boston a week ago. 
We have an etherpad with some thoughts already: 
https://etherpad.openstack.org/p/BOS_OUI_Post_Mortem 

Thanks,
Ildikó
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][logging] improvements to log debugging ready for review

2017-05-15 Thread Doug Hellmann
I have updated the Oslo spec for improving the logging debugging [1] and
the patch series that begins the implementation [2]. Please put these on
your review priority list.

Doug

[1] https://review.openstack.org/460112
[2] https://review.openstack.org/#/q/topic:improve-logging-debugging

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 12:12, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>> For starters, I want to emphasize that fresh set of dockerhub images
>> was one of most requested features from Kolla on this summit and few
>> other features more or less requires readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that doesn't have resources
>> to put sophisticated CI/staging env, which, I'm willing to bet, is
>> still quite significant user base. If we do it correctly (and we will
>> do it correctly), images we'll going to push will go through series of
>> gates which we have in Kolla (and will have more). So when you pull
>> image, you know that it was successfully deployed within scenerios
>> available in our gates, maybe even upgrade and increase scenerio
>> coverage later? That is a huge benefit for actual users.
>
> I have no doubt that consumers of the images would like us to keep
> creating them. We had lots of discussions last week about resource
> constraints and sustainable practices, though, and this strikes me
> as an area where we're deviating from our history in a way that
> will require more maintenance work upstream.
>
>> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>> > Last week at the Forum we had a couple of discussions about
>> > collaboration between the various teams building or consuming
>> > container images. One topic that came up was deciding how to publish
>> > images from the various teams to docker hub or other container
>> > registries. While the technical bits seem easy enough to work out,
>> > there is still the question of precedence and whether it's a good
>> > idea to do so at all.
>> >
>> > In the past, we have refrained from publishing binary packages in
>> > other formats such as debs and RPMs. (We did publish debs way back
>> > in the beginning, for testing IIRC, but switched away from them to
>> > sdists to be more inclusive.) Since then, we have said it is the
>> > responsibility of downstream consumers to build production packages,
>> > either as distributors or as a deployer that is rolling their own.
>> > We do package sdists for python libraries, push some JavaScript to
>> > the NPM registries, and have tarballs of those and a bunch of other
>> > artifacts that we build out of our release tools.  But none of those
>> > is declared as "production ready," and so the community is not
>> > sending the signal that we are responsible for maintaining them in
>> > the context of production deployments, beyond continuing to produce
>> > new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are
>> community driven not company driven project. We don't have Red Hat or
>> Canonical teams behind us (we have contributors, but that's
>> different).
>
> Although I work at Red Hat, I want to make sure it's clear that my
> objection is purely related to community concerns. For this
> conversation, I'm wearing my upstream TC and Release team hats.
>
>> > Container images introduce some extra complexity, over the basic
>> > operating system style packages mentioned above. Due to the way
>> > they are constructed, they are likely to include content we don't
>> > produce ourselves (either in the form of base layers or via including
>> > build tools or other things needed when assembling the full image).
>> > That extra content means there would need to be more tracking of
>> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> > as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>
> A daily build job introduces new questions about how big the images
> are and how many of them we keep, but let's focus on whether the
> change in policy is something we want to adopt before we consider
> those questions.

http://tarballs.openstack.org/kolla/images/ we have already been doing this
for the last few months. The only difference is that it's hacky and we want
something that's not hacky.

Let's set resource constraints aside for now, please, because from the
current standpoint all the resources we need are a single VM that runs for
about an hour every day and some uplink bandwidth (probably less than 1 GB
per day, as Docker will cache a lot). If that's an issue, we can work on it
and limit the number of pushes to just version changes, something we were
discussing anyway.

>
>> > Given our security and stable team resources, I'm not entirely
>> > comfortable with us publishing these images, and giving the appearance
>> > that the community *as a whole* is committing to supporting them.
>> > I don't have any objection to someone from the community publishing
>> > them, as long as it is made clear who the 

[openstack-dev] [nova] [cinder] Follow up on Nova/Cinder summit sessions from an ops perspective

2017-05-15 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
Hi all,

I'd like to follow up on a few discussions that took place last week in Boston, 
specifically in the Compute Instance/Volume Affinity for HPC session 
(https://etherpad.openstack.org/p/BOS-forum-compute-instance-volume-affinity-hpc).

In this session, the discussions all trended towards adding more complexity to 
the Nova UX, like adding --near and --distance flags to the nova boot command 
to have the scheduler figure out how to place an instance near some other 
resource, adding more fields to flavors or flavor extra specs, etc.

My question is: is it the right question to ask how to add more fine-grained 
complications to the OpenStack user experience to support what seemed like a 
pretty narrow use case?

The only use case that I remember hearing was an operator not wanting it to be 
possible for a user to launch an instance in a particular Nova AZ and then not 
be able to attach a volume from a different Cinder AZ, or they try to boot an 
instance from a volume in the wrong place and get a failure to launch. This 
seems okay to me, though - either the user has to rebuild their instance in the 
right place or Nova will just return an error during instance build. Is it 
worth adding all sorts of convolutions to Nova to avoid the possibility that 
somebody might have to build instances a second time?

The feedback I get from my cloud-experienced users most frequently is that they 
want to know why the OpenStack user experience in the storage area is so 
radically different from AWS, which is what they all have experience with. I 
don't really have a great answer for them, except to admit that in our clouds 
they just have to know what combination of flavors and Horizon options or BDM 
structure is going to get them the right tradeoff between storage durability 
and speed. I was pleased with how the session on expanding Cinder's role for 
Nova ephemeral storage went because of the suggestion of reducing Nova 
imagebackend's role to just the file driver and having Cinder take over for 
everything else. That, to me, is the kind of simplification that's a win-win 
for both devs and ops: devs get to radically simplify a thorny part of the Nova 
codebase, storage driver development only has to happen in Cinder, operators 
get a storage workflow that's easier to explain to users.

Am I off base in the view of not wanting to add more options to nova boot and 
more logic to the scheduler? I know the AWS comparison is a little North 
America-centric (this came up at the summit a few times that EMEA/APAC 
operators may have very different ideas of a normal cloud workflow), but I am 
striving to give my users a private cloud that I can define for them in terms 
of AWS workflows and vocabulary. AWS by design restricts where your volumes can 
live (you can use instance store volumes and that data is gone on reboot or 
terminate, or you can put EBS volumes in a particular AZ and mount them on 
instances in that AZ), and I don't think that's a bad thing, because it makes 
it easy for the users to understand the contract they're getting from the 
platform when it comes to where their data is stored and what instances they 
can attach it to.
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Legacy, Allain
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: Monday, May 15, 2017 2:55 PM
<...>
> 
> Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> > import eventlet
> > eventlet.monkey_patch
> 
> That's not calling monkey_patch -- there are no '()'. Is that a typo?

Yes, sorry, that was a typo when I put it in to the email.  It did have () 
at the end.

> 
> lock() claims to work differently when monkey_patch() has been called.
> Without doing the monkey patching, I would expect the thread to have to
> explicitly yield control.
> 
> Did you see the problem you describe in production code, or just in this
> sample program?

We see this in production code.   I included the example to boil this down to 
a simple enough scenario to be understood in this forum without the 
distraction of superfluous code. 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-15 Thread Waines, Greg
Sorry for the slow response.

Ifat,
You do understand correctly.
And I understand that this does not really fit in Vitrage ... i.e. Vitrage has 
no other examples of the monitoring itself being done in Vitrage.

Do you know if Zabbix has VM-related monitoring?
If they don’t already do this, then I might have difficulty getting it into 
Zabbix.

The other option I was thinking of was to see if I could contribute this to 
QEMU as an optional layer on top of the QEMU Guest Agent ... and then have 
the alarm consumed by Vitrage.

My only other option would be to contribute to the OPNFV Availability project 
... as incremental VM Heartbeating / Health-checking functionality that 
would build on top of the OpenStack offering ... although I'm not sure if the 
OPNFV Availability project is interested in doing code ... I think they might 
be just a requirements team.

Greg.



From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Wednesday, May 10, 2017 at 11:06 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Hi Greg,

If I understand correctly, you would like to add a test that checks if for 
every VM a heartbeat was retrieved in the last x seconds. Right?

Vitrage is not designed to perform such tests. Vitrage datasources retrieve 
topology (either by polling or by notifications) from services like Nova, 
Cinder, Neutron or Heat, and pass the topology to the Vitrage entity graph. In 
addition, they retrieve alarms from monitors like Aodh, Zabbix, Nagios or 
Collectd, and create these alarms in the entity graph as well. There is 
currently no place where you can check if an event arrived or not.

How about adding this test to a monitoring tool like Zabbix, and then consume 
the alarm (for a missing heartbeat) in Vitrage?

Best Regards,
Ifat.

From: "Waines, Greg" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 10 May 2017 at 13:24
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Some other UPDATES on this proposal (from outside the mailing list):


· this should probably be based on an ‘image property’ rather than a 
‘flavor extraspec’,
since it requires code to be included in the guest/VM image,




· rather than use a unique virtio-serial link for the 
Heartbeat/Health-check Monitoring Messaging,
propose that we leverage the existing http://wiki.qemu.org/Features/GuestAgent

o   NOVA already supports a ‘hw_qemu_guest_agent=True’ image property
which results in NOVA setting up a virtio-serial connection to a QEMU Guest 
Agent
within the Guest/VM,

o   use this for the transport messaging layer for VM 
Heartbeating/Health-checking
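
To make this concrete, here is a rough sketch (illustration only -- virsh and
the QEMU Guest Agent 'guest-ping' command are standard libvirt/QEMU tooling,
but the polling loop, the timeouts and how a failure would be reported to
Vitrage are all assumptions on my part) of what a compute-node-side heartbeat
check over the existing guest agent channel could look like:

    import json
    import subprocess
    import time

    def guest_agent_ping(domain, timeout=5):
        # 'guest-ping' just asks the agent inside the VM to answer; a reply
        # means the guest OS is up and able to service the virtio-serial I/O.
        try:
            subprocess.check_output(
                ['virsh', 'qemu-agent-command', domain,
                 json.dumps({'execute': 'guest-ping'}),
                 '--timeout', str(timeout)],
                stderr=subprocess.STDOUT)
            return True
        except subprocess.CalledProcessError:
            return False

    def heartbeat_loop(domain, interval=10):
        # On a missed heartbeat a real implementation would raise a fault
        # event against the VM through the Vitrage datasource API (not shown).
        while True:
            if not guest_agent_ping(domain):
                print('heartbeat lost for %s' % domain)
            time.sleep(interval)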


With respect to ... where to propose / contribute this functionality,
Given that

· this may require very little work in NOVA (by using QEMU Guest 
Agent), and

· the fact that the primary result of VM Heartbeating / Health-checking 
is to report per-instance HB/HC status to Vitrage,
I am thinking that this would fit better simply in Vitrage.
An optional functionality enabled thru /etc/vitrage/vitrage.conf .


Comments ?
Greg.


From: Greg Waines 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Tuesday, May 9, 2017 at 1:11 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

I am looking for guidance on where to propose some “VM Heartbeat / Health-check 
Monitoring” functionality that I would like to contribute to openstack.

Briefly, “VM Heartbeat / Health-check Monitoring”

· is optionally enabled thru a Nova flavor extra-spec,

· is a service that runs on an OpenStack Compute Node,

· it sends periodic Heartbeat / Health-check Challenge Requests to a VM
over a virtio-serial-device setup between the Compute Node and the VM thru QEMU,

· on loss of heartbeat or a failed health check status will result in 
fault event, against the VM, being
reported to Vitrage thru its data-source API.

Where should I contribute this functionality ?

· put it ALL in Vitrage ... both the monitoring and the data-source 
reporting ?

· put the monitoring in Nova, and just the data source reporting in 
Vitrage ?

· other ?

Greg.





p.s. other info ...

Benefits of “VM Heartbeat / Health-check Monitoring”





· monitors health of OS and Applications INSIDE the VM

o   i.e. even just a simple Ack of the Heartbeat would validate that the OS is 
running, IO mechanisms (sockets, etc)
are 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> For starters, I want to emphasize that fresh set of dockerhub images
> was one of most requested features from Kolla on this summit and few
> other features more or less requires readily-available docker
> registry. Features like full release upgrade gates.
> 
> This will have numerous benefits for users that doesn't have resources
> to put sophisticated CI/staging env, which, I'm willing to bet, is
> still quite significant user base. If we do it correctly (and we will
> do it correctly), images we'll going to push will go through series of
> gates which we have in Kolla (and will have more). So when you pull
> image, you know that it was successfully deployed within scenerios
> available in our gates, maybe even upgrade and increase scenerio
> coverage later? That is a huge benefit for actual users.

I have no doubt that consumers of the images would like us to keep
creating them. We had lots of discussions last week about resource
constraints and sustainable practices, though, and this strikes me
as an area where we're deviating from our history in a way that
will require more maintenance work upstream.

> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > Last week at the Forum we had a couple of discussions about
> > collaboration between the various teams building or consuming
> > container images. One topic that came up was deciding how to publish
> > images from the various teams to docker hub or other container
> > registries. While the technical bits seem easy enough to work out,
> > there is still the question of precedence and whether it's a good
> > idea to do so at all.
> >
> > In the past, we have refrained from publishing binary packages in
> > other formats such as debs and RPMs. (We did publish debs way back
> > in the beginning, for testing IIRC, but switched away from them to
> > sdists to be more inclusive.) Since then, we have said it is the
> > responsibility of downstream consumers to build production packages,
> > either as distributors or as a deployer that is rolling their own.
> > We do package sdists for python libraries, push some JavaScript to
> > the NPM registries, and have tarballs of those and a bunch of other
> > artifacts that we build out of our release tools.  But none of those
> > is declared as "production ready," and so the community is not
> > sending the signal that we are responsible for maintaining them in
> > the context of production deployments, beyond continuing to produce
> > new releases when there are bugs.
> 
> So for us that would mean something really hacky and bad. We are
> community driven not company driven project. We don't have Red Hat or
> Canonical teams behind us (we have contributors, but that's
> different).

Although I work at Red Hat, I want to make sure it's clear that my
objection is purely related to community concerns. For this
conversation, I'm wearing my upstream TC and Release team hats.

> > Container images introduce some extra complexity, over the basic
> > operating system style packages mentioned above. Due to the way
> > they are constructed, they are likely to include content we don't
> > produce ourselves (either in the form of base layers or via including
> > build tools or other things needed when assembling the full image).
> > That extra content means there would need to be more tracking of
> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> > as needed.
> 
> We can do this by building daily, which was the plan in fact. If we
> build every day you have at most 24hrs old packages, CVEs and things
> like that on non-openstack packages are still maintained by distro
> maintainers.

A daily build job introduces new questions about how big the images
are and how many of them we keep, but let's focus on whether the
change in policy is something we want to adopt before we consider
those questions.

> > Given our security and stable team resources, I'm not entirely
> > comfortable with us publishing these images, and giving the appearance
> > that the community *as a whole* is committing to supporting them.
> > I don't have any objection to someone from the community publishing
> > them, as long as it is made clear who the actual owner is. I'm not
> > sure how easy it is to make that distinction if we publish them
> > through infra jobs, so that may mean some outside process. I also
> > don't think there would be any problem in building images on our
> > infrastructure for our own gate jobs, as long as they are just for
> > testing and we don't push those to any other registries.
> 
> Today we use Kolla account for that and I'm more than happy to keep it
> this way. We license our code with ASL which gives no guarantees.
> Containers will be licensed this way too, so they're available as-is
> and "production readiness" should be decided by everyone who runs it.
> That being said what we *can* promise is that 

[openstack-dev] [ironic] Ironic-UI review requirements - single core reviews

2017-05-15 Thread Julia Kreger
All,

In our new reality, in order to maximize velocity, I propose that we
loosen the review requirements for ironic-ui to allow faster
iteration. To this end, I suggest we move ironic-ui to using a single
core reviewer for code approval, along the same lines as Horizon[0].

Our new reality is a fairly grim one, but there is always hope. We
have several distinct active core reviewers. The problem is available
time to review, and then getting any two reviewers to be on the same page,
at the same time, with the same patch set. Reducing the requirements
will help us iterate faster and reduce the time a revision waits for
approval to land, which should ultimately help everyone contributing.

If there are no objections from my fellow ironic folk, then I propose
we move to this for ironic-ui immediately.

Thanks,

-Julia

[0]: 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/113029.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 11:47, Sean Dague  wrote:
> On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:
>> For starters, I want to emphasize that fresh set of dockerhub images
>> was one of most requested features from Kolla on this summit and few
>> other features more or less requires readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that doesn't have resources
>> to put sophisticated CI/staging env, which, I'm willing to bet, is
>> still quite significant user base. If we do it correctly (and we will
>> do it correctly), images we'll going to push will go through series of
>> gates which we have in Kolla (and will have more). So when you pull
>> image, you know that it was successfully deployed within scenerios
>> available in our gates, maybe even upgrade and increase scenerio
>> coverage later? That is a huge benefit for actual users.
>
> That concerns me quite a bit. Given the nature of the patch story on
> containers (which is a rebuild), I really feel like users should have
> their own build / CI pipeline locally to be deploying this way. Making
> that easy for them to do, is great, but skipping that required local
> infrastructure puts them in a bad position should something go wrong.

I totally agree they should. Even if they do, this would still be
additive to the gating that we run, so it's even better.

> I do get that many folks want that, but I think it builds in a set of
> expectations that it's not possible to actually meet from an upstream
> perspective.
>
>> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>>> Last week at the Forum we had a couple of discussions about
>>> collaboration between the various teams building or consuming
>>> container images. One topic that came up was deciding how to publish
>>> images from the various teams to docker hub or other container
>>> registries. While the technical bits seem easy enough to work out,
>>> there is still the question of precedence and whether it's a good
>>> idea to do so at all.
>>>
>>> In the past, we have refrained from publishing binary packages in
>>> other formats such as debs and RPMs. (We did publish debs way back
>>> in the beginning, for testing IIRC, but switched away from them to
>>> sdists to be more inclusive.) Since then, we have said it is the
>>> responsibility of downstream consumers to build production packages,
>>> either as distributors or as a deployer that is rolling their own.
>>> We do package sdists for python libraries, push some JavaScript to
>>> the NPM registries, and have tarballs of those and a bunch of other
>>> artifacts that we build out of our release tools.  But none of those
>>> is declared as "production ready," and so the community is not
>>> sending the signal that we are responsible for maintaining them in
>>> the context of production deployments, beyond continuing to produce
>>> new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are
>> community driven not company driven project. We don't have Red Hat or
>> Canonical teams behind us (we have contributors, but that's
>> different).
>>
>>> Container images introduce some extra complexity, over the basic
>>> operating system style packages mentioned above. Due to the way
>>> they are constructed, they are likely to include content we don't
>>> produce ourselves (either in the form of base layers or via including
>>> build tools or other things needed when assembling the full image).
>>> That extra content means there would need to be more tracking of
>>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>>> as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>
> There have been many instances where 24 hours wasn't good enough as
> embargoes end up pretty weird in terms of when things hit mirrors. It
> also assumes that when a CVE hits some other part of the gate or
> infrastructure isn't wedged so that it's not possible to build new
> packages. Or the capacity demands happen during a feature freeze, with
> tons of delay in there. There are many single points of failure in this
> process.
>
>>> Given our security and stable team resources, I'm not entirely
>>> comfortable with us publishing these images, and giving the appearance
>>> that the community *as a whole* is committing to supporting them.
>>> I don't have any objection to someone from the community publishing
>>> them, as long as it is made clear who the actual owner is. I'm not
>>> sure how easy it is to make that distinction if we publish them
>>> through infra jobs, so that may mean some outside process. I also
>>> don't think there would be any problem in building images on our
>>> infrastructure for our own gate jobs, as 

Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:
> On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:
> > Folks,
> >
> > It was decided at the oslo.messaging forum at summit that the pika
> > driver will be marked as deprecated [1] for removal.
> 
> [dims} +1 from me.

+1

> 
> >
> > The pika driver is another rabbitmq-based driver.  It was developed as
> > a replacement for the current rabbit driver (rabbit://).  The pika
> > driver is based on the 'pika' rabbitmq client library [2], rather than
> > the kombu library [3] of the current rabbitmq driver.  The pika
> > library was recommended by the rabbitmq community a couple of summits
> > ago as a better client than the kombu client.
> >
> > However, testing done against this driver did not show "appreciable
> > difference in performance or reliability" over the existing rabbitmq
> > driver.
> >
> > Given this, and the recent departure of some very talented
> > contributors, the consensus is to deprecate pika and recommend users
> > stay with the original rabbitmq driver.
> >
> > The plan is to mark the driver as deprecated in Pike, removal in Rocky.
> >
> > thanks,
> >
> >
> > [1] 
> > https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
> >   (~ line 80)
> > [2] https://github.com/pika/pika
> > [3] https://github.com/celery/kombu
> >
> > --
> > Ken Giusti  (kgiu...@gmail.com)
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Doug Hellmann
Excerpts from Legacy, Allain's message of 2017-05-15 18:35:58 +:
> Can someone comment on whether the following scenario has been discussed
> before or whether this is viewed by the community as a bug?
> 
> While debugging a couple of different issues our investigation has lead
> us down the path of needing to look at whether the oslo concurrency lock
> utilities are working properly or not.  What we found is that it is
> possible for a greenthread to continuously acquire a lock even though
> there are other threads queued up waiting for the lock.
> 
> For instance, a greenthread acquires a lock, does some work, releases
> the lock, and then needs to repeat this process over several iterations.
> While the first greenthread holds the lock other greenthreads come along and
> attempt to acquire the lock.  Those subsequent greenthreads are added to the
> waiters list and suspended.  The observed behavior is that as long as the
> first greenthread continues to run without ever yielding it will always
> re-acquire the lock even before any of the waiters.
> 
> To illustrate my point I have included a short program that shows the
> effect of multiple threads contending for a lock with and without
> voluntarily yielding.   The code follows, but the output from both
> sample runs are included here first.
> 
> In both examples the output is formatted as "worker=XXX: YYY" where XXX
> is the worker number, and YYY is the number of times the worker has been
> executed while holding the lock.
> 
> In the first example,  notice that each worker gets to finish all of its
> tasks before any subsequence works gets to run even once.
> 
> In the second example, notice that the workload is fair and each worker
> gets to hold the lock once before passing it on to the next in line.
> 
> Example1 (without voluntarily yielding):
> =
> worker=0: 1
> worker=0: 2
> worker=0: 3
> worker=0: 4
> worker=1: 1
> worker=1: 2
> worker=1: 3
> worker=1: 4
> worker=2: 1
> worker=2: 2
> worker=2: 3
> worker=2: 4
> worker=3: 1
> worker=3: 2
> worker=3: 3
> worker=3: 4
> 
> 
> 
> Example2 (with voluntarily yielding):
> =
> worker=0: 1
> worker=1: 1
> worker=2: 1
> worker=3: 1
> worker=0: 2
> worker=1: 2
> worker=2: 2
> worker=3: 2
> worker=0: 3
> worker=1: 3
> worker=2: 3
> worker=3: 3
> worker=0: 4
> worker=1: 4
> worker=2: 4
> worker=3: 4
> 
> 
> 
> Code:
> =
> import eventlet
> eventlet.monkey_patch

That's not calling monkey_patch -- there are no '()'. Is that a typo?

lock() claims to work differently when monkey_patch() has been
called. Without doing the monkey patching, I would expect the thread
to have to explicitly yield control.

Did you see the problem you describe in production code, or just in this
sample program?

Doug

> 
> from oslo_concurrency import lockutils
> 
> workers = {}
> 
> synchronized = lockutils.synchronized_with_prefix('foo')
> 
> @synchronized('bar')
> def do_work(index):
> global workers
> workers[index] = workers.get(index, 0) + 1
> print "worker=%s: %s" % (index, workers[index])
> 
> 
> def worker(index, nb_jobs, sleep):
> for x in xrange(0, nb_jobs):
> do_work(index)
> if sleep:
> eventlet.greenthread.sleep(0)  # yield
> return index
> 
> 
> # hold the lock before starting workers to make sure that all worker queue up 
> # on the lock before any of them actually get to run.
> @synchronized('bar')
> def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
> for i in xrange(0, nb_workers):
> pool.spawn(worker, i, nb_jobs, sleep)
> 
> 
> print "Example1:  sleep=False"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool)
> pool.waitall()
> 
> 
> print "Example2:  sleep=True"
> workers = {}
> pool = eventlet.greenpool.GreenPool()
> start_work(pool, sleep=True)
> pool.waitall()
> 
> 
> 
> 
> Regards,
> Allain
> 
> 
> Allain Legacy, Software Developer, Wind River
> direct 613.270.2279  fax 613.492.7870 skype allain.legacy
> 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
> 
>  
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> Sorry for the top post, Michal, Can you please clarify a couple of things:
>
> 1) Can folks install just one or two services for their specific scenario?

Yes, that's more of a kolla-ansible feature and requires a little bit
of Ansible know-how, but it's entirely possible. Kolla-k8s is built to
allow maximum flexibility in that space.

> 2) Can the container images from kolla be run on bare docker daemon?

Yes, but they need to either override our default CMD (kolla_start) or
provide the ENVs required by it; not a huge deal.

> 3) Can someone take the kolla container images from say dockerhub and
> use it without the Kolla framework?

Yes, there is no such thing as a Kolla framework, really. Our images
follow a stable ABI and they can be deployed by any deployment mechanism
that follows it. We have several users who wrote their own deployment
mechanism from scratch.

Containers are just blobs with binaries in them. The little things we
add are the kolla_start script, to allow our config file management, and
some custom startup scripts for things like mariadb to help with
bootstrapping; both are entirely optional.

>
> Thanks,
> Dims
>
> On Mon, May 15, 2017 at 1:52 PM, Michał Jastrzębski  wrote:
>> For starters, I want to emphasize that fresh set of dockerhub images
>> was one of most requested features from Kolla on this summit and few
>> other features more or less requires readily-available docker
>> registry. Features like full release upgrade gates.
>>
>> This will have numerous benefits for users that doesn't have resources
>> to put sophisticated CI/staging env, which, I'm willing to bet, is
>> still quite significant user base. If we do it correctly (and we will
>> do it correctly), images we'll going to push will go through series of
>> gates which we have in Kolla (and will have more). So when you pull
>> image, you know that it was successfully deployed within scenerios
>> available in our gates, maybe even upgrade and increase scenerio
>> coverage later? That is a huge benefit for actual users.
>>
>> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>>> Last week at the Forum we had a couple of discussions about
>>> collaboration between the various teams building or consuming
>>> container images. One topic that came up was deciding how to publish
>>> images from the various teams to docker hub or other container
>>> registries. While the technical bits seem easy enough to work out,
>>> there is still the question of precedence and whether it's a good
>>> idea to do so at all.
>>>
>>> In the past, we have refrained from publishing binary packages in
>>> other formats such as debs and RPMs. (We did publish debs way back
>>> in the beginning, for testing IIRC, but switched away from them to
>>> sdists to be more inclusive.) Since then, we have said it is the
>>> responsibility of downstream consumers to build production packages,
>>> either as distributors or as a deployer that is rolling their own.
>>> We do package sdists for python libraries, push some JavaScript to
>>> the NPM registries, and have tarballs of those and a bunch of other
>>> artifacts that we build out of our release tools.  But none of those
>>> is declared as "production ready," and so the community is not
>>> sending the signal that we are responsible for maintaining them in
>>> the context of production deployments, beyond continuing to produce
>>> new releases when there are bugs.
>>
>> So for us that would mean something really hacky and bad. We are
>> community driven not company driven project. We don't have Red Hat or
>> Canonical teams behind us (we have contributors, but that's
>> different).
>>
>>> Container images introduce some extra complexity, over the basic
>>> operating system style packages mentioned above. Due to the way
>>> they are constructed, they are likely to include content we don't
>>> produce ourselves (either in the form of base layers or via including
>>> build tools or other things needed when assembling the full image).
>>> That extra content means there would need to be more tracking of
>>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>>> as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>>
>>> Given our security and stable team resources, I'm not entirely
>>> comfortable with us publishing these images, and giving the appearance
>>> that the community *as a whole* is committing to supporting them.
>>> I don't have any objection to someone from the community publishing
>>> them, as long as it is made clear who the actual owner is. I'm not
>>> sure how easy it is to make that distinction if we publish them
>>> through infra jobs, so that may mean some outside process. I also
>>> don't think there would 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Sean Dague
On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:
> For starters, I want to emphasize that fresh set of dockerhub images
> was one of most requested features from Kolla on this summit and few
> other features more or less requires readily-available docker
> registry. Features like full release upgrade gates.
> 
> This will have numerous benefits for users that doesn't have resources
> to put sophisticated CI/staging env, which, I'm willing to bet, is
> still quite significant user base. If we do it correctly (and we will
> do it correctly), images we'll going to push will go through series of
> gates which we have in Kolla (and will have more). So when you pull
> image, you know that it was successfully deployed within scenerios
> available in our gates, maybe even upgrade and increase scenerio
> coverage later? That is a huge benefit for actual users.

That concerns me quite a bit. Given the nature of the patch story on
containers (which is a rebuild), I really feel like users should have
their own local build / CI pipeline to be deploying this way. Making
that easy for them to do is great, but skipping that required local
infrastructure puts them in a bad position should something go wrong.

I do get that many folks want that, but I think it builds in a set of
expectations that it's not possible to actually meet from an upstream
perspective.

> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>> Last week at the Forum we had a couple of discussions about
>> collaboration between the various teams building or consuming
>> container images. One topic that came up was deciding how to publish
>> images from the various teams to docker hub or other container
>> registries. While the technical bits seem easy enough to work out,
>> there is still the question of precedence and whether it's a good
>> idea to do so at all.
>>
>> In the past, we have refrained from publishing binary packages in
>> other formats such as debs and RPMs. (We did publish debs way back
>> in the beginning, for testing IIRC, but switched away from them to
>> sdists to be more inclusive.) Since then, we have said it is the
>> responsibility of downstream consumers to build production packages,
>> either as distributors or as a deployer that is rolling their own.
>> We do package sdists for python libraries, push some JavaScript to
>> the NPM registries, and have tarballs of those and a bunch of other
>> artifacts that we build out of our release tools.  But none of those
>> is declared as "production ready," and so the community is not
>> sending the signal that we are responsible for maintaining them in
>> the context of production deployments, beyond continuing to produce
>> new releases when there are bugs.
> 
> So for us that would mean something really hacky and bad. We are
> community driven not company driven project. We don't have Red Hat or
> Canonical teams behind us (we have contributors, but that's
> different).
> 
>> Container images introduce some extra complexity, over the basic
>> operating system style packages mentioned above. Due to the way
>> they are constructed, they are likely to include content we don't
>> produce ourselves (either in the form of base layers or via including
>> build tools or other things needed when assembling the full image).
>> That extra content means there would need to be more tracking of
>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> as needed.
> 
> We can do this by building daily, which was the plan in fact. If we
> build every day you have at most 24hrs old packages, CVEs and things
> like that on non-openstack packages are still maintained by distro
> maintainers.

There have been many instances where 24 hours wasn't good enough, as
embargoes end up pretty weird in terms of when things hit mirrors. It
also assumes that when a CVE hits, no other part of the gate or
infrastructure is wedged in a way that prevents building new packages,
and that the capacity demands don't land during a feature freeze, with
tons of delay in there. There are many single points of failure in this
process.

>> Given our security and stable team resources, I'm not entirely
>> comfortable with us publishing these images, and giving the appearance
>> that the community *as a whole* is committing to supporting them.
>> I don't have any objection to someone from the community publishing
>> them, as long as it is made clear who the actual owner is. I'm not
>> sure how easy it is to make that distinction if we publish them
>> through infra jobs, so that may mean some outside process. I also
>> don't think there would be any problem in building images on our
>> infrastructure for our own gate jobs, as long as they are just for
>> testing and we don't push those to any other registries.
> 
> Today we use Kolla account for that and I'm more than happy to keep it
> this way. We license our code with ASL which gives no guarantees.
> Containers will be licensed this way too, 

[openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-15 Thread Legacy, Allain
Can someone comment on whether the following scenario has been discussed
before or whether this is viewed by the community as a bug?

While debugging a couple of different issues, our investigation has led
us down the path of needing to look at whether the oslo concurrency lock
utilities are working properly or not.  What we found is that it is
possible for a greenthread to continuously acquire a lock even though
there are other threads queued up waiting for the lock.

For instance, a greenthread acquires a lock, does some work, releases
the lock, and then needs to repeat this process over several iterations.
While the first greenthread holds the lock, other greenthreads come along
and attempt to acquire it.  Those subsequent greenthreads are added to the
waiters list and suspended.  The observed behavior is that as long as the
first greenthread continues to run without ever yielding, it will always
re-acquire the lock before any of the waiters.

To illustrate my point I have included a short program that shows the
effect of multiple threads contending for a lock with and without
voluntarily yielding.   The code follows, but the output from both
sample runs are included here first.

In both examples the output is formatted as "worker=XXX: YYY" where XXX
is the worker number, and YYY is the number of times the worker has been
executed while holding the lock.

In the first example, notice that each worker gets to finish all of its
tasks before any subsequent worker gets to run even once.

In the second example, notice that the workload is fair and each worker
gets to hold the lock once before passing it on to the next in line.

Example1 (without voluntarily yielding):
=
worker=0: 1
worker=0: 2
worker=0: 3
worker=0: 4
worker=1: 1
worker=1: 2
worker=1: 3
worker=1: 4
worker=2: 1
worker=2: 2
worker=2: 3
worker=2: 4
worker=3: 1
worker=3: 2
worker=3: 3
worker=3: 4



Example2 (with voluntarily yielding):
=
worker=0: 1
worker=1: 1
worker=2: 1
worker=3: 1
worker=0: 2
worker=1: 2
worker=2: 2
worker=3: 2
worker=0: 3
worker=1: 3
worker=2: 3
worker=3: 3
worker=0: 4
worker=1: 4
worker=2: 4
worker=3: 4



Code:
=
import eventlet
eventlet.monkey_patch()  # must be called, not just referenced

from oslo_concurrency import lockutils

workers = {}

synchronized = lockutils.synchronized_with_prefix('foo')


@synchronized('bar')
def do_work(index):
    # Count how many times each worker has run while holding the lock.
    global workers
    workers[index] = workers.get(index, 0) + 1
    print "worker=%s: %s" % (index, workers[index])


def worker(index, nb_jobs, sleep):
    for x in xrange(0, nb_jobs):
        do_work(index)
        if sleep:
            eventlet.greenthread.sleep(0)  # voluntarily yield
    return index


# Hold the lock before starting the workers to make sure that all workers
# queue up on the lock before any of them actually get to run.
@synchronized('bar')
def start_work(pool, nb_workers=4, nb_jobs=4, sleep=False):
    for i in xrange(0, nb_workers):
        pool.spawn(worker, i, nb_jobs, sleep)


print "Example1:  sleep=False"
workers = {}
pool = eventlet.greenpool.GreenPool()
start_work(pool)
pool.waitall()


print "Example2:  sleep=True"
workers = {}
pool = eventlet.greenpool.GreenPool()
start_work(pool, sleep=True)
pool.waitall()




Regards,
Allain


Allain Legacy, Software Developer, Wind River
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5

 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Davanum Srinivas
On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:
> Folks,
>
> It was decided at the oslo.messaging forum at summit that the pika
> driver will be marked as deprecated [1] for removal.

[dims] +1 from me.

>
> The pika driver is another rabbitmq-based driver.  It was developed as
> a replacement for the current rabbit driver (rabbit://).  The pika
> driver is based on the 'pika' rabbitmq client library [2], rather than
> the kombu library [3] of the current rabbitmq driver.  The pika
> library was recommended by the rabbitmq community a couple of summits
> ago as a better client than the kombu client.
>
> However, testing done against this driver did not show "appreciable
> difference in performance or reliability" over the existing rabbitmq
> driver.
>
> Given this, and the recent departure of some very talented
> contributors, the consensus is to deprecate pika and recommend users
> stay with the original rabbitmq driver.
>
> The plan is to mark the driver as deprecated in Pike, removal in Rocky.
>
> thanks,
>
>
> [1] 
> https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
>   (~ line 80)
> [2] https://github.com/pika/pika
> [3] https://github.com/celery/kombu
>
> --
> Ken Giusti  (kgiu...@gmail.com)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread PACHECO, RODOLFO J
Monty

The one missing aspect in this network model is the ability to identify/
categorize a VNF VM that, for example, is using SR-IOV and only cares about
VLANs. Such VMs would not have IP addresses in some cases, and wouldn't be
describable with the external/internal address labels.

I feel you need some other mechanism or tag to capture that type of server,
or at least be able to account for them ("no-external-ip", "external-vlan",
or something similar).

It’s possible that the case I mention is what you refer to here “ Again, there 
are more complex combinations possible. For now this is 
focused on the 80% case. I'm deliberately ignoring questions like vpn or 
tricircle-style intra-cloud networks for now. “



Regards
 

Rodolfo 

 

Home/Office 732 5337671

On 5/14/17, 1:02 PM, "Monty Taylor"  wrote:

Hey all!

LONG EMAIL WARNING

I'm working on a proposal to formalize a cloud profile document. (we 
keep and support these in os-client-config, but it's grown up ad-hoc and 
is hard for other languages to consume -so we're going to rev it and try 
to encode information in it more sanely) I need help in coming up with 
names for some things that we can, if not all agree on, at least not 
have pockets of violent dissent against.

tl;dr: What do we name some enum values?

First, some long-winded background

== Background ==

The profile document is where we keep information about a cloud that an 
API consumer needs to know to effectively use the cloud - and is stored 
in a machine readable manner so that libraries and tools (including but 
hopefully not limited to shade) can make appropriate choices.

Information in profiles is the information that's generally true for all 
normal users. OpenStack is flexible, and some API consumers have 
different access. That's fine - the cloud profiles are not for them. 
Cloud profiles define the qualities about a cloud that end users can 
safely expect to be true. Advanced use is never restricted by annotating 
the general case.

First off, we need to define two terms:
"external" - an address that can be used for north-south communication 
off the cloud
"internal" - an address that can be used for east-west communication 
with and only with other things on the same cloud

Again, there are more complex combinations possible. For now this is 
focused on the 80% case. I'm deliberately ignoring questions like vpn or 
tricircle-style intra-cloud networks for now. If we can agree on an 
outcome here - we can always come back and add words to describe more 
things.

** Bikeshed #1 **

Are "internal" and "external" ok with folks as terms for those two ideas?

We need a term for each - if we prefer different terms, replacing their 
use in the following is simple.

== Booting Servers ==

When booting a server, a typical user wants one of the following:

- Give me a server with an external address
- Give me a server with an internal address
- Give me a server with both
- Give me a server with very specific networking connections

The fourth doesn't need any help - it's the current state of the world 
today and is well served. It's the "I have a network I am aware of 
and/or a pre-existing floating ip, etc and I want to use them". This is 
not about those people - they're fine.

Related to the first three cases, depending on how the cloud is 
deployed, any of the following can be non-exclusively true:

- External addresses are provided via Fixed IPs
- External addresses are provided via Floating IPs
- Internal addresses are provided via Fixed IPs
- Internal addresses can be provided via Floating IPs
- Users can create and define their own internal networks

Additionally, External addresses can be IPv4 or IPv6

== Proposal - complete with Unpainted Sheds ==

I want to add information to the existing cloud profile telling the API 
user which of the models above are available.

The cloud profile will gain a field called "network-models" which will 
contain one or more names from an enum of pre-defined models. Multiple 
values can be listed, because some clouds provide more than one option.

** Bikeshed #2 **

Anybody have a problem with the key name "network-models"?

(Incidentally, the idea from this is borrowed from GCE's 
"compute#accessConfig" [0] - although they only have one model in their 
enum: "ONE_TO_ONE_NAT")

In a perfect future world where we have per-service capabilities 
discovery I'd love for such information to be exposed directly by 
neutron. Therefore, I'd LOVE if we can at agree that the concepts are 
concepts and on what to name them so that users who get the info from a 
  

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Davanum Srinivas
Sorry for the top post, Michal. Can you please clarify a couple of things:
1) Can folks install just one or two services for their specific scenario?
2) Can the container images from kolla be run on bare docker daemon?
3) Can someone take the kolla container images from say dockerhub and
use it without the Kolla framework?

Thanks,
Dims

On Mon, May 15, 2017 at 1:52 PM, Michał Jastrzębski  wrote:
> For starters, I want to emphasize that fresh set of dockerhub images
> was one of most requested features from Kolla on this summit and few
> other features more or less requires readily-available docker
> registry. Features like full release upgrade gates.
>
> This will have numerous benefits for users that doesn't have resources
> to put sophisticated CI/staging env, which, I'm willing to bet, is
> still quite significant user base. If we do it correctly (and we will
> do it correctly), images we'll going to push will go through series of
> gates which we have in Kolla (and will have more). So when you pull
> image, you know that it was successfully deployed within scenerios
> available in our gates, maybe even upgrade and increase scenerio
> coverage later? That is a huge benefit for actual users.
>
> On 15 May 2017 at 10:34, Doug Hellmann  wrote:
>> Last week at the Forum we had a couple of discussions about
>> collaboration between the various teams building or consuming
>> container images. One topic that came up was deciding how to publish
>> images from the various teams to docker hub or other container
>> registries. While the technical bits seem easy enough to work out,
>> there is still the question of precedence and whether it's a good
>> idea to do so at all.
>>
>> In the past, we have refrained from publishing binary packages in
>> other formats such as debs and RPMs. (We did publish debs way back
>> in the beginning, for testing IIRC, but switched away from them to
>> sdists to be more inclusive.) Since then, we have said it is the
>> responsibility of downstream consumers to build production packages,
>> either as distributors or as a deployer that is rolling their own.
>> We do package sdists for python libraries, push some JavaScript to
>> the NPM registries, and have tarballs of those and a bunch of other
>> artifacts that we build out of our release tools.  But none of those
>> is declared as "production ready," and so the community is not
>> sending the signal that we are responsible for maintaining them in
>> the context of production deployments, beyond continuing to produce
>> new releases when there are bugs.
>
> So for us that would mean something really hacky and bad. We are
> community driven not company driven project. We don't have Red Hat or
> Canonical teams behind us (we have contributors, but that's
> different).
>
>> Container images introduce some extra complexity, over the basic
>> operating system style packages mentioned above. Due to the way
>> they are constructed, they are likely to include content we don't
>> produce ourselves (either in the form of base layers or via including
>> build tools or other things needed when assembling the full image).
>> That extra content means there would need to be more tracking of
>> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> as needed.
>
> We can do this by building daily, which was the plan in fact. If we
> build every day you have at most 24hrs old packages, CVEs and things
> like that on non-openstack packages are still maintained by distro
> maintainers.
>
>> Given our security and stable team resources, I'm not entirely
>> comfortable with us publishing these images, and giving the appearance
>> that the community *as a whole* is committing to supporting them.
>> I don't have any objection to someone from the community publishing
>> them, as long as it is made clear who the actual owner is. I'm not
>> sure how easy it is to make that distinction if we publish them
>> through infra jobs, so that may mean some outside process. I also
>> don't think there would be any problem in building images on our
>> infrastructure for our own gate jobs, as long as they are just for
>> testing and we don't push those to any other registries.
>
> Today we use Kolla account for that and I'm more than happy to keep it
> this way. We license our code with ASL which gives no guarantees.
> Containers will be licensed this way too, so they're available as-is
> and "production readiness" should be decided by everyone who runs it.
> That being said what we *can* promise is that our containers passed
> through more or less rigorous gates and that's more than most of
> packages/self-built containers ever do. I think that value would be
> appreciated by small to mid companies that just want to work with
> openstack and don't have means to spare teams/resources for CI.
>
>> I'm raising the issue here to get some more input into how to
>> proceed. Do other people think this concern is 

Re: [openstack-dev] [neutron] diagnostics

2017-05-15 Thread Boden Russell

On 5/12/17 12:31 PM, Armando M. wrote:
>
> Please, do provide feedback in case I omitted some other key takeaway.
>
> [1] https://etherpad.openstack.org/p/pike-neutron-diagnostics
> [2] 
> http://specs.openstack.org/openstack/neutron-specs/specs/pike/diagnostics.html
>
Glad you all got a chance to discuss this topic!

I've added some additional notes and comments to the etherpad ([1] from
your list). Please feel free to reach out to me on IRC ('boden') for further
discussion.

Thanks



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-15 Thread Ken Giusti
Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.

The pika driver is another rabbitmq-based driver.  It was developed as
a replacement for the current rabbit driver (rabbit://).  The pika
driver is based on the 'pika' rabbitmq client library [2], rather than
the kombu library [3] of the current rabbitmq driver.  The pika
library was recommended by the rabbitmq community a couple of summits
ago as a better client than the kombu client.

However, testing done against this driver did not show an "appreciable
difference in performance or reliability" over the existing rabbitmq
driver.

Given this, and the recent departure of some very talented
contributors, the consensus is to deprecate pika and recommend users
stay with the original rabbitmq driver.
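
For anyone wondering what staying with the original driver means in
practice: oslo.messaging picks the driver from the scheme of the
transport URL, so it's mostly a matter of keeping rabbit:// (rather
than pika://) in the transport URL. A minimal sketch, with purely
illustrative credentials and host:

from oslo_config import cfg
import oslo_messaging

conf = cfg.CONF

# The scheme of the transport URL selects the driver: 'rabbit://' is the
# kombu-based driver, 'pika://' is the one being deprecated here.
transport = oslo_messaging.get_transport(
    conf, url='rabbit://guest:guest@127.0.0.1:5672/')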

The plan is to mark the driver as deprecated in Pike, removal in Rocky.

thanks,


[1] 
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
  (~ line 80)
[2] https://github.com/pika/pika
[3] https://github.com/celery/kombu

-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Michał Jastrzębski
For starters, I want to emphasize that a fresh set of Dockerhub images
was one of the most requested features from Kolla at this summit, and a
few other features more or less require a readily-available Docker
registry. Features like full release upgrade gates, for example.

This will have numerous benefits for users who don't have the resources
to put up a sophisticated CI/staging environment, which, I'm willing to
bet, is still quite a significant user base. If we do it correctly (and
we will do it correctly), the images we're going to push will go through
the series of gates which we have in Kolla (and will have more). So when
you pull an image, you know that it was successfully deployed within the
scenarios available in our gates; maybe we can even add upgrades and
increase scenario coverage later? That is a huge benefit for actual users.

On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> Last week at the Forum we had a couple of discussions about
> collaboration between the various teams building or consuming
> container images. One topic that came up was deciding how to publish
> images from the various teams to docker hub or other container
> registries. While the technical bits seem easy enough to work out,
> there is still the question of precedence and whether it's a good
> idea to do so at all.
>
> In the past, we have refrained from publishing binary packages in
> other formats such as debs and RPMs. (We did publish debs way back
> in the beginning, for testing IIRC, but switched away from them to
> sdists to be more inclusive.) Since then, we have said it is the
> responsibility of downstream consumers to build production packages,
> either as distributors or as a deployer that is rolling their own.
> We do package sdists for python libraries, push some JavaScript to
> the NPM registries, and have tarballs of those and a bunch of other
> artifacts that we build out of our release tools.  But none of those
> is declared as "production ready," and so the community is not
> sending the signal that we are responsible for maintaining them in
> the context of production deployments, beyond continuing to produce
> new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

> Container images introduce some extra complexity, over the basic
> operating system style packages mentioned above. Due to the way
> they are constructed, they are likely to include content we don't
> produce ourselves (either in the form of base layers or via including
> build tools or other things needed when assembling the full image).
> That extra content means there would need to be more tracking of
> upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> as needed.

We can do this by building daily, which was in fact the plan. If we
build every day, you have packages that are at most 24 hours old; CVEs
and the like in non-OpenStack packages are still maintained by the
distro maintainers.

> Given our security and stable team resources, I'm not entirely
> comfortable with us publishing these images, and giving the appearance
> that the community *as a whole* is committing to supporting them.
> I don't have any objection to someone from the community publishing
> them, as long as it is made clear who the actual owner is. I'm not
> sure how easy it is to make that distinction if we publish them
> through infra jobs, so that may mean some outside process. I also
> don't think there would be any problem in building images on our
> infrastructure for our own gate jobs, as long as they are just for
> testing and we don't push those to any other registries.

Today we use the Kolla account for that and I'm more than happy to keep
it this way. We license our code with the ASL, which gives no
guarantees. Containers will be licensed this way too, so they're
available as-is and "production readiness" should be decided by everyone
who runs them. That being said, what we *can* promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-size companies that just want to
work with OpenStack and don't have the means to spare teams/resources
for CI.

> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually at release time
today. If we decide to block it, I assume we should stop doing that too?
That will hurt users who use this piece of Kolla, and I'd hate to hurt
our

[openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-15 Thread Doug Hellmann
Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools.  But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community *as a whole* is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Monty Taylor

On 05/15/2017 11:44 AM, Ian Wells wrote:

I'm coming to this cold, so apologies when I put my foot in my mouth.
But I'm trying to understand what you're actually getting at, here -
other than helpful simplicity - and I'm not following the detail of
you're thinking, so take this as a form of enquiry.


Thanks for diving in - no foot-mouth worries here. It's a hard topic.


On 14 May 2017 at 10:02, Monty Taylor > wrote:

First off, we need to define two terms:
"external" - an address that can be used for north-south
communication off the cloud
"internal" - an address that can be used for east-west communication
with and only with other things on the same cloud


I'm going through the network detail of this and picking out
shortcomings, so please understand that before you read on.

I think I see what you're trying to accomplish, but the details don't
add up for me.  The right answer might be 'you don't use this if you
want fine detailed network APIs' - and that's fine - but I think the
issue is not coming up with a model that contradicts the fine detail of
what you can do with a network today and how you can put it to use.


Yes. The _general_ answer is that this is not supposed to attempt to 
describe all of the networking possibilities - but also, exactly as you 
said, we don't want to _contradict_ those things.



1. What if there are more domains of connectivity, since a cloud can be
connected to multiple domains?  I see this in its current form as
intended for public cloud providers as much as anything, in which case
there is probably only one definition of 'external', for instance, but
if you want to make it more widely useful you could define off-cloud
routing domain names, of which 'external' (or, in that context,
'internet') is one with a very specific meaning.


Right. So 'external' isn't intended to describe destination, as much as 
'does this go to other things' and 'if so how'.


You're totally right - if there is more than one "external" domain, 
this information will be insufficient. I'd expect each of the networks 
in question to be tagged as "external", but the choice of where they go 
is a case where a user is going to need more information.


Related to an above response though - let's say a user has a cloud that 
connects to three network domains that are not in the cloud in question: 
A, B and C. At no point in that user's life is "I want a VM that has 
external connectivity" going to be a sensible request. They will, out of 
necessity, always need to say "I want a VM that can connect to domain A" 
- which is already very nicely handled.


This is definitely inspired by the public clouds, since this is one of 
the top two things that are super hard for users to figure out and that 
we currently have to paper over for them in client libraries (image 
upload, fwiw, is the other). But I hope it degrades well for more complex 
private clouds. That is, the existence (or lack) of such tags on a 
private cloud should not impact the existing experience for those users 
at all.



2. What is 'internal', precisely?  It seems to be in-cloud, though I
don't entirely understand how NAT comes into that.  What of a route to
the provider's internal network?  How does it apply when I have multiple
tenant networks that can't talk to each other, when they're provisioned
for me and I can't create them, and so on?  Why doesn't it apply to IPv6?


Yes, internal is intended to be in-cloud. It's intended to handle the 
"please give me a server that can't talk to things that aren't in my 
personal private network" case. Similar to the external case, if a user 
has more than one project network, there is no way "I want a VM that 
does not have external connectivity" will be a useful thing to say. 
They'll have to say "I want a VM that is attached to the network I 
created for my database tier" - and again that's currently handled nicely.


In the current proposal, IPv6 isn't covered in more depth because the 
question with IPv6 (in my experience - I could obviously be wrong) is 
simply "can I get me some IPv6?"


As for NAT - I included internal NAT for completeness because I heard 
that someone added the ability to Neutron to get floating IPs from one 
Neutron private network and use them to connect to a different Neutron 
private network. I have not seen this in action myself, nor do I want 
to - but it seemed that having an enum value to cover it wasn't expensive.


As to why we cover NAT at all - that's more to do with workloads running 
on the server. Some workloads like to be able to look at the IP stack on 
the server and see their own network information (Kerberos comes to 
mind) while others don't care. And some folks may be designing apps that 
assume they're behind NAT of some sort. On some clouds both are 
available, so a user with a preference needs to be able to express it.



3. Why doesn't your format tell me how to 

Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Monty Taylor

On 05/15/2017 11:51 AM, Doug Hellmann wrote:

Excerpts from Jay Pipes's message of 2017-05-15 12:40:17 -0400:

On 05/14/2017 01:02 PM, Monty Taylor wrote:

** Bikeshed #1 **

Are "internal" and "external" ok with folks as terms for those two ideas?


Yup, ++ from me on the above.


** Bikeshed #2 **

Anybody have a problem with the key name "network-models"?


They're not network models. They're access/connectivity policies. :)


(Incidentally, the idea from this is borrowed from GCE's
"compute#accessConfig" [0] - although they only have one model in their
enum: "ONE_TO_ONE_NAT")

In a perfect future world where we have per-service capabilities
discovery I'd love for such information to be exposed directly by
neutron.


I actually don't see this as a Neutron thing. It's the *workload*
connectivity expectations that you're describing, not anything to do
with networks, subnets or ports.

So, I think actually Nova would be a better home for this capability
discovery, for similar reasons why get-me-a-network was mostly a Nova
user experience...

So, I suppose I'd prefer to call this thing an "access policy" or
"access model", optionally prefixing that with "network", i.e. "network
access policy".


We have enough things overloading the term "policy." Let's get out
a thesaurus for this one. ;-)


Good points from both of you - thank you.

Access Model would be fine by me - it's very similar to the GCE term 
which is "access config".





** Bikeshed #3 **

What do we call the general concepts represented by fixed and floating
ips? Do we use the words "fixed" and "floating"? Do we instead try
something else, such as "direct" and "nat"?

I have two proposals for the values in our enum:

#1 - using fixed / floating

ipv4-external-fixed
ipv4-external-floating
ipv4-internal-fixed
ipv4-internal-floating
ipv6-fixed


Definitely -1 on using fixed/floating.


#2 - using direct / nat

ipv4-external-direct
ipv4-external-nat
ipv4-internal-direct
ipv4-internal-nat
ipv6-direct


I'm good with direct and nat. +1 from me.


On the other hand, "direct" isn't exactly a commonly used word in this
context. I asked a ton of people at the Summit last week and nobody
could come up with a better term for "IP that is configured inside of
the server's network stack". "non-natted", "attached", "routed" and
"normal" were all suggested. I'm not sure any of those are super-great -
so I'm proposing "direct" - but please if you have a better suggestion
please make it.


The other problem with the term "direct" is that there is already a vNIC
type of the same name which refers to a guest's vNIC using a host
passthrough device.


We need more words. :)


So, maybe non-nat or no-nat would be better? Or hell, make it a boolean
is_nat or has_nat if we're really just referring to whether an IP is
NATted or not?


I think the questions are:

"Does this cloud support accessing this server using NAT?"
"Does this cloud support accessing this server without NAT?"

(which your suggestion would carry)

However, I'm skittish on calling it "nat" and "not-nat" - because as a 
user not coming from a background where servers are accessed via NAT, 
I'm not sure I'd think to express my desire as "can I have a not-nat 
please?" But maybe that's just the world we live in - where normal 
"Internet" connectivity is only known as "isn't natted"...


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Zane Bitter

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples we 
have all use OS::Ceilometer::* resources for alarms. We have a global 
environment thingy that maps those to OS::Aodh::*, so at least in theory 
those templates should continue to work, but there are actually no 
examples that I can find of autoscaling templates doing things the way 
we want everyone to do them.



The backwards compatibility is not always correct as I have seen when
developing our library of templates on Liberty and then trying to deploy it
on Mitaka for example.


Yeah, I guess it's true that there are sometimes deprecated resource
interfaces that get removed on upgrade to a new OpenStack version, and that
is independent of the HOT version.


What if instead of a directory per release, we just had a 'deprecated' 
directory where we move stuff that is going away (e.g. anything relying 
on OS::Glance::Image), and then delete templates once what they rely on 
has disappeared from every supported release (e.g. LBaaSv1 must be close, 
if it isn't gone already).



As we've proven, maintaining these templates has been a challenge given the
available resources, so I guess I'm still in favor of not duplicating a bunch
of templates, e.g perhaps we could focus on a target of CI testing
templates on the current stable release as a first step?


I'd rather do CI against Heat master, I think, but yeah that sounds like 
the first step. Note that if we're doing CI on old stuff then we'd need 
to do heat-templates stable branches rather than directory-per-release.


With my suggestion above, we could just not check anything in the 
'deprecated' directory maybe?



As you guys mentioned in our discussions the Networking example I quoted is
not something you guys can deal with as the source project affects this.

Unless we can use this exercise to test these and fix them then I am
happier.

My vision would be to have a set of templates and examples that are tested
regularly against a running OS deployment so that we can make sure the
combinations still run. I am sure we can agree on a way to do this with CICD
so that we test the fetureset.


Agreed, getting the approach to testing agreed seems like the first step -
FYI we do already have automated scenario tests in the main heat tree that
consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario

So, in theory, getting a similar test running on heat_templates should be
fairly simple, but getting all the existing templates working is likely to
be a bigger challenge.


Even if we just ran the 'template validate' command on them to check 
that all of the resource types & properties still exist, that would be 
pretty helpful. It'd catch a lot of the cases where we break backwards 
compatibility, so we can decide to either fix it or deprecate/remove the 
obsolete template. (Note that you still need all of the services 
installed, or at least endpoints in the catalog, for the validation to 
work.)
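
Something along these lines would probably be enough as a first pass - a 
rough sketch, assuming the templates live under a hot/ directory and 
cloud credentials are available in the environment (the 'openstack 
orchestration template validate' command comes from python-heatclient's 
OSC plugin):

import glob
import subprocess

failed = []
for template in sorted(glob.glob('hot/*.yaml')):
    # Validate each template against the cloud's service catalog.
    ret = subprocess.call(['openstack', 'orchestration', 'template',
                           'validate', '--template', template])
    if ret != 0:
        failed.append(template)

print('%d template(s) failed validation: %s' % (len(failed), failed))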


Actually creating all of the stuff would be nice, but it'll likely be 
difficult (just keeping up-to-date OS images to boot from is a giant 
pain). And even then that isn't sufficient to test that it actually 
_works_. Let's keep that out of scope for now?


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][logging] oslo.log fluentd native logging

2017-05-15 Thread Joe Talerico
On Wed, May 10, 2017 at 5:41 PM, Dan Prince  wrote:
> On Mon, 2017-04-24 at 07:47 -0400, Joe Talerico wrote:
>> Hey owls - I have been playing with oslo.log fluentd integration[1]
>> in
>> a poc commit here [2]. Enabling the native service logging is nice
>> and
>> tracebacks no longer multiple inserts into elastic - there is a
>> "traceback" key which would contain the traceback if there was one.
>>
>> The system-level / kernel level logging is still needed with the
>> fluent client on each Overcloud node.
>>
>> I see Martin did the initial work [3] to integrate fluentd, is there
>> anyone looking at migrating the OpenStack services to using the
>> oslo.log facility?
>
> Nobody officially implementing this yet that I know of. But it does
> look promising.
>
> The idea of using oslo.logs fluentd formatter could dovetail very
> nicely into our new containers (docker) servers for Pike in that it
> would allow us to log to stdout directly within the container... but
> still support the Fluentd logging interfaces that we have today.

Right, I think we give the user the option for oslo.log fluentd for
OpenStack services. We will still need fluentd to send the other noise
-- kernel/rabbit/etc

>
> The only downside would be that not all services in OpenStack support
> olso.log (I don't think Swift does for example). Nor do some of the
> core services we deploy like Galera and RabbitMQ. So we'd have a mixed
> bag of host and stdout logging perhaps for some things or would need to
> integrate with Fluentd differently for services without oslo.log
> support.

Yeah, this is the downside...

>
> Our current approach to containers logging in TripleO recently landed
> here and exposed the logs to a directory on the host specifically so
> that we could aim to support Fluentd integrations:
>
> https://review.openstack.org/#/c/442603/
>
> Perhaps we should revisit this in the (near) future to improve our
> containers deployments.
>
> Dan

I think the oslo.log fluentd work shouldn't be much to integrate, and it
could give the container work something to play with sooner rather than
later.
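
As a strawman, here's roughly what wiring an oslo.log-based service
straight to fluentd could look like, using the standard logging
dictConfig together with oslo.log's FluentFormatter and the
fluent-logger package's handler. The tag, host and port values are
illustrative assumptions, not anything TripleO ships today:

import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        # Emits the structured records (including the 'traceback' key)
        # described above.
        'fluent': {'()': 'oslo_log.formatters.FluentFormatter'},
    },
    'handlers': {
        'fluent': {
            'class': 'fluent.handler.FluentHandler',
            'formatter': 'fluent',
            'tag': 'openstack.nova',   # illustrative tag
            'host': 'localhost',       # illustrative fluentd endpoint
            'port': 24224,
        },
    },
    'root': {'handlers': ['fluent'], 'level': 'INFO'},
})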

Who from the ops-tools side could I work with on this -- or maybe
people don't see this as a high enough priority?

Joe

>
>>
>> Joe
>>
>> [1] https://github.com/openstack/oslo.log/blob/master/oslo_log/format
>> ters.py#L167
>> [2] https://review.openstack.org/#/c/456760/
>> [3] https://specs.openstack.org/openstack/tripleo-specs/specs/newton/
>> tripleo-opstools-centralized-logging.html
>>
>> _
>> _
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
>> cribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2017-05-15 12:40:17 -0400:
> On 05/14/2017 01:02 PM, Monty Taylor wrote:
> > ** Bikeshed #1 **
> >
> > Are "internal" and "external" ok with folks as terms for those two ideas?
> 
> Yup, ++ from me on the above.
> 
> > ** Bikeshed #2 **
> >
> > Anybody have a problem with the key name "network-models"?
> 
> They're not network models. They're access/connectivity policies. :)
> 
> > (Incidentally, the idea from this is borrowed from GCE's
> > "compute#accessConfig" [0] - although they only have one model in their
> > enum: "ONE_TO_ONE_NAT")
> >
> > In a perfect future world where we have per-service capabilities
> > discovery I'd love for such information to be exposed directly by
> > neutron.
> 
> I actually don't see this as a Neutron thing. It's the *workload* 
> connectivity expectations that you're describing, not anything to do 
> with networks, subnets or ports.
> 
> So, I think actually Nova would be a better home for this capability 
> discovery, for similar reasons why get-me-a-network was mostly a Nova 
> user experience...
> 
> So, I suppose I'd prefer to call this thing an "access policy" or 
> "access model", optionally prefixing that with "network", i.e. "network 
> access policy".

We have enough things overloading the term "policy." Let's get out
a thesaurus for this one. ;-)

Doug

> 
> > ** Bikeshed #3 **
> >
> > What do we call the general concepts represented by fixed and floating
> > ips? Do we use the words "fixed" and "floating"? Do we instead try
> > something else, such as "direct" and "nat"?
> >
> > I have two proposals for the values in our enum:
> >
> > #1 - using fixed / floating
> >
> > ipv4-external-fixed
> > ipv4-external-floating
> > ipv4-internal-fixed
> > ipv4-internal-floating
> > ipv6-fixed
> 
> Definitely -1 on using fixed/floating.
> 
> > #2 - using direct / nat
> >
> > ipv4-external-direct
> > ipv4-external-nat
> > ipv4-internal-direct
> > ipv4-internal-nat
> > ipv6-direct
> 
> I'm good with direct and nat. +1 from me.
> 
> > On the other hand, "direct" isn't exactly a commonly used word in this
> > context. I asked a ton of people at the Summit last week and nobody
> > could come up with a better term for "IP that is configured inside of
> > the server's network stack". "non-natted", "attached", "routed" and
> > "normal" were all suggested. I'm not sure any of those are super-great -
> > so I'm proposing "direct" - but please if you have a better suggestion
> > please make it.
> 
> The other problem with the term "direct" is that there is already a vNIC 
> type of the same name which refers to a guest's vNIC using a host 
> passthrough device.
> 
> So, maybe non-nat or no-nat would be better? Or hell, make it a boolean 
> is_nat or has_nat if we're really just referring to whether an IP is 
> NATted or not?
> 
> Best,
> -jay
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Ian Wells
I'm coming to this cold, so apologies when I put my foot in my mouth.  But
I'm trying to understand what you're actually getting at, here - other than
helpful simplicity - and I'm not following the detail of your thinking,
so take this as a form of enquiry.

On 14 May 2017 at 10:02, Monty Taylor  wrote:

> First off, we need to define two terms:
> "external" - an address that can be used for north-south communication off
> the cloud
> "internal" - an address that can be used for east-west communication with
> and only with other things on the same cloud
>

I'm going through the network detail of this and picking out shortcomings,
so please understand that before you read on.

I think I see what you're trying to accomplish, but the details don't add
up for me.  The right answer might be 'you don't use this if you want
fine-detailed network APIs' - and that's fine - but I think the trick is not
to come up with a model that contradicts the fine detail of what you can do
with a network today and how you can put it to use.

1. What if there are more domains of connectivity, since a cloud can be
connected to multiple domains?  I see this in its current form as intended
for public cloud providers as much as anything, in which case there is
probably only one definition of 'external', for instance, but if you want
to make it more widely useful you could define off-cloud routing domain
names, of which 'external' (or, in that context, 'internet') is one with a
very specific meaning.

2. What is 'internal', precisely?  It seems to be in-cloud, though I don't
entirely understand how NAT comes into that.  What of a route to the
provider's internal network?  How does it apply when I have multiple tenant
networks that can't talk to each other, when they're provisioned for me and
I can't create them, and so on?  Why doesn't it apply to IPv6?

3. Why doesn't your format tell me how to get a port/address of the type in
question?  Do you feel that everything will be consistent in that regard?
To my mind it's more useful - at the least - to tell me the *identity* of
the network I should be using rather than saying 'such a thing is possible
in the abstract'.

[...]

"get me a server with only an internal ipv4 and please fail if that isn't
> possible"
>
>   create_server(
>   'my-server', external_network=False, internal_network=True)
>

A comment on all of these: are you considering this to be an argument that
is acted upon in the library, or available on the server?

Doing this in the library makes more sense to me.  I prefer the idea of
documenting in machine-readable form how to use the APIs, because it means
I can use a cloud without the cloud supporting the API.  For many clouds,
the description could be a static file, but for more complex situations it
would be possible to generate it programmatically per tenant.

Doing it the other way could also lead to cloud-specific code, and without
some clearer specification it might also lead to cloud-specific behaviour.

It's also complexity that simply doesn't need to be in the cloud; putting
it in the application gives an application with a newer library the
opportunity to use an older cloud.

2) As information on networks themselves:
>
> GET /networks.json
> {
>   "networks": [
> {
>   "status": "ACTIVE",
>   "name": "GATEWAY_NET_V6",
>   "id": "54753d2c-0a58-4928-9b32-084c59dd20a6",
>   "network-models": [
> "ipv4-internal-direct",
> "ipv6-direct"
>   ]
> },
>

[...]

I think the problem with this as a concept, if this is what you're
eventually driving towards, is how you would enumerate this for a network.

IPv6 may be routed to the internet (or other domains) or it may not, but if
it is it's not currently optional to be locally routed and not internet
routed on a given network as it is for a v4 address to be fixed without a
floating component.  (You've skipped this by listing only ipv6-direct, I
think, as an option, where you have ipv4-fixed).

ipv4 may be routed to the internet if a router is connected, but I can
connect a router after the fact and I can add a floating IP to a port after
the fact too.  If you're just thinking in terms of 'when starting a VM, at
this instant in time' then that might not be quite so much of an issue.

I'm not suggesting putting info on subnets, since one requests connectivity
> from a network, not a subnet.
>

Not accurate - I can select a subnet on a network, and it can change who I
can talk to based on routes.  Neutron routers are attached to subnets, not
networks.

On a final note, this is really more about 'how do I make a port with this
sort of connectivity' with the next logical step being that many VMs only
need one port.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Jay Pipes

On 05/14/2017 01:02 PM, Monty Taylor wrote:

** Bikeshed #1 **

Are "internal" and "external" ok with folks as terms for those two ideas?


Yup, ++ from me on the above.


** Bikeshed #2 **

Anybody have a problem with the key name "network-models"?


They're not network models. They're access/connectivity policies. :)


(Incidentally, the idea from this is borrowed from GCE's
"compute#accessConfig" [0] - although they only have one model in their
enum: "ONE_TO_ONE_NAT")

In a perfect future world where we have per-service capabilities
discovery I'd love for such information to be exposed directly by
neutron.


I actually don't see this as a Neutron thing. It's the *workload* 
connectivity expectations that you're describing, not anything to do 
with networks, subnets or ports.


So, I think actually Nova would be a better home for this capability 
discovery, for similar reasons why get-me-a-network was mostly a Nova 
user experience...


So, I suppose I'd prefer to call this thing an "access policy" or 
"access model", optionally prefixing that with "network", i.e. "network 
access policy".



** Bikeshed #3 **

What do we call the general concepts represented by fixed and floating
ips? Do we use the words "fixed" and "floating"? Do we instead try
something else, such as "direct" and "nat"?

I have two proposals for the values in our enum:

#1 - using fixed / floating

ipv4-external-fixed
ipv4-external-floating
ipv4-internal-fixed
ipv4-internal-floating
ipv6-fixed


Definitely -1 on using fixed/floating.


#2 - using direct / nat

ipv4-external-direct
ipv4-external-nat
ipv4-internal-direct
ipv4-internal-nat
ipv6-direct


I'm good with direct and nat. +1 from me.


On the other hand, "direct" isn't exactly a commonly used word in this
context. I asked a ton of people at the Summit last week and nobody
could come up with a better term for "IP that is configured inside of
the server's network stack". "non-natted", "attached", "routed" and
"normal" were all suggested. I'm not sure any of those are super-great -
so I'm proposing "direct" - but if you have a better suggestion, please
make it.


The other problem with the term "direct" is that there is already a vNIC 
type of the same name which refers to a guest's vNIC using a host 
passthrough device.


So, maybe non-nat or no-nat would be better? Or hell, make it a boolean 
is_nat or has_nat if we're really just referring to whether an IP is 
NATted or not?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Sean Dague
On 05/15/2017 12:16 PM, Doug Hellmann wrote:
> Excerpts from Zane Bitter's message of 2017-05-15 11:43:07 -0400:
>> On 15/05/17 10:35, Doug Hellmann wrote:
>>> Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:
 On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
>> One of the things that came up in a logging Forum session is how much
>> effort operators are having to put into reconstructing flows for things
>> like server boot when they go wrong, as every time we jump a service
>> barrier the request-id is reset to something new. The back and forth
>> between Nova / Neutron and Nova / Glance would be definitely well served
>> by this. Especially if this is something that's easy to query in elastic
>> search.
>>
>> The last time this came up, some people were concerned that trusting
>> request-id on the wire was concerning to them because it's coming from
>> random users. We're going to assume that's still a concern by some.
>> However, since the last time that came up, we've introduced the concept
>> of "service users", which are a set of higher priv services that we are
>> using to wrap user requests between services so that long running
>> request chains (like image snapshot) can keep going. We trust these service
>> users enough to keep on trucking even after the user token has expired for
>> these long running operations. We could use this same trust path for
>> request-id chaining.
>>
>> So, the basic idea is, services will optionally take an inbound
>> X-OpenStack-Request-ID which will be strongly validated to the format
>> (req-$uuid). They will continue to always generate one as well. When the
>
> Do all of our services use that format for request ID? I thought Heat
> used something added on to a UUID, or at least longer than a UUID?
>>
>> FWIW I don't recall ever hearing this.
>>
>> - ZB
> 
> OK, maybe I'm mixing it up with some other field that we expected to be
> a UUID and wasn't. Ignore me and proceed. :-)

Given that the validation will be in a single function in
oslo.middleware.request_id, if projects have other needs in the future,
there will be a single knob to turn. However, starting strict to be
req-$UUID eliminates a whole class of potential bugs and injection concerns.
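For the avoidance of doubt, the strict check being described here is tiny -
something along these lines (an illustrative sketch only, not the actual
oslo.middleware code):

    import re

    # req- followed by a canonically formatted UUID, nothing else
    _REQ_ID_RE = re.compile(
        r'^req-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}$')


    def is_valid_inbound_request_id(value):
        # anything that fails the check is simply ignored and the service
        # falls back to generating its own request-id, i.e. current behavior
        return bool(value and _REQ_ID_RE.match(value.lower()))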

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-15 Thread Steven Hardy
On Mon, May 08, 2017 at 02:45:08PM +0300, Marios Andreou wrote:
>Hi folks, after some discussion locally with colleagues about improving
>the upgrades experience, one of the items that came up was pre-upgrade and
>update validations. I took an AI to look at the current status of
>tripleo-validations [0] and posted a simple WIP [1] intended to be run
>before an undercloud update/upgrade and which just checks service status.
>It was pointed out by shardy that for such checks it is better to instead
>continue to use the per-service  manifests where possible like [2] for
>example where we check status before N..O major upgrade. There may still
>be some undercloud specific validations that we can land into the
>tripleo-validations repo (thinking about things like the neutron
>networks/ports, validating the current nova nodes state etc?).
>So do folks have any thoughts about this subject - for example the kinds
>of things we should be checking - Steve said he had some reviews in
>progress for collecting the overcloud ansible puppet/docker config into an
>ansible playbook that the operator can invoke for upgrade of the 'manual'
>nodes (for example compute in the N..O workflow) - the point being that we
>can add more per-service ansible validation tasks into the service
>manifests for execution when the play is run by the operator - but I'll
>let Steve point at and talk about those. 

Thanks for starting this thread Marios, sorry for the slow reply due to
Summit etc.

As we discussed, I think adding validations is great, but I'd prefer we
kept any overcloud validations specific to services in t-h-t instead of
trying to manage service specific things over multiple repos.

This would also help with the idea of per-step validations I think, where
e.g. you could have an "is service active" test and run it after the step
where we expect the service to start; a blueprint was raised a while back
asking for exactly that:

https://blueprints.launchpad.net/tripleo/+spec/step-by-step-validation

One way we could achieve this is to add ansible tasks that perform some
validation after each step, where we combine the tasks for all services,
similar to how we already do upgrade_tasks and host_prep_tasks:

https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/database/redis.yaml#L92

With the benefit of hindsight, using ansible tags for upgrade_tasks wasn't
the best approach, because you can't change the tags via SoftwareDeployment
(e.g. you need a SoftwareConfig per step). It's better if we either generate
the list of tasks by merging maps, e.g.:

  validation_tasks:
    step3:
      - sometask

Or via ansible conditionals where we pass a step value in to each run of
the tasks:

  validation_tasks:
    - sometask
      when: step == 3

The latter approach is probably my preference, because it'll require less
complex merging in the heat layer.

As you mentioned, I've been working on ways to make the deployment steps
more ansible driven, so having these tasks integrated with the t-h-t model
would be well aligned with that I think:

https://review.openstack.org/#/c/454816/

https://review.openstack.org/#/c/462211/

Happy to discuss further when you're ready to start integrating some
overcloud validations.

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases] Stable branch conflicting information

2017-05-15 Thread Sean McGinnis
On Mon, May 15, 2017 at 03:19:09PM +0200, Thierry Carrez wrote:
> Sean McGinnis wrote:
> > So I noticed today that the release information [0] for Newton appears to
> > have the wrong date for when Newton transitions to the Legacy Phase.
> > According to this conversation [1], I think (thought?) we established that
> > rolling over to each support phase would stay on a 6 month cycle, despite
> > Ocata being a shorter development cycle.
> > 
> > I am not talking about EOL here, just the transition periods for stable
> > branches to move to the next phase.
> > 
> > Based on this, the Next Phase for Newton appears to be wrong because it is
> > on a 6 month period from the Ocata release, not based on Newton's actual
> > release date.
> 
> You are correct. Phase transitions are based on the initial release
> date, not the next ones. Phase III for Newton should start on 2017-10-06.
> 

Thanks Thierry. I've submitted https://review.openstack.org/#/c/464683/ to
correct that. I actually state 2017-10-09 there, as it appears that most recently
all of our transition dates have been the first Monday following the 6 month mark.

That can be debated in the review though if there are differing opinions on
that.

> > I was going to put up a patch to fix this, but then got myself really 
> > confused
> > because I couldn't actually reconcile the dates based on how the rest of the
> > phase information is listed there. Going off of what we state in our Stable
> > Branch phases [2], we are not following what we have published there.
> > 
> > Based on that information, Mitaka should still be in the Legacy phase, and
> > not actually EOL'd for another 6 months. (Well, technically that actual EOL
> > date isn't called out in the documentation, so I'm just assuming another 6
> > months)
> 
> Actually the duration of stable branch life support is independent of
> the definition of the 3 support phases. If you read the end of that
> paragraph, it says:
> 
> [snip]
> 
> Currently, the stable maint team supports branches for about 12 months.
> Depending on when exactly the branch is EOLed, that basically means you
> do not do much (if any) phase III support.
> 

It appears in practice we do not actually do phase III support. We could
clarify that, but I suppose leaving it as is gives us some leeway if we
do choose to keep a branch around a little longer.

It just looks a little odd to me to have the phase III date immediately
at or before the EOL date, but on the other hand, that does in fact
accurately reflect reality, so not really a concern.

> Hope this clarifies,
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Kaz Shinohara
Hi Lico and team,


Let me show up in this thread again, because I think now is a really good
time to introduce myself.
My name is Kazunori Shinohara (Kaz) working for NTT Communications as
software engineer.
I'm running Heat as our public cloud's orchestration service, adding our own
resource plugins and some patches, and I believe my experience with this
should help the further development of Heat and the relevant projects.

At the last summit, I attended the Upstream Institute and the Heat onboarding
session; I guess I was sitting next to Lance there.
Now I've made my first contribution, for a documentation bug, with Zane and
Huang's help.
https://review.openstack.org/#/c/463154/
I will definitely keep contributing more, especially on the bug and blueprint
side, and I don't mind any tasks like the template examples if I can help.

> *tutorial*: We got some reports about the lack of tutorials for
features like software config / rolling upgrade,
Yes, I think so. Honestly speaking, I skipped the software config function for
our public cloud because I could not figure out how it works.

> Also, we do hope to get more reports on how people use heat,
I will be able to provide feedback from actual use cases on our public cloud
going forward.

>(Wednesdays at 1500 UTC in #openstack-meeting-5) :)
I will join too.

Regards,

Kaz Shinohara
IRC: kazsh

2017-05-16 1:10 GMT+09:00 Steven Hardy :

> On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:
> > Hi Steve,
> >
> > I am happy to assist in any way to be honest.
> >
> > The backwards compatibility is not always correct as I have seen when
> > developing our library of templates on Liberty and then trying to deploy it
> > on Mitaka for example.
>
> Yeah, I guess it's true that there are sometimes deprecated resource
> interfaces that get removed on upgrade to a new OpenStack version, and that
> is independent of the HOT version.
>
> As we've proven, maintaining these templates has been a challenge given the
> available resources, so I guess I'm still in favor of not duplicating a
> bunch
> of templates, e.g perhaps we could focus on a target of CI testing
> templates on the current stable release as a first step?
>
> > As you guys mentioned in our discussions the Networking example I quoted is
> > not something you guys can deal with as the source project affects this.
> >
> > If we can use this exercise to test these and fix them, then I am
> > happier.
> >
> > My vision would be to have a set of templates and examples that are tested
> > regularly against a running OS deployment so that we can make sure the
> > combinations still run. I am sure we can agree on a way to do this with
> > CICD so that we test the feature set.
>
> Agreed, getting the approach to testing agreed seems like the first step -
> FYI we do already have automated scenario tests in the main heat tree that
> consume templates similar to many of the examples:
>
> https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario
>
> So, in theory, getting a similar test running on heat_templates should be
> fairly simple, but getting all the existing templates working is likely to
> be a bigger challenge.
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Zane Bitter's message of 2017-05-15 11:43:07 -0400:
> On 15/05/17 10:35, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:
> >> On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> >>> Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
>  One of the things that came up in a logging Forum session is how much
>  effort operators are having to put into reconstructing flows for things
>  like server boot when they go wrong, as every time we jump a service
>  barrier the request-id is reset to something new. The back and forth
>  between Nova / Neutron and Nova / Glance would be definitely well served
>  by this. Especially if this is something that's easy to query in elastic
>  search.
> 
>  The last time this came up, some people were concerned that trusting
>  request-id on the wire was concerning to them because it's coming from
>  random users. We're going to assume that's still a concern by some.
>  However, since the last time that came up, we've introduced the concept
>  of "service users", which are a set of higher priv services that we are
>  using to wrap user requests between services so that long running
>  request chains (like image snapshot) can keep going. We trust these service
>  users enough to keep on trucking even after the user token has expired for
>  these long running operations. We could use this same trust path for
>  request-id chaining.
> 
>  So, the basic idea is, services will optionally take an inbound
>  X-OpenStack-Request-ID which will be strongly validated to the format
>  (req-$uuid). They will continue to always generate one as well. When the
> >>>
> >>> Do all of our services use that format for request ID? I thought Heat
> >>> used something added on to a UUID, or at least longer than a UUID?
> 
> FWIW I don't recall ever hearing this.
> 
> - ZB

OK, maybe I'm mixing it up with some other field that we expected to be
a UUID and wasn't. Ignore me and proceed. :-)

Doug

> 
> >> Don't know, now is a good time to speak up.
> >> http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
> >> seems to indicate that it's using the format everyone else is using.
> >>
> >> Swift does things a bit differently with suffixes, but they aren't using
> >> the common middleware.
> >>
> >> I've done code look throughs on nova/glance/cinder/neutron/keystone, but
> >> beyond that folks will need to speak up as to where this might break
> >> down. At worst failing validation just means you end up in the old
> >> (current) behavior.
> >>
> >> -Sean
> >>
> >
> > OK. I vaguely remembered something from the early days of ceilometer
> > trying to collect those notifications, but maybe I'm confusing it with
> > something else. I've added [heat] to the subject line to get that team's
> > attention for input.
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Steven Hardy
On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:
> Hi Steve,
> 
> I am happy to assist in any way to be honest.
> 
> The backwards compatibility is not always correct as I have seen when
> developing our library of templates on Liberty and then trying to deploy it
> on Mitaka for example.

Yeah, I guess it's true that there are sometimes deprecated resource
interfaces that get removed on upgrade to a new OpenStack version, and that
is independent of the HOT version.

As we've proven, maintaining these templates has been a challenge given the
available resources, so I guess I'm still in favor of not duplicating a bunch
of templates, e.g perhaps we could focus on a target of CI testing
templates on the current stable release as a first step?

> As you guys mentioned in our discussions the Networking example I quoted is
> not something you guys can deal with as the source project affects this.
> 
> If we can use this exercise to test these and fix them, then I am
> happier.
> 
> My vision would be to have a set of templates and examples that are tested
> regularly against a running OS deployment so that we can make sure the
> combinations still run. I am sure we can agree on a way to do this with CICD
> so that we test the feature set.

Agreed, getting the approach to testing agreed seems like the first step -
FYI we do already have automated scenario tests in the main heat tree that
consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario

So, in theory, getting a similar test running on heat_templates should be
fairly simple, but getting all the existing templates working is likely to
be a bigger challenge.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Zane Bitter

On 15/05/17 10:35, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:

On 05/15/2017 09:35 AM, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:

One of the things that came up in a logging Forum session is how much
effort operators are having to put into reconstructing flows for things
like server boot when they go wrong, as every time we jump a service
barrier the request-id is reset to something new. The back and forth
between Nova / Neutron and Nova / Glance would be definitely well served
by this. Especially if this is something that's easy to query in elastic
search.

The last time this came up, some people were concerned that trusting
request-id on the wire was concerning to them because it's coming from
random users. We're going to assume that's still a concern by some.
However, since the last time that came up, we've introduced the concept
of "service users", which are a set of higher priv services that we are
using to wrap user requests between services so that long running
request chains (like image snapshot) can keep going. We trust these service
users enough to keep on trucking even after the user token has expired for
these long running operations. We could use this same trust path for
request-id chaining.

So, the basic idea is, services will optionally take an inbound
X-OpenStack-Request-ID which will be strongly validated to the format
(req-$uuid). They will continue to always generate one as well. When the


Do all of our services use that format for request ID? I thought Heat
used something added on to a UUID, or at least longer than a UUID?


FWIW I don't recall ever hearing this.

- ZB


Don't know, now is a good time to speak up.
http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
seems to indicate that it's using the format everyone else is using.

Swift does things a bit differently with suffixes, but they aren't using
the common middleware.

I've done code look throughs on nova/glance/cinder/neutron/keystone, but
beyond that folks will need to speak up as to where this might break
down. At worst failing validation just means you end up in the old
(current) behavior.

-Sean



OK. I vaguely remembered something from the early days of ceilometer
trying to collect those notifications, but maybe I'm confusing it with
something else. I've added [heat] to the subject line to get that team's
attention for input.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 20

2017-05-15 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 20.

Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance
notifications are sent with inconsistent timestamp format.
The solution now consists of three patches and the series is
waiting for code review:
https://review.openstack.org/#/q/topic:bug/1657428

[Medium] https://bugs.launchpad.net/nova/+bug/1687012
flavor-delete notification should not try to lazy-load projects
The patch https://review.openstack.org/#/c/461032 needs core review.


Versioned notification transformation
-------------------------------------
Let's continue focusing on the next three transformation patches:
* https://review.openstack.org/#/c/396225/ Transform
instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/396210/ Transform aggregate.add_host
notification
* https://review.openstack.org/#/c/396211/ Transform
aggregate.remove_host notification


Searchlight integration
-----------------------
bp additional-notification-fields-for-searchlight
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The keypairs patch has been split to add whole keypair objects only to
the instance.create notification and add only the key_name to every
instance. notification:
* https://review.openstack.org/#/c/463001 Add separate instance.create
payload type
* https://review.openstack.org/#/c/419730 Add keypairs field to
InstanceCreatePayload
* https://review.openstack.org/#/c/463002 Add key_name field to
InstancePayload

Adding BDM to instance. is also in the pipe:
* https://review.openstack.org/#/c/448779/

There is also a separate patch to add tags to instance.create:
https://review.openstack.org/#/c/459493/ Add tags to instance.create
Notification


Small improvements
~~~~~~~~~~~~~~~~~~
* https://review.openstack.org/#/c/418489/ Remove **kwargs passing in 
payload __init__
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual 
error reporting

* https://review.openstack.org/#/c/450787/ remove ugly local import
* https://review.openstack.org/#/c/453077 Add snapshot id to the 
snapshot notifications


* https://review.openstack.org/#/q/topic:refactor-notification-samples 
Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled json ref
implementation to the notification sample tests, as the existing python
json ref implementations are not well maintained.
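To give an idea of the scale, the hand-rolled part is roughly this much code
(a simplified sketch of the idea only, not the patch under review): a
recursive walk that inlines $ref'd sample files and lets sibling keys
override fields of the included fragment.

    import json
    import os


    def resolve_refs(node, sample_dir):
        if isinstance(node, dict):
            if '$ref' in node:
                path, _, _ = node['$ref'].partition('#')
                with open(os.path.join(sample_dir, path)) as f:
                    resolved = resolve_refs(json.load(f), sample_dir)
                # keys next to $ref act as overrides of the included sample
                overrides = {k: v for k, v in node.items() if k != '$ref'}
                resolved.update(resolve_refs(overrides, sample_dir))
                return resolved
            return {k: resolve_refs(v, sample_dir) for k, v in node.items()}
        if isinstance(node, list):
            return [resolve_refs(v, sample_dir) for v in node]
        return node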



Weekly meeting
--------------
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on the 16th of May.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170516T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Jenkins verification failures with stable release gates

2017-05-15 Thread Andrea Frittoli
On Mon, May 15, 2017 at 2:51 PM Hemanth N  wrote:

> Hi
>
> I have written new Tempest functional test cases for Identity OAUTH API.
> https://review.openstack.org/#/c/463240/
>
>
> This patch has a dependency on a keystone fix that is still under
> review, and I have mentioned that patch as Depends-On for the above one.
> (https://review.openstack.org/#/c/464577/)
>
> The gates with stable releases are failing but on master it is successful
> gate-tempest-dsvm-neutron-full-ubuntu-xenial-ocata
> gate-tempest-dsvm-neutron-full-ubuntu-xenial-newton
>
> I am assuming the stable releases will cherry-pick the Depends-On
> patches and then build/verify the environment.
> Is my understanding correct?
>

Your change is for keystone master, and it won't be automatically
cherry-picked.


> If not, how should I proceed in such scenarios.
>

Your change makes it possible to use the OAuth API in deployments where TLS
is terminated in front of the API server. I would use a feature flag on the
Tempest side to indicate whether this keystone feature is available: default
it to false, set it to true in Pike in devstack, and skip the tests if the
feature is not available.

Even if your change were backported, merging this would immediately enforce
the new behaviour on all Newton+ clouds, so I think having a feature flag
would be useful anyway.
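Roughly what I have in mind on the Tempest side is something like the sketch
below - note that the option name (oauth_tls_offload) and the test class are
invented here for illustration, and the new flag would need registering in
tempest/config.py with a False default:

    from tempest.api.identity import base
    from tempest import config

    CONF = config.CONF


    class OAuthAccessTokenTest(base.BaseIdentityV3AdminTest):

        @classmethod
        def skip_checks(cls):
            super(OAuthAccessTokenTest, cls).skip_checks()
            # hypothetical flag, registered under identity-feature-enabled
            if not CONF.identity_feature_enabled.oauth_tls_offload:
                raise cls.skipException(
                    'keystone OAuth behind a TLS-terminating proxy is not '
                    'available in this deployment')

        def test_create_access_token(self):
            # the actual OAuth calls from the proposed patch go here
            pass

Devstack (or any other deployment tool) would then only flip the flag to true
on Pike and later.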

andrea


> Thanks in Advance.
>
> Best Regards,
> Hemanth
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-15 Thread Ben Nemec



On 05/08/2017 06:45 AM, Marios Andreou wrote:

Hi folks, after some discussion locally with colleagues about improving
the upgrades experience, one of the items that came up was pre-upgrade
and update validations. I took an AI to look at the current status of
tripleo-validations [0] and posted a simple WIP [1] intended to be run
before an undercloud update/upgrade and which just checks service
status. It was pointed out by shardy that for such checks it is better
to instead continue to use the per-service  manifests where possible
like [2] for example where we check status before N..O major upgrade.
There may still be some undercloud specific validations that we can land
into the tripleo-validations repo (thinking about things like the
neutron networks/ports, validating the current nova nodes state etc?).

So do folks have any thoughts about this subject - for example the kinds
of things we should be checking - Steve said he had some reviews in
progress for collecting the overcloud ansible puppet/docker config into
an ansible playbook that the operator can invoke for upgrade of the
'manual' nodes (for example compute in the N..O workflow) - the point
being that we can add more per-service ansible validation tasks into the
service manifests for execution when the play is run by the operator -
but I'll let Steve point at and talk about those.


We had a similar discussion regarding controller node replacement 
because starting that process with the overcloud in an inconsistent 
state tends to end badly.  Unfortunately those docs are only available 
downstream at this time, but the basics were:


-Verify that the stack is in a *_COMPLETE state (this may seem obvious, 
but we've had people try to do these major processes while the stack is 
in a broken state)
-Verify undercloud disk space.  For node replacement we recommended a 
minimum of 10 GB free.

-Verify that all pacemaker services are up.
-Check Galera and Rabbit clusters and verify all nodes are up.
-For node replacement we also disabled stonith.  That might be a good 
idea during upgrades as well in case some services take a while to come 
back up.  You really don't want a node getting killed during the process.

-General undercloud service checks (nova, ironic, etc.)
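Most of these are easy to script as well; e.g. the first two could be as
small as the rough sketch below (not a supported tool - the stack name,
commands and the 10 GB threshold are just the ones mentioned above):

    import shutil
    import subprocess
    import sys

    MIN_FREE_GB = 10  # suggested minimum free space for node replacement


    def check_disk_space(path='/'):
        free_gb = shutil.disk_usage(path).free / 2 ** 30
        return free_gb >= MIN_FREE_GB, 'free space on %s: %.1f GB' % (path, free_gb)


    def check_stack_complete(stack_name='overcloud'):
        # assumes undercloud (stackrc) credentials are already sourced
        status = subprocess.check_output(
            ['openstack', 'stack', 'show', stack_name,
             '-f', 'value', '-c', 'stack_status']).decode().strip()
        return status.endswith('_COMPLETE'), '%s is %s' % (stack_name, status)


    if __name__ == '__main__':
        results = [check_disk_space(), check_stack_complete()]
        for ok, msg in results:
            print('%s: %s' % ('OK' if ok else 'FAIL', msg))
        sys.exit(0 if all(ok for ok, _ in results) else 1)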

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Stepping down from core

2017-05-15 Thread Loo, Ruby
Hi Lucas,

This is a big loss for our community (but lucky OVS/OVN projects!) It has been 
awesome to work with you over the years. I'll always treasure the friendship we 
have! I know you're still around but it won't be the same; I'll miss you :-(

So long core, and thanks for the fish, although Pixie Boots might be your 
biggest legacy ;)
--ruby

From: Lucas Alvares Gomes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 9, 2017 at 10:15 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [Ironic] Stepping down from core

Hi all,

This is a difficult email to send. As some of you might already know,
lately I've been focusing on the OVS/OVN (and related) projects and I
don't have much time left to dedicate on reviewing patches in Ironic,
at least for now.

My biggest priority for this cycle was to create a basic Redfish
driver and now that the patches are merged and a -nv job is running in
the gate I feel like it would be a good time to step down from the
core team. If needed, I could still help out as core reviewer on some
small projects which I'm very familiar with such as Sushy and
VirtualBMC; those don't take much time and the review queue is
usually short.

Also, this is not a good-bye email, I'm still very interested in
Ironic and I will continue to follow the project closely as well as be
around in the #openstack-ironic IRC channel at all times (-:

So, thanks for everything everyone, it's been great to work with you
all for all these years!!!

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Lance Haig

Hi Steve,

I am happy to assist in any way to be honest.

The backwards compatibility is not always correct as I have seen when 
developing our library of templates on Liberty and then trying to deploy 
it on Mitaka for example.


As you guys mentioned in our discussions the Networking example I quoted 
is not something you guys can deal with as the source project affects this.


If we can use this exercise to test these and fix them, then I am
happier.


My vision would be to have a set of templates and examples that are 
tested regularly against a running OS deployment so that we can make 
sure the combinations still run. I am sure we can agree on a way to do 
this with CICD so that we test the feature set.


I look forward to assisting the community with this.

Regards

Lance



On 15.05.17 16:03, Steven Hardy wrote:

On Mon, May 15, 2017 at 03:21:30AM -0400, Lance Haig wrote:

Good to know that there is interest.

Thanks for starting this effort - I agree it would be great to see the
example templates we provide improved and over time become better
references for heat features (as well as being more well tested).


I was thinking that we should perhaps create a directory for each
openstack version.

I'm personally not keen on this - Heat should handle old HOT versions in a
backwards compatible way, and we can use the template version (which
supports using the release name in recent heat versions) to document the
required version e.g if demonstrating some new resource or function.

FWIW we did already try something similar in the early days of heat, where
we had duplicate wordpress examples for different releases (operating
systems not OpenStack versions but it's the same problem).  We found that
old versions quickly became unmaintained, and ultimately got broken anyway
due to changes unrelated to Heat or OpenStack versions.


so we start say with a mitaka directory and then move the files there and
test them all so that they work with Liberty.
Then we can copy it over to Mitaka and do the same but add the extra
functionality.

While some manual testing each release is better than nothing, honestly I
feel like CI testing some (or ideally all) examples is the only way to
ensure they're not broken.  Clearly that's going to be more work initially,
but it'd be worth considering I think.

To make this simple for template authors, we could perhaps just create the
template with the default parameters, and codify some special place to
define the expected output values (we could for example have a special
expected_output parameter which the CI test consumes and compares after the
stack create completes).


and then Newton etc...
That way if someone is on a specific version they only have to go to a
specific directory to get the examples they need.

As mentioned above, I think just using the template version should be
enough - we could even generate some docs using this to highlight example
templates that are specific to a release?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][gluon]Gluon project plans

2017-05-15 Thread pcarver
This email is to summarize the contents of several impromptu discussions at
the summit last week for those who weren't present. Obviously anything
anybody writes is going to include their own perspective, but hopefully I've
talked about this in enough detail to enough people that I can do a
reasonably fair job of summarizing.

 

It can be argued that Gluon represents an alternative to Neutron, but that's
not the intent and the Gluon team doesn't want that to be the case. My
preferred description is that Gluon is a proof of concept that starts from
several different primary design goals and that the best, most cooperative
path forward is to combine the key Gluon ideas and benefits with the
existing strengths of Neutron.

 

There are three main parts of the Gluon project:

*   Gluon itself - Gluon is a core plugin to Neutron derived as a
subclass of ML2 so that it can provide all ML2 functionality, plus a bit
extra when a VLAN-esque layer 2 broadcast domain is not the correct
semantic, or at least not the cleanest semantic
*   Proton Server - A server similar to what the Neutron server would be if it
only ran service plugins without a core plugin, and where each service plugin
is defined in a domain specific modeling language rather than as Python code
*   Particle Generator - A sort of "compiler" for the YAML DSL in which
"Protons" are written in order to load the API models into the Proton server

 

The direction which we would like to take is to begin incorporating these
elements of Gluon into the Neutron project and complying with all of the
Neutron Stadium guidelines.

 

We will start by positioning the Gluon component itself as an available
choice of Neutron core plugin. I view this a little bit like how Neutron
offers a choice of linuxbridge and OvS drivers. The ML2 and Gluon core
plugins are closely related with Gluon just adding a little extra
functionality. Deployers can continue to use ML2 if they don't care about
the few extra bits that Gluon adds. This is similar to deployers who find
OvS too complicated and want to deploy the simpler linuxbridge to get most,
but not all, of the same functionality. A key goal will be that anything
that works using ML2 as the core plugin should also work when using Gluon as
the core plugin.

 

The second thing we will do is position Particle Generator as a sort of
"Neutron Service Plugin Generator" which will make the Proton Server
unnecessary since the Neutron Server will act as the host for the service
plugins.

 

We'll need to work on improving documentation, but the goal will be to offer
current and future Neutron Service Plugin authors the option of using the
YAML DSL to define new Neutron Service Plugins. This doesn't force any
change, but if the DSL proves easy to use it will offer a possibly faster
way of writing Service Plugins

 

The YAML models which are parsed by the Particle Generator are similar to
the API definitions in neutron-lib[1] but in YAML rather than a dictionary
in Python syntax. My hope is that we will continue to add to Particle
Generator outputs, and in the end state it could produce all of the following
basically as the output of "compiling" a YAML model of an API:

*   Neutron Service Plugin
*   Database model and expand/contract migrations
*   API documentation - i.e. end user readable content for [2]
*   OpenStack CLI extension (and neutronclient extension if worthwhile,
but perhaps not since it's deprecated)
*   Heat resources
*   Stretch Goal: Horizon GUI panels if a standard structure can be
devised to map the model to a GUI layout
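To make that concrete, the "compile" step is conceptually no more than the
toy transformation below - the model layout is invented for illustration and
is not the actual Proton schema, and the output only mimics the general shape
of a neutron-lib attribute map:

    # What a (made-up) API model might look like once yaml.safe_load()ed
    # from its .yaml file:
    EXAMPLE_MODEL = {
        'name': 'baseport',
        'attributes': {
            'id': {'type': 'uuid', 'required': True},
            'name': {'type': 'string', 'required': False},
            'mtu': {'type': 'integer', 'required': False},
        },
    }


    def compile_model(model):
        attr_map = {}
        for attr, spec in model['attributes'].items():
            attr_map[attr] = {
                'allow_post': True,
                'allow_put': not spec.get('required', False),
                'is_visible': True,
                'type': spec['type'],
            }
        # one resource -> one attribute map, shaped much like the
        # api definitions in neutron-lib [1]
        return {model['name'] + 's': attr_map}


    print(compile_model(EXAMPLE_MODEL))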

 

The fourth part of the Gluon project that doesn't quite fit inside of the
Gluon project is the "shim layers" that map APIs onto specific SDN
controllers. Following the Neutron Service Plugin model, these should live
in the various networking-* repos such as networking-odl, networking-ovn,
etc, including the newly created networking-opencontrail.

 

[1]
https://github.com/openstack/neutron-lib/tree/master/neutron_lib/api/definitions

[2] https://developer.openstack.org/api-ref/networking/v2/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Suspected SPAM - Re: [vitrage] about  "is_admin" in ctx

2017-05-15 Thread Weyl, Alexey (Nokia - IL/Kfar Sava)
Hi Wenjuan,

Sorry it took me so long to answer due to the Boston Summit.

After making some more checks in order to make sure, the results are the
following:

1.   The context that we use has 2 properties regarding admin (is_admin, 
is_admin_project).

2.   The is_admin property indicates whether the user that made the
inquiry is the admin or not. So the only way it is True is if the user is admin.

3.   The is_admin_project property I thought would represent the tenant of the
user, but for all of the users and tenants that I have tried, it always
returned True.

4.   Due to that I have decided to use the is_admin property in the context 
to indicate whether the user can see all-tenants or not.

5.   This is not a perfect solution, because users such as
nova/cinder (all the project names) also seem to be able to see the admin tab.
In our case what will happen is that although in the UI we have the admin tab
for those users, the data we will show in the vitrage tab is not that of all
the tenants but only of that specific tenant.
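In code terms the decision boils down to something like this (illustrative
only - the only key relied on is is_admin, as discussed above):

    def all_tenants_allowed(ctx):
        # only a real admin user gets the all-tenants view; every other
        # user is scoped to their own tenant's data
        return bool(ctx.get('is_admin'))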

Alexey

From: Weyl, Alexey (Nokia - IL/Kfar Sava) [mailto:alexey.w...@nokia.com]
Sent: Tuesday, April 25, 2017 3:10 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Suspected SPAM - Re: [openstack-dev] [vitrage] about  "is_admin" in ctx

Hi Wenjuan,

This is a good question, I need to check it a bit more thoroughly.

It’s just that at the moment we are preparing for the Boston Openstack Summit 
and thus it will take me a bit more time to answer that.

Sorry for the delay.

Alexey ☺

From: dong.wenj...@zte.com.cn 
[mailto:dong.wenj...@zte.com.cn]
Sent: Friday, April 21, 2017 11:08 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] about  "is_admin" in ctx


Hi all,

I'm a little confused about the "is_admin" in ctx.

From the
hook (https://github.com/openstack/vitrage/blob/master/vitrage/api/hooks.py#L73),
"is_admin" means the admin user.

But we define the macro as "admin project"
(https://github.com/openstack/vitrage/blob/master/vitrage/api_handler/apis/base.py#L94).
But in my opinion, it should be the admin role. Correct me if I'm wrong :).



BR,

dwj






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Lance Haig

Hi Rico,

Great to meet you at the summit.


The version related templates I think would work as follows.

In each named directory we would have the same set of templates
e.g.
Single instance
Clustered instance
resource group
etc..

And so if you are on Pike you would go to the pike directory and you
will find all the examples we have created for Pike.


I am not sure that I properly understand how we want to take this 
forward as the current structure of the repository is confusing.


Do we just want to update all the examples there with the latest "Pike"
way of creating these resources or do we want to have some backwards 
compatibility?

We will need to agree on that first, I think.

I will definitely join the meeting.

Regards

Lance


On 15.05.17 12:13, Rico Lin wrote:

Hi Lance and all others who shows interest

IMO, after some feedback from the summit, I think it will be great to
have efforts on


  * *bug/blueprint*: We need more people doing fixes/reviews/specs,
since we are still on the way to making heat more handy as an
orchestration tool.
  * *template example*: We do have some new functions but didn't
actually give the examples a proper update for them.
  * *tutorial*: We got some reports about the lack of tutorials for
features like software config / rolling upgrade, so I definitely
think we require some improvement here.
  * *test*: Our integration tests (tempest tests) don't seem to cover
every scenario (we just covered some snapshot tests these few
weeks). Also, we do hope to get more reports on how people use
heat, and what the test results are.

So yes from me, Lance, that will help:)

Also, most of our functions can still be called directly by future versions,
so if we separate things into versions, how can a Pike user find that
example? I like the idea of making all users aware of the template version,
but I'm not sure a version-specific directory will help. Maybe some
version info in the template description will do? We can discuss this
at the meeting (Wednesdays at 1500 UTC in #openstack-meeting-5) :)


2017-05-15 15:21 GMT+08:00 Lance Haig >:


Good to know that there is interest.

I was thinking that we should perhaps create a directory for each
openstack version.

so we start say with a mitaka directory and then move the files
there and test them all so that they work with Liberty.
Then we can copy it over to Mitaka and do the same but add the
extra functionality.
and then Newton etc...

That way if someone is on a specific version they only have to go
to a specific directory to get the examples they need.

What do you think?

Lance


On 14 May 2017 at 23:14, Kaz Shinohara > wrote:

Hi Lance,

I like it too.
We should keep them updated according to the latest spec and
actual use cases.

Regards,
Kaz Shinohara


2017-05-13 13:00 GMT+09:00 Foss Geek >:

Hi Lance, I am also interested to assisting you on this.

Thanks
Mohan

On 11-May-2017 2:25 am, "Lance Haig" > wrote:

Hi,

I would like to introduce myself to the heat team.

My name is Lance Haig I currently work for Mirantis
doing workload onboarding to openstack.

Part of my job is assisting customers with using the
new Openstack cloud they have been given.

I recently gave a talk with a colleague Florin
Stingaciu on LCM with heat at the Boston Summit.

I am interested in assisting the project.

We have noticed that there are some outdated examples
in the heat-examples repository and I am not sure that
they all still function.

I was wondering if it would be valuable for me to take
a look at these and fix them or perhaps we can rethink
how we present the examples.

I am interested in what you guys think.

Thanks

Lance


__
OpenStack Development Mailing List (not for usage
questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe



http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] [heat] [telemetry] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-15 10:01:20 -0400:
> On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
> >> One of the things that came up in a logging Forum session is how much 
> >> effort operators are having to put into reconstructing flows for things 
> >> like server boot when they go wrong, as every time we jump a service 
> >> barrier the request-id is reset to something new. The back and forth 
> >> between Nova / Neutron and Nova / Glance would be definitely well served 
> >> by this. Especially if this is something that's easy to query in elastic 
> >> search.
> >>
> >> The last time this came up, some people were concerned that trusting 
> >> request-id on the wire was concerning to them because it's coming from 
> >> random users. We're going to assume that's still a concern by some. 
> >> However, since the last time that came up, we've introduced the concept 
> >> of "service users", which are a set of higher priv services that we are 
> >> using to wrap user requests between services so that long running 
> >> request chains (like image snapshot) can keep going. We trust these service
> >> users enough to keep on trucking even after the user token has expired for
> >> these long running operations. We could use this same trust path for
> >> request-id chaining.
> >>
> >> So, the basic idea is, services will optionally take an inbound 
> >> X-OpenStack-Request-ID which will be strongly validated to the format 
> >> (req-$uuid). They will continue to always generate one as well. When the 
> > 
> > Do all of our services use that format for request ID? I thought Heat
> > used something added on to a UUID, or at least longer than a UUID?
> 
> Don't know, now is a good time to speak up.
> http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
> seems to indicate that it's using the format everyone else is using.
> 
> Swift does things a bit differently with suffixes, but they aren't using
> the common middleware.
> 
> I've done code look throughs on nova/glance/cinder/neutron/keystone, but
> beyond that folks will need to speak up as to where this might break
> down. At worst failing validation just means you end up in the old
> (current) behavior.
> 
> -Sean
> 

OK. I vaguely remembered something from the early days of ceilometer
trying to collect those notifications, but maybe I'm confusing it with
something else. I've added [heat] to the subject line to get that team's
attention for input.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][neutron][nova][Openstack-operators][interop] Time for a bikeshed - help me name types of networking

2017-05-15 Thread Neil Jerram
On Sun, May 14, 2017 at 6:02 PM Monty Taylor  wrote:

> Are "internal" and "external" ok with folks as terms for those two ideas?
>

Yes, I think so.  Slight worry that 'external' is also used in
'router:external' - but I think it will be clear that your proposed context
is different.


> - External addresses are provided via Fixed IPs
> - External addresses are provided via Floating IPs
> - Internal addresses are provided via Fixed IPs
> - Internal addresses can be provided via Floating IPs
>

FWIW, I don't think I've ever heard of this last one.


> Anybody have a problem with the key name "network-models"?
>

No; sounds good.


> What do we call the general concepts represented by fixed and floating
> ips? Do we use the words "fixed" and "floating"? Do we instead try
> something else, such as "direct" and "nat"?
>
> I have two proposals for the values in our enum:
>
> #1 - using fixed / floating
>
> ipv4-external-fixed
> ipv4-external-floating
> ipv4-internal-fixed
> ipv4-internal-floating
> ipv6-fixed
>
> #2 - using direct / nat
>
> ipv4-external-direct
> ipv4-external-nat
> ipv4-internal-direct
> ipv4-internal-nat
> ipv6-direct
>
> Does anyone have strong feelings one way or the other?
>

Not strong, no.  I feel as though anyone in or close to OpenStack would be
familiar already with the floating and fixed terms - and so why risk the
bother and churn of changing to something else?  But also appreciate that
other clouds do not use those terms.


>
> My personal preference is direct/nat. "floating" has a tendency to imply
> different things to different people (just watch, we're going to have at
> least one rabbit hole that will be an argument about the meaning of
> floating ips) ... while anyone with a background in IT knows what "nat"
> is. It's also a characteristic from a server/workload perspective that
> is related to a choice the user might want to make:
>
>   Does the workload need the server to know its own IP?
>   Does the workload prefer to be behind NAT?
>   Does the workload not care and just wants connectivity?
>
> On the other hand, "direct" isn't exactly a commonly used word in this
> context. I asked a ton of people at the Summit last week and nobody
> could come up with a better term for "IP that is configured inside of
> the server's network stack". "non-natted", "attached", "routed" and
> "normal" were all suggested. I'm not sure any of those are super-great -
> so I'm proposing "direct" - but if you have a better suggestion, please
> make it.
>

Not sure it's better, but "Internet address space" or something else that
conveys the idea that the address given to the VM is in the same address
space (aka scope) as things outside the cluster.

Regards - Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Steven Hardy
On Mon, May 15, 2017 at 03:21:30AM -0400, Lance Haig wrote:
>Good to know that there is interest.

Thanks for starting this effort - I agree it would be great to see the
example templates we provide improved and over time become better
references for heat features (as well as being more well tested).

>I was thinking that we should perhaps create a directory for each
>openstack version.

I'm personally not keen on this - Heat should handle old HOT versions in a
backwards compatible way, and we can use the template version (which
supports using the release name in recent heat versions) to document the
required version e.g if demonstrating some new resource or function.

FWIW we did already try something similar in the early days of heat, where
we had duplicate wordpress examples for different releases (operating
systems not OpenStack versions but it's the same problem).  We found that
old versions quickly became unmaintained, and ultimately got broken anyway
due to changes unrelated to Heat or OpenStack versions.

>so we start say with a mitaka directory and then move the files there and
>test them all so that they work with Liberty.
>Then we can copy it over to Mitaka and do the same but add the extra
>functionality.

While some manual testing each release is better than nothing, honestly I
feel like CI testing some (or ideally all) examples is the only way to
ensure they're not broken.  Clearly that's going to be more work initially,
but it'd be worth considering I think.

To make this simple for template authors, we could perhaps just create the
template with the default parameters, and codify some special place to
define the expected output values (we could for example have a special
expected_output parameter which the CI test consumes and compares after the
stack create completes).
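As a sketch of how a CI job could consume that convention (the
expected_output parameter name is just the idea above, and the CLI output
parsing below may well need tweaking):

    import json
    import subprocess

    import yaml


    def check_example(template_path, stack_name):
        with open(template_path) as f:
            template = yaml.safe_load(f)
        expected = template.get('parameters', {}).get(
            'expected_output', {}).get('default', {})

        # compare against the real outputs of the stack the job created
        raw = subprocess.check_output(
            ['openstack', 'stack', 'show', stack_name,
             '-f', 'json', '-c', 'outputs'])
        outputs = json.loads(raw.decode()).get('outputs') or []
        if isinstance(outputs, str):
            # some client versions render complex columns as JSON text
            outputs = json.loads(outputs)
        actual = {o['output_key']: o['output_value'] for o in outputs}

        # an empty dict means the example behaved as advertised
        return {k: v for k, v in expected.items() if actual.get(k) != v}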

>and then Newton etc...
>That way if someone is on a specific version they only have to go to a
>specific directory to get the examples they need.

As mentioned above, I think just using the template version should be
enough - we could even generate some docs using this to highlight example
templates that are specific to a release?
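
As a sketch of how such docs could be generated (again mine, assuming PyYAML;
the examples tree layout is whatever we end up with), a small script could
walk the repository and group templates by the version they declare:

    import os
    from collections import defaultdict

    import yaml

    def group_by_template_version(examples_dir):
        # Map each declared heat_template_version (a date like '2016-10-14'
        # or a release name like 'newton') to the templates that use it.
        grouped = defaultdict(list)
        for root, _dirs, files in os.walk(examples_dir):
            for name in files:
                if not name.endswith(('.yaml', '.yml')):
                    continue
                path = os.path.join(root, name)
                with open(path) as f:
                    template = yaml.safe_load(f)
                if isinstance(template, dict) and 'heat_template_version' in template:
                    grouped[str(template['heat_template_version'])].append(path)
        return grouped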

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Sean Dague
On 05/15/2017 09:35 AM, Doug Hellmann wrote:
> Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
>> One of the things that came up in a logging Forum session is how much 
>> effort operators are having to put into reconstructing flows for things 
>> like server boot when they go wrong, as every time we jump a service 
>> barrier the request-id is reset to something new. The back and forth 
>> between Nova / Neutron and Nova / Glance would be definitely well served 
>> by this. Especially if this is something that's easy to query in elastic 
>> search.
>>
>> The last time this came up, some people were concerned that trusting 
>> request-id on the wire was concerning to them because it's coming from 
>> random users. We're going to assume that's still a concern by some. 
>> However, since the last time that came up, we've introduced the concept 
>> of "service users", which are a set of higher priv services that we are 
>> using to wrap user requests between services so that long running 
>> request chains (like image snapshot) can complete. We trust these service 
>> users enough to keep on trucking even after the user token has expired for 
>> these long running operations. We could use this same trust path for 
>> request-id chaining.
>>
>> So, the basic idea is, services will optionally take an inbound 
>> X-OpenStack-Request-ID which will be strongly validated to the format 
>> (req-$uuid). They will continue to always generate one as well. When the 
> 
> Do all of our services use that format for request ID? I thought Heat
> used something added on to a UUID, or at least longer than a UUID?

Don't know, now is a good time to speak up.
http://logs.openstack.org/85/464585/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/e1bca9e/logs/screen-h-eng.txt.gz#_2017-05-15_10_08_10_617
seems to indicate that it's using the format everyone else is using.

Swift does things a bit differently with suffixes, but they aren't using
the common middleware.

I've done code look-throughs on nova/glance/cinder/neutron/keystone, but
beyond that folks will need to speak up as to where this might break
down. At worst failing validation just means you end up in the old
(current) behavior.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] Jenkins verification failures with stable release gates

2017-05-15 Thread Hemanth N
Hi

I have written new Tempest functional test cases for the Identity OAUTH API.
https://review.openstack.org/#/c/463240/


This patch has a dependency on a keystone fix that is still under
review, and I have mentioned that patch as a Depends-On for the one above.
(https://review.openstack.org/#/c/464577/)

The gates for the stable releases are failing, but on master it is successful:
gate-tempest-dsvm-neutron-full-ubuntu-xenial-ocata
gate-tempest-dsvm-neutron-full-ubuntu-xenial-newton

I am assuming the stable release gates will cherry-pick the Depends-On
patches and then build/verify the environment.
Is my understanding correct?
If not, how should I proceed in such scenarios?

Thanks in Advance.

Best Regards,
Hemanth

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Consolidating web themes

2017-05-15 Thread Anne Gentle
Hi all,

I wanted to make you all aware of some consolidation efforts I'll be
working on this release. You may have noticed a new logo for OpenStack, and
perhaps you saw the update to the web design and headers on
docs.openstack.org as well.

To continue these efforts, I'll also be working on having all docs pages
use one theme, the openstackdocstheme, that has these latest updates.
Currently we are using version 1.8.0, and I'll do more releases as we
complete the UI consolidation.
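
For project teams wondering what the switch looks like on their side, it is
typically a couple of lines in the project's Sphinx conf.py. The sketch below
is from memory of the 1.x theme, so treat the exact names as assumptions and
check the openstackdocstheme docs before copying:

    # conf.py sketch - switching a project from oslosphinx to openstackdocstheme
    import openstackdocstheme

    extensions = [
        'sphinx.ext.autodoc',
        # 'oslosphinx',   # the old theme extension would be dropped here
    ]

    html_theme = 'openstackdocs'
    html_theme_path = [openstackdocstheme.get_html_theme_path()]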

I did an analysis to compare oslosphinx to openstackdocstheme, and I wanted
to let this group know about the upcoming changes so you can keep an eye
out for reviews. This effort will take a while, and I'd welcome help, of
course.

There are a few UI items that I don't plan to port from oslosphinx to
openstackdocstheme:

- Quick search form in bottom of left-hand navigation bar (though I'd welcome
  a way to unify that UI and UX across the themes).
- Previous topic and Next topic shown in left-hand navigation bar (these are
  available in the openstackdocstheme in a different location).
- Return to project home page link in left-hand navigation bar (also would
  welcome a design that fits well in the openstackdocstheme left-hand nav).
- Customized list of links in header. For example, the page at
  https://docs.openstack.org/infra/system-config/ contains a custom header.
  When a landing page like https://docs.openstack.org/infra/ uses oslosphinx,
  the page should be redesigned with the new theme in mind.

I welcome input on these changes, as I'm sure I haven't caught every
scenario, and this is my first wider communication about the theme changes.
The spec for this work is detailed here:
http://specs.openstack.org/openstack/docs-specs/specs/pike/consolidating-themes.html

Let me know what I've missed, what you cannot live without, and please
reach out if you'd like to help.

Thanks,
Anne

--
Technical Product Manager, Cisco Metacloud
annegen...@justwriteclick.com
@annegentle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-14 07:04:03 -0400:
> One of the things that came up in a logging Forum session is how much 
> effort operators are having to put into reconstructing flows for things 
> like server boot when they go wrong, as every time we jump a service 
> barrier the request-id is reset to something new. The back and forth 
> between Nova / Neutron and Nova / Glance would be definitely well served 
> by this. Especially if this is something that's easy to query in elastic 
> search.
> 
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users. We're going to assume that's still a concern by some. 
> However, since the last time that came up, we've introduced the concept 
> of "service users", which are a set of higher priv services that we are 
> using to wrap user requests between services so that long running 
> request chains (like image snapshot) can complete. We trust these service 
> users enough to keep on trucking even after the user token has expired for 
> these long running operations. We could use this same trust path for 
> request-id chaining.
> 
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 

Do all of our services use that format for request ID? I thought Heat
used something added on to a UUID, or at least longer than a UUID?

Doug

> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, 
> reset the request_id to the local generated one. We'll log both the 
> global and local request ids. All of these changes happen in 
> oslo.middleware, oslo.context, oslo.log, and most projects won't need 
> anything to get this infrastructure.
> 
> The python clients, and callers, will then need to be augmented to pass 
> the request-id in on requests. Servers will effectively decide when they 
> want to opt into calling other services this way.
> 
> This only ends up logging the top line global request id as well as the 
> last leaf for each call. This does mean that full tree construction will 
> take more work if you are bouncing through 3 or more servers, but it's a 
> step which I think can be completed this cycle.
> 
> I've got some more detailed notes, but before going through the process 
> of putting this into an oslo spec I wanted more general feedback on it 
> so that any objections we didn't think about yet can be raised before 
> going through the detailed design.
> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases] Stable branch conflicting information

2017-05-15 Thread Thierry Carrez
Sean McGinnis wrote:
> So I noticed today that the release information [0] for Newton appears to have
> the wrong date for when Newton transitions to the Legacy Phase. According to
> this conversation [1], I think (thought?) we established that rolling over to
> each support phase would stay on a 6 month cycle, despite Ocata being a 
> shorter
> development cycle.
> 
> I am not talking about EOL here, just the transition periods for stable
> branches to move to the next phase.
> 
> Based on this, the Next Phase for Newton appears to be wrong because it is on
> a 6 month period from the Ocata release, not based on Newton's actual release
> date.

You are correct. Phase transitions are based on the initial release
date, not the next ones. Phase III for Newton should start on 2017-10-06.
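
Put differently (just a back-of-the-envelope illustration, not official
tooling, with phase lengths approximated at six months):

    from datetime import date, timedelta

    newton_release = date(2016, 10, 6)   # Newton GA
    phase_length = timedelta(days=183)   # roughly six months

    phase_2_start = newton_release + phase_length      # ~2017-04
    phase_3_start = newton_release + 2 * phase_length  # ~2017-10
    print(phase_2_start, phase_3_start)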

> I was going to put up a patch to fix this, but then got myself really confused
> because I couldn't actually reconcile the dates based on how the rest of the
> phase information is listed there. Going off of what we state in our Stable
> Branch phases [2], we are not following what we have published there.
> 
> Based on that information, Mitaka should still be in the Legacy phase, and
> not actually EOL'd for another 6 months. (Well, technically that actual EOL
> date isn't called out in the documentation, so I'm just assuming another 6
> months)

Actually the duration of stable branch life support is independent of
the definition of the 3 support phases. If you read the end of that
paragraph, it says:

"""The exact length of any given stable branch life support is discussed
amongst stable branch maintainers and QA/infrastructure teams at every
Design Summit. It is generally between 9 and 15 months, at which point
the value of the stable branch is clearly outweighed by the cost in
maintaining it in our continuous integration systems."""

Currently, the stable maint team supports branches for about 12 months.
Depending on when exactly the branch is EOLed, that basically means you
do not do much (if any) phase III support.

Hope this clarifies,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-15 Thread Lance Bragstad
On Sun, May 14, 2017 at 11:59 AM, Monty Taylor  wrote:

> On 05/11/2017 02:32 PM, Lance Bragstad wrote:
>
>> Hey all,
>>
>> One of the Baremetal/VM sessions at the summit focused on what we need
>> to do to make OpenStack more consumable for application developers [0].
>> As a group we recognized the need for application specific passwords or
>> API keys and nearly everyone (above 85% is my best guess) in the session
>> thought it was an important thing to pursue. The API
>> key/application-specific password specification is up for review [1].
>>
>> The problem is that with all the recent churn in the keystone project,
>> we don't really have the capacity to commit to this for the cycle. As a
>> project, we're still working through what we've committed to for Pike
>> before the OSIC fallout. It was suggested that we reach out to the PWG
>> to see if this is something we can get some help on from a keystone
>> development perspective. Let's use this thread to see if there is anyway
>> we can better enable the community through API keys/application-specific
>> passwords by seeing if anyone can contribute resources to this effort.
>>
>
> In the session, I signed up to help get the spec across the finish line.
> I'm also going to do my best to write up something resembling a user story
> so that we're all on the same page about what this is, what it isn't and
> what comes next.
>

Thanks Monty. If you have questions about the current proposal, Ron might
be lingering in IRC (rderose). David (dstanek) was also documenting his
perspective in another spec [0].


[0] https://review.openstack.org/#/c/440593/


>
> I probably will not have the time to actually implement the code - but if
> the PWG can help us get resources allocated to this I'll be happy to help
> them.
>
> [0] https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal
>> 
>> [1] https://review.openstack.org/#/c/450415/
>> 
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Sean Dague
On 05/15/2017 08:16 AM, Lance Bragstad wrote:
> 
> 
> On Mon, May 15, 2017 at 6:20 AM, Sean Dague  wrote:
> 
> On 05/15/2017 05:59 AM, Andrey Volkov wrote:
> >
> >> The last time this came up, some people were concerned that trusting
> >> request-id on the wire was concerning to them because it's coming from
> >> random users.
> >
> > TBH I don't see the reason why a validated request-id value can't be
> > logged on a callee service side, probably because I missed some previous
> > context. Could you please give an example of such concerns?
> >
> > With service user I see two blocks:
> > - A callee service needs to know if it's "special" user or not.
> > - Until all services don't use a service user we'll not get the 
> complete trace.
> 
> That is doable, but then you need to build special tools to generate
> even basic flows. It means that the Elastic Search use case (where
> plopping in a request id shows you things across services) does not
> work. Because the child flows don't have the new id.
> 
> It's also fine to *also* cross log the child/callee request id on the
> parent/caller, but it's not actually going to be sufficiently useful to
> most people.
> 
> 
> +1
> 
> To me it makes sense to supply the override so that a single request-id
> can track multiple operations across services. But I'm struggling to
> find a case where passing a list(global_request_id, local_request_id) is
> useful. This might be something we can elaborate on later, if we find a
> use case for including multiple request-ids.

I'm not sure I understand the question... so perhaps some examples

The theory is, say you kick off a Nova server build, you'll see
something like:

2017 May 15 nova-api [req-0001---4
req-0001---4 my_project my_user]
2017 May 15 nova-compute [req-0001---4
req-0001---4 my_project my_user]

Then when calling into glance for image download nova would pass in
X-OpenStack-Request-ID: req-0001---4, so that in the
glance logs you'd see:

2017 May 15 glance-api [req-0001---4
req-aef2---7 my_project my_user]

The second id is locally generated during the inbound request. If no
global id is sent (or we decide later that the caller was not
sufficiently trusted), the global id will be set to the local id.
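
Mechanically, the middleware check would boil down to something like this
sketch (mine, not actual oslo.middleware code; how "the caller is a service
user" gets decided is hand-waved here):

    import re
    import uuid

    # "req-" followed by a canonical UUID, per the proposed strong validation.
    _REQ_ID = re.compile(
        r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}'
        r'-[0-9a-f]{4}-[0-9a-f]{12}$')

    def resolve_request_ids(inbound_global_id, caller_is_service_user):
        """Return (global_request_id, local_request_id) for a request."""
        local_id = 'req-' + str(uuid.uuid4())
        if (inbound_global_id
                and caller_is_service_user
                and _REQ_ID.match(inbound_global_id)):
            # Trusted, well-formed id from the caller: keep it as the global id.
            return inbound_global_id, local_id
        # Untrusted or malformed: fall back to today's behaviour.
        return local_id, local_id

Both ids would then be emitted by oslo.log, which is what produces the
glance-api line in the example above.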

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Lance Bragstad
On Mon, May 15, 2017 at 6:20 AM, Sean Dague  wrote:

> On 05/15/2017 05:59 AM, Andrey Volkov wrote:
> >
> >> The last time this came up, some people were concerned that trusting
> >> request-id on the wire was concerning to them because it's coming from
> >> random users.
> >
> > TBH I don't see the reason why a validated request-id value can't be
> > logged on a callee service side, probably because I missed some previous
> > context. Could you please give an example of such concerns?
> >
> > With service user I see two blocks:
> > - A callee service needs to know if it's "special" user or not.
> > - Until all services don't use a service user we'll not get the complete
> trace.
>
> That is doable, but then you need to build special tools to generate
> even basic flows. It means that the Elastic Search use case (where
> plopping in a request id shows you things across services) does not
> work. Because the child flows don't have the new id.
>
> It's also fine to *also* cross log the child/callee request id on the
> parent/caller, but it's not actually going to be sufficiently useful to
> most people.
>

+1

To me it makes sense to supply the override so that a single request-id can
track multiple operations across services. But I'm struggling to find a
case where passing a list(global_request_id, local_request_id) is useful.
This might be something we can elaborate on later, if we find a use case
for including multiple request-ids.


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Robert Putt
Hi,

I am very interested in FaaS coming to OpenStack; it totally makes sense to 
have it as part of the platform. I have looked at the current OpenStack Picasso 
project, although I too am concerned about vendor lock-in to IronFunctions with 
the Picasso project. I think it is also important not to alienate the current 
OpenStack community by introducing non-standard components or languages; I am 
hopeful the new releases of Python can provide reasonable performance so it is 
possible to keep Python as the primary language for a FaaS project. My main 
concern is that we seem to have several FaaS projects with different approaches 
rather than having us all work on one superior FaaS solution. Is there a way we 
can win over the Picasso project team to be more understanding of the vendor 
lock-in and language concerns?

For me the important things are:

a) Sandboxed code in some container solution.

b) Pluggable backends for said sandbox to remove vendor lock-in.

c) Pluggable storage for function packages, the default probably being
Swift.

d) Integration with Keystone for auth and role-based access control, e.g.
sharing functions with other tenants but maybe with different permissions,
e.g. a dev tenant in a domain can publish functions but a prod tenant can
only execute them.

e) Integration with Neutron so functions can access tenant networks.

f) A web services gateway to create RESTful APIs and map URIs / verbs /
API requests to functions.

g) It would also be nice to have some metadata service like what we see in
Nova, so functions can have an auto-injected context relating to the tenant
running them rather than having to inject all parameters via the API (a
rough sketch of what that might look like follows below).
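
To illustrate (g), a function entry point with an auto-injected context might
look roughly like this; it is purely hypothetical and not the signature of any
existing FaaS project:

    def handler(context, **params):
        # 'context' would be injected by the FaaS runtime rather than supplied
        # by the caller: project/tenant, a trust-scoped token, request id, etc.
        return {
            'project': context['project_id'],
            'invoked_by': context['user_id'],
            'echo': params,
        }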

Just some thoughts. If you’d like these converted into basic blueprints for the 
project let me know, I know some of them may seem like very stretched goals at 
the moment but I am sure their time will come.

Best Regards,

Rob


From: Lingxian Kong 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 10:56 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [FaaS] Introduce a FaaS project

On Mon, May 15, 2017 at 8:32 PM, Sam P  wrote:
Hi Larry,
 Thank you for the details.
 I am interested and like the idea of no vendor/platform lock-in.

 However,  I still have this stupid question in me.
 Why FaaS need to be in the OpenStack ecosystem? Can it survive
outside and still be able to integrate with OpenStack?

In OpenStack ecosystem, I mean put this project under OpenStack umbrella so 
that it could leverage OpenStack facilities, and integrating with other 
OpenStack services means it is an option to be deployed together with them and 
be triggered by event/notification from them.

 This FaaS must able to well integrated with OpenStack ecosystem and
no argument there.

>>IMHO, none of them can be well integrated with OpenStack ecosystem.
Can you share more details on this?  If you have done any survey on
this,  please share.
Crating FaaS with pure OpenStack means, we need to create something
similar to OpenWhisk or IronFunctions with existing or new OpenStack
components.
I just want to make sure it is worth it to recreate the wheels.

Yeah, you are right, as I said at the beginning, I'm sort of recreating the 
wheels. I hope the new project can be easily installed together with other 
OpenStack projects using similar methodology, it can provide a beautiful 
RESTful API to end users, it's easy for OpenStack developers to understand and 
maintain. I don't think it is that easy if we go with OpenWhisk or 
IronFunctions. Actually, in container world, there are already a lot of 
projects doing the same thing. But again, I'm OpenStack developer, we are 
running an OpenStack based public cloud, I don't want to mess things up to 
introduce things which will probably introduce other things.



Jsut for the info, I think this [0] is your previous ML thread...
[0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116472.html


Thanks to find it out :)



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.

[openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-15 Thread Sean Dague
We had a forum session in Boston on Postgresql and out of that agreed to
the following steps forward:

1. explicitly warn in operator-facing documentation that Postgresql is
less supported than MySQL. This was deemed better than just removing
documentation, because when people see Postgresql files in tree they'll
make assumptions (at least one set of operators did).

2. Suse is in the process of investigating migration from PG to Galera for
future versions of their OpenStack product. They'll make their findings
and tooling open to help determine how burdensome this kind of
transition would be for folks.

After those findings, we can come back with any next steps (or just
leave it as good enough there).

The TC governance patch is updated here -
https://review.openstack.org/#/c/427880/ - or if there are other
discussion questions feel free to respond to this thread.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Sean Dague
On 05/15/2017 05:59 AM, Andrey Volkov wrote:
> 
>> The last time this came up, some people were concerned that trusting 
>> request-id on the wire was concerning to them because it's coming from 
>> random users.
> 
> TBH I don't see the reason why a validated request-id value can't be
> logged on a callee service side, probably because I missed some previous
> context. Could you please give an example of such concerns?
> 
> With service user I see two blocks:
> - A callee service needs to know if it's "special" user or not.
> - Until all services don't use a service user we'll not get the complete 
> trace.

That is doable, but then you need to build special tools to generate
even basic flows. It means that the Elastic Search use case (where
plopping in a request id shows you things across services) does not
work. Because the child flows don't have the new id.

It's also fine to *also* cross log the child/callee request id on the
parent/caller, but it's not actually going to be sufficiently useful to
most people.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Rico Lin
Hi Lance and all others who shows interest

IMO, after some feedback from the summit, I think it will be great to have
efforts on:


   - *bug/blueprint*: We need more people doing fixes/reviews/specs, since we
   are still on the way to making heat more handy as an orchestration tool.
   - *template example*: We do have some new functions but didn't
   actually give a proper update of the examples for them.
   - *tutorial*: We got some reports about the lack of tutorials for
   features like software config / rolling upgrade, so I definitely think
   we require some improvement here.
   - *test*: Our integration test (tempest test) doesn't seem to cover
   every scenario (we just covered some snapshot tests these few weeks).
   Also, we do hope to get more reports on how people use heat, and what the
   test results are.

So yes from me, Lance, that will help:)

Also, most of our functions can still be called directly by future versions, so if
we separate the examples into versions, how can a Pike user find that example? I like
the idea of making all users aware of the template version, but I'm not sure a
version-specific directory will help. Maybe version info in the template
description will do? We can discuss this at the meeting (Wednesdays at
1500 UTC in #openstack-meeting-5) :)

2017-05-15 15:21 GMT+08:00 Lance Haig :

> Good to know that there is interest.
>
> I was thinking that we should perhaps create a directory for each
> openstack version.
>
> so we start say with a mitaka directory and then move the files there and
> test them all so that they work with Liberty.
> Then we can copy it over to Mitaka and do the same but add the extra
> functionality.
> and then Newton etc...
>
> That way if someone is on a specific version they only have to go to a
> specific directory to get the examples they need.
>
> What do you think?
>
> Lance
>
>
> On 14 May 2017 at 23:14, Kaz Shinohara  wrote:
>
>> Hi Lance,
>>
>> I like it too.
>> We should keep them updated according to the latest spec and actual use
>> cases.
>>
>> Regards,
>> Kaz Shinohara
>>
>>
>> 2017-05-13 13:00 GMT+09:00 Foss Geek :
>>
>>> Hi Lance, I am also interested to assisting you on this.
>>>
>>> Thanks
>>> Mohan
>>> On 11-May-2017 2:25 am, "Lance Haig"  wrote:
>>>
 Hi,

 I would like to introduce myself to the heat team.

 My name is Lance Haig I currently work for Mirantis doing workload
 onboarding to openstack.

 Part of my job is assisting customers with using the new Openstack
 cloud they have been given.

 I recently gave a talk with a colleague Florin Stingaciu on LCM with
 heat at the Boston Summit.

 I am interested in assisting the project.

 We have noticed that there are some outdated examples in the
 heat-examples repository and I am not sure that they all still function.

 I was wondering if it would be valuable for me to take a look at these
 and fix them or perhaps we can rethink how we present the examples.

 I am interested in what you guys think.

 Thanks

 Lance

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-15 Thread Andrey Volkov

> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users.

TBH I don't see the reason why a validated request-id value can't be
logged on a callee service side, probably because I missed some previous
context. Could you please give an example of such concerns?

With service user I see two blocks:
- A callee service needs to know if it's "special" user or not.
- Until all services don't use a service user we'll not get the complete trace.

Sean Dague writes:

> One of the things that came up in a logging Forum session is how much 
> effort operators are having to put into reconstructing flows for things 
> like server boot when they go wrong, as every time we jump a service 
> barrier the request-id is reset to something new. The back and forth 
> between Nova / Neutron and Nova / Glance would be definitely well served 
> by this. Especially if this is something that's easy to query in elastic 
> search.
>
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from 
> random users. We're going to assume that's still a concern by some. 
> However, since the last time that came up, we've introduced the concept 
> of "service users", which are a set of higher priv services that we are 
> using to wrap user requests between services so that long running 
> request chains (like image snapshot) can complete. We trust these service 
> users enough to keep on trucking even after the user token has expired for 
> these long running operations. We could use this same trust path for 
> request-id chaining.
>
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, 
> reset the request_id to the local generated one. We'll log both the 
> global and local request ids. All of these changes happen in 
> oslo.middleware, oslo.context, oslo.log, and most projects won't need 
> anything to get this infrastructure.
>
> The python clients, and callers, will then need to be augmented to pass 
> the request-id in on requests. Servers will effectively decide when they 
> want to opt into calling other services this way.
>
> This only ends up logging the top line global request id as well as the 
> last leaf for each call. This does mean that full tree construction will 
> take more work if you are bouncing through 3 or more servers, but it's a 
> step which I think can be completed this cycle.
>
> I've got some more detailed notes, but before going through the process 
> of putting this into an oslo spec I wanted more general feedback on it 
> so that any objections we didn't think about yet can be raised before 
> going through the detailed design.
>
>   -Sean

-- 
Thanks,

Andrey Volkov,
Software Engineer, Mirantis, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Lingxian Kong
On Mon, May 15, 2017 at 8:32 PM, Sam P  wrote:

> Hi Larry,
>  Thank you for the details.
>  I am interested and like the idea of no vendor/platform lock-in.
>
>  However,  I still have this stupid question in me.
>  Why FaaS need to be in the OpenStack ecosystem? Can it survive
> outside and still be able to integrate with OpenStack?
>

By "in the OpenStack ecosystem" I mean putting this project under the OpenStack
umbrella so that it can leverage OpenStack facilities, and integrating with other
OpenStack services means it is an option to be deployed together with them
and be triggered by events/notifications from them.


>  This FaaS must able to well integrated with OpenStack ecosystem and
> no argument there.
>
> >>IMHO, none of them can be well integrated with OpenStack ecosystem.
> Can you share more details on this?  If you have done any survey on
> this,  please share.
> Crating FaaS with pure OpenStack means, we need to create something
> similar to OpenWhisk or IronFunctions with existing or new OpenStack
> components.
> I just want to make sure it is worth it to recreate the wheels.
>

Yeah, you are right, as I said at the beginning, I'm sort of recreating the
wheels. I hope the new project can be easily installed together with other
OpenStack projects using a similar methodology, provide a beautiful
RESTful API to end users, and be easy for OpenStack developers to understand
and maintain. I don't think that is as easy if we go with OpenWhisk or
IronFunctions. Actually, in the container world, there are already a lot of
projects doing the same thing. But again, I'm an OpenStack developer, we are
running an OpenStack-based public cloud, and I don't want to mess things up by
introducing things which will probably introduce other things.


>
>
> Jsut for the info, I think this [0] is your previous ML thread...
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-
> May/116472.html
>
>
Thanks to find it out :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Lingxian Kong
On Mon, May 15, 2017 at 5:59 PM, Li Ma  wrote:

> Have you submitted a proposal to create this project under the
> OpenStack umbrella?
>

Yeah, as the first step: https://review.openstack.org/#/c/463953/


Cheers,
Lingxian Kong (Larry)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Sam P
Hi Larry,
 Thank you for the details.
 I am interested and like the idea of no vendor/platform lock-in.

 However, I still have this stupid question in me:
 why does FaaS need to be in the OpenStack ecosystem? Can it survive
outside and still be able to integrate with OpenStack?
 This FaaS must be able to integrate well with the OpenStack ecosystem;
no argument there.

>>IMHO, none of them can be well integrated with OpenStack ecosystem.
Can you share more details on this?  If you have done any survey on
this,  please share.
Creating a FaaS with pure OpenStack means we need to create something
similar to OpenWhisk or IronFunctions with existing or new OpenStack
components.
I just want to make sure it is worth it to recreate the wheels.


Jsut for the info, I think this [0] is your previous ML thread...
[0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116472.html
--- Regards,
Sampath



On Mon, May 15, 2017 at 2:59 PM, Li Ma  wrote:
> That's interesting. Serverless is a general computing engine that can
> brings lots of possibility of how to make use of resource managed by
> OpenStack. I'd like to see a purely OpenStack-powered solution there.
>
> Have you submitted a proposal to create this project under the
> OpenStack umbrella?
>
> On Mon, May 15, 2017 at 9:36 AM, Lingxian Kong  wrote:
>> Yes, I am recreating the wheels :-)
>>
>> I am sending this email not intend to say Qinling[1] project is a better
>> option than others as a project of function as a service, I just provide
>> another
>> possibility for developers/operators already in OpenStack world, and try my
>> luck to seek people who have the same interest in serverless area and
>> cooperate
>> together to make it more and more mature if possible, because I see
>> serverless
>> becomes more and more popular in current agile IT world but I don't see
>> there
>> is a good candidate in OpenStack ecosystem.
>>
>> I remember I asked the question that if we have a FaaS available project in
>> OpenStack, what I got are something like: Picasso[2], OpenWhisk[3], etc, but
>> IMHO, none of them can be well integrated with OpenStack ecosystem. I don't
>> mean they are not good, on the contrary, they are good, especially OpenWhisk
>> which is already deployed and available in IBM Bluemix production. Picasso
>> is
>> only a very thin proxy layer to IronFunctions which is an open source
>> project
>> comes from Iron.io company who also has a commercial FaaS product.
>>
>> However, there are several reasons that made me create a new project:
>>
>> - Maybe not many OpenStack operators/developers want to touch a project
>>   written in another programming language besides Python (and maybe Go? not
>> sure
>>   the result of TC resolution). The deployment/dependency management/code
>>   maintenance will bring much more overhead.
>>
>> - I'd like to see a project which is using the similar
>>   components/infrastructure as most of the other OpenStack projects, e.g.
>>   keystone authentication, message queue(in order to receive notification
>> from
>>   Panko then trigger functions), database, oslo library, swift(for code
>>   package storage), etc. Of course, I could directly contribute and modify
>>   some existing project(e.g. Picasso) to satisfy these conditions, but I am
>>   afraid the time and effort it could take is exactly the same as if I
>> create
>>   a new one.
>>
>> - I'd like to see a project with no vendor/platform lock-in. Most of the
>> FaaS
>>   projects are based on one specific container orchestration platform or
>> want
>>   to promote usage of its own commercial product. For me, it's always a good
>>   thing to have more technical options when evaluating a new service.
>>
>> Qinling project is still at the very very early stage. I created it one
>> month ago
>> and work on it only in my spare time. But it works, you can see a basic
>> usage
>> introduction in README.rst and give it a try. A lot of things are still
>> missing, CLI, UT, devstack plugin, UI, etc.
>>
>> Of course, you can ignore me (still appreciate you read here) if you think
>> it's really not necessary and stupid to create such a project in OpenStack,
>> or you can join me to discuss what we could do to improve it gradually and
>> provide a better option for a real function as a service to people in
>> OpenStack world.
>>
>> [1]: https://github.com/LingxianKong/qinling
>> [2]: https://github.com/openstack/picasso
>> [3]: https://github.com/openwhisk/openwhisk
>>
>> Cheers,
>> Lingxian Kong (Larry)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
>
> __
> 

Re: [openstack-dev] [Heat] Heat template example repository

2017-05-15 Thread Lance Haig
Good to know that there is interest.

I was thinking that we should perhaps create a directory for each openstack
version.

so we start say with a mitaka directory and then move the files there and
test them all so that they work with Liberty.
Then we can copy it over to Mitaka and do the same but add the extra
functionality.
and then Newton etc...

That way if someone is on a specific version they only have to go to a
specific directory to get the examples they need.

What do you think?

Lance


On 14 May 2017 at 23:14, Kaz Shinohara  wrote:

> Hi Lance,
>
> I like it too.
> We should keep them updated according to the latest spec and actual use
> cases.
>
> Regards,
> Kaz Shinohara
>
>
> 2017-05-13 13:00 GMT+09:00 Foss Geek :
>
>> Hi Lance, I am also interested to assisting you on this.
>>
>> Thanks
>> Mohan
>> On 11-May-2017 2:25 am, "Lance Haig"  wrote:
>>
>>> Hi,
>>>
>>> I would like to introduce myself to the heat team.
>>>
>>> My name is Lance Haig I currently work for Mirantis doing workload
>>> onboarding to openstack.
>>>
>>> Part of my job is assisting customers with using the new Openstack cloud
>>> they have been given.
>>>
>>> I recently gave a talk with a colleague Florin Stingaciu on LCM with
>>> heat at the Boston Summit.
>>>
>>> I am interested in assisting the project.
>>>
>>> We have noticed that there are some outdated examples in the
>>> heat-examples repository and I am not sure that they all still function.
>>>
>>> I was wondering if it would be valuable for me to take a look at these
>>> and fix them or perhaps we can rethink how we present the examples.
>>>
>>> I am interested in what you guys think.
>>>
>>> Thanks
>>>
>>> Lance
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ui] another i18n proposal for heat templates 'description' help strings

2017-05-15 Thread Peng Wu
On Wed, 2017-05-10 at 14:05 +0200, Jiri Tomasek wrote:
> I am probably a bit late to the discussion, but I think we're
> missing 
> quite important thing and that is the fact that TripleO UI is
> supposed 
> to use various plans (template sets) not strictly tripleo-heat-
> templates 
> repository contents. Tripleo-heat-templates repository is just a
> default 
> plan, but user can provide own changed files to the plan. Or create
> new 
> plan which is very different from what default tripleo-heat-
> templates 
> repository holds.
> 
> Also I am quite scared of keeping the GUI-specific file in sync with 
> tripleo-heat-templates contents.
> 
> IMHO a proper solution is introducing translations as part of 
> tripleo-heat-templates repository - template files hold the keys and 
> translations are held in a separate files in THT.
> 
> -- Jirka


Right, I will consider loading the translations directly from the
tripleo-heat-templates repository.

Maybe the generated javascript file can still be kept in THT,
and translated in THT.

Maybe we can give the translation files some hard-coded file names,
or list them in some config file, then load them dynamically in tripleo-ui.
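
As a rough sketch of the "keys live in the templates, translations live
alongside them in THT" idea (every name and file layout below is made up
purely for illustration, and it assumes PyYAML):

    import json
    import os

    import yaml

    def extract_description_keys(tht_dir):
        # Collect every top-level and per-parameter 'description' string from
        # the heat templates; these become the message keys to translate.
        messages = {}
        for root, _dirs, files in os.walk(tht_dir):
            for name in files:
                if not name.endswith('.yaml'):
                    continue
                with open(os.path.join(root, name)) as f:
                    template = yaml.safe_load(f) or {}
                if not isinstance(template, dict):
                    continue
                if 'description' in template:
                    messages[template['description']] = ''
                for param in (template.get('parameters') or {}).values():
                    if isinstance(param, dict) and 'description' in param:
                        messages[param['description']] = ''
        return messages

    # The untranslated catalog could then live next to the templates, e.g.
    # i18n/en.json, and tripleo-ui would fetch the file for its locale.
    catalog = extract_description_keys('tripleo-heat-templates')
    with open('en.json', 'w') as f:
        json.dump(catalog, f, indent=2, sort_keys=True)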

Regards,
  Peng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Li Ma
That's interesting. Serverless is a general computing engine that can
bring lots of possibilities for how to make use of resources managed by
OpenStack. I'd like to see a purely OpenStack-powered solution there.

Have you submitted a proposal to create this project under the
OpenStack umbrella?

On Mon, May 15, 2017 at 9:36 AM, Lingxian Kong  wrote:
> Yes, I am recreating the wheels :-)
>
> I am sending this email not intend to say Qinling[1] project is a better
> option than others as a project of function as a service, I just provide
> another
> possibility for developers/operators already in OpenStack world, and try my
> luck to seek people who have the same interest in serverless area and
> cooperate
> together to make it more and more mature if possible, because I see
> serverless
> becomes more and more popular in current agile IT world but I don't see
> there
> is a good candidate in OpenStack ecosystem.
>
> I remember I asked the question that if we have a FaaS available project in
> OpenStack, what I got are something like: Picasso[2], OpenWhisk[3], etc, but
> IMHO, none of them can be well integrated with OpenStack ecosystem. I don't
> mean they are not good, on the contrary, they are good, especially OpenWhisk
> which is already deployed and available in IBM Bluemix production. Picasso
> is
> only a very thin proxy layer to IronFunctions which is an open source
> project
> comes from Iron.io company who also has a commercial FaaS product.
>
> However, there are several reasons that made me create a new project:
>
> - Maybe not many OpenStack operators/developers want to touch a project
>   written in another programming language besides Python (and maybe Go? not
> sure
>   the result of TC resolution). The deployment/dependency management/code
>   maintenance will bring much more overhead.
>
> - I'd like to see a project which is using the similar
>   components/infrastructure as most of the other OpenStack projects, e.g.
>   keystone authentication, message queue(in order to receive notification
> from
>   Panko then trigger functions), database, oslo library, swift(for code
>   package storage), etc. Of course, I could directly contribute and modify
>   some existing project(e.g. Picasso) to satisfy these conditions, but I am
>   afraid the time and effort it could take is exactly the same as if I
> create
>   a new one.
>
> - I'd like to see a project with no vendor/platform lock-in. Most of the
> FaaS
>   projects are based on one specific container orchestration platform or
> want
>   to promote usage of its own commercial product. For me, it's always a good
>   thing to have more technical options when evaluating a new service.
>
> Qinling project is still at the very very early stage. I created it one
> month ago
> and work on it only in my spare time. But it works, you can see a basic
> usage
> introduction in README.rst and give it a try. A lot of things are still
> missing, CLI, UT, devstack plugin, UI, etc.
>
> Of course, you can ignore me (still appreciate you read here) if you think
> it's really not necessary and stupid to create such a project in OpenStack,
> or you can join me to discuss what we could do to improve it gradually and
> provide a better option for a real function as a service to people in
> OpenStack world.
>
> [1]: https://github.com/LingxianKong/qinling
> [2]: https://github.com/openstack/picasso
> [3]: https://github.com/openwhisk/openwhisk
>
> Cheers,
> Lingxian Kong (Larry)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev