[openstack-dev] [daisycloud-core] Agenda for IRC meeting 0800UTC Mar. 17 2017

2017-03-16 Thread hu.zhijiang
1) Roll Call

2) Cinder/Ceph support

3) Daisy as Kolla image version manager

4) OPNFV: VM/BM Deployment & Functest Integration

5) OPNFV: Escalator Support

6) AoB

B. R.,

Zhijiang
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread Lance Bragstad
On Thu, Mar 16, 2017 at 4:31 PM, John Dickinson  wrote:

>
>
> On 16 Mar 2017, at 14:10, Lance Bragstad wrote:
>
> Hey folks,
>
> The reseller use case [0] has been popping up frequently in various
> discussions [1], including unified limits.
>
> For those who are unfamiliar with the reseller concept, it came out of
> early discussions regarding hierarchical multi-tenancy (HMT). It
> essentially allows a certain level of opaqueness within project trees. This
> opaqueness would make it easier for providers to "resell" infrastructure,
> without having customers/providers see all the way up and down the project
> tree, hence it was termed reseller. Keystone originally had some ideas of
> how to implement this after the HMT implementation laid the ground work,
> but it was never finished.
>
> With it popping back up in conversations, I'm looking for folks who are
> willing to represent the idea. Participating in this thread doesn't mean
> you're on the hook for implementing it or anything like that.
>
> Are you interested in reseller and willing to provide use-cases?
>
>
>
> [0] http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/mitaka/reseller.html#problem-description
>
>
> This is interesting to me. It sounds very similar to the reseller concept
> that Swift has. In Swift, the reseller is used to group accounts. Remember
> that an account in Swift is like a bank account. It's where you put stuff,
> and is mapped to one or more users via an auth system. So a Swift account
> is scoped to a particular reseller, and an auth system is responsible for
> one or more resellers.
>
> You can see this in practice with the "reseller prefix" that's used in
> Swift's URLs. The default is "AUTH_", so my account might be "AUTH_john".
> But it's totally possible that there could be another auth system assigned
> to a different reseller prefix. If that other reseller prefix is "BAUTH_",
> then there could exist a totally independent "BAUTH_john" account. The only
> thing that ties some user creds (or token) to a particular account in Swift
> is the auth system.
>
> So this reseller concept in Swift allows deployers to have more than one
> auth system installed in the same cluster at the same time. And, in fact,
> this is exactly why it was first used. If you get an account with Rackspace
> Cloud Files, you'll see the reseller prefix is "MossoCloudFS_", but it
> turns out that when Swift was created JungleDisk was an internal Rackspace
> product and also stored a bunch of data in the same system. JungleDisk
> managed its own users and auth system, so they had a different reseller
> prefix that was tied to a different auth system.
>
> From the Keystone spec, it seems that the reseller idea is a way to group
> domains, very much like the reseller concept in Swift. I'd suggest that
> instead of building ever-increasing hierarchies of groups of users,
> supporting more than one auth system at a time is a proven way to scale out
> this solution. So instead of adding the complexity to Keystone of
> ever-deepening groupings, support having more than one Keystone instance
> (or even Keystone + another auth system) in a project's pipeline. This
> allows grouping users into distinct partitions, and it scales by adding
> more keystones instead of bigger keystones.
>
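The reseller-prefix scheme John describes might be sketched roughly like this (a toy illustration; the registry and helper names are hypothetical, not Swift's actual code):

```python
# Toy sketch of how reseller prefixes partition Swift accounts.
# The AUTH_SYSTEMS registry and helper names are illustrative only.

# Each auth system owns one reseller prefix; the same user name under
# different prefixes maps to completely independent accounts.
AUTH_SYSTEMS = {
    "AUTH_": "keystone",             # the default prefix
    "BAUTH_": "second-auth-system",  # an independent auth system
    "MossoCloudFS_": "legacy-auth",  # e.g. Rackspace Cloud Files
}

def account_for(prefix: str, user: str) -> str:
    """Build the storage account name for a user under a given prefix."""
    if prefix not in AUTH_SYSTEMS:
        raise ValueError("no auth system registered for prefix %r" % prefix)
    return prefix + user

def storage_url(account: str) -> str:
    """Swift URLs embed the account, e.g. /v1/AUTH_john/container/obj."""
    return "/v1/" + account

# The same user name yields distinct, unrelated accounts per auth system:
print(storage_url(account_for("AUTH_", "john")))   # /v1/AUTH_john
print(storage_url(account_for("BAUTH_", "john")))  # /v1/BAUTH_john
```

The point of the sketch is that "john" under one prefix shares nothing with "john" under another; only the auth system that owns a prefix can map credentials into its accounts.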

This is super interesting; I've never thought about it this way before. In
your example, was there information that both auth systems needed to share?
If so, do you remember how that was done?

So, if I think about this conceptually...

Is it safe to assume that both identity systems have to share a subset of
information (e.g. regions, services, endpoints, roles, etc.) or else
duplicate data? Phrasing it as a question actually reminds me of a time
when folks came to keystone asking if they could isolate the storage of
users and groups from the rest of the deployment (i.e. the identity table
of the keystone database would be region-specific and the rest of the
database would be shared globally). Instead of performing that isolation,
we've treated it as more of a need to improve federated use cases (because
why couldn't that be solved with federation?).

So, building on your separate identity systems idea. What if we had a way
to redirect authentication requests to an identity provider based on the
domain being operated in? The other identity system could be keystone or
something else entirely, as long as it serves up SAML assertions. If you wanted
to provide a reseller environment for a customer you could do so by setting
up a keystone-to-keystone federated environment, making the reseller's
keystone the identity provider for that domain. I guess the real big hurdle
now is what to do with projects, because the whole idea of reseller is to
have the ability to create and manage my own subtrees. In today's world,
projects and domains are left up to the service provider to manage.

Feel free to reel me back in if I've gone off the deep end!
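The redirect-by-domain idea could look something like this (a loose sketch only; the registry, endpoints, and function are hypothetical, not Keystone APIs):

```python
# Illustrative sketch: pick an identity provider per domain, so a
# reseller's domain is served by the reseller's own keystone (K2K) or by
# any other SAML IdP. None of these names are real Keystone interfaces.
IDP_BY_DOMAIN = {
    "reseller-a": "https://keystone.reseller-a.example/v3",  # K2K keystone IdP
    "reseller-b": "https://idp.reseller-b.example/saml2",    # non-keystone SAML IdP
}
DEFAULT_IDP = "https://keystone.provider.example/v3"  # provider's own keystone

def auth_endpoint_for(domain: str) -> str:
    """Route an authentication request to the IdP serving this domain."""
    return IDP_BY_DOMAIN.get(domain, DEFAULT_IDP)

print(auth_endpoint_for("reseller-a"))  # https://keystone.reseller-a.example/v3
print(auth_endpoint_for("unknown"))     # https://keystone.provider.example/v3
```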


>

Re: [openstack-dev] [tripleo] CI Squad Meeting Summary (week 11)

2017-03-16 Thread Emilien Macchi
On Thu, Mar 16, 2017 at 2:21 PM, Attila Darazs  wrote:
> If the topics below interest you and you want to contribute to the
> discussion, feel free to join the next meeting:
>
> Time: Thursdays, 14:30-15:30 UTC (WARNING: time changed due to DST)
> Place: https://bluejeans.com/4113567798/
>
> Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
>
> The last week was very significant in the CI Squad's life: we migrated the
> first set of TripleO gating jobs to Quickstart[1] and it went more or less
> smoothly. There were a few failed gate jobs, but we quickly patched up the
> problematic parts.

Highlight: https://bugs.launchpad.net/tripleo/+bug/1673585
Please hold everything else off until this one is fixed.
It's currently blocking all backports to stable/ocata (and
newton/mitaka of course), therefore downstream releases are blocked.

> For the "phase2" of the transition we're going to concentrate on three
> areas:
>
> 1) usability improvements, to make the logs from the jobs easier to browse
> and understand
>
> 2) make sure the speed of the new jobs is roughly at the same level as the
> previous ones
>
> 3) get the OVB jobs ported as well

Just to be clear, we won't move OVB until 1) and 2) are solved.
I think we prefer to wait a few more weeks to avoid any critical
issues like we're facing right now with multinode-oooq jobs on stable
branches.

> We use the "oooq-t-phase2"[2] gerrit topic for the changes around these
> areas. As the OVB related ones are kind of big, we will not migrate the jobs
> next week, most probably only at the beginning of the week after.
>
> We're also trying to utilize the new RDO Cloud, hopefully we will be able to
> offload a couple of gate jobs on it soon.
>
> Best regards,
> Attila

Thanks for this work,

> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113996.html
> [2] https://review.openstack.org/#/q/topic:oooq-t-phase2
>



-- 
Emilien Macchi



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Emilien Macchi
On Thu, Mar 16, 2017 at 11:18 AM, Kendall Nelson  wrote:
> Hello Emilien,
>
> So, we have our slots basically filled, BUT you can ask one of the projects
> that did get a slot if you can be refugees in their room and share. Or, since
> the rooms will be empty over lunch, we can make sure to get you time there;
> if your team is willing to do the grab-and-go lunches, you can use the lunch
> time. Let me know what you would prefer.

No worries.

I'll poll the mailing-list before the Forum to ask who would be
interested in such a slot and try to find a place at the conference
center where we can do it.
Otherwise, we'll gently ask for some space in the existing slots :-)

Thanks,


> -Kendall (diablo_rojo)
>
>
> On Thu, Mar 16, 2017 at 10:07 AM Emilien Macchi  wrote:
>>
>> On Wed, Mar 15, 2017 at 2:20 PM, Kendall Nelson 
>> wrote:
>> > Hello All!
>> >
>> > As you may have seen in a previous thread [1] the Forum will offer
>> > project
>> > on-boarding rooms! The idea is that these rooms will provide a place
>> > for
>> > new contributors to a given project to find out more about the project,
>> > people, and code base. The slots will be spread out throughout the whole
>> > Summit and will be 90 min long.
>> >
>> > We have very limited slots available for interested projects, so it
>> > will be
>> > a first-come, first-served process. Let me know if you are interested and
>> > I
>> > will reserve a slot for you if there are spots left.
>>
>> If possible, please add TripleO project. Several TripleO developers
>> will be here and we'll easily find someone (or two) to host this slot
>> (if not me).
>>
>> Thanks,
>>
>> > - Kendall Nelson (diablo_rojo)
>> >
>> > [1]
>> >
>> > http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>> >
>> >
>> >
>>
>>
>>
>> --
>> Emilien Macchi
>>
>
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [os-university] [os-upstream-institute] NAME CHANGE

2017-03-16 Thread Ghanshyam Mann
Noted.

Also added a redirect link on the old wiki:
https://wiki.openstack.org/wiki/OpenStack_University

-gmann

On Fri, Mar 17, 2017 at 6:47 AM, Ildiko Vancsa 
wrote:

> Hi All,
>
> Due to some issues around the name OpenStack University we unfortunately
> had to pick a new one.
>
> The new name of the program is OpenStack Upstream Institute.
>
> We renamed the corresponding assets as well, so please join or move to the
> #openstack-upstream-institute IRC channel. The new ML tag is
> os-upstream-institute.
>
> You can find the wiki page for the team and activities here:
> https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute
>
> Sorry for the inconvenience.
>
> Thanks and Best Regards,
> Ildikó
> IRC: ildikov
>


Re: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole core

2017-03-16 Thread Ghanshyam Mann
+1.
Yeah, Felipe is doing great work.

-gmann

On Fri, Mar 17, 2017 at 3:28 AM, BARTRA, RICK  wrote:

> Felipe has done a tremendous amount of work stabilizing, enabling gates,
> contributing new tests, and extensively reviewing code in the Patrole
> project. In fact, he is the number one contributor to Patrole in terms of
> lines of code. He is also driving direction in the project and genuinely
> cares about the success of Patrole. As core spots are limited, I am
> recommending that Felipe replace Sangeet Gupta (sg7...@att.com) as core
> due to Sangeet’s inactivity on the project.
>
>
>
> -Rick Bartra
>
> rb5...@att.com
>
>
>


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Tony Breeds
On Thu, Mar 16, 2017 at 10:31:49AM +0100, Thierry Carrez wrote:
 
> Infrastructure, QA, Release Management, Requirements, Stable maint
> 
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

This plan sounds good to me.

Yours Tony.




Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Rodrigo Duarte
On Thu, Mar 16, 2017 at 3:42 PM, Lance Bragstad  wrote:

>
> On Thu, Mar 16, 2017 at 12:46 PM, Morgan Fainberg <
> morgan.fainb...@gmail.com> wrote:
>
>>
>>
>> On Mar 16, 2017 07:28, "Jeremy Stanley"  wrote:
>>
>> On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
>> [...]
>> > These security-related corner cases have always come up in the past when
>> > we've talked about implementing reseller. Another good example that I
>> > struggle with is what happens when you flip the reseller bit for a
>> project
>> > admin who goes off and creates their own entities but then wants
>> support?
>> > What does the support model look like for the project admin that needs
>> help
>> > in a way that maintains data integrity?
>>
>> It's still entirely unclear to me how giving someone the ability to
>> hide resources you've delegated them access to create in any way
>> enables "reseller" use cases. I can understand the global admins
>> wanting to have optional views where they don't see all the resold
>> hierarchy (for the sake of their own sanity), but why would a
>> down-tree admin have any expectation they could reasonably hide
>> resources they create from those who maintain the overall system?
>>
>> In other multi-tenant software I've used where reseller
>> functionality is present, top-level admins have some means of
>> examining delegated resources and usually even of impersonating
>> their down-tree owners for improved supportability.
>> --
>> Jeremy Stanley
>>
>>
>>
>> Hiding projects is a lot like implementing Mandatory Access Control
>> within OpenStack. I would like to go on record and say we should squash the
>> hidden projects concept (within a single hierarchy). If we want to
>> implement MAC (SELinux equivalent) in OpenStack, we have a much, much,
>> bigger scope to cover than just in Keystone, and this feels outside the
>> scope of any hierarchical multi-tenancy work that has been done/will be
>> done.
>>
>> TL;DR: let's not try to hide projects from users with rights in the same
>> (peer, or above) hierarchy.
>>
>
> If that's the direction - we need to realign our documentation [0].
>
>
> [0] http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/mitaka/reseller.html#problem-description
>

I guess we need a new spec to discuss improvements in the "regular" HMT
implementation. Once we cover just the "project hierarchy" we can think
about improvements for the reseller use case as well.
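As a rough illustration of the "project hierarchy" piece, listing a full subtree from a flat parent-pointer table (similar in spirit to how Keystone models projects, though the data and helper here are hypothetical) might look like this:

```python
# Minimal sketch: a flat project -> parent mapping and a depth-first
# subtree listing. Illustrative only, not Keystone's implementation.
PROJECTS = {            # project -> parent (None means a root/domain)
    "top": None,
    "dept-a": "top",
    "dept-b": "top",
    "team-a1": "dept-a",
}

def subtree(root):
    """Return all descendants of `root`, depth-first."""
    result = []
    for project, parent in PROJECTS.items():
        if parent == root:
            result.append(project)
            result.extend(subtree(project))
    return result

print(subtree("top"))  # ['dept-a', 'team-a1', 'dept-b']
```

An "admin lists the full subtree" API is then just this walk scoped by a role check at `root`; the reseller debate above is about whether any segment of that walk should ever be hidden from admins higher up the tree.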


>
>
>>
>>
>>
>
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Rodrigo Duarte
On Thu, Mar 16, 2017 at 2:46 PM, Morgan Fainberg 
wrote:

>
>
> On Mar 16, 2017 07:28, "Jeremy Stanley"  wrote:
>
> On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
> [...]
> > These security-related corner cases have always come up in the past when
> > we've talked about implementing reseller. Another good example that I
> > struggle with is what happens when you flip the reseller bit for a
> project
> > admin who goes off and creates their own entities but then wants support?
> > What does the support model look like for the project admin that needs
> help
> > in a way that maintains data integrity?
>
> It's still entirely unclear to me how giving someone the ability to
> hide resources you've delegated them access to create in any way
> enables "reseller" use cases. I can understand the global admins
> wanting to have optional views where they don't see all the resold
> hierarchy (for the sake of their own sanity), but why would a
> down-tree admin have any expectation they could reasonably hide
> resources they create from those who maintain the overall system?
>
> In other multi-tenant software I've used where reseller
> functionality is present, top-level admins have some means of
> examining delegated resources and usually even of impersonating
> their down-tree owners for improved supportability.
> --
> Jeremy Stanley
>
>
>
> Hiding projects is a lot like implementing Mandatory Access Control within
> OpenStack. I would like to go on record and say we should squash the hidden
> projects concept (within a single hierarchy). If we want to implement MAC
> (SELinux equivalent) in OpenStack, we have a much, much, bigger scope to
> cover than just in Keystone, and this feels outside the scope of any
> hierarchical multi-tenancy work that has been done/will be done.
>
>
I liked the comparison here; with this in mind, we can try to improve the
design of the current solution.


> TL;DR: let's not try to hide projects from users with rights in the same
> (peer, or above) hierarchy.
>

>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread Fox, Kevin M
Yeah, that would probably handle the use case too.

Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 4:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][all] Reseller - do we need it?


On Thu, Mar 16, 2017 at 5:54 PM, Fox, Kevin M wrote:
At our site, we have some larger projects where it would be really nice if we
could just give a main project all the resources they need, and let them
suballocate it as their own internal subprojects' needs change. Right now, we
have to deal with all the subprojects directly. The reseller concept may fit
this use case?

Sounds like this might also be solved by better RBAC that allows real project 
administrators to control their own subtrees. Is there a use case to limit 
visibility either up or down the tree? If not, would it be a nice-to-have?


Thanks,
Kevin


From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone][all] Reseller - do we need it?

Hey folks,

The reseller use case [0] has been popping up frequently in various discussions 
[1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early 
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows a 
certain level of opaqueness within project trees. This opaqueness would make it 
easier for providers to "resell" infrastructure, without having 
customers/providers see all the way up and down the project tree, hence it was 
termed reseller. Keystone originally had some ideas of how to implement this 
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are willing 
to represent the idea. Participating in this thread doesn't mean you're on the 
hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description





Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread Lance Bragstad
On Thu, Mar 16, 2017 at 5:54 PM, Fox, Kevin M  wrote:

> At our site, we have some larger projects where it would be really nice if
> we could just give a main project all the resources they need, and let them
> suballocate it as their own internal subprojects' needs change. Right now,
> we have to deal with all the subprojects directly. The reseller concept may
> fit this use case?
>

Sounds like this might also be solved by better RBAC that allows real
project administrators to control their own subtrees. Is there a use case
to limit visibility either up or down the tree? If not, would it be a
nice-to-have?


>
> Thanks,
> Kevin
>
> --
> *From:* Lance Bragstad [lbrags...@gmail.com]
> *Sent:* Thursday, March 16, 2017 2:10 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [keystone][all] Reseller - do we need it?
>
> Hey folks,
>
> The reseller use case [0] has been popping up frequently in various
> discussions [1], including unified limits.
>
> For those who are unfamiliar with the reseller concept, it came out of
> early discussions regarding hierarchical multi-tenancy (HMT). It
> essentially allows a certain level of opaqueness within project trees. This
> opaqueness would make it easier for providers to "resell" infrastructure,
> without having customers/providers see all the way up and down the project
> tree, hence it was termed reseller. Keystone originally had some ideas of
> how to implement this after the HMT implementation laid the ground work,
> but it was never finished.
>
> With it popping back up in conversations, I'm looking for folks who are
> willing to represent the idea. Participating in this thread doesn't mean
> you're on the hook for implementing it or anything like that.
>
> Are you interested in reseller and willing to provide use-cases?
>
>
>
> [0] http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/mitaka/reseller.html#problem-description
>
>
>


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread Fox, Kevin M
At our site, we have some larger projects where it would be really nice if we
could just give a main project all the resources they need, and let them
suballocate it as their own internal subprojects' needs change. Right now, we
have to deal with all the subprojects directly. The reseller concept may fit
this use case?

Thanks,
Kevin
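Kevin's suballocation use case could be sketched, very loosely, as a parent project carving its granted quota up among children (purely illustrative; this class and its methods are not an existing OpenStack API):

```python
# Hedged sketch: the provider grants a parent project a resource pool;
# the parent's admin suballocates it to child projects without further
# provider involvement. Hypothetical code, not an OpenStack interface.
class ProjectQuota:
    def __init__(self, name, limit):
        self.name = name
        self.limit = limit      # total resources granted by the provider
        self.children = {}      # child project -> suballocated amount

    def available(self):
        """Resources not yet handed out to child projects."""
        return self.limit - sum(self.children.values())

    def suballocate(self, child, amount):
        """Carve `amount` out of this project's pool for a child project."""
        if amount > self.available():
            raise ValueError("suballocation exceeds the parent's limit")
        self.children[child] = amount

main = ProjectQuota("main-project", limit=100)
main.suballocate("subproject-x", 60)
main.suballocate("subproject-y", 30)
print(main.available())  # 10
# main.suballocate("subproject-z", 20)  # would raise: only 10 left
```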


From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone][all] Reseller - do we need it?

Hey folks,

The reseller use case [0] has been popping up frequently in various discussions 
[1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early 
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows a 
certain level of opaqueness within project trees. This opaqueness would make it 
easier for providers to "resell" infrastructure, without having 
customers/providers see all the way up and down the project tree, hence it was 
termed reseller. Keystone originally had some ideas of how to implement this 
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are willing 
to represent the idea. Participating in this thread doesn't mean you're on the 
hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
@Jay, yes Cinder has a slot; they were in the list I posted to this thread
this morning I believe :)

@Telles I will leave it up to you and Zun (hongbin...@huawei.com) :) If you
want to share the room simultaneously or split the time, it's up to you.

On Thu, Mar 16, 2017 at 3:58 PM Telles Nobrega  wrote:

> Thank you Kendall.
>
> Just to be sure, we would have a 45-minute on-boarding session for sahara,
> correct?
>
> Thanks.
> On Thu, 16 Mar 2017 at 17:46 HU, BIN  wrote:
>
> Kendall,
>
>
>
> That sounds great, and thank you very much for consideration.
>
>
>
> Cheers :)
>
> Bin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* Thursday, March 16, 2017 11:59 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello Bin :)
>
> Currently we don't have other official projects waiting in line, but
> there was one other unofficial project that asked for space before Gluon.
> As of right now, the plan is to wait to see if any other official projects
> trickle in as I don't think all the timezones have gotten to weigh in on
> this thread yet. If no more projects voice interest, I can see if I can
> fit Cyborg and Gluon in.
>
> -Kendall (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 1:21 PM HU, BIN  wrote:
>
> I know Gluon is an unofficial project. Since Zun is willing to share
> the room, is it possible to share it with Gluon, unless there are other
> projects waiting in the queue?
>
> Thanks
>
>
> Bin
>
>
>
>  Original Message 
> From: Kendall Nelson 
> Date: 11:13AM, Thu, Mar 16, 2017
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
> Thanks! I will make note that you are willing to share.
>
>
>
> On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu  wrote:
>
> The Zun team could squeeze the session into 45 minutes and give the other 45
> minutes to another team if anyone is interested.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* March-16-17 11:11 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello All!
>
I am pleased to see how much interest there is in these onboarding rooms.
As of right now I can accommodate all the official projects (sorry Cyborg)
that have requested a room. To make all the requests fit, I have combined
Docs and i18n and taken Thierry's suggestion to combine
Infra/QA/RelMgmt/Reqs/Stable.
>
> These are the projects that have requested a slot:
>
> Solum
>
> Tricircle
>
> Karbor
>
> Freezer
>
> Kuryr
>
> Mistral
>
> Dragonflow
>
> CloudKitty
>
> Designate
>
> Trove
>
> Watcher
>
> Magnum
>
> Barbican
>
> Charms
>
> Tacker
>
> Zun
>
> Swift
>
> Kolla
>
> Horizon
>
> Keystone
>
> Nova
>
> Cinder
>
> Telemetry
>
> Infra/QA/RelMgmt/Reqs/Stable
>
> Docs/i18n
>
> If there are any other projects willing to share a slot together please
> let me know!
>
> -Kendall Nelson (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley  wrote:
>
> On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
> [...]
> > I think we could share a 90-min slot between a number of the supporting
> > teams:
> >
> > Infrastructure, QA, Release Management, Requirements, Stable maint
> >
> > Those teams are all under-staffed and wanting to grow new members, but
> > 90 min is both too long and too short for them. I feel like regrouping
> > them in a single slot and give each of those teams ~15 min to explain
> > what they do, their process and tooling, and a pointer to next steps /
> > mentors would be immensely useful.
>
> I can see this working okay for the Infra team. Pretty sure I can't
> come up with anything useful (to our team) we could get through in a
> 90-minute slot given our new contributor learning curve, so would
> feel bad wasting a full session. A "this is who we are and what we
> do, if you're interested in these sorts of things and want to find
> out more on getting involved go here, thank you for your time" over
> 10 minutes with an additional 5 for questions could at least be
> minimally valuable for us, on the other hand.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> 

Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Doug Hellmann

> On Mar 16, 2017, at 5:00 PM, Matt Riedemann  wrote:
> 
> On 3/16/2017 2:00 PM, Doug Hellmann wrote:
>> 
>> Based on the feedback on that list, there seems to be no real support
>> for maintaining the ability to translate log messages. I therefore
>> recommend that teams accept the patches to remove them, and stop
>> enforcing their use. Whether or not teams want to make a concerted
>> effort to sweep through the code and delete them is up to them.
>> 
>> Please keep translations for exceptions and other user-facing messages,
>> for now.
>> 
>> As Sean pointed out elsewhere, we can deal with other log-related
>> changes independently, since those are likely to need more thought to
>> write and review.
>> 
>> Doug
> 
> I would suggest someone put some notes in the oslo.i18n docs to convey the 
> message that those instructions are no longer relevant:
> 
> https://docs.openstack.org/developer/oslo.i18n/guidelines.html#log-translation

Done: https://review.openstack.org/446762




Re: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole core

2017-03-16 Thread PURCELL, DAVID
+1

From: BARTRA, RICK
Sent: Thursday, March 16, 2017 2:28 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole 
core

Felipe has done a tremendous amount of work stabilizing, enabling gates, 
contributing new tests, and extensively reviewing code in the Patrole project. 
In fact, he is the number one contributor to Patrole in terms of lines of code. 
He is also driving direction in the project and genuinely cares about the 
success of Patrole. As core spots are limited, I am recommending that Felipe 
replace Sangeet Gupta (sg7...@att.com) as core due to 
Sangeet’s inactivity on the project.

-Rick Bartra
rb5...@att.com


[openstack-dev] [os-university] [os-upstream-institute] NAME CHANGE

2017-03-16 Thread Ildiko Vancsa
Hi All,

Due to some issues around the name OpenStack University we unfortunately had to 
pick a new one.

The new name of the program is OpenStack Upstream Institute.

We renamed the corresponding assets as well, so please join or move to the 
#openstack-upstream-institute IRC channel. The new ML tag is 
os-upstream-institute.

You can find the wiki page for the team and activities here: 
https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute

Sorry for the inconvenience.

Thanks and Best Regards,
Ildikó
IRC: ildikov


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread John Dickinson


On 16 Mar 2017, at 14:10, Lance Bragstad wrote:

> Hey folks,
>
> The reseller use case [0] has been popping up frequently in various
> discussions [1], including unified limits.
>
> For those who are unfamiliar with the reseller concept, it came out of
> early discussions regarding hierarchical multi-tenancy (HMT). It
> essentially allows a certain level of opaqueness within project trees. This
> opaqueness would make it easier for providers to "resell" infrastructure,
> without having customers/providers see all the way up and down the project
> tree, hence it was termed reseller. Keystone originally had some ideas of
> how to implement this after the HMT implementation laid the ground work,
> but it was never finished.
>
> With it popping back up in conversations, I'm looking for folks who are
> willing to represent the idea. Participating in this thread doesn't mean
> you're on the hook for implementing it or anything like that.
>
> Are you interested in reseller and willing to provide use-cases?
>
>
>
> [0]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description



This is interesting to me. It sounds very similar to the reseller concept that 
Swift has. In Swift, the reseller is used to group accounts. Remember that an 
account in Swift is like a bank account. It's where you put stuff, and is 
mapped to one or more users via an auth system. So a Swift account is scoped to 
a particular reseller, and an auth system is responsible for one or more 
resellers.

You can see this in practice with the "reseller prefix" that's used in Swift's 
URLs. The default is "AUTH_", so my account might be "AUTH_john". But it's 
totally possible that there could be another auth system assigned to a 
different reseller prefix. If that other reseller prefix is "BAUTH_", then 
there could exist a totally independent "BAUTH_john" account. The only thing 
that ties some user creds (or token) to a particular account in Swift is the 
auth system.
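John's prefix mechanism can be sketched in a few lines (the names below are illustrative assumptions, not Swift internals): the reseller prefix is simply part of the account name, so "john" under two different auth systems yields two accounts that can never collide.

```python
# Toy model of Swift-style account naming: the reseller prefix namespaces
# the account, so identical usernames under different auth systems map to
# distinct, independent storage accounts.

def storage_path(reseller_prefix: str, user: str, container: str = "") -> str:
    """Build a Swift-style /v1/<account>/<container> path."""
    account = reseller_prefix + user
    return f"/v1/{account}/{container}".rstrip("/")

# Same username, different resellers -> different accounts, no collision.
assert storage_path("AUTH_", "john") == "/v1/AUTH_john"
assert storage_path("BAUTH_", "john") == "/v1/BAUTH_john"
assert storage_path("AUTH_", "john", "photos") == "/v1/AUTH_john/photos"
```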

So this reseller concept in Swift allows deployers to have more than one auth 
system installed in the same cluster at the same time. And, in fact, this is 
exactly why it was first used. If you get an account with Rackspace Cloud 
Files, you'll see the reseller prefix is "MossoCloudFS_", but it turns out that 
when Swift was created JungleDisk was an internal Rackspace product and also 
stored a bunch of data in the same system. JungleDisk managed its own users 
and auth system, so they had a different reseller prefix that was tied to a 
different auth system.

From the Keystone spec, it seems that the reseller idea is a way to group 
domains, very much like the reseller concept in Swift. I'd suggest that instead 
of building ever-increasing hierarchies of groups of users, supporting more 
than one auth system at a time is a proven way to scale out this solution. So 
instead of adding the complexity to Keystone of ever-deepening groupings, 
support having more than one Keystone instance (or even Keystone + another auth 
system) in a project's pipeline. This allows grouping users into distinct 
partitions, and it scales by adding more keystones instead of bigger keystones.
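The multi-auth suggestion above can be sketched the same way. Assuming made-up system names, each auth system in the pipeline claims one reseller prefix, and a request is routed by the prefix on the account in its URL:

```python
# Hypothetical routing table: prefix -> auth system. The names are
# placeholders for illustration, not real deployment configuration.
AUTH_SYSTEMS = {
    "AUTH_": "keystone-1",
    "BAUTH_": "keystone-2",   # could equally be a non-Keystone auth system
}

def auth_system_for(path: str) -> str:
    """Pick the auth system owning the account in a /v1/<account>/... path."""
    account = path.split("/")[2]
    for prefix, system in AUTH_SYSTEMS.items():
        if account.startswith(prefix):
            return system
    raise LookupError(f"no auth system claims account {account!r}")

assert auth_system_for("/v1/AUTH_john/photos") == "keystone-1"
assert auth_system_for("/v1/BAUTH_john/photos") == "keystone-2"
```

Scaling out then means adding another entry (and another keystone) to the table, rather than deepening the hierarchy inside one keystone.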


--John








[openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread Lance Bragstad
Hey folks,

The reseller use case [0] has been popping up frequently in various
discussions [1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of
early discussions regarding hierarchical multi-tenancy (HMT). It
essentially allows a certain level of opaqueness within project trees. This
opaqueness would make it easier for providers to "resell" infrastructure,
without having customers/providers see all the way up and down the project
tree, hence it was termed reseller. Keystone originally had some ideas of
how to implement this after the HMT implementation laid the ground work,
but it was never finished.

With it popping back up in conversations, I'm looking for folks who are
willing to represent the idea. Participating in this thread doesn't mean
you're on the hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description
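As a rough illustration of the opaqueness described above (this is not Keystone code; the boundary flag is a hypothetical stand-in for whatever the real implementation would use), a customer inside a resold subtree sees their own projects up to the reseller boundary but nothing above it:

```python
# Toy project tree with an opaque reseller boundary.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    parent: "Project | None" = None
    is_reseller_boundary: bool = False  # hypothetical flag

def visible_ancestors(project: Project) -> list[str]:
    """Walk up the tree, hiding everything above a reseller boundary."""
    chain = []
    node = project
    while node is not None:
        chain.append(node.name)
        if node.is_reseller_boundary:
            break  # levels above this point are opaque to the customer
        node = node.parent
    return chain

root = Project("provider")
reseller = Project("reseller-a", parent=root, is_reseller_boundary=True)
customer = Project("customer-1", parent=reseller)

# The customer sees up to the reseller boundary, never the provider root.
assert visible_ancestors(customer) == ["customer-1", "reseller-a"]
```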


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Matt Riedemann

On 3/16/2017 2:00 PM, Doug Hellmann wrote:


Based on the feedback on that list, there seems to be no real support
for maintaining the ability to translate log messages. I therefore
recommend that teams accept the patches to remove them, and stop
enforcing their use. Whether or not teams want to make a concerted
effort to sweep through the code and delete them is up to them.

Please keep translations for exceptions and other user-facing messages,
for now.

As Sean pointed out elsewhere, we can deal with other log-related
changes independently, since those are likely to need more thought to
write and review.

Doug


I would suggest someone put some notes in the oslo.i18n docs to convey 
the message that those instructions are no longer relevant:


https://docs.openstack.org/developer/oslo.i18n/guidelines.html#log-translation

--

Thanks,

Matt



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Telles Nobrega
Thank you Kendall.

Just to be sure, we would have a 45-minute on-boarding session for Sahara,
correct?

Thanks.
On Thu, 16 Mar 2017 at 17:46 HU, BIN  wrote:

> Kendall,
>
>
>
> That sounds great, and thank you very much for consideration.
>
>
>
> Cheers ☺
>
> Bin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* Thursday, March 16, 2017 11:59 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello Bin :)
>
>Currently we don't have other official projects waiting in line, but
> there was one other unofficial project that asked for space before Gluon.
> As of right now, the plan is to wait to see if any other official projects
> trickle in as I don't think all the timezones have gotten to weigh in on
> this thread yet. If no more voice interest, I can see if I can fit Cyborg
> and Gluon in.
>
> -Kendall (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 1:21 PM HU, BIN  wrote:
>
> I know Gluon is an unofficial project. Since Zun is willing to share
> the room, is it possible to share it with Gluon, unless there are other
> projects waiting in the queue?
>
> Thanks
>
>
> Bin
>
>
>
>  Original Message 
> From: Kendall Nelson 
> Date: 11:13AM, Thu, Mar 16, 2017
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
> Thanks! I will make note that you are willing to share.
>
>
>
> On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu  wrote:
>
> Zun team could squeeze the session into 45 minutes and give the other 45
> minutes to another team if anyone is interested.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* March-16-17 11:11 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello All!
>
> I am pleased to see how much interest there is in these onboarding rooms.
> As of right now I can accommodate all the official projects (sorry Cyborg)
> that have requested a room. To make all the requests fit, I have combined
> Docs and i18n and taken Thierry's suggestion to combine
> Infra/QA/RelMgmt/Regs/Stable.
>
> These are the projects that have requested a slot:
>
> [...]
>
> If there are any other projects willing to share a slot together please
> let me know!
>
> -Kendall Nelson (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley  wrote:
>
> On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
> [...]
> --
> Jeremy Stanley
>

Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Ihar Hrachyshka
On Thu, Mar 16, 2017 at 12:00 PM, Doug Hellmann  wrote:
> Please keep translations for exceptions and other user-facing messages,
> for now.

To clarify, that means LOG.exception(_LE(...)) should also be cleaned
up? The only things that we should leave are messages that eventually
get to users (by means of stdout for clients, or through API payload, for
services).

Ihar
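For anyone applying the cleanup, the pattern under discussion looks roughly like this. It is a sketch with a stand-in `_()` rather than oslo.i18n itself, and the function names are hypothetical: the `_LE`/`_LI`/`_LW` markers come off log calls, while user-facing strings such as exception messages keep `_()` for now.

```python
import logging

LOG = logging.getLogger(__name__)

def _(msg):
    """Stand-in for oslo_i18n's primary translation function."""
    return msg

def attach(vol_id):
    ok = False  # pretend the attach failed, for illustration
    if not ok:
        # Before the cleanup this read: LOG.error(_LE("Failed to attach ..."))
        # After: the log message is no longer marked for translation.
        LOG.error("Failed to attach volume %s", vol_id)
        # The user-facing exception text keeps its translation marker.
        raise RuntimeError(_("Volume %s could not be attached") % vol_id)
```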



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread HU, BIN
Kendall,

That sounds great, and thank you very much for consideration.

Cheers ☺
Bin

From: Kendall Nelson [mailto:kennelso...@gmail.com]
Sent: Thursday, March 16, 2017 11:59 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello Bin :)
   Currently we don't have other official projects waiting in line, but there 
was one other unofficial project that asked for space before Gluon. As of right 
now, the plan is to wait to see if any other official projects trickle in as I 
don't think all the timezones have gotten to weigh in on this thread yet. If no 
more projects voice interest, I can see if I can fit Cyborg and Gluon in.
-Kendall (diablo_rojo)

On Thu, Mar 16, 2017 at 1:21 PM HU, BIN wrote:
I know Gluon is an unofficial project. Since Zun is willing to share the 
room, is it possible to share it with Gluon, unless there are other projects 
waiting in the queue?

Thanks

Bin


 Original Message 
From: Kendall Nelson
Date: 11:13AM, Thu, Mar 16, 2017
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms
Thanks! I will make note that you are willing to share.

On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu wrote:
Zun team could squeeze the session into 45 minutes and give the other 45 
minutes to another team if anyone is interested.

Best regards,
Hongbin

From: Kendall Nelson 
[mailto:kennelso...@gmail.com]
Sent: March-16-17 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!
I am pleased to see how much interest there is in these onboarding rooms. As of 
right now I can accommodate all the official projects (sorry Cyborg) that have 
requested a room. To make all the requests fit, I have combined Docs and i18n 
and taken Thierry's suggestion to combine Infra/QA/RelMgmt/Regs/Stable.
These are the projects that have requested a slot:
[...]
If there are any other projects willing to share a slot together please let me 
know!
-Kendall Nelson (diablo_rojo)

On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley wrote:
On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
--
Jeremy Stanley


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Jay Bryant

Kendall,

Did you end up reserving a room for Cinder or did others need it?

Since Sean left it open I wanted to know the status so we could plan 
appropriately.


Thanks!

Jay


On 3/16/2017 3:03 PM, Kendall Nelson wrote:

@James Already have Charms on the list :)

@Telles I can put Sahara down to share with Zun.


On Thu, Mar 16, 2017 at 2:38 PM Telles Nobrega wrote:


Hello Kendall,

I would like to have a room for Sahara as well, if it is still
possible. We sure can split with other team.

On Thu, Mar 16, 2017 at 10:07 AM Ian Y. Choi wrote:

Hello Kendall!

I18n team loves to have a project on-boarding room for new
translators :)
Please reserve a room for I18n if available.


With many thanks,

/Ian


Kendall Nelson wrote on 3/16/2017 3:20 AM:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer
> project on-boarding rooms! The idea is that these rooms will provide
> a place for new contributors to a given project to find out more about
> the project, people, and code base. The slots will be spread out
> throughout the whole Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it
> will be a first come, first served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
>

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
>
>













Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
@James Already have Charms on the list :)

@Telles I can put Sahara down to share with Zun.


On Thu, Mar 16, 2017 at 2:38 PM Telles Nobrega  wrote:

Hello Kendall,

I would like to have a room for Sahara as well, if it is still possible. We
sure can split with other team.

On Thu, Mar 16, 2017 at 10:07 AM Ian Y. Choi  wrote:

Hello Kendall!

I18n team loves to have a project on-boarding room for new translators :)
Please reserve a room for I18n if available.


With many thanks,

/Ian


Kendall Nelson wrote on 3/16/2017 3:20 AM:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer
> project on-boarding rooms! The idea is that these rooms will provide
> a place for new contributors to a given project to find out more about
> the project, people, and code base. The slots will be spread out
> throughout the whole Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it
> will be a first come, first served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
>






Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Clint Byrum
Excerpts from Clark Boylan's message of 2017-03-16 10:16:37 -0700:
> On Thu, Mar 16, 2017, at 09:46 AM, Steven Hardy wrote:
> > On Thu, Mar 16, 2017 at 10:30:48AM -0500, Gregory Haynes wrote:
> > > On Thu, Mar 16, 2017, at 05:18 AM, Steven Hardy wrote:
> > > > On Wed, Mar 15, 2017 at 04:22:37PM -0500, Ben Nemec wrote:
> > > > > While looking through the dib v2 changes after the feature branch was 
> > > > > merged
> > > > > to master, I noticed this commit[1], which bring dib-run-parts back 
> > > > > into dib
> > > > > itself.  Unfortunately I missed the original proposal to do this, but 
> > > > > I have
> > > > > some concerns about the impact of this change.
> > > > > 
> > > > > Originally the split was done so that dib-run-parts and one of the
> > > > > os-*-config projects (looks like os-refresh-config) that depends on 
> > > > > it could
> > > > > be included in a stock distro cloud image without pulling in all of 
> > > > > dib.
> > > > > Note that it is still present in the requirements of orc: 
> > > > > https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> > > > > 
> > > > > Disk space in a distro cloud image is at a premium, so pulling in a 
> > > > > project
> > > > > like diskimage-builder to get one script out of it was not 
> > > > > acceptable, at
> > > > > least from what I was told at the time.
> > > > > 
> > > > > I believe this was done so a distro cloud image could be used with 
> > > > > Heat out
> > > > > of the box, hence the heat tag on this message.  I don't know exactly 
> > > > > what
> > > > > happened after we split out dib-utils, so I'm hoping someone can 
> > > > > confirm
> > > > > whether this requirement still exists.  I think Steve was the one who 
> > > > > made
> > > > > the original request.  There were a lot of Steves working on Heat at 
> > > > > the
> > > > > time though, so it's possible I'm wrong. ;-)
> > > > 
> > > > I don't think I'm the Steve you're referring to, but I do have some
> > > > additional info as a result of investigating this bug:
> > > > 
> > > > https://bugs.launchpad.net/tripleo/+bug/1673144
> > > > 
> > > > It appears we have three different versions of dib-run-parts on the
> > > > undercloud (and, presumably overcloud nodes) at the moment, which is a
> > > > pretty major headache from a maintenance/debugging perspective.
> > > > 
> > > 
> > > I looked at the bug and I think there may only be two different
> > > versions? The versions in /bin and /usr/bin seem to come from the same
> > > package (so I hope they are the same version). I don't understand what
> > > is going on with the ./lib version but that seems like either a local
> > > package / checkout or something else non-dib related.
> > > 
> > > Two versions is certainly less than ideal, though :).
> > 
> > No, I think there are four versions, three unique:
> > 
> > (undercloud) [stack@undercloud ~]$ rpm -qf /usr/bin/dib-run-parts
> > dib-utils-0.0.11-1.el7.noarch
> > (undercloud) [stack@undercloud ~]$ rpm -qf /bin/dib-run-parts
> > dib-utils-0.0.11-1.el7.noarch
> > (undercloud) [stack@undercloud ~]$ rpm -qf
> > /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts
> > diskimage-builder-2.0.1-0.20170314023517.756923c.el7.centos.noarch
> > (undercloud) [stack@undercloud ~]$ rpm -qf /usr/local/bin/dib-run-parts
> > file /usr/local/bin/dib-run-parts is not owned by any package
> > 
> > /usr/bin/dib-run-parts and /bin/dib-run-parts are the same file, owned by
> > dib-utils
> > 
> > /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts is
> > owned by diskimage-builder
> > 
> > /usr/local/bin/dib-run-parts is the mystery file presumed from image
> > building
> > 
> > But the exciting thing from a rolling-out-bugfixes perspective is that
> > the
> > one actually running via o-r-c isn't either of the packaged versions
> > (doh!)
> > so we probably need to track down which element is installing it.
> > 
> > This is a little OT for this thread (sorry), but hopefully provides more
> > context around my concerns about creating another fork etc.
> > 
> > > > However we resolve this, *please* can we avoid permanently forking the
> > > > tool, as e.g in that bug, where do I send the patch to fix leaking
> > > > profiledir directories?  What package needs an update?  What is
> > > > installing
> > > > the script being run that's not owned by any package?
> > > > 
> > > > Yes, I know the answer to some of those questions, but I'm trying to
> > > > point
> > > > out duplicating this script and shipping it from multiple repos/packages
> > > > is
> > > > pretty horrible from a maintenance perspective, especially for new or
> > > > casual contributors.
> > > > 
> > > 
> > > I agree. You answered my previous question of whether os-refresh-config
> > > is still in use (sounds like it definitely is) so this complicates
> > > things a bit.
> > > 
> > > > If we have to fork it, I'd suggest we should rename the script to avoid
> > > > the
> > > > 
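One quick way to audit the situation described in this thread — several copies of dib-run-parts installed from different sources — is to hash every copy on the host. A hypothetical helper (the candidate paths mirror the ones in the bug report):

```python
# Locate every copy of dib-run-parts and hash it, to count how many
# distinct versions are actually installed on a host.
import hashlib
import os

CANDIDATE_DIRS = [
    "/usr/bin", "/bin", "/usr/local/bin",
    "/usr/lib/python2.7/site-packages/diskimage_builder/lib",
]

def find_copies(name="dib-run-parts", dirs=CANDIDATE_DIRS):
    """Return {path: sha256} for every readable copy of the script."""
    digests = {}
    for d in dirs:
        path = os.path.join(d, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

if __name__ == "__main__":
    copies = find_copies()
    distinct = len(set(copies.values()))
    print(f"{len(copies)} copies, {distinct} distinct versions")
```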

Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Telles Nobrega
Hello Kendall,

I would like to have a room for Sahara as well, if it is still possible. We
sure can split with other team.

On Thu, Mar 16, 2017 at 10:07 AM Ian Y. Choi  wrote:

Hello Kendall!

I18n team loves to have a project on-boarding room for new translators :)
Please reserve a room for I18n if available.


With many thanks,

/Ian


Kendall Nelson wrote on 3/16/2017 3:20 AM:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer
> project on-boarding rooms! The idea is that these rooms will provide
> a place for new contributors to a given project to find out more about
> the project, people, and code base. The slots will be spread out
> throughout the whole Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it
> will be a first come, first served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
>





Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Sean Dague
On 03/16/2017 03:00 PM, Doug Hellmann wrote:
> Excerpts from Doug Hellmann's message of 2017-03-10 09:39:29 -0500:
>> Excerpts from Doug Hellmann's message of 2017-03-10 09:28:52 -0500:
>>> Excerpts from Ian Y. Choi's message of 2017-03-10 01:22:40 +0900:
 Doug Hellmann wrote on 3/9/2017 9:24 PM:
> Excerpts from Sean McGinnis's message of 2017-03-07 07:17:09 -0600:
>> On Mon, Mar 06, 2017 at 09:06:18AM -0500, Sean Dague wrote:
>>> On 03/06/2017 08:43 AM, Andreas Jaeger wrote:
 On 2017-03-06 14:03, Sean Dague  wrote:
> I'm trying to understand the implications of
> https://review.openstack.org/#/c/439500. And the comment in the linked
> email:
>
> ">> Yes, we decided some time ago to not translate the log files 
> anymore and
>>> thus our tools do not handle them anymore - and in general, we 
>>> remove
>>> these kind of files."
> Does that mean that all the _LE, _LI, _LW stuff in projects should be
> fully removed? Nova currently enforces those things are there -
> https://github.com/openstack/nova/blob/e88dd0034b1b135d680dae3494597e295add9cfe/nova/hacking/checks.py#L314-L333
> and want to make sure our tools aren't making us do work that the i18n
> team is ignoring and throwing away.
>> So... just looking for a definitive statement on this since there has
>> been some back and forth discussion.
>>
>> Is it correct to say - all projects may (should?) now remove all bits in
>> place for using and enforcing the _Lx() translation markers. Only _()
>> should be used for user visible error messages.
>>
>> Sean (smcginnis)
>>
> The situation is still not quite clear to me, and it would be
> unfortunate to undo too much of the translation support work because
> it will be hard to redo it.
>
> Is there documentation somewhere describing what the i18n team has
> committed to trying to translate?

 I18n team describes translation plan and priority in Zanata - 
 translation platform
   : https://translate.openstack.org/ .

>   I think I heard that there was a
> shift in emphasis to "user interfaces", but I'm not sure if that
> includes error messages in services. Should we remove all use of
> oslo.i18n from services? Or only when dealing with logs?

 When the I18n team decided on the removal of log translations in Barcelona
 last October, there had been no discussion on the removal of oslo.i18n
 translation support for log messages.
 (I have kept track of what the I18n team discussed during the Barcelona I18n
 meetup on Etherpad - [1])

 Now I think that the final decision on oslo.i18n log translation support
 needs more involvement from translators, and also more people community-wide,
 including project working groups, the user committee, and operators, as Matt
 suggested.

 If translating log messages is meaningful to some community members and some
 translators show interest in translating log messages, then the I18n team can
 revert the policy and roll back the translations.
 Translated strings are still alive not only in previous stable branches, but
 also in the translation memory in Zanata - the translation platform.

 I would like to find some ways to discuss this topic with the wider
 community.
>>>
>>> I would suggest that we discuss this at the Forum in Boston, but I think
>>> we need to gather some input before then because if there is a consensus
>>> that log translations are not useful we can allow the code cleanup to
>>> occur and not take up face-to-face time.
>>
>> I've started a thread on the operators mailing list [1].
>>
>> Doug
>>
>> [1] 
>> http://lists.openstack.org/pipermail/openstack-operators/2017-March/012887.html
>>
> 
> Based on the feedback on that list, there seems to be no real support
> for maintaining the ability to translate log messages. I therefore
> recommend that teams accept the patches to remove them, and stop
> enforcing their use. Whether or not teams want to make a concerted
> effort to sweep through the code and delete them is up to them.
> 
> Please keep translations for exceptions and other user-facing messages,
> for now.
> 
> As Sean pointed out elsewhere, we can deal with other log-related
> changes independently, since those are likely to need more thought to
> write and review.

ACK. Just landed the stop enforcing patch in Nova -
https://review.openstack.org/#/c/446452/

-Sean

-- 
Sean Dague
http://dague.net
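For readers following along, the convention the thread converges on can be sketched in a few lines: log calls drop the _LE/_LI/_LW markers and use plain strings, while user-facing exception messages keep the _() translation marker. This is only an illustrative sketch — the _() below is an identity stand-in so it runs without oslo.i18n, and the function and message names are invented:

```python
import logging

# Identity stand-in for oslo.i18n's _() so this sketch runs standalone;
# in a real service, _() would come from the project's own i18n module.
def _(msg):
    return msg

LOG = logging.getLogger(__name__)

def resize_disk(size_gb):
    # Log messages are now plain strings: no _LE/_LI/_LW markers to maintain.
    LOG.info("Resizing disk to %d GB", size_gb)
    if size_gb <= 0:
        # User-facing error messages keep the _() marker for translation.
        raise ValueError(_("Disk size must be positive, got %d") % size_gb)
    return size_gb
```

The hacking checks referenced earlier in the thread enforced the opposite of this pattern, which is why they are being dropped alongside the markers.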



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread James Page
Hi Kendall

On Wed, 15 Mar 2017 at 18:22, Kendall Nelson  wrote:

> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! This idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have a very limited slots available for interested projects so it will
> be a first come first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
>

Charms project would like a slot if possible please!

Cheers

James


Re: [openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-16 Thread Emilien Macchi
On Thu, Mar 16, 2017 at 2:53 PM, Lance Bragstad  wrote:
> Yeah, that's a good point. If we end up with something like etcd, does all
> config have to be in there, or can we limit it to certain parts of the
> config?

This is a good question, but I'm not sure it belongs in this thread,
since it's not related to the Fernet token storage backend.

> For some reason I was expecting more correlation between these two topics. I
> apologize for getting my wires crossed.

They have one thing in common: etcd (or another key/value store used by
OpenStack projects) for storing things such as config, tokens, etc.

Thanks,

> On Thu, Mar 16, 2017 at 1:02 PM, Davanum Srinivas  wrote:
>>
>> Lance,
>>
>> in the other thread, we have not been talking about having any kind of
>> security for the fernet keys. Isn't that a requirement since if we
>> throw that in etcd it may be vulnerable?
>>
>> Thanks,
>> Dims
>>
>> On Thu, Mar 16, 2017 at 12:45 PM, Lance Bragstad 
>> wrote:
>> > I think the success of this, or a revived fernet-backend spec, is going
>> > to
>> > have a hard requirement on the outcome of the configuration opts
>> > discussion
>> > [0]. When we attempted to introduce an abstraction for fernet keys
>> > previously, it led down a rabbit hole of duplicated work across
>> > implementations, which was part of the reason for dropping the spec.
>> >
>> >
>> > [0]
>> >
>> > http://lists.openstack.org/pipermail/openstack-dev/2017-March/113941.html
>> >
>> > On Thu, Mar 16, 2017 at 10:12 AM, Emilien Macchi 
>> > wrote:
>> >>
>> >> On Tue, Mar 14, 2017 at 1:27 PM, Emilien Macchi 
>> >> wrote:
>> >> > Folks,
>> >> >
>> >> > I found it useful to share a spec that I started to write this morning:
>> >> > https://review.openstack.org/445592
>> >> >
>> >> > The goal is to do Keystone Fernet keys rotations in a way that scales
>> >> > and is secure, by using the standard tools and not re-inventing the
>> >> > wheel.
>> >> > In other words: if you're working on Keystone or TripleO or any other
>> >> > deployment tool: please read the spec and give any feedback.
>> >> >
>> >> > We would like to find a solution that would work for all OpenStack
>> >> > deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc) but I sent
>> >> > the
>> >> > specs to tripleo project
>> >> > to get some feedback.
>> >> >
>> >> > If you already have THE solution that you think is the best one, then
>> >> > we would be very happy to learn from it in a comment directly in the
>> >> > spec.
>> >> >
>> >>
>> >> After 2 days of review from Keystone, TripleO, OSA (and probably some
>> >> groups I missed), it's pretty clear the problem is already being fixed
>> >> in different places in different ways and that's bad.
>> >> IMHO we should engage some work to fix it in Keystone and investigate
>> >> again a storage backend for Keystone tokens.
>> >>
>> >> The Keystone spec that started this investigation was removed for
>> >> Pike:
>> >> https://review.openstack.org/#/c/439194/
>> >>
>> >> I see 2 options here:
>> >>
>> >> - we keep duplicating efforts and let deployers implement their own
>> >> solutions.
>> >>
>> >> - we work with Keystone team to re-enable the spec and move forward to
>> >> solve the problem in Keystone itself, and therefore for all deployment
>> >> tools in OpenStack (my favorite option).
>> >>
>> >>
>> >> I would like to hear from Keystone folks what are the main blockers
>> >> for option #2 and if this is only a human resource issue or if there
>> >> are some technical points we need to solve (in that case, it could be
>> >> done in the specs).
>> >>
>> >>
>> >> Thanks,
>> >> --
>> >> Emilien Macchi
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
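For readers unfamiliar with the rotation mechanics under discussion, here is a simplified, hedged sketch of keystone's staged Fernet key rotation scheme: key 0 is the staged key, and the highest-numbered key is the primary used for new tokens. It deliberately omits secondary-key pruning and real key derivation — os.urandom merely stands in for actual Fernet key material:

```python
import os
import tempfile

def rotate_keys(key_repo):
    """One rotation step: promote the staged key (0) to be the new primary,
    then create a fresh staged key. Returns the sorted key indexes."""
    keys = sorted(int(name) for name in os.listdir(key_repo))
    new_primary = (max(keys) + 1) if keys else 1
    if 0 in keys:
        # The staged key becomes the highest-numbered (primary) key.
        os.rename(os.path.join(key_repo, "0"),
                  os.path.join(key_repo, str(new_primary)))
    # Create a fresh staged key; os.urandom stands in for real key material.
    with open(os.path.join(key_repo, "0"), "wb") as f:
        f.write(os.urandom(32))
    return sorted(int(name) for name in os.listdir(key_repo))

# Demo: start a repo with only a staged key, then rotate twice.
repo = tempfile.mkdtemp()
with open(os.path.join(repo, "0"), "wb") as f:
    f.write(os.urandom(32))
print(rotate_keys(repo))  # [0, 1]
print(rotate_keys(repo))  # [0, 1, 2]
```

The deployment-tool problem debated in the thread is precisely that this step must run on exactly one node and the resulting key files must then be distributed consistently to every keystone node.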

Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-03-10 09:39:29 -0500:
> Excerpts from Doug Hellmann's message of 2017-03-10 09:28:52 -0500:
> > Excerpts from Ian Y. Choi's message of 2017-03-10 01:22:40 +0900:
> > > Doug Hellmann wrote on 3/9/2017 9:24 PM:
> > > > Excerpts from Sean McGinnis's message of 2017-03-07 07:17:09 -0600:
> > > >> On Mon, Mar 06, 2017 at 09:06:18AM -0500, Sean Dague wrote:
> > > >>> On 03/06/2017 08:43 AM, Andreas Jaeger wrote:
> > >  On 2017-03-06 14:03, Sean Dague  wrote:
> > > > I'm trying to understand the implications of
> > > > https://review.openstack.org/#/c/439500. And the comment in the 
> > > > linked
> > > > email:
> > > >
> > > > ">> Yes, we decided some time ago to not translate the log files 
> > > > anymore and
> > > >>> thus our tools do not handle them anymore - and in general, we 
> > > >>> remove
> > > >>> these kind of files."
> > > > Does that mean that all the _LE, _LI, _LW stuff in projects should 
> > > > be
> > > > fully removed? Nova currently enforces those things are there -
> > > > https://github.com/openstack/nova/blob/e88dd0034b1b135d680dae3494597e295add9cfe/nova/hacking/checks.py#L314-L333
> > > > and want to make sure our tools aren't making us do work that the 
> > > > i18n
> > > > team is ignoring and throwing away.
> > > >> So... just looking for a definitive statement on this since there has
> > > >> been some back and forth discussion.
> > > >>
> > > >> Is it correct to say - all projects may (should?) now remove all bits 
> > > >> in
> > > >> place for using and enforcing the _Lx() translation markers. Only _()
> > > >> should be used for user visible error messages.
> > > >>
> > > >> Sean (smcginnis)
> > > >>
> > > > The situation is still not quite clear to me, and it would be
> > > > unfortunate to undo too much of the translation support work because
> > > > it will be hard to redo it.
> > > >
> > > > Is there documentation somewhere describing what the i18n team has
> > > > committed to trying to translate?
> > > 
> > > I18n team describes translation plan and priority in Zanata - 
> > > translation platform
> > >   : https://translate.openstack.org/ .
> > > 
> > > >   I think I heard that there was a
> > > > shift in emphasis to "user interfaces", but I'm not sure if that
> > > > includes error messages in services. Should we remove all use of
> > > > oslo.i18n from services? Or only when dealing with logs?
> > > 
> > > When the I18n team decided on the removal of log translations in
> > > Barcelona last October, there had been no discussion on the removal of
> > > oslo.i18n translation support for log messages.
> > > (I have kept track of what the I18n team discussed during the Barcelona
> > > I18n meetup on Etherpad - [1])
> > > 
> > > Now I think that the final decision on oslo.i18n log translation support
> > > needs more involvement from translators, and also more people
> > > community-wide, including project working groups, the user committee,
> > > and operators, as Matt suggested.
> > > 
> > > If translating log messages is meaningful to some community members and
> > > some translators show interest in translating log messages, then the
> > > I18n team can revert the policy and roll back the translations.
> > > Translated strings are still alive not only in previous stable branches,
> > > but also in the translation memory in Zanata - the translation platform.
> > > 
> > > I would like to find some ways to discuss this topic with the wider
> > > community.
> > 
> > I would suggest that we discuss this at the Forum in Boston, but I think
> > we need to gather some input before then because if there is a consensus
> > that log translations are not useful we can allow the code cleanup to
> > occur and not take up face-to-face time.
> 
> I've started a thread on the operators mailing list [1].
> 
> Doug
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-operators/2017-March/012887.html
> 

Based on the feedback on that list, there seems to be no real support
for maintaining the ability to translate log messages. I therefore
recommend that teams accept the patches to remove them, and stop
enforcing their use. Whether or not teams want to make a concerted
effort to sweep through the code and delete them is up to them.

Please keep translations for exceptions and other user-facing messages,
for now.

As Sean pointed out elsewhere, we can deal with other log-related
changes independently, since those are likely to need more thought to
write and review.

Doug



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Hello Bin :)

   Currently we don't have other official projects waiting in line, but
there was one other unofficial project that asked for space before Gluon.
As of right now, the plan is to wait to see if any other official projects
trickle in as I don't think all the timezones have gotten to weigh in on
this thread yet. If no more projects voice interest, I can see if I can fit
Cyborg and Gluon in.

-Kendall (diablo_rojo)

On Thu, Mar 16, 2017 at 1:21 PM HU, BIN  wrote:

> I know Gluon is an unofficial project. Since Zun is willing to share
> the room, is it possible to share it with Gluon, unless there are other
> projects waiting in the queue?
>
> Thanks
>
> Bin
>
>
>  Original Message 
> From: Kendall Nelson 
> Date: 11:13AM, Thu, Mar 16, 2017
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
> Thanks! I will make note that you are willing to share.
>
> On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu  wrote:
>
The Zun team could squeeze the session into 45 minutes and give the other 45
minutes to another team, if anyone is interested.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* March-16-17 11:11 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello All!
>
> I am pleased to see how much interest there is in these onboarding rooms.
> As of right now I can accommodate all the official projects (sorry Cyborg)
> that have requested a room. To make all the requests fit, I have combined
> Docs and i18n and taken Thierry's suggestion to combine
> Infra/QA/RelMgmt/Reqs/Stable.
>
> These are the projects that have requested a slot:
>
> Solum
>
> Tricircle
>
> Karbor
>
> Freezer
>
> Kuryr
>
> Mistral
>
> Dragonflow
>
> Cloudkitty
>
> Designate
>
> Trove
>
> Watcher
>
> Magnum
>
> Barbican
>
> Charms
>
> Tacker
>
> Zun
>
> Swift
>
> Kolla
>
> Horizon
>
> Keystone
>
> Nova
>
> Cinder
>
> Telemetry
> Infra/QA/RelMgmt/Reqs/Stable
>
> Docs/i18n
>
> If there are any other projects willing to share a slot together please
> let me know!
>
> -Kendall Nelson (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley  wrote:
>
> On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
> [...]
> > I think we could share a 90-min slot between a number of the supporting
> > teams:
> >
> > Infrastructure, QA, Release Management, Requirements, Stable maint
> >
> > Those teams are all under-staffed and wanting to grow new members, but
> > 90 min is both too long and too short for them. I feel like regrouping
> > them in a single slot and give each of those teams ~15 min to explain
> > what they do, their process and tooling, and a pointer to next steps /
> > mentors would be immensely useful.
>
> I can see this working okay for the Infra team. Pretty sure I can't
> come up with anything useful (to our team) we could get through in a
> 90-minute slot given our new contributor learning curve, so would
> feel bad wasting a full session. A "this is who we are and what we
> do, if you're interested in these sorts of things and want to find
> out more on getting involved go here, thank you for your time" over
> 10 minutes with an additional 5 for questions could at least be
> minimally valuable for us, on the other hand.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-16 Thread Lance Bragstad
Yeah, that's a good point. If we end up with something like etcd, does all
config have to be in there, or can we limit it to certain parts of the config?

For some reason I was expecting more correlation between these two topics.
I apologize for getting my wires crossed.

On Thu, Mar 16, 2017 at 1:02 PM, Davanum Srinivas  wrote:

> Lance,
>
> in the other thread, we have not been talking about having any kind of
> security for the fernet keys. Isn't that a requirement since if we
> throw that in etcd it may be vulnerable?
>
> Thanks,
> Dims
>
> On Thu, Mar 16, 2017 at 12:45 PM, Lance Bragstad 
> wrote:
> > I think the success of this, or a revived fernet-backend spec, is going
> to
> > have a hard requirement on the outcome of the configuration opts
> discussion
> > [0]. When we attempted to introduce an abstraction for fernet keys
> > previously, it led down a rabbit hole of duplicated work across
> > implementations, which was part of the reason for dropping the spec.
> >
> >
> > [0]
> > http://lists.openstack.org/pipermail/openstack-dev/2017-
> March/113941.html
> >
> > On Thu, Mar 16, 2017 at 10:12 AM, Emilien Macchi 
> wrote:
> >>
> >> On Tue, Mar 14, 2017 at 1:27 PM, Emilien Macchi 
> >> wrote:
> >> > Folks,
> >> >
> >> > I found it useful to share a spec that I started to write this morning:
> >> > https://review.openstack.org/445592
> >> >
> >> > The goal is to do Keystone Fernet keys rotations in a way that scales
> >> > and is secure, by using the standard tools and not re-inventing the
> >> > wheel.
> >> > In other words: if you're working on Keystone or TripleO or any other
> >> > deployment tool: please read the spec and give any feedback.
> >> >
> >> > We would like to find a solution that would work for all OpenStack
> >> > deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc) but I sent the
> >> > specs to tripleo project
> >> > to get some feedback.
> >> >
> >> > If you already have THE solution that you think is the best one, then
> >> > we would be very happy to learn from it in a comment directly in the
> >> > spec.
> >> >
> >>
> >> After 2 days of review from Keystone, TripleO, OSA (and probably some
> >> groups I missed), it's pretty clear the problem is already being fixed
> >> in different places in different ways and that's bad.
> >> IMHO we should engage some work to fix it in Keystone and investigate
> >> again a storage backend for Keystone tokens.
> >>
> >> The Keystone spec that started this investigation was removed for Pike:
> >> https://review.openstack.org/#/c/439194/
> >>
> >> I see 2 options here:
> >>
> >> - we keep duplicating efforts and let deployers implement their own
> >> solutions.
> >>
> >> - we work with Keystone team to re-enable the spec and move forward to
> >> solve the problem in Keystone itself, and therefore for all deployment
> >> tools in OpenStack (my favorite option).
> >>
> >>
> >> I would like to hear from Keystone folks what are the main blockers
> >> for option #2 and if this is only a human resource issue or if there
> >> are some technical points we need to solve (in that case, it could be
> >> done in the specs).
> >>
> >>
> >> Thanks,
> >> --
> >> Emilien Macchi
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-16 Thread Joshua Harlow
I'd be fine with it also, not sure it will change much, but meh, worth a 
shot. We are all happy loving people after all, so might as well try to 
help others when we can :-P


-Josh

Davanum Srinivas wrote:

+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of castellan

Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
  wrote:

This thread has generated quite the discussion, so I will try to
address a few points in this email, echoing a lot of what Dave said.

Clint originally explained what we are trying to solve very well. The hope was
that the rename would emphasize that Castellan is just a basic
interface that supports operations common between key managers
(the existing Barbican back end and other back ends that may exist
in the future), much like oslo.db supports the common operations
between PostgreSQL and MySQL. The thought was that renaming to have
oslo part of the name would help reinforce that it's just an interface,
rather than a standalone key manager. Right now, the only Castellan
back end that would work in DevStack is Barbican. There has been talk
in the past for creating other Castellan back ends (Vault or Tang), but
no one has committed to writing the code for those yet.

The intended proposal was to rename the project, maintain the current
review team (which is only a handful of Barbican people), and bring on
a few Oslo folks, if any were available and interested, to give advice
about (and +2s for) OpenStack library best practices. However, perhaps
pulling it under oslo's umbrella without a rename is blessing it enough.

In response to Julien's proposal to make Castellan "the way you can do
key management in Python" -- it would be great if Castellan were that
abstract, but in practice it is pretty OpenStack-specific. Currently,
the Barbican team is great at working on key management projects
(including both Barbican and Castellan), but a lot of our focus now is
how we can maintain and grow integration with the rest of the OpenStack
projects, for which having the name and expertise of oslo would be a
great help.

Thanks,

Kaitlin






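To illustrate the "thin interface over pluggable back ends" idea Kaitlin describes — this is a rough sketch only, not Castellan's actual API; the class and method names here are invented for the example:

```python
from abc import ABC, abstractmethod

class KeyManager(ABC):
    """Minimal key-manager interface. Real back ends (Barbican today,
    perhaps Vault or Tang later) would each implement these operations,
    much as oslo.db abstracts over PostgreSQL and MySQL."""

    @abstractmethod
    def store(self, name, secret):
        """Store a secret and return a reference to it."""

    @abstractmethod
    def get(self, ref):
        """Retrieve a secret by reference."""

class InMemoryKeyManager(KeyManager):
    # Toy back end for demonstration only; never use in production.
    def __init__(self):
        self._secrets = {}

    def store(self, name, secret):
        self._secrets[name] = secret
        return name

    def get(self, ref):
        return self._secrets[ref]

km = InMemoryKeyManager()
ref = km.store("db-key", b"s3cret")
print(km.get(ref))  # b's3cret'
```

Calling code depends only on the abstract interface, which is the property the rename debate is about emphasizing.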


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Lance Bragstad
On Thu, Mar 16, 2017 at 12:46 PM, Morgan Fainberg  wrote:

>
>
> On Mar 16, 2017 07:28, "Jeremy Stanley"  wrote:
>
> On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
> [...]
> > These security-related corner cases have always come up in the past when
> > we've talked about implementing reseller. Another good example that I
> > struggle with is what happens when you flip the reseller bit for a
> project
> > admin who goes off and creates their own entities but then wants support?
> > What does the support model look like for the project admin that needs
> help
> > in a way that maintains data integrity?
>
> It's still entirely unclear to me how giving someone the ability to
> hide resources you've delegated them access to create in any way
> enables "reseller" use cases. I can understand the global admins
> wanting to have optional views where they don't see all the resold
> hierarchy (for the sake of their own sanity), but why would a
> down-tree admin have any expectation they could reasonably hide
> resources they create from those who maintain the overall system?
>
> In other multi-tenant software I've used where reseller
> functionality is present, top-level admins have some means of
> examining delegated resources and usually even of impersonating
> their down-tree owners for improved supportability.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> Hiding projects is a lot like implementing Mandatory Access Control within
> OpenStack. I would like to go on record and say we should squash the hidden
> projects concept (within a single hierarchy). If we want to implement MAC
> (SELinux equivalent) in OpenStack, we have a much, much, bigger scope to
> cover than just in Keystone, and this feels outside the scope of any
> hierarchical multi-tenancy work that has been done/will be done.
>
> TL;DR: let's not try to hide projects from users with rights in the same
> (peer, or above) hierarchy.
>

If that's the direction - we need to realign our documentation [0].


[0]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
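To make the opaqueness question concrete, here is a hedged sketch of the concept from the reseller spec — an illustrative toy data model, not keystone code: listing a subtree stops at a node whose "reseller bit" is set, unless the caller is a global admin, which is exactly the behavior being debated above.

```python
class Project:
    def __init__(self, name, opaque=False, children=()):
        self.name = name
        self.opaque = opaque  # the "reseller bit" under discussion
        self.children = list(children)

def list_subtree(project, global_admin=False):
    """Return visible project names. Non-admins see that an opaque
    boundary exists but cannot see past it into the resold subtree."""
    names = [project.name]
    for child in project.children:
        if child.opaque and not global_admin:
            names.append(child.name)  # boundary visible, contents hidden
            continue
        names.extend(list_subtree(child, global_admin))
    return names

tree = Project("provider", children=[
    Project("reseller-a", opaque=True, children=[Project("customer-1")]),
    Project("dept-b"),
])
print(list_subtree(tree))
# ['provider', 'reseller-a', 'dept-b']
print(list_subtree(tree, global_admin=True))
# ['provider', 'reseller-a', 'customer-1', 'dept-b']
```

Morgan's objection, in these terms, is that the non-admin branch should never hide anything from a user with rights at or above the same point in the hierarchy.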


[openstack-dev] Status of Zuul v3

2017-03-16 Thread Robyn Bergeron
Greetings!

This periodic update is primarily intended as a way to keep
contributors to the OpenStack community apprised of Zuul v3 project
status, including future changes and milestones on our way to use in
production. Additionally, the numerous existing and future users of
Zuul outside of the OpenStack community may find this update useful as
a way to track Zuul v3 development status.

If "changes are coming in the land of Zuul" is new news to you, please
read the section "About Zuul and Zuul v3" towards the end of this
email.

== Zuul v3 project status and updates ==

The Big Big news: Updates to Nodepool to support Zuul v3 are done!
This has been a large effort (approximately the size of one Stay-Puft
marshmallow man), and the team is super excited to now have this in
place. Give shrews a high-five if you see him! We still have bugs to
shake out, documentation to update, and a backwards-incompatible
configuration syntax change to make
(http://lists.openstack.org/pipermail/openstack-infra/2017-January/005018.html),
but we can consider it feature complete.

Continuing to work on and discuss sample jobs: Putting together an
ideal "sample job" and documenting what best practices look like is
essential in enabling future users of Zuul to quickly and easily
create jobs of their own. Paul Belanger has been making progress on
this in the form of a generic tox job (see
https://review.openstack.org/438281), and we're now at the point of
discussing why / how this is a foundational example.

Additionally, as more of the major subsystems of Zuul complete their
refactoring, the ability for one to contribute to Zuul v3 continues to
get a bit easier. Even though at the moment we still recommend having
deep familiarity with one or more of the subsystems, we do now have
some "low-hanging fruit" tasks listed in storyboard. Use this magical
link to find your way:
https://storyboard.openstack.org/#!/story/list?status=active=low-hanging-fruit_id=679

Some new specs for enhancements and/or features have been put forth as well:
* An interface for Zuul Job Reporting https://review.openstack.org/444088
* Zuulv3 Executor Security Enhancement https://review.openstack.org/95
* Update job trees to graphs https://review.openstack.org/443985 (yes,
technically this is an *update* to a spec, not a new one.)

Upcoming tasks and focus:
* Re-enabling disabled tests: We're continuing to make our way through
the list of remaining tests that need enabling. See the list, which
includes an annotation as to complexity for each test, here:
https://etherpad.openstack.org/p/zuulv3skips
* Full task list and plan is in the Zuul v3 storyboard:
https://storyboard.openstack.org/#!/board/41

Recent changes:
* Zuul v3: 
https://review.openstack.org/#/q/status:closed+project:openstack-infra/zuul+branch:feature/zuulv3,25
(Early adopters should be aware that we renamed zuul-launcher to
zuul-executor, this is a breaking change:
https://review.openstack.org/#/c/445594)
* Nodepool: 
https://review.openstack.org/#/q/status:closed+project:openstack-infra/nodepool+branch:feature/zuulv3,25

Previous IRC Meeting minutes & logs:
* 2017-03-06 Minutes:
http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-03-06-22.03.html
* 2017-03-06 Full log:
http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-03-06-22.03.log.html
* 2017-03-13 Minutes:
http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-03-13-22.02.html
* 2017-03-13 Full log:
http://eavesdrop.openstack.org/meetings/zuul/2017/zuul.2017-03-13-22.02.log.html

== About Zuul and Zuul v3 ==
Zuul is a pipeline-oriented project gating system, driving the project
automation necessary to enable a continuous integration environment
for the OpenStack community. It directs the OpenStack community’s
testing, running tens of thousands of jobs each day, responding to
events from the code review system and stacking potential changes to
be tested together. Testing and continuous integration for drivers is
also enabled by OpenStack’s deployment of Zuul for ~50 third-party
projects.
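The "stacking potential changes to be tested together" behavior can be sketched in a few lines — a toy model of the idea, not Zuul's implementation:

```python
def speculative_states(queue):
    """For each change in a shared gate queue, return the state it is
    tested against: itself plus every change ahead of it in the queue,
    as if those changes had already merged."""
    return [tuple(queue[: i + 1]) for i in range(len(queue))]

print(speculative_states(["A", "B", "C"]))
# [('A',), ('A', 'B'), ('A', 'B', 'C')]
```

If a change ahead in the queue fails, Zuul evicts it and retests the remaining changes against the recomputed states.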

Zuul v3 is the upcoming, not-yet-in-production version of Zuul
currently under development, bringing (amongst many others) the
following improvements and features:
* simplifying the ability to run jobs in multi-node environments
* improved management of large numbers of jobs and job variations
* support for in-tree job configuration
* ability to define jobs using Ansible (http://github.com/ansible/ansible)

Even though prior versions of Zuul are already in use by numerous
projects and companies not related to OpenStack efforts, a primary
goal of Zuul v3 is to make Zuul a universally useful CI / CD engine
for any size use case. The design of Zuul v3 improves the modularity
of the system, and enables new triggers (such as GitHub) and reporters
(where results are sent) to be more easily integrated with Zuul;
additionally, the ability to execute jobs on non-OpenStack clouds,
such as, well, the other ones :D, as well as non-cloud 

[openstack-dev] [QA] [Patrole] Nominating Felipe Monteiro for patrole core

2017-03-16 Thread BARTRA, RICK
Felipe has done a tremendous amount of work stabilizing, enabling gates, 
contributing new tests, and extensively reviewing code in the Patrole project. 
In fact, he is the number one contributor to Patrole in terms of lines of code. 
He is also driving direction in the project and genuinely cares about the 
success of Patrole. As core spots are limited, I am recommending that Felipe 
replace Sangeet Gupta (sg7...@att.com) as core due to Sangeet’s inactivity on 
the project.

-Rick Bartra
rb5...@att.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 11)

2017-03-16 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC (WARNING: time changed due to DST)
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

The last week was very significant in the CI Squad's life: we migrated 
the first set of TripleO gating jobs to Quickstart[1] and it went more 
or less smoothly. There were a few failed gate jobs, but we quickly 
patched up the problematic parts.


For the "phase2" of the transition we're going to concentrate on three 
areas:


1) usability improvements, to make the logs from the jobs easier to 
browse and understand


2) make sure the speed of the new jobs is roughly at the same level as 
that of the previous ones


3) get the OVB jobs ported as well

We use the "oooq-t-phase2"[2] gerrit topic for the changes around these 
areas. As the OVB-related changes are fairly big, we will not migrate 
those jobs next week, most probably only at the beginning of the week after.


We're also trying to utilize the new RDO Cloud; hopefully we will be 
able to offload a couple of gate jobs to it soon.


Best regards,
Attila

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113996.html

[2] https://review.openstack.org/#/q/topic:oooq-t-phase2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread HU, BIN
I know Gluon is an unofficial project. Since Zun is willing to share the 
room, is it possible to share it with Gluon, unless other projects are 
waiting in the queue?

Thanks
Bin

 Original Message 
From: Kendall Nelson 
Date: 11:13AM, Thu, Mar 16, 2017
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Thanks! I will make note that you are willing to share.

On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu 
> wrote:

Zun team could squeeze the session into 45 minutes and give the other 45 
minutes to another team if anyone is interested.



Best regards,

Hongbin



From: Kendall Nelson 
[mailto:kennelso...@gmail.com]
Sent: March-16-17 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms



Hello All!

I am pleased to see how much interest there is in these onboarding rooms. As of 
right now I can accommodate all the official projects (sorry, Cyborg) that have 
requested a room. To make all the requests fit, I have combined Docs and i18n 
and taken Thierry's suggestion to combine Infra/QA/RelMgmt/Reqs/Stable.

These are the projects that have requested a slot:

Solum

Tricircle

Karbor

Freezer

Kuryr

Mistral

Dragonflow

Cloudkitty

Designate

Trove

Watcher

Magnum

Barbican

Charms

Tacker

Zun

Swift

Watcher

Kolla

Horizon

Keystone

Nova

Cinder

Telemetry
Infra/QA/RelMgmt/Reqs/Stable

Docs/i18n

If there are any other projects willing to share a slot together please let me 
know!

-Kendall Nelson (diablo_rojo)



On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley 
> wrote:

On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-16 Thread Clint Byrum
Excerpts from Dean Troyer's message of 2017-03-16 12:19:36 -0500:
> On Wed, Mar 15, 2017 at 5:28 PM, Taryma, Joanna  
> wrote:
> > I’m reaching out to you to ask if you’re aware of any other use cases that
> > could leverage such a solution. If there’s a need for it in other projects, it
> > may be a good idea to implement this in some sort of a common place.
> 
> Before implementing something new it would be a good exercise to have
> a look at the other existing ways to run VMs and containers already in
> the OpenStack ecosystem.  Service VMs are a thing, and projects like
> Octavia are built around running inside the existing infrastructure.
> There are a bunch of deployment projects that are also designed
> specifically to run services with minimal base requirements.

The console access bit Joanna mentioned is special in that it needs to be
able to reach things like IPMI controllers. So that's not going to really
be able to run on a service VM easily. It's totally doable (I think we
could have achieved it with VTEP switches and OVN when I was playing
with that), but I can understand why a container solution running on
the same host as the conductor might be more desirable than service VMs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Thanks! I will make note that you are willing to share.

On Thu, Mar 16, 2017 at 12:38 PM Hongbin Lu  wrote:

> Zun team could squeeze the session into 45 minutes and give the other 45
> minutes to another team if anyone is interested.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kendall Nelson [mailto:kennelso...@gmail.com]
> *Sent:* March-16-17 11:11 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
>
>
> *Subject:* Re: [openstack-dev] [ptls] Project On-Boarding Rooms
>
>
>
> Hello All!
>
> I am pleased to see how much interest there is in these onboarding rooms.
> As of right now I can accommodate all the official projects (sorry Cyborg)
> that have requested a room to make all the requests fit, I have combined
> docs and i18n and taken Thierry's suggestion to combine
> Infra/QA/RelMgmt/Regs/Stable.
>
> These are the projects that have requested a slot:
>
> Solum
>
> Tricircle
>
> Karbor
>
> Freezer
>
> Kuryr
>
> Mistral
>
> Dragonflow
>
> Cloudkitty
>
> Designate
>
> Trove
>
> Watcher
>
> Magnum
>
> Barbican
>
> Charms
>
> Tacker
>
> Zun
>
> Swift
>
> Watcher
>
> Kolla
>
> Horizon
>
> Keystone
>
> Nova
>
> Cinder
>
> Telemetry
> Infra/QA/RelMgmt/Regs/Stable
>
> Docs/i18n
>
> If there are any other projects willing to share a slot together please
> let me know!
>
> -Kendall Nelson (diablo_rojo)
>
>
>
> On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley  wrote:
>
> On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
> [...]
> > I think we could share a 90-min slot between a number of the supporting
> > teams:
> >
> > Infrastructure, QA, Release Management, Requirements, Stable maint
> >
> > Those teams are all under-staffed and wanting to grow new members, but
> > 90 min is both too long and too short for them. I feel like regrouping
> > them in a single slot and give each of those teams ~15 min to explain
> > what they do, their process and tooling, and a pointer to next steps /
> > mentors would be immensely useful.
>
> I can see this working okay for the Infra team. Pretty sure I can't
> come up with anything useful (to our team) we could get through in a
> 90-minute slot given our new contributor learning curve, so would
> feel bad wasting a full session. A "this is who we are and what we
> do, if you're interested in these sorts of things and want to find
> out more on getting involved go here, thank you for your time" over
> 10 minutes with an additional 5 for questions could at least be
> minimally valuable for us, on the other hand.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Sounds good Christophe! Thanks. I can move Telemetry and Cloudkitty to the
same slot and give Heat their own.

-Kendall

On Thu, Mar 16, 2017 at 12:35 PM Christophe Sauthier <
christophe.sauth...@objectif-libre.com> wrote:

> Hey Kendall
>
> Actually I would be really happy to share too.
>
> And it might makes sense that we share with telemetry since we have a
> strong relationship between the projets... So that you can give Heat a
> full slot...
>
> Cheers
>
>
>   Christophe
>
> 
> Christophe Sauthier   Mail :
> christophe.sauth...@objectif-libre.com
> CEO   Mob : +33 (0) 6 16 98 63 96
> <+33%206%2016%2098%2063%2096>
> Objectif LibreURL : www.objectif-libre.com
> Au service de votre Cloud Twitter : @objectiflibre
>
> Suivez les actualités OpenStack en français en vous abonnant à la Pause
> OpenStack
> http://olib.re/pause-openstack
>
> Le 2017-03-16 18:28, Kendall Nelson a écrit :
> > Sorry! I will add you to telemetry's slot since you are both okay to
> > share :)
> >
> > On Thu, Mar 16, 2017 at 10:58 AM Rico Lin 
> > wrote:
> >
> >> And we're ok to share the slot:)
> >>
> >> 2017-03-16 23:53 GMT+08:00 Rico Lin :
> >>
>  These are the projects that have requested a slot:
> 
>  Solum
> 
>  Tricircle
> 
>  Karbor
> 
>  Freezer
> 
>  Kuryr
> 
>  Mistral
> 
>  Dragonflow
> 
>  Cloudkitty
> 
>  Designate
> 
>  Trove
> 
>  Watcher
> 
>  Magnum
> 
>  Barbican
> 
>  Charms
> 
>  Tacker
> 
>  Zun
> 
>  Swift
> 
>  Watcher
> 
>  Kolla
> 
>  Horizon
> 
>  Keystone
> 
>  Nova
> 
>  Cinder
> 
>  Telemetry
>  Infra/QA/RelMgmt/Regs/Stable
> 
>  Docs/i18n
> >>>
> >>> Heat is missing:)
> >>
> >> --
> >>
> >> May The Force of OpenStack Be With You,
> >> Rico Lin
> >> irc: ricolin
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe [1]
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2]
> >
> >
> > Links:
> > --
> > [1]
> > http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] small wart in the resource class API

2017-03-16 Thread Jay Pipes

Yup, I'm cool with that, too.

Best,
-jay

On 03/16/2017 11:00 AM, Sylvain Bauza wrote:



Le 16/03/2017 14:12, Chris Dent a écrit :


(This message is a question asking "should we fix this?" and "if so,
I guess that needs spec, since it is a microversion change, but
would an update to the existing spec be good enough?")

We have a small wart in the API for creating and updating resources
classes [1] that only became clear while evaluating the API for
resource traits [2]. The interface for creating a resource class is
not particularly idempotent and as a result the code for doing so
from nova-compute [3] is not as simple as it could be.

It's all in the name _get_or_create_resource_class. There is at
least one but sometimes two HTTP requests: first a GET to
/resource_classes/{class} then a POST with a body to
/resource_classes.

If instead there was just a straight PUT to
/resource_classes/{class} with no body that returned success either
upon create or "yeah it's already there" then it would always be one
request and the above code could be simplified. This is how we've
ended up defining things for traits [2].
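
The two patterns being compared can be sketched roughly as follows. This is an
illustrative sketch only: `session` stands in for whatever HTTP client the
report client actually uses, the function names are hypothetical, and the
success codes assumed for the bodiless PUT follow the create-or-verify
semantics described for traits.

```python
# Illustrative sketch; "session" is any HTTP client bound to the
# placement endpoint, and these function names are hypothetical.

def get_or_create_resource_class(session, name):
    """Current pattern: up to two requests (GET, then POST on a miss)."""
    resp = session.get('/resource_classes/%s' % name)
    if resp.status_code == 200:
        return  # already exists, nothing to do
    resp = session.post('/resource_classes', json={'name': name})
    if resp.status_code not in (201, 409):  # 409: lost a create race
        resp.raise_for_status()


def ensure_resource_class(session, name):
    """Proposed pattern: one idempotent PUT with no body, which
    succeeds whether the class is newly created or already present."""
    resp = session.put('/resource_classes/%s' % name)
    if resp.status_code not in (201, 204):  # assumed create/no-op codes
        resp.raise_for_status()
```

Besides halving the request count in the common case, the PUT variant removes
the create race the GET+POST version has to special-case.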




We recently decided not to ship a specific client project for tricks
like that, and we preferred to have a better, well-documented REST API.

Given that consensus, I think I'm totally fine using the PUT verb
instead of GET+POST and just verify the HTTP return code.



Making this change would also allow us to address the fact that
right now the PUT to /resource_classes/{class} takes a body which is
the _new_ name with which to replace the name of the resource class
identified by {class}.  This is an operation I'm pretty sure we
don't want to do (commonly) as it means that anywhere that custom
resource class was used in an inventory it's now going to have this
new name (the relationship at the HTTP and outer layers is by name,
but at the database level by id, the PUT does a row update) but the
outside world is not savvy to this change.



Agreed as well.

-Sylvain


Thoughts?

[1]
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/custom-resource-classes.html#rest-api-impact

[2]
http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html#rest-api-impact

[3]
https://github.com/openstack/nova/blob/d02c0aa7ba0e37fb61d9fe2b683835f28f528623/nova/scheduler/client/report.py#L704




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-16 Thread Davanum Srinivas
Lance,

In the other thread, we have not been talking about having any kind of
security for the fernet keys. Isn't that a requirement, since if we
throw them in etcd they may be vulnerable?

Thanks,
Dims
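
For readers following along, the mechanism under discussion is keystone's
staged/primary/secondary fernet key layout, rotated by `keystone-manage
fernet_rotate`. A minimal sketch of that rotation, with key generation, file
permissions, and atomicity deliberately simplified:

```python
# Illustrative sketch of keystone's fernet key rotation: key "0" is the
# staged key (the next primary), the highest index is the primary key
# used for encryption, and the rest are secondaries kept for decryption.
import base64
import os


def rotate_keys(key_dir, max_active_keys=3):
    indexes = sorted(int(name) for name in os.listdir(key_dir))
    new_primary = max(indexes) + 1
    # Promote the staged key (0) to primary under the next index.
    os.rename(os.path.join(key_dir, '0'),
              os.path.join(key_dir, str(new_primary)))
    # Stage a fresh key as the new "0" (random bytes stand in for a
    # properly generated fernet key here).
    with open(os.path.join(key_dir, '0'), 'wb') as f:
        f.write(base64.urlsafe_b64encode(os.urandom(32)))
    # Purge the oldest secondaries once the directory exceeds the cap.
    remaining = sorted(int(name) for name in os.listdir(key_dir))
    while len(remaining) > max_active_keys:
        victim = remaining.pop(1)  # smallest index other than staged "0"
        os.remove(os.path.join(key_dir, str(victim)))
```

The coordination problem the thread is wrestling with is that every keystone
node needs a consistent view of this directory, so a rotation performed on one
node has to be distributed securely to all the others.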

On Thu, Mar 16, 2017 at 12:45 PM, Lance Bragstad  wrote:
> I think the success of this, or a revived fernet-backend spec, is going to
> have a hard requirement on the outcome of the configuration opts discussion
> [0]. When we attempted to introduce an abstraction for fernet keys
> previously, it led down a rabbit hole of duplicated work across
> implementations, which was part of the reason for dropping the spec.
>
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113941.html
>
> On Thu, Mar 16, 2017 at 10:12 AM, Emilien Macchi  wrote:
>>
>> On Tue, Mar 14, 2017 at 1:27 PM, Emilien Macchi 
>> wrote:
>> > Folks,
>> >
>> > I found useful to share a spec that I started to write this morning:
>> > https://review.openstack.org/445592
>> >
>> > The goal is to do Keystone Fernet keys rotations in a way that scales
>> > and is secure, by using the standard tools and not re-inventing the
>> > wheel.
>> > In other words: if you're working on Keystone or TripleO or any other
>> > deployment tool: please read the spec and give any feedback.
>> >
>> > We would like to find a solution that would work for all OpenStack
>> > deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc) but I sent the
>> > specs to tripleo project
>> > to get some feedback.
>> >
>> > If you already has THE solution that you think is the best one, then
>> > we would be very happy to learn from it in a comment directly in the
>> > spec.
>> >
>>
>> After 2 days of review from Keystone, TripleO, OSA (and probably some
>> groups I missed), it's pretty clear the problem is already being fixed
>> in different places in different ways and that's bad.
>> IMHO we should engage some work to fix it in Keystone and investigate
>> again a storage backend for Keystone tokens.
>>
>> The Keystone specs that started this investigation was removed for Pike:
>> https://review.openstack.org/#/c/439194/
>>
>> I see 2 options here:
>>
>> - we keep duplicating efforts and let deployers implement their own
>> solutions.
>>
>> - we work with Keystone team to re-enable the spec and move forward to
>> solve the problem in Keystone itself, therefore for all deployments
>> tools in OpenStack (my favorite option).
>>
>>
>> I would like to hear from Keystone folks what are the main blockers
>> for option #2 and if this is only a human resource issue or if there
>> is some technical points we need to solve (in that case, it could be
>> done in the specs).
>>
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Morgan Fainberg
On Mar 16, 2017 07:28, "Jeremy Stanley"  wrote:

On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
[...]
> These security-related corner cases have always come up in the past when
> we've talked about implementing reseller. Another good example that I
> struggle with is what happens when you flip the reseller bit for a project
> admin who goes off and creates their own entities but then wants support?
> What does the support model look like for the project admin that needs
help
> in a way that maintains data integrity?

It's still entirely unclear to me how giving someone the ability to
hide resources you've delegated them access to create in any way
enables "reseller" use cases. I can understand the global admins
wanting to have optional views where they don't see all the resold
hierarchy (for the sake of their own sanity), but why would a
down-tree admin have any expectation they could reasonably hide
resources they create from those who maintain the overall system?

In other multi-tenant software I've used where reseller
functionality is present, top-level admins have some means of
examining delegated resources and usually even of impersonating
their down-tree owners for improved supportability.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hiding projects is a lot like implementing Mandatory Access Control within
OpenStack. I would like to go on record and say we should squash the hidden
projects concept (within a single hierarchy). If we want to implement MAC
(the SELinux equivalent) in OpenStack, we have a much, much bigger scope to
cover than just Keystone, and this feels outside the scope of any
hierarchical multi-tenancy work that has been done/will be done.

TL;DR: let's not try to hide projects from users with rights in the same
(peer, or above) hierarchy.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Hongbin Lu
Zun team could squeeze the session into 45 minutes and give the other 45 
minutes to another team if anyone is interested.

Best regards,
Hongbin

From: Kendall Nelson [mailto:kennelso...@gmail.com]
Sent: March-16-17 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!
I am pleased to see how much interest there is in these onboarding rooms. As of 
right now I can accommodate all the official projects (sorry, Cyborg) that have 
requested a room. To make all the requests fit, I have combined Docs and i18n 
and taken Thierry's suggestion to combine Infra/QA/RelMgmt/Reqs/Stable.
These are the projects that have requested a slot:
Solum
Tricircle
Karbor
Freezer
Kuryr
Mistral
Dragonflow
Cloudkitty
Designate
Trove
Watcher
Magnum
Barbican
Charms
Tacker
Zun
Swift
Watcher
Kolla
Horizon
Keystone
Nova
Cinder
Telemetry
Infra/QA/RelMgmt/Reqs/Stable
Docs/i18n
If there are any other projects willing to share a slot together please let me 
know!
-Kendall Nelson (diablo_rojo)

On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley 
> wrote:
On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing duonghq for core

2017-03-16 Thread Dave Walker
+1, some great contributions.  Looking forward to having Duong on the team.

--
Kind Regards,
Dave Walker

On 15 March 2017 at 19:52, Vikram Hosakote (vhosakot) 
wrote:

> +1  Great job Duong!
>
>
>
> Regards,
>
> Vikram Hosakote
>
> IRC:  vhosakot
>
>
>
> *From: *Michał Jastrzębski 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, March 08, 2017 at 11:21 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [kolla] Proposing duonghq for core
>
>
>
> Hello,
>
>
>
> I'd like to start voting to include Duong (duonghq) in Kolla and
>
> Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
>
> 21st of March).
>
>
>
> Consider this my +1 vote.
>
>
>
> Cheers,
>
> Michal
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
>
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Yes I can reserve a lunch slot for Glance :)

-Kendall

On Thu, Mar 16, 2017 at 12:01 PM Brian Rosmaita 
wrote:

> On 3/16/17 11:18 AM, Kendall Nelson wrote:
> > Hello Emilien,
> >
> > So, we have our slots basically filled, BUT you can ask one of the
> projects
> > that did get a slot if you can be refugees in their room and share. Or
> the
> > rooms will be empty over lunch and we can make sure to get you time there
> > and if your time is willing to do the grab and go lunches, you can use
> the
> > lunch time. Let me know what you would prefer.
>
> I'm not sure what the Glance staffing situation will be, but if you
> could pencil in Glance for one of the rooms during lunchtime, that would
> be great.
>
> thanks,
> brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Sorry! I will add you to telemetry's slot since you are both okay to share
:)

On Thu, Mar 16, 2017 at 10:58 AM Rico Lin  wrote:

> And we're ok to share the slot:)
>
> 2017-03-16 23:53 GMT+08:00 Rico Lin :
>
>
> These are the projects that have requested a slot:
>
> Solum
> Tricircle
> Karbor
> Freezer
> Kuryr
> Mistral
> Dragonflow
> Cloudkitty
> Designate
> Trove
> Watcher
> Magnum
> Barbican
> Charms
> Tacker
> Zun
> Swift
> Watcher
> Kolla
> Horizon
> Keystone
> Nova
> Cinder
> Telemetry
> Infra/QA/RelMgmt/Regs/Stable
> Docs/i18n
>
>
> Heat is missing:)
>
>
>
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*irc: ricolin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-16 Thread Dean Troyer
On Wed, Mar 15, 2017 at 5:28 PM, Taryma, Joanna  wrote:
> I’m reaching out to you to ask if you’re aware of any other use cases that
> could leverage such a solution. If there’s a need for it in other projects, it
> may be a good idea to implement this in some sort of a common place.

Before implementing something new it would be a good exercise to have
a look at the other existing ways to run VMs and containers already in
the OpenStack ecosystem.  Service VMs are a thing, and projects like
Octavia are built around running inside the existing infrastructure.
There are a bunch of deployment projects that are also designed
specifically to run services with minimal base requirements.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Clark Boylan
On Thu, Mar 16, 2017, at 09:46 AM, Steven Hardy wrote:
> On Thu, Mar 16, 2017 at 10:30:48AM -0500, Gregory Haynes wrote:
> > On Thu, Mar 16, 2017, at 05:18 AM, Steven Hardy wrote:
> > > On Wed, Mar 15, 2017 at 04:22:37PM -0500, Ben Nemec wrote:
> > > > While looking through the dib v2 changes after the feature branch was 
> > > > merged
> > > > to master, I noticed this commit[1], which bring dib-run-parts back 
> > > > into dib
> > > > itself.  Unfortunately I missed the original proposal to do this, but I 
> > > > have
> > > > some concerns about the impact of this change.
> > > > 
> > > > Originally the split was done so that dib-run-parts and one of the
> > > > os-*-config projects (looks like os-refresh-config) that depends on it 
> > > > could
> > > > be included in a stock distro cloud image without pulling in all of dib.
> > > > Note that it is still present in the requirements of orc: 
> > > > https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> > > > 
> > > > Disk space in a distro cloud image is at a premium, so pulling in a 
> > > > project
> > > > like diskimage-builder to get one script out of it was not acceptable, 
> > > > at
> > > > least from what I was told at the time.
> > > > 
> > > > I believe this was done so a distro cloud image could be used with Heat 
> > > > out
> > > > of the box, hence the heat tag on this message.  I don't know exactly 
> > > > what
> > > > happened after we split out dib-utils, so I'm hoping someone can confirm
> > > > whether this requirement still exists.  I think Steve was the one who 
> > > > made
> > > > the original request.  There were a lot of Steves working on Heat at the
> > > > time though, so it's possible I'm wrong. ;-)
> > > 
> > > I don't think I'm the Steve you're referring to, but I do have some
> > > additional info as a result of investigating this bug:
> > > 
> > > https://bugs.launchpad.net/tripleo/+bug/1673144
> > > 
> > > It appears we have three different versions of dib-run-parts on the
> > > undercloud (and, presumably overcloud nodes) at the moment, which is a
> > > pretty major headache from a maintenance/debugging perspective.
> > > 
> > 
> > I looked at the bug and I think there may only be two different
> > versions? The versions in /bin and /usr/bin seem to come from the same
> > package (so I hope they are the same version). I don't understand what
> > is going on with the ./lib version but that seems like either a local
> > package / checkout or something else non-dib related.
> > 
> > Two versions is certainly less than ideal, though :).
> 
> No I think there are four versions, three unique:
> 
> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch
> (undercloud) [stack@undercloud ~]$ rpm -qf /bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch
> (undercloud) [stack@undercloud ~]$ rpm -qf
> /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts
> diskimage-builder-2.0.1-0.20170314023517.756923c.el7.centos.noarch
> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/local/bin/dib-run-parts
> file /usr/local/bin/dib-run-parts is not owned by any package
> 
> /usr/bin/dib-run-parts and /bin/dib-run-parts are the same file, owned by
> dib-utils
> 
> /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts is
> owned by diskimage-builder
> 
> /usr/local/bin/dib-run-parts is the mystery file presumed from image
> building
> 
> But the exciting thing from a rolling-out-bugfixes perspective is that
> the
> one actually running via o-r-c isn't either of the packaged versions
> (doh!)
> so we probably need to track down which element is installing it.
> 
> This is a little OT for this thread (sorry), but hopefully provides more
> context around my concerns about creating another fork etc.
> 
> > > However we resolve this, *please* can we avoid permanently forking the
> > > tool, as e.g in that bug, where do I send the patch to fix leaking
> > > profiledir directories?  What package needs an update?  What is
> > > installing
> > > the script being run that's not owned by any package?
> > > 
> > > Yes, I know the answer to some of those questions, but I'm trying to
> > > point
> > > out duplicating this script and shipping it from multiple repos/packages
> > > is
> > > pretty horrible from a maintenance perspective, especially for new or
> > > casual contributors.
> > > 
> > 
> > I agree. You answered my previous question of whether os-refresh-config
> > is still in use (sounds like it definitely is) so this complicates
> > things a bit.
> > 
> > > If we have to fork it, I'd suggest we should rename the script to avoid
> > > the
> > > confusion I outline in the bug above, e.g one script -> one repo -> one
> > > package?
> > 
> > I really like this idea of renaming the script in dib which should
> > clarify the source of each script and prevent conflicts, but this still
> > leaves the fork-related issues. If we go the route of just keeping the
> 

Re: [openstack-dev] [codesearch] how to exclude projects?

2017-03-16 Thread Ihar Hrachyshka
Thanks a lot, Diana; the info is exactly what I was looking for.

I proposed the patch to ignore those repos in:
https://review.openstack.org/446634

Ihar
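
[Editor's note] The change in jeepyb amounts to filtering the project list by pattern before Hound's config.json is written. A minimal stdlib-only sketch of that filtering step; the function name and the exact config shape are illustrative, not jeepyb's actual code, and only the openstack/deb-* exclusion criterion comes from the thread:

```python
import fnmatch

def build_hound_config(project_names, exclude_patterns=("openstack/deb-*",)):
    """Build a Hound-style config dict, skipping excluded repos."""
    repos = {}
    for name in project_names:
        # Skip anything matching an exclusion pattern,
        # e.g. the Debian packaging mirrors.
        if any(fnmatch.fnmatch(name, pat) for pat in exclude_patterns):
            continue
        repos[name] = {"url": "https://git.openstack.org/%s" % name}
    return {"max-concurrent-indexers": 2, "repos": repos}

config = build_hound_config(
    ["openstack/neutron", "openstack/deb-neutron", "openstack/nova"])
print(sorted(config["repos"]))
```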

On Wed, Mar 15, 2017 at 5:30 PM, Diana Clarke
 wrote:
> Hi Ihar:
>
> The OpenStack Hound instance [1] is passed config.json with the
> projects to index.
>
> That file is generated by jeepyb here [2] based on the projects
> defined in projects.yaml [3].
>
> Here's an example config.json [4] I manually generated a couple of
> weeks back when I was looking into why some tripleo projects weren't
> being indexed (turns out it was just stale because puppet was
> disabled).
>
> IIUC, puppet is still disabled for codesearch, so you'll need to ping
> infra once you've modified jeepyb to exclude the openstack/deb-*
> repos.
>
> pabelanger in #openstack-infra kindly did a manual puppet run for me
> when I recently wanted config.json refreshed, so he'll know what to do
> when you're ready.
>
> Finally, there's also this entry in the infra system-config docs [5]
> that points to the bug tracker etc.
>
> Hope that helps!
>
> Cheers,
>
> --diana
>
> [1] http://codesearch.openstack.org/
> [2] 
> https://github.com/openstack-infra/jeepyb/blob/master/jeepyb/cmd/create_hound_config.py
> [3] 
> https://github.com/openstack-infra/project-config/blob/master/gerrit/projects.yaml
> [4] https://gist.github.com/dianaclarke/1533448ed33232f5c1c348ab57cb884e
> [5] https://docs.openstack.org/infra/system-config/codesearch.html
>
> On Wed, Mar 15, 2017 at 5:06 PM, Ihar Hrachyshka  wrote:
>> Hi all,
>>
>> lately I noticed that any search in codesearch triggers duplicate
>> matches because it seems code for lots of projects is stored in
>> openstack/deb- repos, probably used for debian
>> packaging. I would like to be able to exclude the projects from the
>> search by openstack/deb-* pattern. Is it possible?
>>
>> Ideally, we would exclude them by default, but I couldn't find where I
>> patch codesearch to do it. Is there a repo for the webui that I can
>> chew?
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Thierry Carrez
Sean Dague wrote:
> On 03/16/2017 05:31 AM, Thierry Carrez wrote:
>> Kendall Nelson wrote:
>>> We have very limited slots available for interested projects, so it
>>> will be a first come first served process. Let me know if you are
>>> interested and I will reserve a slot for you if there are spots left.
>>
>> I think we could share a 90-min slot between a number of the supporting
>> teams:
>>
>> Infrastructure, QA, Release Management, Requirements, Stable maint
>>
>> Those teams are all under-staffed and wanting to grow new members, but
>> 90 min is both too long and too short for them. I feel like regrouping
>> them in a single slot and give each of those teams ~15 min to explain
>> what they do, their process and tooling, and a pointer to next steps /
>> mentors would be immensely useful.
> 
> If we are going to call these onboarding rooms, I think it's important
> that the teams doing these get a full 90 minutes and actually dive deep
> into something concrete that people could contribute to.
> 
> Splitting a 90 minute slot over 5 teams to have each give a lightning
> talk that they exist isn't really onboarding. And doesn't really take
> advantage of the format of having people there in the room.

Sure... the trick is we don't have so many rooms (and don't want to run
too many in parallel and have them compete with each other that much).
And on the other hand, fungi was not having any Infra on-boarding
because 90min was too small to do any serious on-boarding anyway.

So rather than not have any presence from cross-project teams at all
(and then complain that nobody joins cross-project teams), I felt like
having a "strategic contributions" catch-all session would be better
than nothing. Let each team present what their day-to-day work looks
like, and give 'next' pointers to people interested. I expect some teams
in that catch-all to skip the session anyway, so it will probably be
more like 20/30min per team -- not really a "lightning talk".

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all][deployment] Deployment Working Group (DWG)

2017-03-16 Thread Jeremy Stanley
On 2017-03-16 11:42:25 -0500 (-0500), Devdatta Kulkarni wrote:
[...]
> Related to the work that would be produced by the group, what is
> the thought around where would deployment artifacts live? Within
> each individual project's repository? As part of a single
> deployment project?
[...]

The proposal is to identify a higher-level working group with
participants from the current (and future) multitude of different
development project teams to focus on ways in which they can
standardize and cooperate with one another, not to combine them into
one single new team replacing the current set of independent teams.
The latter would just be going back to what we had in the days
before the project structure reform of 2014 (a.k.a. "big tent").
-- 
Jeremy Stanley



[openstack-dev] [all][api] POST /api-wg/news

2017-03-16 Thread Ed Leafe
Greetings OpenStack community,

Today's meeting was short and sweet. We lamented the decline in active 
membership in our group, and discussed ways to encourage more participation 
from others. So if you have an interest in helping OpenStack APIs become better 
and more consistent, please join us! We don't bite (except for cdent, 
sometimes).

We discussed the pagination spec [4] that was recently abandoned by its author. 
Seeing as it was very close to agreement, edleafe agreed to take over the spec 
and address the remaining questions.

The API Stability guideline was the next topic. One thing that was noted was 
that several people disagree with some of the wording, but are reluctant to 
speak out publicly because of potential loss of social capital. In other words, 
they feel that some of the big names of OpenStack are on one side of the 
discussion, so they feel intimidated about arguing for the other side. That's a 
real shame, as I know that no one involved would hold it against anyone for 
presenting a reasoned argument. Please, if you feel that part of the guideline 
is not reasonable to you, say so. No one will think less of you.

Lastly, we wished cdent a speedy recovery from whatever bug has taken over his 
body.

# Newly Published Guidelines

Nothing new this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community. Nothing at 
the moment, but feel free to get in early on the reviews below.

# Guidelines Currently Under Review [3]

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

* Refactor and re-validate api change guidelines
  https://review.openstack.org/#/c/421846/

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your 
concerns in an email to the OpenStack developer mailing list[1] with the tag 
"[api]" in the subject. In your email, you should include any relevant reviews, 
links, and comments to help guide the discussion of the specific challenge you 
are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://review.openstack.org/#/c/390973/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe









Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Brian Rosmaita
On 3/16/17 11:18 AM, Kendall Nelson wrote:
> Hello Emilien,
> 
> So, we have our slots basically filled, BUT you can ask one of the projects
> that did get a slot if you can be refugees in their room and share. Or the
> rooms will be empty over lunch and we can make sure to get you time there,
> and if your team is willing to do the grab-and-go lunches, you can use the
> lunch time. Let me know what you would prefer.

I'm not sure what the Glance staffing situation will be, but if you
could pencil in Glance for one of the rooms during lunchtime, that would
be great.

thanks,
brian




Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Steven Hardy
On Thu, Mar 16, 2017 at 10:30:48AM -0500, Gregory Haynes wrote:
> On Thu, Mar 16, 2017, at 05:18 AM, Steven Hardy wrote:
> > On Wed, Mar 15, 2017 at 04:22:37PM -0500, Ben Nemec wrote:
> > > While looking through the dib v2 changes after the feature branch was 
> > > merged
> > > to master, I noticed this commit[1], which brings dib-run-parts back into 
> > > dib
> > > itself.  Unfortunately I missed the original proposal to do this, but I 
> > > have
> > > some concerns about the impact of this change.
> > > 
> > > Originally the split was done so that dib-run-parts and one of the
> > > os-*-config projects (looks like os-refresh-config) that depends on it 
> > > could
> > > be included in a stock distro cloud image without pulling in all of dib.
> > > Note that it is still present in the requirements of orc: 
> > > https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> > > 
> > > Disk space in a distro cloud image is at a premium, so pulling in a 
> > > project
> > > like diskimage-builder to get one script out of it was not acceptable, at
> > > least from what I was told at the time.
> > > 
> > > I believe this was done so a distro cloud image could be used with Heat 
> > > out
> > > of the box, hence the heat tag on this message.  I don't know exactly what
> > > happened after we split out dib-utils, so I'm hoping someone can confirm
> > > whether this requirement still exists.  I think Steve was the one who made
> > > the original request.  There were a lot of Steves working on Heat at the
> > > time though, so it's possible I'm wrong. ;-)
> > 
> > I don't think I'm the Steve you're referring to, but I do have some
> > additional info as a result of investigating this bug:
> > 
> > https://bugs.launchpad.net/tripleo/+bug/1673144
> > 
> > It appears we have three different versions of dib-run-parts on the
> > undercloud (and, presumably overcloud nodes) at the moment, which is a
> > pretty major headache from a maintenance/debugging perspective.
> > 
> 
> I looked at the bug and I think there may only be two different
> versions? The versions in /bin and /usr/bin seem to come from the same
> package (so I hope they are the same version). I don't understand what
> is going on with the ./lib version but that seems like either a local
> package / checkout or something else non-dib related.
> 
> Two versions is certainly less than ideal, though :).

No I think there are four versions, three unique:

(undercloud) [stack@undercloud ~]$ rpm -qf /usr/bin/dib-run-parts
dib-utils-0.0.11-1.el7.noarch
(undercloud) [stack@undercloud ~]$ rpm -qf /bin/dib-run-parts
dib-utils-0.0.11-1.el7.noarch
(undercloud) [stack@undercloud ~]$ rpm -qf 
/usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts
diskimage-builder-2.0.1-0.20170314023517.756923c.el7.centos.noarch
(undercloud) [stack@undercloud ~]$ rpm -qf /usr/local/bin/dib-run-parts
file /usr/local/bin/dib-run-parts is not owned by any package

/usr/bin/dib-run-parts and /bin/dib-run-parts are the same file, owned by
dib-utils

/usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts is
owned by diskimage-builder

/usr/local/bin/dib-run-parts is the mystery file presumed from image
building

But the exciting thing from a rolling-out-bugfixes perspective is that the
one actually running via o-r-c isn't either of the packaged versions (doh!)
so we probably need to track down which element is installing it.
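
[Editor's note] One low-tech way to see which of those copies actually diverge (and therefore which one carries the unpackaged fork) is to checksum them. A hedged sketch; the path list below simply mirrors the rpm -qf session above:

```python
import hashlib
import os

def checksum_copies(paths):
    """Map each path to the SHA-256 of its contents (None if missing),
    so identical copies group together and the odd one out stands out."""
    digests = {}
    for path in paths:
        if not os.path.exists(path):
            digests[path] = None  # not present on this host
            continue
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

# Paths taken from the rpm -qf session above.
for path, digest in checksum_copies([
        "/usr/bin/dib-run-parts",
        "/bin/dib-run-parts",
        "/usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts",
        "/usr/local/bin/dib-run-parts"]).items():
    print(path, digest)
```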

This is a little OT for this thread (sorry), but hopefully provides more
context around my concerns about creating another fork etc.

> > However we resolve this, *please* can we avoid permanently forking the
> > tool, as e.g in that bug, where do I send the patch to fix leaking
> > profiledir directories?  What package needs an update?  What is
> > installing
> > the script being run that's not owned by any package?
> > 
> > Yes, I know the answer to some of those questions, but I'm trying to
> > point
> > out duplicating this script and shipping it from multiple repos/packages
> > is
> > pretty horrible from a maintenance perspective, especially for new or
> > casual contributors.
> > 
> 
> I agree. You answered my previous question of whether os-refresh-config
> is still in use (sounds like it definitely is) so this complicates
> things a bit.
> 
> > If we have to fork it, I'd suggest we should rename the script to avoid
> > the
> > confusion I outline in the bug above, e.g one script -> one repo -> one
> > package?
> 
> I really like this idea of renaming the script in dib which should
> clarify the source of each script and prevent conflicts, but this still
> leaves the fork-related issues. If we go the route of just keeping the
> current state (of there being a fork) I think we should do the rename.
> 
> The issue I spoke of (complications with depending on dib-utils when
> installing dib in a venv) I think came from a combination of this
> dependency and not requiring a package install (you used to be able to
> 

Re: [openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-16 Thread Lance Bragstad
I think the success of this, or a revived fernet-backend spec, is going to
have a hard requirement on the outcome of the configuration opts discussion
[0]. When we attempted to introduce an abstraction for fernet keys
previously, it led down a rabbit hole of duplicated work across
implementations, which was part of the reason for dropping the spec.


[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113941.html
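
[Editor's note] For readers less familiar with the mechanics being abstracted: a fernet key repository is a set of numbered keys where index 0 is the staged key and the highest index is the primary; `keystone-manage fernet_rotate` promotes the staged key to primary and stages a fresh one. A stdlib-only sketch of that promote-and-stage step, modelling the repository as a dict; this is illustrative only, and the real keystone implementation also purges old keys beyond max_active_keys:

```python
import base64
import os

def new_fernet_key():
    # A fernet key is 32 random bytes, urlsafe-base64 encoded.
    return base64.urlsafe_b64encode(os.urandom(32))

def rotate(repo):
    """Promote the staged key (index 0) to primary and stage a new key.

    repo maps int index -> key bytes; the highest index is the primary,
    0 is staged, and the rest are secondaries used only for validation.
    """
    new_primary_index = max(repo) + 1
    repo[new_primary_index] = repo.pop(0)   # staged key becomes the primary
    repo[0] = new_fernet_key()              # stage a fresh key for next time
    return repo

repo = {0: new_fernet_key(), 1: new_fernet_key()}  # freshly set-up repository
rotate(repo)
print(sorted(repo))
```

Distributing the resulting key repository consistently to every node is the part each deployment tool currently reinvents, which is what the spec is trying to standardize.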

On Thu, Mar 16, 2017 at 10:12 AM, Emilien Macchi  wrote:

> On Tue, Mar 14, 2017 at 1:27 PM, Emilien Macchi 
> wrote:
> > Folks,
> >
> > I found useful to share a spec that I started to write this morning:
> > https://review.openstack.org/445592
> >
> > The goal is to do Keystone Fernet keys rotations in a way that scales
> > and is secure, by using the standard tools and not re-inventing the
> > wheel.
> > In other words: if you're working on Keystone or TripleO or any other
> > deployment tool: please read the spec and give any feedback.
> >
> > We would like to find a solution that would work for all OpenStack
> > deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc) but I sent the
> > specs to tripleo project
> > to get some feedback.
> >
> > If you already have THE solution that you think is the best one, then
> > we would be very happy to learn from it in a comment directly in the
> > spec.
> >
>
> After 2 days of review from Keystone, TripleO, OSA (and probably some
> groups I missed), it's pretty clear the problem is already being fixed
> in different places in different ways and that's bad.
> IMHO we should engage some work to fix it in Keystone and investigate
> again a storage backend for Keystone tokens.
>
> The Keystone specs that started this investigation was removed for Pike:
> https://review.openstack.org/#/c/439194/
>
> I see 2 options here:
>
> - we keep duplicating efforts and let deployers implement their own
> solutions.
>
> - we work with Keystone team to re-enable the spec and move forward to
> solve the problem in Keystone itself, therefore for all deployments
> tools in OpenStack (my favorite option).
>
>
> I would like to hear from Keystone folks what are the main blockers
> for option #2 and if this is only a human resource issue or if there
> is some technical points we need to solve (in that case, it could be
> done in the specs).
>
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-16 Thread Davanum Srinivas
+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of Castellan.

Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
 wrote:
> This thread has generated quite the discussion, so I will try to
> address a few points in this email, echoing a lot of what Dave said.
>
> Clint originally explained what we are trying to solve very well. The hope was
> that the rename would emphasize that Castellan is just a basic
> interface that supports operations common between key managers
> (the existing Barbican back end and other back ends that may exist
> in the future), much like oslo.db supports the common operations
> between PostgreSQL and MySQL. The thought was that renaming to have
> oslo part of the name would help reinforce that it's just an interface,
> rather than a standalone key manager. Right now, the only Castellan
> back end that would work in DevStack is Barbican. There has been talk
> in the past for creating other Castellan back ends (Vault or Tang), but
> no one has committed to writing the code for those yet.
>
> The intended proposal was to rename the project, maintain the current
> review team (which is only a handful of Barbican people), and bring on
> a few Oslo folks, if any were available and interested, to give advice
> about (and +2s for) OpenStack library best practices. However, perhaps
> pulling it under oslo's umbrella without a rename is blessing it enough.
>
> In response to Julien's proposal to make Castellan "the way you can do
> key management in Python" -- it would be great if Castellan were that
> abstract, but in practice it is pretty OpenStack-specific. Currently,
> the Barbican team is great at working on key management projects
> (including both Barbican and Castellan), but a lot of our focus now is
> how we can maintain and grow integration with the rest of the OpenStack
> projects, for which having the name and expertise of oslo would be a
> great help.
>
> Thanks,
>
> Kaitlin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all][deployment] Deployment Working Group (DWG)

2017-03-16 Thread Devdatta Kulkarni
This is a great initiative.

Coming from Solum - a project that is on the fringes with regard to user
adoption [1][2], I feel that one of the things that
can help in increasing the adoption is deployment tooling available for
operators. If there is a standardized way to introduce
a project in their OpenStack setups, it is possible that operators would
try it.

Related to the work that would be produced by the group, what is the
thinking on where deployment artifacts would live?
Within each individual project's repository? As part of a single deployment
project?

Also, are there any documents with information about constructing
deployment artifacts that we can refer to currently?

Regards,
Devdatta

[1] https://www.dropbox.com/s/jzkuimcc12w3iju/solum-interest-prod.png?dl=0

[2]
https://www.dropbox.com/s/nrlov1w4hn3cv6u/solum-interest-testing.png?dl=0



On Tue, Mar 14, 2017 at 3:31 PM, Emilien Macchi  wrote:

> OpenStack community has been a welcoming place for Deployment tools.
> They bring great value to OpenStack because most of them are used to
> deploy OpenStack in production, versus a development environment.
>
> Over the last few years, deployment tool projects have been trying
> to solve similar challenges. Recently we've seen some desire to
> collaborate, work on common topics and resolve issues seen by all
> these tools.
>
> Some examples of collaboration:
>
> * OpenStack Ansible and Puppet OpenStack have been collaborating on
>   Continuous Integration scenarios but also on Nova upgrades orchestration.
> * TripleO and Kolla share the same tool for container builds.
> * TripleO and Fuel share the same Puppet OpenStack modules.
> * OpenStack and Kubernetes are interested in collaborating on configuration
>   management.
> * Most of the tools want to collect OpenStack parameters for configuration
>   management in a common fashion.
> * [more]
>
> The big tent helped to make these projects part of OpenStack, but no
> official
> group was created to share common problems across tools until now.
>
> During the Atlanta Project Team Gathering in 2017 [1], most of the currently
> active deployment tool project team leaders met in a room and decided to
> start actual collaboration on different topics.
> This resolution is a first iteration of creating a new working group
> for Deployment Tools.
>
> The mission of the Deployment Working Group would be the following::
>
>   To collaborate on best practices for deploying and configuring OpenStack
>   in production environments.
>
>
> Note: in some cases, some challenges will be solved by adopting a
> technology
> or a tool. But sometimes, it won't happen because of the deployment tool
> background. This group would have to figure out how we can increase
> this adoption and converge to the same technologies eventually.
>
>
> For now, we'll use the wiki to document how we work together:
> https://wiki.openstack.org/wiki/Deployment
>
> The etherpad presented in [1] might be transformed in a Wiki page if
> needed but for now we expect people to update it.
>
> [1] https://etherpad.openstack.org/p/deployment-pike
>
>
> Let's make OpenStack deployments better and together ;-)
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tripleo] Sample Roles

2017-03-16 Thread Alex Schultz
On Thu, Mar 16, 2017 at 12:13 AM, Saravanan KR  wrote:
> Thanks Alex. This is really an important requirement. Today, product
> documentation has such roles incorporated into it, which is not good.
> This patch simplifies it.
>
> A suggestion, though: instead of an external generation tool, is it not
> possible to include the individual role yaml files directly in a parent
> yaml file? Why add an extra step? Maybe we can provide all the options in
> the list and mark each as enabled with 0 or 1. Much simpler to use.
>

I think this goes into the long term improvement of roles. Right now
this effort is mainly to capture and centralize an existing set of
pre-canned roles. The problem with the roles as they are currently
constructed is that we need to be able to expose a way to add
requirements or dependencies. Especially as we offer pre-canned roles
to move specific services to their own nodes that used to fall under
the 'Controller' role.  I see that being a deficiency in how we're
capturing the OS::TripleO::Service::* items within what we're calling
a 'role'.  At the moment this is being captured in these files as
documentation to indicate if you want to use the
'ControllerOpenstack', you must also use 'Networker', 'Messaging', and
'Database' roles. I'm still working through the best way to organize
the roles/services in ways where these dependencies could be raised.
It might include adding some metadata into the yaml files as far as
what this role exposes so we could do some checks to ensure we've got
all our basics covered like databases/network/messaging/clustering.
That effort is outside of this specific blueprint, but something I
agree we need.  There are some yaml tricks we could do to improve this
(I'm not sure heat supports them), but first I would like to just
collect all the role definitions in a single place.
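
[Editor's note] To make the proposal concrete: if each file under the roles/ directory holds exactly one role as a single-element YAML list, then the "external generation tool" being discussed is little more than ordered concatenation of the chosen files. A stdlib-only sketch under that assumption; file names and layout here are illustrative, not the actual tripleo-heat-templates tooling:

```python
import os

def generate_roles_data(roles_dir, role_names):
    """Concatenate the selected per-role YAML files into one
    roles_data.yaml-style document. Each input file is assumed to be
    a single-element YAML list, so joining them stays a valid list."""
    chunks = []
    for name in role_names:
        path = os.path.join(roles_dir, "%s.yaml" % name)
        with open(path) as f:
            chunks.append(f.read().rstrip() + "\n")
    return "".join(chunks)

# e.g. generate_roles_data("roles", ["ControllerOpenstack", "Networker",
#                                    "Messaging", "Database", "Compute"])
```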

> This external tool could be a Mistral action to derive the final roles
> list internally.
>

I think this is a good long term vision, but is currently out of scope
for this effort in Pike. I would like to see improvements in how we
are exposing these services/roles to the end user. I agree that
rolling this into a workflow would be a good idea.

Thanks,
-Alex

> Regards,
> Saravanan KR
>
> On Thu, Mar 16, 2017 at 3:37 AM, Emilien Macchi  wrote:
>> On Wed, Mar 15, 2017 at 5:28 PM, Alex Schultz  wrote:
>>> Ahoy folks,
>>>
>>> For the Pike cycle, we have a blueprint[0] to provide a few basic
>>> environment configurations with some custom roles.  For this effort
>>> and to reduce the complexity when dealing with roles I have put
>>> together a patch to try and organize roles in a more consumable
>>> fashion[1].  The goal behind this is that we can document the standard
>>> role configurations and also be able to ensure that when we add a new
>>> OS::TripleO::Service::* we can make sure they get applied to all of
>>> the appropriate roles.  The goal of this initial change is to also
>>> allow us all to reuse the same roles and work from a single
>>> configuration repository.  Please also review the existing roles in
>>> the review and make sure we're not missing any services.
>>
>> Sounds super cool!
>>
>>> Also my ask is that if you have any standard roles, please consider
>>> publishing them to the new roles folder[1] so we can also identify
>>> future CI testing scenarios we would like to support.
>>
>> Can we document it here maybe?
>> https://docs.openstack.org/developer/tripleo-docs/developer/tht_walkthrough/tht_walkthrough.html
>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] 
>>> https://blueprints.launchpad.net/tripleo/+spec/example-custom-role-environments
>>> [1] https://review.openstack.org/#/c/445687/
>>
>> Thanks,
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-16 Thread Farr, Kaitlin M.
This thread has generated quite a bit of discussion, so I will try to
address a few points in this email, echoing a lot of what Dave said.

Clint originally explained what we are trying to solve very well. The hope was
that the rename would emphasize that Castellan is just a basic
interface that supports operations common between key managers
(the existing Barbican back end and other back ends that may exist
in the future), much like oslo.db supports the common operations
between PostgreSQL and MySQL. The thought was that renaming to have
oslo part of the name would help reinforce that it's just an interface,
rather than a standalone key manager. Right now, the only Castellan
back end that would work in DevStack is Barbican. There has been talk
in the past for creating other Castellan back ends (Vault or Tang), but
no one has committed to writing the code for those yet.
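The "thin interface over pluggable back ends" idea described above can be pictured with a small sketch. This is illustrative only, not Castellan's actual API:

```python
# Illustrative sketch of a common key-manager interface with pluggable
# back ends. Names here are hypothetical, not Castellan's real classes.
import abc

class KeyManager(abc.ABC):
    """Minimal common interface that concrete back ends implement."""

    @abc.abstractmethod
    def store(self, context, secret):
        """Store a secret, returning an opaque identifier."""

    @abc.abstractmethod
    def get(self, context, secret_id):
        """Retrieve a previously stored secret."""

class InMemoryKeyManager(KeyManager):
    """A toy back end; a real one would call e.g. Barbican or Vault."""

    def __init__(self):
        self._secrets = {}
        self._counter = 0

    def store(self, context, secret):
        self._counter += 1
        secret_id = str(self._counter)
        self._secrets[secret_id] = secret
        return secret_id

    def get(self, context, secret_id):
        return self._secrets[secret_id]
```

Consuming projects code against the abstract interface, so swapping the back end requires no changes on their side, which is the property being compared to oslo.db above.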

The intended proposal was to rename the project, maintain the current
review team (which is only a handful of Barbican people), and bring on
a few Oslo folks, if any were available and interested, to give advice
about (and +2s for) OpenStack library best practices. However, perhaps
pulling it under oslo's umbrella without a rename is blessing it enough.

In response to Julien's proposal to make Castellan "the way you can do
key management in Python" -- it would be great if Castellan were that
abstract, but in practice it is pretty OpenStack-specific. Currently,
the Barbican team is great at working on key management projects
(including both Barbican and Castellan), but a lot of our focus now is
how we can maintain and grow integration with the rest of the OpenStack
projects, for which having the name and expertise of oslo would be a
great help.

Thanks,

Kaitlin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] what is "API to register on Vitrage notifications"

2017-03-16 Thread Yujun Zhang (ZTE)
Hi root causers

I want to confirm a detail about one subject on the Vitrage Pike roadmap:


   - API to register on Vitrage notifications


Is it linked to blueprint
https://blueprints.launchpad.net/vitrage/+spec/configurable-notifications ?

-- 
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Rico Lin
And we're OK to share the slot :)

2017-03-16 23:53 GMT+08:00 Rico Lin :

>
>> These are the projects that have requested a slot:
>>
>> Solum
>> Tricircle
>> Karbor
>> Freezer
>> Kuryr
>> Mistral
>> Dragonflow
>> Cloudkitty
>> Designate
>> Trove
>> Watcher
>> Magnum
>> Barbican
>> Charms
>> Tacker
>> Zun
>> Swift
>> Watcher
>> Kolla
>> Horizon
>> Keystone
>> Nova
>> Cinder
>> Telemetry
>> Infra/QA/RelMgmt/Regs/Stable
>> Docs/i18n
>>
>
> Heat is missing:)
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Weekly meeting March 16th is cancelled

2017-03-16 Thread Vladimir Kuklin
The agenda is empty for today except for review requests, so I am calling
the meeting cancelled.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Rico Lin
>
>
> These are the projects that have requested a slot:
>
> Solum
> Tricircle
> Karbor
> Freezer
> Kuryr
> Mistral
> Dragonflow
> Cloudkitty
> Designate
> Trove
> Watcher
> Magnum
> Barbican
> Charms
> Tacker
> Zun
> Swift
> Watcher
> Kolla
> Horizon
> Keystone
> Nova
> Cinder
> Telemetry
> Infra/QA/RelMgmt/Regs/Stable
> Docs/i18n
>

Heat is missing:)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Julien Danjou
On Thu, Mar 16 2017, Kendall Nelson wrote:


[…]

> If there are any other projects willing to share a slot together please let
> me know!

We're OK to share our slot; a 90-minute slot is plenty for us.
I'd be (happily) surprised if we keep people and things going for that long.
:)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Gregory Haynes
On Thu, Mar 16, 2017, at 05:18 AM, Steven Hardy wrote:
> On Wed, Mar 15, 2017 at 04:22:37PM -0500, Ben Nemec wrote:
> > While looking through the dib v2 changes after the feature branch was merged
> > to master, I noticed this commit[1], which bring dib-run-parts back into dib
> > itself.  Unfortunately I missed the original proposal to do this, but I have
> > some concerns about the impact of this change.
> > 
> > Originally the split was done so that dib-run-parts and one of the
> > os-*-config projects (looks like os-refresh-config) that depends on it could
> > be included in a stock distro cloud image without pulling in all of dib.
> > Note that it is still present in the requirements of orc: 
> > https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> > 
> > Disk space in a distro cloud image is at a premium, so pulling in a project
> > like diskimage-builder to get one script out of it was not acceptable, at
> > least from what I was told at the time.
> > 
> > I believe this was done so a distro cloud image could be used with Heat out
> > of the box, hence the heat tag on this message.  I don't know exactly what
> > happened after we split out dib-utils, so I'm hoping someone can confirm
> > whether this requirement still exists.  I think Steve was the one who made
> > the original request.  There were a lot of Steves working on Heat at the
> > time though, so it's possible I'm wrong. ;-)
> 
> I don't think I'm the Steve you're referring to, but I do have some
> additional info as a result of investigating this bug:
> 
> https://bugs.launchpad.net/tripleo/+bug/1673144
> 
> It appears we have three different versions of dib-run-parts on the
> undercloud (and, presumably overcloud nodes) at the moment, which is a
> pretty major headache from a maintenance/debugging perspective.
> 

I looked at the bug and I think there may only be two different
versions? The versions in /bin and /usr/bin seem to come from the same
package (so I hope they are the same version). I don't understand what
is going on with the ./lib version but that seems like either a local
package / checkout or something else non-dib related.

Two versions is certainly less than ideal, though :).

> However we resolve this, *please* can we avoid permanently forking the
> tool, as e.g in that bug, where do I send the patch to fix leaking
> profiledir directories?  What package needs an update?  What is
> installing
> the script being run that's not owned by any package?
> 
> Yes, I know the answer to some of those questions, but I'm trying to
> point
> out duplicating this script and shipping it from multiple repos/packages
> is
> pretty horrible from a maintenance perspective, especially for new or
> casual contributors.
> 

I agree. You answered my previous question of whether os-refresh-config
is still in use (sounds like it definitely is) so this complicates
things a bit.

> If we have to fork it, I'd suggest we should rename the script to avoid
> the
> confusion I outline in the bug above, e.g one script -> one repo -> one
> package?

I really like this idea of renaming the script in dib which should
clarify the source of each script and prevent conflicts, but this still
leaves the fork-related issues. If we go the route of just keeping the
current state (of there being a fork) I think we should do the rename.

The issue I spoke of (complications with depending on dib-utils when
installing dib in a venv) I think came from a combination of this
dependency and not requiring a package install (you used to be able to
./bin/disk-image-create without installation). Now that we require
installation this may be less of an issue.

So the two reasonable options seem to be: 
* Deal with the forking cost. Not the biggest cost when you notice
dib-utils hasn't had a commit in over 3 months and that one was a robot
commit to add some github flair.
* Switch back to dib-utils in the other repo. I'm starting to prefer
this slightly given that it seems there's a valid use case for it to
live externally and our installation story has become a lot more clean.
AFAIK this shouldn't prevent us from making the script more portable,
but please correct me if there's something I'm missing.

> 
> Thanks!
> 
> Steve
> 

Cheers,
- Greg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-16 Thread Kosnik, Lubosz
Hello Zhi,
One small piece of information: yesterday at the Octavia weekly meeting we
decided that new features will be added to LBaaSv2 only until Pike-1, so the
window is very small.
This decision was made because LBaaSv2 is now an Octavia deliverable, not a
Neutron one, and the project is entering its deprecation stage.

Cheers,
Lubosz

On Mar 16, 2017, at 5:39 AM, zhi wrote:

Hi, all
Currently, LBaaS v2 doesn't support migration the way routers do: a router
instance can be removed from one L3 agent and added to another L3 agent.

So the LBaaS agent is a single point of failure. As far as I know, LBaaS
supports "allow_automatic_lbaas_agent_failover", but in many cases we want to
migrate LBaaS instances manually. Do we plan to support this?

I'm working on this right now, but I've hit a question. I defined a function
in agent_scheduler.py like this:

    def remove_loadbalancer_from_lbaas_agent(self, context, agent_id,
                                             loadbalancer_id):
        self._unschedule_loadbalancer(context, loadbalancer_id, agent_id)

The question is, how do I notify LBaaS agent?
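For illustration, the usual Neutron pattern for this is an RPC cast to the agent that hosts the resource. A minimal sketch with a stub notifier follows; the method and class names are hypothetical stand-ins, not the real neutron-lbaas RPC API:

```python
# Hedged sketch of the unschedule-and-notify pattern. A real implementation
# would use an oslo.messaging RPC client targeted at the agent's host.

class StubAgentNotifier:
    """Stands in for an RPC client; records casts so they can be inspected."""

    def __init__(self):
        self.casts = []

    def cast(self, host, method, **kwargs):
        self.casts.append((host, method, kwargs))

class LoadBalancerScheduler:
    def __init__(self, notifier):
        self.notifier = notifier
        self.bindings = {}  # loadbalancer_id -> hosting agent host

    def unschedule(self, loadbalancer_id):
        host = self.bindings.pop(loadbalancer_id)
        # Tell the old agent to tear down its local state for this LB.
        self.notifier.cast(host, "remove_loadbalancer",
                           loadbalancer_id=loadbalancer_id)
        return host
```

The key point is that the notification must be a cast directed at the specific agent that previously hosted the load balancer, so it can clean up before another agent takes over.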

Hope for your reply.



Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Hello Emilien,

So, we have our slots basically filled, BUT you can ask one of the projects
that did get a slot if you can be refugees in their room and share.
Alternatively, the rooms will be empty over lunch and we can make sure to get
you time there; if your team is willing to do the grab-and-go lunches, you can
use the lunch slot. Let me know which you would prefer.

-Kendall (diablo_rojo)

On Thu, Mar 16, 2017 at 10:07 AM Emilien Macchi  wrote:

> On Wed, Mar 15, 2017 at 2:20 PM, Kendall Nelson 
> wrote:
> > Hello All!
> >
> > As you may have seen in a previous thread [1] the Forum will offer project
> > on-boarding rooms! The idea is that these rooms will provide a place for
> > new contributors to a given project to find out more about the project,
> > people, and code base. The slots will be spread out throughout the whole
> > Summit and will be 90 min long.
> >
> > We have very limited slots available for interested projects, so it will be
> > a first come, first served process. Let me know if you are interested and I
> > will reserve a slot for you if there are spots left.
>
> If possible, please add TripleO project. Several TripleO developers
> will be here and we'll easily find someone (or two) to host this slot
> (if not me).
>
> Thanks,
>
> > - Kendall Nelson (diablo_rojo)
> >
> > [1]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] CI results show up about a week late, if at all

2017-03-16 Thread Matt Riedemann

It's that time of the quarter to ask about the VMware NSX CI.

I noticed on a patch [1] that merged on Feb 22, that the VMware CI 
results showed up on Feb 26th, then again on March 3rd and just today 
again on March 16th.


It's bad that the results show up several days after a patch is actually 
posted, but it's just odd that results are showing up weeks later.


Can anyone explain that? What is the current status on the VMware NSX 
CI? I feel like no one is ever around in the Nova IRC channel from the 
VMware subteam (Eric Brown is sometimes) to answer questions like this, 
and the driver is really feeling like it's no longer maintained. Just 
looking at the changes to the nova/virt/vmwareapi subtree during Ocata 
and Newton it's pretty clear there is not much going on.


I'm starting to think we should add a warning when the vcenter driver
starts up that it's considered experimental, since we don't actually
know what state it's in: there don't appear to be maintainers, and the
CI is not reliable.


[1] https://review.openstack.org/#/c/435010/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [tripleo] [deployment] Keystone Fernet keys rotations spec

2017-03-16 Thread Emilien Macchi
On Tue, Mar 14, 2017 at 1:27 PM, Emilien Macchi  wrote:
> Folks,
>
> I found useful to share a spec that I started to write this morning:
> https://review.openstack.org/445592
>
> The goal is to do Keystone Fernet keys rotations in a way that scales
> and is secure, by using the standard tools and not re-inventing the
> wheel.
> In other words: if you're working on Keystone or TripleO or any other
> deployment tool: please read the spec and give any feedback.
>
> We would like to find a solution that would work for all OpenStack
> deployment tools (Kolla, OSA, Fuel, TripleO, Helm, etc) but I sent the
> specs to tripleo project
> to get some feedback.
>
> If you already have THE solution that you think is the best one, then
> we would be very happy to learn from it in a comment directly in the
> spec.
>

After 2 days of review from Keystone, TripleO, OSA (and probably some
groups I missed), it's pretty clear the problem is already being fixed
in different places in different ways and that's bad.
IMHO we should engage some work to fix it in Keystone and investigate
again a storage backend for Keystone tokens.

The Keystone specs that started this investigation was removed for Pike:
https://review.openstack.org/#/c/439194/

I see 2 options here:

- we keep duplicating efforts and let deployers implement their own solutions.

- we work with Keystone team to re-enable the spec and move forward to
solve the problem in Keystone itself, therefore for all deployments
tools in OpenStack (my favorite option).


I would like to hear from Keystone folks what the main blockers for
option #2 are, and whether this is only a human-resource issue or there
are technical points we need to solve (in which case, they could be
addressed in the spec).
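For readers without the Keystone background: Fernet rotation follows a staged/primary/secondary scheme, where the staged key (index 0) is promoted to primary (the highest index) and the oldest keys beyond the retention limit are purged. A minimal sketch of that logic (not Keystone's implementation):

```python
# Hedged sketch of the staged/primary/secondary rotation scheme.
# In the real key repository, each index is a file on disk.
import secrets

def rotate(keys, max_active_keys=3):
    """keys maps index -> key bytes; 0 is staged, highest index is primary."""
    new_primary = max(keys) + 1
    keys[new_primary] = keys.pop(0)      # promote the staged key to primary
    keys[0] = secrets.token_bytes(32)    # stage a fresh key for next rotation
    # Purge the oldest secondaries beyond the retention limit.
    while len(keys) > max_active_keys:
        del keys[min(k for k in keys if k != 0)]
    return keys
```

The deployment problem this thread discusses is how to run that rotation on many Keystone nodes without the key repositories diverging.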


Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Kendall Nelson
Hello All!

I am pleased to see how much interest there is in these onboarding rooms.
As of right now I can accommodate all the official projects that have
requested a room (sorry, Cyborg). To make all the requests fit, I have
combined Docs and i18n and taken Thierry's suggestion to combine
Infra/QA/RelMgmt/Reqs/Stable.

These are the projects that have requested a slot:

Solum
Tricircle
Karbor
Freezer
Kuryr
Mistral
Dragonflow
Cloudkitty
Designate
Trove
Watcher
Magnum
Barbican
Charms
Tacker
Zun
Swift
Watcher
Kolla
Horizon
Keystone
Nova
Cinder
Telemetry
Infra/QA/RelMgmt/Regs/Stable
Docs/i18n

If there are any other projects willing to share a slot together please let
me know!

-Kendall Nelson (diablo_rojo)

On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley  wrote:

On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Emilien Macchi
On Wed, Mar 15, 2017 at 2:20 PM, Kendall Nelson  wrote:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it will be
> a first come first served process. Let me know if you are interested and I
> will reserve a slot for you if there are spots left.

If possible, please add TripleO project. Several TripleO developers
will be here and we'll easily find someone (or two) to host this slot
(if not me).

Thanks,

> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] small wart in the resource class API

2017-03-16 Thread Sylvain Bauza


Le 16/03/2017 14:12, Chris Dent a écrit :
> 
> (This message is a question asking "should we fix this?" and "if so,
> I guess that needs spec, since it is a microversion change, but
> would an update to the existing spec be good enough?")
> 
> We have a small wart in the API for creating and updating resources
> classes [1] that only became clear while evaluating the API for
> resource traits [2]. The interface for creating a resource class is
> not particularly idempotent and as a result the code for doing so
> from nova-compute [3] is not as simple as it could be.
> 
> It's all in the name _get_of_create_resource_class. There is at
> least one but sometimes two HTTP requests: first a GET to
> /resource_classes/{class} then a POST with a body to
> /resource_classes.
> 
> If instead there was just a straight PUT to
> /resource_classes/{class} with no body that returned success either
> upon create or "yeah it's already there" then it would always be one
> request and the above code could be simplified. This is how we've
> ended up defining things for traits [2].
> 


We recently decided not to ship a specific client project for tricks like
that, preferring instead a better, well-documented REST API.

Given that consensus, I'm totally fine using the PUT verb instead of
GET+POST and just verifying the HTTP return code.
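The difference between the two call patterns can be sketched against a fake store. The route names mirror the placement API's resource-class endpoints, but the store itself is an illustrative stand-in:

```python
# Hedged sketch comparing GET+POST get-or-create with an idempotent PUT.
# Status codes follow common HTTP conventions; this is not the real client.

class FakeResourceClassAPI:
    def __init__(self):
        self.classes = set()
        self.requests = 0

    def get(self, name):
        self.requests += 1
        return 200 if name in self.classes else 404

    def post(self, name):
        self.requests += 1
        if name in self.classes:
            return 409  # conflict: already exists
        self.classes.add(name)
        return 201

    def put(self, name):
        # Idempotent create: one request, success whether or not it existed.
        self.requests += 1
        created = name not in self.classes
        self.classes.add(name)
        return 201 if created else 204

def get_or_create(api, name):
    """The current two-request pattern used by the report client."""
    if api.get(name) == 404:
        return api.post(name)
    return 200
```

With GET+POST, creating a missing class always costs two requests and needs conflict handling for races; with PUT it is always exactly one request.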


> Making this change would also allow us to address the fact that
> right now the PUT to /resource_classes/{class} takes a body which is
> the _new_ name with which to replace the name of the resource class
> identified by {class}.  This is an operation I'm pretty sure we
> don't want to do (commonly) as it means that anywhere that custom
> resource class was used in an inventory it's now going to have this
> new name (the relationship at the HTTP and outer layers is by name,
> but at the database level by id, the PUT does a row update) but the
> outside world is not savvy to this change.
> 

Agreed as well.

-Sylvain

> Thoughts?
> 
> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/custom-resource-classes.html#rest-api-impact
> 
> [2]
> http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html#rest-api-impact
> 
> [3]
> https://github.com/openstack/nova/blob/d02c0aa7ba0e37fb61d9fe2b683835f28f528623/nova/scheduler/client/report.py#L704
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Jeremy Stanley
On 2017-03-16 08:34:58 -0500 (-0500), Lance Bragstad wrote:
[...]
> These security-related corner cases have always come up in the past when
> we've talked about implementing reseller. Another good example that I
> struggle with is what happens when you flip the reseller bit for a project
> admin who goes off and creates their own entities but then wants support?
> What does the support model look like for the project admin that needs help
> in a way that maintains data integrity?

It's still entirely unclear to me how giving someone the ability to
hide resources you've delegated them access to create in any way
enables "reseller" use cases. I can understand the global admins
wanting to have optional views where they don't see all the resold
hierarchy (for the sake of their own sanity), but why would a
down-tree admin have any expectation they could reasonably hide
resources they create from those who maintain the overall system?

In other multi-tenant software I've used where reseller
functionality is present, top-level admins have some means of
examining delegated resources and usually even of impersonating
their down-tree owners for improved supportability.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all]One last slot open for Leadership Training - April 11/12/13

2017-03-16 Thread Colette Alexander
Hi everyone,

Just a quick note that one slot has opened up for leadership training. If
you'd like to attend, and can confirm that you will be there, please sign
up on the etherpad:

https://etherpad.openstack.org/p/Leadershiptraining


Thanks!

-colette
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-16 Thread Jeremy Stanley
On 2017-03-15 22:39:30 + (+), Fox, Kevin M wrote:
> Maybe glance has some stuff that would apply? I think they had a
> job kind of api at one point. I could see it being useful to
> download an image, do some conversion or scanning, etc.
[...]

That was (is) "tasks", and because it's backed by Turing-complete
automation customizable by the deployer, it leads to widespread
interoperability failure when every deployment implements its own
tasks with widely varied names and functionality.

https://docs.openstack.org/developer/glance/tasks.html

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Heat memory usage in the TripleO gate: Ocata edition

2017-03-16 Thread Zane Bitter

On 16/03/17 09:36, Joe Talerico wrote:

On Wed, Mar 15, 2017 at 5:53 PM, Zane Bitter  wrote:

On 15/03/17 15:52, Joe Talerico wrote:


Can we start looking at CPU usage as well? Not sure if your data has
this as well...



Usage by Heat specifically? Or just in general?


heat-engine specifically.



We're limited by what is logged in the gate, so CPU usage by Heat is
definitely a non-starter. Picking a random gate run, the Heat memory use
comes from this file:

http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/ps.txt.gz

which is generated by running `ps` at the end of the test.

We also have this file (including historical data) from dstat:

http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/dstat.txt.gz

so there is _some_ data there, it's mostly a question of how to process it
down to something we can plot against time. My first guess would be to do a
box-and-whisker-style plot showing the distribution of the 1m load average
during the test. (CPU usage itself is generally a pretty bad measure of...
CPU usage.) What problems are you hoping to catch?


Just curiosity.

We have a set of tools which capture per-process utilization for
cpu/mem/disk/etc. I wonder if we could implement this into your work?


If somebody can get those tools to log that data in the gate, then I can
try to make a chart of it :)
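The processing step discussed above, reducing the logged 1m load average to the five numbers a box-and-whisker plot needs, can be sketched like this. The CSV sample format is illustrative; real dstat output has its own header rows that would need handling:

```python
# Hedged sketch: summarize a load-average column into min/q1/median/q3/max,
# the five values a box-and-whisker plot is drawn from.
import statistics

SAMPLE = """\
1m,5m,15m
0.8,0.6,0.5
2.4,1.1,0.7
1.6,1.0,0.8
3.1,1.9,1.2
"""

def load_summary(text, column="1m"):
    lines = text.strip().splitlines()
    idx = lines[0].split(",").index(column)
    values = sorted(float(line.split(",")[idx]) for line in lines[1:])
    q1, median, q3 = statistics.quantiles(values, n=4)
    return {"min": values[0], "q1": q1, "median": median,
            "q3": q3, "max": values[-1]}
```

One summary per gate run plotted against time would show load distribution trends the way the existing memory charts show RSS.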


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Jeremy Stanley
On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
> 
> Infrastructure, QA, Release Management, Requirements, Stable maint
> 
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-16 Thread Brian Rosmaita
Big snip ... the original message is here:
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113798.html

On 3/13/17 3:09 AM, Ghanshyam Mann wrote:
> On Sat, Mar 11, 2017 at 12:10 AM, Andrea Frittoli wrote:
>> On Fri, Mar 10, 2017 at 2:49 PM Andrea Frittoli 
>> wrote:
>> Of course Glance v1 still has to run on the stable/newton gate jobs, until
>> Newton EOL (TBD), so tests will stay in Tempest for a cycle more at least.
>> I guess I shouldn't be sending emails on a Friday afternoon?
>>
> 
> Hmm, till Mitaka, right? The Newton version of Glance has the v1 API
> deprecated, while in Mitaka v1 was still a supported API, so we only need
> to care about Mitaka and keep the tests till Mitaka EOL.

(Sorry for the lateness of my reply, I was under the weather earlier
this week.)

Glance v1 is a special case, because even though it's DEPRECATED, out of
courtesy to operators, we won't remove it until some specific features
(primarily, image import with copy-from) are available in v2.  We'd
originally thought that this feature was unnecessary, but heard
differently from operators once v1 went into deprecation.  Thus, through
the Ocata release, there are valid reasons for operators to deploy v1
even though it's deprecated.  We didn't un-deprecate v1 because we
wanted to send a clear message that v1 is definitely going away.

So, it's important that v1 continue to be tested in the gate.

cheers,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Heat memory usage in the TripleO gate: Ocata edition

2017-03-16 Thread Joe Talerico
On Wed, Mar 15, 2017 at 5:53 PM, Zane Bitter  wrote:
> On 15/03/17 15:52, Joe Talerico wrote:
>>
>> Can we start looking at CPU usage as well? Not sure if your data has
>> this as well...
>
>
> Usage by Heat specifically? Or just in general?

heat-engine specifically.

>
> We're limited by what is logged in the gate, so CPU usage by Heat is
> definitely a non-starter. Picking a random gate run, the Heat memory use
> comes from this file:
>
> http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/ps.txt.gz
>
> which is generated by running `ps` at the end of the test.
>
> We also have this file (including historical data) from dstat:
>
> http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/dstat.txt.gz
>
> so there is _some_ data there, it's mostly a question of how to process it
> down to something we can plot against time. My first guess would be to do a
> box-and-whisker-style plot showing the distribution of the 1m load average
> during the test. (CPU usage itself is generally a pretty bad measure of...
> CPU usage.) What problems are you hoping to catch?

Just curiosity.

We have a set of tools which capture per-process utilization for
cpu/mem/disk/etc. I wonder if we could implement this into your work?
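As a sketch of the summarization Zane describes above, the following reduces one column of a dstat-style CSV log to the five numbers a box-and-whisker plot needs. The column layout (a header row naming a `1m` load-average column) is an assumption for the example; real dstat headers depend on the flags used.

```python
# Sketch only: summarize the distribution of the 1m load average from a
# dstat-style CSV log. The "1m" header name is an assumption, not a
# guarantee about the gate's actual dstat output format.
import csv
import io
import statistics

def load_avg_summary(dstat_csv, column="1m"):
    """Return (min, q1, median, q3, max) for one dstat column."""
    reader = csv.reader(io.StringIO(dstat_csv))
    header = None
    idx = None
    values = []
    for row in reader:
        if header is None:
            if column in row:        # find the header row naming the columns
                header = row
                idx = row.index(column)
            continue
        try:
            values.append(float(row[idx]))
        except (ValueError, IndexError):
            continue                 # skip partial or garbled lines
    q1, median, q3 = statistics.quantiles(values, n=4)
    return min(values), q1, median, q3, max(values)

sample = "1m,5m,15m\n0.8,0.7,0.6\n1.2,0.9,0.7\n2.4,1.1,0.8\n1.6,1.0,0.8\n"
print(load_avg_summary(sample))
```

The five-number summary per gate run could then be plotted against time, which is cheaper to store and read than the raw per-second samples.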

Joe

>
> cheers,
> Zane.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Lance Bragstad
On Thu, Mar 16, 2017 at 8:07 AM, Jeremy Stanley  wrote:

> On 2017-03-15 13:46:42 +1300 (+1300), Adrian Turjak wrote:
> > See, subdomains I can kind of see working, but the problem I have with
> > all this in general is that it is kind of silly to try and stop access
> > down the tree. If you have a role that lets you do 'admin'-like things
> > at a high point in the tree, you inherently always have access to the
> > whole tree below you.
> [...]
> > Really if you don't want someone to access or know about
> > 'secret_project_d' you make sure 'secret_project_d' is in a totally
> > unrelated domain from the people you are trying to hide it from.
>
> I have to agree on these points; any attempt to build a feature
> intended to hide resources from the same groups who delegate the
> permission to create them is 1. misguided, and 2. probably entirely
> futile. It will ultimately get treated as a feel-good control with
> no actual teeth, as well as a hindrance to people who end up working
> around it by adding and removing permissions for themselves so they
> can see/manage stuff which would otherwise be hidden from them.
>
> If this makes it in as a supported option, I can't begin to imagine
> the embarrassing security holes you'll end up having to squash all
> over the place where information about "hidden" resources gets
> leaked through side channels in other services (telemetry,
> monitoring, basic math on aggregate quotas, et cetera).
>

These security-related corner cases have always come up in the past when
we've talked about implementing reseller. Another good example that I
struggle with is what happens when you flip the reseller bit for a project
admin who goes off and creates their own entities but then wants support?
What does the support model look like for the project admin that needs help
in a way that maintains data integrity?


> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swg] Today's meeting (March 16, 1400 UTC) cancelled

2017-03-16 Thread Colette Alexander
Hi everyone,

The Stewardship Working Group is in the midst of discussing alternatives to
regularly scheduled meetings to keep our workflow going. Partly because of
time zone issues and partly because of overscheduled people who are in the
SWG.

To allow for some breathing room for these discussions, and to give folks a
break this week in their schedules, I think we should cancel. We're always
in #openstack-swg if you need us, and the email thread on the status of
what we'll do in place of meetings is a good one to watch[0].


Thanks!

-colette/gothicmindfood

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113994.html
is Thierry's response re: meeting timing & beginning of discussion of not
having one
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swg][tc] Moving Stewardship Working Group meeting

2017-03-16 Thread Colette Alexander
On Thu, Mar 16, 2017 at 8:19 AM, Jim Rollenhagen 
wrote:

> On Thu, Mar 16, 2017 at 5:17 AM, Thierry Carrez 
> wrote:
>
>> Jim Rollenhagen wrote:
>> > On Wed, Mar 15, 2017 at 8:15 AM, John Garbutt wrote:
>> >> In the absence of tooling, could we replace the meeting with weekly
>> >> email reporting current working streams, and whats planned next?
>> That
>> >> would include fixing any problems we face trying to work well
>> >> together.
>> >
>> > This is a good idea, I've quite liked cdent's weekly placement update,
>> > maybe something similar, and others can chime in with their own
>> updates/etc.
>>
>> I was thinking we could use a status board to track the various
>> activities / objectives / threads. As part of the status board the
>> leader of each thread would provide a $time_interval update by a given
>> deadline, and the workgroup chair would collect them all and post them
>> (to a single email) on the ML.
>>
>
> ++
>
>

I'm fine with this - I actually think it would be helpful for me, since
it's more likely I'll make an hour of time here and there throughout a week
to get work and updates done. I'm especially fine with it if it means more
participation in the SWG from the community because we move towards other
ways of collaborating that allow for time zone/scheduling differences.



>
>> The update could have predetermined sections: current status / progress,
>> assigned actions, open questions... If the open questions can't get
>> solved in the ML thread resulting from the $time_interval update, *then*
>> schedule an ad-hoc discussion on #openstack-swg between interested
>> parties.
>>
>> In terms of tooling, we can start by using a wiki page for tracking and
>> updates snippets. We can use framadate.org for ad-hoc discussion
>> scheduling.
>>
>
> Any reason not to just use storyboard for tracking? AIUI, it gives us a
> status board for free.
>
> // jim
>

I'm fine with trying whatever the group prefers for a couple of weeks,
assessing, and then continuing with something else or trying a different
tool.


I think we should probably just cancel today's meeting, though, since it
seems like most people are pretty booked anyhow.


-colette



>
>
>>
>> Combined with extra activity on the channel, I think that would preserve
>> all 3 attributes of the meeting (especially the $time_interval regular
>> kick in the butt to make continuous progress).
>>
>> --
>> Thierry Carrez (ttx)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] small wart in the resource class API

2017-03-16 Thread Chris Dent


(This message is a question asking "should we fix this?" and "if so,
I guess that needs spec, since it is a microversion change, but
would an update to the existing spec be good enough?")

We have a small wart in the API for creating and updating resources
classes [1] that only became clear while evaluating the API for
resource traits [2]. The interface for creating a resource class is
not particularly idempotent and as a result the code for doing so
from nova-compute [3] is not as simple as it could be.

It's all in the name _get_or_create_resource_class. There is at least
one HTTP request, and sometimes two: first a GET to
/resource_classes/{class}, then a POST with a body to
/resource_classes.

If instead there was just a straight PUT to
/resource_classes/{class} with no body that returned success either
upon create or "yeah it's already there" then it would always be one
request and the above code could be simplified. This is how we've
ended up defining things for traits [2].
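The idempotency difference is easy to see with a toy model. The sketch below is illustrative only, an in-memory stand-in rather than placement code: `get_or_create` mirrors the current two-request flow, `put` mirrors the proposed single idempotent request, the status codes are the conventional HTTP ones, and the class names are made up.

```python
# Illustration only (not nova/placement code): current get-then-create
# flow vs. an idempotent bodyless PUT, modeled with an in-memory store.
class ResourceClassStore:
    def __init__(self):
        self.classes = set()

    def get_or_create(self, name):
        # Current style: a GET, then a POST if the GET 404s -- up to two
        # requests, with a race window between them.
        if name in self.classes:          # GET /resource_classes/{name}
            return 200
        self.classes.add(name)            # POST /resource_classes
        return 201

    def put(self, name):
        # Proposed style: one PUT that succeeds whether or not the
        # class already exists.
        if name in self.classes:
            return 204                    # "yeah, it's already there"
        self.classes.add(name)
        return 201                        # created

store = ResourceClassStore()
print(store.put("CUSTOM_BAREMETAL_GOLD"))   # first call creates
print(store.put("CUSTOM_BAREMETAL_GOLD"))   # second call is a no-op
```

With the PUT form, the client never needs to branch on a 404, so the scheduler report client loop collapses to one unconditional request per class.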

Making this change would also allow us to address the fact that right
now the PUT to /resource_classes/{class} takes a body which is the _new_
name with which to replace the name of the resource class identified by
{class}. That is an operation I'm pretty sure we don't (commonly) want:
anywhere that custom resource class was used in an inventory will now
report the new name (the relationship at the HTTP and outer layers is by
name, but at the database level by id, so the PUT does a row update), yet
the outside world is not aware of the change.

Thoughts?

[1] 
http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/custom-resource-classes.html#rest-api-impact
[2] 
http://specs.openstack.org/openstack/nova-specs/specs/pike/approved/resource-provider-traits.html#rest-api-impact
[3] 
https://github.com/openstack/nova/blob/d02c0aa7ba0e37fb61d9fe2b683835f28f528623/nova/scheduler/client/report.py#L704

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-16 Thread Jeremy Stanley
On 2017-03-15 13:46:42 +1300 (+1300), Adrian Turjak wrote:
> See, subdomains I can kind of see working, but the problem I have with
> all this in general is that it is kind of silly to try and stop access
> down the tree. If you have a role that lets you do 'admin'-like things
> at a high point in the tree, you inherently always have access to the
> whole tree below you.
[...]
> Really if you don't want someone to access or know about
> 'secret_project_d' you make sure 'secret_project_d' is in a totally
> unrelated domain from the people you are trying to hide it from.

I have to agree on these points; any attempt to build a feature
intended to hide resources from the same groups who delegate the
permission to create them is 1. misguided, and 2. probably entirely
futile. It will ultimately get treated as a feel-good control with
no actual teeth, as well as a hindrance to people who end up working
around it by adding and removing permissions for themselves so they
can see/manage stuff which would otherwise be hidden from them.

If this makes it in as a supported option, I can't begin to imagine
the embarrassing security holes you'll end up having to squash all
over the place where information about "hidden" resources gets
leaked through side channels in other services (telemetry,
monitoring, basic math on aggregate quotas, et cetera).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Ian Y. Choi

Hello Kendall!

The I18n team would love to have a project on-boarding room for new translators :)
Please reserve a room for I18n if available.


With many thanks,

/Ian


Kendall Nelson wrote on 3/16/2017 3:20 AM:

Hello All!

As you may have seen in a previous thread [1] the Forum will offer 
project on-boarding rooms! The idea is that these rooms will provide 
a place for new contributors to a given project to find out more about 
the project, people, and code base. The slots will be spread out 
throughout the whole Summit and will be 90 min long.


We have very limited slots available for interested projects, so it 
will be a first come, first served process. Let me know if you are 
interested and I will reserve a slot for you if there are spots left.


- Kendall Nelson (diablo_rojo)

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swg][tc] Moving Stewardship Working Group meeting

2017-03-16 Thread Jim Rollenhagen
On Thu, Mar 16, 2017 at 5:17 AM, Thierry Carrez 
wrote:

> Jim Rollenhagen wrote:
> > On Wed, Mar 15, 2017 at 8:15 AM, John Garbutt wrote:
> >> In the absence of tooling, could we replace the meeting with weekly
> >> email reporting current working streams, and whats planned next?
> That
> >> would include fixing any problems we face trying to work well
> >> together.
> >
> > This is a good idea, I've quite liked cdent's weekly placement update,
> > maybe something similar, and others can chime in with their own
> updates/etc.
>
> I was thinking we could use a status board to track the various
> activities / objectives / threads. As part of the status board the
> leader of each thread would provide a $time_interval update by a given
> deadline, and the workgroup chair would collect them all and post them
> (to a single email) on the ML.
>

++


>
> The update could have predetermined sections: current status / progress,
> assigned actions, open questions... If the open questions can't get
> solved in the ML thread resulting from the $time_interval update, *then*
> schedule an ad-hoc discussion on #openstack-swg between interested parties.
>
> In terms of tooling, we can start by using a wiki page for tracking and
> updates snippets. We can use framadate.org for ad-hoc discussion
> scheduling.
>

Any reason not to just use storyboard for tracking? AIUI, it gives us a
status board for free.

// jim


>
> Combined with extra activity on the channel, I think that would preserve
> all 3 attributes of the meeting (especially the $time_interval regular
> kick in the butt to make continuous progress).
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [notification] BlockDeviceMapping in InstancePayload

2017-03-16 Thread Balazs Gibizer
On Wed, Mar 15, 2017 at 1:44 PM, John Garbutt  
wrote:
On 13 March 2017 at 17:14, Balazs Gibizer 
 wrote:

 Hi,

 As part of the Searchlight integration we need to extend our instance
 notifications with BDM data [1]. As far as I understand the main goal is
 to provide enough data about the instance to Searchlight so that Nova can
 use Searchlight to generate the response of the GET /servers/{server_id}
 requests based on the data stored in Searchlight.

 I checked the server API response and I found one field that needs BDM
 related data: os-extended-volumes:volumes_attached. Only the uuid of the
 volume and the value of delete_on_terminate is provided in the API
 response.

 I have two options about what to add to the InstancePayload and I want to
 get some opinions about which direction we should go with the
 implementation.

 Option A: Add only the minimum required information from the BDM to the
 InstancePayload

  additional InstancePayload field:
  block_devices: ListOfObjectsField(BlockDevicePayload)

  class BlockDevicePayload(base.NotificationPayloadBase):
    fields = {
        'delete_on_termination': fields.BooleanField(default=False),
        'volume_id': fields.StringField(nullable=True),
    }

 This payload would be generated from the BDMs connected to the instance
 where the BDM.destination_type == 'volume'.

 Option B: Provide a comprehensive set of BDM attributes

  class BlockDevicePayload(base.NotificationPayloadBase):
    fields = {
        'source_type': fields.BlockDeviceSourceTypeField(nullable=True),
        'destination_type': fields.BlockDeviceDestinationTypeField(
            nullable=True),
        'guest_format': fields.StringField(nullable=True),
        'device_type': fields.BlockDeviceTypeField(nullable=True),
        'disk_bus': fields.StringField(nullable=True),
        'boot_index': fields.IntegerField(nullable=True),
        'device_name': fields.StringField(nullable=True),
        'delete_on_termination': fields.BooleanField(default=False),
        'snapshot_id': fields.StringField(nullable=True),
        'volume_id': fields.StringField(nullable=True),
        'volume_size': fields.IntegerField(nullable=True),
        'image_id': fields.StringField(nullable=True),
        'no_device': fields.BooleanField(default=False),
        'tag': fields.StringField(nullable=True)
    }

 In this case Nova would provide every BDM attached to the instance, not
 just the volume ones.

 I intentionally left out connection_info and the db id as those seem
 really system internal.
 I also left out the instance related references, as this
 BlockDevicePayload would be part of an InstancePayload which has the
 instance uuid already.


+1 leaving those out.


 What do you think, which direction we should go?


There are discussions around extending the info we give out about BDMs
in the API.

What about in between: list all types of BDMs, and include a touch more
info so you can tell which one is a volume for sure.

  class BlockDevicePayload(base.NotificationPayloadBase):
    fields = {
        'destination_type': fields.BlockDeviceDestinationTypeField(
            nullable=True),  # Maybe just called "type"?
        'boot_index': fields.IntegerField(nullable=True),
        'device_name': fields.StringField(nullable=True),  # do we
                                                           # ignore that now?
        'delete_on_termination': fields.BooleanField(default=False),
        'volume_id': fields.StringField(nullable=True),
        'tag': fields.StringField(nullable=True)
    }


This payload is OK for me.

I agree to use 'type' instead of 'destination_type' as destination 
doesn't have too much meaning after the device is attached.


The libvirt driver ignores the device_name but I'm not sure about the 
other virt drivers.
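For illustration, here is a plain-Python sketch of how the middle-ground payload discussed above could be populated from BDM records. This is not the actual oslo.versionedobjects implementation; the dict-shaped BDM records and the helper name are assumptions for the example.

```python
# Plain-Python stand-in (not nova's versioned-object code) for building
# the middle-ground payload: all BDM types are kept, with enough info
# to tell which entries are volumes.
def block_device_payload(bdm):
    return {
        'type': bdm.get('destination_type'),        # 'volume' or 'local'
        'boot_index': bdm.get('boot_index'),
        'delete_on_termination': bdm.get('delete_on_termination', False),
        'volume_id': bdm.get('volume_id'),
        'tag': bdm.get('tag'),
    }

bdms = [
    {'destination_type': 'volume', 'volume_id': 'vol-1',
     'boot_index': 0, 'delete_on_termination': True},
    {'destination_type': 'local', 'boot_index': None},
]
payloads = [block_device_payload(b) for b in bdms]
volumes = [p for p in payloads if p['type'] == 'volume']
print(len(payloads), len(volumes))   # all BDMs kept, volumes identifiable
```

Consumers like Searchlight can then reconstruct os-extended-volumes:volumes_attached by filtering on `type == 'volume'` without Nova having to pre-filter the list.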


Cheers,
gibi




Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Alexandra Settle
Might be too late, but would be interested in having a docs room too :)

From: Kendall Nelson 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, March 15, 2017 at 6:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!
As you may have seen in a previous thread [1] the Forum will offer project 
on-boarding rooms! The idea is that these rooms will provide a place for new 
contributors to a given project to find out more about the project, people, and 
code base. The slots will be spread out throughout the whole Summit and will be 
90 min long.

We have very limited slots available for interested projects, so it will be a 
first come, first served process. Let me know if you are interested and I will 
reserve a slot for you if there are spots left.
- Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Andrea Frittoli
On Thu, 16 Mar 2017, 9:34 a.m. Thierry Carrez, 
wrote:

> Kendall Nelson wrote:
> > We have very limited slots available for interested projects, so it
> > will be a first come, first served process. Let me know if you are
> > interested and I will reserve a slot for you if there are spots left.
>
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.


I agree one shared session would be fine.
15min might be a bit on the short side, but we can make it work.


> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Sean Dague
On 03/16/2017 05:31 AM, Thierry Carrez wrote:
> Kendall Nelson wrote:
>> We have very limited slots available for interested projects, so it
>> will be a first come, first served process. Let me know if you are
>> interested and I will reserve a slot for you if there are spots left.
> 
> I think we could share a 90-min slot between a number of the supporting
> teams:
> 
> Infrastructure, QA, Release Management, Requirements, Stable maint
> 
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

If we are going to call these onboarding rooms, I think it's important
that the teams doing these get a full 90 minutes and actually dive deep
into something concrete that people could contribute to.

Splitting a 90 minute slot over 5 teams to have each give a lightning
talk that they exist isn't really onboarding. And doesn't really take
advantage of the format of having people there in the room.

Something like that almost seems better done by recording / producing 60
second video spots with PTLs about what each group does, and getting
those run either at the venue or part of collateral on openstack.org.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-16 Thread Sean Dague
On 03/16/2017 05:12 AM, Rochelle Grober wrote:
> Sorry for top posting, but this is likely the best place...
> 
> I wanted to provide an update from the Ops midcycle about related topics 
> around this. 
> 
> The operators here are in general agreement that translation is not needed, 
> but they are also very interested in the possibility of getting some of their 
> long-time wish-list items for traceability and specificity into the log 
> messages at the same time as the translation bits are removed. There is an 
> effort, and a team of devops, working on defining exactly what the operators 
> want in the messages and providing personpower to do a good chunk of the work.
> 
> We hope to have some solid proposals written up for the forum so as to be 
> able to move forward and partner with the project developers on this.
> We expect two proposals: one for error codes (yes, that again, but operators 
> really want/need this) and one for traceability around request-ids.

Just to be clear, the delete of the existing markup is going to be
completely independent of any new markup. Deleting the i18n wrappers is
super mechanical, and can go through very fast, bulk review.

Guidelines for better tracing are great, and work to contribute that is
even better, but that's something that's a lot more work, and different work.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-16 Thread zhi
Hi, all
Currently, LBaaS v2 doesn't support migration. Router instances, by contrast,
can be removed from one L3 agent and added to another L3 agent.

So, there is a single point of failure in the LBaaS agent. As far as I know,
LBaaS supports "allow_automatic_lbaas_agent_failover". But in many cases, we
want to migrate LBaaS instances manually. Do we plan to do this?

I'm doing this right now, but I have a question. I define a function
in agent_scheduler.py like this:

def remove_loadbalancer_from_lbaas_agent(self, context, agent_id,
loadbalancer_id):
self._unschedule_loadbalancer(context, loadbalancer_id, agent_id)

The question is, how do I notify LBaaS agent?
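I don't know the exact notifier API here, but the usual neutron pattern is a one-way RPC cast from the scheduler side to the agent. Below is a hypothetical sketch of that pattern: `FakeRPCClient` stands in for an oslo.messaging client, and the method name `remove_loadbalancer` and the notifier class are illustrative, not an existing neutron-lbaas interface.

```python
# Hypothetical sketch of the agent-notification pattern: after
# unscheduling, cast an RPC to the old agent so it stops managing the
# loadbalancer. Names here are illustrative, not real neutron-lbaas API;
# FakeRPCClient stands in for an oslo.messaging RPC client.
class FakeRPCClient:
    def __init__(self):
        self.casts = []

    def cast(self, context, method, **kwargs):
        # one-way, fire-and-forget message to the agent's topic
        self.casts.append((method, kwargs))

class LoadBalancerAgentNotifyAPI:
    def __init__(self, client):
        self.client = client

    def agent_removed_loadbalancer(self, context, loadbalancer_id, host):
        # tell the agent on `host` to tear down its local LB state
        self.client.cast(context, 'remove_loadbalancer',
                         loadbalancer_id=loadbalancer_id, host=host)

client = FakeRPCClient()
notifier = LoadBalancerAgentNotifyAPI(client)
notifier.agent_removed_loadbalancer({}, 'lb-1', 'agent-host-1')
print(client.casts)
```

A matching cast to the destination agent would then schedule the loadbalancer there, mirroring how L3 agent scheduling notifies agents on router add/remove.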

Hope for your reply.



Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN-0078] copy_from in Image Service API v1 allows network port scan

2017-03-16 Thread Luke Hinds
copy_from in Image Service API v1 allows network port scan
---

### Summary ###
The `copy_from` feature in Image Service API v1 supplied by Glance can
allow an attacker to perform masked network port scans.

### Affected Services / Software ###
Version 1 of the Glance Image Service (deprecated in Newton).

### Discussion ###
In Version 1 of the Glance Image Service API it is possible to create
images with a URL such as `http://localhost:22`. This could then allow
an attacker to enumerate internal network details while appearing
masked, since the scan would appear to originate from the Glance image
service.

### Recommended Actions ###
Version 1 of the Glance Image Service API was deprecated in the Newton
cycle, so operators should upgrade to a later version that will allow
use of Version 2.

Existing deployments can limit policy on `copy_from` by restricting use
to `admin` within `policy.json` as follows:

"copy_from": "role:admin"
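The semantics of that rule can be illustrated with a toy evaluator. Real Glance deployments evaluate policy.json with oslo.policy; this stand-in only mirrors the meaning of a `role:<name>` check against the caller's credentials.

```python
# Toy illustration of what "copy_from": "role:admin" enforces. Real
# deployments use oslo.policy; this only mirrors the role:<name> check.
def check_rule(rule, creds):
    kind, _, value = rule.partition(':')
    if kind == 'role':
        return value in creds.get('roles', [])
    return False   # unknown check kinds default-deny in this sketch

policy = {'copy_from': 'role:admin'}

print(check_rule(policy['copy_from'], {'roles': ['admin']}))   # allowed
print(check_rule(policy['copy_from'], {'roles': ['member']}))  # denied
```

With the rule in place, non-admin users attempting a v1 image create with `copy_from` are rejected, which removes the masked port-scan vector for ordinary tenants.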

### Contacts / References ###
Author: Luke Hinds, Red Hat
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0078
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1606495
OpenStack Security Project : https://launchpad.net/~openstack-ossg




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Sean McGinnis
> 
> We have a very limited slots available for interested projects so it will
> be a first come first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
> 

I would like one for Cinder, but if we are running low on time slots I would
be willing to use other avenues for outreach or share a time slot with another
project.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-16 Thread Steven Hardy
On Wed, Mar 15, 2017 at 04:22:37PM -0500, Ben Nemec wrote:
> While looking through the dib v2 changes after the feature branch was merged
> to master, I noticed this commit[1], which bring dib-run-parts back into dib
> itself.  Unfortunately I missed the original proposal to do this, but I have
> some concerns about the impact of this change.
> 
> Originally the split was done so that dib-run-parts and one of the
> os-*-config projects (looks like os-refresh-config) that depends on it could
> be included in a stock distro cloud image without pulling in all of dib.
> Note that it is still present in the requirements of orc: 
> https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> 
> Disk space in a distro cloud image is at a premium, so pulling in a project
> like diskimage-builder to get one script out of it was not acceptable, at
> least from what I was told at the time.
> 
> I believe this was done so a distro cloud image could be used with Heat out
> of the box, hence the heat tag on this message.  I don't know exactly what
> happened after we split out dib-utils, so I'm hoping someone can confirm
> whether this requirement still exists.  I think Steve was the one who made
> the original request.  There were a lot of Steves working on Heat at the
> time though, so it's possible I'm wrong. ;-)

I don't think I'm the Steve you're referring to, but I do have some
additional info as a result of investigating this bug:

https://bugs.launchpad.net/tripleo/+bug/1673144

It appears we have three different versions of dib-run-parts on the
undercloud (and, presumably overcloud nodes) at the moment, which is a
pretty major headache from a maintenance/debugging perspective.

However we resolve this, *please* can we avoid permanently forking the
tool, as in that bug: where do I send the patch to fix leaking
profiledir directories?  What package needs an update?  What is installing
the script being run that's not owned by any package?

Yes, I know the answer to some of those questions, but I'm trying to point
out duplicating this script and shipping it from multiple repos/packages is
pretty horrible from a maintenance perspective, especially for new or
casual contributors.

If we have to fork it, I'd suggest we rename the script to avoid the
confusion I outline in the bug above, e.g. one script -> one repo -> one
package?
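(For context, dib-run-parts follows the classic run-parts pattern: it iterates over a directory of numbered scripts and executes each one in lexical order. The sketch below is a hypothetical, heavily simplified illustration of that pattern only; the real tool additionally handles environment files, the profiledir mentioned above, and logging.)

```shell
# Minimal run-parts-style loop (illustrative sketch, not the real
# dib-run-parts). Build a scratch directory with two numbered scripts;
# lexical filename order determines execution order.
run_dir=$(mktemp -d)
printf '#!/bin/sh\necho one\n' > "$run_dir/01-first"
printf '#!/bin/sh\necho two\n' > "$run_dir/02-second"
chmod +x "$run_dir/01-first" "$run_dir/02-second"

result=""
for script in "$run_dir"/*; do
    # Like run-parts(8), only executable files are run.
    if [ -x "$script" ]; then
        result="$result$("$script");"
    fi
done
echo "$result"    # -> one;two;
rm -rf "$run_dir"
```

The point of the sketch is that the logic is trivially small, which is exactly why shipping three diverging copies of it across packages is so painful to debug.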

Thanks!

Steve



[openstack-dev] [vitrage] Boston Forum brainstorming

2017-03-16 Thread Afek, Ifat (Nokia - IL)
Hi,

We need to finish the brainstorming on our Forum proposals for Boston by next 
Monday (March 20). The deadline for topic submissions is April 2. 

We started writing down ideas in the etherpad on our last IRC meeting, but 
apparently the etherpad’s name was wrong… so I created a new one and copied our 
ideas. If you suggested a topic, please write down a short description so it 
will be easier for us to decide which topics to select. And of course feel free 
to suggest other topics. 

Etherpad link: https://etherpad.openstack.org/p/BOS-Vitrage-brainstorming 
General explanation about the forum: https://wiki.openstack.org/wiki/Forum 

Thanks,
Ifat.





[openstack-dev] [Watcher] Forum Brainstorming for Watcher

2017-03-16 Thread Alexander Chadin
Watcher folks,

Here is the etherpad[0], where you can share your thoughts and questions regarding
Watcher feedback and improvements.

[0] https://etherpad.openstack.org/p/BOS-Watcher-brainstorming

Best Regards,
_
Alexander Chadin
OpenStack Developer
Servionica LTD
a.cha...@servionica.ru
+7 (916) 693-58-81



Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Thierry Carrez
Kendall Nelson wrote:
> We have very limited slots available for interested projects so it
> will be a first come, first served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.

I think we could share a 90-min slot between a number of the supporting
teams:

Infrastructure, QA, Release Management, Requirements, Stable maint

Those teams are all under-staffed and want to grow new members, but
90 min is both too long and too short for any one of them. I feel like
regrouping them in a single slot and giving each team ~15 min to explain
what they do, their process and tooling, and point to next steps /
mentors would be immensely useful.

-- 
Thierry Carrez (ttx)


