Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-15 Thread Ian Wienand

On 03/16/2017 08:22 AM, Ben Nemec wrote:

Anyway, I don't know that anything is broken at the moment since I
believe dib-run-parts was brought over unchanged, but the retirement of
dib-utils was proposed in https://review.openstack.org/#/c/445617 and I
would like to resolve this question before we do anything like that.


The underlying motivation behind this was to isolate dib so we could
do things like re-implement dib-run-parts in posixy shell (for busybox
environments) or python, etc.

So my idea was we'd just leave dib-utils alone.  But it raises a good
point that both dib-utils and diskimage-builder are providing
dib-run-parts.  I think this is probably the main oversight here.

I've proposed a change [1] that makes dib use dib-run-parts from its private
library dir (rather than any globally installed version) and stops it
exporting the script, to avoid conflict with dib-utils.  I think this
should allow everything to live in harmony?
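For anyone unfamiliar with what dib-run-parts actually does, its core semantics can be sketched in a few lines. This is a purely hypothetical sketch (the real script also handles environment files, output prefixing and profiling), shown in Python since a Python re-implementation is one of the options mentioned above:

```python
# Hypothetical sketch of the core run-parts semantics discussed in this
# thread: execute the executable regular files in a directory in lexical
# order, stopping on the first failure.  The real dib-run-parts also
# handles environment files, output prefixes and profiling, none of
# which is shown here.
import os
import stat
import subprocess

def run_parts(directory):
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        mode = os.stat(path).st_mode
        # Skip anything that is not an executable regular file.
        if not stat.S_ISREG(mode) or not mode & stat.S_IXUSR:
            continue
        ret = subprocess.call([path])
        if ret != 0:
            raise RuntimeError('%s exited with %d' % (path, ret))
```

The point of keeping a single implementation (wherever it lives) is exactly that both dib and os-refresh-config agree on these semantics.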

-i

[1] https://review.openstack.org/#/c/446285/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Rico Lin
I'm interested in having a Heat on-boarding room :)

2017-03-16 8:39 GMT+08:00 Zhipeng Huang :

> Hi Kendall,
>
> If this could also apply to the unofficial projects, then the Cyborg project
> would also like to have a slot; we could have at least one team member
> do the on-boarding :)
>
> On Thu, Mar 16, 2017 at 8:16 AM, joehuang  wrote:
>
>> Hello, Kendall,
>>
>> Tricircle needs a slot too, if it's not too late :).
>>
>> Thanks for providing the project on-boarding opportunity.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>> --
>> *From:* Kendall Nelson [kennelso...@gmail.com]
>> *Sent:* 16 March 2017 2:20
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* [openstack-dev] [ptls] Project On-Boarding Rooms
>>
>> Hello All!
>>
>> As you may have seen in a previous thread [1], the Forum will offer
>> project on-boarding rooms! The idea is that these rooms will provide a
>> place for new contributors to a given project to find out more about the
>> project, people, and code base. The slots will be spread out throughout the
>> whole Summit and will be 90 min long.
>>
>> We have very limited slots available for interested projects, so it will
>> be a first come, first served process. Let me know if you are interested and
>> I will reserve a slot for you if there are spots left.
>>
>> - Kendall Nelson (diablo_rojo)
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-Marc
>> h/113459.html
>>
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
May The Force of OpenStack Be With You,

*Rico Lin* irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-15 Thread Ken'ichi Ohmichi
2017-03-10 8:51 GMT-08:00 Sean McGinnis :
>> >
>> As far as I can tell:
>> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
>> deprecated in all supported releases.
>> - Glance v1 has been deprecated in Newton, so it's deprecated in all
>> supported releases
>> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
>> Tempest until Mitaka EOL, which is in a month from now
>>
>> We should stop testing these three api versions in the common gate
>> including stable branches now (except for keystone v2 on stable/mitaka
>> which can run for one more month).
>>
>> Are cinder / glance / keystone willing to take over the API tests and run
>> them in their own gate until removal of the API version?
>>
>> Doug
>
> With Cinder's v1 API being deprecated for quite a while now, I would
> actually prefer to just remove all tempest tests and drop the API
> completely. There was some pushback on removal a few cycles back since
> there was concern (rightly so) that a lot of deployments and a lot of
> users were still using it.
>
> I think it has now been marked as deprecated long enough that if anyone
> is still using it, it's just out of obstinacy. We've removed the v1
> api-ref documentation, and the default in the client has been v2 for
> a while.
>
> Unless there's a strong objection, and a valid argument to support it,
> I really would just like to drop v1 from Cinder and not waste any more
> cycles on redoing tempest tests and reconfiguring jobs to support
> something we have stated for over two years that we were no longer going
> to support. Juno went EOL in December of 2015. I really hope it's safe
> now to remove.

OK, let's remove the Cinder v1 API tests from Tempest [1].

Thanks

---
[1]: https://review.openstack.org/#/c/446233/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Zhipeng Huang
Hi Kendall,

If this could also apply to the unofficial projects, then the Cyborg project
would also like to have a slot; we could have at least one team member
do the on-boarding :)

On Thu, Mar 16, 2017 at 8:16 AM, joehuang  wrote:

> Hello, Kendall,
>
> Tricircle needs a slot too, if it's not too late :).
>
> Thanks for providing the project on-boarding opportunity.
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Kendall Nelson [kennelso...@gmail.com]
> *Sent:* 16 March 2017 2:20
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [ptls] Project On-Boarding Rooms
>
> Hello All!
>
> As you may have seen in a previous thread [1], the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it will
> be a first come, first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-
> March/113459.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [codesearch] how to exclude projects?

2017-03-15 Thread Diana Clarke
Hi Ihar:

The OpenStack Hound instance [1] is passed a config.json file listing the
projects to index.

That file is generated by jeepyb here [2] based on the projects
defined in projects.yaml [3].

Here's an example config.json [4] I manually generated a couple of
weeks back when I was looking into why some tripleo projects weren't
being indexed (turns out it was just stale because puppet was
disabled).

IIUC, puppet is still disabled for codesearch, so you'll need to ping
infra once you've modified jeepyb to exclude the openstack/deb-*
repos.

pabelanger in #openstack-infra kindly did a manual puppet run for me
when I recently wanted config.json refreshed, so he'll know what to do
when you're ready.

Finally, there's also this entry in the infra system-config docs [5]
that points to the bug tracker etc.

Hope that helps!

Cheers,

--diana

[1] http://codesearch.openstack.org/
[2] 
https://github.com/openstack-infra/jeepyb/blob/master/jeepyb/cmd/create_hound_config.py
[3] 
https://github.com/openstack-infra/project-config/blob/master/gerrit/projects.yaml
[4] https://gist.github.com/dianaclarke/1533448ed33232f5c1c348ab57cb884e
[5] https://docs.openstack.org/infra/system-config/codesearch.html
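As a rough illustration of the kind of jeepyb change being suggested, the generated config could be filtered before it is written out. This is a hypothetical sketch, not jeepyb's actual code: the `openstack/deb-*` pattern comes from this thread, and the `"repos"` mapping is Hound's config.json layout.

```python
# Hypothetical sketch of the suggested jeepyb change: drop the
# openstack/deb-* repositories from the Hound config before writing
# config.json.  Hound's config keeps a "repos" mapping of repo name to
# settings; the exclusion pattern is an assumption from this thread.
import fnmatch

def exclude_repos(config, pattern='openstack/deb-*'):
    config['repos'] = {
        name: settings
        for name, settings in config['repos'].items()
        if not fnmatch.fnmatch(name, pattern)
    }
    return config
```

Something like this would slot into create_hound_config.py [2] just before the config is serialized.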

On Wed, Mar 15, 2017 at 5:06 PM, Ihar Hrachyshka  wrote:
> Hi all,
>
> lately I noticed that any search in codesearch triggers duplicate
> matches because it seems code for lots of projects is stored in
> openstack/deb- repos, probably used for debian
> packaging. I would like to be able to exclude the projects from the
> search by openstack/deb-* pattern. Is it possible?
>
> Ideally, we would exclude them by default, but I couldn't find where I
> patch codesearch to do it. Is there a repo for the webui that I can
> chew?
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread joehuang
Hello, Kendall,

Tricircle needs a slot too, if it's not too late :).

Thanks for providing the project on-boarding opportunity.

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com]
Sent: 16 March 2017 2:20
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!

As you may have seen in a previous thread [1], the Forum will offer project 
on-boarding rooms! The idea is that these rooms will provide a place for new 
contributors to a given project to find out more about the project, people, and 
code base. The slots will be spread out throughout the whole Summit and will be 
90 min long.

We have very limited slots available for interested projects, so it will be a 
first come, first served process. Let me know if you are interested and I will 
reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Yuval Brik
Interested in a room for Karbor on-boarding.


--Yuval



From: Kendall Nelson 
Sent: Wednesday, March 15, 2017 20:20
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!

As you may have seen in a previous thread [1], the Forum will offer project 
on-boarding rooms! The idea is that these rooms will provide a place for new 
contributors to a given project to find out more about the project, people, and 
code base. The slots will be spread out throughout the whole Summit and will be 
90 min long.

We have very limited slots available for interested projects, so it will be a 
first come, first served process. Let me know if you are interested and I will 
reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-15 Thread Fox, Kevin M
Maybe glance has some stuff that would apply? I think they had a job-style 
API at one point. I could see it being useful to download an image, do some 
conversion or scanning, etc.


From: Taryma, Joanna [joanna.tar...@intel.com]
Sent: Wednesday, March 15, 2017 3:28 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][ironic] Kubernetes-based long running processes

Hi all,

There was an idea of using Kubernetes to handle long running processes for 
Ironic [0]. It could be useful for example for graphical and serial consoles or 
improving scalability (and possibly for other long-running processes in the 
future). Kubernetes would be used as a backend for running processes (as 
containers).
However, the complexity of adding this to ironic would be too laborious, 
considering the use case. At the PTG it was decided not to implement it within 
ironic, but in the future ironic may adopt such a solution if it's common.

I’m reaching out to you to ask if you’re aware of any other use cases that 
could leverage such a solution. If there’s a need for it in other projects, it may 
be a good idea to implement this in some sort of common place.

Kind regards,
Joanna

[0] https://review.openstack.org/#/c/431605/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-15 Thread Taryma, Joanna
Hi all,

There was an idea of using Kubernetes to handle long running processes for 
Ironic [0]. It could be useful for example for graphical and serial consoles or 
improving scalability (and possibly for other long-running processes in the 
future). Kubernetes would be used as a backend for running processes (as 
containers).
However, the complexity of adding this to ironic would be too laborious, 
considering the use case. At the PTG it was decided not to implement it within 
ironic, but in the future ironic may adopt such a solution if it's common.

I’m reaching out to you to ask if you’re aware of any other use cases that 
could leverage such a solution. If there’s a need for it in other projects, it may 
be a good idea to implement this in some sort of common place.

Kind regards,
Joanna

[0] https://review.openstack.org/#/c/431605/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Fox, Kevin M
Interesting. Thanks for the info.

Kevin

From: Boris Bobrov [bre...@cynicmansion.ru]
Sent: Wednesday, March 15, 2017 2:07 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

On 03/15/2017 10:06 PM, Jay Pipes wrote:
> +Boris B
>
> On 03/15/2017 02:55 PM, Fox, Kevin M wrote:
>> I think they are. If they are not, things will break if federation is
>> used for sure. If you know that it is please let me know. I want to
>> deploy federation at some point but was waiting for dashboard support.
>> Now that the dashboard supports it, I may try it soon. It's a no-go
>> still, though, if heat doesn't work with it.
>
> We had a customer engagement recently that had issues with Heat not
> being able to execute certain actions in a federated Keystone
> environment. I believe we learned that Keystone trusts and federation
> were not compatible during this engagement.
>
> Boris, would you mind refreshing memories on this?

They are still broken when a user gets roles from group membership.
At the PTG session the decision was to document that it is fine and that
a user should get concrete role assignments before using heat via
federation. Now there are 2 ways to do it.

1. New auto-provisioning capabilities, which make role assignments
persistent [0]. Which is funny, because group membership is not
persistent.

2. Ask project admin to assign the roles.

[0]https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning

I don't like it though and wanted to talk about it at keystone
meeting. But we didn't make it on time so it will be discussed next
Tuesday. I want this: https://review.openstack.org/#/c/437533/

> Best,
> -jay
>
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Wednesday, March 15, 2017 11:41 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog
>>
>> On 03/15/2017 01:21 PM, Fox, Kevin M wrote:
>>> Other OpenStack subsystems (such as Heat) handle this with Trusts. A
>>> service account is made in a different, usually SQL backed Keystone
>>> Domain and a trust is created associating the service account with
>>> the User.
>>>
>>> This mostly works but does give the trusted account a lot of power,
>>> as the roles by default in OpenStack are pretty coarse grained. That
>>> should be solvable though.
>>
>> I didn't think Keystone trusts and Keystone federation were compatible
>> with each other, though? Did that change recently?
>>
>> Best,
>> -jay
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Sample Roles

2017-03-15 Thread Emilien Macchi
On Wed, Mar 15, 2017 at 5:28 PM, Alex Schultz  wrote:
> Ahoy folks,
>
> For the Pike cycle, we have a blueprint[0] to provide a few basic
> environment configurations with some custom roles.  For this effort
> and to reduce the complexity when dealing with roles I have put
> together a patch to try and organize roles in a more consumable
> fashion[1].  The goal behind this is that we can document the standard
> role configurations and also be able to ensure that when we add a new
> OS::TripleO::Service::* we can make sure they get applied to all of
> the appropriate roles.  The goal of this initial change is to also
> allow us all to reuse the same roles and work from a single
> configuration repository.  Please also review the existing roles in
> the review and make sure we're not missing any services.

Sounds super cool!

> Also my ask is that if you have any standard roles, please consider
> publishing them to the new roles folder[1] so we can also identify
> future CI testing scenarios we would like to support.

Can we document it here maybe?
https://docs.openstack.org/developer/tripleo-docs/developer/tht_walkthrough/tht_walkthrough.html

> Thanks,
> -Alex
>
> [0] 
> https://blueprints.launchpad.net/tripleo/+spec/example-custom-role-environments
> [1] https://review.openstack.org/#/c/445687/

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-15 Thread Gregory Haynes
On Wed, Mar 15, 2017, at 04:22 PM, Ben Nemec wrote:
> While looking through the dib v2 changes after the feature branch was 
> merged to master, I noticed this commit[1], which brings dib-run-parts
> back into dib itself.  Unfortunately I missed the original proposal to 
> do this, but I have some concerns about the impact of this change.
> 
> Originally the split was done so that dib-run-parts and one of the 
> os-*-config projects (looks like os-refresh-config) that depends on it 
> could be included in a stock distro cloud image without pulling in all 
> of dib.  Note that it is still present in the requirements of orc: 
> https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5
> 

I had forgotten about this, but you're completely correct - the
os-refresh-config phases are run via dib-run-parts. The reason for
moving dib-run-parts back into dib was to simplify some of the
installation insanity we had going on; I want to say it was one reason
you couldn't run disk-image-create from a virtualenv without sourcing it
you couldn't run disk-image-create from a virtualenv without sourcing it
first.

> Disk space in a distro cloud image is at a premium, so pulling in a 
> project like diskimage-builder to get one script out of it was not 
> acceptable, at least from what I was told at the time.
> 
> I believe this was done so a distro cloud image could be used with Heat 
> out of the box, hence the heat tag on this message.  I don't know 
> exactly what happened after we split out dib-utils, so I'm hoping 
> someone can confirm whether this requirement still exists.  I think 
> Steve was the one who made the original request.  There were a lot of 
> Steves working on Heat at the time though, so it's possible I'm wrong.
> ;-)
> 
> Anyway, I don't know that anything is broken at the moment since I 
> believe dib-run-parts was brought over unchanged, but the retirement of 
> dib-utils was proposed in https://review.openstack.org/#/c/445617 and I 
> would like to resolve this question before we do anything like that.
> 

I think you're right in that nothing should be broken ATM since the API
is consistent. I agree that it doesn't make a lot of sense to retire
something which is depended on by other non-retired projects. The
biggest issue I can see with us leaving dib-utils in its current state
is there's the opportunity for the two implementations to drift and have
slightly different dib-run-parts APIs. Maybe we could prevent this by
deprecating dib-utils (or leaving a big warning of this tool is frozen
in the README) and leaving os-refresh-config as is. Although it isn't
ideal for os-refresh-config to depend on a deprecated tool I am not sure
anyone is making use of os-refresh-config currently so I am hesitant to
suggest we add back the complexity to DIB.

> Thanks.
> 
> -Ben
> 
> 1: 
> https://github.com/openstack/diskimage-builder/commit/d65678678ec0416550d768f323ceace4d0861bca
> 

Thanks!
- Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][heat] Heat memory usage in the TripleO gate: Ocata edition

2017-03-15 Thread Zane Bitter

On 15/03/17 15:52, Joe Talerico wrote:

Can we start looking at CPU usage as well? Not sure if your data has
this as well...


Usage by Heat specifically? Or just in general?

We're limited by what is logged in the gate, so CPU usage by Heat is 
definitely a non-starter. Picking a random gate run, the Heat memory use 
comes from this file:


http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/ps.txt.gz

which is generated by running `ps` at the end of the test.

We also have this file (including historical data) from dstat:

http://logs.openstack.org/27/445627/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-nonha/9232979/logs/dstat.txt.gz

so there is _some_ data there, it's mostly a question of how to process 
it down to something we can plot against time. My first guess would be 
to do a box-and-whisker-style plot showing the distribution of the 1m 
load average during the test. (CPU usage itself is generally a pretty 
bad measure of... CPU usage.) What problems are you hoping to catch?


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-university] Meeting time poll - ACTION needed

2017-03-15 Thread Ildiko Vancsa
Hi All,

In preparation for the upcoming OpenStack University training in Boston, on May 
6-7, we would like to start having weekly meetings with the training team and 
those of you who would like to get involved to work on the training material 
and format.

I created a Doodle poll to see which are the most feasible time slots: 
http://doodle.com/poll/ccky8cviubdgeb9f

Please ignore the exact date and select all the slots on each day when you can 
most probably make it to the meeting! The time slots as you will see are all in 
UTC. Please keep the DST switch in mind where it hasn’t happened yet.

If you are interested please fill out the form __by the end of this week (March 
19)__.

If you have any questions please let me know.

Thanks and Best Regards,
Ildikó
IRC: ildikov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][neutron-fwaas][networking-bgpvpn][nova-lxd] Removing old data_utils from Tempest

2017-03-15 Thread Ken'ichi Ohmichi
2017-03-14 18:52 GMT-07:00 Takashi Yamamoto :
> thank you for heads up.
>
> On Wed, Mar 15, 2017 at 2:18 AM, Ken'ichi Ohmichi  
> wrote:
>> Hi,
>>
>> Many projects are using data_utils library which is provided by
>> Tempest for creating test resources with random resource names.
>> Now the library is provided as stable interface (tempest.lib) and old
>> unstable interface (tempest.common) will be removed after most
>> projects are switching to the new one.
>> We can see remaining projects from
>> https://review.openstack.org/#/q/status:open+branch:master+topic:tempest-data_utils
>
> are you going to backport?

Backports are not necessary on the Tempest side, because Tempest is
branchless and the same Tempest is used for stable branches.
If a plugin gate uses a released Tempest package, the old data_utils
will still exist after the removal, provided the Tempest version is
capped accordingly.
If a plugin gate uses the latest Tempest, the corresponding patches
will need to be backported to the plugin after the removal.
Which of these applies differs from project to project.
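For context, the helper being moved is essentially a random-name generator. Something along these lines captures the behaviour (this is a sketch, not the library's actual code; plugins should import the real thing from tempest.lib rather than copy it):

```python
# Sketch of what a data_utils-style rand_name helper does: tack a
# random suffix onto a resource name so parallel test runs do not
# collide on resource names.  This mirrors the behaviour only; real
# plugins should use tempest.lib's stable interface instead.
import random

def rand_name(name='', prefix='tempest'):
    suffix = str(random.randint(1, 0x7fffffff))
    result = '%s-%s' % (name, suffix) if name else suffix
    return '%s-%s' % (prefix, result) if prefix else result
```

Switching a plugin from the old tempest.common import to tempest.lib is therefore just an import change; the generated names look the same.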

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Sample Roles

2017-03-15 Thread Alex Schultz
Ahoy folks,

For the Pike cycle, we have a blueprint[0] to provide a few basic
environment configurations with some custom roles.  For this effort
and to reduce the complexity when dealing with roles I have put
together a patch to try and organize roles in a more consumable
fashion[1].  The goal behind this is that we can document the standard
role configurations and ensure that when we add a new
OS::TripleO::Service::* it gets applied to all of
the appropriate roles.  The goal of this initial change is to also
allow us all to reuse the same roles and work from a single
configuration repository.  Please also review the existing roles in
the review and make sure we're not missing any services.

Also my ask is that if you have any standard roles, please consider
publishing them to the new roles folder[1] so we can also identify
future CI testing scenarios we would like to support.

Thanks,
-Alex

[0] 
https://blueprints.launchpad.net/tripleo/+spec/example-custom-role-environments
[1] https://review.openstack.org/#/c/445687/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-university] Team updates

2017-03-15 Thread Ildiko Vancsa
Hi All,

As many of you know, we are working to create a horizontal team with liaisons 
from the project teams and volunteers from the Community overall to work on the 
upstream trainings and help with onboarding new contributors.

We named our efforts and the contribution trainings OpenStack University.

You can find further information on the following wiki page: 
https://wiki.openstack.org/wiki/OpenStack_University

Those of you who have already signed up, please check your details on the wiki page 
and add any missing information.

Those of you who would like to get involved, please filter and use the 
[os-university] tag on the mailing list and join our IRC channel, which is 
called #openstack-university.

If you have any questions feel free to reach out to me and the team on IRC or 
here in mails. :)

Thanks and Best Regards,
Ildikó
IRC: ildikov



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A Summary of Atlanta PTG Summaries (!)

2017-03-15 Thread Ian Y. Choi

Hello Hugh,

I have just found a typo: the I18n team is "I18n", not "I8n" :)
(There are 18 characters in total between 'I' and 'n' in 
"Internationalization" - too long.)

Could you fix it?


With many thanks,

/Ian

Hugh Blemings wrote on 3/13/2017 6:30 PM:

Hi Emilien, All,

On 8/3/17 09:26, Emilien Macchi wrote:
On Mon, Mar 6, 2017 at 10:45 PM, Hugh Blemings  
wrote:

Hiya,

As has been done for the last few Summits/PTGs in Lwood[1] I've pulled
together a list of the various posts to openstack-dev that summarise 
things

at the PTG - projects, videos, anecdotes etc.

The aggregated list is in this blog post;

http://hugh.blemings.id.au/2017/03/07/openstack-ptg-atlanta-2017-summary-of-summaries/ 



Which should aggregate across to planet.openstack.org as well.

I'll update this as further summaries appear, corrections/contributions
welcome.


Thanks Hugh, that's awesome!


Most welcome :)


Can you please remove the first link: TripleO:
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112893.html 


and only keep this one:
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112995.html 



Done, apologies for the delay, got behind in my email this week!

Cheers,
Hugh




[openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-15 Thread Ben Nemec
While looking through the dib v2 changes after the feature branch was 
merged to master, I noticed this commit[1], which brings dib-run-parts 
back into dib itself.  Unfortunately I missed the original proposal to 
do this, but I have some concerns about the impact of this change.


Originally the split was done so that dib-run-parts and one of the 
os-*-config projects (looks like os-refresh-config) that depends on it 
could be included in a stock distro cloud image without pulling in all 
of dib.  Note that it is still present in the requirements of orc: 
https://github.com/openstack/os-refresh-config/blob/master/requirements.txt#L5


Disk space in a distro cloud image is at a premium, so pulling in a 
project like diskimage-builder to get one script out of it was not 
acceptable, at least from what I was told at the time.


I believe this was done so a distro cloud image could be used with Heat 
out of the box, hence the heat tag on this message.  I don't know 
exactly what happened after we split out dib-utils, so I'm hoping 
someone can confirm whether this requirement still exists.  I think 
Steve was the one who made the original request.  There were a lot of 
Steves working on Heat at the time though, so it's possible I'm wrong. ;-)


Anyway, I don't know that anything is broken at the moment since I 
believe dib-run-parts was brought over unchanged, but the retirement of 
dib-utils was proposed in https://review.openstack.org/#/c/445617 and I 
would like to resolve this question before we do anything like that.


Thanks.

-Ben

1: 
https://github.com/openstack/diskimage-builder/commit/d65678678ec0416550d768f323ceace4d0861bca




Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-15 Thread Paul Belanger
On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
> 
> 
> On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> > Hi, all
> > 
> > I submitted a change: https://review.openstack.org/#/c/443964/
> > but seems like it reached a point which requires an additional discussion.
> > 
> > I had a few proposals, it's increasing period to 12 hours instead of 4
> > for start, and to leave it in regular periodic *low* precedence.
> > I think we can start from 12 hours period to see how it goes, although I
> > don't think that 4 only jobs will increase load on OVB cloud, it's
> > completely negligible comparing to current OVB capacity and load.
> > But making its precedence as "low" IMHO completely removes any sense
> > from this pipeline to be, because we already run experimental-tripleo
> > pipeline with this priority, and it could reach timeouts like 7-14
> > hours. So let's assume we ran periodic job, it's queued to run now 12 +
> > "low queue length" - about 20 and more hours. It's even worse than usual
> > periodic job and definitely makes this change useless.
> > I'd like to notice as well that those periodic jobs unlike "usual"
> > periodic are used for repository promotion and their value are equal or
> > higher than check jobs, so it needs to run with "normal" or even "high"
> > precedence.
> 
> Yeah, it makes no sense from an OVB perspective to add these as low priority
> jobs.  Once in a while we've managed to chew through the entire experimental
> queue during the day, but with the containers job added it's very unlikely
> that's going to happen anymore.  Right now we have a 4.5 hour wait time just
> for the check queue, then there's two hours of experimental jobs queued up
> behind that.  All of which means if we started a low priority periodic job
> right now it probably wouldn't run until about midnight my time, which I
> think is when the regular periodic jobs run now.
> 
Lets just give it a try? A 12 hour periodic job with low priority. There is
nothing saying we cannot iterate on this after a few days / weeks / months.
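For reference, a periodic pipeline of the kind being discussed would have looked roughly like this in the Zuul v2 zuul/layout.yaml of the time (the pipeline name, cron expression, and precedence here are illustrative, not a merged change):

```yaml
# Hypothetical addition to openstack-infra/project-config zuul/layout.yaml
# (Zuul v2 era syntax; names and values are illustrative only).
pipelines:
  - name: periodic-tripleo
    description: Periodic TripleO promotion jobs, every 12 hours.
    manager: IndependentPipelineManager
    precedence: low
    trigger:
      timer:
        - time: '0 */12 * * *'
```

Bumping `precedence` later would be a one-line follow-up change, which is what makes "just try it" cheap.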

> > 
> > Thanks
> > 
> > 
> > On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin wrote:
> > 
> > 
> > 
> > On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley wrote:
> > 
> > On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
> > > The TripleO team would like to initiate a conversation about the
> > > possibility of creating a new pipeline in Openstack Infra to allow
> > > a set of jobs to run periodically every four hours
> > [...]
> > 
> > The request doesn't strike me as contentious/controversial. Why not
> > just propose your addition to the zuul/layout.yaml file in the
> > openstack-infra/project-config repo and hash out any resulting
> > concerns via code review?
> > --
> > Jeremy Stanley
> > 
> > 
> > Sounds good to me.
> > We thought it would be nice to walk through it in an email first :)
> > 
> > Thanks
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > --
> > Best regards
> > Sagi Shnaidman
> > 
> > 

Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Andrea Frittoli
QA would love to have a slot for new contributors to QA projects and
plugins.

On Wed, 15 Mar 2017, 8:21 p.m., Rob Cresswell (rcresswe) wrote:

> Horizon would love the opportunity to baffle newcomers.
> This could also be useful for the many plugins that are managed by their
> respective service teams.
>
> Rob
>
> On 15 Mar 2017, at 18:20, Kendall Nelson  wrote:
>
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it will
> be a first come first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Matt Riedemann

On 3/15/2017 1:20 PM, Kendall Nelson wrote:

Hello All!

As you may have seen in a previous thread [1] the Forum will offer
project on-boarding rooms! The idea is that these rooms will provide a
place for new contributors to a given project to find out more about the
project, people, and code base. The slots will be spread out throughout
the whole Summit and will be 90 min long.

We have very limited slots available for interested projects, so it
will be a first come first served process. Let me know if you are
interested and I will reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html





We have at least one person from the Nova team that can host an 
on-boarding session for Nova if we can get a slot.


I promise to send our friendliest people.

--

Thanks,

Matt



Re: [openstack-dev] [stackalytics][neutron] some big tent projects included into 'Neutron Official'

2017-03-15 Thread Ihar Hrachyshka
Any update? The issue still seems to be present.

On Mon, Nov 28, 2016 at 6:42 AM, Ilya Shakhat  wrote:
> Hi Ihar,
>
> This sounds like a bug - the contents of official group should be in sync
> with the governance repo.
> I'll take a look what went wrong with it.
>
> Thanks,
> Ilya
>
> 2016-11-26 2:28 GMT+03:00 Ihar Hrachyshka :
>>
>> Hi all,
>>
>> I am looking at
>> http://stackalytics.com/?project_type=openstack=neutron-group and I
>> see some reviews counted for projects that are for long out of neutron
>> stadium (f.e. dragonflow or kuryr or networking-hyperv). How can we get them
>> excluded from the official neutron stats?
>>
>> I’ve briefly looked at the code, and it seems like the project should
>> reflect what’s defined in governance repo, but apparently it’s not the case.
>> So what does it reflect?
>>
>> Ihar


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Boris Bobrov
On 03/15/2017 10:06 PM, Jay Pipes wrote:
> +Boris B
> 
> On 03/15/2017 02:55 PM, Fox, Kevin M wrote:
>> I think they are. If they are not, things will break if federation is
>> used for sure. If you know that it is please let me know. I want to
>> deploy federation at some point but was waiting for dashboard support.
>> Now that the dashboard supports it, I may try it soon. Its a no-go
>> still though if heat doesn't work with it.
> 
> We had a customer engagement recently that had issues with Heat not
> being able to execute certain actions in a federated Keystone
> environment. I believe we learned that Keystone trusts and federation
> were not compatible during this engagement.
> 
> Boris, would you mind refreshing memories on this?

They are still broken when a user gets roles from group membership.
At the PTG session the decision was to document that this is fine and that
a user should get concrete role assignments before using Heat via
federation. There are now two ways to do it.

1. New auto-provisioning capabilities, which make role assignments
persistent [0]. Which is funny, because group membership is not
persistent.

2. Ask project admin to assign the roles.

[0] https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning
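For readers following along, an auto-provisioning mapping of the kind that document describes looks roughly like this (the project name, role names, and remote attribute are purely illustrative):

```json
{
  "rules": [
    {
      "local": [
        {"user": {"name": "{0}"}},
        {
          "projects": [
            {
              "name": "federated-project",
              "roles": [{"name": "member"}, {"name": "heat_stack_owner"}]
            }
          ]
        }
      ],
      "remote": [{"type": "REMOTE_USER"}]
    }
  ]
}
```

The `projects` block in the `local` part is what makes the role assignments concrete (persistent) rather than derived from group membership.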

I don't like it though and wanted to talk about it at keystone
meeting. But we didn't make it on time so it will be discussed next
Tuesday. I want this: https://review.openstack.org/#/c/437533/

> Best,
> -jay
> 
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Wednesday, March 15, 2017 11:41 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog
>>
>> On 03/15/2017 01:21 PM, Fox, Kevin M wrote:
>>> Other OpenStack subsystems (such as Heat) handle this with Trusts. A
>>> service account is made in a different, usually SQL backed Keystone
>>> Domain and a trust is created associating the service account with
>>> the User.
>>>
>>> This mostly works but does give the trusted account a lot of power,
>>> as the roles by default in OpenStack are pretty coarse grained. That
>>> should be solvable though.
>>
>> I didn't think Keystone trusts and Keystone federation were compatible
>> with each other, though? Did that change recently?
>>
>> Best,
>> -jay
>>


[openstack-dev] [codesearch] how to exclude projects?

2017-03-15 Thread Ihar Hrachyshka
Hi all,

Lately I noticed that any search in codesearch triggers duplicate
matches because it seems code for lots of projects is stored in
openstack/deb- repos, probably used for Debian
packaging. I would like to be able to exclude these projects from the
search by an openstack/deb-* pattern. Is it possible?

Ideally, we would exclude them by default, but I couldn't find where to
patch codesearch to do it. Is there a repo for the web UI that I can
chew on?

Ihar



Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-15 Thread Ben Nemec



On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:

Hi, all

I submitted a change: https://review.openstack.org/#/c/443964/
but seems like it reached a point which requires an additional discussion.

I had a few proposals, it's increasing period to 12 hours instead of 4
for start, and to leave it in regular periodic *low* precedence.
I think we can start from 12 hours period to see how it goes, although I
don't think that 4 only jobs will increase load on OVB cloud, it's
completely negligible comparing to current OVB capacity and load.
But making its precedence as "low" IMHO completely removes any sense
from this pipeline to be, because we already run experimental-tripleo
pipeline with this priority, and it could reach timeouts like 7-14
hours. So let's assume we ran periodic job, it's queued to run now 12 +
"low queue length" - about 20 and more hours. It's even worse than usual
periodic job and definitely makes this change useless.
I'd like to notice as well that those periodic jobs unlike "usual"
periodic are used for repository promotion and their value are equal or
higher than check jobs, so it needs to run with "normal" or even "high"
precedence.


Yeah, it makes no sense from an OVB perspective to add these as low 
priority jobs.  Once in a while we've managed to chew through the entire 
experimental queue during the day, but with the containers job added 
it's very unlikely that's going to happen anymore.  Right now we have a 
4.5 hour wait time just for the check queue, then there's two hours of 
experimental jobs queued up behind that.  All of which means if we 
started a low priority periodic job right now it probably wouldn't run 
until about midnight my time, which I think is when the regular periodic 
jobs run now.




Thanks


On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin wrote:



On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley wrote:

On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
> The TripleO team would like to initiate a conversation about the
> possibility of creating a new pipeline in Openstack Infra to allow
> a set of jobs to run periodically every four hours
[...]

The request doesn't strike me as contentious/controversial. Why not
just propose your addition to the zuul/layout.yaml file in the
openstack-infra/project-config repo and hash out any resulting
concerns via code review?
--
Jeremy Stanley


Sounds good to me.
We thought it would be nice to walk through it in an email first :)

Thanks








--
Best regards
Sagi Shnaidman




Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-15 Thread Sagi Shnaidman
+1 +1 !

On Wed, Mar 15, 2017 at 5:44 PM, John Trowbridge  wrote:

> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
>
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liason, with weekly summary emails of our
> meetings to this list.
>
> - trown
>
>



-- 
Best regards
Sagi Shnaidman


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Rob Cresswell (rcresswe)
Horizon would love the opportunity to baffle newcomers.
This could also be useful for the many plugins that are managed by their 
respective service teams.

Rob

On 15 Mar 2017, at 18:20, Kendall Nelson wrote:

Hello All!

As you may have seen in a previous thread [1] the Forum will offer project 
on-boarding rooms! The idea is that these rooms will provide a place for new 
contributors to a given project to find out more about the project, people, and 
code base. The slots will be spread out throughout the whole Summit and will be 
90 min long.

We have very limited slots available for interested projects, so it will be a 
first come first served process. Let me know if you are interested and I will 
reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html


[openstack-dev] [openstack-ansible] Choosing a name for the OSA Python package

2017-03-15 Thread Nolan Brubaker
Hello,


I'm working on making the OpenStack-Ansible inventory code available within a 
Python package, rather than simply living in the git repository. While 
initially I used 'openstack_ansible' as a name in my patch [1], I'm open to 
suggestions for something slightly less confusing, if others are amenable.


A list of proposed names is at [2] for voting or suggestion.


[1] https://review.openstack.org/#/c/418076/

[2] https://etherpad.openstack.org/p/osa-python-package-name


Thanks,


Nolan


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread Paul Belanger
On Wed, Mar 15, 2017 at 09:41:16AM +0100, Thomas Herve wrote:
> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow  wrote:
> 
> > * How does reloading work (does it)?
> 
> No. There is nothing that we can do in oslo that will make services
> magically reload configuration. It's also unclear to me if that's
> something to do. In a containerized environment, wouldn't it be
> simpler to deploy new services? Otherwise, supporting signal based
> reload as we do today should be trivial.
> 
> > * What's the operational experience (editing a ini file is about the lowest
> > bar we can possible get to, for better and/or worse).
> >
> > * Does this need to be a new oslo.config backend or is it better suited by
> > something like the following (external programs loop)::
> >
> >etcd_client = make_etcd_client(args)
> >while True:
> >has_changed = etcd_client.get_new_config("/blahblah") # or use a
> > watch
> >if has_changed:
> >   fetch_and_write_ini_file(etcd_client)
> >   trigger_reload()
> >time.sleep(args.wait)
> 
> That's confd: https://github.com/kelseyhightower/confd/ . Bonus
> points; it supports a ton of other backends. One solution is to
> provide templates and documentation to use confd with OpenStack.
> 
++

Let's not get into the business of writing config-management tools in
OpenStack; let's reuse what exists today.

oslo.config should just write to etcd, and other tools would be used, confd, to
trigger things.

> -- 
> Thomas
> 
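For illustration, the external-loop idea sketched earlier in the thread can be factored so the backend client is pluggable. This is a minimal sketch with hypothetical callables, not a real oslo.config interface — and as noted above, confd already implements the full watch/template/reload cycle:

```python
def sync_once(fetch_config, write_ini, trigger_reload, last_seen=None):
    """One iteration of the external watch-loop idea from this thread.

    All three callables are hypothetical stand-ins, not a real
    oslo.config API: fetch_config() returns the current config blob
    (e.g. the value of an etcd key), write_ini() rewrites the INI
    file, and trigger_reload() pokes the service (e.g. sends SIGHUP).
    Returns the value now in effect, to pass back as last_seen.
    """
    current = fetch_config()
    if current != last_seen:
        write_ini(current)
        trigger_reload()
    return current
```

In practice fetch_config could wrap an etcd client's get or watch call, but the point of the confd suggestion is that this fetch/template/reload loop already exists, with many backends.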


Re: [openstack-dev] [tripleo][heat] Heat memory usage in the TripleO gate: Ocata edition

2017-03-15 Thread Joe Talerico
On Tue, Mar 14, 2017 at 4:06 PM, Zane Bitter  wrote:
> Following up on the previous thread:
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-January/109748.html
>
> Here is the latest data, which includes the Ocata release:
>
> https://fedorapeople.org/~zaneb/tripleo-memory/20170314/heat_memused.png
>
> As you can see, there has been one jump in memory usage. This was due to the
> TripleO patch https://review.openstack.org/#/c/425717/
>
> Unlike previous increases in memory usage, I was able to warn of this one in
> the advance, and it was deemed an acceptable trade-off. The reasons for the
> increase are unknown - the addition of more stuff to the endpoint map seemed
> like a good bet, but one attempt to mitigate that[1] had no effect and I'm
> increasingly unconvinced that this could account for the magnitude of the
> increase.
>
> In any event, memory usage remains around the 1GiB level, none of the other
> complexity increases during Ocata have had any discernible effect, and Heat
> has had no memory usage regressions.
>
> Stay tuned for the next exciting edition, in which I try to figure out how
> to do more than 3 colors on the plot.

Nice work Zane! Thanks for this!

Can we start looking at CPU usage as well? Not sure if your data has
this as well...

Thanks!
Joe

>
> cheers,
> Zane.
>
>
> [1] https://review.openstack.org/#/c/427836/
>


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Zane Bitter

On 15/03/17 14:41, Jay Pipes wrote:

On 03/15/2017 01:21 PM, Fox, Kevin M wrote:

Other OpenStack subsystems (such as Heat) handle this with Trusts. A
service account is made in a different, usually SQL backed Keystone
Domain and a trust is created associating the service account with the
User.

This mostly works but does give the trusted account a lot of power, as
the roles by default in OpenStack are pretty coarse grained. That
should be solvable though.


I didn't think Keystone trusts and Keystone federation were compatible
with each other, though?


You're correct, you have to pick one or the other.


Did that change recently?


Nope. We did discuss it at the PTG:

https://etherpad.openstack.org/p/pike-ptg-cross-project-federation

- ZB
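For readers unfamiliar with the trust mechanism discussed above: a trust is created through Keystone's v3 OS-TRUST API by delegating a subset of roles on a project from a trustor to a trustee (such as a service account). A sketch of the request body — the IDs and role names below are placeholders, and this mirrors the Identity v3 API rather than anything Heat-specific:

```python
def build_trust_body(trustor_id, trustee_id, project_id, role_names,
                     impersonation=True):
    """Build the JSON body for POST /v3/OS-TRUST/trusts.

    Mirrors the Identity v3 trusts API shape; all argument values
    passed in by a caller would be real Keystone IDs.
    """
    return {
        "trust": {
            "trustor_user_id": trustor_id,
            "trustee_user_id": trustee_id,
            "project_id": project_id,
            "impersonation": impersonation,
            "roles": [{"name": name} for name in role_names],
        }
    }
```

The coarse-grained power Kevin mentions comes from that `roles` list: delegating a role delegates everything the role allows.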



Re: [openstack-dev] [kolla] Proposing duonghq for core

2017-03-15 Thread Vikram Hosakote (vhosakot)
+1  Great job Duong!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: Michał Jastrzębski
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, March 08, 2017 at 11:21 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [kolla] Proposing duonghq for core

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
> > Team,
> > 
> > So one more thing popped up again on IRC:
> > https://etherpad.openstack.org/p/oslo.config_etcd_backend
> > 
> > What do you think? interested in this work?
> > 
> > Thanks,
> > Dims
> > 
> > PS: Between this thread and the other one about Tooz/DLM and
> > os-lively, we can probably make a good case to add etcd as a base
> > always-on service.
> 
> As I mentioned in the other thread, there was specific and strong
> anti-etcd sentiment in Tokyo which is why we decided to use an
> abstraction. I continue to be in favor of us having one known service in
> this space, but I do think that it's important to revisit that decision
> fully and in context of the concerns that were raised when we tried to
> pick one last time.
> 
> It's worth noting that there is nothing particularly etcd-ish about
> storing config that couldn't also be done with zk and thus just be an
> additional api call or two added to Tooz with etcd and zk drivers for it.
> 

The fun* thing about working with these libraries is managing the
interdependencies. If we're going to have an abstraction library that
provides configuration options for selecting the backend, like we do in
oslo.db and oslo.messaging, then the configuration library can't use it
or we have a circular dependency.

Luckily, tooz does not currently use oslo.config. So, oslo.config could
use tooz and we could create an oslo.dlm library with a shallow
interface mapping config options to tooz calls to open connections or
whatever we need from tooz in an application. Then apps could use
oslo.dlm instead of calling into tooz directly and the configuration of
the backend would be hidden from the application developer.

Doug

* your definition of "fun" may be different than mine
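As a sketch of the shallow option-to-tooz mapping described above (all names here are hypothetical — no such oslo.dlm library exists):

```python
def backend_url(driver, host, port):
    """Map config-style options to a tooz backend URL.

    Illustrative only: the option names and URL scheme are
    assumptions. A real "oslo.dlm" would register these as
    oslo.config options and then hand the resulting URL to
    tooz.coordination.get_coordinator(url, member_id), keeping
    the backend choice out of the application code.
    """
    return "{}://{}:{}".format(driver, host, port)
```

Because tooz itself does not depend on oslo.config, this layering avoids the circular dependency: oslo.config can use tooz, while applications configure the backend through the thin wrapper.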



Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-15 Thread Dave McCowan (dmccowan)


On 3/15/17, 6:51 AM, "Julien Danjou"  wrote:

>On Mon, Mar 13 2017, Clint Byrum wrote:
>
>> To me, Oslo is a bunch of libraries that encompass "the way OpenStack
>> does X". When X is key management, projects are, AFAICT, universally
>> using Castellan at the moment. So I think it fits in Oslo
>> conceptually.
>
>It would be cool if it could rather be "the way you can do XXX in
>Python" rather than being too much OpenStack centric. :)
>
>> As far as what benefit there is to renaming it, the biggest one is
>> divesting Castellan of the controversy around Barbican. There's no
>> disagreement that explicitly handling key management is necessary. There
>> is, however, still hesitance to fully adopt Barbican in that role. In
>> fact I heard about some alternatives to Barbican, namely "Vault"[1] and
>> "Tang"[2], that may be useful for subsets of the community, or could
>> even grow into de facto standards for key management.
>>
>> So, given that there may be other backends, and the developers would
>> like to embrace that, I see value in renaming. It would help, I think,
>> Castellan's developers to be able to focus on key management and not
>> have to explain to every potential user "no we're not Barbican's cousin,
>> we're just an abstraction..".
>
>I don't think the Castellan name is a problem in itself, because at
>least to me it does not sound like it's Barbican specific. I'd prefer it
>to be a generic Python library that supports an OpenStack project as one
>of its drivers. So I'd hate to have it named oslo.foobar.
>
>As far as moving it under the Oslo library, I understand that the point
>would be to make a point stating that this library is not a
>Barbican-specific solution etc. I think it addresses the problem in the
>wrong… but pragmatic way.
>
>What I think would be more interesting is to rename the _Barbican team_
>to the "People-who-work-on-keychain-stuff team". That team would build 2
>things, which are Barbican and Castellan (and maybe more later). That'd
>make more sense than trying to fit everything in Oslo, and would also
>help other projects to do the same thing in the future, and, maybe, one
>day, alleviate the whole problem.
>
>Other than that, sure, we can move it to Oslo I guess. :)

The Barbican community has always been the
"People-who-work-on-key-management-stuff" team.  We launched Castellan in
2015 with the explicit purpose of being a generic abstraction for key
managers.[1]  At that time, we envisioned developing a KMIP plugin to
connect directly to an HSM.  Currently, the interest level is higher
around a plugin for software-based secure storage, such as Vault.
However, patches for additional plugins have not been forthcoming.

Castellan was designed from the ground up to be a generic abstraction, and
I, and the rest of the Barbican community, hope to see more driver
development for it.  If a change of name or governance helps, we're all
for it.  But, I hope everyone knows there is no push back from the
"People-who-work-on-key-management-stuff".  We welcome all contributions.

In addition, we want the Castellan library to be the go-to library for any
project that wants to add key management.  It is already used by Nova,
Cinder, Glance, Neutron, Octavia, and Magnum.  If a change in name or
governance helps other projects adopt Castellan, again, we're all for it.
In the meantime, we encourage and stand ready to help all adopters.

dave-mccowan
PTL, "People-who-work-on-key-management-stuff"

[1] https://wiki.openstack.org/wiki/Castellan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread Doug Hellmann
Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow  wrote:
> 
> > * How does reloading work (does it)?
> 
> No. There is nothing that we can do in oslo that will make services
> magically reload configuration. It's also unclear to me if that's
> something to do. In a containerized environment, wouldn't it be
> simpler to deploy new services? Otherwise, supporting signal based
> reload as we do today should be trivial.

Reloading works today with files, that's why the question is important
to think through. There is a special flag to set on options that are
"mutable" and then there are functions within oslo.config to reload.
Those are usually triggered when a service gets a SIGHUP or something similar.

We need to decide what happens to a service's config when that API
is used and the backend is etcd. Maybe nothing, because every time
any config option is accessed the read goes all the way through to
etcd? Maybe a warning is logged because we don't support reloads?
Maybe an error is logged? Or maybe we flush the local cache and start
reading from etcd on future accesses?
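As a rough sketch of that decision point — not oslo.config's real internals (there, the entry points are options declared with mutable=True and ConfigOpts.mutate_config_files()) — a backend-agnostic "mutate" could look like this, with the value source pluggable so it could be an ini file or an etcd prefix:

```python
# Simplified stand-in for the behavior discussed above; this is NOT
# oslo.config's implementation, just an illustration of what a reload
# ("mutate") could mean once the value source is pluggable.


class MutableConf:
    def __init__(self, source, mutable_keys):
        self._source = source            # callable returning {key: value}
        self._mutable = set(mutable_keys)
        self._cache = dict(source())     # values seen at startup

    def get(self, key):
        return self._cache[key]

    def mutate(self):
        """Re-read the source; apply only mutable options; report changes."""
        fresh = self._source()
        changed = {}
        for key in self._mutable:
            if fresh.get(key) != self._cache.get(key):
                changed[key] = (self._cache.get(key), fresh.get(key))
                self._cache[key] = fresh.get(key)
        return changed
```

With a file backend, mutate() is what a SIGHUP would trigger; with etcd, the open question above is whether mutate() becomes a no-op (because reads always go through), a logged warning, or a cache flush like this one.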

> > * What's the operational experience (editing an ini file is about the lowest
> > bar we can possibly get to, for better and/or worse).
> >
> > * Does this need to be a new oslo.config backend or is it better suited by
> > something like the following (external programs loop)::
> >
> >etcd_client = make_etcd_client(args)
> >while True:
> >has_changed = etcd_client.get_new_config("/blahblah") # or use a
> > watch
> >if has_changed:
> >   fetch_and_write_ini_file(etcd_client)
> >   trigger_reload()
> >time.sleep(args.wait)
> 
> That's confd: https://github.com/kelseyhightower/confd/ . Bonus
> points; it supports a ton of other backends. One solution is to
> provide templates and documentation to use confd with OpenStack.

That sounds like it might also be useful, but is probably a separate
issue.
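Fleshing out the external-agent loop from the quoted snippet: the client interface below is a made-up stand-in for a real etcd binding, not an actual library API — this is essentially the loop that confd implements with templates and backends.

```python
import time


def run_config_agent(client, key, write_ini, reload_service,
                     interval=5.0, max_cycles=None):
    """Poll an etcd-like store and push config changes to a service.

    client.get(key) is assumed to return (value, revision); the client and
    key layout are illustrative, not a real etcd API. write_ini renders the
    fetched config to disk, and reload_service triggers the reload (e.g.
    sends SIGHUP). max_cycles exists only so the loop can be bounded.
    """
    last_revision = None
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        value, revision = client.get(key)
        if revision != last_revision:
            write_ini(value)       # only rewrite the file when data changed
            reload_service()
            last_revision = revision
        cycles += 1
        time.sleep(interval)
```

A watch-based variant would block on the store instead of polling, but the write-then-reload shape stays the same.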

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Doug Hellmann
Excerpts from Mike Bayer's message of 2017-03-15 12:39:48 -0400:
> 
> On 03/15/2017 11:42 AM, Sean Dague wrote:
> > Perhaps, but in doing so oslo.db is going to get the pin and uc from
> > stable/ocata, which is going to force it back to SQLA < 1.1, which will
> > prevent oslo.db changes that require >= 1.1 to work.
> 
> so do we want to make that job non-voting or something like that?

Is that job a holdover from before we had good constraints pinning in
all of our stable branches? Do we still need it? We need to test a
change to oslo.db's stable branch with the other code on that stable
branch, but do we need to test oslo.db's master branch that way?

Someone with more current Oslo memory may remember why we added that job
in the first place, so let's not just remove it until we understand why
it's there.

Doug

> 
> >
> > -Sean
> >
> > On 03/15/2017 11:26 AM, Roman Podoliaka wrote:
> >> Isn't the purpose of that specific job -
> >> gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata - to test a
> >> change to the library master branch with stable releases (i.e. Ocata)
> >> - of all other components?
> >>
> >> On Wed, Mar 15, 2017 at 5:20 PM, Sean Dague  wrote:
> >>> On 03/15/2017 10:38 AM, Mike Bayer wrote:
> 
> 
>  On 03/15/2017 07:30 AM, Sean Dague wrote:
> >
> > The problem was the original patch kept a cap on SQLA, just moved it up
> > to the next pre-release, not realizing the caps in general are the
> > concern by the requirements team. So instead of upping the cap, I just
> > removed it entirely. (It also didn't help on clarity that there was a
> > completely unrelated fail in the tests which made it look like the
> > system was stopping this.)
> >
> > This should hopefully let new SQLA releases very naturally filter out to
> > all our services and libraries.
> >
> > -Sean
> >
> 
>  so the failure I'm seeing now is *probably* one I saw earlier when we
>  tried to do this, the tempest run fails on trying to run a keystone
>  request, but I can't find the same error in the logs this time.
> 
>  In an earlier build of https://review.openstack.org/#/c/423192/, we saw
>  this:
> 
>  ContextualVersionConflict: (SQLAlchemy 1.1.5
>  (/usr/local/lib/python2.7/dist-packages),
>  Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
>  'keystone']))
> 
>  stack trace was in the apache log:  
>  http://paste.openstack.org/show/601583/
> 
> 
>  but now on our own oslo.db build, the same jobs are failing and are
>  halting at keystone, but I can't find any error:
> 
>  the failure is:
> 
> 
>  http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/
> 
> 
>  and is on:  https://review.openstack.org/#/c/445930/
> 
> 
>  if someone w/ tempest expertise could help with this that would be great.
> >>>
> >>> It looks like oslo.db master is being used with ocata services?
> >>> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434
> >>>
> >>>
> >>> I suspect that's the root issue. That should be stable/ocata branch, 
> >>> right?
> >>>
> >>> -Sean
> >>>
> >>> --
> >>> Sean Dague
> >>> http://dague.net
> >>>
> >>> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2017-03-15 15:06:26 -0400:
> +Boris B
> 
> On 03/15/2017 02:55 PM, Fox, Kevin M wrote:
> > I think they are. If they are not, things will break if federation is used
> > for sure. If you know that it is, please let me know. I want to deploy
> > federation at some point but was waiting for dashboard support. Now that
> > the dashboard supports it, I may try it soon. It's still a no-go, though, if
> > heat doesn't work with it.
> 
> We had a customer engagement recently that had issues with Heat not 
> being able to execute certain actions in a federated Keystone 
> environment. I believe we learned that Keystone trusts and federation 
> were not compatible during this engagement.
> 
> Boris, would you mind refreshing memories on this?

Is it possible that this was because there was no writable domain for
Heat to create instance users in?

Because when last I used Heat long ago, Heat straight up just won't work
without trusts (since you have to give Heat a trust for it to be able
to do anything for you). Prior to that Heat was storing your creds in
its database... pretty sure that's long gone.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Jay Pipes

+Boris B

On 03/15/2017 02:55 PM, Fox, Kevin M wrote:

I think they are. If they are not, things will break if federation is used for
sure. If you know that it is, please let me know. I want to deploy federation at
some point but was waiting for dashboard support. Now that the dashboard
supports it, I may try it soon. It's still a no-go, though, if heat doesn't work
with it.


We had a customer engagement recently that had issues with Heat not 
being able to execute certain actions in a federated Keystone 
environment. I believe we learned that Keystone trusts and federation 
were not compatible during this engagement.


Boris, would you mind refreshing memories on this?

Best,
-jay



From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, March 15, 2017 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

On 03/15/2017 01:21 PM, Fox, Kevin M wrote:

Other OpenStack subsystems (such as Heat) handle this with Trusts. A service
account is made in a different, usually SQL-backed Keystone domain, and a trust
is created associating the service account with the User.

This mostly works but does give the trusted account a lot of power, as the
roles by default in OpenStack are pretty coarse-grained. That should be
solvable, though.


I didn't think Keystone trusts and Keystone federation were compatible
with each other, though? Did that change recently?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Pike summit attendance

2017-03-15 Thread Matt Riedemann
I've added an "Attendance" section to the bottom of the Nova Forum 
Brainstorming etherpad [1]. If you are going to the summit, or are at 
least pretty sure you'll be going, please add your name to that list. 
This will help me get an idea of who is all going to be there so I can 
take that into account when scheduling sessions (or whatever we'll be 
doing at the Forum).


[1] https://etherpad.openstack.org/p/BOS-Nova-brainstorming

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Fox, Kevin M
I think they are. If they are not, things will break if federation is used for
sure. If you know that it is, please let me know. I want to deploy federation at
some point but was waiting for dashboard support. Now that the dashboard
supports it, I may try it soon. It's still a no-go, though, if heat doesn't work
with it.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Wednesday, March 15, 2017 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

On 03/15/2017 01:21 PM, Fox, Kevin M wrote:
> Other OpenStack subsystems (such as Heat) handle this with Trusts. A service
> account is made in a different, usually SQL-backed Keystone domain, and a
> trust is created associating the service account with the User.
>
> This mostly works but does give the trusted account a lot of power, as the
> roles by default in OpenStack are pretty coarse-grained. That should be
> solvable, though.

I didn't think Keystone trusts and Keystone federation were compatible
with each other, though? Did that change recently?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swg][tc] Moving Stewardship Working Group meeting

2017-03-15 Thread Jim Rollenhagen
On Wed, Mar 15, 2017 at 8:15 AM, John Garbutt  wrote:

> On 15 March 2017 at 09:50, Thierry Carrez  wrote:
> > Colette Alexander wrote:
> >> Currently the Stewardship Working Group meetings every other Thursday at
> >> 1400 UTC.
> >>
> >> We've had a couple of pings from folks who are interested in joining us
> >> for meetings that live in US Pacific Time, and that Thursday time isn't
> >> terribly conducive to them being able to make meetings. So - the
> >> question is when to move it to, if we can.
> >>
> >> A quick glance at the rest of the Thursday schedule shows the 1500 and
> >> 1600 time slots available (in #openstack-meeting I believe). I'm
> >> hesitant to go beyond that in the daytime because we also need to
> >> accommodate attendees in Western Europe.
> >>
> >> Thoughts on whether either of those works from SWG members and anyone
> >> who might like to drop in? We can also look into having meetings once a
> >> week, and potentially alternating times between the two to help
> >> accommodate the spread of people.
> >>
> >> Let me know what everyone thinks - and for this week I'll see anyone who
> >> can make it at 1400 UTC on Thursday.
> >
> > Alternatively, we could try to come up with ways to avoid regular
> > meetings altogether. That would certainly be a bit experimental, but the
> > SWG sounds like a nice place to experiment with more inclusive ways of
> > coordination.
> >
> > IMHO meetings serve three purposes. The first is to provide a regular
> > rhythm and force people to make progress on stated objectives. You give
> > status updates, lay down actions, make sure nothing is stuck. The second
> > is to provide quick progress on specific topics -- by having multiple
> > people around at the same time you can quickly iterate through ideas and
> > options. The third is to expose an entry point to new contributors: if
> > they are interested they will look for a meeting to get the temperature
> > on a workgroup and potentially jump in.
> >
> > I'm certainly guilty of being involved in too many things, so purpose
> > (1) is definitely helpful to force me to make regular progress, but it
> > also feels like something a good status board could do better, and async.
> >
> > The second purpose is definitely helpful, but I'd say that ad-hoc
> > meetings (or discussions in a IRC channel) are a better way to achieve
> > the result. You just need to come up with a one-time meeting point where
> > all the interested parties will be around, and that's usually easier
> > than to pick a weekly time that will work for everyone all the time. We
> > just need to invent tooling that would facilitate organizing and
> > tracking those.
> >
> > For the third, I think using IRC channels as the on-boarding mechanism
> > is more efficient -- meetings are noisy, busy and not so great for
> > newcomers. If we ramped up channel activity (and generally made IRC
> > channels more discoverable), I don't think any newcomer would ever use
> > meetings to "tune in".
> >
> > Am I missing something that only meetings could ever provide ? If not it
> > feels like the SWG could experiment with meeting-less coordination by
> > replacing it with better async status coordination / reminder tools,
> > some framework to facilitate ad-hoc discussions, and ramping up activity
> > in IRC channel. If that ends up being successful, we could promote our
> > techniques to the rest of OpenStack.
>
> +1 for trying out a meeting-less group ourselves.
>

I'm also +1.


>
> In the absence of tooling, could we replace the meeting with weekly
> email reporting current working streams, and whats planned next? That
> would include fixing any problems we face trying to work well
> together.
>

This is a good idea, I've quite liked cdent's weekly placement update,
maybe something similar, and others can chime in with their own updates/etc.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-15 Thread Robert Kukura

RFE is at https://bugs.launchpad.net/neutron/+bug/1673142.

-Bob


On 3/13/17 2:37 PM, Robert Kukura wrote:


Hi Kevin,

I will file the RFE this week.

-Bob


On 3/13/17 2:05 PM, Kevin Benton wrote:

Hi,

At the PTG we briefly discussed a generic system for verifying that 
the appropriate drivers are enforcing a particular user-requested 
feature in ML2 (e.g. security groups, qos, etc).


Is someone planning on working on this for Pike? If so, can you 
please file an RFE so we can prioritize it appropriately? We have to 
decide if we are going to block features based on the enforcement by 
this framework.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Lance Bragstad
I would love to have one for on-boarding new identity developers.

On Wed, Mar 15, 2017 at 1:43 PM, Michał Jastrzębski 
wrote:

> One for Kolla too please:)
>
> On 15 March 2017 at 11:35, Чадин Александр (Alexander Chadin)
>  wrote:
> > +1 for Watcher
> >
> > Best Regards,
> > _
> > Alexander Chadin
> > OpenStack Developer
> > Servionica LTD
> > a.cha...@servionica.ru
> > +7 (916) 693-58-81
> >
> > On 15 Mar 2017, at 21:20, Kendall Nelson  wrote:
> >
> > Hello All!
> >
> > As you may have seen in a previous thread [1], the Forum will offer
> > project on-boarding rooms! The idea is that these rooms will provide a
> > place for new contributors to a given project to find out more about the
> > project, people, and code base. The slots will be spread out throughout
> > the whole Summit and will be 90 minutes long.
> >
> > We have a very limited number of slots available for interested projects,
> > so it will be a first-come, first-served process. Let me know if you are
> > interested and I will reserve a slot for you if there are spots left.
> >
> > - Kendall Nelson (diablo_rojo)
> >
> > [1]
> > http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Michał Jastrzębski
One for Kolla too please:)

On 15 March 2017 at 11:35, Чадин Александр (Alexander Chadin)
 wrote:
> +1 for Watcher
>
> Best Regards,
> _
> Alexander Chadin
> OpenStack Developer
> Servionica LTD
> a.cha...@servionica.ru
> +7 (916) 693-58-81
>
> On 15 Mar 2017, at 21:20, Kendall Nelson  wrote:
>
> Hello All!
>
> As you may have seen in a previous thread [1], the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 minutes long.
>
> We have a very limited number of slots available for interested projects,
> so it will be a first-come, first-served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Jay Pipes

On 03/15/2017 01:21 PM, Fox, Kevin M wrote:

Other OpenStack subsystems (such as Heat) handle this with Trusts. A service
account is made in a different, usually SQL-backed Keystone domain, and a trust
is created associating the service account with the User.

This mostly works but does give the trusted account a lot of power, as the
roles by default in OpenStack are pretty coarse-grained. That should be
solvable, though.


I didn't think Keystone trusts and Keystone federation were compatible 
with each other, though? Did that change recently?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Alexander Chadin
+1 for Watcher

Best Regards,
_
Alexander Chadin
OpenStack Developer
Servionica LTD
a.cha...@servionica.ru
+7 (916) 693-58-81

On 15 Mar 2017, at 21:20, Kendall Nelson 
> wrote:

Hello All!

As you may have seen in a previous thread [1], the Forum will offer project
on-boarding rooms! The idea is that these rooms will provide a place for new
contributors to a given project to find out more about the project, people, and
code base. The slots will be spread out throughout the whole Summit and will be
90 minutes long.

We have a very limited number of slots available for interested projects, so it
will be a first-come, first-served process. Let me know if you are interested
and I will reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread John Dickinson
I'm interested in having a room for Swift.

--John




On 15 Mar 2017, at 11:20, Kendall Nelson wrote:

> Hello All!
>
> As you may have seen in a previous thread [1], the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 minutes long.
>
> We have a very limited number of slots available for interested projects,
> so it will be a first-come, first-served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread Kendall Nelson
Hello All!

As you may have seen in a previous thread [1], the Forum will offer project
on-boarding rooms! The idea is that these rooms will provide a place for
new contributors to a given project to find out more about the project,
people, and code base. The slots will be spread out throughout the whole
Summit and will be 90 minutes long.

We have a very limited number of slots available for interested projects, so
it will be a first-come, first-served process. Let me know if you are
interested and I will reserve a slot for you if there are spots left.

- Kendall Nelson (diablo_rojo)

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Small steps for Go

2017-03-15 Thread Davanum Srinivas
Bruno,

Here it is:  https://etherpad.openstack.org/p/go-and-containers (it
was on the first email of this thread)

Thanks,
Dims

On Wed, Mar 15, 2017 at 1:46 PM, Bruno Morel  wrote:
> Still, I’m intrigued by this apparent duplication of efforts...
> especially since the discussions at the committee level tended to go the
> ‘inclusive’ way toward other communities
> (http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
>  section “Adjacent technologies”), can someone point me to the rationale for
> doing the gophercloud work?
>
> *No judging here :) *
> I’m guessing someone has done the work already but I can’t find the etherpad 
> Steve is talking about (when will those etherpads be searchable? :P)
>
> Just trying to understand if we’re doing it for explicit, visible and logical 
> reasons and where to put our efforts if we need to participate in the ‘golang 
> for OpenStack’ efforts :)
>
> Tks
>
> Bruno
>
>
> On 2017-03-13, 8:13 PM, "Steve Gordon"  wrote:
>
> - Original Message -
> > From: "Steve Gordon" 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> > Sent: Monday, March 13, 2017 8:11:34 PM
> > Subject: Re: [openstack-dev] [all] Small steps for Go
> >
> > - Original Message -
> > > From: "Clint Byrum" 
> > > To: "openstack-dev" 
> > > Sent: Monday, March 13, 2017 1:44:19 PM
> > > Subject: Re: [openstack-dev] [all] Small steps for Go
> > >
> > > Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
> > > > Update:
> > > >
> > > > * We have a new git repo (EMPTY!) for the commons work -
> > > > http://git.openstack.org/cgit/openstack/golang-commons/
> > > > * The golang-client has little code, but lot of potential -
> > > > https://git.openstack.org/cgit/openstack/golang-client/
> > > >
> > >
> > > So, we're going to pretend gophercloud doesn't exist and continue to
> > > isolate ourselves from every other community?
> >
> > I'd add that gophercloud [1] is what the Kubernetes cloud provider framework
> > implementation for OpenStack [2] uses to talk to the underlying cloud*. This
> > would seem like a pretty good area for collaboration with other communities
> > to expand on what is there rather than start over?
> >
> > -Steve
> >
> > [1] https://github.com/gophercloud/gophercloud
> > [2]
> > https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/openstack
>
> Nevermind I see this train of thought also made its way to the etherpad...
>
> -Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all] Small steps for Go

2017-03-15 Thread Bruno Morel
Still, I’m intrigued by this apparent duplication of effort, especially since 
the discussions at the committee level tended to go the ‘inclusive’ way toward 
other communities 
(http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/, 
section “Adjacent technologies”). Can someone point me to the rationale for 
redoing gophercloud’s work?

*No judging here :) *
I’m guessing someone has done the work already, but I can’t find the etherpad 
Steve is talking about (when will those etherpads be searchable? :P)

Just trying to understand whether we’re doing it for explicit, visible and 
logical reasons, and where to put our efforts if we want to participate in the 
‘golang for OpenStack’ work :)

Tks

Bruno


On 2017-03-13, 8:13 PM, "Steve Gordon"  wrote:

- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 

> Sent: Monday, March 13, 2017 8:11:34 PM
> Subject: Re: [openstack-dev] [all] Small steps for Go
> 
> - Original Message -
> > From: "Clint Byrum" 
> > To: "openstack-dev" 
> > Sent: Monday, March 13, 2017 1:44:19 PM
> > Subject: Re: [openstack-dev] [all] Small steps for Go
> > 
> > Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
> > > Update:
> > > 
> > > * We have a new git repo (EMPTY!) for the commons work -
> > > http://git.openstack.org/cgit/openstack/golang-commons/
> > > * The golang-client has little code, but a lot of potential -
> > > https://git.openstack.org/cgit/openstack/golang-client/
> > > 
> > 
> > So, we're going to pretend gophercloud doesn't exist and continue to
> > isolate ourselves from every other community?
> 
> I'd add that gophercloud [1] is what the Kubernetes cloud provider 
framework
> implementation for OpenStack [2] uses to talk to the underlying cloud*. 
This
> would seem like a pretty good area for collaboration with other 
communities
> to expand on what is there rather than start over?
> 
> -Steve
> 
> [1] https://github.com/gophercloud/gophercloud
> [2]
> 
https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/openstack

Nevermind I see this train of thought also made its way to the etherpad...

-Steve





Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Fox, Kevin M
Other OpenStack subsystems (such as Heat) handle this with trusts. A service 
account is created in a different, usually SQL-backed, Keystone domain, and a 
trust is created associating the service account with the user.

This mostly works, but it does give the trusted account a lot of power, since 
the default roles in OpenStack are pretty coarse-grained. That should be 
solvable, though.
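The delegation pattern described above can be sketched as a toy model in plain
Python. This is illustrative only; the class below is made up for the example
and is not Keystone's API (the real entry point is trusts.create() in
python-keystoneclient):

```python
# Toy model of a Keystone-style trust: a trustor delegates a subset of
# their roles to a trustee (e.g. a Heat service account). Illustrative
# only -- not the actual Keystone implementation.

class Trust:
    def __init__(self, trustor, trustee, roles, trustor_roles):
        # A trust may only delegate roles the trustor actually holds.
        undelegatable = set(roles) - set(trustor_roles)
        if undelegatable:
            raise ValueError("cannot delegate: %s" % sorted(undelegatable))
        self.trustor = trustor
        self.trustee = trustee
        self.roles = set(roles)

    def allows(self, user, role):
        # The trustee may act with exactly the delegated roles, no more.
        return user == self.trustee and role in self.roles


trust = Trust("alice", "heat_service", ["member"],
              trustor_roles=["member", "reader"])
print(trust.allows("heat_service", "member"))  # True
print(trust.allows("heat_service", "admin"))   # False
```

The key property is the subset check: a trust can never grant the trustee more
than the trustor holds, which is also why coarse-grained default roles make the
delegated account so powerful.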

Thanks,
Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, March 15, 2017 9:49 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

Excerpts from Sean Dague's message of 2017-03-15 08:45:54 -0400:
> On 03/13/2017 05:10 PM, Zane Bitter wrote:
> 
> >> I'm not sure I agree. One can very simply inject needed credentials
> >> into a running VM and have it interact with the cloud APIs.
> >
> > Demo please!
> >
> > Most Keystone backends are read-only, you can't even create a new user
> > account yourself. It's an admin-only API anyway. The only non-expiring
> > credential you even *have*, ignoring the difficulties of getting it to
> > the server, is your LDAP password. Would *you* put *your* LDAP password
> > on an internet-facing server? I would not.
>
> So is one of the issues to support cloud native flows that our user auth
> system, which often needs to connect into traditional enterprise
> systems, doesn't really consider that?
>
> I definitely agree, if your cloud is using your LDAP password, which
> gets you into your health insurance and direct deposit systems at your
> employer, sticking this into a cloud server is a no-go.
>
> Thinking aloud, I wonder if user provisionable sub users would help
> here. They would have all the same rights as the main user (except
> modify other subusers), but would have a dedicated user provisioned
> password. You basically can carve off the same thing from Google when
> you have services that can't do the entire oauth/2factor path. Fastmail
> rolled out something similar recently as well.
>

Could we just let users manage a set of OAuth keys that have a subset
of their roles?
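A rough sketch of what such user-managed keys could look like, as a toy Python
model (hypothetical; not the keystone API-key spec's actual design):

```python
# Sketch of user-managed API keys carrying a subset of the owner's
# roles. Hypothetical model for illustration, not Keystone code.
import secrets

class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)
        self.api_keys = {}

    def create_api_key(self, roles):
        # A key can only carry roles its owner already has.
        roles = set(roles)
        if not roles <= self.roles:
            raise ValueError("key roles must be a subset of the user's roles")
        key = secrets.token_hex(16)
        self.api_keys[key] = roles
        return key

    def authorize(self, key, role):
        # An application holding the key never sees the owner's
        # password and can only use the roles baked into the key.
        return role in self.api_keys.get(key, set())


alice = User("alice", {"member", "reader", "admin"})
key = alice.create_api_key({"reader"})
print(alice.authorize(key, "reader"))  # True
print(alice.authorize(key, "admin"))   # False
```

Revoking or rotating such a key would then never touch the owner's real
(e.g. LDAP) credential.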




Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-15 Thread Giulio Fidente
On 03/15/2017 04:44 PM, John Trowbridge wrote:
> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
> 
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liaison, with weekly summary emails of our
> meetings to this list.

++

where would CI be without you guys :)
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Clint Byrum
Excerpts from Clark Boylan's message of 2017-03-15 09:50:49 -0700:
> On Wed, Mar 15, 2017, at 06:19 AM, Monty Taylor wrote:
> > On 03/15/2017 11:37 AM, Davanum Srinivas wrote:
> > > Monty, Team,
> > > 
> > > Sorry for the top post:
> > > 
> > > Support for etcd/tooz in devstack (with file driver as default) -
> > > https://review.openstack.org/#/c/445432/
> > > 
> > > As of right now both the zookeeper driver and the etcd driver are working fine:
> > > https://review.openstack.org/#/c/445630/
> > > https://review.openstack.org/#/c/445629/
> > > 
> > > The problem we have from before is that we do not have any CI jobs
> > > that used zookeeper.
> > > 
> > > I am leaning towards just throwing the etcd as default and if folks
> > > are interested in zookeeper then they can add specific CI jobs with
> > > DLM_BACKEND variable set.
> > 
> > That doesn't bother me - zk as the default choice was because at the
> > time zk worked and etcd did not.
> > 
> > That said - etcd3 is a newer/better thing - so maybe instead of driving
> > etcd home as a default before we add etcd3 support, we just change tooz
> > to support etcd3, add the devstack jobs to use that, and start from a
> > position that doesn't involve dealing with any legacy?
> > 
> 
> One logistical concern that no one else seems to have pointed out on
> this thread yet is that the example devstack setup linked at the
> beginning of the thread
> (http://git.openstack.org/cgit/openstack/dragonflow/tree/devstack/etcd_driver)
> grabs a tarball from GitHub to perform the etcd3 installation. Looks like
> Fedora and CentOS may have proper packages for etcd3 but Ubuntu does
> not.
> 
> For reliability reasons we probably do not want to be grabbing this
> tarball from github on every job run (particularly if this becomes a
> "base service" installed on every devstack job run).
> 

Zesty looks to be getting etcd3, though there are dependencies that
haven't synced/built yet [1]. Once that's done we can submit a backport,
get it available on Xenial, and then it's just 'apt-get install backports/etcd'.

Of course, using backports on an LTS is a little bit weird, since the
security team won't maintain backports.

[1] https://launchpad.net/ubuntu/+source/etcd/3.1.0-1/+build/12016435



Re: [openstack-dev] [acceleration]Team Biweekly Meeting Agenda 2017.03.15

2017-03-15 Thread Zhipeng Huang
Hi Team,

Please find the meeting minutes at
https://docs.google.com/document/d/14Ab100zV0h_g2U0dhxYQ7hbzIU_6xH329hBv6cqDgWk/edit?usp=sharing
. We had a great discussion today and it was agreed that BP discussions
will be carried out on Gerrit via spec review.

Another agreement is that, since we have started developing, we will hold
weekly meetings instead of bi-weekly ones. The time is 11:00 AM ET on
Wednesdays.

On Wed, Mar 15, 2017 at 3:43 PM, Zhipeng Huang 
wrote:

> Hi Team,
>
> Please find the initial agenda at https://wiki.openstack.org/
> wiki/Meetings/CyborgTeamMeeting#Agenda_for_next_meeting
>
> our IRC channel is #openstack-cyborg
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Clark Boylan
On Wed, Mar 15, 2017, at 06:19 AM, Monty Taylor wrote:
> On 03/15/2017 11:37 AM, Davanum Srinivas wrote:
> > Monty, Team,
> > 
> > Sorry for the top post:
> > 
> > Support for etcd/tooz in devstack (with file driver as default) -
> > https://review.openstack.org/#/c/445432/
> > 
> > As of right now both the zookeeper driver and the etcd driver are working fine:
> > https://review.openstack.org/#/c/445630/
> > https://review.openstack.org/#/c/445629/
> > 
> > The problem we have from before is that we do not have any CI jobs
> > that used zookeeper.
> > 
> > I am leaning towards just throwing the etcd as default and if folks
> > are interested in zookeeper then they can add specific CI jobs with
> > DLM_BACKEND variable set.
> 
> That doesn't bother me - zk as the default choice was because at the
> time zk worked and etcd did not.
> 
> That said - etcd3 is a newer/better thing - so maybe instead of driving
> etcd home as a default before we add etcd3 support, we just change tooz
> to support etcd3, add the devstack jobs to use that, and start from a
> position that doesn't involve dealing with any legacy?
> 

One logistical concern that no one else seems to have pointed out on
this thread yet is that the example devstack setup linked at the
beginning of the thread
(http://git.openstack.org/cgit/openstack/dragonflow/tree/devstack/etcd_driver)
grabs a tarball from GitHub to perform the etcd3 installation. Looks like
Fedora and CentOS may have proper packages for etcd3 but Ubuntu does
not.

For reliability reasons we probably do not want to be grabbing this
tarball from github on every job run (particularly if this becomes a
"base service" installed on every devstack job run).
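For reference, selecting the backend Dims mentions is a devstack local.conf
fragment along these lines (the variable name comes from the patches above;
the value list here is an assumption, not a confirmed interface):

```ini
# devstack local.conf fragment (illustrative)
[[local|localrc]]
# Pick the tooz/DLM backend for the job; 'file' is the proposed default.
DLM_BACKEND=etcd3    # alternatives discussed: zookeeper, file
```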

Clark



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Clint Byrum
Excerpts from Kristi Nikolla's message of 2017-03-15 12:35:45 -0400:
> This might be related to the current discussion. 
> 
> In one of the keystone PTG sessions we started to talk about API keys. [0]
> A spec is being written and discussed. [1]
> 
> This would allow the user to provision API key credentials with a subset of 
> roles for use inside of their applications, removing the need to inject the 
> user's actual credentials into an application.
> 
> [0] http://lbragstad.com/keystone-pike-ptg-summary/
> [1] https://review.openstack.org/#/c/438761
> 

 ^^ this!

> > On Mar 15, 2017, at 8:45 AM, Sean Dague  wrote:
> > 
> > On 03/13/2017 05:10 PM, Zane Bitter wrote:
> >> 
> >> Demo please!
> >> 
> >> Most Keystone backends are read-only, you can't even create a new user
> >> account yourself. It's an admin-only API anyway. The only non-expiring
> >> credential you even *have*, ignoring the difficulties of getting it to
> >> the server, is your LDAP password. Would *you* put *your* LDAP password
> >> on an internet-facing server? I would not.
> > 
> > So is one of the issues to support cloud native flows that our user auth
> > system, which often needs to connect into traditional enterprise
> > systems, doesn't really consider that?
> > 
> > I definitely agree, if your cloud is using your LDAP password, which
> > gets you into your health insurance and direct deposit systems at your
> > employer, sticking this into a cloud server is a no-go.
> > 
> > Thinking aloud, I wonder if user provisionable sub users would help
> > here. They would have all the same rights as the main user (except
> > modify other subusers), but would have a dedicated user provisioned
> > password. You basically can carve off the same thing from Google when
> > you have services that can't do the entire oauth/2factor path. Fastmail
> > rolled out something similar recently as well.
> > 
> > -Sean
> > 
> > -- 
> > Sean Dague
> > http://dague.net
> > 
> 



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Clint Byrum
Excerpts from Sean Dague's message of 2017-03-15 08:45:54 -0400:
> On 03/13/2017 05:10 PM, Zane Bitter wrote:
> 
> >> I'm not sure I agree. One can very simply inject needed credentials
> >> into a running VM and have it interact with the cloud APIs.
> > 
> > Demo please!
> > 
> > Most Keystone backends are read-only, you can't even create a new user
> > account yourself. It's an admin-only API anyway. The only non-expiring
> > credential you even *have*, ignoring the difficulties of getting it to
> > the server, is your LDAP password. Would *you* put *your* LDAP password
> > on an internet-facing server? I would not.
> 
> So is one of the issues to support cloud native flows that our user auth
> system, which often needs to connect into traditional enterprise
> systems, doesn't really consider that?
> 
> I definitely agree, if your cloud is using your LDAP password, which
> gets you into your health insurance and direct deposit systems at your
> employer, sticking this into a cloud server is a no-go.
> 
> Thinking aloud, I wonder if user provisionable sub users would help
> here. They would have all the same rights as the main user (except
> modify other subusers), but would have a dedicated user provisioned
> password. You basically can carve off the same thing from Google when
> you have services that can't do the entire oauth/2factor path. Fastmail
> rolled out something similar recently as well.
> 

Could we just let users manage a set of OAuth keys that have a subset
of their roles?



Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-15 Thread Michele Baldessari
On Wed, Mar 15, 2017 at 11:44:22AM -0400, John Trowbridge wrote:
> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
> 
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liaison, with weekly summary emails of our
> meetings to this list.

+1. Thanks to both!

-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D



Re: [openstack-dev] [infra][requirements] Does proposal bot also update foo-requirements.txt?

2017-03-15 Thread Clark Boylan
On Wed, Mar 15, 2017, at 09:20 AM, Andreas Scheuring wrote:
> Hi,
> I wanted to understand which project requirement files get updated by
> the OpenStack proposal bot. Only requirements.txt and
> test-requirements.txt, or also foo-requirements.txt?
> 
> Pointing me to the OpenStack proposal bot code would also help I
> guess...

The "bot" is really just a job defined in JJB [0], which runs a script
in the requirements repo [1], which appears to read the requirements
files using this read() function [2].

With that out of the way, you shouldn't need a bunch of requirements
files. Environment markers should allow you to specify conditional
installations based on python version or operating system and so on [3].
And for optional sets of dependencies you should be using "extras" [4].
Also more general info can be found in the requirements README [5].

[0]
https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/requirements.yaml#n261
[1]
https://git.openstack.org/cgit/openstack/requirements/tree/openstack_requirements/cmds/update.py
[2]
https://git.openstack.org/cgit/openstack/requirements/tree/openstack_requirements/project.py#n117
[3] https://www.python.org/dev/peps/pep-0496/#id7
[4]
https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
[5]
https://git.openstack.org/cgit/openstack/requirements/tree/README.rst
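The environment-marker and extras mechanisms Clark points to look roughly like
this (package names are placeholders for illustration, not recommendations):

```
# requirements.txt: one file, with environment markers instead of
# separate per-platform foo-requirements.txt files
enum34>=1.0.4;python_version<'3.4'      # only installed on older Pythons
pywin32>=1.0;sys_platform=='win32'      # only installed on Windows

# setup.cfg: optional dependencies declared as "extras", installed with
#   pip install yourproject[ldap]
[extras]
ldap =
  python-ldap>=2.4
```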

Hope this helps,
Clark



Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Mike Bayer


On 03/15/2017 11:42 AM, Sean Dague wrote:

Perhaps, but in doing so oslo.db is going to get the pin and uc from
stable/ocata, which is going to force it back to SQLA < 1.1, which will
prevent oslo.db changes that require >= 1.1 to work.


so do we want to make that job non-voting or something like that?





-Sean

On 03/15/2017 11:26 AM, Roman Podoliaka wrote:

Isn't the purpose of that specific job -
gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata - to test a
change to the library's master branch against stable (i.e. Ocata)
releases of all other components?

On Wed, Mar 15, 2017 at 5:20 PM, Sean Dague  wrote:

On 03/15/2017 10:38 AM, Mike Bayer wrote:



On 03/15/2017 07:30 AM, Sean Dague wrote:


The problem was the original patch kept a cap on SQLA, just moved it up
to the next pre-release, not realizing the caps in general are the
concern by the requirements team. So instead of upping the cap, I just
removed it entirely. (It also didn't help on clarity that there was a
completely unrelated fail in the tests which made it look like the
system was stopping this.)

This should hopefully let new SQLA releases very naturally filter out to
all our services and libraries.

-Sean
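Concretely, the cap removal described here amounts to a one-line change in the
requirements file, roughly (the capped specifier is the one visible in the
ContextualVersionConflict later in this message):

```
# Before: the upper cap blocks SQLAlchemy 1.1.x entirely
SQLAlchemy<1.1.0,>=1.0.10

# After: no cap; upper-constraints.txt still pins the exact version
# tested in the gate
SQLAlchemy>=1.0.10
```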



so the failure I'm seeing now is *probably* one I saw earlier when we
tried to do this: the tempest run fails trying to make a keystone
request, but I can't find the same error in the logs this time.

In an earlier build of https://review.openstack.org/#/c/423192/, we saw
this:

ContextualVersionConflict: (SQLAlchemy 1.1.5
(/usr/local/lib/python2.7/dist-packages),
Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
'keystone']))

stack trace was in the apache log:  http://paste.openstack.org/show/601583/


but now on our own oslo.db build, the same jobs are failing and are
halting at keystone, but I can't find any error:

the failure is:


http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/


and is on:  https://review.openstack.org/#/c/445930/


if someone w/ tempest expertise could help with this that would be great.


It looks like oslo.db master is being used with ocata services?
http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434


I suspect that's the root issue. That should be the stable/ocata branch, right?

-Sean

--
Sean Dague
http://dague.net











Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Zane Bitter

On 15/03/17 08:45, Sean Dague wrote:

On 03/13/2017 05:10 PM, Zane Bitter wrote:


I'm not sure I agree. One can very simply inject needed credentials
into a running VM and have it interact with the cloud APIs.


Demo please!

Most Keystone backends are read-only, you can't even create a new user
account yourself. It's an admin-only API anyway. The only non-expiring
credential you even *have*, ignoring the difficulties of getting it to
the server, is your LDAP password. Would *you* put *your* LDAP password
on an internet-facing server? I would not.


So is one of the issues to support cloud native flows that our user auth
system, which often needs to connect into traditional enterprise
systems, doesn't really consider that?


Yes, absolutely.

Keystone kinda sorta has a partial fix for this. Different domains can 
have different backends, so you can have one read-only domain for 
corporate user accounts backed by LDAP/ActiveDirectory and another 
read/write domain backed by Sqlalchemy.


In fact this is how Heat gets around this - we require operators to 
create a DB-backed heat_stack_users domain, we create accounts in there, 
and then we give them special permissions (not granted by their keystone 
roles) for the stacks they're associated with in Heat. It's messy and 
other projects (like Kuryr) don't automatically get the benefit.


Nor does it help end users at the moment. There's no domain that's 
guaranteed to be set up for them to create user accounts in (certainly 
not one that's consistent across multiple OpenStack clouds), and even if 
there were, only admins can create user accounts on most clouds (IIUC 
Rackspace is one notable exception to this, but we need stuff that's 
consistent across clouds).



I definitely agree, if your cloud is using your LDAP password, which
gets you into your health insurance and direct deposit systems at your
employer, sticking this into a cloud server is a no-go.

Thinking aloud, I wonder if user provisionable sub users would help
here. They would have all the same rights as the main user (except
modify other subusers), but would have a dedicated user provisioned
password. You basically can carve off the same thing from Google when
you have services that can't do the entire oauth/2factor path. Fastmail
rolled out something similar recently as well.


This sounds like a good idea, and could definitely be part of the 
solution. If you read an AWS getting-started guide, pretty much all of 
them have creating an IAM account as step #1, so that you basically 
never have to use the credentials of your master account, which is 
connected to your billing. (I suspect the reason is that most people 
seem to end up accidentally checking their AWS credentials into a 
public GitHub repo at some point. ;)


It's not a total solution, though. A user account that has all of your 
permissions can still do anything you can do, like e.g. delete your 
whole application and all of its data. And backups. In fact, it can do 
that in all of the projects you have a role in. (The latter is fixable, 
by creating a user that has _no_ permissions instead, so you can then 
delegate your roles to them one at a time using a trust, or 
alternatively just by choosing which roles it inherits.)


So ultimately we need to give cloud application developers total control 
over what the accounts they create can and cannot do - so an application 
can, e.g. scale itself and heal itself but not delete itself.


cheers,
Zane.



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Kristi Nikolla
This might be related to the current discussion. 

In one of the keystone PTG sessions we started to talk about API keys. [0]
A spec is being written and discussed. [1]

This would allow the user to provision API key credentials with a subset of 
roles for use inside of their applications, removing the need to inject the 
user's actual credentials into an application.

[0] http://lbragstad.com/keystone-pike-ptg-summary/
[1] https://review.openstack.org/#/c/438761


> On Mar 15, 2017, at 8:45 AM, Sean Dague  wrote:
> 
> On 03/13/2017 05:10 PM, Zane Bitter wrote:
>> 
>> Demo please!
>> 
>> Most Keystone backends are read-only, you can't even create a new user
>> account yourself. It's an admin-only API anyway. The only non-expiring
>> credential you even *have*, ignoring the difficulties of getting it to
>> the server, is your LDAP password. Would *you* put *your* LDAP password
>> on an internet-facing server? I would not.
> 
> So is one of the issues to support cloud native flows that our user auth
> system, which often needs to connect into traditional enterprise
> systems, doesn't really consider that?
> 
> I definitely agree, if your cloud is using your LDAP password, which
> gets you into your health insurance and direct deposit systems at your
> employer, sticking this into a cloud server is a no-go.
> 
> Thinking aloud, I wonder if user provisionable sub users would help
> here. They would have all the same rights as the main user (except
> modify other subusers), but would have a dedicated user provisioned
> password. You basically can carve off the same thing from Google when
> you have services that can't do the entire oauth/2factor path. Fastmail
> rolled out something similar recently as well.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 




[openstack-dev] [infra][requirements] Does proposal bot also update foo-requirements.txt?

2017-03-15 Thread Andreas Scheuring
Hi,
I wanted to understand which project requirement files get updated by
the OpenStack proposal bot. Only requirements.txt and
test-requirements.txt, or also foo-requirements.txt?

Pointing me to the OpenStack proposal bot code would also help I
guess...

Thanks

-- 
-
Andreas 
IRC: andreas_s







[openstack-dev] [QA] Forum Boston - Brainstorming

2017-03-15 Thread Andrea Frittoli
Hello folks,

I set up an etherpad [0] for the QA team to track topic ideas / proposals
for the Forum [1].
Please feel free to contribute ideas, we'll have a slot to discuss about
this during our meetings starting tomorrow.

Andrea

[0] https://etherpad.openstack.org/p/BOS-QA-brainstorming
[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/114017.html


[openstack-dev] [cinder] Additional Grenade jobs for Cinder patches

2017-03-15 Thread Prabhu Vinod, Karthik
Hi,

We now have two additional gate jobs which can be used to test patches that 
modify RPC APIs, objects, or the DB schema for incompatibilities. These two jobs 
are non-voting and can be invoked on Cinder patches by typing “check experimental” 
as a comment on the patch.

The two additional gate jobs are:

a.) gate-grenade-dsvm-cinder-mn-sub-volschbak-ubuntu-xenial-nv

b.) gate-grenade-dsvm-cinder-mn-sub-bak-ubuntu-xenial-nv

What do they do?
--
gate-grenade-dsvm-cinder-mn-sub-volschbak-ubuntu-xenial-nv: This job has only 
c-api on the primary node and the remaining services on the sub-nodes. So it’s a 
scenario where an upgraded c-api talks to an older c-vol, c-sch, and c-bak.

gate-grenade-dsvm-cinder-mn-sub-bak-ubuntu-xenial-nv: This job has c-api, c-sch, 
and c-vol on the primary node and c-bak on the sub-nodes. So it’s a scenario 
where an upgraded c-api, c-sch, and c-vol talk to an older c-bak.


Regards,
Karthik Prabhu





Re: [openstack-dev] [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octa

2017-03-15 Thread Andrea Frittoli
On Wed, Mar 15, 2017 at 11:38 AM Dmitry Tantsur  wrote:

> On 02/27/2017 12:34 PM, Andrea Frittoli wrote:
> > Hello folks,
> >
> > TL;DR: if today you import manager.py from tempest.scenario please
> maintain a
> > copy of [0] in tree until further notice.
>
> Hi!
>
> I hope it is pretty obvious, but just to be clear. Anything that this
> copied
> file uses should be treated more or less as a stable API by the QA team
> during
> the whole transition period. The last thing we want to happen is for this
> file
> to break all the time because its dependencies (imports, functions,
> classes it
> uses) are not stable.
>
> If it's not the case, please update it, and let us know the git hash to
> use to
> grab the final version of the file.
>
>
Your code depends on manager.py and its dependencies today,
and copying that in-tree removes at least one of the dependencies.

The only case where you're in a worse situation is if one of the imports is
removed or renamed, and we'll do our best to avoid that.

My recommendation would be to trim down your copy of manager.py to the bare
minimum you
need, which is likely to be much smaller than the whole module.


> Thanks for understanding!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-15 Thread John Trowbridge
Both Attila and Gabriele have been rockstars with the work to transition
tripleo-ci to run via quickstart, and both have become extremely
knowledgeable about how tripleo-ci works during that process. They are
both very capable of providing thorough and thoughtful reviews of
tripleo-ci patches.

On top of this, Attila has greatly increased the communication from the
tripleo-ci squad as its liaison, with weekly summary emails of our
meetings to this list.

- trown



Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Sean Dague
Perhaps, but in doing so oslo.db is going to get the pin and
upper-constraints from stable/ocata, which is going to force it back to
SQLA < 1.1 and prevent oslo.db changes that require >= 1.1 from working.

-Sean

On 03/15/2017 11:26 AM, Roman Podoliaka wrote:
> Isn't the purpose of that specific job -
> gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata - to test a
> change to the library master branch with stable releases (i.e. Ocata)
> - of all other components?
> 
> On Wed, Mar 15, 2017 at 5:20 PM, Sean Dague  wrote:
>> On 03/15/2017 10:38 AM, Mike Bayer wrote:
>>>
>>>
>>> On 03/15/2017 07:30 AM, Sean Dague wrote:

 The problem was the original patch kept a cap on SQLA, just moved it up
 to the next pre-release, not realizing the caps in general are the
 concern by the requirements team. So instead of upping the cap, I just
 removed it entirely. (It also didn't help on clarity that there was a
 completely unrelated fail in the tests which made it look like the
 system was stopping this.)

 This should hopefully let new SQLA releases very naturally filter out to
 all our services and libraries.

 -Sean

>>>
>>> so the failure I'm seeing now is *probably* one I saw earlier when we
>>> tried to do this, the tempest run fails on trying to run a keystone
>>> request, but I can't find the same error in the logs this time.
>>>
>>> In an earlier build of https://review.openstack.org/#/c/423192/, we saw
>>> this:
>>>
>>> ContextualVersionConflict: (SQLAlchemy 1.1.5
>>> (/usr/local/lib/python2.7/dist-packages),
>>> Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
>>> 'keystone']))
>>>
>>> stack trace was in the apache log:  http://paste.openstack.org/show/601583/
>>>
>>>
>>> but now on our own oslo.db build, the same jobs are failing and are
>>> halting at keystone, but I can't find any error:
>>>
>>> the failure is:
>>>
>>>
>>> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/
>>>
>>>
>>> and is on:  https://review.openstack.org/#/c/445930/
>>>
>>>
>>> if someone w/ tempest expertise could help with this that would be great.
>>
>> It looks like oslo.db master is being used with ocata services?
>> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434
>>
>>
>> I suspect that's the root issue. That should be stable/ocata branch, right?
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-15 Thread Brant Knudson
On Wed, Mar 15, 2017 at 5:18 AM, Thierry Carrez 
wrote:

> Julien Danjou wrote:
> > On Tue, Mar 14 2017, Clint Byrum wrote:
> >
> >> +1 for just pulling it under the oslo umbrella but not renaming it. As
> >> much as I like the uniformity oslo.keymanager would bring, I think it's
> >> already adopted well enough we just want to make it clear that it is
> >> blessed and ok to adopt.
> >
> > I don't even get why moving it under the Oslo umbrella is a win.
> >
> > What's the current problem people are trying to solve here?
>
> It's a governance problem. Basically if the abstraction layer is under
> the control of the same group as one of the drivers, it's not really an
> abstraction layer, and nobody will adopt it or develop another driver
> for it.
>
> See Clint's first answer in the thread for a more detailed explanation.
>
> --
> Thierry Carrez (ttx)
>
>
Can the Castellan team be broken out into a new project under the big tent
rather than having to go under oslo? Oslo as a catch-all made more sense
before the big tent. Also, I always thought part of the deal of moving
under oslo was that oslo core reviewers get +2 authority on the repos, but
it doesn't look like that's part of the proposal here, which started as a
rename and is now a Launchpad change making Castellan a subproject under
oslo (along with some documentation changes).

- Brant


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Roman Podoliaka
Isn't the purpose of that specific job -
gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata - to test a
change to the library's master branch against stable (i.e. Ocata)
releases of all other components?

On Wed, Mar 15, 2017 at 5:20 PM, Sean Dague  wrote:
> On 03/15/2017 10:38 AM, Mike Bayer wrote:
>>
>>
>> On 03/15/2017 07:30 AM, Sean Dague wrote:
>>>
>>> The problem was the original patch kept a cap on SQLA, just moved it up
>>> to the next pre-release, not realizing the caps in general are the
>>> concern by the requirements team. So instead of upping the cap, I just
>>> removed it entirely. (It also didn't help on clarity that there was a
>>> completely unrelated fail in the tests which made it look like the
>>> system was stopping this.)
>>>
>>> This should hopefully let new SQLA releases very naturally filter out to
>>> all our services and libraries.
>>>
>>> -Sean
>>>
>>
>> so the failure I'm seeing now is *probably* one I saw earlier when we
>> tried to do this, the tempest run fails on trying to run a keystone
>> request, but I can't find the same error in the logs this time.
>>
>> In an earlier build of https://review.openstack.org/#/c/423192/, we saw
>> this:
>>
>> ContextualVersionConflict: (SQLAlchemy 1.1.5
>> (/usr/local/lib/python2.7/dist-packages),
>> Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
>> 'keystone']))
>>
>> stack trace was in the apache log:  http://paste.openstack.org/show/601583/
>>
>>
>> but now on our own oslo.db build, the same jobs are failing and are
>> halting at keystone, but I can't find any error:
>>
>> the failure is:
>>
>>
>> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/
>>
>>
>> and is on:  https://review.openstack.org/#/c/445930/
>>
>>
>> if someone w/ tempest expertise could help with this that would be great.
>
> It looks like oslo.db master is being used with ocata services?
> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434
>
>
> I suspect that's the root issue. That should be stable/ocata branch, right?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Sean Dague
On 03/15/2017 10:38 AM, Mike Bayer wrote:
> 
> 
> On 03/15/2017 07:30 AM, Sean Dague wrote:
>>
>> The problem was the original patch kept a cap on SQLA, just moved it up
>> to the next pre-release, not realizing the caps in general are the
>> concern by the requirements team. So instead of upping the cap, I just
>> removed it entirely. (It also didn't help on clarity that there was a
>> completely unrelated fail in the tests which made it look like the
>> system was stopping this.)
>>
>> This should hopefully let new SQLA releases very naturally filter out to
>> all our services and libraries.
>>
>> -Sean
>>
> 
> so the failure I'm seeing now is *probably* one I saw earlier when we
> tried to do this, the tempest run fails on trying to run a keystone
> request, but I can't find the same error in the logs this time.
> 
> In an earlier build of https://review.openstack.org/#/c/423192/, we saw
> this:
> 
> ContextualVersionConflict: (SQLAlchemy 1.1.5
> (/usr/local/lib/python2.7/dist-packages),
> Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
> 'keystone']))
> 
> stack trace was in the apache log:  http://paste.openstack.org/show/601583/
> 
> 
> but now on our own oslo.db build, the same jobs are failing and are
> halting at keystone, but I can't find any error:
> 
> the failure is:
> 
> 
> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/
> 
> 
> and is on:  https://review.openstack.org/#/c/445930/
> 
> 
> if someone w/ tempest expertise could help with this that would be great.

It looks like oslo.db master is being used with ocata services?
http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434


I suspect that's the root issue. That should be stable/ocata branch, right?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Julien Danjou
On Wed, Mar 15 2017, Monty Taylor wrote:

> On 03/15/2017 02:44 PM, Julien Danjou wrote:
>> On Wed, Mar 15 2017, Davanum Srinivas wrote:
>> 
>>> Yep, jd__ and i confirmed that things work with 3.x
>> 
>> Though to be clear, what's used in tooz is the v2 HTTP API, not the new
>> v3 gRPC API.
>
> But if it conceptually works with v3 server using v2 http api, then we
> should be able to iterate in support for grpc api as patches to tooz and
> have everything be happy, right?

Definitely!

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-15 Thread Jay Faulkner

> On Mar 15, 2017, at 8:11 AM, Loo, Ruby  wrote:
> 
> Hi,
> 
> The ironic community is looking for volunteers to be cross-project liaisons 
> [1] for these projects:
> - oslo
> - logging working group
> - i18n

The i18n and docs projects are closely related. I also don’t think they do a 
lot of translating for ironic. Unless we have a contributor who utilizes i18n 
and is more familiar, I can take this on.

-Jay
> 
> The expectations are documented in [1] on a per-project basis. The amount of 
> commitment varies depending on the project (and I don't know what that might 
> be).
> 
> [insert here why it would be an awesome experience for you, fame, fortune, 
> ... :D]
> 
> --ruby
> 
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread Paul Belanger
On Wed, Mar 15, 2017 at 04:23:09AM +0100, Monty Taylor wrote:
> On 03/15/2017 12:05 AM, Joshua Harlow wrote:
> > So just fyi, this has been talked about before (but prob in context of
> > zookeeper or various other pluggable config backends).
> > 
> > Some links:
> > 
> > - https://review.openstack.org/#/c/243114/
> > - https://review.openstack.org/#/c/243182/
> > - https://blueprints.launchpad.net/oslo.config/+spec/oslo-config-db
> > - https://review.openstack.org/#/c/130047/
> > 
> > I think the general questions that seem to reappear are around the
> > following:
> > 
> > * How does reloading work (does it)?
> > 
> > * What's the operational experience (editing a ini file is about the
> > lowest bar we can possible get to, for better and/or worse).
> 
> As a person who operates many softwares (but who does not necessarily
> operate OpenStack specifically) I will say that services that store
> their config in a service that do not have an injest/update facility
> from file are a GIANT PITA to deal with. Config management is great at
> laying down config files. It _can_ put things into services, but that's
> almost always more work.
> 
> Which is my way of saying - neat, but please please please whoever
> writes this make a simple facility that will let someone plop config
> into a file on disk and get that noticed and slurped into the config
> service. A one-liner command line tool that one runs on the config file
> to splat into the config service would be fine.
> 
So much this! As an operator, I am fine plopping config files down on a remote
node, and I understand what that means for my workflow.

Opt out by default! :)

> > * Does this need to be a new oslo.config backend or is it better suited
> > by something like the following (external programs loop)::
> > 
> >etcd_client = make_etcd_client(args)
> >while True:
> >has_changed = etcd_client.get_new_config("/blahblah") # or use a
> > watch
> >if has_changed:
> >   fetch_and_write_ini_file(etcd_client)
> >   trigger_reload()
> >time.sleep(args.wait)
> > 
> > * Is an external loop better (maybe, maybe not?)
> > 
> > Pretty sure there are some etherpad discussions around this also somewhere.
> > 
> > Clint Byrum wrote:
> >> Excerpts from Davanum Srinivas's message of 2017-03-14 13:04:37 -0400:
> >>> Team,
> >>>
> >>> So one more thing popped up again on IRC:
> >>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
> >>>
> >>> What do you think? interested in this work?
> >>>
> >>> Thanks,
> >>> Dims
> >>>
> >>> PS: Between this thread and the other one about Tooz/DLM and
> >>> os-lively, we can probably make a good case to add etcd as a base
> >>> always-on service.
> >>>
> >>
> >> This is a cool idea, and I think we should do it.
> >>
> >> A few loose ends I'd like to see in a spec:
> >>
> >> * Security Security Security. (Hoping if I say it 3 times a real
> >>security person will appear and ask the hard questions).
> >> * Explain clearly how operators would inspect, edit, and diff their
> >>configs.
> >>
> >> __
> >>
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

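The "one-liner command line tool that one runs on the config file to splat into the config service" asked for above is easy to sketch with just the stdlib: flatten an oslo-style ini file into the key paths an etcd-like store could hold. The key layout (`/config/<section>/<option>`) is an assumption, and the actual client write is omitted — this only shows the slurping half:

```python
import configparser

def flatten_ini(text, prefix="/config"):
    """Flatten ini sections into '/prefix/section/option' -> value pairs,
    the sort of layout an etcd-backed oslo.config could read back."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    flat = {}
    # configparser keeps [DEFAULT] out of sections() but merges its options
    # into every section, so handle it explicitly first.
    for option, value in parser.defaults().items():
        flat["%s/DEFAULT/%s" % (prefix, option)] = value
    for section in parser.sections():
        for option, value in parser.items(section):
            if option in parser.defaults():
                continue  # skip options inherited from [DEFAULT]
            flat["%s/%s/%s" % (prefix, section, option)] = value
    return flat

sample = """\
[DEFAULT]
debug = true

[database]
connection = mysql+pymysql://nova@db/nova
"""

pairs = flatten_ini(sample)
# -> {'/config/DEFAULT/debug': 'true',
#     '/config/database/connection': 'mysql+pymysql://nova@db/nova'}
```

Caveat: this simple version would also skip a per-section option that happens to share a name with a [DEFAULT] key; a real tool would track overrides. The reverse direction — watch the store, rewrite the file, reload the service — is what keeps the file-on-disk workflow intact.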


[openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-15 Thread Loo, Ruby
Hi,

The ironic community is looking for volunteers to be cross-project liaisons [1] 
for these projects:
- oslo
- logging working group
- i18n

The expectations are documented in [1] on a per-project basis. The amount of 
commitment varies depending on the project (and I don't know what that might 
be).

[insert here why it would be an awesome experience for you, fame, fortune, ... 
:D]

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons



Re: [openstack-dev] [cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed: Removal of legacy per-project vanity domain redirects

2017-03-15 Thread Andrea Frittoli
On Wed, Mar 8, 2017 at 8:09 PM Monty Taylor  wrote:

> On 03/08/2017 10:31 AM, Daniel P. Berrange wrote:
> > On Wed, Mar 08, 2017 at 09:12:59AM -0600, Monty Taylor wrote:
> >> Hey all,
> >>
> >> We have a set of old vanity redirect URLs from back when we made a URL
> >> for each project:
> >>
> >> cinder.openstack.org
> >> glance.openstack.org
> >> horizon.openstack.org
> >> keystone.openstack.org
> >> nova.openstack.org
> >> qa.openstack.org

I never knew this existed :)


>
> >> swift.openstack.org
> >>
> >> They are being served from an old server we'd like to retire. Obviously,
> >> moving a set of http redirects is trivial, but these domains have been
> >> deprecated for about 4 now, so we figured we'd clean house if we can.
> >>
> >> We know that the swift team has previously expressed that there are
> >> links out in the wild pointing to swift.o.o/content that still work and
> >> that they don't want to break anyone, which is fine. (although if the
> >> swift team has changed their minds, that's also welcome)
> >>
> >> for the rest of you, can we kill these rather than transfer them?
> >
> > Does the server have any access log that could provide stats on whether
> > any of the subdomains are a receiving a meaningful amount of traffic ?
> > Easy to justify removing them if they're not seeing any real traffic.
> >
> > If there's any referrer logs present, that might highlight which places
> > still have outdated links that need updating to kill off remaining
> > traffic.
>
> Yes. The majority of the hits are from search engines, fwiw. :)
>
> However, we're also changing the current redirect from temporary to
> permanent, so we're hoping that'll get the search engines to fall over
> to the other thing.
>
> I agree, doing that for a while, especially after having gotten the
> redirect changed and landing the patches to remove references from our
> docs should give us better info on where we stand currently.
>
>
A search of openstack qa leads to the wiki, and a search for
openstack tempest leads to the docs page, so changing this should
be no problem for the QA redirect.


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
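For what it's worth, the temporary-to-permanent switch Monty describes is a one-word change in a mod_alias rule. A hypothetical vhost for one of the vanity names — the actual server configuration and redirect targets aren't shown in this thread:

```apache
<VirtualHost *:80>
    ServerName nova.openstack.org
    # "Redirect temp" sends 302 (crawlers keep indexing the old URL);
    # "Redirect permanent" sends 301 (crawlers re-index the target).
    Redirect permanent / https://docs.openstack.org/
</VirtualHost>
```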


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Monty Taylor
On 03/15/2017 02:52 PM, Louis Taylor wrote:
> On Wed, Mar 15, 2017 at 1:44 PM, Julien Danjou  wrote:
>> On Wed, Mar 15 2017, Davanum Srinivas wrote:
>>
>>> Yep, jd__ and i confirmed that things work with 3.x
>>
>> Though to be clear, what's used in tooz is the v2 HTTP API, not the new
>> v3 gRPC API.
> 
> And just to be double clear: although etcd 3.x comes with a v2 api,
> the etcd3 api also has a different data model, so the data stored
> using it is not accessible to the v2 api, and vice versa.

AH! That's different...

For my money, not that I have any, I'd still suggest we just pile on
getting moved over to v3 gRPC real quick and then make that the default.



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Monty Taylor
On 03/15/2017 02:44 PM, Julien Danjou wrote:
> On Wed, Mar 15 2017, Davanum Srinivas wrote:
> 
>> Yep, jd__ and i confirmed that things work with 3.x
> 
> Though to be clear, what's used in tooz is the v2 HTTP API, not the new
> v3 gRPC API.

But if it conceptually works with v3 server using v2 http api, then we
should be able to iterate in support for grpc api as patches to tooz and
have everything be happy, right?






Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Monty Taylor
On 03/15/2017 02:26 PM, Davanum Srinivas wrote:
> On Wed, Mar 15, 2017 at 9:19 AM, Monty Taylor  wrote:
>> On 03/15/2017 11:37 AM, Davanum Srinivas wrote:
>>> Monty, Team,
>>>
>>> Sorry for the top post:
>>>
>>> Support for etcd/tooz in devstack (with file driver as default) -
>>> https://review.openstack.org/#/c/445432/
>>>
>>> As of right now both zookeeper driver and etcd driver is working fine:
>>> https://review.openstack.org/#/c/445630/
>>> https://review.openstack.org/#/c/445629/
>>>
>>> The problem we have from before is that we do not have any CI jobs
>>> that used zookeeper.
>>>
>>> I am leaning towards just throwing the etcd as default and if folks
>>> are interested in zookeeper then they can add specific CI jobs with
>>> DLM_BACKEND variable set.
>>
>> That doesn't bother me - zk as the default choice was because at the
>> time zk worked and etcd did not.
>>
>> That said - etcd3 is a newer/better thing - so maybe instead of driving
>> etcd home as a default before we add etcd3 support, we just change tooz
>> to support etcd3, add the devstack jobs to use that, and start from a
>> position that doesn't involve dealing with any legacy?
> 
> Yep, jd__ and i confirmed that things work with 3.x

WOOT!

> Thanks,
> Dims
> 
>>
>>> On Tue, Mar 14, 2017 at 11:00 PM, Monty Taylor  wrote:
 On 03/15/2017 03:13 AM, Jay Pipes wrote:
> On 03/14/2017 05:01 PM, Clint Byrum wrote:
>> Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:
>>> On 03/14/2017 02:50 PM, Julien Danjou wrote:
 On Tue, Mar 14 2017, Jay Pipes wrote:

> Not tooz, because I'm not interested in a DLM nor leader election
> library
> (that's what the underlying etcd3 cluster handles for me), only a
> fast service
> liveness/healthcheck system, but it shows usage of etcd3 and Google
> Protocol
> Buffers implementing a simple API for liveness checking and host
> maintenance
> reporting.

 Cool cool. So that's the same feature that we implemented in tooz 3
 years ago. It's called "group membership". You create a group, make
 nodes join it, and you know who's dead/alive and get notified when
 their
 status change.
>>>
>>> The point of os-lively is not to provide a thin API over ZooKeeper's
>>> group membership interface. The point of os-lively is to remove the need
>>> to have a database (RDBMS) record of a service in Nova.
>>
>> That's also the point of tooz's group membership API:
>>
>> https://docs.openstack.org/developer/tooz/compatibility.html#grouping
>
> Did you take a look at the code I wrote in os-lively? What part of the
> tooz group membership API do you think I would have used?
>
> Again, this was a weekend project that I was moving fast on. I looked at
> tooz and didn't see how I could use it for my purposes, which was to
> store a versioned object in a consistent key/value store with support
> for transactional semantics when storing index and data records at the
> same time [1]
>
> https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L468-L511
>
>
> etcd3 -- and specifically etcd3, not etcd2 -- supports the transactional
> semantics in a consistent key/value store that I needed.
>
> tooz is cool, but it's not what I was looking for. It's solving a
> different problem than I was trying to solve.
>
> This isn't a case of NIH, despite what Julien is trying to intimate in
> his emails.
>
>>> tooz simply abstracts a group membership API across a number of drivers.
>>> I don't need that. I need a way to maintain a service record (with
>>> maintenance period information, region, and an evolvable data record
>>> format) and query those service records in an RDBMS-like manner but
>>> without the RDBMS being involved.
>>>
> servicegroup API with os-lively and eliminate Nova's use of an
> RDBMS for
> service liveness checking, which should dramatically reduce the
> amount of both
> DB traffic as well as conductor/MQ service update traffic.

 Interesting. Joshua and Vilob tried to push usage of tooz group
 membership a couple of years ago, but it got nowhere. Well, no, they
 got
 2 specs written IIRC:


 https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html


 But then it died for whatever reasons on Nova side.
>>>
>>> It died because it didn't actually solve a problem.
>>>
>>> The problem is that even if we incorporate tooz, we would still need to
>>> have a service table in the RDBMS and continue to query it over and over
>>> again in the scheduler and API nodes.
>>
>> Most 

Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Mike Bayer



On 03/15/2017 07:30 AM, Sean Dague wrote:


The problem was the original patch kept a cap on SQLA, just moved it up
to the next pre-release, not realizing the caps in general are the
concern by the requirements team. So instead of upping the cap, I just
removed it entirely. (It also didn't help on clarity that there was a
completely unrelated fail in the tests which made it look like the
system was stopping this.)

This should hopefully let new SQLA releases very naturally filter out to
all our services and libraries.

-Sean



so the failure I'm seeing now is *probably* one I saw earlier when we 
tried to do this: the tempest run fails trying to run a keystone 
request, but I can't find the same error in the logs this time.


In an earlier build of https://review.openstack.org/#/c/423192/, we saw 
this:


ContextualVersionConflict: (SQLAlchemy 1.1.5 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db', 
'keystone']))


stack trace was in the apache log:  http://paste.openstack.org/show/601583/


but now on our own oslo.db build, the same jobs are failing and are 
halting at keystone, but I can't find any error:


the failure is:


http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/ 



and is on:  https://review.openstack.org/#/c/445930/


if someone w/ tempest expertise could help with this that would be great.
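For anyone puzzling over the ContextualVersionConflict quoted earlier: that is pkg_resources refusing to activate a distribution whose installed version falls outside another package's declared requirement. The version-vs-specifier check is easy to reproduce with the requirement string straight from the traceback:

```python
from pkg_resources import Requirement

req = Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10')

# Requirement supports membership tests against plain version strings.
assert '1.0.17' in req      # inside <1.1.0,>=1.0.10
assert '1.1.5' not in req   # the installed version that triggered the conflict
```

So as long as any package on the node (oslo.db or keystone in the traceback) still declares SQLAlchemy<1.1.0, installing SQLAlchemy 1.1.5 guarantees this conflict at import time.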



Re: [openstack-dev] [neutron][gate] functional job busted

2017-03-15 Thread Ihar Hrachyshka
That was quick folks. Thanks everyone for moving the patches forward.

Ihar

On Wed, Mar 15, 2017 at 4:32 AM, Miguel Angel Ajo Pelayo
 wrote:
> Thank you for the patches. I merged them, released 1.1.0 and proposed [1]
>
> Cheers!,
>
> [1] https://review.openstack.org/445884
>
>
> On Wed, Mar 15, 2017 at 10:14 AM, Gorka Eguileor 
> wrote:
>>
>> On 14/03, Ihar Hrachyshka wrote:
>> > Hi all,
>> >
>> > the patch that started to produce log index file for logstash [1] and
>> > the patch that switched metadata proxy to haproxy [2] landed and
>> > together busted the functional job because the latter produces log
>> > messages with null-bytes inside, while os-log-merger is not resilient
>> > against it.
>> >
>> > If functional job would be in gate and not just in check queue, that
>> > would not happen.
>> >
>> > Attempt to fix the situation in multiple ways at [3]. (For
>> > os-log-merger patches, we will need new release and then bump the
>> > version used in gate, so short term neutron patches seem more viable.)
>> >
>> > I will need support from both authors of os-log-merger as well as
>> > other neutron members to unravel that. I am going offline in a moment,
>> > and hope someone will take care of patches up for review, and land
>> > what's due.
>> >
>> > [1] https://review.openstack.org/#/c/442804/ [2]
>> > https://review.openstack.org/#/c/431691/ [3]
>> > https://review.openstack.org/#/q/topic:fix-os-log-merger-crash
>> >
>> > Thanks,
>> > Ihar
>>
>> Hi Ihar,
>>
>> That is an unexpected case that never came up during our tests or usage,
>> but it is indeed something the script should take into account.
>>
>> Thanks for the os-log-merger patches, I've reviewed them and they look
>> good to me, so hopefully they'll land before you come back online.  ;-)
>>
>> Cheers,
>> Gorka.
>
>
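For reference, the resilience fix being discussed boils down to stripping null bytes from each log line before the merger processes it. A minimal illustrative sketch of that pattern (this is not the actual os-log-merger patch, and the function names are invented):

```python
import io


def sanitize_line(line):
    """Drop null bytes, which some producers (e.g. the haproxy-based
    metadata proxy) can emit and which break naive line processing."""
    return line.replace('\x00', '')


def merge_lines(streams):
    """Yield sanitized lines from several already-opened log streams."""
    for stream in streams:
        for line in stream:
            yield sanitize_line(line)


if __name__ == '__main__':
    sample = io.StringIO('ok line\nbad\x00line\n')
    print([l.rstrip('\n') for l in merge_lines([sample])])
    # prints ['ok line', 'badline']
```

The point is simply that the sanitizing happens once, at ingestion, so every downstream consumer sees clean lines.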

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Louis Taylor
On Wed, Mar 15, 2017 at 1:44 PM, Julien Danjou  wrote:
> On Wed, Mar 15 2017, Davanum Srinivas wrote:
>
>> Yep, jd__ and i confirmed that things work with 3.x
>
> Though to be clear, what's used in tooz is the v2 HTTP API, not the new
> v3 gRPC API.

And just to be double clear: although etcd 3.x comes with a v2 api,
the etcd3 api also has a different data model, so the data stored
using it is not accessible to the v2 api, and vice versa.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Julien Danjou
On Wed, Mar 15 2017, Davanum Srinivas wrote:

> Yep, jd__ and i confirmed that things work with 3.x

Though to be clear, what's used in tooz is the v2 HTTP API, not the new
v3 gRPC API.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Davanum Srinivas
On Wed, Mar 15, 2017 at 9:19 AM, Monty Taylor  wrote:
> On 03/15/2017 11:37 AM, Davanum Srinivas wrote:
>> Monty, Team,
>>
>> Sorry for the top post:
>>
>> Support for etcd/tooz in devstack (with file driver as default) -
>> https://review.openstack.org/#/c/445432/
>>
>> As of right now both the zookeeper driver and the etcd driver are working fine:
>> https://review.openstack.org/#/c/445630/
>> https://review.openstack.org/#/c/445629/
>>
>> The problem we have from before is that we do not have any CI jobs
>> that use zookeeper.
>>
>> I am leaning towards just making etcd the default, and if folks
>> are interested in zookeeper then they can add specific CI jobs with
>> DLM_BACKEND variable set.
>
> That doesn't bother me - zk as the default choice was because at the
> time zk worked and etcd did not.
>
> That said - etcd3 is a newer/better thing - so maybe instead of driving
> etcd home as a default before we add etcd3 support, we just change tooz
> to support etcd3, add the devstack jobs to use that, and start from a
> position that doesn't involve dealing with any legacy?

Yep, jd__ and i confirmed that things work with 3.x

Thanks,
Dims

>
>> On Tue, Mar 14, 2017 at 11:00 PM, Monty Taylor  wrote:
>>> On 03/15/2017 03:13 AM, Jay Pipes wrote:
 On 03/14/2017 05:01 PM, Clint Byrum wrote:
> Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:
>> On 03/14/2017 02:50 PM, Julien Danjou wrote:
>>> On Tue, Mar 14 2017, Jay Pipes wrote:
>>>
 Not tooz, because I'm not interested in a DLM nor leader election
 library
 (that's what the underlying etcd3 cluster handles for me), only a
 fast service
 liveness/healthcheck system, but it shows usage of etcd3 and Google
 Protocol
 Buffers implementing a simple API for liveness checking and host
 maintenance
 reporting.
>>>
>>> Cool cool. So that's the same feature that we implemented in tooz 3
>>> years ago. It's called "group membership". You create a group, make
>>> nodes join it, and you know who's dead/alive and get notified when
>>> their
>>> status change.
>>
>> The point of os-lively is not to provide a thin API over ZooKeeper's
>> group membership interface. The point of os-lively is to remove the need
>> to have a database (RDBMS) record of a service in Nova.
>
> That's also the point of tooz's group membership API:
>
> https://docs.openstack.org/developer/tooz/compatibility.html#grouping
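The group-membership pattern being referenced (create a group, have nodes join and heartbeat into it, ask who is alive) can be sketched without any backend at all. This is a toy in-memory model for illustration only; real tooz delegates the bookkeeping to a shared service such as ZooKeeper, memcached, or etcd:

```python
import time


class GroupMembership:
    """Toy in-memory model of the group-membership pattern: members
    heartbeat into a group, and anyone can ask which members are still
    alive within a TTL.  Real tooz backs this with a shared service."""

    def __init__(self, ttl=5.0):
        self.ttl = ttl
        self._heartbeats = {}  # (group, member) -> last heartbeat timestamp

    def join_group(self, group, member, now=None):
        self.heartbeat(group, member, now)

    def heartbeat(self, group, member, now=None):
        self._heartbeats[(group, member)] = time.time() if now is None else now

    def get_members(self, group, now=None):
        """Return the members whose last heartbeat is within the TTL."""
        now = time.time() if now is None else now
        return {m for (g, m), ts in self._heartbeats.items()
                if g == group and now - ts < self.ttl}


gm = GroupMembership(ttl=5.0)
gm.join_group('nova-compute', 'host-1', now=0.0)
gm.join_group('nova-compute', 'host-2', now=0.0)
gm.heartbeat('nova-compute', 'host-1', now=4.0)
print(sorted(gm.get_members('nova-compute', now=6.0)))  # ['host-1']
```

host-2 stopped heartbeating at t=0, so by t=6 it has aged out of the 5-second TTL; that aging-out is exactly the "who's dead/alive" notification a real backend raises as an event.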

 Did you take a look at the code I wrote in os-lively? What part of the
 tooz group membership API do you think I would have used?

 Again, this was a weekend project that I was moving fast on. I looked at
 tooz and didn't see how I could use it for my purposes, which was to
 store a versioned object in a consistent key/value store with support
 for transactional semantics when storing index and data records at the
 same time [1]

 https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L468-L511


 etcd3 -- and specifically etcd3, not etcd2 -- supports the transactional
 semantics in a consistent key/value store that I needed.
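To illustrate what "transactional semantics" means here: an etcd3-style transaction takes compare conditions plus success/failure operation lists, evaluates the conditions, and applies the chosen operations atomically. The sketch below is a toy in-memory store, not the python-etcd3 client, and the service key layout is invented:

```python
class TinyTxnStore:
    """Toy key/value store with etcd3-style transactions: a txn takes
    compare conditions plus success/failure operation lists and applies
    the chosen ops together.  (Illustrative; real etcd3 does this
    server-side, atomically, over gRPC.)"""

    def __init__(self):
        self._data = {}
        self._version = {}  # per-key write counter, like etcd's version

    def put(self, key, value):
        self._data[key] = value
        self._version[key] = self._version.get(key, 0) + 1

    def transaction(self, compare, success, failure):
        """Run success ops if every compare condition holds, else failure
        ops.  Returns True when the success branch was taken."""
        ops = success if all(cond(self) for cond in compare) else failure
        for op in ops:
            op(self)
        return ops is success


# Write a service record and its index entry together, but only if the
# record has not been written concurrently (version still 0).
store = TinyTxnStore()
ok = store.transaction(
    compare=[lambda s: s._version.get('/services/by-uuid/abc', 0) == 0],
    success=[lambda s: s.put('/services/by-uuid/abc', 'host=cn1,status=UP'),
             lambda s: s.put('/services/by-host/cn1', 'abc')],
    failure=[])
print(ok, store._data['/services/by-host/cn1'])  # True abc
```

The value of the pattern is that the data record and its index record can never get out of sync, which is the property an RDBMS transaction would otherwise provide.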

 tooz is cool, but it's not what I was looking for. It's solving a
 different problem than I was trying to solve.

 This isn't a case of NIH, despite what Julien is trying to intimate in
 his emails.

>> tooz simply abstracts a group membership API across a number of drivers.
>> I don't need that. I need a way to maintain a service record (with
>> maintenance period information, region, and an evolvable data record
>> format) and query those service records in an RDBMS-like manner but
>> without the RDBMS being involved.
>>
 servicegroup API with os-lively and eliminate Nova's use of an
 RDBMS for
 service liveness checking, which should dramatically reduce the
 amount of both
 DB traffic as well as conductor/MQ service update traffic.
>>>
>>> Interesting. Joshua and Vilob tried to push usage of tooz group
>>> membership a couple of years ago, but it got nowhere. Well, no, they
>>> got
>>> 2 specs written IIRC:
>>>
>>>
>>> https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html
>>>
>>>
>>> But then it died for whatever reasons on Nova side.
>>
>> It died because it didn't actually solve a problem.
>>
>> The problem is that even if we incorporate tooz, we would still need to
>> have a service table in the RDBMS and continue to query it over and over
>> again in the scheduler and API nodes.
>
> Most likely it was designed with hesitance to have a tooz requirement
> to be a source of truth. But it's certainly not a problem for most tooz
> backends to be a source of truth. Certainly not for etcd or ZK, which
> are both designed to be that.

Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Monty Taylor
On 03/15/2017 01:48 PM, John Garbutt wrote:
> On 15 March 2017 at 12:33, Sean Dague  wrote:
>> On 03/15/2017 08:10 AM, John Garbutt wrote:
>>> On 15 March 2017 at 11:58, Jay Pipes  wrote:
 On 03/15/2017 07:44 AM, Sean Dague wrote:
>
> On 03/14/2017 11:00 PM, Monty Taylor wrote:
> 
>>
>> a) awesome. when the rest of this dips momentarily into words that might
>> sound negative, please hear it all wrapped in an "awesome" and know that
>> my personal desire is to see the thing you're working on be successful
>> without undue burden...
>>
>> b) In Tokyo, we had the big discussion about DLMs (where at least my
>> intent going in to the room was to get us to pick one and only one).
>> There were three camps in the room who were all vocal:
>>
>> 1) YES! Let's just pick one, I don't care which one
>> 2) I hate Java I don't want to run Zookeeper, so we can't pick that
>> 3) I hate go/don't trust coreos I don't want to run etcd so we can't
>> pick that
>>
>> Because of 2 and 3 the group represented by 1 lost and we ended up with:
>> "crap, we have to use an abstraction library"
>>
>> I'd argue that unless something has changed significantly, having Nova
>> grow a direct depend on etcd when the DLM discussion brought us to "the
>> operators in the room have expressed a need for a pluggable choice
>> between at least zk and etcd" should be pretty much a non-starter.
>>
>> Now, being that I was personally in group 1, I'd be THRILLED if we
>> could, as a community, decide to pick one and skip having an abstraction
>> library. I still don't care which one - and you know I love
>> gRPC/protobuf.
>>
>> But I do think that given the anti-etcd sentiment that was expressed was
>> equally as vehement as the anti-zk sentiment, that we need to circle
>> back and make a legit call on this topic.
>>
>> If we can pick one, I think having special-purpose libraries like
>> os-lively for specific purposes would be neat.
>>
>> If we still can't pick one, then I think adding the liveness check you
>> implemented for os-lively as a new feature in tooz and also implementing
>> the same thing in the zk driver would be necessary. (of course, that'll
>> probably depend on getting etcd3 support added to tooz and making sure
>> there is a good functional test for etcd3.)
>
>
> We should also make it clear that:
>
> 1) Tokyo was nearly 1.5 years ago.
> 2) Many stakeholders in openstack with people in that room may no
> longer be part of our community
> 3) Alignment with Kubernetes has become something important at many
> levels inside of OpenStack (which puts some real weight on the etcd front)


 Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses etcd3 and
 b) etcd2 is no longer being worked on.

> 4) The containers ecosystem, which etcd came out of, has matured
> dramatically
>>>
>>> +1 for working towards etcd3 as a "base service", based on operator acceptance.
>>> +1 for liveness checks not causing silly DB churn.
>>>
>>> While we might not need/want an abstraction layer to hide the
>>> differences between different backends, a library (tooz and/or
>>> os-lively) so we all consistently use the tool seems to make sense.
>>>
>>> Maybe that means get tooz using etcd3 (Julien or Jay, or both maybe
>>> seemed keen?)
>>> Maybe the tooz API adds bits from the os-lively POC?
>>
>> I do have a concern where we immediately jump to a generic abstraction,
>> instead of using the underlying technology to the best of our ability.
>> It's really hard to break down and optimize the abstractions later.
>> We've got all sorts of cruft (and inefficiencies) in our DB access layer
>> because of this (UUIDs stored as UTF8 strings being a good example).
>>
>> I'd definitely be more interested in etcd3 as a defined base service,
>> people can use it directly. See what kind of patterns people come up
>> with. Abstract late once the patterns are there.
> 
> Good point.
> 
> +1 to collecting the patterns.
> That's the bit I didn't want to throw away.

++

If the 1.5 years has changed us, I think just depending on etcd3 would
be awesome.

FWIW - when we added zk to zuul/nodepool, we chose a single system
(although the opposite system, but that's not the interesting thing
here) for all the reasons mentioned about being able to dive deep into
the actual tool. It has worked well for us. However, we did wind up
writing an in-tree API on top of kazoo almost immediately, just because
it made the logical operations we needed easier to work with elsewhere
in the code.

So - yeah - use the tech without a generic abstraction - but patterns
can help a bunch.



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Monty Taylor
On 03/15/2017 11:37 AM, Davanum Srinivas wrote:
> Monty, Team,
> 
> Sorry for the top post:
> 
> Support for etcd/tooz in devstack (with file driver as default) -
> https://review.openstack.org/#/c/445432/
> 
> As of right now both the zookeeper driver and the etcd driver are working fine:
> https://review.openstack.org/#/c/445630/
> https://review.openstack.org/#/c/445629/
> 
> The problem we have from before is that we do not have any CI jobs
> that use zookeeper.
> 
> I am leaning towards just making etcd the default, and if folks
> are interested in zookeeper then they can add specific CI jobs with
> DLM_BACKEND variable set.

That doesn't bother me - zk as the default choice was because at the
time zk worked and etcd did not.

That said - etcd3 is a newer/better thing - so maybe instead of driving
etcd home as a default before we add etcd3 support, we just change tooz
to support etcd3, add the devstack jobs to use that, and start from a
position that doesn't involve dealing with any legacy?

> On Tue, Mar 14, 2017 at 11:00 PM, Monty Taylor  wrote:
>> On 03/15/2017 03:13 AM, Jay Pipes wrote:
>>> On 03/14/2017 05:01 PM, Clint Byrum wrote:
 Excerpts from Jay Pipes's message of 2017-03-14 15:30:32 -0400:
> On 03/14/2017 02:50 PM, Julien Danjou wrote:
>> On Tue, Mar 14 2017, Jay Pipes wrote:
>>
>>> Not tooz, because I'm not interested in a DLM nor leader election
>>> library
>>> (that's what the underlying etcd3 cluster handles for me), only a
>>> fast service
>>> liveness/healthcheck system, but it shows usage of etcd3 and Google
>>> Protocol
>>> Buffers implementing a simple API for liveness checking and host
>>> maintenance
>>> reporting.
>>
>> Cool cool. So that's the same feature that we implemented in tooz 3
>> years ago. It's called "group membership". You create a group, make
>> nodes join it, and you know who's dead/alive and get notified when
>> their
>> status change.
>
> The point of os-lively is not to provide a thin API over ZooKeeper's
> group membership interface. The point of os-lively is to remove the need
> to have a database (RDBMS) record of a service in Nova.

 That's also the point of tooz's group membership API:

 https://docs.openstack.org/developer/tooz/compatibility.html#grouping
>>>
>>> Did you take a look at the code I wrote in os-lively? What part of the
>>> tooz group membership API do you think I would have used?
>>>
>>> Again, this was a weekend project that I was moving fast on. I looked at
>>> tooz and didn't see how I could use it for my purposes, which was to
>>> store a versioned object in a consistent key/value store with support
>>> for transactional semantics when storing index and data records at the
>>> same time [1]
>>>
>>> https://github.com/jaypipes/os-lively/blob/master/os_lively/service.py#L468-L511
>>>
>>>
>>> etcd3 -- and specifically etcd3, not etcd2 -- supports the transactional
>>> semantics in a consistent key/value store that I needed.
>>>
>>> tooz is cool, but it's not what I was looking for. It's solving a
>>> different problem than I was trying to solve.
>>>
>>> This isn't a case of NIH, despite what Julien is trying to intimate in
>>> his emails.
>>>
> tooz simply abstracts a group membership API across a number of drivers.
> I don't need that. I need a way to maintain a service record (with
> maintenance period information, region, and an evolvable data record
> format) and query those service records in an RDBMS-like manner but
> without the RDBMS being involved.
>
>>> servicegroup API with os-lively and eliminate Nova's use of an
>>> RDBMS for
>>> service liveness checking, which should dramatically reduce the
>>> amount of both
>>> DB traffic as well as conductor/MQ service update traffic.
>>
>> Interesting. Joshua and Vilob tried to push usage of tooz group
>> membership a couple of years ago, but it got nowhere. Well, no, they
>> got
>> 2 specs written IIRC:
>>
>>
>> https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/service-group-using-tooz.html
>>
>>
>> But then it died for whatever reasons on Nova side.
>
> It died because it didn't actually solve a problem.
>
> The problem is that even if we incorporate tooz, we would still need to
> have a service table in the RDBMS and continue to query it over and over
> again in the scheduler and API nodes.

 Most likely it was designed with hesitance to have a tooz requirement
 to be a source of truth. But it's certainly not a problem for most tooz
 backends to be a source of truth. Certainly not for etcd or ZK, which
 are both designed to be that.

> I want all service information in the same place, and I don't want to
> use an RDBMS for that information. etcd3 provides an ideal place to
> store service 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread Sean Dague
On 03/15/2017 02:16 AM, Clint Byrum wrote:
> Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
>> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
>>> Team,
>>>
>>> So one more thing popped up again on IRC:
>>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>>
>>> What do you think? interested in this work?
>>>
>>> Thanks,
>>> Dims
>>>
>>> PS: Between this thread and the other one about Tooz/DLM and
>>> os-lively, we can probably make a good case to add etcd as a base
>>> always-on service.
>>
>> As I mentioned in the other thread, there was specific and strong
>> anti-etcd sentiment in Tokyo which is why we decided to use an
>> abstraction. I continue to be in favor of us having one known service in
>> this space, but I do think that it's important to revisit that decision
>> fully and in context of the concerns that were raised when we tried to
>> pick one last time.
>>
>> It's worth noting that there is nothing particularly etcd-ish about
>> storing config that couldn't also be done with zk and thus just be an
>> additional api call or two added to Tooz with etcd and zk drivers for it.
>>
> 
> Combine that thought with the "please have an ingest/export" thought,
> and I think you have a pretty operator-friendly transition path. Would
> be pretty great to have a release of OpenStack that just lets you add
> an '[etcd]', or '[config-service]' section maybe, to your config files,
> and then once you've fully migrated everything, lets you delete all the
> other sections. Then the admin nodes still have the full configs and
> one can just edit configs in git and roll them out by ingesting.
> 
> (Then the magical rainbow fairy ponies teach our services to watch their
> config service for changes and restart themselves).

Make sure to add:

... (after fully quiescing, when they are not processing any inflight
work, when they are part of a pool so that they can be rolling restarted
without impacting other services trying to connect to them, with a
rollback to past config should the new config cause a crash).

There are a ton of really interesting things about a network registry,
that makes many things easier. However, from an operational point of
view I would be concerned about the idea of services restarting
themselves in a non orchestrated manner. Or that a single key set in the
registry triggers a complete reboot of the cluster. It's definitely less
clear to understand the linkage of the action that took down your cloud
and why when the operator isn't explicit about "and restart this service
now".

-Sean

-- 
Sean Dague
http://dague.net
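The ingest side of the idea quoted above (treat the INI files in git as the source of truth and publish them into a config service) could be sketched as a simple flattening step. The key layout below is an invented convention for illustration, not an oslo.config feature:

```python
import configparser


def ingest(ini_text, prefix='/config/nova'):
    """Flatten an oslo-style INI config into key/value pairs suitable
    for publishing to a config service.  The key layout
    (/config/<service>/<section>/<option>) is a made-up convention."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    flat = {}
    for section in parser.sections():
        for option, value in parser.items(section):
            flat['%s/%s/%s' % (prefix, section, option)] = value
    return flat


sample = """
[database]
connection = sqlite:///nova.db

[keystone_authtoken]
auth_url = http://keystone:5000
"""

for key, value in sorted(ingest(sample).items()):
    print(key, '=', value)
```

Export is just the inverse walk, which is what makes the transition path operator-friendly: the admin node keeps the full INI files and re-ingests after every git change.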

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Nicolas Trangez
On Wed, 2017-03-15 at 08:33 -0400, Sean Dague wrote:
> On 03/15/2017 08:10 AM, John Garbutt wrote:
> > On 15 March 2017 at 11:58, Jay Pipes  wrote:
> > > On 03/15/2017 07:44 AM, Sean Dague wrote:
> > > > 
> > > > On 03/14/2017 11:00 PM, Monty Taylor wrote:
> > > > 
> > > > > 
> > > > > a) awesome. when the rest of this dips momentarily into words
> > > > > that might
> > > > > sound negative, please hear it all wrapped in an "awesome"
> > > > > and know that
> > > > > my personal desire is to see the thing you're working on be
> > > > > successful
> > > > > without undue burden...
> > > > > 
> > > > > b) In Tokyo, we had the big discussion about DLMs (where at
> > > > > least my
> > > > > intent going in to the room was to get us to pick one and
> > > > > only one).
> > > > > There were three camps in the room who were all vocal:
> > > > > 
> > > > > 1) YES! Let's just pick one, I don't care which one
> > > > > 2) I hate Java I don't want to run Zookeeper, so we can't
> > > > > pick that
> > > > > 3) I hate go/don't trust coreos I don't want to run etcd so
> > > > > we can't
> > > > > pick that
> > > > > 
> > > > > Because of 2 and 3 the group represented by 1 lost and we
> > > > > ended up with:
> > > > > "crap, we have to use an abstraction library"
> > > > > 
> > > > > I'd argue that unless something has changed significantly,
> > > > > having Nova
> > > > > grow a direct depend on etcd when the DLM discussion brought
> > > > > us to "the
> > > > > operators in the room have expressed a need for a pluggable
> > > > > choice
> > > > > between at least zk and etcd" should be pretty much a non-
> > > > > starter.
> > > > > 
> > > > > Now, being that I was personally in group 1, I'd be THRILLED
> > > > > if we
> > > > > could, as a community, decide to pick one and skip having an
> > > > > abstraction
> > > > > library. I still don't care which one - and you know I love
> > > > > gRPC/protobuf.
> > > > > 
> > > > > But I do think that given the anti-etcd sentiment that was
> > > > > expressed was
> > > > > equally as vehement as the anti-zk sentiment, that we need to
> > > > > circle
> > > > > back and make a legit call on this topic.
> > > > > 
> > > > > If we can pick one, I think having special-purpose libraries
> > > > > like
> > > > > os-lively for specific purposes would be neat.
> > > > > 
> > > > > If we still can't pick one, then I think adding the liveness
> > > > > check you
> > > > > implemented for os-lively as a new feature in tooz and also
> > > > > implementing
> > > > > the same thing in the zk driver would be necessary. (of
> > > > > course, that'll
> > > > > probably depend on getting etcd3 support added to tooz and
> > > > > making sure
> > > > > there is a good functional test for etcd3.)
> > > > 
> > > > 
> > > > We should also make it clear that:
> > > > 
> > > > 1) Tokyo was nearly 1.5 years ago.
> > > > 2) Many stake holders in openstack with people in that room may
> > > > no
> > > > longer be part of our community
> > > > 3) Alignment with Kubernetes has become something important at
> > > > many
> > > > levels inside of OpenStack (which puts some real weight on the
> > > > etcd front)
> > > 
> > > 
> > > Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses
> > > etcd3 and
> > > b) etcd2 is no longer being worked on.
> > > 
> > > > 4) The containers ecosystem, which etcd came out of, has
> > > > matured
> > > > dramatically
> > 
> > +1 for working towards etcd3 as a "base service", based on operator
> > acceptance.
> > +1 for liveness checks not causing silly DB churn.
> > 
> > While we might not need/want an abstraction layer to hide the
> > differences between different backends, a library (tooz and/or
> > os-lively) so we all consistently use the tool seems to make sense.
> > 
> > Maybe that means get tooz using etcd3 (Julien or Jay, or both maybe
> > seemed keen?)
> > Maybe the tooz API adds bits from the os-lively POC?
> 
> I do have a concern where we immediately jump to a generic
> abstraction,
> instead of using the underlying technology to the best of our
> ability.
> It's really hard to break down and optimize the abstractions later.
> We've got all sorts of cruft (and inefficiencies) in our DB access
> layer
> because of this (UUIDs stored as UTF8 strings being a good example).
> 
> I'd definitely be more interested in etcd3 as a defined base service,
> people can use it directly. See what kind of patterns people come up
> with. Abstract late once the patterns are there.

I'm not involved in Oslo or Devstack, so I don't really have a say in this, but:

Yes! A 1000 times yes! An 'abstraction' with only a single
implementation is (1) almost by definition not an abstraction, and (2)
in 99% of the cases just exposing a crippled version of the underlying
system's features.
Furthermore, adding a second implementation of the interface backed by
another system at some later point in time turns out to be difficult
(or impossible) because the 'abstraction' 

Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread John Garbutt
On 15 March 2017 at 12:33, Sean Dague  wrote:
> On 03/15/2017 08:10 AM, John Garbutt wrote:
>> On 15 March 2017 at 11:58, Jay Pipes  wrote:
>>> On 03/15/2017 07:44 AM, Sean Dague wrote:

 On 03/14/2017 11:00 PM, Monty Taylor wrote:
 
>
> a) awesome. when the rest of this dips momentarily into words that might
> sound negative, please hear it all wrapped in an "awesome" and know that
> my personal desire is to see the thing you're working on be successful
> without undue burden...
>
> b) In Tokyo, we had the big discussion about DLMs (where at least my
> intent going in to the room was to get us to pick one and only one).
> There were three camps in the room who were all vocal:
>
> 1) YES! Let's just pick one, I don't care which one
> 2) I hate Java I don't want to run Zookeeper, so we can't pick that
> 3) I hate go/don't trust coreos I don't want to run etcd so we can't
> pick that
>
> Because of 2 and 3 the group represented by 1 lost and we ended up with:
> "crap, we have to use an abstraction library"
>
> I'd argue that unless something has changed significantly, having Nova
> grow a direct depend on etcd when the DLM discussion brought us to "the
> operators in the room have expressed a need for a pluggable choice
> between at least zk and etcd" should be pretty much a non-starter.
>
> Now, being that I was personally in group 1, I'd be THRILLED if we
> could, as a community, decide to pick one and skip having an abstraction
> library. I still don't care which one - and you know I love
> gRPC/protobuf.
>
> But I do think that given the anti-etcd sentiment that was expressed was
> equally as vehement as the anti-zk sentiment, that we need to circle
> back and make a legit call on this topic.
>
> If we can pick one, I think having special-purpose libraries like
> os-lively for specific purposes would be neat.
>
> If we still can't pick one, then I think adding the liveness check you
> implemented for os-lively as a new feature in tooz and also implementing
> the same thing in the zk driver would be necessary. (of course, that'll
> probably depend on getting etcd3 support added to tooz and making sure
> there is a good functional test for etcd3.)


 We should also make it clear that:

 1) Tokyo was nearly 1.5 years ago.
 2) Many stake holders in openstack with people in that room may no
 longer be part of our community
 3) Alignment with Kubernetes has become something important at many
 levels inside of OpenStack (which puts some real weight on the etcd front)
>>>
>>>
>>> Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses etcd3 and
>>> b) etcd2 is no longer being worked on.
>>>
 4) The containers ecosystem, which etcd came out of, has matured
 dramatically
>>
>> +1 for working towards etcd3 as a "base service", based on operator acceptance.
>> +1 for liveness checks not causing silly DB churn.
>>
>> While we might not need/want an abstraction layer to hide the
>> differences between different backends, a library (tooz and/or
>> os-lively) so we all consistently use the tool seems to make sense.
>>
>> Maybe that means get tooz using etcd3 (Julien or Jay, or both maybe
>> seemed keen?)
>> Maybe the tooz API adds bits from the os-lively POC?
>
> I do have a concern where we immediately jump to a generic abstraction,
> instead of using the underlying technology to the best of our ability.
> It's really hard to break down and optimize the abstractions later.
> We've got all sorts of cruft (and inefficiencies) in our DB access layer
> because of this (UUIDs stored as UTF8 strings being a good example).
>
> I'd definitely be more interested in etcd3 as a defined base service,
> people can use it directly. See what kind of patterns people come up
> with. Abstract late once the patterns are there.

Good point.

+1 to collecting the patterns.
That's the bit I didn't want to throw away.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Sean Dague
On 03/13/2017 05:10 PM, Zane Bitter wrote:

>> I'm not sure I agree. One can very simply inject needed credentials
>> into a running VM and have it interact with the cloud APIs.
> 
> Demo please!
> 
> Most Keystone backends are read-only, you can't even create a new user
> account yourself. It's an admin-only API anyway. The only non-expiring
> credential you even *have*, ignoring the difficulties of getting it to
> the server, is your LDAP password. Would *you* put *your* LDAP password
> on an internet-facing server? I would not.

So is one of the issues with supporting cloud-native flows that our user
auth system, which often needs to connect into traditional enterprise
systems, doesn't really account for them?

I definitely agree, if your cloud is using your LDAP password, which
gets you into your health insurance and direct deposit systems at your
employer, sticking this into a cloud server is a no-go.

Thinking aloud, I wonder if user-provisionable sub-users would help
here. They would have all the same rights as the main user (except
modifying other sub-users), but would have a dedicated, user-provisioned
password. You basically can carve off the same thing from Google when
you have services that can't do the entire oauth/2factor path. Fastmail
rolled out something similar recently as well.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [notification] BlockDeviceMapping in InstancePayload

2017-03-15 Thread John Garbutt
On 13 March 2017 at 17:14, Balazs Gibizer  wrote:
> Hi,
>
> As part of the Searchlight integration we need to extend our instance
> notifications with BDM data [1]. As far as I understand the main goal is to
> provide enough data about the instance to Searchlight so that Nova can use
> Searchlight to generate the response of the GET /servers/{server_id}
> requests based on the data stored in Searchlight.
>
> I checked the server API response and I found one field that needs
> BDM-related data: os-extended-volumes:volumes_attached. Only the uuid of the
> volume and the value of delete_on_terminate are provided in the API response.
>
> I have two options about what to add to the InstancePayload and I want to
> get some opinions about which direction we should go with the
> implementation.
>
> Option A: Add only the minimum required information from the BDM to the
> InstancePayload
>
>  additional InstancePayload field:
>  block_devices: ListOfObjectsField(BlockDevicePayload)
>
>  class BlockDevicePayload(base.NotificationPayloadBase):
>      fields = {
>          'delete_on_termination': fields.BooleanField(default=False),
>          'volume_id': fields.StringField(nullable=True),
>      }
>
> This payload would be generated from the BDMs connected to the instance
> where the BDM.destination_type == 'volume'.
>
>
> Option B: Provide a comprehensive set of BDM attributes
>
>  class BlockDevicePayload(base.NotificationPayloadBase):
>      fields = {
>          'source_type': fields.BlockDeviceSourceTypeField(nullable=True),
>          'destination_type': fields.BlockDeviceDestinationTypeField(
>              nullable=True),
>          'guest_format': fields.StringField(nullable=True),
>          'device_type': fields.BlockDeviceTypeField(nullable=True),
>          'disk_bus': fields.StringField(nullable=True),
>          'boot_index': fields.IntegerField(nullable=True),
>          'device_name': fields.StringField(nullable=True),
>          'delete_on_termination': fields.BooleanField(default=False),
>          'snapshot_id': fields.StringField(nullable=True),
>          'volume_id': fields.StringField(nullable=True),
>          'volume_size': fields.IntegerField(nullable=True),
>          'image_id': fields.StringField(nullable=True),
>          'no_device': fields.BooleanField(default=False),
>          'tag': fields.StringField(nullable=True)
>      }
>
> In this case Nova would provide every BDM attached to the instance not just
> the volume ones.
>
> I intentionally left out connection_info and the db id as those seems really
> system internal.
> I also left out the instance related references as this BlockDevicePayload
> would be part of an InstancePayload, which has the instance uuid already.

+1 leaving those out.

> What do you think, which direction we should go?

There are discussions around extending the info we give out about BDMs
in the API.

What about something in between: list all types of BDMs, but include a
touch more info so you can tell which ones are volumes for sure.

  class BlockDevicePayload(base.NotificationPayloadBase):
    fields = {
        'destination_type': fields.BlockDeviceDestinationTypeField(
            nullable=True),  # Maybe just called "type"?
        'boot_index': fields.IntegerField(nullable=True),
        'device_name': fields.StringField(nullable=True),  # do we ignore that now?
        'delete_on_termination': fields.BooleanField(default=False),
        'volume_id': fields.StringField(nullable=True),
        'tag': fields.StringField(nullable=True)
    }
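For concreteness, the difference between Option A's volume-only filtering and listing every BDM could be sketched like this — plain dataclasses standing in for nova's ovo-based objects, so all names here are illustrative rather than nova's actual implementation:

```python
# Illustrative sketch only: plain dataclasses stand in for nova's
# versioned-object BlockDeviceMapping / BlockDevicePayload classes.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class BlockDeviceMapping:
    destination_type: str              # 'volume' or 'local'
    delete_on_termination: bool
    volume_id: Optional[str]
    boot_index: Optional[int] = None
    tag: Optional[str] = None


@dataclass
class BlockDevicePayload:
    destination_type: str
    delete_on_termination: bool
    volume_id: Optional[str]
    boot_index: Optional[int]
    tag: Optional[str]


def build_payloads(bdms: List[BlockDeviceMapping],
                   volumes_only: bool) -> List[BlockDevicePayload]:
    """volumes_only=True is Option A; False emits every BDM, as in
    the in-between proposal."""
    return [BlockDevicePayload(destination_type=b.destination_type,
                               delete_on_termination=b.delete_on_termination,
                               volume_id=b.volume_id,
                               boot_index=b.boot_index,
                               tag=b.tag)
            for b in bdms
            if not volumes_only or b.destination_type == 'volume']
```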

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread John Garbutt
On 13 March 2017 at 21:10, Zane Bitter  wrote:
> Yes. this is a problem with the default policy - if you have *any* role in a
> project then you get write access to everything in that project. I don't
> know how I can even call this role-based, since everybody has access to
> everything regardless of their roles.
>
> Keystone folks are working on a new global default policy. The new policy
> will require specific reader/writer roles on a project to access any of that
> project's data (I attended the design session and insisted on it). That will
> free up services to create their own limited-scope roles without the
> consequence of opening up full access to every other OpenStack API. e.g.
> it's easy to imagine a magnum-tenant role that has permissions to move
> Neutron ports around but nothing else.
>
> We ultimately need finer-grained authorisation than that - we'll want users
> to be able to specify permissions for particular resources, and since most
> users are not OpenStack projects we'll need them to be able to do it for
> roles (or specific user accounts) that are not predefined in policy.json.
> With the other stuff in place that's at least do-able in individual projects
> though, and if a few projects can agree on a common approach then it could
> easily turn into e.g. an Oslo library, even if it never turns into a
> centralised authorisation service.

I would love feedback on these three Nova specs currently reworking
our default policy:
https://review.openstack.org/#/c/427872/

It clearly doesn't get us all the way there, but I think it lays the
foundations to build what you suggest.
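As a toy illustration of the reader/writer scoping under discussion — this is plain Python mimicking the semantics, not oslo.policy, and the rule and role names are made up, not nova's actual defaults:

```python
# Toy stand-in for the proposed policy semantics: an action requires BOTH
# a specific role AND credentials scoped to the target's project.
# Rule/role names are illustrative, not real policy defaults.
POLICY = {
    'server:show':   'reader',
    'server:update': 'writer',
    'port:update':   'magnum-tenant',   # the limited-scope role imagined above
}


def enforce(action, creds, target):
    """True iff creds carry the required role and match the target project."""
    required_role = POLICY[action]
    return (required_role in creds['roles']
            and creds['project_id'] == target['project_id'])
```

With that split, having *any* role in a project no longer grants write access to everything in it.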

In a related note, there is this old idea I am trying to write up for
Trove/Magnum concerns (now we have proper service token support in
keystoneauth and keystone middleware):
https://review.openstack.org/#/c/438134/

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Sean Dague
On 03/15/2017 08:10 AM, John Garbutt wrote:
> On 15 March 2017 at 11:58, Jay Pipes  wrote:
>> On 03/15/2017 07:44 AM, Sean Dague wrote:
>>>
>>> On 03/14/2017 11:00 PM, Monty Taylor wrote:
>>> 

 a) awesome. when the rest of this dips momentarily into words that might
 sound negative, please hear it all wrapped in an "awesome" and know that
 my personal desire is to see the thing you're working on be successful
 without undue burden...

 b) In Tokyo, we had the big discussion about DLMs (where at least my
 intent going in to the room was to get us to pick one and only one).
 There were three camps in the room who were all vocal:

 1) YES! Let's just pick one, I don't care which one
 2) I hate Java I don't want to run Zookeeper, so we can't pick that
 3) I hate go/don't trust coreos I don't want to run etcd so we can't
 pick that

 Because of 2 and 3 the group represented by 1 lost and we ended up with:
 "crap, we have to use an abstraction library"

 I'd argue that unless something has changed significantly, having Nova
 grow a direct depend on etcd when the DLM discussion brought us to "the
 operators in the room have expressed a need for a pluggable choice
 between at least zk and etcd" should be pretty much a non-starter.

 Now, being that I was personally in group 1, I'd be THRILLED if we
 could, as a community, decide to pick one and skip having an abstraction
 library. I still don't care which one - and you know I love
 gRPC/protobuf.

 But I do think that given the anti-etcd sentiment that was expressed was
 equally as vehement as the anti-zk sentiment, that we need to circle
 back and make a legit call on this topic.

 If we can pick one, I think having special-purpose libraries like
 os-lively for specific purposes would be neat.

 If we still can't pick one, then I think adding the liveness check you
 implemented for os-lively as a new feature in tooz and also implementing
 the same thing in the zk driver would be necessary. (of course, that'll
 probably depend on getting etcd3 support added to tooz and making sure
 there is a good functional test for etcd3.)
>>>
>>>
>>> We should also make it clear that:
>>>
>>> 1) Tokyo was nearly 1.5 years ago.
>>> 2) Many stake holders in openstack with people in that room may no
>>> longer be part of our community
>>> 3) Alignment with Kubernetes has become something important at many
>>> levels inside of OpenStack (which puts some real weight on the etcd front)
>>
>>
>> Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses etcd3 and
>> b) etcd2 is no longer being worked on.
>>
>>> 4) The containers ecosystem, which etcd came out of, has matured
>>> dramatically
> 
> +1 for working towards etcd3 as a "base service", based on operator acceptance.
> +1 for liveness checks not causing silly DB churn.
> 
> While we might not need/want an abstraction layer to hide the
> differences between backends, a library (tooz and/or os-lively)
> so we all use the tool consistently seems to make sense.
> 
> Maybe that means get tooz using etcd3 (Julian or Jay, or both maybe
> seemed keen?)
> Maybe the tooz API adds bits from the os-lively POC?

I do have a concern where we immediately jump to a generic abstraction,
instead of using the underlying technology to the best of our ability.
It's really hard to break down and optimize the abstractions later.
We've got all sorts of cruft (and inefficiencies) in our DB access layer
because of this (UUIDs stored as UTF8 strings being a good example).

I'd definitely be more interested in etcd3 as a defined base service,
people can use it directly. See what kind of patterns people come up
with. Abstract late once the patterns are there.
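To make "use etcd3 directly" concrete, the lease/TTL liveness pattern that os-lively builds on could be sketched like this — an in-memory stand-in for etcd3's lease API, no real etcd involved, and all names are illustrative:

```python
import time


class LeaseStore:
    """In-memory stand-in for etcd3 leases: a service registers its key
    under a short TTL and keeps refreshing it; if the service dies, the
    key expires and peers observe it as down -- no DB heartbeat churn."""

    def __init__(self):
        self._data = {}   # key -> (value, monotonic expiry time)

    def put(self, key, value, ttl):
        # Write the key with a fresh lease of `ttl` seconds.
        self._data[key] = (value, time.monotonic() + ttl)

    def refresh(self, key, ttl):
        # Extend the lease without rewriting the value (the heartbeat).
        if key in self._data:
            value, _ = self._data[key]
            self._data[key] = (value, time.monotonic() + ttl)

    def is_alive(self, key):
        # A peer considers the service up iff the key has not expired.
        entry = self._data.get(key)
        return entry is not None and entry[1] > time.monotonic()
```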

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] API stability and change guidelines

2017-03-15 Thread Chris Dent


Short Version:

Please review "Refactor and re-validate api change guidelines"
https://review.openstack.org/#/c/421846/ this week because we
intend to freeze it next week, and I've heard a lot of people
comment in lots of places but not on the review.

Long Version:

At the PTG we had nearly a full day (Monday) of discussion about
guidelines for stability and change handling in APIs. Many of the
participants, especially in the morning, had very strong feelings
and opinions.

Since then the document has been lightly refined, but most of the
participation has been from people who might be described as already
in agreement [1] with the fundamental principles which are:

* effectively all change needs to be versioned
* the best tool we have right now for doing that versioning are
  microversions
* all versions should be kept around as long as possible,
  potentially forever
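For anyone new to the mechanics: a microversion is opted into per request via a header, so "all change is versioned" looks like this in practice. No network here — this just builds the headers a client such as novaclient would send; the token value is a placeholder:

```python
def compute_request_headers(token, microversion=None):
    """Build headers for a compute API call. OpenStack-API-Version is the
    real microversion header; omitting it gets the server's minimum
    supported version, while asking for a version outside the server's
    [min, max] range yields 406 Not Acceptable."""
    headers = {'X-Auth-Token': token, 'Accept': 'application/json'}
    if microversion is not None:
        headers['OpenStack-API-Version'] = 'compute %s' % microversion
    return headers
```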

I've been keeping the review open and lingering because I've been
hoping to get input from those people who disagree or have concerns
so that those issues are at least recorded on the review. Some of
the issues are centered around:

* general dislike of the microversion tech
* concerns about the degree of technical debt and backwards
  compatibility cruft that the guideline implies will need to be
  carried
* the way in which having such a guideline enables and even
  encourages deployments to never or rarely upgrade
* fears that such a guideline will be used as a whip against
  projects that want to fix things in way that violates them
* conversely, fears that such a guideline won't have sufficient
  teeth to drive projects towards consistency

If you share any of these concerns, please comment on the review
at https://review.openstack.org/#/c/421846/ so everyone involved can
know about it and we can have it in the record and hopefully make
the document more fully relevant as a result.

Thank you for participating.

[1] or at least accepting of as a useful compromise

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-15 Thread John Garbutt
On 13 March 2017 at 15:17, Jay Pipes  wrote:
> On 03/13/2017 11:13 AM, Dan Smith wrote:
>>
>> Interestingly, we just had a meeting about cells and the scheduler,
>> which had quite a bit of overlap on this topic.
>>
>>> That said, as mentioned in the previous email, the priorities for Pike
>>> (and likely Queens) will continue to be, in order: traits, ironic,
>>> shared resource pools, and nested providers.
>>
>>
>> Given that the CachingScheduler is still a thing until we get claims in
>> the scheduler, and given that CachingScheduler doesn't use placement
>> like the FilterScheduler does, I think we need to prioritize the claims
>> part of the above list.
>>
>> Based on the discussion several of us just had, the priority list
>> actually needs to be this:
>>
>> 1. Traits
>> 2. Ironic
>> 3. Claims in the scheduler
>> 4. Shared resources
>> 5. Nested resources
>>
>> Claims in the scheduler is not likely to be a thing for Pike, but should
>> be something we do as much prep for as possible, and land early in Queens.
>>
>> Personally, I think getting to the point of claiming in the scheduler
>> will be easier if we have placement in tree, and anything we break in
>> that process will be easier to backport if they're in the same tree.
>> However, I'd say that after that goal is met, splitting placement should
>> be good to go.
> ++

+1 from me, a bit late I know.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swg][tc] Moving Stewardship Working Group meeting

2017-03-15 Thread John Garbutt
On 15 March 2017 at 09:50, Thierry Carrez  wrote:
> Colette Alexander wrote:
>> Currently the Stewardship Working Group meetings every other Thursday at
>> 1400 UTC.
>>
>> We've had a couple of pings from folks who are interested in joining us
>> for meetings that live in US Pacific Time, and that Thursday time isn't
>> terribly conducive to them being able to make meetings. So - the
>> question is when to move it to, if we can.
>>
>> A quick glance at the rest of the Thursday schedule shows the 1500 and
>> 1600 time slots available (in #openstack-meeting I believe). I'm
>> hesitant to go beyond that in the daytime because we also need to
>> accommodate attendees in Western Europe.
>>
>> Thoughts on whether either of those works from SWG members and anyone
>> who might like to drop in? We can also look into having meetings once a
>> week, and potentially alternating times between the two to help
>> accommodate the spread of people.
>>
>> Let me know what everyone thinks - and for this week I'll see anyone who
>> can make it at 1400 UTC on Thursday.
>
> Alternatively, we could try to come up with ways to avoid regular
> meetings altogether. That would certainly be a bit experimental, but the
> SWG sounds like a nice place to experiment with more inclusive ways of
> coordination.
>
> IMHO meetings serve three purposes. The first is to provide a regular
> rhythm and force people to make progress on stated objectives. You give
> status updates, lay down actions, make sure nothing is stuck. The second
> is to provide quick progress on specific topics -- by having multiple
> people around at the same time you can quickly iterate through ideas and
> options. The third is to expose an entry point to new contributors: if
> they are interested they will look for a meeting to get the temperature
> on a workgroup and potentially jump in.
>
> I'm certainly guilty of being involved in too many things, so purpose
> (1) is definitely helpful to force me to make regular progress, but it
> also feels like something a good status board could do better, and async.
>
> The second purpose is definitely helpful, but I'd say that ad-hoc
> meetings (or discussions in a IRC channel) are a better way to achieve
> the result. You just need to come up with a one-time meeting point where
> all the interested parties will be around, and that's usually easier
> than to pick a weekly time that will work for everyone all the time. We
> just need to invent tooling that would facilitate organizing and
> tracking those.
>
> For the third, I think using IRC channels as the on-boarding mechanism
> is more efficient -- meetings are noisy, busy and not so great for
> newcomers. If we ramped up channel activity (and generally made IRC
> channels more discoverable), I don't think any newcomer would ever use
> meetings to "tune in".
>
> Am I missing something that only meetings could ever provide ? If not it
> feels like the SWG could experiment with meeting-less coordination by
> replacing it with better async status coordination / reminder tools,
> some framework to facilitate ad-hoc discussions, and ramping up activity
> in IRC channel. If that ends up being successful, we could promote our
> techniques to the rest of OpenStack.

+1 for trying out a meeting-less group ourselves.

In the absence of tooling, could we replace the meeting with weekly
email reporting current working streams, and what's planned next? That
would include fixing any problems we face trying to work well
together.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread John Garbutt
On 15 March 2017 at 11:58, Jay Pipes  wrote:
> On 03/15/2017 07:44 AM, Sean Dague wrote:
>>
>> On 03/14/2017 11:00 PM, Monty Taylor wrote:
>> 
>>>
>>> a) awesome. when the rest of this dips momentarily into words that might
>>> sound negative, please hear it all wrapped in an "awesome" and know that
>>> my personal desire is to see the thing you're working on be successful
>>> without undue burden...
>>>
>>> b) In Tokyo, we had the big discussion about DLMs (where at least my
>>> intent going in to the room was to get us to pick one and only one).
>>> There were three camps in the room who were all vocal:
>>>
>>> 1) YES! Let's just pick one, I don't care which one
>>> 2) I hate Java I don't want to run Zookeeper, so we can't pick that
>>> 3) I hate go/don't trust coreos I don't want to run etcd so we can't
>>> pick that
>>>
>>> Because of 2 and 3 the group represented by 1 lost and we ended up with:
>>> "crap, we have to use an abstraction library"
>>>
>>> I'd argue that unless something has changed significantly, having Nova
>>> grow a direct depend on etcd when the DLM discussion brought us to "the
>>> operators in the room have expressed a need for a pluggable choice
>>> between at least zk and etcd" should be pretty much a non-starter.
>>>
>>> Now, being that I was personally in group 1, I'd be THRILLED if we
>>> could, as a community, decide to pick one and skip having an abstraction
>>> library. I still don't care which one - and you know I love
>>> gRPC/protobuf.
>>>
>>> But I do think that given the anti-etcd sentiment that was expressed was
>>> equally as vehement as the anti-zk sentiment, that we need to circle
>>> back and make a legit call on this topic.
>>>
>>> If we can pick one, I think having special-purpose libraries like
>>> os-lively for specific purposes would be neat.
>>>
>>> If we still can't pick one, then I think adding the liveness check you
>>> implemented for os-lively as a new feature in tooz and also implementing
>>> the same thing in the zk driver would be necessary. (of course, that'll
>>> probably depend on getting etcd3 support added to tooz and making sure
>>> there is a good functional test for etcd3.)
>>
>>
>> We should also make it clear that:
>>
>> 1) Tokyo was nearly 1.5 years ago.
>> 2) Many stake holders in openstack with people in that room may no
>> longer be part of our community
>> 3) Alignment with Kubernetes has become something important at many
>> levels inside of OpenStack (which puts some real weight on the etcd front)
>
>
> Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses etcd3 and
> b) etcd2 is no longer being worked on.
>
>> 4) The containers ecosystem, which etcd came out of, has matured
>> dramatically

+1 for working towards etcd3 as a "base service", based on operator acceptance.
+1 for liveness checks not causing silly DB churn.

While we might not need/want an abstraction layer to hide the
differences between backends, a library (tooz and/or os-lively)
so we all use the tool consistently seems to make sense.

Maybe that means get tooz using etcd3 (Julian or Jay, or both maybe
seemed keen?)
Maybe the tooz API adds bits from the os-lively POC?

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-15 Thread John Garbutt
On 15 March 2017 at 03:23, Monty Taylor  wrote:
> On 03/15/2017 12:05 AM, Joshua Harlow wrote:
>> So just fyi, this has been talked about before (but prob in context of
>> zookeeper or various other pluggable config backends).
>>
>> Some links:
>>
>> - https://review.openstack.org/#/c/243114/
>> - https://review.openstack.org/#/c/243182/
>> - https://blueprints.launchpad.net/oslo.config/+spec/oslo-config-db
>> - https://review.openstack.org/#/c/130047/
>>
>> I think the general questions that seem to reappear are around the
>> following:
>>
>> * How does reloading work (does it)?
>>
>> * What's the operational experience (editing an ini file is about the
>> lowest bar we can possibly get to, for better and/or worse).
>
> As a person who operates many softwares (but who does not necessarily
> operate OpenStack specifically) I will say that services that store
> their config in a service that do not have an ingest/update facility
> from file are a GIANT PITA to deal with. Config management is great at
> laying down config files. It _can_ put things into services, but that's
> almost always more work.
>
> Which is my way of saying - neat, but please please please whoever
> writes this make a simple facility that will let someone plop config
> into a file on disk and get that noticed and slurped into the config
> service. A one-liner command line tool that one runs on the config file
> to splat into the config service would be fine.

+1 for keeping the simple use a config file working well.

(+1 for trying other things too, if they don't break the simple way)

Thanks,
John
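Monty's "one-liner" ingest tool is simple enough to sketch: flatten an oslo-style ini file into key/value pairs that a config-service client could then store under some prefix. The /config/nova prefix, and the idea of pairing this with an etcd put per pair, are assumptions here, not an existing tool:

```python
import configparser


def ini_to_kv(ini_text, prefix='/config/nova'):
    """Flatten [section]/option pairs into config-service keys.
    A real tool would follow this with e.g. one etcd put per pair."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    kv = {}
    for section in cp.sections():
        for option, value in cp.items(section):
            kv['%s/%s/%s' % (prefix, section, option)] = value
    return kv
```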

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-15 Thread Jay Pipes

On 03/15/2017 07:44 AM, Sean Dague wrote:

On 03/14/2017 11:00 PM, Monty Taylor wrote:


a) awesome. when the rest of this dips momentarily into words that might
sound negative, please hear it all wrapped in an "awesome" and know that
my personal desire is to see the thing you're working on be successful
without undue burden...

b) In Tokyo, we had the big discussion about DLMs (where at least my
intent going in to the room was to get us to pick one and only one).
There were three camps in the room who were all vocal:

1) YES! Let's just pick one, I don't care which one
2) I hate Java I don't want to run Zookeeper, so we can't pick that
3) I hate go/don't trust coreos I don't want to run etcd so we can't
pick that

Because of 2 and 3 the group represented by 1 lost and we ended up with:
"crap, we have to use an abstraction library"

I'd argue that unless something has changed significantly, having Nova
grow a direct depend on etcd when the DLM discussion brought us to "the
operators in the room have expressed a need for a pluggable choice
between at least zk and etcd" should be pretty much a non-starter.

Now, being that I was personally in group 1, I'd be THRILLED if we
could, as a community, decide to pick one and skip having an abstraction
library. I still don't care which one - and you know I love gRPC/protobuf.

But I do think that given the anti-etcd sentiment that was expressed was
equally as vehement as the anti-zk sentiment, that we need to circle
back and make a legit call on this topic.

If we can pick one, I think having special-purpose libraries like
os-lively for specific purposes would be neat.

If we still can't pick one, then I think adding the liveness check you
implemented for os-lively as a new feature in tooz and also implementing
the same thing in the zk driver would be necessary. (of course, that'll
probably depend on getting etcd3 support added to tooz and making sure
there is a good functional test for etcd3.)


We should also make it clear that:

1) Tokyo was nearly 1.5 years ago.
2) Many stake holders in openstack with people in that room may no
longer be part of our community
3) Alignment with Kubernetes has become something important at many
levels inside of OpenStack (which puts some real weight on the etcd front)


Yes, and even more so for etcd3 vs. etcd2, since a) k8s now uses etcd3 
and b) etcd2 is no longer being worked on.



4) The containers ecosystem, which etcd came out of, has matured
dramatically


Some of it has matured, yes :) Other parts of it continue to be a 
whirling maelstrom of unpredictability^Winnovation.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

