Re: [openstack-dev] [elections][security] Candidacy for Security Project PTL (Queens)

2017-08-04 Thread Rob C
+1

Luke has been an excellent contributor to the Security project and would be
an excellent PTL to take the project forward.



On Tue, Aug 1, 2017 at 8:30 AM, Luke Hinds  wrote:

> Hello All,
>
> I would like to announce my candidacy for Security Project PTL for
> Queens.
>
> I have been a member of the Security Project for 2-3 years, and a
> core member for one year.
>
> During my tenure as core I have managed public and embargoed security
> notes and contributed feedback to the VMT team on OpenStack
> vulnerabilities.
>
> I have also been an active contributor to the security guide as well as a
> regular reviewer. I am the current driver for the security guide
> launchpad page.
>
> As PTL, I'd like to focus on the following things:
>
> * Documentation
>
> I am currently planning a revamp of the Security guide to bring it up to
> date with Pike. To do this I will reach out to other projects to help
> validate that the information in the guide is technically correct and up
> to date.
>
> I also would like to migrate the checklists into a format that can be
> easily filtered to a specific release, thereby allowing other security
> tools and processes to easily consume the content and gain a snapshot
> of what security actions are required to harden any given release.
>
> I also plan to encourage others to get involved, with topics arranged for
> the coming PTG on key management.
>
> * Support and championing of OpenStack security projects.
>
> I would like to put forward continued support by means of reviews and
> feedback for the projects currently having their home under the
> security project, and I have plans to propose further projects. Our
> close synergy with the Barbican project should continue to be fostered,
> and encouraged.
>
> * Perform Threat Analysis with further projects
>
> The Threat Analysis project has proved very useful in helping the VMT
> and operators understand the threat landscape pertinent to each OpenStack
> project. I will work with and encourage other projects to undergo threat
> analysis.
>
> * Encourage more contributions and grow some new cores
>
> The security project has lost a good number of core members due to
> companies shifting priorities, so I would like to increase the project's
> exposure with blog posts to planet.openstack.org and by outreach at
> various other tech events. I see it as vital to keep the security
> project afloat, as operators rely so much on the project for
> guidance on securing OpenStack clouds.
>
> Regards,
>
> Luke Hinds (lhinds)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Sean McGinnis
On Fri, Aug 04, 2017 at 03:44:32PM -0700, Joshua Harlow wrote:
> Morgan Fainberg wrote:
> >On Fri, Aug 4, 2017 at 3:09 PM, Kevin L. Mitchell  wrote:
> >>On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:
> Maybe not, but please do recall that there are many deployers out
> there
> that track master, not fixed releases, so we need to take that
> level of
> compatibility into account.
> 
> Any idea of who are these deployers? I think I knew once who they might have
> been but I'm not really sure anymore. Are they still doing this (and can
> afford doing it)? Why don't we hear more about them? I'd expect that
> deployers (and their associated developer army) that are trying to do this
> would be the *most* active in IRC and in the mailing list yet I don't really
> see any such activity (which either means we never break them, which seems
> highly unlikely, or that they don't communicate through the normal channels,
> ie they go through some vendor, or that they just flat out don't exist
> anymore).
> 
> I'd personally really like to know how they do it (especially if they do not
> have an associated developer army)... Because they have always been a pink
> elephant that I've heard exists 'somewhere' and they manage to make this all
> work 'somehow'.
> 
> -Josh

I am curious too. I've been bothered by some of the compromises we've had to
make based on some people claiming CD from master is a "core tenet from the
beginning of OpenStack". But I have yet to find anywhere that we've stated
that we will commit to making continuous deployment from master a) possible,
and b) supported.

I would like us to either clearly declare that that type of deployment is
possible and expected and that the community agrees to support it, or that
it is possible for folks to try, but it will be on them if things don't work
out the way they had hoped.

Like I mentioned, we've made compromises based on this assertion, and I really
don't like that we've sacrificed good design and quality in order to support
a scenario that, as far as I've seen, 90+% of users and deployers would run
screaming from.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Monty Taylor

On 08/04/2017 04:03 PM, Dean Troyer wrote:

On Fri, Aug 4, 2017 at 11:52 AM, Monty Taylor  wrote:
[...]

* Both shade and python-openstackclient have investigated using openstacksdk
as their REST layer but were unable to because it puts some abstractions in
layers that make it impossible to do some of the advanced things we need.


OSC _does_ use the SDK for all Network API commands.  That was a
mistake in the sense that we did it before SDK 1.0 was released and
still do not have 1.0.


Yah - although it got you out of the competing cliff issue, so that was 
good.



* openstacksdk has internal implementations of things that exist at
different points in the stack. We just added full support for version
service and version discovery to keystoneauth, but openstacksdk has its own
layer for that so it both can't use the ksa implementation and is not
compliant with the API-WG consume guidelines.


This was intended to make it free from dependencies; when it was first
conceived, none of these other libs existed.


Oh - totally. It made total sense at the time ... it's just the 
surrounding context has changed.



I am coming around to believe that we need to slice a couple more
things up so they can be composed as needed rather than bite off the
entire thing in one piece.

[...]


I'd propose we have the shade team adopt the python-openstacksdk codebase.


++


This is obviously an aggressive suggestion and essentially represents a
takeover of a project. We don't have the luxury of humans to work on things
that we once had, so I think as a community we should be realistic about the
benefits of consolidation and the downside to continuing to have 2 different
python SDKs.


++

I thought it would be natural for OSC to adopt the SDK someday if
Brian did not get around to making it official, but the current
circumstances make it clear that we (OSC) do not have the resources to
do this.  This proposal is much better and leads to a natural
coalescence of the parallel goals and projects.


Doing that implies the following:

* Rework the underlying guts of openstacksdk to make it possible to replace
shade's REST layer with openstacksdk. openstacksdk still doesn't have a 1.0
release, so we can break the few things we'll need to break.


Sigh.  OSC has been using the Network components of the SDK for a long
time in spite of SDK not being at 1.0.  In retrospect that was a
mistake on my part but I believed at the time that 1.0 was close and
we had already ignored Network for far too long.  We already have one
compatibility layer in the SDK due to the proxy refactor work that was
supposed to be the last thing before 1.0.


I don't think we need to break the things you're using, fwiw. In fact, 
we can probably take it as a first-order requirement to not break OSC - 
unless we find something SUPER intractable - and even then we should 
talk about it first.



[...]


* Merge the two repos and retire one of them. Specifics on the mechanics of
this below, but this will either result in moving the resource and service
layer in openstacksdk into shade and adding appropriate attributes to the
shade.OpenStackCloud object, or moving the shade.OpenStackCloud into
something like openstack.cloud and making a shade backwards-compat shim. I
lean towards the first, as we've been telling devs "use shade to talk to
OpenStack" at hackathons and bootcamps and I'd rather avoid the messaging
shift. However, pointing to an SDK called "The Python OpenStack SDK" and
telling people to use it certainly has its benefits from a messaging
perspective.


I don't have a big concern about which repo is maintained.  For OSC I
want what amounts to a low-level REST API, one example of which can be
found in openstackclient.api.*.  Currently Shade is not quite the
right thing to back a CLI but now has the layer I want, and SDK does
not have that layer at all (it was proposed very early on and not
merged).


FWIW - now that we landed version discovery and microversion support in 
keystoneauth - I'd say keystoneauth Adapter is actually now the 
low-level REST API:


  # Assumes an existing keystoneauth1 session (e.g. built via
  # keystoneauth1.loading or keystoneauth1.session).
  import keystoneauth1.adapter

  client = keystoneauth1.adapter.Adapter(
      session=session,
      service_type='compute',
      region_name='DFW',
      min_version='2',
      max_version='2.latest')
  endpoint_data = client.get_endpoint_data()
  print(
      "Microversion range is: ",
      endpoint_data.min_microversion,
      endpoint_data.max_microversion)
  # want 2.31 of servers
  server_response = client.get('/servers', microversion='2.31')
  server = server_response.json()['servers'][0]
  # want 2.14 of keypairs
  keypair_response = client.get('/keypairs', microversion='2.14')
  # Don't care on volume attachments - don't specify one.
  volume_attachments_response = client.get(
      '/servers/{id}/os-volume_attachments'.format(id=server['id']))


Is it better to have a single monolithic thing that has three
conceptual layers internally that can be individually consumed or to
have three things that get layered as needed?


It's a good question

Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Joshua Harlow

Morgan Fainberg wrote:

On Fri, Aug 4, 2017 at 3:09 PM, Kevin L. Mitchell  wrote:

On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:

Maybe not, but please do recall that there are many deployers out
there
that track master, not fixed releases, so we need to take that
level of
compatibility into account.


Any idea of who are these deployers? I think I knew once who they might 
have been but I'm not really sure anymore. Are they still doing this 
(and can afford doing it)? Why don't we hear more about them? I'd expect 
that deployers (and their associated developer army) that are trying to
do this would be the *most* active in IRC and in the mailing list yet I 
don't really see any such activity (which either means we never break 
them, which seems highly unlikely, or that they don't communicate 
through the normal channels, ie they go through some vendor, or that 
they just flat out don't exist anymore).


I'd personally really like to know how they do it (especially if they do 
not have an associated developer army)... Because they have always been 
a pink elephant that I've heard exists 'somewhere' and they manage to 
make this all work 'somehow'.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Morgan Fainberg
On Fri, Aug 4, 2017 at 3:09 PM, Kevin L. Mitchell  wrote:
> On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:
>> > Maybe not, but please do recall that there are many deployers out
>> > there
>> > that track master, not fixed releases, so we need to take that
>> > level of
>> > compatibility into account.
>> >
>>
>> I am going to go out on a limb and say that we should not assume that
>> if code merges ever it is a contract because someone might be
>> following master. The contract should be for releases. We should do
>> everything we can to avoid breaking people, but in the case of an API
>> contract (behavior) that was never part of a final release, it should
>> be understood this may change if needed until it is released.
>>
>> This is just my $0.002 as this leads rapidly to "why bother having
>> real releases" if everything is a contract, let someone take a
>> snapshot where they're happy with the code to run. You're devaluing
>> the actual releases.
>
> In my view, following master imposes risks that deployers should
> understand and be prepared to mitigate; but I believe that it is our
> responsibility to acknowledge that they're doing it, and make a
> reasonable effort to not break them.  There are, of course, times when
> no reasonable effort will avoid breaking them, and in those cases, I
> feel that the reasonable course of action is to try to notify them of
> the upcoming breakage.  That's why then I went on to suggest that
> fixing this problem in keystone shouldn't require a version bump in
> this case: it _is_ a breakage that's being fixed.

I appreciate that you view this specific case as being in that
category; I was commenting more on the general case. I would go so far
as to outline exactly what we won't break for markers in-between
releases rather than what you've implied. I can come up with exactly
one case that should never be broken between releases (fixes that
change no behavior but address edge cases are fine): DB schemas.

I am going to continue to say we cannot and should not commit to
treating anything that lands as a contract; it devalues the release
and the ability of the developers to make shifts while working on a
release.

You and I may not agree here, but tracking master has risks and we
should allow for projects to make API changes for un-released APIs as
they see fit without version bumps.

Thanks for the feedback on this specific fix; I think we came to
much the same conclusion in IRC but wanted some outside eyes on it.

Cheers,
--Morgan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Kevin L. Mitchell
On Fri, 2017-08-04 at 14:52 -0700, Morgan Fainberg wrote:
> > Maybe not, but please do recall that there are many deployers out
> > there
> > that track master, not fixed releases, so we need to take that
> > level of
> > compatibility into account.
> > 
> 
> I am going to go out on a limb and say that we should not assume that
> if code merges ever it is a contract because someone might be
> following master. The contract should be for releases. We should do
> everything we can to avoid breaking people, but in the case of an API
> contract (behavior) that was never part of a final release, it should
> be understood this may change if needed until it is released.
> 
> This is just my $0.002 as this leads rapidly to "why bother having
> real releases" if everything is a contract, let someone take a
> snapshot where they're happy with the code to run. You're devaluing
> the actual releases.

In my view, following master imposes risks that deployers should
understand and be prepared to mitigate; but I believe that it is our
responsibility to acknowledge that they're doing it, and make a
reasonable effort to not break them.  There are, of course, times when
no reasonable effort will avoid breaking them, and in those cases, I
feel that the reasonable course of action is to try to notify them of
the upcoming breakage.  That's why then I went on to suggest that
fixing this problem in keystone shouldn't require a version bump in
this case: it _is_ a breakage that's being fixed.
-- 
Kevin L. Mitchell 

signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Morgan Fainberg
On Fri, Aug 4, 2017 at 2:43 PM, Kevin L. Mitchell  wrote:
> On Fri, 2017-08-04 at 16:45 -0400, Kristi Nikolla wrote:
>> Is this the case even if we haven’t made any final release with the change
>> that introduced this issue? [0]
>>
>> It was only included in the Pike milestones and betas so far, and was not
>> part of the Ocata release.
>
> Maybe not, but please do recall that there are many deployers out there
> that track master, not fixed releases, so we need to take that level of
> compatibility into account.
>

I am going to go out on a limb and say that we should not assume that
if code merges ever it is a contract because someone might be
following master. The contract should be for releases. We should do
everything we can to avoid breaking people, but in the case of an API
contract (behavior) that was never part of a final release, it should
be understood this may change if needed until it is released.

This is just my $0.002 as this leads rapidly to "why bother having
real releases" if everything is a contract, let someone take a
snapshot where they're happy with the code to run. You're devaluing
the actual releases.

>> Therefore the call which now returns a 403 in master, returned a 2xx in
>> Ocata. So we would be fixing something which is broken on master rather
>> than changing a ‘contract’.
>>
>> 0. 
>> https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8
>
> I would be inclined to accept this specific change as a bug fix not
> requiring a version bump, because it is a corner case that I believe a
> deployer would view as a bug, if they encountered it, and because it
> was introduced prior to a named final release.
> --
> Kevin L. Mitchell 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Clark Boylan
On Fri, Aug 4, 2017, at 02:37 PM, Kevin L. Mitchell wrote:
> On Fri, 2017-08-04 at 12:26 -0700, Boris Pavlovic wrote:
> > By the way, stevedore really provides a very bad plugin experience
> > and definitely should not be used.
> 
> Perhaps entrypointer[1]? ;)
> 
> [1] https://pypi.python.org/pypi/entrypointer
> -- 
> Kevin L. Mitchell 

The problems seem to be more with the use of entrypoints and the
incredible runtime cost for using them (which you've hinted at in
entrypointer's README). I don't think switching from $plugin lib to
$otherplugin lib changes much for tools like openstackclient unless we
first fix entrypoints (or avoid entrypoints altogether). Until then
you must still rely on entrypoints to scan your python path, which is
slow.
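
To illustrate where that time goes, entrypoint-based plugin discovery boils
down to roughly this (a sketch on top of pkg_resources, which stevedore is
built on; the group name is just an example):

  # Rough sketch of entrypoint-based plugin discovery -- not any project's
  # actual code. pkg_resources has to walk the metadata of every installed
  # distribution on the python path before a single plugin can be loaded,
  # which is where the startup cost comes from.
  import pkg_resources

  for ep in pkg_resources.iter_entry_points('openstack.cli.extension'):
      plugin = ep.load()  # imports the plugin's module
      print(ep.name, plugin)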

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
Yeah, but you still run into stuff like DB connection and driver information
being mixed up with the secret used for contacting that service. Those should
be separate fields, I think, so they can be split/merged with that mechanism.
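
To make that concrete, a minimal sketch of what separated options might look
like (the option names are invented for illustration, not taken from any real
project):

  # Illustrative sketch only -- option names are made up for this example.
  # The point is that reachability info and the secret are separate options,
  # so the secret can live in its own, differently-permissioned config file.
  from oslo_config import cfg

  db_opts = [
      cfg.StrOpt('host', default='localhost',
                 help='Database server hostname.'),
      cfg.StrOpt('driver', default='mysql+pymysql',
                 help='Database driver/dialect to use.'),
      cfg.StrOpt('password', secret=True,
                 help='Database password; kept out of the main config file.'),
  ]

  CONF = cfg.CONF
  CONF.register_opts(db_opts, group='database')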

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Friday, August 04, 2017 1:49 PM
To: openstack-dev
Subject: Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and  protect 
plaintext secrets

Excerpts from Fox, Kevin M's message of 2017-08-04 20:21:19 +:
> I would really like to see secrets separated from config. Always have... They 
> are two separate things.
>
> If nothing else, a separate config file so it can be permissioned differently.
>
> This could be combined with k8s secrets/configmaps better too.
> Or make it much easier to version config in git and have secrets somewhere 
> else.

Sure. It's already possible today to use multiple configuration
files with oslo.config, using either the --config-dir option or by
passing multiple --config-file options.

Doug

>
> Thanks,
> Kevin
>
> 
> From: Raildo Mascena de Sousa Filho [rmasc...@redhat.com]
> Sent: Friday, August 04, 2017 12:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect 
> plaintext secrets
>
> Hi all,
>
> We had a couple of discussions with the Oslo team related to implementing
> pluggable drivers for oslo.config[0] and using that feature to implement
> support for protecting plaintext secrets in configuration files[1].
>
> On the other hand, due to the containerized support for OpenStack services,
> we have a community effort to implement k8s ConfigMap support[2][3], which
> might make us step back and consider how secret management will work, since
> the config data will need to go into the configmap *before* the container is
> launched.
>
> So, I would like to see what the community thinks. Should we continue working
> on the pluggable drivers and plaintext secret protection support for
> oslo.config? Does it make sense to have a PTG session[4] on Oslo to discuss
> that feature?
>
> Thanks for the feedback in advance.
>
> Cheers,
>
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2] 
> https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] 
> https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Kevin L. Mitchell
On Fri, 2017-08-04 at 16:45 -0400, Kristi Nikolla wrote:
> Is this the case even if we haven’t made any final release with the change
> that introduced this issue? [0]
> 
> It was only included in the Pike milestones and betas so far, and was not
> part of the Ocata release.

Maybe not, but please do recall that there are many deployers out there
that track master, not fixed releases, so we need to take that level of
compatibility into account.

> Therefore the call which now returns a 403 in master, returned a 2xx in
> Ocata. So we would be fixing something which is broken on master rather
> than changing a ‘contract’. 
> 
> 0. 
> https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8

I would be inclined to accept this specific change as a bug fix not
requiring a version bump, because it is a corner case that I believe a
deployer would view as a bug, if they encountered it, and because it
was introduced prior to a named final release.
-- 
Kevin L. Mitchell 

signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Kevin L. Mitchell
On Fri, 2017-08-04 at 12:26 -0700, Boris Pavlovic wrote:
> By the way, stevedore really provides a very bad plugin experience
> and definitely should not be used.

Perhaps entrypointer[1]? ;)

[1] https://pypi.python.org/pypi/entrypointer
-- 
Kevin L. Mitchell 

signature.asc
Description: This is a digitally signed message part
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Dean Troyer
On Fri, Aug 4, 2017 at 11:52 AM, Monty Taylor  wrote:
[...]
> * Both shade and python-openstackclient have investigated using openstacksdk
> as their REST layer but were unable to because it puts some abstractions in
> layers that make it impossible to do some of the advanced things we need.

OSC _does_ use the SDK for all Network API commands.  That was a
mistake in the sense that we did it before SDK 1.0 was released and
still do not have 1.0.

> * openstacksdk has internal implementations of things that exist at
> different points in the stack. We just added full support for version
> service and version discovery to keystoneauth, but openstacksdk has its own
> layer for that so it both can't use the ksa implementation and is not
> compliant with the API-WG consume guidelines.

This was intended to make it free from dependencies; when it was first
conceived, none of these other libs existed.

I am coming around to believe that we need to slice a couple more
things up so they can be composed as needed rather than bite off the
entire thing in one piece.

[...]

> I'd propose we have the shade team adopt the python-openstacksdk codebase.

++

> This is obviously an aggressive suggestion and essentially represents a
> takeover of a project. We don't have the luxury of humans to work on things
> that we once had, so I think as a community we should be realistic about the
> benefits of consolidation and the downside to continuing to have 2 different
> python SDKs.

++

I thought it would be natural for OSC to adopt the SDK someday if
Brian did not get around to making it official, but the current
circumstances make it clear that we (OSC) do not have the resources to
do this.  This proposal is much better and leads to a natural
coalescence of the parallel goals and projects.

> Doing that implies the following:
>
> * Rework the underlying guts of openstacksdk to make it possible to replace
> shade's REST layer with openstacksdk. openstacksdk still doesn't have a 1.0
> release, so we can break the few things we'll need to break.

Sigh.  OSC has been using the Network components of the SDK for a long
time in spite of SDK not being at 1.0.  In retrospect that was a
mistake on my part but I believed at the time that 1.0 was close and
we had already ignored Network for far too long.  We already have one
compatibility layer in the SDK due to the proxy refactor work that was
supposed to be the last thing before 1.0.

[...]

> * Merge the two repos and retire one of them. Specifics on the mechanics of
> this below, but this will either result in moving the resource and service
> layer in openstacksdk into shade and adding appropriate attributes to the
> shade.OpenStackCloud object, or moving the shade.OpenStackCloud into
> something like openstack.cloud and making a shade backwards-compat shim. I
> lean towards the first, as we've been telling devs "use shade to talk to
> OpenStack" at hackathons and bootcamps and I'd rather avoid the messaging
> shift. However, pointing to an SDK called "The Python OpenStack SDK" and
> telling people to use it certainly has its benefits from a messaging
> perspective.

I don't have a big concern about which repo is maintained.  For OSC I
want what amounts to a low-level REST API, one example of which can be
found in openstackclient.api.*.  Currently Shade is not quite the
right thing to back a CLI but now has the layer I want, and SDK does
not have that layer at all (it was proposed very early on and not
merged).

Is it better to have a single monolithic thing that has three
conceptual layers internally that can be individually consumed or to
have three things that get layered as needed?

> * drop keystoneauth.Session subclass. It's over-riding things at the wrong
> layer. keystoneauth Adapter is the thing it wants to be.

FWIW, OSC has this problem too...

> * suitability for python-openstackclient. Dean and Steve have been laying in
> the groundwork for doing direct-REST in python-openstackclient because
> python-*client are a mess from an end-user perspective and openstacksdk
> isn't suitable. If we can sync on requirements hopefully we can produce
> something that python-openstackclient can honestly use for that layer
> instead of needing local code.

As I mentioned, we already use the Networking portions of SDK, even
without a 1.0, and it has bit us already a couple of times.  It has
long been my plan to convert to using the SDK, but that was when I
believed there would also be a lower-level API exposed that did not
require all of the application-level goodness and abstractions.

I personally feel like splitting out the low-level REST API layers
into a stand-alone piece that shade, SDK and OSC can all benefit from
would be our best course, but then I have been wrong about this
layering thing in the past, so I throw it out there to have something
that can be used to push against to get what everyone else seems to
want.

dt

-- 

Dean Troyer
dtro...@gmail.com

_

Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-04 Thread Rajul Kumar
> As far as I understand, the idea of continuous tracing is to collect as few
> metrics as possible to get insights into the request (not all tracepoints).
> If you keep only API, RPC and Driver calls it is going to
> drastically reduce the amount of metrics collected.

Exactly, this is also one of the goals to have continuous tracing in place
i.e. to have minimal tracepoints active and increase the granularity to
learn more information or pin down the problem.

> I'll try to elaborate my points. From a monitoring perspective it's going to
> be super beneficial to have continuous tracing and I fully support the
> effort. However, it won't help the community too much to fix the real
> problems in the architecture (in my opinion it's too late); for example,
> creating a VM performs ~400 DB requests... and yep, this is going to be
> slow, and now what? how can you fix that?..

Thanks. Monitoring is the major goal here. It's more focused on the user
and operational experience than on the developer community.
Integrating this with services like Rally may help the operations people
identify the hot spots and bottlenecks in the system. Automating this may
help with finding anomalies in the system, as one of many other uses.
Developers may get to know the impact their changes will make on the current
system across the various services that interact. However, I totally agree
that it won't make any changes to the current architecture. It will just
help to set the baseline for further changes.

Thanks
Rajul


On Fri, Aug 4, 2017 at 4:24 PM, Boris Pavlovic  wrote:

> Ilya,
>
> Continuous tracing is a cool story, but before proceeding it would be good
>> to estimate the overhead. There will be an additional delay introduced by
>> OSProfiler library itself and delay caused by events transfer to consumer.
>> OSProfiler overhead is critical to minimize. E.g. VM creation produces >1k
>> events, which gives almost 2 times performance penalty in DevStack. Would
>> be definitely nice to have the same test run on real environment --
>> something that Performance Team could help with.
>
>
> As far as I understand, the idea of continuous tracing is to collect as few
> metrics as possible to get insights into the request (not all tracepoints).
> If you keep only API, RPC and Driver calls it is going to
> drastically reduce the amount of metrics collected.
>
> As well, one of the things that should be done is sending the metrics in
> bulk after the request in an async way; that way we won't slow down UX and
> won't add too much load on the underlying infrastructure.
>
>
> Rajul,
>
> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>>
>>> This is why I was excited to get the first response from him and curious
>>> on his stand. Really looking forward to get more on this from him. Also,
>>> Josh's response on the other Tracing thread piqued my curiosity further.
>>
>>
> I'll try to elaborate my points. From a monitoring perspective it's going to
> be super beneficial to have continuous tracing and I fully support the
> effort. However, it won't help the community too much to fix the real
> problems in the architecture (in my opinion it's too late); for example,
> creating a VM performs ~400 DB requests... and yep, this is going to be
> slow, and now what? how can you fix that?..
>
> Best regards,
> Boris Pavlovic
>
>
>
> On Fri, Aug 4, 2017 at 1:12 PM, Rajul Kumar 
> wrote:
>
>> Hi Vinh
>>
>> For the `agent idea`, I think it is very good.
>>
>> However, in OpenStack, that idea may be really hard for us.
>>
>> The reason is the same as what Boris thinks.
>>
>>
>> Thanks. We did a poc and working to integrate it with OSProfiler without
>> affecting any of the services.
>> I understand this will be difficult.
>>
>> For tail-based and adaptive sampling, it is another story.
>>
>> Exactly. This needs some major changes. We will need this if we look to
>> have an effective tracing and any kind of automated analysis of the system.
>>
>> However, in naïve way, we can use sampling abilities from other
>> OpenTracing compatible tracers
>>
>> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep
>> … by making OSprofiler
>>
>> compatible with OpenTracing API.
>>
>> I agree. Initially, this can be done.
>> However, the limitations of traces they generate is another story and
>> working to come up with another blueprint on that.
>>
>> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>>
>> This is why I was excited to get the first response from him and curious
>> on his stand. Really looking forward to get more on this from him. Also,
>> Josh's response on the other Tracing thread piqued my curiosity further.
>>
>> Thanks
>> Rajul
>>
>>
>>
>>
>>
>> On Thu, Aug 3, 2017 at 10:04 PM, vin...@vn.fujitsu.com <
>> vin...@vn.fujitsu.com> wrote:
>>
>>> Hi Rajul,
>>>
>>>
>>>
>>> For the `agent idea`, I think it is very good.
>>>
>>> However, in OpenStack, that idea may be really hard for us.
>>>
>>> The reason is the same as what Boris thinks.
>>>
>>>
>>>
>>> For the sampling part, head-based sampling 

Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Lance Bragstad


On 08/04/2017 03:45 PM, Kristi Nikolla wrote:
> Is this the case even if we haven’t made any final release with the change
> that introduced this issue? [0]
>
> It was only included in the Pike milestones and betas so far, and was not
> part of the Ocata release.
>
> Therefore the call which now returns a 403 in master, returned a 2xx in
> Ocata. So we would be fixing something which is broken on master rather
> than changing a ‘contract’. 

Good call - with that in mind I would be inclined to say we should fix
the issue in Pike; that way we keep the 204 -> 204 behavior the same
across releases. But I'll defer to someone from the API WG just to make
sure.

>
> 0. 
> https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8
>
>> On Aug 4, 2017, at 3:52 PM, Matthew Treinish  wrote:
>>
>> On Fri, Aug 04, 2017 at 03:35:38PM -0400, William M Edmonds wrote:
>>> Lance Bragstad  wrote on 08/04/2017 02:37:40 PM:
 Properly fixing this would result in a 403 -> 204 status code, which
 requires an API version bump according to the interoperability
 guidelines [5] (note that keystone has not implemented microversions at
 this point). At the same time - not fixing the issues results in a 403
 anytime a project is deleted while in this configuration.

>>> The guidelines you linked actually say that this is allowed without a
>>> version bump:
>>>
>>> "There are two types of change which do not require a version change:... or
>>> responding with success (when the request was properly formed, but the
>>> server had broken handling)."
>> That's only for 500-599 response codes. The 'broken handling' there literally
>> means broken as in the server couldn't handle the request. That bullet point 
>> is
>> saying if you had a 500-599 response fixing the code so it's either a 4XX or 
>> a
>> 2XX does not need a version. This specific case needs a version boundary 
>> because
>> you are going from a 403 -> 204.
>>
>> -Matt Treinish
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2017-08-04 20:21:19 +:
> I would really like to see secrets separated from config. Always have... They 
> are two separate things.
> 
> If nothing else, a separate config file so it can be permissioned differently.
> 
> This could be combined with k8s secrets/configmaps better too.
> Or make it much easier to version config in git and have secrets somewhere 
> else.

Sure. It's already possible today to use multiple configuration
files with oslo.config, using either the --config-dir option or by
passing multiple --config-file options.
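
For example (the service name and file names are made up for illustration):

  # Illustrative only -- 'my-service' and the file names are invented.
  # A service built on oslo.config can be pointed at more than one
  # configuration source, e.g.:
  #
  #   my-service --config-file /etc/my-service/my-service.conf \
  #              --config-file /etc/my-service/secrets.conf
  #
  # or at a directory whose *.conf files are read in sorted order:
  #
  #   my-service --config-dir /etc/my-service/my-service.conf.d
  #
  from oslo_config import cfg

  CONF = cfg.CONF
  CONF(project='my-service')  # parses --config-file/--config-dir from argv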

Doug

> 
> Thanks,
> Kevin
> 
> 
> From: Raildo Mascena de Sousa Filho [rmasc...@redhat.com]
> Sent: Friday, August 04, 2017 12:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect 
> plaintext secrets
> 
> Hi all,
> 
> We had a couple of discussions with the Oslo team related to implementing
> pluggable drivers for oslo.config[0] and using that feature to implement
> support for protecting plaintext secrets in configuration files[1].
>
> On the other hand, due to the containerized support for OpenStack services,
> we have a community effort to implement k8s ConfigMap support[2][3], which
> might make us step back and consider how secret management will work, since
> the config data will need to go into the configmap *before* the container is
> launched.
>
> So, I would like to see what the community thinks. Should we continue working
> on the pluggable drivers and plaintext secret protection support for
> oslo.config? Does it make sense to have a PTG session[4] on Oslo to discuss
> that feature?
> 
> Thanks for the feedback in advance.
> 
> Cheers,
> 
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2] 
> https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] 
> https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Kristi Nikolla
Is this the case even if we haven’t made any final release with the change
that introduced this issue? [0]

It was only included in the Pike milestones and betas so far, and was not
part of the Ocata release.

Therefore the call which now returns a 403 in master, returned a 2xx in
Ocata. So we would be fixing something which is broken on master rather
than changing a ‘contract’. 

0. 
https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8

> On Aug 4, 2017, at 3:52 PM, Matthew Treinish  wrote:
> 
> On Fri, Aug 04, 2017 at 03:35:38PM -0400, William M Edmonds wrote:
>> 
>> Lance Bragstad  wrote on 08/04/2017 02:37:40 PM:
>>> Properly fixing this would result in a 403 -> 204 status code, which
>>> requires an API version bump according to the interoperability
>>> guidelines [5] (note that keystone has not implemented microversions at
>>> this point). At the same time - not fixing the issues results in a 403
>>> anytime a project is deleted while in this configuration.
>>> 
>> 
>> The guidelines you linked actually say that this is allowed without a
>> version bump:
>> 
>> "There are two types of change which do not require a version change:... or
>> responding with success (when the request was properly formed, but the
>> server had broken handling)."
> 
> That's only for 500-599 response codes. The 'broken handling' there literally
> means broken as in the server couldn't handle the request. That bullet point 
> is
> saying if you had a 500-599 response fixing the code so it's either a 4XX or a
> 2XX does not need a version. This specific case needs a version boundary 
> because
> you are going from a 403 -> 204.
> 
> -Matt Treinish
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] string freeze exception for VMAX driver

2017-08-04 Thread Sean McGinnis
On Sat, Aug 05, 2017 at 02:52:57AM +0900, Ian Y. Choi wrote:
> Hello Sean,
> 
> For soft string freezes, from a translator's view, trying the way you
> suggested for server projects would be good, assuming that:
> - The volume of changes (e.g., the number of sentences, ratio of changes)
> needs to be properly limited

Completely agree. I'm more interested in changing the process around this than
the policy. I would hope we would never get beyond single digits in the number
of string changes during soft string freeze. If that happens, we are likely
letting more trivial things through than we should at this point.

> - The string change needs to be well notified to translators (e.g., sharing
> to openstack-i18n mailing list)

I'm not sure how many folks are on the openstack-i18n mailing list. As the
mail manager informed me shortly after sending my last reply, I am not, so my
copy to that list was rejected. It might be better to keep it on -dev.

Although we are really trying to get the attention of the i18n team. Maybe
we could make it a requirement that each project's PTL (or designated i18n
liaison) be subscribed to that mailing list in order to submit the notice of
string changes allowed during soft freeze.

> - It would be so nice if I18n team can keep track of original string changes
> in Zanata - translate.openstack.org but
>   currently it is not easy unfortunately.
> 
> For hard string freezes, in my honest opinion, it is difficult to change the
> current policy, because translation sync support activities for a stable
> branch in openstack-infra [1] and the addition of a stable version in
> translate.openstack.org [2] are involved.
> 

Totally agree. Hard freeze should be a hard freeze. A last-minute critical fix
that absolutely must get through would be the only reason I would want to see
string changes allowed. And that had better be a very rare thing.

> Given those views, I would like to gather more opinions. Could we try a
> better approach from the next development cycle,
> with agreement from the server projects?
> 
> 
> With many thanks,
> 
> /Ian
> 
> [1]
> https://review.openstack.org/#/c/435812/1/jenkins/jobs/projects.yaml@1146
> [2] https://translate.openstack.org/iteration/view/cinder/stable-ocata

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [keystone] [ptl] PTL candidacy for Queens

2017-08-04 Thread Lance Bragstad

Hi all,


I'd like to formally communicate my desire to continue serving as the
keystone PTL for the upcoming Queens release. Despite some turbulence
throughout the Pike development cycle, keystone has managed to make
progress on some long-standing issues. Even though the pace of
development has decreased, I think we can build momentum from the small
victories in Pike and have a productive Queens release. I'd like to
direct our focus to the following areas for continued improvement and
stability of the keystone project throughout the Queens cycle.


  Policy Improvements

We dedicated a significant portion of our time this release getting
policy into code and documented. We also had meaningful discussions with
operators and developers, resulting in a plan that improves
long-standing issues with OpenStack policy enforcement. I'd like to
carry this momentum into Queens and ensure we’re implementing global
role assignments. Additionally, I'd like to work closely with the oslo
team to find ways we can signal deprecations to operators through the
oslo.policy library. This will help us define a better set of default
roles in Rocky, and clean up policy enforcement at each service. Lastly,
I look forward to championing the community goal to move policy and
documentation into code for all applicable projects.


  Application Credentials

At the forum and while reviewing the specification, we realized just how
important this work is to our users. While it's unfortunate we didn't
make as much progress here as we hoped during Pike, the discussions this
cycle highlighted a lot of concerns with design as well as usability. I
think we're all better off and more prepared to address the last few
tough design bits during the PTG. One of my goals during the PTG is to
facilitate those conversations and verbosely communicate our approach. I
think that will help us keep the goal in mind as we drive towards
delivering this in Queens.


  Unified Limits

Addressing unified limits was another long-standing issue that we did a
good job of capturing and documenting throughout the Pike release [0].
Pending available resources, it would be great to push this forward,
starting with unified limits in keystone. This will have a positive
impact on any project currently experiencing issues with quota and will
make quota usability more consistent overall.


  Testing

Throughout Pike, our team spent a significant amount of time paying down
testing technical debt. We cleaned up and extended support for
federated testing, and we've integrated testing with our tempest plugin.
We added experimental support to test rolling upgrades and look forward
to gating on rolling upgrades in Queens. In the upcoming months, we need
to focus our efforts on better LDAP integration testing, which has been
on our TODO list for too long.


  Team Building

Last, but certainly not least, we need to recruit new keystone
contributors - part or full-time. Some of our most experienced and
well-versed developers moved onto other projects outside of OpenStack.
Luckily, the remaining team members have ambitiously stepped up to fill
the gaps. I want to ensure this project is working optimally on all
cylinders. In order for us to do this and achieve our Queens goals, we
need more contributors.


Thanks for reading and I look forward to seeing everyone in Denver,

Lance



[0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/unified-limits.html


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-04 Thread Boris Pavlovic
Ilya,

Continuous tracing is a cool story, but before proceeding it would be good
> to estimate the overhead. There will be an additional delay introduced by
> OSProfiler library itself and delay caused by events transfer to consumer.
> OSProfiler overhead is critical to minimize. E.g. VM creation produces >1k
> events, which gives almost 2 times performance penalty in DevStack. Would
> be definitely nice to have the same test run on real environment --
> something that Performance Team could help with.


As far as I understand, the idea of continuous tracing is to collect as few
metrics as possible to get insights into the request (not all tracepoints).
If you keep only API, RPC and Driver calls it is going to
drastically reduce the amount of metrics collected.

As well, one of the things that should be done is sending the metrics in
bulk after the request in an async way; that way we won't slow down UX and
won't add too much load on the underlying infrastructure.
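
A rough sketch of the shape I mean (illustrative only, not OSProfiler's
actual driver API):

  # Illustrative sketch only -- not OSProfiler's real collector interface.
  # Tracepoints append to an in-memory buffer; a background thread ships
  # the whole batch after the fact, so the request path never blocks on
  # the transport.
  import queue
  import threading

  class BulkTraceSender(object):
      def __init__(self, transport, flush_interval=1.0):
          self._transport = transport      # anything with send_many(events)
          self._buffer = queue.Queue()
          self._interval = flush_interval
          worker = threading.Thread(target=self._run, daemon=True)
          worker.start()

      def record(self, event):
          self._buffer.put(event)          # cheap, never blocks the request

      def _run(self):
          while True:
              batch = []
              try:
                  batch.append(self._buffer.get(timeout=self._interval))
                  while True:
                      batch.append(self._buffer.get_nowait())
              except queue.Empty:
                  pass
              if batch:
                  self._transport.send_many(batch)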


Rajul,

ICYMI, Boris is father of OSprofiler in OpenStack [1]
>
>> This is why I was excited to get the first response from him and curious
>> on his stand. Really looking forward to get more on this from him. Also,
>> Josh's response on the other Tracing thread piqued my curiosity further.
>
>
I'll try to elaborate my points. From a monitoring perspective it's going to
be super beneficial to have continuous tracing and I fully support the
effort. However, it won't help the community too much to fix the real
problems in the architecture (in my opinion it's too late); for example,
creating a VM performs ~400 DB requests... and yep, this is going to be
slow, and now what? how can you fix that?..

Best regards,
Boris Pavlovic



On Fri, Aug 4, 2017 at 1:12 PM, Rajul Kumar 
wrote:

> Hi Vinh
>
> For the `agent idea`, I think it is very good.
>
> However, in OpenStack, that idea may be really hard for us.
>
> The reason is the same as what Boris thinks.
>
>
> Thanks. We did a poc and working to integrate it with OSProfiler without
> affecting any of the services.
> I understand this will be difficult.
>
> For tail-based and adaptive sampling, it is another story.
>
> Exactly. This needs some major changes. We will need this if we look to
> have an effective tracing and any kind of automated analysis of the system.
>
> However, in naïve way, we can use sampling abilities from other
> OpenTracing compatible tracers
>
> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep
> … by making OSprofiler
>
> compatible with OpenTracing API.
>
> I agree. Initially, this can be done.
> However, the limitations of traces they generate is another story and
> working to come up with another blueprint on that.
>
> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>
> This is why I was excited to get the first response from him and curious
> on his stand. Really looking forward to get more on this from him. Also,
> Josh's response on the other Tracing thread piqued my curiosity further.
>
> Thanks
> Rajul
>
>
>
>
>
> On Thu, Aug 3, 2017 at 10:04 PM, vin...@vn.fujitsu.com <
> vin...@vn.fujitsu.com> wrote:
>
>> Hi Rajul,
>>
>>
>>
>> For the `agent idea`, I think it is very good.
>>
>> However, in OpenStack, that idea may be really hard for us.
>>
>> The reason is the same as what Boris thinks.
>>
>>
>>
>> For the sampling part, head-based sampling can be implemented in
>> OSprofiler.
>>
>> For tail-based and adaptive sampling, it is another story.
>>
>> However, in naïve way, we can use sampling abilities from other
>> OpenTracing compatible tracers
>>
>> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep
>> … by making OSprofiler
>>
>> compatible with OpenTracing API.
>>
>>
>>
>> ICYMI, Boris is father of OSprofiler in OpenStack [1]
>>
>>
>>
>> [1] https://specs.openstack.org/openstack/oslo-specs/specs/mitak
>> a/osprofiler-cross-service-project-profiling.html
>>
>>
>>
>> Best regards,
>>
>>
>>
>> Vinh Nguyen Trong
>>
>> PODC – Fujitsu Vietnam Ltd.
>>
>>
>>
>> *From:* Rajul Kumar [mailto:kumar.r...@husky.neu.edu]
>> *Sent:* Friday, 04 August, 2017 03:49
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [oslo][performance] Proposing tail-based
>> sampling in OSProfiler
>>
>>
>>
>> Hi Boris
>>
>>
>>
>> That is a point of concern.
>>
>> Can you please direct to any of those?
>>
>>
>>
>> Anyways, we don't have anything in place for OpenStack yet.
>>
>> Now, either we pick another tracing solution like Zipkin, Jaeger etc.
>> which have their own limitations OR enhance OSProfiler.
>>
>> We pick the later as it's most native and better coupled with OpenStack
>> as of now.
>>
>> I understand that we may be blocked by these issues. However, I feel
>> it'll be better to fight with OSProfiler than anything else till we come up
>> with something better :)
>>
>>
>>
>> Thanks
>>
>> Rajul
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Aug 3, 2017 at 4:01 PM, Boris Pavlovic  wrote:
>>
>> Raj

Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
+1. Please keep me in the loop for when the PTG session is.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Friday, August 04, 2017 12:46 PM
To: openstack-dev
Subject: Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and  protect 
plaintext secrets

Excerpts from Raildo Mascena de Sousa Filho's message of 2017-08-04 19:34:25 
+:
> Hi all,
>
> We had a couple of discussions with the Oslo team related to implementing
> pluggable drivers for oslo.config[0] and using that feature to implement
> support for protecting plaintext secrets in configuration files[1].
>
> On the other hand, due to the containerized support for OpenStack services,
> we have a community effort to implement k8s ConfigMap support[2][3], which
> might make us step back and consider how secret management will work, since
> the config data will need to go into the configmap *before* the container
> is launched.
>
> So, I would like to see what the community thinks. Should we continue
> working on the pluggable drivers and plaintext secret protection support
> for oslo.config? Does it make sense to have a PTG session[4] on Oslo to
> discuss that feature?

A PTG session does make sense.

My main concern is that the driver approach described is a fairly
significant change to the library. I was more confident that it made
sense when it was going to be used for multiple purposes. There may be a
less invasive way to handle secret storage. Or, we might be able to
design a system-level approach for handling those that doesn't require
changing the library at all. So let's not frame the discussion as
"should we add plugins to oslo.config" but "how should we handle secret
values in configuration files".

Doug

>
> Thanks for the feedback in advance.
>
> Cheers,
>
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2]
> https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] https://kubernetes.io/docs/
> 
> tasks/configure-pod-container/configmap/
> 
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
I would really like to see secrets separated from config. Always have... They 
are two separate things.

If nothing else, a separate config file so it can be permissioned differently.

This could be combined with k8s secrets/configmaps better too.
Or make it much easier to version config in git and have secrets somewhere else.
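
For illustration, oslo.config can already layer several config files, with
values in later files overriding earlier ones, so a tightly-permissioned
secrets file can sit next to the main one. A minimal sketch (paths, project
and option names are only examples, not a worked-out design):

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('connection', secret=True)], group='database')

    # /etc/example/example.conf can be world-readable and live in git;
    # /etc/example/secrets.conf holds only the sensitive values and is
    # readable by the service user alone. Later files win.
    # (Assumes both files exist on disk.)
    CONF(args=[],
         project='example',
         default_config_files=['/etc/example/example.conf',
                               '/etc/example/secrets.conf'])
    print(CONF.database.connection)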

Thanks,
Kevin


From: Raildo Mascena de Sousa Filho [rmasc...@redhat.com]
Sent: Friday, August 04, 2017 12:34 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect 
plaintext secrets

Hi all,

We have had a couple of discussions with the Oslo team about implementing 
pluggable drivers for oslo.config [0] and using that feature to add support 
for protecting plaintext secrets in configuration files [1].

On the other hand, due to the containerization support for OpenStack services, 
there is a community effort to implement k8s ConfigMap support [2][3], which 
might make us step back and consider how secret management will work, since 
the config data will need to go into the ConfigMap *before* the container is 
launched.

So, I would like to see what the community thinks. Should we continue working 
on the pluggable drivers and plaintext secret protection support for 
oslo.config? Does it make sense to have a PTG session [4] on Oslo to discuss 
that feature?

Thanks for the feedback in advance.

Cheers,

[0] https://review.openstack.org/#/c/454897/
[1] https://review.openstack.org/#/c/474304/
[2] 
https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
[3] https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
[4] https://etherpad.openstack.org/p/oslo-ptg-queens
--

Raildo mascena

Software Engineer, Identity Managment

Red Hat




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-04 Thread Rajul Kumar
Hi Vinh

For the `agent idea`, I think it is very good.

However, in OpenStack, that idea may be really hard for us.

The reason is the same as what Boris thinks.


Thanks. We did a PoC and are working to integrate it with OSProfiler without
affecting any of the services.
I understand this will be difficult.

For tail-based and adaptive sampling, it is another story.

Exactly. This needs some major changes. We will need this if we want
effective tracing and any kind of automated analysis of the system.

However, in naïve way, we can use sampling abilities from other OpenTracing
compatible tracers

such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep …
by making OSprofiler

compatible with OpenTracing API.

I agree. Initially, this can be done.
However, the limitations of the traces they generate are another story, and
I am working to come up with another blueprint on that.
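
To make the difference concrete, here is a rough, purely illustrative sketch
(not OSProfiler code): head-based sampling decides before anything has run,
while tail-based sampling buffers the whole trace and decides once latency
and errors are known.

    import random

    def head_based_keep(rate=0.01):
        # decision made up front, before the transaction runs
        return random.random() < rate

    def tail_based_keep(spans, latency_slo_ms=500):
        # decision made after the transaction completes, so slow or
        # failed traces are never thrown away
        total_ms = sum(s['duration_ms'] for s in spans)
        has_error = any(s.get('error') for s in spans)
        return has_error or total_ms > latency_slo_ms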

ICYMI, Boris is the father of OSProfiler in OpenStack [1]

This is why I was excited to get the first response from him and curious on
his stand. Really looking forward to get more on this from him. Also,
Josh's response on other Tracing thread peeked my curiosity further.

Thanks
Rajul




On Thu, Aug 3, 2017 at 10:04 PM, vin...@vn.fujitsu.com <
vin...@vn.fujitsu.com> wrote:

> Hi Rajul,
>
>
>
> For the `agent idea`, I think it is very good.
>
> However, in OpenStack, that idea may be really hard for us.
>
> The reason is the same as what Boris thinks.
>
>
>
> For the sampling part, head-based sampling can be implemented in
> OSprofiler.
>
> For tail-based and adaptive sampling, it is another story.
>
> However, in naïve way, we can use sampling abilities from other
> OpenTracing compatible tracers
>
> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep
> … by making OSprofiler
>
> compatible with OpenTracing API.
>
>
>
> ICYMI, Boris is the father of OSProfiler in OpenStack [1]
>
>
>
> [1] https://specs.openstack.org/openstack/oslo-specs/specs/
> mitaka/osprofiler-cross-service-project-profiling.html
>
>
>
> Best regards,
>
>
>
> Vinh Nguyen Trong
>
> PODC – Fujitsu Vietnam Ltd.
>
>
>
> *From:* Rajul Kumar [mailto:kumar.r...@husky.neu.edu]
> *Sent:* Friday, 04 August, 2017 03:49
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [oslo][performance] Proposing tail-based
> sampling in OSProfiler
>
>
>
> Hi Boris
>
>
>
> That is a point of concern.
>
> Can you please point me to any of those?
>
>
>
> Anyways, we don't have anything in place for OpenStack yet.
>
> Now, either we pick another tracing solution like Zipkin, Jaeger etc.
> which have their own limitations OR enhance OSProfiler.
>
> We pick the latter as it's most native and better coupled with OpenStack as
> of now.
>
> I understand that we may be blocked by these issues. However, I feel it'll
> be better to fight with OSProfiler than anything else till we come up with
> something better :)
>
>
>
> Thanks
>
> Rajul
>
>
>
>
>
>
>
> On Thu, Aug 3, 2017 at 4:01 PM, Boris Pavlovic  wrote:
>
> Rajul,
>
>
>
> May I ask why you think so?
>
>
>
> Exposed by OSprofiler issues are going to be really hard to fix in current
> OpenStack architecture.
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Aug 3, 2017 at 12:56 PM, Rajul Kumar 
> wrote:
>
> Hi Boris
>
>
>
> Good to hear from you.
>
> May I ask why you think so?
>
>
>
> We do see some potential with OSProfiler for this and further objectives.
>
>
>
> Thanks
>
> Rajul
>
>
>
> On Thu, Aug 3, 2017 at 3:48 PM, Boris Pavlovic  wrote:
>
> Rajul,
>
>
>
> It makes sense! However, maybe it's a bit too late... ;)
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Aug 3, 2017 at 12:16 PM, Rajul Kumar 
> wrote:
>
> Hello everyone
>
>
>
> I have added a blueprint on having tail-based sampling as a sampling
> option for continuous tracing in OSProfiler. It would be really helpful to
> have some thoughts, ideas, comments on this from the community.
>
>
>
> Continuous tracing provides a good insight on how various transactions
> behave across a distributed system. Currently, OpenStack doesn't have a
> defined solution for continuous tracing. Though it has OSProfiler, which
> does generate selective traces, it may not capture the occurrence. Even if
> we have OSProfiler running continuously [1], we need to sample the traces
> so as to cut down the data generated and still keep the useful info.
>
>
>
> Head based sampling can be applied that decides initially whether a trace
> should be saved or not. However, it may miss out on some useful traces. I
> propose to have tail-based sampling [2] mechanism that makes the decision
> at the end of the transaction and tends to keep all the useful traces. This
> may require a lot of changes depending on what all type of info is required
> and the solution that we pick to implement it [2]. This may not affect the
> current working of any of the services on OpenStack as it will

Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Matthew Treinish
On Fri, Aug 04, 2017 at 03:35:38PM -0400, William M Edmonds wrote:
> 
> Lance Bragstad  wrote on 08/04/2017 02:37:40 PM:
> > Properly fixing this would result in a 403 -> 204 status code, which
> > requires an API version bump according to the interoperability
> > guidelines [5] (note that keystone has not implemented microversions at
> > this point). At the same time - not fixing the issues results in a 403
> > anytime a project is deleted while in this configuration.
> >
> 
> The guidelines you linked actually say that this is allowed without a
> version bump:
> 
> "There are two types of change which do not require a version change:... or
> responding with success (when the request was properly formed, but the
> server had broken handling)."

That's only for 500-599 response codes. The 'broken handling' there literally
means broken as in the server couldn't handle the request. That bullet point is
saying that if you had a 500-599 response, fixing the code so it's either a 4XX
or a 2XX does not need a version. This specific case needs a version boundary
because you're going from a 403 -> 204.

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Doug Hellmann
Excerpts from Raildo Mascena de Sousa Filho's message of 2017-08-04 19:34:25 
+:
> Hi all,
> 
> We have had a couple of discussions with the Oslo team about implementing
> pluggable drivers for oslo.config [0] and using that feature to add
> support for protecting plaintext secrets in configuration files [1].
> 
> On the other hand, due to the containerization support for OpenStack
> services, there is a community effort to implement k8s ConfigMap support
> [2][3], which might make us step back and consider how secret management
> will work, since the config data will need to go into the ConfigMap
> *before* the container is launched.
> 
> So, I would like to see what the community thinks. Should we continue
> working on the pluggable drivers and plaintext secret protection support
> for oslo.config? Does it make sense to have a PTG session [4] on Oslo to
> discuss that feature?

A PTG session does make sense.

My main concern is that the driver approach described is a fairly
significant change to the library. I was more confident that it made
sense when it was going to be used for multiple purposes. There may be a
less invasive way to handle secret storage. Or, we might be able to
design a system-level approach for handling those that doesn't require
changing the library at all. So let's not frame the discussion as
"should we add plugins to oslo.config" but "how should we handle secret
values in configuration files".

Doug

> 
> Thanks for the feedback in advance.
> 
> Cheers,
> 
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2]
> https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> 
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Davanum Srinivas
Raildo,

I am interested in this topic. PTG session sounds great!

Thanks,
Dims

On Fri, Aug 4, 2017 at 3:34 PM, Raildo Mascena de Sousa Filho <
rmasc...@redhat.com> wrote:

> Hi all,
>
> We have had a couple of discussions with the Oslo team about implementing
> pluggable drivers for oslo.config [0] and using that feature to add
> support for protecting plaintext secrets in configuration files [1].
>
> On the other hand, due to the containerization support for OpenStack
> services, there is a community effort to implement k8s ConfigMap support
> [2][3], which might make us step back and consider how secret management
> will work, since the config data will need to go into the ConfigMap
> *before* the container is launched.
>
> So, I would like to see what the community thinks. Should we continue
> working on the pluggable drivers and plaintext secret protection support
> for oslo.config? Does it make sense to have a PTG session [4] on Oslo to
> discuss that feature?
>
> Thanks for the feedback in advance.
>
> Cheers,
>
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2] https://github.com/flaper87/keystone-k8s-ansible/blob/
> 6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-
> apb/tasks/main.yaml#L71-L108
> [3] https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> 
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens
> --
>
> Raildo mascena
>
> Software Engineer, Identity Managment
>
> Red Hat
>
> 
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread William M Edmonds

Lance Bragstad  wrote on 08/04/2017 02:37:40 PM:
> Properly fixing this would result in a 403 -> 204 status code, which
> requires an API version bump according to the interoperability
> guidelines [5] (note that keystone has not implemented microversions at
> this point). At the same time - not fixing the issues results in a 403
> anytime a project is deleted while in this configuration.
>

The guidelines you linked actually say that this is allowed without a
version bump:

"There are two types of change which do not require a version change:... or
responding with success (when the request was properly formed, but the
server had broken handling)."
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Raildo Mascena de Sousa Filho
Hi all,

We have had a couple of discussions with the Oslo team about implementing
pluggable drivers for oslo.config [0] and using that feature to add
support for protecting plaintext secrets in configuration files [1].

On the other hand, due to the containerization support for OpenStack
services, there is a community effort to implement k8s ConfigMap support
[2][3], which might make us step back and consider how secret management
will work, since the config data will need to go into the ConfigMap
*before* the container is launched.

So, I would like to see what the community thinks. Should we continue
working on the pluggable drivers and plaintext secret protection support
for oslo.config? Does it make sense to have a PTG session [4] on Oslo to
discuss that feature?

Thanks for the feedback in advance.

Cheers,

[0] https://review.openstack.org/#/c/454897/
[1] https://review.openstack.org/#/c/474304/
[2]
https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
[3] https://kubernetes.io/docs/tasks/configure-pod-container/configmap/

[4] https://etherpad.openstack.org/p/oslo-ptg-queens
-- 

Raildo mascena

Software Engineer, Identity Managment

Red Hat



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Boris Pavlovic
Monty,


* drop stevedore/plugin support. An OpenStack REST client has no need for
> plugins. All services are welcome. *note below*


Back to a 60s style of development? Just copy-paste things? No plugins? No
architecture? No libs?

That's not going to work for dozens of OpenStack projects. It just won't
scale. Every project should maintain the plugin for their project. And it
should be enough to say "pip install python-client" to
extend the core OpenStack Python client and add support for new commands.

The whole core part should only be about making the plugin interface easy
to extend, providing end users a nice experience on both sides (shell &
Python), and giving client developers a nice, stable interface.

By the way, stevedore provides a really bad plugin experience and should
definitely not be used.

Best regards,
Boris Pavlovic

On Fri, Aug 4, 2017 at 12:05 PM, Joshua Harlow 
wrote:

> Also note that this appears to exist:
>
> https://github.com/openstack/python-openstackclient/blob/mas
> ter/requirements.txt#L10
>
> So even if python-openstacksdk is not a top level project, I would assume
> that it being a requirement would imply that it is? Or perhaps neither the
> python-openstackclient or python-openstacksdk should really be used? I've
> been telling people that python-openstackclient should be good to use (I
> hope that is still correct, though I do have to tell people to *not* use
> python-openstackclient from python itself, and only use it from bash/other
> shell).
>
>
> Michael Johnson wrote:
>
>> Hi OpenStack developers,
>>
>> I was wondering what is the current status of the python-openstacksdk
>> project.  The Octavia team has posted some patches implementing our new
>> Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have
>> also
>> asked some questions in #openstack-sdks with no responses.
>> I see that there are some maintenance patches getting merged and a pypi
>> release was made 6/14/17 (though not through releases project).  I'm not
>> seeing any mailing list traffic and the IRC meetings seem to have ended in
>> 2016.
>>
>> With all the recent contributor changes, I want to make sure the project
>> isn't adrift in the sea of OpenStack before we continue to spend
>> development
>> time implementing the SDK for Octavia. We were also planning to use it as
>> the backing for our dashboard project.
>>
>> Since it's not in the governance projects list I couldn't determine who
>> the
>> PTL to ping would be, so I decided to ping the dev mailing list.
>>
>> My questions:
>> 1. Is this project abandoned?
>> 2. Is there a plan to make it an official project?
>> 3. Should we continue to develop for it?
>>
>> Thanks,
>> Michael (johnsom)
>>
>> [1]
>> https://review.openstack.org/#/q/project:openstack/python-op
>> enstacksdk+statu
>> s:open+topic:%255Eoctavia.*
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cloudkitty] PTL candidacy for Queens

2017-08-04 Thread Christophe Sauthier

Hello everyone,

I would like to announce my candidacy for PTL of Cloudkitty.

During the Pike cycle we have been able continue the openess of our
community but also the addition of some interesting improvment and bug
fixes.

Now with Queen cycle approacing fast, my main focus for cloudkitty will
be to expand the spectrum of interaction : toward more OpenStack 
components

and toward containers.
Finally I would like a special focus with documentation to help to 
adoption

of cloudkitty !

I would also like to take this opportunity to thank all members of the
OpenStack community who helped our team during the lasts cycles.

Thank you,
 Christophe


Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Suivez les actualités OpenStack en français en vous abonnant à la Pause 
OpenStack

http://olib.re/pause-openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Joshua Harlow

Also note that this appears to exist:

https://github.com/openstack/python-openstackclient/blob/master/requirements.txt#L10

So even if python-openstacksdk is not a top level project, I would 
assume that it being a requirement would imply that it is? Or perhaps 
neither the python-openstackclient or python-openstacksdk should really 
be used? I've been telling people that python-openstackclient should be 
good to use (I hope that is still correct, though I do have to tell 
people to *not* use python-openstackclient from python itself, and only 
use it from bash/other shell).
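
For the Python side, the thing we point people at is shade's cloud-level 
API; a minimal sketch (the cloud name is illustrative and just refers to an 
entry in a local clouds.yaml):

    import shade

    # 'mycloud' names a clouds.yaml entry; credentials come from there
    cloud = shade.openstack_cloud(cloud='mycloud')
    for server in cloud.list_servers():
        print(server.name, server.status)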


Michael Johnson wrote:

Hi OpenStack developers,

I was wondering what is the current status of the python-openstacksdk
project.  The Octavia team has posted some patches implementing our new
Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have also
asked some questions in #openstack-sdks with no responses.
I see that there are some maintenance patches getting merged and a pypi
release was made 6/14/17 (though not through releases project).  I'm not
seeing any mailing list traffic and the IRC meetings seem to have ended in
2016.

With all the recent contributor changes, I want to make sure the project
isn't adrift in the sea of OpenStack before we continue to spend development
time implementing the SDK for Octavia. We were also planning to use it as
the backing for our dashboard project.

Since it's not in the governance projects list I couldn't determine who the
PTL to ping would be, so I decided to ping the dev mailing list.

My questions:
1. Is this project abandoned?
2. Is there a plan to make it an official project?
3. Should we continue to develop for it?

Thanks,
Michael (johnsom)

[1]
https://review.openstack.org/#/q/project:openstack/python-openstacksdk+statu
s:open+topic:%255Eoctavia.*


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-04 Thread Lance Bragstad
Keystone had a bug reported [0] recently (that we are targeting to
pike-rc1) that exposes an inconsistency in the API based on
configuration. The happy path is as follows:

- a deployment is configured to store projects (controlled by the
resource backend) and users (controlled by the identity backend) in SQL
- users can have a default project ID and a previous bug [1] fix made it
so users who were associated to a project via their
`default_project_id`, which is an attribute of the user, would be
corrected when that project was deleted
- when a project is deleted (DELETE /v3/projects/{project_id}) a
callback [2] [3] is invoked to unset that project ID from all users who
might have it set as their default project

This works great when both the identity and resource backends are
configured to use SQL. When the identity backend is configured to use
LDAP, the wheels fall off:

- a user attempts to remove a project (DELETE /v3/projects/{project_id})
- the identity callback is invoked and control is passed to the LDAP
identity driver implementation
- the LDAP implementation raises a 403 [4] because read/write LDAP is
not supported in keystone, and unsetting a project ID would classify as
a write operation
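
Illustrative sketch only (these are not keystone's real interfaces),
mimicking the flow described above: deleting a project triggers a callback
that clears default_project_id on matching users, and a read-only identity
backend turns that write into a Forbidden error:

    class ReadOnlyLDAPIdentity(object):
        def users_with_default_project(self, project_id):
            return [{'id': 'user1', 'default_project_id': project_id}]

        def unset_default_project_id(self, user_id):
            raise Exception('Forbidden: LDAP identity backend is read-only')

    def on_project_deleted(identity, project_id):
        for user in identity.users_with_default_project(project_id):
            # this write is what surfaces as the 403 on
            # DELETE /v3/projects/{project_id}
            identity.unset_default_project_id(user['id'])

    try:
        on_project_deleted(ReadOnlyLDAPIdentity(), 'some-project-id')
    except Exception as exc:
        print(exc)  # what the API surfaces as a 403 today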

Properly fixing this would result in a 403 -> 204 status code, which
requires an API version bump according to the interoperability
guidelines [5] (note that keystone has not implemented microversions at
this point). At the same time, not fixing the issue results in a 403
any time a project is deleted while in this configuration.

Looking to get some advice from the API WG to see if this is something
we'll be able to address before rc or not. Thanks for reading!

Lance


[0] https://bugs.launchpad.net/keystone/+bug/1705081
[1]
https://github.com/openstack/keystone/commit/51d5597df729158d15b71e2ba80ab103df5d55f8
[2]
https://github.com/openstack/keystone/blob/4e986235713758f2df5ae12e66ca3e5e93edd551/keystone/identity/core.py#L489-L494
[3]
https://github.com/openstack/keystone/blob/4e986235713758f2df5ae12e66ca3e5e93edd551/keystone/identity/core.py#L523-L533
[4]
https://github.com/openstack/keystone/blob/4e986235713758f2df5ae12e66ca3e5e93edd551/keystone/identity/backends/ldap/core.py#L89-L92
[5]
http://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html#evaluating-api-changes



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Michael Johnson

Awesome Monty. This is a great proposal. I have no preference on which way 
these merge, but I see huge value in straightening this out. Frankly, I think 
some of the tempest plugin work could benefit from having an official and 
well-maintained SDK as well.

So, I am in favor of getting the ball rolling here. I was really hoping to be 
able to use this for our dashboard work in Queens, but given the work here I 
may need to do something in the interim. Maybe if these patches merge for 
python-openstacksdk I can use that for now and migrate as the new SDK becomes 
available.

Michael

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Friday, August 4, 2017 9:53 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [python-openstacksdk] Status of 
python-openstacksdk project

Conclusion
--

As I mentioned at the top, I'd been thinking some of this already and had 
planned on chatting with folks in person at the PTG, but it seems we're at a 
place where that's potentially counter productive.

Depending on what people think I can follow this up with some governance 
resolutions and more detailed specs.

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security] Security Meeting next week

2017-08-04 Thread Luke Hinds
Hi,

It was decided that the Security Project meeting would not be held next
week, and will instead reconvene on the 17th of August.

Regards,

Luke
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] string freeze exception for VMAX driver

2017-08-04 Thread Ian Y. Choi

Hello Sean,

For soft string freezes, from a translator's point of view, trying the way 
you suggested for server projects would be good, assuming that:
- the volume of changes (e.g., the number of sentences, ratio of changes) 
  is properly limited
- the string changes are well communicated to translators (e.g., shared on 
  the openstack-i18n mailing list)
- ideally, the I18n team could keep track of original string changes in 
  Zanata (translate.openstack.org), but currently that is unfortunately 
  not easy.

For hard string freezes, in my honest opinion, it is difficult to change 
the current policy, because it involves translation sync support for a 
stable branch in openstack-infra [1] and the addition of a stable version 
in translate.openstack.org [2].

Given those points, I would like to hear more opinions: shall we try a 
better approach for server projects from the next development cycle, once 
we agree on it?


With many thanks,

/Ian

[1] 
https://review.openstack.org/#/c/435812/1/jenkins/jobs/projects.yaml@1146

[2] https://translate.openstack.org/iteration/view/cinder/stable-ocata

Sean McGinnis wrote on 8/5/2017 1:42 AM:

I think the importance of string freeze for server projects (e.g., cinder,
nova, keystone, neutron) might
be less important than previous cycle(s), but sharing the status to all the
teams including release management team
is a good idea to stay on the same page as much as possible :)


Hey Ian,

Do you think we should make any changes to our policy for this? Just one thing
that comes to mind, for server projects should we just send out a ML post with
something like "Here is a list of patches we deemed important that had string
translations".

Just thinking of how we can keep things moving when needed while still making
sure important things are communicated well.

Or is that even too much? During soft string freeze, should the server project
core teams just try to be more aware of these but just approve patches that
are important fixes and move on?

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Melvin Hillsman
rage-quit

On Fri, Aug 4, 2017 at 11:52 AM, Monty Taylor  wrote:

> On 08/04/2017 03:24 AM, Thierry Carrez wrote:
>
>> Michael Johnson wrote:
>>
>>> I was wondering what is the current status of the python-openstacksdk
>>> project.  The Octavia team has posted some patches implementing our new
>>> Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have
>>> also
>>> asked some questions in #openstack-sdks with no responses.
>>> I see that there are some maintenance patches getting merged and a pypi
>>> release was made 6/14/17 (though not through releases project).  I'm not
>>> seeing any mailing list traffic and the IRC meetings seem to have ended
>>> in
>>> 2016.
>>>
>>> With all the recent contributor changes, I want to make sure the project
>>> isn't adrift in the sea of OpenStack before we continue to spend
>>> development
>>> time implementing the SDK for Octavia. We were also planning to use it as
>>> the backing for our dashboard project.
>>>
>>> Since it's not in the governance projects list I couldn't determine who
>>> the
>>> PTL to ping would be, so I decided to ping the dev mailing list.
>>>
>>> My questions:
>>> 1. Is this project abandoned?
>>> 2. Is there a plan to make it an official project?
>>> 3. Should we continue to develop for it?
>>>
>>
>> Thanks for raising this.
>>
>> Beyond its limited activity, another issue is that it's not an official
>> project while its name make it a "default choice" for a lot of users
>> (hard to blame them for thinking that
>> http://git.openstack.org/cgit/openstack/python-openstacksdk is not the
>> official Python SDK for OpenStack, but I digress). So I agree that the
>> situation should be clarified.
>>
>> I know that Monty has pretty strong feelings about this too, so I'll
>> wait for him to comment.
>>
>
> Oh boy. I'd kind of hoped we'd make it to the PTG before starting this
> conversation ... guess not. :)
>
> Concerns
> 
>
> I share the same concerns Thierry listed above. Specifically:
>
> * It is not an official project, but its name leads people to believe it's
> the "right" thing to use if they want to talk to OpenStack clouds using
> Python.
>
> * The core team is small to begin with, but recently got hit in a major
> way by shifts in company priorities.
>
> I think we can all agree that those are concerns.
>
> Three additional points:
>
> * The OpenStack AppDev group and the various appdev hackathons use shade,
> not openstacksdk. This is what we have people out in the world recommending
> people use when they write code that consumes OpenStack APIs. The Interop
> challenges at the Summits so far have all used Ansible's OpenStack modules,
> which are based on shade, because they were the thing that works.
>
> * Both shade and python-openstackclient have investigated using
> openstacksdk as their REST layer but were unable to because it puts some
> abstractions in layers that make it impossible to do some of the advanced
> things we need.
>
> * openstacksdk has internal implementations of things that exist at
> different points in the stack. We just added full support for version
> service and version discovery to keystoneauth, but openstacksdk has its own
> layer for that so it both can't use the ksa implementation and is not
> compliant with the API-WG consume guidelines.
>
> It's not all bad! There is some **great** work in openstacksdk and it's a
> shame there are some things that make it hard to consume. Brian, Qiming and
> Terry have done a bunch of excellent work - and I'd like to not lose it to
> the dustbin of corporate shifting interest.
>
> **warning** - there is a very large text wall that follows. If you don't
> care a ton on this topic, please stop reading now, otherwise you  might
> rage-quit computers altogether.
>
> Proposal
> 
>
> I'd propose we have the shade team adopt the python-openstacksdk codebase.
>
> This is obviously an aggressive suggestion and essentially represents a
> takeover of a project. We don't have the luxury of humans to work on things
> that we once had, so I think as a community we should be realistic about
> the benefits of consolidation and the downside to continuing to have 2
> different python SDKs.
>
> Doing that implies the following:
>
> * Rework the underlying guts of openstacksdk to make it possible to
> replace shade's REST layer with openstacksdk. openstacksdk still doesn't
> have a 1.0 release, so we can break the few things we'll need to break.
>
> * Update the shade mission to indicate its purpose in life isn't just
> hiding deployer differences but rather is to provide a holistic
> cloud-centric (rather than service-centric) end-user API library.
>
> * Merge the two repos and retire one of them. Specifics on the mechanics
> of this below, but this will either result in moving the resource and
> service layer in openstacksdk into shade and adding appropriate attributes
> to the shade.OpenStackCloud object, or moving the shade.OpenStackCloud into
> something lik

Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Monty Taylor

On 08/04/2017 03:24 AM, Thierry Carrez wrote:

Michael Johnson wrote:

I was wondering what is the current status of the python-openstacksdk
project.  The Octavia team has posted some patches implementing our new
Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have also
asked some questions in #openstack-sdks with no responses.
I see that there are some maintenance patches getting merged and a pypi
release was made 6/14/17 (though not through releases project).  I'm not
seeing any mailing list traffic and the IRC meetings seem to have ended in
2016.

With all the recent contributor changes, I want to make sure the project
isn't adrift in the sea of OpenStack before we continue to spend development
time implementing the SDK for Octavia. We were also planning to use it as
the backing for our dashboard project.

Since it's not in the governance projects list I couldn't determine who the
PTL to ping would be, so I decided to ping the dev mailing list.

My questions:
1. Is this project abandoned?
2. Is there a plan to make it an official project?
3. Should we continue to develop for it?


Thanks for raising this.

Beyond its limited activity, another issue is that it's not an official
project while its name makes it a "default choice" for a lot of users
(hard to blame them for thinking that
http://git.openstack.org/cgit/openstack/python-openstacksdk is not the
official Python SDK for OpenStack, but I digress). So I agree that the
situation should be clarified.

I know that Monty has pretty strong feelings about this too, so I'll
wait for him to comment.


Oh boy. I'd kind of hoped we'd make it to the PTG before starting this 
conversation ... guess not. :)


Concerns


I share the same concerns Thierry listed above. Specifically:

* It is not an official project, but its name leads people to believe 
it's the "right" thing to use if they want to talk to OpenStack clouds 
using Python.


* The core team is small to begin with, but recently got hit in a major 
way by shifts in company priorities.


I think we can all agree that those are concerns.

Three additional points:

* The OpenStack AppDev group and the various appdev hackathons use 
shade, not openstacksdk. This is what we have people out in the world 
recommending people use when they write code that consumes OpenStack 
APIs. The Interop challenges at the Summits so far have all used 
Ansible's OpenStack modules, which are based on shade, because they were 
the thing that works.


* Both shade and python-openstackclient have investigated using 
openstacksdk as their REST layer but were unable to because it puts some 
abstractions in layers that make it impossible to do some of the 
advanced things we need.


* openstacksdk has internal implementations of things that exist at 
different points in the stack. We just added full support for version 
service and version discovery to keystoneauth, but openstacksdk has its 
own layer for that so it both can't use the ksa implementation and is 
not compliant with the API-WG consume guidelines.


It's not all bad! There is some **great** work in openstacksdk and it's 
a shame there are some things that make it hard to consume. Brian, 
Qiming and Terry have done a bunch of excellent work - and I'd like to 
not lose it to the dustbin of corporate shifting interest.


**warning** - there is a very large text wall that follows. If you don't 
care a ton about this topic, please stop reading now, otherwise you might 
rage-quit computers altogether.


Proposal


I'd propose we have the shade team adopt the python-openstacksdk codebase.

This is obviously an aggressive suggestion and essentially represents a 
takeover of a project. We don't have the luxury of humans to work on 
things that we once had, so I think as a community we should be 
realistic about the benefits of consolidation and the downside to 
continuing to have 2 different python SDKs.


Doing that implies the following:

* Rework the underlying guts of openstacksdk to make it possible to 
replace shade's REST layer with openstacksdk. openstacksdk still doesn't 
have a 1.0 release, so we can break the few things we'll need to break.


* Update the shade mission to indicate its purpose in life isn't just 
hiding deployer differences but rather is to provide a holistic 
cloud-centric (rather than service-centric) end-user API library.


* Merge the two repos and retire one of them. Specifics on the mechanics 
of this below, but this will either result in moving the resource and 
service layer in openstacksdk into shade and adding appropriate 
attributes to the shade.OpenStackCloud object, or moving the 
shade.OpenStackCloud into something like openstack.cloud and making a 
shade backwards-compat shim. I lean towards the first, as we've been 
telling devs "use shade to talk to OpenStack" at hackathons and 
bootcamps and I'd rather avoid the messaging shift. However, pointing to 
an SDK called "The Python OpenStack SDK" and te

Re: [openstack-dev] [Cinder] string freeze exception for VMAX driver

2017-08-04 Thread Sean McGinnis
> 
> I think the importance of string freeze for server projects (e.g., cinder,
> nova, keystone, neutron) might
> be less important than previous cycle(s), but sharing the status to all the
> teams including release management team
> is a good idea to stay on the same page as much as possible :)
> 

Hey Ian,

Do you think we should make any changes to our policy for this? Just one thing
that comes to mind, for server projects should we just send out a ML post with
something like "Here is a list of patches we deemed important that had string
translations".

Just thinking of how we can keep things moving when needed while still making
sure important things are communicated well.

Or is that even too much? During soft string freeze, should the server project
core teams just try to be more aware of these but just approve patches that
are important fixes and move on?

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 31)

2017-08-04 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

There are a lot of people on vacation, so this was a small meeting.

We started by discussing the hash promotions and the ways to track 
issues. Whether it's an upstream or RDO promotion issue, just create a 
Launchpad bug against tripleo and tag it with "ci" and "alert". It will 
automatically get escalated and get attention.


Gabriele gave a presentation about his current status with container 
building on RDO Cloud. It looks to be in good shape; however, there are 
still bugs to iron out.


Arx explained that the scenario001 jobs are now running a tempest test 
as well, which is a good way to introduce more testing upstream, while 
Emilien noted that we should probably do more tempest testing on container 
jobs too.


Wes brought up an issue about collecting logs during the image building 
process which needs attention.


That's it for this week, have a nice weekend.

Best regards,
Attila

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] string freeze exception for VMAX driver

2017-08-04 Thread Ian Y. Choi

Hello Helen,

Thanks a lot for sharing a string freeze exception request with the 
openstack-dev mailing list.

Given @amotoki's comment [1] and the discussion at the last IRC meeting 
yesterday [2], the I18n team is fine with this and there will be no problem 
accepting the requests from the I18n team's perspective.

Note that the string freeze has been important for the Docs & I18n teams to 
acknowledge the string freeze status and take appropriate action on what 
needs to change in documentation and translation activities. Since the I18n 
team now focuses more on dashboard translations [1] and documents are being 
migrated [3], I think the string freeze for server projects (e.g., cinder, 
nova, keystone, neutron) might be less important than in previous cycle(s), 
but sharing the status with all the teams, including the release management 
team, is a good idea to stay on the same page as much as possible :)


With many thanks,

/Ian, I18n team Ocata PTL.


[1] 
http://lists.openstack.org/pipermail/openstack-i18n/2017-August/002999.html
[2] 
http://eavesdrop.openstack.org/meetings/openstack_i18n_meeting/2017/openstack_i18n_meeting.2017-08-03-13.02.html
[3] 
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html


Walsh, Helen wrote on 8/3/2017 5:21 PM:


*To whom it may concern,*

I would like to request a string freeze exception for 2 patches that 
are on the merge queue for Pike.


1.VMAX driver - align VMAX QOS settings with front end  (CI Passed)

https://review.openstack.org/#/c/484885/7/cinder/volume/drivers/dell_emc/vmax/rest.py 
line 800 (removal of exception message)


Although its primary aim is to align QoS with the front-end settings, it 
indirectly fixes a lazy loading error we were seeing around QoS which 
occasionally broke CI on previous patches.

2.VMAX driver - seamless upgrade from SMI-S to REST (CI Pending)

https://review.openstack.org/#/c/482138/19/cinder/volume/drivers/dell_emc/vmax/common.py 
line 1400 ,1455 (message changes)


This is vital for the reuse of volumes from Ocata to Pike. In Ocata we 
used SMI-S to interface with the VMAX; in Pike we are using REST. A few 
changes needed to be made to make this transition as seamless as possible.


Thank you,

Helen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][stable] Reverting the deprecation notice: Cinder Driver for NetApp E-Series

2017-08-04 Thread Ravi, Goutham
Developers and Operators,

NetApp had previously provided notification [1], in accordance with community 
policy [2], of an intention to deprecate the iSCSI and FC Cinder drivers for 
NetApp E/EF-Series systems [3].  The community notification had the effect of 
introducing us to, and engendering discussion with, deployers (and those with 
near-term plans to become deployers) indicating the importance of continued 
support for E/EF-Series systems beyond the Pike release.   We hadn't 
previously been familiar with many of these folks and greatly appreciate 
their having shared their perspective.

We’re pleased to announce that we’ve reconsidered our approach based on this 
new information.

Our intention with this mail is to revoke [3] the deprecation notice issued 
in April 2017 and to continue to support E/EF-Series systems.  As such, we do 
not intend to remove these drivers during Queens or in the foreseeable 
future.

Any Cinder E-series current or prospective deployers are encouraged to get in 
touch with NetApp via the community #openstack-netapp IRC channel on freenode 
or via the #OpenStack Slack channel on http://netapp.io for 
any questions.

Thanks,
Goutham Pacha Ravi

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-April/013210.html
[2] 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
[3] https://review.openstack.org/#/c/456990/
[4] https://review.openstack.org/#/c/490918/ (Revert "NetApp: Deprecate 
E-Series drivers")

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][ptl] Candidacy for Docs

2017-08-04 Thread Petr Kovar
Hi all,

I'd like to announce my candidacy for PTL of the docs project for Queens.

I've been active in various open source documentation and
translation projects since 2007 and started contributing to OpenStack
docs during the Mitaka cycle, both upstream and in the RDO Project.

The docs project has recently seen a significant decrease in both the
number of submitted updates and contributors, but thanks to a group of
dedicated people in the community, we've managed to come up with a clear
plan to migrate docs over to individual projects, restructure them and
reduce the scope to keep them maintainable. This is now well underway and
I'd like to help the team and the community drive and continue with this
work.

In docs, we generally like to keep it rather short and simple, so let me
summarize the main three points as follows:

Support and help drive the remaining tasks in restructuring and/or moving
content from the core docs suite.

Help establish the docs team as content editors for individual project docs
when needed or requested.

Stay open and friendly to new contributors, no matter if they are
prospective core members or drive-by docs contributors, with the
understanding that docs are a great way to get involved in the project for
developers and non-developers alike.

Thank you,
pk

-- 
Petr Kovar
Sr. Technical Writer | Customer Content Services
Red Hat Czech, Brno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMWare Type Snapshot for Openstack 4.3

2017-08-04 Thread Matt Riedemann

On 8/4/2017 8:41 AM, Tom Kennedy wrote:
Is there an Openstack document that shows how to extend openstack to do 
something like this?


The create snapshot API is this in upstream Nova:

https://developer.openstack.org/api-ref/compute/#create-image-createimage-action

There is no distinction between a live and cold snapshot in the end user 
REST API. That's dependent on the backend compute driver. For example, 
the libvirt driver may attempt to perform a live snapshot if possible, 
but falls back to a cold snapshot if that's not possible. Other drivers 
could do the same.
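
For illustration, the action is a POST against the server's action 
resource; a rough sketch with python-requests (endpoint, token and names 
are placeholders):

    import requests

    # placeholders -- obtain these from keystone / the service catalog
    compute_endpoint = 'https://compute.example.com/v2.1'
    server_id = 'SERVER_UUID'
    token = 'TOKEN'

    resp = requests.post(
        '%s/servers/%s/action' % (compute_endpoint, server_id),
        headers={'X-Auth-Token': token},
        json={'createImage': {'name': 'my-snapshot',
                              'metadata': {'purpose': 'backup'}}})
    # 202 Accepted on success; the new image can be found via the
    # Location header (or in the body on newer microversions)
    print(resp.status_code, resp.headers.get('Location'))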


As for the difference between the OpenStack concept of a snapshot and 
the VMware concept of a snapshot, I don't know what that is, but I can 
say we wouldn't add a VMware-specific REST API for snapshots to the 
compute API when we already have the createImage API. So some design 
work would be involved if you wanted to upstream this.


For information on contributing features to Nova, you can start here:

https://docs.openstack.org/nova/latest/contributor/blueprints.html

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] PTL Candidacy for Queens

2017-08-04 Thread Spyros Trigazis
Hello!

I would like to nominate myself as PTL for the Magnum project for the
Queens cycle.

I have been consistently contributing to Magnum since February 2016
and have been a core reviewer since August 2016. Since then, I have
contributed to significant features like cluster drivers, added Magnum
tests to Rally (I'm a core reviewer on Rally to help the Rally team with
Magnum-related reviews), wrote Magnum's installation tutorial, and
served as docs liaison for the project. My latest contribution is the
swarm-mode cluster driver. I have been the release liaison for Magnum
for Pike and I have contributed a lot to Magnum's CI jobs (adding
multi-node, DIB and new driver jobs; I haven't managed to add Magnum
to CentOS CI yet :( but we have been granted access). Finally, I have been
working closely with other projects consumed by Magnum, like Heat and
Fedora Atomic.

My plans for Queens are to contribute and guide other contributors to:
* Finalize and stabilize the very much wanted feature for cluster
  upgrades.
* Add functionality to heal clusters from a failed state.
* Add functionality for federated Kubernetes clusters and potentially
  other cluster types.
* Add Kuryr as a network driver.

Thanks for considering me,
Spyros Trigazis

[0] https://review.openstack.org/490893

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMWare Type Snapshot for Openstack 4.3

2017-08-04 Thread Tom Kennedy
Matt

That is correct.  This is for the IBM ICMO product. 
Here is the only reference I see in the IBM documentation regarding this.
https://www.ibm.com/support/knowledgecenter/SST55W_4.3.0/liaca/liaca_cfg_snapshot.html

Is there an OpenStack document that shows how to extend OpenStack to do 
something like this?


 Tom Kennedy
 kenne...@us.ibm.com
 Infrastructure Engineer




From:   Matt Riedemann 
To: openstack-dev@lists.openstack.org
Date:   08/03/2017 10:44 PM
Subject:Re: [openstack-dev] VMWare Type Snapshot for Openstack 4.3



On 8/3/2017 3:16 PM, Tom Kennedy wrote:
> I see that this is implemented in 
> nova(nova/api/openstack/compute/contrib/server_snapshot.py) , but is not 

> available in Horizon.

I think you're looking at some forked code because that doesn't exist in 
upstream Nova:

https://github.com/openstack/nova/tree/master/nova/api/openstack/compute

I seem to remember a team in China at IBM working on VMware snapshots 
years ago, or something like this, for a product, so maybe you stumbled 
upon that.

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

2017-08-04 Thread Ilya Shakhat
Hi,

Continuous tracing is a cool story, but before proceeding it would be good
to estimate the overhead. There will be an additional delay introduced by
the OSProfiler library itself and a delay caused by transferring events to
the consumer. Minimizing OSProfiler overhead is critical. E.g. VM creation
produces >1k events, which gives almost a 2x performance penalty in
DevStack. It would definitely be nice to have the same test run on a real
environment -- something that the Performance Team could help with.

The transfer part of the delay can be reduced by e.g. writing events to a
local file and then processing them with Logstash + Grok. An agent-based
approach is good for more real-time processing -- can we consider using
the Logstash UDP input [1] or the Elastic Beats [2] framework? OSProfiler
has driver support (works in Pike!) and we can give operators the freedom
to choose the pipeline they prefer.

[1] https://www.elastic.co/guide/en/logstash/current/plugins-inputs-udp.html
[2] https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html
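
For context, each traced section emits start/stop notifications through
whichever OSProfiler driver is configured, which is where both the library
overhead and the transfer cost come from. A rough usage sketch (the HMAC
key and section name are illustrative):

    from osprofiler import profiler

    profiler.init(hmac_key='SECRET_KEY')

    @profiler.trace('select-from-db')  # emits a *-start and a *-stop event
    def query_database():
        pass

    query_database()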

Best regards,
Ilya Shakhat


2017-08-04 4:04 GMT+02:00 vin...@vn.fujitsu.com :

> Hi Rajul,
>
>
>
> For the `agent idea`, I think it is very good.
>
> However, in OpenStack, that idea may be really hard for us.
>
> The reason is the same as what Boris thinks.
>
>
>
> For the sampling part, head-based sampling can be implemented in
> OSprofiler.
>
> For tail-based and adaptive sampling, it is another story.
>
> However, in naïve way, we can use sampling abilities from other
> OpenTracing compatible tracers
>
> such as Uber Jaeger, Appdash, Zipkin (has an open pull request), LighStep
> … by making OSprofiler
>
> compatible with OpenTracing API.
>
>
>
> ICYMI, Boris is the father of OSProfiler in OpenStack [1]
>
>
>
> [1] https://specs.openstack.org/openstack/oslo-specs/specs/
> mitaka/osprofiler-cross-service-project-profiling.html
>
>
>
> Best regards,
>
>
>
> Vinh Nguyen Trong
>
> PODC – Fujitsu Vietnam Ltd.
>
>
>
> *From:* Rajul Kumar [mailto:kumar.r...@husky.neu.edu]
> *Sent:* Friday, 04 August, 2017 03:49
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [oslo][performance] Proposing tail-based
> sampling in OSProfiler
>
>
>
> Hi Boris
>
>
>
> That is a point of concern.
>
> Can you please point me to any of those?
>
>
>
> Anyways, we don't have anything in place for OpenStack yet.
>
> Now, either we pick another tracing solution like Zipkin, Jaeger, etc.,
> which have their own limitations, OR enhance OSProfiler.
>
> We pick the latter as it's the most native and best coupled with OpenStack
> as of now.
>
> I understand that we may be blocked by these issues. However, I feel it'll
> be better to fight with OSProfiler than anything else till we come up with
> something better :)
>
>
>
> Thanks
>
> Rajul
>
>
>
>
>
>
>
> On Thu, Aug 3, 2017 at 4:01 PM, Boris Pavlovic  wrote:
>
> Rajul,
>
>
>
> May I ask why you think so?
>
>
>
> The issues exposed by OSProfiler are going to be really hard to fix in the
> current OpenStack architecture.
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Aug 3, 2017 at 12:56 PM, Rajul Kumar 
> wrote:
>
> Hi Boris
>
>
>
> Good to hear from you.
>
> May I ask why you think so?
>
>
>
> We do see some potential with OSProfiler for this and further objectives.
>
>
>
> Thanks
>
> Rajul
>
>
>
> On Thu, Aug 3, 2017 at 3:48 PM, Boris Pavlovic  wrote:
>
> Rajul,
>
>
>
> It makes sense! However, maybe it's a bit too late... ;)
>
>
>
> Best regards,
>
> Boris Pavlovic
>
>
>
> On Thu, Aug 3, 2017 at 12:16 PM, Rajul Kumar 
> wrote:
>
> Hello everyone
>
>
>
> I have added a blueprint on having tail-based sampling as a sampling
> option for continuous tracing in OSProfiler. It would be really helpful to
> have some thoughts, ideas, comments on this from the community.
>
>
>
> Continuous tracing provides good insight into how various transactions
> behave across a distributed system. Currently, OpenStack doesn't have a
> defined solution for continuous tracing. Though it has OSProfiler, which
> generates selective traces, it may not capture the occurrence. Even if
> we have OSProfiler running continuously [1], we need to sample the traces
> so as to cut down the data generated and still keep the useful info.
>
>
>
> Head-based sampling can be applied, which decides up front whether a trace
> should be saved or not. However, it may miss out on some useful traces. I
> propose a tail-based sampling [2] mechanism that makes the decision
> at the end of the transaction and tends to keep all the useful traces. This
> may require a lot of changes depending on what type of info is required
> and the solution that we pick to implement it [2]. This may not affect the
> current working of any of the services on OpenStack as it will be off the
> critical path [3].
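
For illustration, a minimal sketch of such a tail-based decision, assuming
spans are buffered per trace until the root span finishes; the span field
names and thresholds below are assumptions, not part of OSProfiler:

    # Minimal tail-based sampling sketch: buffer spans per trace and decide
    # whether to keep the whole trace only once it has completed.
    # Field names (trace_id, duration_ms, error) and thresholds are assumed.
    import random
    from collections import defaultdict

    KEEP_IF_SLOWER_THAN_MS = 500   # always keep unusually slow traces
    BASELINE_KEEP_RATE = 0.01      # keep ~1% of ordinary traces

    _buffers = defaultdict(list)

    def record_span(span):
        """Buffer a finished span until its trace completes."""
        _buffers[span["trace_id"]].append(span)

    def finish_trace(trace_id, sink):
        """Called when the root span ends: keep or drop the whole trace."""
        spans = _buffers.pop(trace_id, [])
        total_ms = sum(s["duration_ms"] for s in spans)
        has_error = any(s.get("error") for s in spans)
        if (has_error or total_ms > KEEP_IF_SLOWER_THAN_MS
                or random.random() < BASELINE_KEEP_RATE):
            sink(spans)   # ship the complete trace to storage
        # otherwise the trace is dropped and its memory is reclaimed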
>
>
>
> Please share your thoughts on this and on which solution should be
> preferred from a broader OpenStack perspective.
>
> This is a 

[openstack-dev] [tc] Technical Committee Status update, August 4th

2017-08-04 Thread Thierry Carrez
Hi!

This is the weekly update on Technical Committee initiatives. You can
find the full list of all open topics at:

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Pike goals updates: nova, cinder
* New repositories: publiccloud-wg, oswin-tempest-plugin

Not much TC activity this week as we are past Pike Feature Freeze and
everyone is focused on producing Pike release candidates.


== Stuck tags ==

We have a number of non-straightforward tag additions in the queue,
which need a bit more input before they can proceed:

Kolla's stable:follows-policy application [1] was proposed in July 2016
but was delayed for various reasons. It is currently stuck near the goal
line waiting on some last-minute commitment. It would be great to merge it
before the Pike release so that it can be included in the project navigator.

The assert:supports-api-interoperability addition for Nova and Ironic [2]
is missing support from their respective PTLs. It also triggered an
interesting side discussion on the value of tags in general and this tag
in particular. This one will likely be abandoned unless it gets that PTL
support.

Trove's status:maintenance-mode addition [3] is WIP until Queens opens.
One question is whether Trove is now past the low point, in which case
setting the tag would convey the wrong message for a recovering project.
Please join the discussion if you have an opinion.

[1] https://review.openstack.org/346455
[2] https://review.openstack.org/#/c/482759/1
[3] https://review.openstack.org/#/c/488947/


== Open discussions ==

Flavio's resolution about allowing teams to host meetings in their own
IRC channels is still in the early days of discussion, and is likely to
need a few iterations to iron out. Please join the first round of reviews:

https://review.openstack.org/485117

Discussion on John's resolution on how decisions should be globally
inclusive is entering the final stages. Please review it if you haven't yet:

https://review.openstack.org/#/c/460946/


== Voting in progress ==

The description of what upstream support means is still missing a few
votes to pass. Please review it now:

https://review.openstack.org/440601


== TC member actions for the coming week(s) ==

Flavio still needs to incorporate feedback in the "Drop TC meetings"
proposal and produce a new patchset, or abandon it since we pretty much
already implemented the described change.


== Need for a TC meeting next Tuesday ==

Currently-proposed changes are lacking reviews more than they are stuck,
so I don't think we need a synchronous TC meeting next week. Let me know
if you'd like to have one and the topic you would like to see discussed.


Cheers!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] PTL Candidacy for Queens

2017-08-04 Thread Masahito MUROI

Dear everyone,

I'd like to announce my candidacy as Blazar PTL for the Queens release 
cycle.


I served as PTL for the Blazar project during the Pike cycle, and I would 
like to continue to do so in the next cycle.


During the Pike cycle, the Blazar team made tons of progress on resolving 
feedback from operators as well as on implementing Blazar features and bug 
fixes.  The team made the v1 API more manageable for users, and implemented 
the Horizon plugin and instance reservation support.  The big improvement 
was supported by the development cycle between operators and developers.


In the Queens cycle I'd like to focus on the following things:

- Blazar's Features: improving resiliency and user experience

Blazar started to support two types of reservation, host reservation 
and instance reservation, from the Pike release.  Both reservation 
features are motivated by real users. In real deployments, resiliency 
is one of the key points for a reservation service.


Additionally, the team mainly focused on server-side development during 
the last two cycles. That work made the Blazar server and its features 
stable, so it is a good time to improve the user experience of those 
stable features in the next cycle.


- Community: Encouraging the team to be more diverse

The team's activity in the Pike cycle was 1.5 times bigger than in the 
Ocata cycle. Despite the big growth, most of the activity was done by 
the team members who revived the project after the Barcelona Summit.


Different perspectives from people with different backgrounds and demands 
lead to good development in the next and later releases, so I'd like 
to encourage more people to join the team.



best regards,
Masahito



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ffe][requirements][monasca][heat][watcher][congress] FFE for python-monascaclient minimum version in g-r

2017-08-04 Thread witold.be...@est.fujitsu.com
> -Original Message-
> From: Tony Breeds [mailto:t...@bakeyournoodle.com]
> Sent: Freitag, 4. August 2017 05:10
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev]
> [ffe][requirements][monasca][heat][watcher][congress] FFE for python-
> monascaclient minimum version in g-r
> 
> On Thu, Aug 03, 2017 at 11:39:47AM +, witold.be...@est.fujitsu.com
> wrote:
> > Hello everyone,
> >
> > I would like to ask for the FFE for python-monascaclient version in global
> requirements.
> >
> > The current version in Pike (1.7.0) is not fully backward compatible. The
> monasca exception classes were replaced with keystoneauth exceptions,
> which affects heat and watcher projects if they use current upper
> constraints. The fixes for these projects have been submitted [1, 2].
> >
> > Also, monasca projects (monasca-agent, monasca-ui, monasca-api) rely on
> python-monascaclient 1.7.0 and don't work with older versions.
> >
> > The change for bumping the minimum version of python-monascaclient is
> here:
> >
> > https://review.openstack.org/489173
> 
> Okay I said on that review that I was confused and wasn't ready to grant an
> FFE.  In trying to "articulate my confusion"  I worked out why I was confused
> #winning \o/
> 
> So for me it boils down to the affected projects:
> 
> Package  : python-monascaclient [python-monascaclient>=1.1.0] (used
> by 4 projects)
> Also affects : 4 projects
> openstack/congress[]
> openstack/heat[tc:approved-release]
> openstack/monasca-ui  []
> openstack/watcher []
> 
> Congress and heat have said they're either not affected or are willing to
> accept the impacts.  That leaves watcher.
> 
> But each of them is using constraints and the gates are passing, so the
> overall risk/impact seems much lower to me than I estimated yesterday.
> 
> I think I just talked myself round.
> 
> Yours Tony.


Hi Tony,
Thanks, we're really happy to have g-r bumped.
And sorry for the additional effort and confusion. The bump should have 
happened much earlier.


Greetings
Witek
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging] ptl for queens

2017-08-04 Thread Nicolas Bock

+1 !

On Fri, Aug 04, 2017 at 10:19:20AM +0200, Thomas Bechtold wrote:

Hi,

I announce my candidacy for the PTL of the Packaging RPM project.

I have been a contributor to various OpenStack projects since Havana and I'm
one of the initial cores of the packaging RPM project. The project goal is
to produce a production-ready set of OpenStack packages for RPM-based systems
(like SLES, RHEL, openSUSE, Fedora, etc.).

As a PTL, I would focus on:

- python3 support. The currently available packages are python2 only.
 Distros are moving to python3 as default so providing python3 packages
 (starting with the libs and clients) should be done now.

- getting more services packaged. We currently have most of the libs
 and clients available but there are a lot of services still missing.

- Improve tooling. There are still things that can be automated when
 new packages are created or available ones updated.


Thanks,
Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Status of python-openstacksdk project

2017-08-04 Thread Thierry Carrez
Michael Johnson wrote:
> I was wondering what is the current status of the python-openstacksdk
> project.  The Octavia team has posted some patches implementing our new
> Octavia v2 API [1] in the SDK, but we have not had any reviews.  I have also
> asked some questions in #openstack-sdks with no responses.
> I see that there are some maintenance patches getting merged and a pypi
> release was made 6/14/17 (though not through releases project).  I'm not
> seeing any mailing list traffic and the IRC meetings seem to have ended in
> 2016.
> 
> With all the recent contributor changes, I want to make sure the project
> isn't adrift in the sea of OpenStack before we continue to spend development
> time implementing the SDK for Octavia. We were also planning to use it as
> the backing for our dashboard project.
> 
> Since it's not in the governance projects list I couldn't determine who the
> PTL to ping would be, so I decided to ping the dev mailing list.
> 
> My questions:
> 1. Is this project abandoned?
> 2. Is there a plan to make it an official project?
> 3. Should we continue to develop for it?

Thanks for raising this.

Beyond its limited activity, another issue is that it's not an official
project while its name makes it a "default choice" for a lot of users
(hard to blame them for assuming that
http://git.openstack.org/cgit/openstack/python-openstacksdk is the
official Python SDK for OpenStack, but I digress). So I agree that the
situation should be clarified.

I know that Monty has pretty strong feelings about this too, so I'll
wait for him to comment.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rpm-packaging] ptl for queens

2017-08-04 Thread Thomas Bechtold
Hi,

I announce my candidacy for the PTL of the Packaging RPM project.

I have been a contributor to various OpenStack projects since Havana and I'm
one of the initial cores of the packaging RPM project. The project goal is
to produce a production-ready set of OpenStack packages for RPM-based systems
(like SLES, RHEL, openSUSE, Fedora, etc.).

As a PTL, I would focus on:

- python3 support. The currently available packages are python2 only.
  Distros are moving to python3 as default so providing python3 packages
  (starting with the libs and clients) should be done now.

- getting more services packaged. We currently have most of the libs
  and clients available but there are a lot of services still missing.

- Improve tooling. There are still things that can be automated when
  new packages are created or available ones updated.


Thanks,
Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Facilitating automation testing in TripleO UI

2017-08-04 Thread Honza Pokorny
About 10 years ago, we were promised a fully semantic version of HTML:
no more nested divs to structure your documents.  However, all we got
was a handful of generic, and only marginally useful, semantic elements.

On 2017-08-03 18:59, Ana Krivokapic wrote:
> Hi TripleO devs,
> 
> In our effort to make the TripleO UI code friendlier to automation
> testing[1], there is an open review[2] for which we seem to have some
> difficulty reaching the consensus on how best to proceed. There is already
> a discussion happening on the review itself, and I'd like to move it here,
> rather than having it in a Gerrit review.
> 
> The tricky part is around adding HTML element ids to the Nodes page. This
> page is generated by looping through the list of registered nodes and
> displaying complete information about each of them. Because of this, many
> of the elements are repeating on the page (CPU info, BIOS, power state,
> etc, for each node). We need to figure out how to make these elements easy
> for the automation testing code to access, both in terms of locating a
> single group within the page, as well as distinguishing the individual
> elements of a group from each other. There are several approaches that
> we've come up with so far:
> 
> 1) Add unique IDs to all the elements. Generate unique `id` html attributes
> by including the node UUID in the value of the `id` attribute. Do this for
> both the higher level elements (divs that hold all the information about a
> single node), as well as the lower level (the ones that hold info about
> BIOS, CPU, etc). The disadvantage of this approach is cluttering the UI
> codebase with many `id` attributes that would otherwise not be needed.

While this is useful for addressing a particular element, I think it
would still require quite a bit of parsing.  You'd find yourself writing
string-splitting code all over the place.  It would make the code harder
to read without providing much semantic information --- unless of course
every single element had some kind of ID.

> 2) Add CSS classes instead of IDs. Pros for this approach: no need to
> generate the clumsy ids containing the node UUID, since the CSS classes
> don't need to be unique. Cons: we would be adding even more classes to HTML
> elements, many of which are already cluttered with many classes. Also,
> these classes would not exist anywhere in CSS or serve any other purpose.

I like this option the best.  It seems to be the most natural way of
adding semantic information to the bare-bones building blocks of the
web.  Classes are simple strings that add information about the intended
use of the element.  Using jQuery-like selectors, this can make for some
easy-to-understand code.  Do you want to grab the power state of the
currently expanded node in the list?

$('#node-list div.node.expanded').find('.power-state')

By default, Selenium can query the DOM by id, by class name, and by
XPath.  It can be extended to use pyquery, a Python implementation of
jQuery-style CSS selection.  I think many of the automation
implementation headaches can be solved by using pyquery.

https://blogs.gnome.org/danni/2012/11/19/extending-selenium-with-jquery/
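
As a rough illustration, here is what driving Selenium with pyquery
selection could look like; the URL and the class names (node, expanded,
power-state) are assumptions borrowed from the jQuery example above, not
the actual TripleO UI markup:

    # Rough sketch: load the page with Selenium, then use pyquery for
    # jQuery-style CSS selection on the rendered DOM.
    # The URL and class names are assumptions, not the real UI markup.
    from pyquery import PyQuery
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:3000/nodes")   # assumed UI address
        dom = PyQuery(driver.page_source)
        # Power state of the currently expanded node, as in the jQuery
        # example above.
        power_state = dom("#node-list div.node.expanded .power-state").text()
        print(power_state)
    finally:
        driver.quit()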

Furthermore, I think that classes can be used effectively when
describing transient state like the expanded/collapsed state of a
togglable element.  It's easy to implement on the client side, and it
should be helpful on the automation side.

Relying on patternfly presentational class names won't suffice.

> 3) Add custom HTML attributes. These usually start with the 'data-' prefix,
> and would not need to be unique either. Pros: avoids the problems described
> with both approaches above. Cons: AFAIU, the React framework could have
> problems with custom attributes (Jirka can probably explain this better).
> Also, casual readers of the code could be confused about the purpose of
> these attributes, since no other code present in the UI codebase is using
> them in any way.

This seems pretty drastic.  I wonder if there is a way we could extend
the React component class to give us automatic and pain-free ids.

> It seems that a perfectly optimal solution does not exist here and we will
> have to compromise on some level. Let's try and figure out what the best
> course of action here is.
> 
> [1] https://blueprints.launchpad.net/tripleo/+spec/testing-ids
> [2] https://review.openstack.org/#/c/483039/
> 
> -- 
> Regards,
> Ana Krivokapic
> Senior Software Engineer
> OpenStack team
> Red Hat Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections] Candidacy for Senlin PTL (Queens)

2017-08-04 Thread YUAN RUIJIE
Hi all,

I'd like to announce my candidacy as Senlin PTL in Queens cycle.

I began contributing to the Senlin project in May 2016 and joined the
team as a core reviewer in the Ocata cycle. It is my pleasure to work
with this great team to make the project better and better, and we
will keep moving forward and pushing Senlin to the next level.

In past cycles, we have done a lot of great work. As a clustering
service, we can already handle resource types like nova servers and
heat stacks. What's more, we have already taken some small steps
toward supporting NFV and k8s use cases.

In the next cycle, which is Queens, I'd like to focus on the following tasks:

- Improve runtime data processing inside the Senlin server
- Improve the AZ & Region placement policy
- Improve the health policy to provide HA support
- Better support for NFV use cases
- Add support for k8s use cases
- A stronger team to push Senlin to the next level

These are the tasks that I believe I can help with. All ideas are welcome.

Sincerely,
Ruijie Yuan (ruijie)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev