[openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-02-26 Thread Waines, Greg

· I have a commercial OpenStack product for which I would like to claim 
compliance via RefStack.
· Is it sufficient to claim compliance with only the “OpenStack Powered 
Platform” TESTS ?
o i.e. https://refstack.openstack.org/#/guidelines
o i.e. the ~350-ish compute + object-storage tests
· OR
· Should I be using the COMPLETE API Test Set ?
o i.e. the > 1,000 tests from various domains that get run if you do not 
specify a test-list

Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptg] Team dinner

2018-02-26 Thread Christian Berendt
+1

Thanks for the organisation.

> On 26. Feb 2018, at 20:39, Paul Bourke  wrote:
> 
> Hey Kolla,
> 
> Hope you're all enjoying Dublin so far :) Some have expressed interest in 
> getting together for a team meal, how does Thursday sound? Please reply to 
> this with +1/-1 and I can see about booking something.
> 
> Cheers,
> -Paul
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:47 PM, Matt Riedemann  wrote:

> On 2/26/2018 9:28 PM, John Griffith wrote:
>
>> I'm also wondering how much of the extend actions we can leverage here,
>> but I haven't looked through all of that yet.​
>>
>
> The os-server-external-events API in nova is generic. We'd just add a new
> microversion to register a new tag for this event. Like the extend volume
> event, the volume ID would be provided as input to the API and nova would
> use that to identify the instance + volume to refresh on the compute host.
>
> We'd also register a new instance action / event record so that users
> could poll the os-instance-actions API for completion of the operation.

​Yeah, it seems like this would be pretty handy with what's there.  So are
folks good with that?  Wanted to make sure there's nothing contentious
there before I propose a spec on the Nova and Cinder sides.  If you think
it seems at least worth proposing I'll work on it and get something ready
as a welcome home from Dublin gift for everyone :)
​


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread Matt Riedemann

On 2/26/2018 9:28 PM, John Griffith wrote:
I'm also wondering how much of the extend actions we can leverage here, 
but I haven't looked through all of that yet.​


The os-server-external-events API in nova is generic. We'd just add a 
new microversion to register a new tag for this event. Like the extend 
volume event, the volume ID would be provided as input to the API and 
nova would use that to identify the instance + volume to refresh on the 
compute host.


We'd also register a new instance action / event record so that users 
could poll the os-instance-actions API for completion of the operation.
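For concreteness, the event body described here might look like the sketch below. The `volume-refresh-attachment` event name is hypothetical; the actual tag and microversion would be settled in the spec:

```python
# Sketch of an os-server-external-events request body for the proposed
# refresh event. The event name "volume-refresh-attachment" is made up
# here -- the actual tag would be defined by the nova spec.

def build_refresh_event(server_id, volume_id):
    """Build one external-event entry for a server/volume pair."""
    return {
        "name": "volume-refresh-attachment",  # hypothetical new tag
        "server_uuid": server_id,
        # As with extend-volume, the volume ID rides in the "tag" field
        # so nova can identify the instance + volume to refresh.
        "tag": volume_id,
    }

def build_request(pairs):
    """Wrap events in the body POSTed to /os-server-external-events."""
    return {"events": [build_refresh_event(s, v) for s, v in pairs]}

body = build_request([("server-1", "vol-a"), ("server-2", "vol-b")])
```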


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptg] Team dinner

2018-02-26 Thread Dan Ardelean
+1

On 26 Feb 2018, at 21:38, Mark Goddard 
mailto:m...@stackhpc.com>> wrote:

+1

On 26 Feb 2018 7:58 p.m., "Eduardo Gonzalez" 
mailto:dabar...@gmail.com>> wrote:
+1

On Mon, Feb 26, 2018, 7:40 PM Paul Bourke 
mailto:paul.bou...@oracle.com>> wrote:
Hey Kolla,

Hope you're all enjoying Dublin so far :) Some have expressed interest
in getting together for a team meal, how does Thursday sound? Please
reply to this with +1/-1 and I can see about booking something.

Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptg] Team dinner

2018-02-26 Thread Mark Goddard
+1

On 26 Feb 2018 7:58 p.m., "Eduardo Gonzalez"  wrote:

> +1
>
> On Mon, Feb 26, 2018, 7:40 PM Paul Bourke  wrote:
>
>> Hey Kolla,
>>
>> Hope you're all enjoying Dublin so far :) Some have expressed interest
>> in getting together for a team meal, how does Thursday sound? Please
>> reply to this with +1/-1 and I can see about booking something.
>>
>> Cheers,
>> -Paul
>>


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
On Mon, Feb 26, 2018 at 2:13 PM, Matt Riedemann  wrote:

> On 2/26/2018 8:09 PM, John Griffith wrote:
>
>> I'm interested in looking at creating a mechanism to "refresh" all of the
>> existing/current attachments as part of the Cinder Failover process.
>>
>
> What would be involved on the nova side for the refresh? I'm guessing
> disconnect/connect the volume via os-brick (or whatever for non-libvirt
> drivers), resulting in a new host connector from os-brick that nova would
> use to update the existing volume attachment for the volume/server instance
> combo?

​Yep, that's pretty much exactly what I'm thinking about / looking at.  I'm
also wondering how much of the extend actions we can leverage here, but I
haven't looked through all of that yet.​


>
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread Matt Riedemann

On 2/26/2018 8:09 PM, John Griffith wrote:
I'm interested in looking at creating a mechanism to "refresh" all of 
the existing/current attachments as part of the Cinder Failover process.


What would be involved on the nova side for the refresh? I'm guessing 
disconnect/connect the volume via os-brick (or whatever for non-libvirt 
drivers), resulting in a new host connector from os-brick that nova 
would use to update the existing volume attachment for the volume/server 
instance combo?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-26 Thread John Griffith
Hey Everyone,

Something I've been looking at with Cinder's replication (sort of the next
step in the evolution if you will) is the ability to refresh/renew in-use
volumes that were part of a migration event.

We do something similar with extend-volume on the Nova side through the use
of Instance Actions I believe, and I'm wondering how folks would feel about
the same sort of thing being added upon failover/failback for replicated
Cinder volumes?

If you're not familiar, Cinder allows a volume to be replicated to multiple
physical backend devices, and in the case of a DR situation an Operator can
failover a backend device (or even a single volume).  This process results
in Cinder making some calls to the respective backend device, which does its
magic and updates the Cinder Volume Model with new attachment info.

This works great, except for the case of users that have a bunch of in-use
volumes on that particular backend.  We don't currently do anything to
refresh/update them, so it's a manual process of running through a
detach/attach loop.

I'm interested in looking at creating a mechanism to "refresh" all of the
existing/current attachments as part of the Cinder Failover process.

Curious if anybody has any thoughts on this, or if anyone has already done
something related to this topic?
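To make the pain point concrete, the manual workaround today is roughly the loop below. The client object and its method names are placeholders for whatever an operator actually scripts, not a real Cinder/Nova API:

```python
# Rough sketch of the manual detach/attach loop operators run today after
# a Cinder failover. FakeClient and its methods are placeholders that
# stand in for real client calls, purely to illustrate the shape.

def refresh_in_use_volumes(client, backend):
    """Detach and re-attach every in-use volume on a failed-over backend."""
    refreshed = []
    for att in client.list_attachments(backend):
        client.detach(att["server_id"], att["volume_id"])
        client.attach(att["server_id"], att["volume_id"])
        refreshed.append(att["volume_id"])
    return refreshed

class FakeClient:
    """Stand-in client that just records the calls made against it."""
    def __init__(self, attachments):
        self._atts = attachments
        self.calls = []
    def list_attachments(self, backend):
        return [a for a in self._atts if a["backend"] == backend]
    def detach(self, server_id, volume_id):
        self.calls.append(("detach", server_id, volume_id))
    def attach(self, server_id, volume_id):
        self.calls.append(("attach", server_id, volume_id))

fake = FakeClient([{"backend": "b1", "server_id": "s1", "volume_id": "v1"}])
result = refresh_in_use_volumes(fake, "b1")
```

The proposal is essentially to fold this loop into the failover process itself rather than leaving it to operators.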

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptg] Team dinner

2018-02-26 Thread Eduardo Gonzalez
+1

On Mon, Feb 26, 2018, 7:40 PM Paul Bourke  wrote:

> Hey Kolla,
>
> Hope you're all enjoying Dublin so far :) Some have expressed interest
> in getting together for a team meal, how does Thursday sound? Please
> reply to this with +1/-1 and I can see about booking something.
>
> Cheers,
> -Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][ptg] Team dinner

2018-02-26 Thread Paul Bourke

Hey Kolla,

Hope you're all enjoying Dublin so far :) Some have expressed interest 
in getting together for a team meal, how does Thursday sound? Please 
reply to this with +1/-1 and I can see about booking something.


Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Stepping down from Ironic core

2018-02-26 Thread Loo, Ruby
Hey Vasyl,

Thanks for all your contributions to Ironic! I hope that you'll still find a 
bit of time for us :-)

--ruby

From: Vasyl Saienko 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, February 23, 2018 at 9:02 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] Stepping down from Ironic core

Hey Ironic community!

Unfortunately I don't work on Ironic as much as I used to anymore, so I'm
stepping down from the core reviewer team.

So, thanks for everything everyone, it's been great to work with you
all for all these years!!!




Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency

2018-02-26 Thread Michał Jastrzębski
I'm definitely for option 1. An accidental Ceph upgrade during a routine
minor version upgrade is something we don't want. We will need a big
warning about this version mismatch in the release notes.

On 26 February 2018 at 07:01, Eduardo Gonzalez  wrote:
> I prefer option 1; breaking stable policy is not good for users. They would
> be forced through a major Ceph version upgrade during a minor upgrade, which is
> not good and never expected to happen.
>
> Regards
>
>
> 2018-02-26 9:51 GMT+01:00 Shake Chen :
>>
>> I prefer to the option 2.
>>
>> On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang 
>> wrote:
>>>
>>> Recently, the Ubuntu jobs on the pike branch are red[0]. With some debugging,
>>> I found it is caused by
>>> a package dependency.
>>>
>>>
>>> *Background*
>>>
>>> Since we have no time to upgrade ceph from Jewel to Luminous at the end
>>> of pike cycle, we pinned
>>> Ceph to Jewel on pike branch. This works on CentOS, because ceph jewel
>>> and ceph luminous are on
>>> the different repos.
>>>
>>> But the Ubuntu Cloud Archive repo bumps Ceph to Luminous. Even though
>>> Ceph Jewel still exists
>>> on UCA, since qemu 2.10 depends on Ceph Luminous we had to pin
>>> qemu to 2.5 to use Ceph Jewel[1].
>>> And this has worked since then.
>>>
>>>
>>> *Now Issue*
>>>
>>> But recently, UCA changed the libvirt-daemon package dependency, and
>>> added following,
>>>
>>> Package: libvirt-daemon
>>> Version: 3.6.0-1ubuntu6.2~cloud0
>>> ...
>>> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
>>> 1:2.10+dfsg-0ubuntu3.4~)
>>>
>>> It requires qemu 2.10 now. So the dependency is broken and the nova-libvirt
>>> container fails to build.
>>>
>>>
>>> *Possible Solution*
>>>
>>> I think there are two possible ways now, but neither of them is good.
>>>
>>> 1. Install Ceph Luminous in the nova-libvirt container and Ceph Jewel in
>>> ceph-* container
>>> 2. Bump ceph from jewel to luminous. But this breaks the backport policy,
>>> obviously.
>>>
>>> So any idea on this?
>>>
>>> [0] https://review.openstack.org/534149
>>> [1] https://review.openstack.org/#/c/526931/
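For reference, the qemu pin mentioned above is the kind of thing normally expressed as an apt preferences fragment like the one below (the path and version strings are illustrative; the actual pin lives in the kolla image build, see [1]):

```
# /etc/apt/preferences.d/qemu-pin -- illustrative only; the exact
# version strings and file location differ in the kolla images
Package: qemu*
Pin: version 1:2.5*
Pin-Priority: 1001
```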
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Shake Chen
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Lance Bragstad


On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
> Hi.
>
> We have an issue on the way Magnum uses keystone trusts.
>
> Magnum clusters are created in a given project using HEAT, and require
> a trust token to communicate back with OpenStack services -  there is
> also integration with Kubernetes via a cloud provider.
>
> This trust belongs to a given user, not the project, so whenever we
> disable the user's account - for example when a user leaves the
> organization - the cluster becomes unhealthy as the trust is no longer
> valid. Given the token is available in the cluster nodes, accessible
> by users, a trust linked to a service account is also not a viable
> solution.
>
> Is there an existing alternative for this kind of use case? I guess
> what we might need is a trust that is linked to the project.
This was proposed in the original application credential specification
[0] [1]. The problem is that you're sharing an authentication mechanism
with multiple people when you associate it to the life cycle of a
project. When a user is deleted or removed from the project, nothing
would stop them from accessing OpenStack APIs if the application
credential or trust isn't rotated out. Even if the credential or trust
were scoped to the project's life cycle, it would need to be rotated out
and replaced when users come and go for the same reason. So it would
still be associated to the user life cycle, just indirectly. Otherwise
you're allowing unauthorized access to something that should be protected.
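The lifecycle coupling described above can be reduced to a toy model (nothing below is keystone API; it just mirrors the behaviour of a user-scoped trust):

```python
# Toy model of why a user-scoped trust breaks when the trustor leaves.
# These classes are illustrative stand-ins, not keystone objects.

class User:
    def __init__(self, name):
        self.name = name
        self.enabled = True

class Trust:
    def __init__(self, trustor, project_id):
        self.trustor = trustor        # the trust follows the *user*...
        self.project_id = project_id  # ...not the project it was made in
    def is_valid(self):
        # Disabling the trustor invalidates every trust they delegated.
        return self.trustor.enabled

alice = User("alice")
trust = Trust(alice, "magnum-project")
before = trust.is_valid()
alice.enabled = False                 # alice leaves the organization
after = trust.is_valid()
```

A project-scoped credential would dodge this particular failure, at the cost of the shared-authorization problems described above.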

If you're at the PTG - we will be having a session on application
credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
room [3].

[0] https://review.openstack.org/#/c/450415/
[1] https://review.openstack.org/#/c/512505/
[2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
[3] http://ptg.openstack.org/ptg.html
>
> I believe the same issue would be there using application credentials,
> as the ownership is similar.
>
> Cheers,
>   Ricardo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency

2018-02-26 Thread Eduardo Gonzalez
I prefer option 1; breaking stable policy is not good for users. They would
be forced through a major Ceph version upgrade during a minor upgrade, which is
not good and never expected to happen.

Regards


2018-02-26 9:51 GMT+01:00 Shake Chen :

> I prefer to the option 2.
>
> On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang 
> wrote:
>
>> Recently, the Ubuntu jobs on the pike branch are red[0]. With some debugging,
>> I found it is caused by
>> a package dependency.
>>
>>
>> *Background*
>>
>> Since we have no time to upgrade ceph from Jewel to Luminous at the end
>> of pike cycle, we pinned
>> Ceph to Jewel on pike branch. This works on CentOS, because ceph jewel
>> and ceph luminous are on
>> the different repos.
>>
>> But the Ubuntu Cloud Archive repo bumps Ceph to Luminous. Even though
>> Ceph Jewel still exists
>> on UCA, since qemu 2.10 depends on Ceph Luminous we had to pin
>> qemu to 2.5 to use Ceph Jewel[1].
>> And this has worked since then.
>>
>>
>> *Now Issue*
>>
>> But recently, UCA changed the libvirt-daemon package dependency, and
>> added following,
>>
>> Package: libvirt-daemon
>> Version: 3.6.0-1ubuntu6.2~cloud0
>> ...
>> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
>> 1:2.10+dfsg-0ubuntu3.4~)
>>
>> It requires qemu 2.10 now. So the dependency is broken and the nova-libvirt
>> container fails to build.
>>
>>
>> *Possible Solution*
>>
>> I think there are two possible ways now, but neither of them is good.
>>
>> 1. Install Ceph Luminous in the nova-libvirt container and Ceph Jewel in
>> ceph-* container
>> 2. Bump ceph from jewel to luminous. But this breaks the backport policy,
>> obviously.
>>
>> So any idea on this?
>>
>> [0] https://review.openstack.org/534149
>> [1] https://review.openstack.org/#/c/526931/
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>
>
> --
> Shake Chen


Re: [openstack-dev] [ptg] Release cycles, stable branch maintenance, LTS vs. downstream consumption models

2018-02-26 Thread Thierry Carrez
Thomas Goirand wrote:
> On 02/24/2018 03:42 PM, Thierry Carrez wrote:
>> On Tuesday afternoon we'll have a discussion on release cycle duration,
>> stable branch maintenance, and LTS vs. how OpenStack is consumed downstream.
>>
>> I set up an etherpad at:
>> https://etherpad.openstack.org/p/release-cycles-ptg-rocky
>>
>> Please add the topics you'd like to cover.
> 
> I really wish I could be there. Is there any way I could attend
> remotely? Like someone with Skype or something...

You should send me a summary of your position on the topics (or add it
to the etherpad) so that we can make sure to take your position into
account.

As for remote participation, I'll see if I can find a volunteer to patch
you in. Worst case scenario we'll document on the etherpad and you could
ask questions / add extra input there.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sdk] Cleaning up openstacksdk core team

2018-02-26 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2018-02-26 11:41:18 +:
> Hey all,
> 
> A bunch of stuff has changed in SDK recently, and a few of the 
> historical sdk core folks have also not been around. I'd like to propose 
> removing the following people from the core team:
> 
>    Everett Toews
>Jesse Noller
>Richard Theis
>Terry Howe
> 
> They're all fantastic humans but they haven't had any activity in quite 
> some time - and not since all the changes of the sdk/shade merge. As is 
> normal in OpenStack land, they'd all be welcome back if they found 
> themselves in a position to dive in again.
> 
> Any objections?
> 
> Monty
> 

+1 for cleanup. As you say, we can add them back easily if we need to.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Gary Kotton
One of the concerns here is that the openstack client does not enable one to 
configure extensions that are not part of the core reference architecture. So 
any external third party that tries to have an extension added will not be able 
to leverage the openstack client. This is a major pain point.


On 2/26/18, 1:26 PM, "Monty Taylor"  wrote:

On 02/26/2018 10:55 AM, Rabi Mishra wrote:
> On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor  > wrote:
> 
> On 02/26/2018 09:57 AM, Akihiro Motoki wrote:
> 
> Hi neutron and openstacksdk team,
> 
> This mail proposes to change the first priority of neutron-related
> python binding to OpenStack SDK rather than neutronclient python
> bindings.
> I think it is time to start this, as OpenStack SDK became an official
> project in Queens.
> 
> 
> ++
> 
> 
> [Current situations and problems]
> 
> Network OSC commands are categorized into two parts: OSC and
> neutronclient OSC plugin.
> Commands implemented in OSC consumes OpenStack SDK
> and commands implemented as neutronclient OSC plugin consumes
> neutronclient python bindings.
> This brings tricky situation that some features are supported
> only in
> OpenStack SDK and some features are supported only in 
neutronclient
> python bindings.
> 
> [Proposal]
> 
> The proposal is to implement all neutron features in OpenStack
> SDK as
> the first citizen,
> and the neutronclient OSC plugin consumes corresponding
> OpenStack SDK APIs.
> 
> Once this is achieved, users of OpenStack SDK users can see all
> network related features.
> 
> [Migration plan]
> 
> The migration starts from Rocky (if we agree).
> 
> New features should be supported in OpenStack SDK and
> OSC/neutronclient OSC plugin as the first priority. If new feature
> depends on neutronclient python bindings, it can be implemented in
> neutornclient python bindings first and they are ported as part of
> existing feature transition.
> 
> Existing features only supported in neutronclient python
> bindings are
> ported into OpenStack SDK,
> and neutronclient OSC plugin will consume them once they are
> implemented in OpenStack SDK.
> 
> 
> I think this is a great idea. We've got a bunch of good
> functional/integrations tests in the sdk gate as well that we can
> start running on neutron patches so that we don't lose cross-gating.
> 
> [FAQ]
> 
> 1. Will neutronclient python bindings be removed in the future?
> 
> Different from "neutron" CLI, as of now, there is no plan to
> drop the
> neutronclient python bindings.
> Not a small number of projects consumes it, so it will be
> maintained as-is.
> The only change is that new features are implemented in
> OpenStack SDK first and
> enhancements of neutronclient python bindings will be minimum.
> 
> 2. Should projects that consume neutronclient python bindings 
switch
> to OpenStack SDK?
> 
> Not necessarily. It depends on individual projects.
> Projects like nova that consumes small set of neutron features can
> continue to use neutronclient python bindings.
> Projects like horizon or heat that would like to support a wide
> range
> of features might be better to switch to OpenStack SDK.
> 
> 
> We've got a PTG session with Heat to discuss potential wider-use of
> SDK (and have been meaning to reach out to horizon as well). Perhaps
> a good first step would be to migrate the
> heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from
> neutronclient to SDK.
> 
> 
> Yeah, this would only be possible after openstacksdk supports all 
> neutron features as mentioned in the proposal.

++

> Note: We had initially added the OpenStackSDKPlugin in heat to support 
> neutron segments and were thinking of doing all new neutron stuff with 
> openstacksdk. However, we soon realised that it's not possible when 
> implementing neutron trunk support and had to drop the idea.

Maybe we start converting one thing at a time and when we find something 
sdk doesn't support we should be able to add it pretty quickly... which 
should then also wind up improving the sdk layer.
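As a concrete flavour of what the migration bridges, the same network listing looks like this in each binding; the shim below is a sketch (not a proposed API) showing the two call styles side by side, with fakes standing in for real clients:

```python
# Sketch of the two call styles the migration bridges. The fakes mimic
# the real shapes: neutronclient's list_networks() returns a dict with a
# "networks" key, while openstacksdk's conn.network.networks() yields
# resource objects with a .name attribute.

def network_names(client):
    """Return network names from either a neutronclient or an SDK handle."""
    if hasattr(client, "list_networks"):            # neutronclient style
        return [n["name"] for n in client.list_networks()["networks"]]
    return [n.name for n in client.network.networks()]  # SDK style

class FakeNeutron:
    def list_networks(self):
        return {"networks": [{"name": "private"}, {"name": "public"}]}

class FakeNet:
    def __init__(self, name):
        self.name = name

class FakeSDK:
    class network:  # mimics conn.network on an SDK Connection
        @staticmethod
        def networks():
            return [FakeNet("private"), FakeNet("public")]

names_a = network_names(FakeNeutron())
names_b = network_names(FakeSDK())
```

A shim like this is one way projects could cross-gate both paths during the transition, dropping the neutronclient branch once the SDK covers everything they need.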

> There's already an
> heat.engine.clie

[openstack-dev] [Murano]No meeting at Feb 28

2018-02-26 Thread Rong Zhu
Hi team,

Let's cancel the meeting on 28 Feb because of the PTG.

Cheers,
Rong Zhu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sdk] Cleaning up openstacksdk core team

2018-02-26 Thread Monty Taylor

Hey all,

A bunch of stuff has changed in SDK recently, and a few of the 
historical sdk core folks have also not been around. I'd like to propose 
removing the following people from the core team:


  Everett Toews
  Jesse Noller
  Richard Theis
  Terry Howe

They're all fantastic humans but they haven't had any activity in quite 
some time - and not since all the changes of the sdk/shade merge. As is 
normal in OpenStack land, they'd all be welcome back if they found 
themselves in a position to dive in again.


Any objections?

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTG] Project interviews at the PTG

2018-02-26 Thread Rich Bowen
A HUGE thank you to all of the people who have signed up to do 
interviews at the PTG. Tuesday is now completely full, but I still have 
space/time on the remaining days.


I have set up on the 4th floor. Turn left when you exit the lifts, and 
I'm set up by the couches in the break area.


Please check the schedule first before dropping in, but if I'm 
available, we can do a walk-in if you have the time.


Thanks!

--Rich

http://youtube.com/RDOCommunity


On 02/19/2018 02:12 PM, Rich Bowen wrote:
I promise this is the last time I'll bug you about this. (Except 
on-site, of course!)


I still have lots and lots of space for team/project/whatever interviews 
at the PTG. You can sign up at 
https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0 



You can see some examples of previous interviews at 
http://youtube.com/RDOCommunity


For the most part, interviews focus on what your team accomplished 
during the Queens cycle and what you want to work on in Rocky. However, 
we can also talk about other things like governance, community, related 
projects, licensing, or anything else that you feel is related to the 
OpenStack community.


I encourage you to talk with your team, and find 2 or 3 people who can 
speak most eloquently about what you are trying to do, and find a time 
that works for you.


I'll also have the schedules posted on-site, so you can sign up there, 
if you're still unsure of your schedule. But signing up ahead of time 
lets me know whether Wednesday is really a vacation day. ;-)


See you in Dublin!



--
Rich Bowen: Community Architect
rbo...@redhat.com
@rbowen // @RDOCommunity // @CentOSProject
1 859 351 9166

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Monty Taylor

On 02/26/2018 10:55 AM, Rabi Mishra wrote:
On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor > wrote:


On 02/26/2018 09:57 AM, Akihiro Motoki wrote:

Hi neutron and openstacksdk team,

This mail proposes to change the first priority of neutron-related
python binding to OpenStack SDK rather than neutronclient python
bindings.
I think it is time to start this, as OpenStack SDK became an official
project in Queens.


++


[Current situations and problems]

Network OSC commands are categorized into two parts: OSC and
neutronclient OSC plugin.
Commands implemented in OSC consumes OpenStack SDK
and commands implemented as neutronclient OSC plugin consumes
neutronclient python bindings.
This brings tricky situation that some features are supported
only in
OpenStack SDK and some features are supported only in neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack SDK
as first-class citizens,
and have the neutronclient OSC plugin consume the corresponding
OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all
network-related features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and
OSC/neutronclient OSC plugin as the first priority. If a new feature
depends on neutronclient python bindings, it can be implemented in
neutronclient python bindings first and ported as part of the
existing feature transition.

Existing features only supported in neutronclient python
bindings are
ported into OpenStack SDK,
and neutronclient OSC plugin will consume them once they are
implemented in OpenStack SDK.


I think this is a great idea. We've got a bunch of good
functional/integrations tests in the sdk gate as well that we can
start running on neutron patches so that we don't lose cross-gating.

[FAQ]

1. Will neutronclient python bindings be removed in the future?

Unlike the "neutron" CLI, as of now, there is no plan to drop the
neutronclient python bindings.
Quite a few projects consume it, so it will be maintained as-is.
The only change is that new features are implemented in OpenStack SDK
first and enhancements to neutronclient python bindings will be minimal.

2. Should projects that consume neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on the individual project.
Projects like nova that consume a small set of neutron features can
continue to use neutronclient python bindings.
Projects like horizon or heat that would like to support a wide range
of features may be better off switching to OpenStack SDK.


We've got a PTG session with Heat to discuss potential wider-use of
SDK (and have been meaning to reach out to horizon as well). Perhaps
a good first step would be to migrate the
heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from
neutronclient to SDK.


Yeah, this would only be possible after openstacksdk supports all 
neutron features as mentioned in the proposal.


++

Note: We had initially added the OpenStackSDKPlugin in heat to support 
neutron segments and were thinking of doing all new neutron stuff with 
openstacksdk. However, we soon realised that it's not possible when 
implementing neutron trunk support and had to drop the idea.


Maybe we start converting one thing at a time and when we find something 
sdk doesn't support we should be able to add it pretty quickly... which 
should then also wind up improving the sdk layer.



There's already an
heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in
Heat. I started a patch to migrate senlin from senlinclient (which
is just a thin wrapper around sdk):
https://review.openstack.org/#/c/532680/


For those of you who are at the PTG, I'll be giving an update on SDK
after lunch on Wednesday. I'd also be more than happy to come chat
about this more in the neutron room if that's useful to anybody.

Monty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Regards,
Rabi Mishra




Re: [openstack-dev] [k8s][ptg] SIG-K8s Scheduling for Dublin PTG

2018-02-26 Thread Chris Hoge
Initial scheduling is live for sig-k8s work at the PTG. Tuesday morning is
going to be devoted to external provider migration and documentation.
Late morning includes a Kolla session. The afternoon is mostly free, with
a session set aside for testing. If you have topics you'd like to have
sessions on, please add them to the schedule. If you’re working on k8s
within the OpenStack community, there is a team photo scheduled
for 3:30.

https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg

Chris

> On Feb 21, 2018, at 7:41 PM, Chris Hoge  wrote:
> 
> SIG-K8s has a planning etherpad available for the Dublin PTG. We have
> space scheduled for Tuesday, with approximately eight forty-minute work
> blocks. For the K8s on OpenStack side of things, we've identified a core
> set of priorities that we'll be working on that day, including:
> 
> * Moving openstack-cloud-controller-manager into OpenStack git repo.
> * Enabling and improving testing across multiple platforms.
> * Identifying documentation gaps.
> 
> Some of these items have some collaboration points with the Infra and
> QA teams. If members of those teams could help us identify when they
> would be available to work on repository creation and enabling testing,
> that would help us to schedule the appropriate times for those topics.
> 
> The work of the SIG-K8s groups also covers other Kubernetes and OpenStack
> integrations, including deploying OpenStack on top of Kubernetes. If
> anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun
> teams would like to schedule cross-project work sessions, please add your
> requests and preferred times to the planning etherpad. Additionally, I
> can be available to attend work sessions for any of those projects.
> 
> https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg
> 
> Thanks!
> Chris
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [sdk] Nominating Adrian Turjak for core

2018-02-26 Thread Rosario Di Somma
+1

On Mon, Feb 26, 2018 at 11:46, Sławomir Kapłoński  wrote:
+1

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

> Wiadomość napisana przez Monty Taylor  w dniu 
> 26.02.2018, o godz. 11:31:
>
> Hey everybody,
>
> I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's an 
> Operator/End User and brings *excellent* deep/strange/edge-condition bugs. He 
> also has a great understanding of the mechanics between Resource/Proxy 
> objects and is super helpful in verifying fixes work in the real world.
>
> It's worth noting that Adrian's overall review 'stats' aren't what is 
> traditionally associated with a 'core', but I think this is a good example 
> that life shouldn't be driven by stackalytics, and that being a core reviewer 
> is about understanding the code base and being able to evaluate proposed 
> changes. From my POV, Adrian more than qualifies.
>
> Thoughts?
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Rabi Mishra
On Mon, Feb 26, 2018 at 3:44 PM, Monty Taylor  wrote:

> On 02/26/2018 09:57 AM, Akihiro Motoki wrote:
>
>> Hi neutron and openstacksdk team,
>>
>> This mail proposes to change the first priority of neutron-related
>> python binding to OpenStack SDK rather than neutronclient python
>> bindings.
>> I think it is time to start this as OpenStack SDK became an official
>> project in Queens.
>>
>
> ++
>
>
> [Current situations and problems]
>>
>> Network OSC commands are categorized into two parts: OSC and
>> neutronclient OSC plugin.
>> Commands implemented in OSC consume OpenStack SDK,
>> and commands implemented as the neutronclient OSC plugin consume
>> neutronclient python bindings.
>> This creates a tricky situation where some features are supported only in
>> OpenStack SDK and some features are supported only in neutronclient
>> python bindings.
>>
>> [Proposal]
>>
>> The proposal is to implement all neutron features in OpenStack SDK as
>> first-class citizens,
>> and have the neutronclient OSC plugin consume the corresponding OpenStack SDK
>> APIs.
>>
>> Once this is achieved, users of OpenStack SDK can see all
>> network-related features.
>>
>> [Migration plan]
>>
>> The migration starts from Rocky (if we agree).
>>
>> New features should be supported in OpenStack SDK and
>> OSC/neutronclient OSC plugin as the first priority. If a new feature
>> depends on neutronclient python bindings, it can be implemented in
>> neutronclient python bindings first and ported as part of the
>> existing feature transition.
>>
>> Existing features only supported in neutronclient python bindings are
>> ported into OpenStack SDK,
>> and neutronclient OSC plugin will consume them once they are
>> implemented in OpenStack SDK.
>>
>
> I think this is a great idea. We've got a bunch of good
> functional/integrations tests in the sdk gate as well that we can start
> running on neutron patches so that we don't lose cross-gating.
>
> [FAQ]
>>
>> 1. Will neutronclient python bindings be removed in the future?
>>
>> Unlike the "neutron" CLI, as of now, there is no plan to drop the
>> neutronclient python bindings.
>> Quite a few projects consume it, so it will be maintained
>> as-is.
>> The only change is that new features are implemented in OpenStack SDK
>> first and
>> enhancements to neutronclient python bindings will be minimal.
>>
>> 2. Should projects that consume neutronclient python bindings switch
>> to OpenStack SDK?
>>
>> Not necessarily. It depends on the individual project.
>> Projects like nova that consume a small set of neutron features can
>> continue to use neutronclient python bindings.
>> Projects like horizon or heat that would like to support a wide range
>> of features may be better off switching to OpenStack SDK.
>>
>
> We've got a PTG session with Heat to discuss potential wider-use of SDK
> (and have been meaning to reach out to horizon as well). Perhaps a good
> first step would be to migrate the 
> heat.engine.clients.os.neutron:NeutronClientPlugin
> code in Heat from neutronclient to SDK.


Yeah, this would only be possible after openstacksdk supports all neutron
features as mentioned in the proposal.

Note: We had initially added the OpenStackSDKPlugin in heat to support
neutron segments and were thinking of doing all new neutron stuff with
openstacksdk. However, we soon realised that it's not possible when
implementing neutron trunk support and had to drop the idea.


> There's already an heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin
> plugin in Heat. I started a patch to migrate senlin from senlinclient
> (which is just a thin wrapper around sdk): https://review.openstack.org/#
> /c/532680/
>
> For those of you who are at the PTG, I'll be giving an update on SDK after
> lunch on Wednesday. I'd also be more than happy to come chat about this
> more in the neutron room if that's useful to anybody.
>
> Monty
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Rabi Mishra


Re: [openstack-dev] [sdk] Nominating Adrian Turjak for core

2018-02-26 Thread Sławomir Kapłoński
+1

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

> Wiadomość napisana przez Monty Taylor  w dniu 
> 26.02.2018, o godz. 11:31:
> 
> Hey everybody,
> 
> I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's an 
> Operator/End User and brings *excellent* deep/strange/edge-condition bugs. He 
> also has a great understanding of the mechanics between Resource/Proxy 
> objects and is super helpful in verifying fixes work in the real world.
> 
> It's worth noting that Adrian's overall review 'stats' aren't what is 
> traditionally associated with a 'core', but I think this is a good example 
> that life shouldn't be driven by stackalytics, and that being a core reviewer 
> is about understanding the code base and being able to evaluate proposed 
> changes. From my POV, Adrian more than qualifies.
> 
> Thoughts?
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [sdk] Nominating Adrian Turjak for core

2018-02-26 Thread Monty Taylor

Hey everybody,

I'd like to nominate Adrian Turjak (adriant) for openstacksdk-core. He's 
an Operator/End User and brings *excellent* deep/strange/edge-condition 
bugs. He also has a great understanding of the mechanics between 
Resource/Proxy objects and is super helpful in verifying fixes work in 
the real world.


It's worth noting that Adrian's overall review 'stats' aren't what is 
traditionally associated with a 'core', but I think this is a good 
example that life shouldn't be driven by stackalytics, and that being a 
core reviewer is about understanding the code base and being able to 
evaluate proposed changes. From my POV, Adrian more than qualifies.


Thoughts?
Monty



Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Sławomir Kapłoński
I also agree that it is good idea and I would be very happy to help with such 
migration :)

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl

> Wiadomość napisana przez Monty Taylor  w dniu 
> 26.02.2018, o godz. 11:14:
> 
> On 02/26/2018 09:57 AM, Akihiro Motoki wrote:
>> Hi neutron and openstacksdk team,
>> This mail proposes to change the first priority of neutron-related
>> python binding to OpenStack SDK rather than neutronclient python
>> bindings.
>> I think it is time to start this as OpenStack SDK became an official
>> project in Queens.
> 
> ++
> 
>> [Current situations and problems]
>> Network OSC commands are categorized into two parts: OSC and
>> neutronclient OSC plugin.
>> Commands implemented in OSC consume OpenStack SDK,
>> and commands implemented as the neutronclient OSC plugin consume
>> neutronclient python bindings.
>> This creates a tricky situation where some features are supported only in
>> OpenStack SDK and some features are supported only in neutronclient
>> python bindings.
>> [Proposal]
>> The proposal is to implement all neutron features in OpenStack SDK as
>> first-class citizens,
>> and have the neutronclient OSC plugin consume the corresponding OpenStack SDK APIs.
>> Once this is achieved, users of OpenStack SDK can see all
>> network-related features.
>> [Migration plan]
>> The migration starts from Rocky (if we agree).
>> New features should be supported in OpenStack SDK and
>> OSC/neutronclient OSC plugin as the first priority. If a new feature
>> depends on neutronclient python bindings, it can be implemented in
>> neutronclient python bindings first and ported as part of the
>> existing feature transition.
>> Existing features only supported in neutronclient python bindings are
>> ported into OpenStack SDK,
>> and neutronclient OSC plugin will consume them once they are
>> implemented in OpenStack SDK.
> 
> I think this is a great idea. We've got a bunch of good 
> functional/integrations tests in the sdk gate as well that we can start 
> running on neutron patches so that we don't lose cross-gating.
> 
>> [FAQ]
>> 1. Will neutronclient python bindings be removed in the future?
>> Unlike the "neutron" CLI, as of now, there is no plan to drop the
>> neutronclient python bindings.
>> Quite a few projects consume it, so it will be maintained as-is.
>> The only change is that new features are implemented in OpenStack SDK first
>> and
>> enhancements to neutronclient python bindings will be minimal.
>> 2. Should projects that consume neutronclient python bindings switch
>> to OpenStack SDK?
>> Not necessarily. It depends on the individual project.
>> Projects like nova that consume a small set of neutron features can
>> continue to use neutronclient python bindings.
>> Projects like horizon or heat that would like to support a wide range
>> of features may be better off switching to OpenStack SDK.
> 
> We've got a PTG session with Heat to discuss potential wider-use of SDK (and 
> have been meaning to reach out to horizon as well). Perhaps a good first step
> would be to migrate the heat.engine.clients.os.neutron:NeutronClientPlugin 
> code in Heat from neutronclient to SDK. There's already an 
> heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in Heat. I 
> started a patch to migrate senlin from senlinclient (which is just a thin 
> wrapper around sdk): https://review.openstack.org/#/c/532680/
> 
> For those of you who are at the PTG, I'll be giving an update on SDK after 
> lunch on Wednesday. I'd also be more than happy to come chat about this more 
> in the neutron room if that's useful to anybody.
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Ricardo Rocha
Hi.

We have an issue on the way Magnum uses keystone trusts.

Magnum clusters are created in a given project using Heat, and require
a trust token to communicate back with OpenStack services - there is
also integration with Kubernetes via a cloud provider.

This trust belongs to a given user, not the project, so whenever we
disable the user's account - for example when a user leaves the
organization - the cluster becomes unhealthy as the trust is no longer
valid. Given the token is available in the cluster nodes, accessible
by users, a trust linked to a service account is also not a viable
solution.

Is there an existing alternative for this kind of use case? I guess
what we might need is a trust that is linked to the project.

I believe the same issue would be there using application credentials,
as the ownership is similar.
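To make the ownership problem concrete, a trust is created roughly as in the sketch below (a hypothetical helper, assuming python-keystoneclient; the names and wiring are illustrative, not Magnum's actual code). The key point is that `trustor_user` is a mandatory, user-level field; nothing in the API lets the project itself own the trust:

```python
def create_cluster_trust(session, trustor_user_id, trustee_user_id,
                         project_id, role_names):
    """Hypothetical sketch of Magnum-style trust creation.

    The import is deferred so the helper can be defined without a
    live cloud; a real call needs valid keystone credentials.
    """
    from keystoneclient.v3 import client
    keystone = client.Client(session=session)
    return keystone.trusts.create(
        trustor_user=trustor_user_id,   # the trust is owned by this user...
        trustee_user=trustee_user_id,   # ...and delegated to this one
        project=project_id,             # scopes the trust, but does not own it
        role_names=role_names,
        impersonation=True,
    )
```

Disabling the trustor account invalidates every token issued from the trust, which is exactly the failure mode described above.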

Cheers,
  Ricardo



Re: [openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Monty Taylor

On 02/26/2018 09:57 AM, Akihiro Motoki wrote:

Hi neutron and openstacksdk team,

This mail proposes to change the first priority of neutron-related
python binding to OpenStack SDK rather than neutronclient python
bindings.
I think it is time to start this as OpenStack SDK became an official
project in Queens.


++


[Current situations and problems]

Network OSC commands are categorized into two parts: OSC and
neutronclient OSC plugin.
Commands implemented in OSC consume OpenStack SDK,
and commands implemented as the neutronclient OSC plugin consume
neutronclient python bindings.
This creates a tricky situation where some features are supported only in
OpenStack SDK and some features are supported only in neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack SDK as
first-class citizens,
and have the neutronclient OSC plugin consume the corresponding OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all
network-related features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and
OSC/neutronclient OSC plugin as the first priority. If a new feature
depends on neutronclient python bindings, it can be implemented in
neutronclient python bindings first and ported as part of the
existing feature transition.

Existing features only supported in neutronclient python bindings are
ported into OpenStack SDK,
and neutronclient OSC plugin will consume them once they are
implemented in OpenStack SDK.


I think this is a great idea. We've got a bunch of good 
functional/integrations tests in the sdk gate as well that we can start 
running on neutron patches so that we don't lose cross-gating.



[FAQ]

1. Will neutronclient python bindings be removed in the future?

Unlike the "neutron" CLI, as of now, there is no plan to drop the
neutronclient python bindings.
Quite a few projects consume it, so it will be maintained as-is.
The only change is that new features are implemented in OpenStack SDK first and
enhancements to neutronclient python bindings will be minimal.

2. Should projects that consume neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on the individual project.
Projects like nova that consume a small set of neutron features can
continue to use neutronclient python bindings.
Projects like horizon or heat that would like to support a wide range
of features may be better off switching to OpenStack SDK.


We've got a PTG session with Heat to discuss potential wider-use of SDK 
(and have been meaning to reach out to horizon as well). Perhaps a good
first step would be to migrate the 
heat.engine.clients.os.neutron:NeutronClientPlugin code in Heat from 
neutronclient to SDK. There's already an 
heat.engine.clients.os.openstacksdk:OpenStackSDKPlugin plugin in Heat. I 
started a patch to migrate senlin from senlinclient (which is just a 
thin wrapper around sdk): https://review.openstack.org/#/c/532680/


For those of you who are at the PTG, I'll be giving an update on SDK 
after lunch on Wednesday. I'd also be more than happy to come chat about 
this more in the neutron room if that's useful to anybody.


Monty



Re: [openstack-dev] [infra] Please delete branch "notif" of project tatu

2018-02-26 Thread Clark Boylan
On Mon, Feb 26, 2018, at 1:59 AM, Pino de Candia wrote:
> Hi OpenStack-Infra Team,
> 
> Please delete branch "notif" of openstack/tatu.
> 
> The project was recently created/imported from my private repo and only the
> master branch is needed for the community project.

Done. Just for historical purposes the sha1 of the HEAD of the branch was 
9ecbb46b8e645fbf2450d4bca09c8f4040341a85.

Clark



[openstack-dev] [infra] Please delete branch "notif" of project tatu

2018-02-26 Thread Pino de Candia
Hi OpenStack-Infra Team,

Please delete branch "notif" of openstack/tatu.

The project was recently created/imported from my private repo and only the
master branch is needed for the community project.


thanks for your help!
Pino


[openstack-dev] [neutron][sdk] Proposal to migrate neutronclient python bindings to OpenStack SDK

2018-02-26 Thread Akihiro Motoki
Hi neutron and openstacksdk team,

This mail proposes to change the first priority of neutron-related
python binding to OpenStack SDK rather than neutronclient python
bindings.
I think it is time to start this as OpenStack SDK became an official
project in Queens.

[Current situations and problems]

Network OSC commands are categorized into two parts: OSC and
neutronclient OSC plugin.
Commands implemented in OSC consume OpenStack SDK,
and commands implemented as the neutronclient OSC plugin consume
neutronclient python bindings.
This creates a tricky situation where some features are supported only in
OpenStack SDK and some features are supported only in neutronclient
python bindings.

[Proposal]

The proposal is to implement all neutron features in OpenStack SDK as
first-class citizens,
and have the neutronclient OSC plugin consume the corresponding OpenStack SDK APIs.

Once this is achieved, users of OpenStack SDK can see all
network-related features.

[Migration plan]

The migration starts from Rocky (if we agree).

New features should be supported in OpenStack SDK and
OSC/neutronclient OSC plugin as the first priority. If a new feature
depends on neutronclient python bindings, it can be implemented in
neutronclient python bindings first and ported as part of the
existing feature transition.

Existing features only supported in neutronclient python bindings are
ported into OpenStack SDK,
and neutronclient OSC plugin will consume them once they are
implemented in OpenStack SDK.

[FAQ]

1. Will neutronclient python bindings be removed in the future?

Unlike the "neutron" CLI, as of now, there is no plan to drop the
neutronclient python bindings.
Quite a few projects consume it, so it will be maintained as-is.
The only change is that new features are implemented in OpenStack SDK first and
enhancements to neutronclient python bindings will be minimal.

2. Should projects that consume neutronclient python bindings switch
to OpenStack SDK?

Not necessarily. It depends on the individual project.
Projects like nova that consume a small set of neutron features can
continue to use neutronclient python bindings.
Projects like horizon or heat that would like to support a wide range
of features may be better off switching to OpenStack SDK.
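To illustrate the two code paths the proposal wants to unify, listing networks looks like this in each binding (a sketch only; both helpers assume reachable cloud credentials, so they are defined here but not called):

```python
def list_networks_via_neutronclient(session):
    """Legacy path: neutronclient python bindings (returns plain dicts)."""
    from neutronclient.v2_0 import client as neutron_client
    neutron = neutron_client.Client(session=session)
    return neutron.list_networks()['networks']


def list_networks_via_sdk(cloud_name):
    """Proposed first-class path: openstacksdk (returns Resource objects)."""
    import openstack
    conn = openstack.connect(cloud=cloud_name)
    return list(conn.network.networks())
```

Under the proposal, only the second style would keep growing, and the neutronclient OSC plugin would call the same SDK layer.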


3. 

Thanks,
Akihiro



Re: [openstack-dev] [Openstack-operators] User Committee Election Results - February 2018

2018-02-26 Thread Jimmy McArthur

Congrats everyone! And thanks to the UC Election Committee for managing :)

Cheers,
Jimmy


Shilla Saebi 
February 25, 2018 at 11:52 PM
Hello Everyone!

Please join me in congratulating 3 newly elected members of the User 
Committee (UC)! The winners for the 3 seats are:


Melvin Hillsman
Amy Marrich
Yih Leong Sun

Full results can be found here: 
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045


Election details can also be found here: 
https://governance.openstack.org/uc/reference/uc-election-feb2018.html


Thank you to all of the candidates, and to all of you who voted and/or 
promoted the election!


Shilla
___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [openstack-dev] [User-committee] User Committee Election Results - February 2018

2018-02-26 Thread Arkady.Kanevsky
Congrats to new committee members.
And thanks for great job for previous ones.

From: Shilla Saebi [mailto:shilla.sa...@gmail.com]
Sent: Sunday, February 25, 2018 5:52 PM
To: user-committee ; OpenStack Mailing List 
; OpenStack Operators 
; OpenStack Dev 
; commun...@lists.openstack.org
Subject: [User-committee] User Committee Election Results - February 2018

Hello Everyone!

Please join me in congratulating 3 newly elected members of the User Committee 
(UC)! The winners for the 3 seats are:

Melvin Hillsman
Amy Marrich
Yih Leong Sun

Full results can be found here: 
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045

Election details can also be found here: 
https://governance.openstack.org/uc/reference/uc-election-feb2018.html

Thank you to all of the candidates, and to all of you who voted and/or promoted 
the election!

Shilla


[openstack-dev] [mistral] What's new in latest CloudFlow?

2018-02-26 Thread Shaanan, Guy (Nokia - IL/Kfar Sava)
CloudFlow [1] is an open-source web-based GUI tool that helps visualize and 
debug Mistral workflows.

With the latest release [2] of CloudFlow (v0.5.0) you can:
* Visualize the flow of workflow executions
* Identify the execution path of a single task in huge workflows
* Search Mistral by any entity ID
* Identify long-running tasks at a glance
* Easily distinguish between a simple task (an action) and a sub-workflow 
execution
* Follow tasks with a `retry` and/or `with-items`
* 1-click to copy task's input/output/publish/params values
* See complete workflow definition and per task definition YAML
* And more...

CloudFlow is easy to install and run (and even easier to upgrade), and we 
appreciate any feedback and contribution.

CloudFlow currently supports unauthenticated Mistral or authentication with 
KeyCloak (openid-connect implementation). Support for Keystone will be added 
in the near future.

You can try CloudFlow now on your Mistral Pike/Queens, or try it on the online 
demo [3].

[1] https://github.com/nokia/CloudFlow
[2] https://github.com/nokia/CloudFlow/releases/latest
[3] http://yaqluator.com:8000


Thanks,
-
Guy Shaanan
Full Stack Web Developer, CI & Internal Tools
CloudBand @ Nokia Software, Nokia, ISRAEL
guy.shaa...@nokia.com



Re: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency

2018-02-26 Thread Shake Chen
I prefer option 2.

On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang 
wrote:

> Recently, the Ubuntu jobs on the pike branch are red[0]. With some debugging,
> I found it is caused by
> a package dependency.
>
>
> *Background*
>
> Since we had no time to upgrade ceph from Jewel to Luminous at the end of
> the pike cycle, we pinned
> Ceph to Jewel on the pike branch. This works on CentOS, because ceph jewel and
> ceph luminous are in
> different repos.
>
> But the Ubuntu Cloud Archive repo bumped ceph to Luminous, even though
> ceph luminous still exists
> on UCA. Since qemu 2.10 depends on ceph luminous, we had to pin qemu
> to 2.5 to use ceph Jewel[1].
> And this has worked since then.
>
>
> *Now Issue*
>
> But recently, UCA changed the libvirt-daemon package dependency, and added
> following,
>
> Package: libvirt-daemon
> Version: 3.6.0-1ubuntu6.2~cloud0
> ...
> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
> 1:2.10+dfsg-0ubuntu3.4~)
>
> It requires qemu 2.10 now. So the dependency is broken and the nova-libvirt
> container fails to build.
>
>
> *Possible Solution*
>
> I think there are two possible ways now, but neither of them is good.
>
> 1. Install Ceph Luminous in the nova-libvirt container and Ceph Jewel in
> the ceph-* containers.
> 2. Bump Ceph from Jewel to Luminous. But this obviously breaks the backport
> policy.
>
> So, any ideas on this?
>
> [0] https://review.openstack.org/534149
> [1] https://review.openstack.org/#/c/526931/
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>
>


-- 
Shake Chen


[openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency

2018-02-26 Thread Jeffrey Zhang
Recently, the Ubuntu jobs on the pike branch are red [0]. With some debugging, I
found it is caused by a package dependency.


*Background*

Since we had no time to upgrade Ceph from Jewel to Luminous at the end of the
Pike cycle, we pinned Ceph to Jewel on the pike branch. This works on CentOS,
because Ceph Jewel and Ceph Luminous are in different repos.

But the Ubuntu Cloud Archive (UCA) repo bumped Ceph to Luminous, even though
Ceph Jewel still exists in UCA. And since qemu 2.10 depends on Ceph Luminous,
we had to pin qemu to 2.5 to keep using Ceph Jewel [1].
This has worked since then.
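For reference, a pin like the one described above is usually expressed as an apt preferences entry. This is only an illustrative sketch: the file path and version pattern are assumptions, not the exact ones from the change in [1]:

```
# /etc/apt/preferences.d/qemu-pin  (illustrative path)
# Hold the qemu packages at the 2.5 series so they keep working with Ceph Jewel.
Package: qemu qemu-*
Pin: version 1:2.5*
Pin-Priority: 1001
```

A Pin-Priority above 1000 makes apt keep (or even downgrade to) the pinned version in preference to a newer one published in UCA.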


*Current Issue*

But recently, UCA changed the libvirt-daemon package dependencies and added the
following:

Package: libvirt-daemon
Version: 3.6.0-1ubuntu6.2~cloud0
...
Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
1:2.10+dfsg-0ubuntu3.4~)

It now requires qemu 2.10, so the dependency is broken and the nova-libvirt
container fails to build.
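To make the conflict concrete: dpkg treats the pinned 1:2.5 qemu as older than the 1:2.10 Breaks threshold, so installing the new libvirt-daemon forces qemu out. The sketch below is a deliberately simplified stand-in for dpkg's real comparison algorithm (it ignores '~' pre-release ordering and the upstream/revision split), and the concrete qemu 2.5 version string is hypothetical:

```python
import re

def split_epoch(v):
    # "1:2.10+dfsg-0ubuntu3.4" -> (1, "2.10+dfsg-0ubuntu3.4")
    if ":" in v:
        epoch, rest = v.split(":", 1)
        return int(epoch), rest
    return 0, v

def compare_versions(a, b):
    """Rough Debian-style comparison: epoch first, then alternating
    digit/non-digit chunks, with digit chunks compared numerically.
    Enough for the whole-number 2.5-vs-2.10 comparison in this thread."""
    ea, ra = split_epoch(a)
    eb, rb = split_epoch(b)
    if ea != eb:
        return -1 if ea < eb else 1
    ta = re.findall(r"\d+|\D+", ra)
    tb = re.findall(r"\d+|\D+", rb)
    for xa, xb in zip(ta, tb):
        if xa == xb:
            continue
        if xa.isdigit() and xb.isdigit():
            return -1 if int(xa) < int(xb) else 1
        return -1 if xa < xb else 1
    return (len(ta) > len(tb)) - (len(ta) < len(tb))

breaks_threshold = "1:2.10+dfsg-0ubuntu3.4~"  # from the Breaks field above
pinned_qemu = "1:2.5+dfsg-5ubuntu10"          # hypothetical pinned 2.5 version

broken = compare_versions(pinned_qemu, breaks_threshold) < 0
print(broken)  # True: the pinned qemu falls inside the Breaks range
```

Since 2 < 10 in the second numeric chunk, any 2.5-series qemu sorts below the threshold, which is why the pin and the new libvirt-daemon cannot coexist.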


*Possible Solution*

I think there are two possible ways forward now, but neither of them is good.

1. Install Ceph Luminous in the nova-libvirt container and Ceph Jewel in the
ceph-* containers.
2. Bump Ceph from Jewel to Luminous. But this obviously breaks the backport
policy.

So, any ideas on this?

[0] https://review.openstack.org/534149
[1] https://review.openstack.org/#/c/526931/

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [nova] [placement] resource providers update 18-07

2018-02-26 Thread Jay Pipes

On 02/24/2018 02:17 AM, Matt Riedemann wrote:

On 2/16/2018 7:54 AM, Chris Dent wrote:

Before I get to the meat of this week's report, I'd like to request
some feedback from readers on how to improve the report. Over its
lifetime it has grown and it has now reached the point that while it
tries to give the impression of being complete, it never actually is,
and is a fair chunk of work to get that way.

So perhaps there is a way to make it a bit more focused and thus a bit
more actionable. If there are parts you can live without or parts you
can't live without, please let me know.

One idea I've had is to do some kind of automation to make it what
amounts to a dashboard, but I'm not super inclined to do that because
the human curation has been useful for me. If it's not useful for
anyone else, however, then that's something to consider.


-1 on a dashboard unless it's just something like a placement-specific 
review dashboard, but you'd have to star or somehow label 
placement-specific patches. I appreciate the human thought/comments on 
the various changes for context.


As do I. Thank you, Chris, for doing this week after week. It may not 
seem like it, but these emails are immensely useful for me.


Best,
-jay

