Re: [openstack-dev] [heat] resigning from heat-cores

2016-09-13 Thread Qiming Teng
Thank you very much for all the help. Good luck on your new journey!

- Qiming

On Mon, Sep 12, 2016 at 03:35:05PM +0300, Pavlo Shchelokovskyy wrote:
> Hi Heaters,
> 
> with great regret I announce my resignation from the heat-core team.
> 
> About a year ago I was reassigned to another project, and despite my best
> efforts I came to the conclusion that, unfortunately, I cannot keep up with
> the duties expected of a Heat core team member in an appropriate capacity.
> 
> I do still work on OpenStack, so I'm not leaving the community altogether,
> and will be available on e.g. IRC. I also have some ideas left to implement
> in Heat, but, given the great community we've built around the project, I
> can surely pursue them as an ordinary contributor.
> 
> It was an honor to be a member of this team; I’ve learned a lot during this
> time. Hope to see some of you in Barcelona :)
> 
> Best regards,
> 
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 11:22 AM, Liping Mao (limao) 
wrote:

> You have a valid point regarding ipvlan support in newer kernel versions
>> but IIUC overlay mode might not help if nic has a limit on max number of
>> macs that it supports in hardware.
>>
> for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html
> 
> Thanks, Vikas, for pointing this out. Yes, it may cause problems if the MACs of
> containers are exposed to the hardware switch.
> In the overlay case, AFAIK, the hardware should not learn container MACs, as they
> are inside the VXLAN (GRE) encapsulation.
>

Gotcha, thanks Liping.

What is your opinion on the unicast MAC limit that some drivers impose, which
can enable promiscuous mode on the VM if the number of macvlan interfaces crosses a
certain limit, and thus may degrade performance by accepting
all the multicast/broadcast traffic within the subnet?

ipvlan has problems with DHCP and IPv6. I think it's a topic worth
discussing.

-Vikas

>
>
> Regards,
> Liping Mao
>
> From: Vikas Choudhary 
> Reply-To: OpenStack List 
> Date: Wednesday, September 14, 2016, 1:10 PM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>>
>>
>> On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) 
>> wrote:
>>
>>> > Though not the best person to comment on macvlan vs ipvlan, one
>>> limitation of macvlan is that, on physical interfaces, the maximum possible
>>> number of randomly generated MACs may not cope with a large number of
>>> containers on the same VM.
>>>
>>> Thanks, yes, it is a limitation, Vikas.
>>> This happens if you use VLAN as the tenant network. If the tenant network uses
>>> overlay mode, it may be a little better for the MAC problem.
>>> The reason I mention macvlan as one of the choices is that ipvlan
>>> needs a very new kernel; it may be a little hard to use in production
>>> environments (AFAIK).
>>>
>>
>> You have a valid point regarding ipvlan support in newer kernel versions
>> but IIUC overlay mode might not help if nic has a limit on max number of
>> macs that it supports in hardware.
>>
> for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html
> 
>
>>
>>
>
>>
>>
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> From: Vikas Choudhary 
>>> Reply-To: OpenStack List 
>>> Date: Wednesday, September 14, 2016, 11:50 AM
>>>
>>> To: OpenStack List 
>>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>>
>>>
>>> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
>>> wrote:
>>>
 Hi Ivan and Gary,

 maybe we can use macvlan, as ipvlan needs a very new kernel.
 allowed-address-pairs can also allow different MACs in a VM.
 Do we consider macvlan here? Thanks.

>>>
>>> Though not the best person to comment on macvlan vs ipvlan, one
>>> limitation of macvlan is that, on physical interfaces, the maximum possible
>>> number of randomly generated MACs may not cope with a large number of
>>> containers on the same VM.
>>>
>>>

 Regards,
 Liping Mao

 From: Liping Mao 
 Reply-To: OpenStack List 
 Date: Tuesday, September 13, 2016, 9:09 PM
 To: OpenStack List 

 Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

 Hi Gary,

 I mean maybe that can be one choice in my mind.

 Security groups are per Neutron port. In this case, all the Docker containers on
 one VM will share one Neutron port (if I understand correctly), so they will
 share the security group on that port. It is not per-container security
 groups; I'm not sure how to use security groups in this case?

 Regards,
 Liping Mao

 On September 13, 2016, at 20:31, Loughnane, Gary wrote:

 Hi Liping,



 Thank you for the feedback!



 Do you mean to have disabled security groups as an optional
 configuration for Kuryr?

 Do you have any opinion on the consequences/acceptability of disabling
 SG?



 Regards,

 Gary



 *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
 *Sent:* Tuesday, September 13, 2016 12:56 PM
 *To:* OpenStack Development Mailing List (not for usage questions) <
 openstack-dev@lists.openstack.org>
 *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



 Hi Ivan,



 It sounds cool!



 for security group and allowed address pair,

 Maybe we can disable port-security, because all the Docker containers in one VM
 will share one security group on the VM port. I'm not sure how 

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
You have a valid point regarding ipvlan support in newer kernel versions but 
IIUC overlay mode might not help if nic has a limit on max number of macs that 
it supports in hardware.
   for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html

Thanks, Vikas, for pointing this out. Yes, it may cause problems if the MACs of
containers are exposed to the hardware switch.
In the overlay case, AFAIK, the hardware should not learn container MACs, as they
are inside the VXLAN (GRE) encapsulation.


Regards,
Liping Mao

From: Vikas Choudhary 
mailto:choudharyvika...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 14, 2016, 1:10 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary 
mailto:choudharyvika...@gmail.com>> wrote:


On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) 
mailto:li...@cisco.com>> wrote:
> Though not the best person to comment on macvlan vs ipvlan, one limitation
> of macvlan is that, on physical interfaces, the maximum possible number of randomly
> generated MACs may not cope with a large number of containers on the same VM.

Thanks, yes, it is a limitation, Vikas.
This happens if you use VLAN as the tenant network. If the tenant network uses overlay
mode, it may be a little better for the MAC problem.
The reason I mention macvlan as one of the choices is that ipvlan needs a
very new kernel; it may be a little hard to use in production environments (AFAIK).

You have a valid point regarding ipvlan support in newer kernel versions but 
IIUC overlay mode might not help if nic has a limit on max number of macs that 
it supports in hardware.
   for example: http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html





Regards,
Liping Mao

From: Vikas Choudhary 
mailto:choudharyvika...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 14, 2016, 11:50 AM

To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
mailto:li...@cisco.com>> wrote:
Hi Ivan and Gary,

maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in a VM.
Do we consider macvlan here? Thanks.

Though not the best person to comment on macvlan vs ipvlan, one limitation of
macvlan is that, on physical interfaces, the maximum possible number of randomly
generated MACs may not cope with a large number of containers on the same VM.


Regards,
Liping Mao

From: Liping Mao mailto:li...@cisco.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 13, 2016, 9:09 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean maybe that can be one choice in my mind.

Security groups are per Neutron port. In this case, all the Docker containers on one VM
will share one Neutron port (if I understand correctly), so they will share the
security group on that port. It is not per-container security groups; I'm not sure
how to use security groups in this case?

Regards,
Liping Mao

On September 13, 2016, at 20:31, Loughnane, Gary 
mailto:gary.loughn...@intel.com>> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the Docker containers in one VM will share
one security group on the VM port. I'm not sure how to use an SG for each
container; maybe just disabling port-security can be one of the choices. Then we do not
need allowed address pairs in this case.


Regards,
Liping Mao

On September 12, 2016, at 19:31, Coughlan, Ivan 
mailto:ivan.cough...@intel.com>> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-

[openstack-dev] [vitrage] Vitrage IRC meeting - SKIPPED this week

2016-09-13 Thread Afek, Ifat (Nokia - IL)
Hi,



Vitrage meeting today will be SKIPPED. We will meet again next week, on 
September 20.


Thanks,
Ifat.



[openstack-dev] [meghdwar] Edge Cloud Gateway API and Wednesday meeting

2016-09-13 Thread prakash RAMCHANDRAN
Hi all,

Let's meet 7-8am PDT (Wed 14:00-15:00 UTC), IRC channel:
#openstack-meghdwar

You can edit the etherpad if you want to update the agenda, or reply by email on
items 1-5 below.
https://etherpad.openstack.org/p/meghdwar

1.a. Meghdwar Gateway API discussions
Focus is on what APIs are needed for the minimum use case of two cloudlets on two
edges running one app each, and how to move one of the apps from the source edge
gateway to the destination edge gateway on compute nodes through those gateways.
Refer to the latest blueprint:
https://blueprints.launchpad.net/meghdwar/+spec/edge-cloud-gateway-api

1.b. Would like to discuss Joe's idea (use pull, not push)
joehuang: For the API to move cloudlets, I propose to design an API which can be
called directly from the destination edge gateway to the source gateway, for example
Pull(srcEdgeGateway, cloudlet). Using pull makes the destination talk to
the source directly, without introducing any broker or coordinator for the
migration; otherwise all bits flow to the broker or coordinator first and then to
the destination, and that kind of API introduces latency, overhead, and a
bottleneck at the broker or coordinator.
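Joe's pull-based shape can be sketched in a few lines; every class and method name here is purely illustrative, not an agreed Meghdwar interface — the point is only that the destination drives the transfer and no intermediary touches the data:

```python
# Hypothetical sketch of the pull-style migration API: the destination
# gateway calls the source directly, so no broker or coordinator sits in
# the data path. Names (EdgeGateway, dump, pull) are illustrative only.

class EdgeGateway:
    def __init__(self, name):
        self.name = name
        self.cloudlets = {}  # cloudlet name -> snapshot payload

    def dump(self, cloudlet):
        """Source side: snapshot a running cloudlet (cf. LXD's dump step)."""
        return self.cloudlets[cloudlet]

    def pull(self, src_gateway, cloudlet):
        """Destination side: fetch the snapshot straight from the source."""
        snapshot = src_gateway.dump(cloudlet)  # direct source -> destination copy
        self.cloudlets[cloudlet] = snapshot
        return snapshot

src = EdgeGateway("edge-1")
src.cloudlets["app1"] = b"snapshot-bytes"
dst = EdgeGateway("edge-2")
dst.pull(src, "app1")
```

A push-based or brokered design would instead route the snapshot bytes through a third party, which is exactly the latency/bottleneck concern raised above.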

1.c. Proposed by Narinder, based on the LXD live migration
APIs - requires that the Meghdwar service process have a configuration file for each
cloudlet at the edge gateway.
Dump (take snapshot), Create, Read, List, Write, Start, Stop - cloudlets.
Refer to live migration using LXD:
https://www.stgraber.org/2016/04/25/lxd-2-0-live-migration-912/

2. What other modules are needed in the OpenStack 'meghdwar' catalog? Refer to Murano
(maybe we can use it) - our catalog can be for ASP, CSP, NSP and integrated through
ESP. Reviewed other catalog modules (application-catalog-ui, murano,
murano-agent to be tested) on Rackspace.

3. To discuss the Binder option for two cloudlets, as one app in 1(a,b,c):
Cloudlet Gateway Management (Gateway Service Management)
  d. Python Cloudlet / MEC Gateway Management?

4. How do we go about priority?
Consider two cloudlet Binders and see if we can use LXD and LXC live migration
as the 1st API implementation for Meghdwar. See "LXD 2.0: Live migration [9/12]" on
Stéphane Graber's website, linked in 1.c.

Thanks,
Prakash


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 10:33 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

>
>
> On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao) 
> wrote:
>
>> > Though not the best person to comment on macvlan vs ipvlan, one
>> limitation of macvlan is that, on physical interfaces, the maximum possible
>> number of randomly generated MACs may not cope with a large number of
>> containers on the same VM.
>>
>> Thanks, yes, it is a limitation, Vikas.
>> This happens if you use VLAN as the tenant network. If the tenant network uses
>> overlay mode, it may be a little better for the MAC problem.
>> The reason I mention macvlan as one of the choices is that ipvlan
>> needs a very new kernel; it may be a little hard to use in production
>> environments (AFAIK).
>>
>
> You have a valid point regarding ipvlan support in newer kernel versions
> but IIUC overlay mode might not help if nic has a limit on max number of
> macs that it supports in hardware.
>
   for example:
http://www.brocade.com/content/html/en/configuration-guide/fastiron-08030b-securityguide/GUID-ED71C989-6295-4175-8CFE-7EABDEE83E1F.html


>
>

>
>
>>
>> Regards,
>> Liping Mao
>>
>> From: Vikas Choudhary 
>> Reply-To: OpenStack List 
>> Date: Wednesday, September 14, 2016, 11:50 AM
>>
>> To: OpenStack List 
>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>>
>>
>> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
>> wrote:
>>
>>> Hi Ivan and Gary,
>>>
>>> maybe we can use macvlan, as ipvlan needs a very new kernel.
>>> allowed-address-pairs can also allow different MACs in a VM.
>>> Do we consider macvlan here? Thanks.
>>>
>>
>> Though not the best person to comment on macvlan vs ipvlan, one
>> limitation of macvlan is that, on physical interfaces, the maximum possible
>> number of randomly generated MACs may not cope with a large number of
>> containers on the same VM.
>>
>>
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> From: Liping Mao 
>>> Reply-To: OpenStack List 
>>> Date: Tuesday, September 13, 2016, 9:09 PM
>>> To: OpenStack List 
>>>
>>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>> Hi Gary,
>>>
>>> I mean maybe that can be one choice in my mind.
>>>
>>> Security groups are per Neutron port. In this case, all the Docker containers on
>>> one VM will share one Neutron port (if I understand correctly), so they will
>>> share the security group on that port. It is not per-container security
>>> groups; I'm not sure how to use security groups in this case?
>>>
>>> Regards,
>>> Liping Mao
>>>
>>> On September 13, 2016, at 20:31, Loughnane, Gary wrote:
>>>
>>> Hi Liping,
>>>
>>>
>>>
>>> Thank you for the feedback!
>>>
>>>
>>>
>>> Do you mean to have disabled security groups as an optional
>>> configuration for Kuryr?
>>>
>>> Do you have any opinion on the consequences/acceptability of disabling
>>> SG?
>>>
>>>
>>>
>>> Regards,
>>>
>>> Gary
>>>
>>>
>>>
>>> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
>>> *Sent:* Tuesday, September 13, 2016 12:56 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions) <
>>> openstack-dev@lists.openstack.org>
>>> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>>
>>>
>>>
>>> Hi Ivan,
>>>
>>>
>>>
>>> It sounds cool!
>>>
>>>
>>>
>>> for security group and allowed address pair,
>>>
>>> Maybe we can disable port-security, because all the Docker containers in one VM
>>> will share one security group on the VM port. I'm not sure how to use an SG
>>> for each container; maybe just disabling port-security can be one of the
>>> choices. Then we do not need allowed address pairs in this case.
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Liping Mao
>>>
>>>
>>> On September 12, 2016, at 19:31, Coughlan, Ivan wrote:
>>>
>>>
>>>
>>> *Overview*
>>>
>>> Kuryr proposes to address the issues of double encapsulation and
>>> exposure of containers as neutron entities when containers are running
>>> within VMs.
>>>
>>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>>> propose to:
>>>
>>> -  Use allowed-address-pairs configuration for the VM neutron
>>> port
>>>
>>> -  Use IPVLAN for wiring the Containers within VM
>>>
>>>
>>>
>>> In this way:
>>>
>>> -  Achieve efficient data path to container within VM
>>>
>>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>>> features to accelerate the data path (more details below)
>>>
>>> -  Mitigate the risk of vlan-aware-vms not making neutron in
>>> time
>>>
>>> -  Provide a solution that works on existing and previous
>>> openstack releases
>>>
>>>
>>>
>>> This work should be done in a way permitting the user to optionally
>>> select this feature.
>>>
>>>
>>>
>>>
>>> *Required Changes*
>>>
>>> The four main changes we have identified in the current kuryr codebase
>>> are as follows:
>>>
>>> · Introduce an option of enabling “IPVLAN in VM” use case. This
>>> can be achieved by using a config file option or possibly 

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 9:39 AM, Liping Mao (limao)  wrote:

> > Though not the best person to comment on macvlan vs ipvlan, one
> limitation of macvlan is that, on physical interfaces, the maximum possible
> number of randomly generated MACs may not cope with a large number of
> containers on the same VM.
>
> Thanks, yes, it is a limitation, Vikas.
> This happens if you use VLAN as the tenant network. If the tenant network uses
> overlay mode, it may be a little better for the MAC problem.
> The reason I mention macvlan as one of the choices is that ipvlan
> needs a very new kernel; it may be a little hard to use in production
> environments (AFAIK).
>

You have a valid point regarding ipvlan support in newer kernel versions
but IIUC overlay mode might not help if nic has a limit on max number of
macs that it supports in hardware.



>
> Regards,
> Liping Mao
>
> From: Vikas Choudhary 
> Reply-To: OpenStack List 
> Date: Wednesday, September 14, 2016, 11:50 AM
>
> To: OpenStack List 
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
> wrote:
>
>> Hi Ivan and Gary,
>>
>> maybe we can use macvlan, as ipvlan needs a very new kernel.
>> allowed-address-pairs can also allow different MACs in a VM.
>> Do we consider macvlan here? Thanks.
>>
>
> Though not the best person to comment on macvlan vs ipvlan, one
> limitation of macvlan is that, on physical interfaces, the maximum possible
> number of randomly generated MACs may not cope with a large number of
> containers on the same VM.
>
>
>>
>> Regards,
>> Liping Mao
>>
>> From: Liping Mao 
>> Reply-To: OpenStack List 
>> Date: Tuesday, September 13, 2016, 9:09 PM
>> To: OpenStack List 
>>
>> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>> Hi Gary,
>>
>> I mean maybe that can be one choice in my mind.
>>
>> Security groups are per Neutron port. In this case, all the Docker containers on
>> one VM will share one Neutron port (if I understand correctly), so they will
>> share the security group on that port. It is not per-container security
>> groups; I'm not sure how to use security groups in this case?
>>
>> Regards,
>> Liping Mao
>>
>> On September 13, 2016, at 20:31, Loughnane, Gary wrote:
>>
>> Hi Liping,
>>
>>
>>
>> Thank you for the feedback!
>>
>>
>>
>> Do you mean to have disabled security groups as an optional configuration
>> for Kuryr?
>>
>> Do you have any opinion on the consequences/acceptability of disabling SG?
>>
>>
>>
>> Regards,
>>
>> Gary
>>
>>
>>
>> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
>> *Sent:* Tuesday, September 13, 2016 12:56 PM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev@lists.openstack.org>
>> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>>
>>
>>
>> Hi Ivan,
>>
>>
>>
>> It sounds cool!
>>
>>
>>
>> for security group and allowed address pair,
>>
>> Maybe we can disable port-security, because all the Docker containers in one VM will
>> share one security group on the VM port. I'm not sure how to use an SG for
>> each container; maybe just disabling port-security can be one of the choices.
>> Then we do not need allowed address pairs in this case.
>>
>>
>>
>>
>>
>> Regards,
>>
>> Liping Mao
>>
>>
>> On September 12, 2016, at 19:31, Coughlan, Ivan wrote:
>>
>>
>>
>> *Overview*
>>
>> Kuryr proposes to address the issues of double encapsulation and exposure
>> of containers as neutron entities when containers are running within VMs.
>>
>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>> propose to:
>>
>> -  Use allowed-address-pairs configuration for the VM neutron
>> port
>>
>> -  Use IPVLAN for wiring the Containers within VM
>>
>>
>>
>> In this way:
>>
>> -  Achieve efficient data path to container within VM
>>
>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>> features to accelerate the data path (more details below)
>>
>> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>>
>> -  Provide a solution that works on existing and previous
>> openstack releases
>>
>>
>>
>> This work should be done in a way permitting the user to optionally
>> select this feature.
>>
>>
>>
>>
>> *Required Changes*
>>
>> The four main changes we have identified in the current kuryr codebase
>> are as follows:
>>
>> · Introduce an option of enabling “IPVLAN in VM” use case. This
>> can be achieved by using a config file option or possibly passing a command
>> line argument. The IPVLAN master interface must also be identified.
>>
>> · If using “IPVLAN in VM” use case, Kuryr should no longer
>> create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
>> will create a new IPVLAN slave interface on top of the VM’s master
>> interface and pass this slave interface to the Container netns.
>>
>> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
>> identified so we can associate the additional IPVLAN addresses with the
>> port. This can be achieved by querying Neutron’

Re: [openstack-dev] [requirements][FFE][keystone][release] block keystonemiddleware 4.0.0

2016-09-13 Thread Tony Breeds
On Tue, Sep 13, 2016 at 03:53:46PM -0400, Steve Martinelli wrote:
> A bug was recently filed against keystone [1]. As of the Newton release we
> depend on a class being public -- BaseAuthProtocol instead of
> _BaseAuthProtocol [2] -- which was introduced in 4.1.0 [3].
> 
> The current requirement for keystonemiddleware is:
>   keystonemiddleware>=4.0.0,!=4.1.0,!=4.5.0
> 
> Blocking 4.0.0 would logically make it:
>   keystonemiddleware>=4.2.0,!=4.5.0
> 
> I've pushed a patch to the requirements repo for this change [4]. I'd like
> to know if blocking the lower value makes sense, I realize it's advertised,
> but we're up to 4.9.0 now.
> 
> Unfortunately, many projects depend on keystonemiddleware, but (luckily ?)
> this should only be server side projects [5], most of which are going
> through their RC period now.

So the *only* reason we can do this is because no projects have tagged RC1, and
the only projects that this really impacts are all services:

Package  : keystonemiddleware [keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0] 
(used by 37 projects)
Included in  : 27 projects
openstack/barbican[type:service]
openstack/cinder  [type:service]
openstack/congress[type:service]
openstack/designate   [type:service]
openstack/freezer-api [type:service]
openstack/glance  [type:service]
openstack/heat[type:service]
openstack/ironic  [type:service]
openstack/karbor  [type:service]
openstack/keystone[type:service]
openstack/magnum  [type:service]
openstack/manila  [type:service]
openstack/mistral [type:service]
openstack/monasca-api [type:service]
openstack/monasca-log-api [type:service]
openstack/murano  [type:service]
openstack/neutron [type:service]
openstack/nova[type:service]
openstack/sahara  [type:service]
openstack/searchlight [type:service]
openstack/senlin  [type:service]
openstack/solum   [type:service]
openstack/tacker  [type:service]
openstack/trove   [type:service]
openstack/vitrage [type:service]
openstack/watcher [type:service]
openstack/zaqar   [type:service]
Also affects : 10 projects
openstack/astara  [release:cycle-with-milestones]
openstack/cue []
openstack/ironic-inspector[release:cycle-with-intermediary]
openstack/kingbird[]
openstack/kosmos  []
openstack/networking-sfc  [release:independent]
openstack/nimble  []
openstack/octavia [release:independent]
openstack/tricircle   []
openstack/zun []

This means that *all* those projects would need to delay RC1 until they merge
the generated requirements update. It also adds another thing for the release
managers to check in a tight window.

So I'm inclined to delay this until after we branch stable/newton.

I get that it's wrong to list a minimum we know is broken, but I think that's the
less bad option at this point in the cycle. Packagers/deployers have a pretty
easy solution: grab one of the other 9 versions:
4.2.0, 4.3.0, 4.4.0, 4.4.1, 4.5.1, 4.6.0, 4.7.0, 4.8.0 or 4.9.0,
preferably 4.9.0 as that's what we're testing with.
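The effect of raising the floor can be checked mechanically with the `packaging` library (an assumption here, though it is what pip itself uses to evaluate these specifier strings): only 4.0.0 newly falls out, since 4.1.0 and 4.5.0 were already excluded.

```python
from packaging.specifiers import SpecifierSet

# Current and proposed constraints as quoted in the thread.
current = SpecifierSet(">=4.0.0,!=4.1.0,!=4.5.0")
proposed = SpecifierSet(">=4.2.0,!=4.5.0")

releases = ["4.0.0", "4.1.0", "4.2.0", "4.3.0", "4.4.0", "4.4.1",
            "4.5.0", "4.5.1", "4.6.0", "4.7.0", "4.8.0", "4.9.0"]

for version in releases:
    # Membership tests a version string against the specifier set.
    print(version, version in current, version in proposed)
```

Running this shows the two sets accept exactly the same releases except 4.0.0, which is the broken minimum being discussed.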

This will be better in Ocata.

Yours Tony.




Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Tony Breeds
On Tue, Sep 13, 2016 at 03:23:08PM -0400, Adam Young wrote:
> https://review.openstack.org/#/c/368530/
> 
> This change is for Python >2.7 only, as python2.7 already supports the
> latest version of these libraries. Back in the "just get python3 to work"
> days we cut our losses on Kerberos support, but now it is working. Getting
> this restriction removed means we don't have to edit away the tests for
> Kerberos in python3.

We had a good discussion about this in #openstack-keystone [1].

The summary of this is we're going to hold off on this until after we branch
stable/newton and fix this up with the next release of keystoneauth[2]

Yours Tony.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/latest.log.html#t2016-09-14T01:54:12
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/latest.log.html#t2016-09-14T02:38:45




Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Jay Pipes

On 09/13/2016 08:23 PM, Terry Wilson wrote:

On Tue, Sep 13, 2016 at 6:31 PM, Jay Pipes  wrote:

On 09/13/2016 01:40 PM, Terry Wilson wrote:


On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague  wrote:


On 06/11/2015 09:02 AM, Jay Pipes wrote:


On 06/11/2015 01:16 AM, Robert Collins wrote:


But again - where in OpenStack does this matter the slightest?



Precisely. I can't think of a single case where we are iterating over
anywhere near the number of dictionary items that we would see any
impact whatsoever.



In neutron, the ovsdb native code iterates over fairly large
dictionaries since the underlying OVS library stores OVSDB tables
completely in memory as dicts. I just looked at the code I wrote and
it currently uses values() and I now want to switch it to
six.itervalues() :p.


Best,
-jay



+1.

This is a massive premature optimization which just makes all the code
gorpy for no real reason.



Premature optimization is about wasting a bunch of time trying to
optimize code before you know you need to, not about following the
accepted almost-always-faster/always-less-memory-using solution that
already exists. Memory-wise it's the difference between a constant
88-byte iterator and the storage for an additional list of tuples. And
if Raymond Hettinger, in a talk called "Transforming Code Into
Beautiful Idiomatic Python" specifically mentions that people should
always use iteritems
(https://www.youtube.com/watch?v=OSGv2VnC0go&feature=youtu.be&t=21m24s),
I tend to believe him. Sure, it'd be much better if Python 3 and
Python 2 both returned iterators for items(), values(), keys(), etc.,
but they don't. Wasting memory for purely aesthetic reasons (they're
even both the same number of lines) is just a bad idea, IMNSHO.
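The memory point is easy to demonstrate. In Python 2, dict.items() materialized a full list of tuples while iteritems() returned a small constant-size iterator; the same contrast can be approximated in Python 3 by materializing the items view versus iterating it:

```python
import sys

# A materialized list of items (what Python 2's dict.items() built) grows
# with the dict, while an iterator over the items view (the shape of
# iteritems()/six.iteritems()) stays a small constant size.
small = {i: i for i in range(10)}
large = {i: i for i in range(100_000)}

for d in (small, large):
    as_list = list(d.items())  # py2-style items(): O(n) extra storage
    as_iter = iter(d.items())  # py2-style iteritems(): O(1) storage
    print(len(d), sys.getsizeof(as_list), sys.getsizeof(as_iter))
```

The iterator's size is the same for both dicts, while the list's footprint scales with the number of entries — which is exactly the trade-off being argued about for large OVSDB tables.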



Is it wasted time to respond to a mailing list post from 18 months ago?

-jay


Ha! Absolutely it is. Someone posted a Neutron patch haphazardly
converting all of the six.iteritems() calls to items() and it
struck a nerve. I searched for the thread in gmail without noticing the
date. My apologies! :)


Heh, no worries, I was mostly just being tongue-in-cheek :)

-jay



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
> Though not the best person to comment on macvlan vs ipvlan, one limitation
> of macvlan is that, on physical interfaces, the maximum possible number of randomly
> generated MACs may not cope with a large number of containers on the same VM.

Thanks, yes, it is a limitation, Vikas.
This happens if you use VLAN as the tenant network. If the tenant network uses overlay
mode, it may be a little better for the MAC problem.
The reason I mention macvlan as one of the choices is that ipvlan needs a
very new kernel; it may be a little hard to use in production environments (AFAIK).

Regards,
Liping Mao

From: Vikas Choudhary 
mailto:choudharyvika...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 14, 2016, 11:50 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal



On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao) 
mailto:li...@cisco.com>> wrote:
Hi Ivan and Gary,

maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in a VM.
Do we consider macvlan here? Thanks.

Though not the best person to comment on macvlan vs ipvlan, one limitation of
macvlan is that, on physical interfaces, the maximum possible number of randomly
generated MACs may not cope with a large number of containers on the same VM.


Regards,
Liping Mao

From: Liping Mao <li...@cisco.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, September 13, 2016, 9:09 PM
To: OpenStack List <openstack-dev@lists.openstack.org>

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean maybe that can be one choice in my mind.

Security groups apply per neutron port. In this case, all the Docker containers
on one VM will share one neutron port (if I understand correctly), so they will
share the security group on that port. It is not one security group per
container, so I am not sure how to use security groups in this case?

Regards,
Liping Mao

On September 13, 2016, at 20:31, Loughnane, Gary <gary.loughn...@intel.com> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

For security groups and allowed address pairs: maybe we can disable
port-security, because all the Docker containers in one VM will share one
security group on the VM port. I'm not sure how to use a security group for
each container, so just disabling port-security may be one of the choices;
then we would not need allowed address pairs in this case.


Regards,
Liping Mao

On September 12, 2016, at 19:31, Coughlan, Ivan <ivan.cough...@intel.com> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

· Introduce an option of enabling “IPVLAN in VM” use case. This can be
achieved by using a config file option or possibly passing a command line
argument. The IPVLAN master interface must also be identified.

· If using “IPVLAN in VM” use case, Kuryr should no longer create a new
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new
IPVLAN slave interface on top of the VM’s master interface and pass this slave
interface to the Container netns.

· If using “IPVLAN in VM” use case, the VM’s port ID needs to be
identified so we can associate the additional IPVLAN addresses with the port.
This can be achieved by querying Neutron’s show-port function and passing the
VMs IP address.

· If using “IPVLAN in VM” use case, Kuryr should associate the
additional IPVLAN addresses with the VMs port. This can be achieved using
Neutron’s allowed-address-pairs flag in the port-update function. We intend to
make use of Kuryr’s existing IPAM functionality to request these IPs from
Neutron.
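
To make the last two steps concrete, here is a small sketch (not actual Kuryr code; the function name is made up). In Neutron's API, allowed_address_pairs on a port is a list of {"ip_address": ...} entries (optionally with a mac_address); with ipvlan the containers share the VM's MAC, so an IP-only entry should suffice:

```python
# Sketch (illustrative, not Kuryr code): build the Neutron port-update body
# that adds a container's IP to the VM port's allowed-address-pairs.

def add_allowed_address(existing_pairs, container_ip):
    """Return a port-update request body including container_ip, idempotently."""
    pairs = [dict(p) for p in existing_pairs]
    if not any(p.get("ip_address") == container_ip for p in pairs):
        pairs.append({"ip_address": container_ip})
    return {"port": {"allowed_address_pairs": pairs}}

# The caller would first find the VM's port (via show-port, matching on the
# VM's IP address) and then PUT this body to /v2.0/ports/{port_id}.
body = add_allowed_address([{"ip_address": "10.0.0.5"}], "10.0.0.9")
print(body["port"]["allowed_address_pairs"])
# → [{'ip_address': '10.0.0.5'}, {'ip_address': '10.0.0.9'}]
```

Note that the update replaces the whole list, which is why the existing pairs are carried over rather than only the new entry being sent.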

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Wed, Sep 14, 2016 at 7:10 AM, Liping Mao (limao)  wrote:

> Hi Ivan and Gary,
>
> maybe we can use macvlan as ipvlan need very new kernel.
> allow-address-pairs can also allow different mac in vm.
> Do we consider macvlan here? Thanks.
>

Though I am not the best person to comment on macvlan vs ipvlan, one limitation
of macvlan is that on physical interfaces the maximum possible number of
randomly generated MACs may not keep up with a large number of containers on
the same VM.


>
> Regards,
> Liping Mao
>
> From: Liping Mao 
> Reply-To: OpenStack List 
> Date: Tuesday, September 13, 2016, 9:09 PM
> To: OpenStack List 
>
> Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
> Hi Gary,
>
> I mean maybe that can be one choice in my mind.
>
> Security Group is for each neutron port,in this case,all the docker on one
> vm will share one neutron port(if I understand correct),then they will
> share the security group on that port,it is not per container per security
> group,not sure how to use security group in this case?
>
> Regards,
> Liping Mao
>
> On September 13, 2016, at 20:31, Loughnane, Gary  wrote:
>
> Hi Liping,
>
>
>
> Thank you for the feedback!
>
>
>
> Do you mean to have disabled security groups as an optional configuration
> for Kuryr?
>
> Do you have any opinion on the consequences/acceptability of disabling SG?
>
>
>
> Regards,
>
> Gary
>
>
>
> *From:* Liping Mao (limao) [mailto:li...@cisco.com ]
> *Sent:* Tuesday, September 13, 2016 12:56 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Kuryr] IPVLAN data path proposal
>
>
>
> Hi Ivan,
>
>
>
> It sounds cool!
>
>
>
> for security group and allowed address pair,
>
> Maybe we can disable port-security,because all the docker in one vm will
> share one security group on the vm port. I'm not sure how to use sg for
> each docker,maybe just disable port-security can be one of the choice.
> then do not need allowed address pairs in this case.
>
>
>
>
>
> Regards,
>
> Liping Mao
>
>
> On September 12, 2016, at 19:31, Coughlan, Ivan  wrote:
>
>
>
> *Overview*
>
> Kuryr proposes to address the issues of double encapsulation and exposure
> of containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous
> openstack releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
> *Required Changes*
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This
> can be achieved by using a config file option or possibly passing a command
> line argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create
> a new port in Neutron or the associated VEth pairs. Instead, Kuryr will
> create a new IPVLAN slave interface on top of the VM’s master interface and
> pass this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We
> intend to make use of Kuryr’s existing IPAM functionality to request these
> IPs from Neutron.
>
>
>
> *Asks*
>
> We wish to discuss the pros and cons.
>
> For example, containers exposure as proper neutron entities and the
> utility of neutron’s allowed-address-pairs is not yet well understood.
>
>
>
> We also wish to understand if this approach is acceptable for kuryr?
>
>
>
>
>
> *EPA*
>
> The Enhanced Platform Awareness initiative is a continuous program to
> enable fine-tuning of the platform for virtualized network functions.
>
> This is done by exposing the processor and platform capabilities through
> the management and orchestration layers.
>
> When a virtual network function is instantiated by an Enhanced Platform
> Awareness enabled orchestrator, the application requirements can be more
> efficiently matched with the platform capabilities.
>
> http://itpeernetwork.intel.com/openstack-kilo-release-

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 11:13 PM, Antoni Segura Puimedon  wrote:

> On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
> >
> >
> > On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
> >  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu 
> wrote:
> >>>
> >>> Ivan,
> >>>
> >>> Thanks for the proposal. From Magnum's point of view, this proposal
> >>> doesn't seem to require to store neutron/rabbitmq credentials in
> tenant VMs
> >>> which is more desirable. I am looking forward to the PoC.
> >>
> >>
> >> Hongbin, can you please elaborate on how this will not require storing
> neutron
> >> credentials?
> >> For example in libnetwork case, neutron's commands like "show_port" and
> >> "update_port" will still need to be invoked from inside VM.
> >
> >
> > In a typical COE cluster, there are master nodes and work (minion/slave)
> > nodes. Regarding to credentials, the following is optimal:
> > * Avoid storing credentials in work nodes. If credentials have to be
> stored,
> > move them to master nodes if we can (containers are running in work
> nodes so
> > credentials stored there have a higher risk). A question for you,
> neutron's
> > commands like "show_port" and "update_port" need to be invoked from work
> > nodes or master nodes?
> > * If credentials have to be stored, scope them with least privilege
> (Magnum
> > uses Keystone trust for this purpose).
>
> I think that with the ipvlan proposal you probably can do without having
> to call
>
Vikas:

To me it looks like the 'from where to make neutron calls' part is the same in
both approaches (address-pairs and vlan-aware-vms). Which neutron API calls are
made will differ (no neutron port creation in the ipvlan approach, rather
port_update), but whether we make those calls from inside the worker VM or the
master VM will depend on the choice of 'neutron communication mode'
('rest_driver' or 'rpc_driver').
Please correct me if I understood something wrong.


> those two. IIUC the proposal the binding on the VM, taking libnetwork
> as an example
>  would be:
>
> 1. docker sends a request to kuryr-libnetwork running in container-in-vm
> mode.
> 2. kuryr-libnetwork forwards the request to a kuryr daemon that has
> the necessary
> credentials to talk to neutron (it could run either in the master node
> or in the compute
> node just like there is the dhcp agent, i.e., with one foot on the VM
> network and one
> on the underlay).
> 3. The kuryr daemon does the address pair proposal requests to Neutron
> and returns
> the result to the kuryr-libnetwork in the VM, at which point the VM
> port can already
> send and receive data for the container.
> 4. kuryr-libnetwork in the VM creates an ipvlan virtual device and
> puts it the IP
> returned by the kuryr daemon.
>
> >
> >>
> >>
> >> Overall I liked this approach given its simplicity over vlan-aware-vms.
> >>
> >> -VikasC
> >>>
> >>>
> >>> Best regards,
> >>> Hongbin
> >>>
> >>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <
> ivan.cough...@intel.com>
> >>> wrote:
> 
> 
> 
>  Overview
> 
>  Kuryr proposes to address the issues of double encapsulation and
>  exposure of containers as neutron entities when containers are running
>  within VMs.
> 
>  As an alternative to the vlan-aware-vms and use of ovs within the VM,
> we
>  propose to:
> 
>  -  Use allowed-address-pairs configuration for the VM neutron
>  port
> 
>  -  Use IPVLAN for wiring the Containers within VM
> 
> 
> 
>  In this way:
> 
>  -  Achieve efficient data path to container within VM
> 
>  -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>  features to accelerate the data path (more details below)
> 
>  -  Mitigate the risk of vlan-aware-vms not making neutron in
>  time
> 
>  -  Provide a solution that works on existing and previous
>  openstack releases
> 
> 
> 
>  This work should be done in a way permitting the user to optionally
>  select this feature.
> 
> 
> 
> 
> 
>  Required Changes
> 
>  The four main changes we have identified in the current kuryr codebase
>  are as follows:
> 
>  · Introduce an option of enabling “IPVLAN in VM” use case.
> This
>  can be achieved by using a config file option or possibly passing a
> command
>  line argument. The IPVLAN master interface must also be identified.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should no longer
>  create a new port in Neutron or the associated VEth pairs. Instead,
> Kuryr
>  will create a new IPVLAN slave interface on top of the VM’s master
> interface
>  and pass this slave interface to the Container netns.
> 
>  · If using “IPVLAN in VM” use case, the VM’s port ID needs to
> be
>  identified so we can associate the additional IPVLAN addresses with
> 

[openstack-dev] [kolla] Removal of folks from kolla-drivers

2016-09-13 Thread Steven Dake (stdake)
Forgot kolla tag.  See message inside.

From: Steven Dake 
Date: Tuesday, September 13, 2016 at 8:34 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Removal of folks from kolla-drivers

Hey folks,

I removed several folks who appeared inactive during the Newton cycle from the
kolla-drivers team on Launchpad.  As a reminder, the reason we add people to
the kolla-drivers team is to do bug triage, move blueprint states around, and
distribute the process of handling the release.  If folks are inactive, they
need not have permissions to do these things.

If I removed anyone in error, please respond to this email off-line and I’ll 
add you back in. (It is possible I did such a thing).

Regards
-steve



Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-13 Thread Jeffrey Zhang
Thanks, Steve, for all you did to make Kolla come true.

On Wed, Sep 14, 2016 at 10:19 AM, Vikram Hosakote (vhosakot)
 wrote:
> Thanks a lot Steve for being a great PTL, leader and a mentor!
>
> Regards,
> Vikram Hosakote
> IRC:  vhosakot
>
> From: "Steven Dake (stdake)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, September 12, 2016 at 1:04 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for
> Ocata cycle
>
> To the OpenStack Community,
>
>
>
> Consider this email my self non-nomination for PTL of Kolla for
>
> the coming Ocata release.  I let the team know in our IRC team meeting
>
> several months ago I was passing on the baton at the conclusion of Newton,
>
> but I thought the broader OpenStack community would appreciate the
> information.
>
>
>
> I am super proud of what our tiny struggling community produced starting
>
> 3 years ago with only 3 people to the strongly emergent system that is Kolla
>
> with over 467 total contributors [1] since inception and closing in on 5,000
>
> commits today.
>
>
>
> In my opinion, the Kolla community is well on its way to conquering the last
>
> great challenge OpenStack faces: Making operational deployment management
> (ODM)
>
> of OpenStack cloud platforms straight-forward, easy, and most importantly
>
> cost effective for the long term management of OpenStack.
>
>
>
> The original objective the Kolla community set out to accomplish, deploying
>
> OpenStack in containers at 100 node scale has been achieved as proven by
> this
>
> review [2].  In these 12 scenarios, we were able to deploy with 3
>
> controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
>
> storage and run rally as well as tempest against the deployment.
>
>
>
> Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.
>
>
>
> I have developed a strong leadership pipeline and expect several candidates
>
> to self-nominate.  I wish all of them the best in the future PTL elections.
>
>
>
> Finally, I would like to thank all of the folks that have supported Kolla’s
>
> objectives.  If I listed the folks individually this email would be far too
>
> long, but you know who you are J Thank you for placing trust in my
> judgement.
>
>
>
> It has been a pleasure to serve as your leader.
>
>
>
> Regards
>
> -steak
>
>
>
> [1] http://stackalytics.com/report/contribution/kolla-group/2000
>
> [2] https://review.openstack.org/#/c/352101/
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Kevin Benton
>All performance matters. All memory consumption matters. Being wasteful
over a purely aesthetic few extra characters of code is silly.

Isn't the logical conclusion of this to write everything in a different
language? :)

On Tue, Sep 13, 2016 at 8:42 AM, Terry Wilson  wrote:

> On Wed, Jun 10, 2015 at 4:41 AM, Robert Collins
>  wrote:
> > On 10 June 2015 at 21:30, Ihar Hrachyshka  wrote:
> >> -BEGIN PGP SIGNED MESSAGE-
> >> Hash: SHA256
> >>
> >> On 06/10/2015 02:15 AM, Robert Collins wrote:
> >>> I'm very glad folk are working on Python3 ports.
> >>>
> >>> I'd like to call attention to one little wart in that process: I
> >>> get the feeling that folk are applying a massive regex to find
> >>> things like d.iteritems() and convert that to six.iteritems(d).
> >>>
> >>> I'd very much prefer that such a regex approach move things to
> >>> d.items(), which is much easier to read.
> >>>
> >>> Here's why. Firstly, very very very few of our dict iterations are
> >>> going to be performance sensitive in the way that iteritems()
> >>> matters. Secondly, no really - unless you're doing HUGE dicts, it
> >>> doesn't matter. Thirdly. Really, it doesn't.
> >>>
> >>
> >> Does it hurt though? ;)
> >
> > Yes.
> >
> > Its: harder to read. Its going to have to be removed eventually anyway
> > (when we stop supporting 2.7). Its marginally slower on 3.x (it has a
> > function and an iterator wrapping the actual thing). Its unidiomatic,
> > and we get lots of programmers that are new to Python; we should be
> > giving them as beautiful code as we can to help them learn.
>
> If someone is so new they can't handle six.iteritems, they should stay
> away from Neutron code. It'll eat them.
>
> >>> At 1 million items the overhead is 54ms[1]. If we're doing inner
> >>> loops on million item dictionaries anywhere in OpenStack today, we
> >>> have a problem. We might want to in e.g. the scheduler... if it
> >>> held in-memory state on a million hypervisors at once, because I
> >>> don't really to to imagine it pulling a million rows from a DB on
> >>> every action. But then, we'd be looking at a whole 54ms. I think we
> >>> could survive, if we did that (which we don't).
> >>>
> >>> So - please, no six.iteritems().
>
> Huge -1 from me. The "I like looking at d.items() more than I like
> looking at six.iteritems(d) so make everything (even slightly) less
> efficient" argument is insane to me. All performance matters. All
> memory consumption matters. Being wasteful over a purely aesthetic few
> extra characters of code is silly.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
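
For reference, the memory side of this argument is easy to check on Python 3, where items() returns a constant-size view object rather than a materialized list. A quick sketch (exact byte counts are CPython- and platform-specific):

```python
import sys

# On Python 3, dict.items() is a lazy view whose size does not depend on the
# dict, while list(d.items()) allocates storage proportional to the number of
# entries. (Exact byte counts vary by CPython version and platform.)
n = 10**6
d = {i: i for i in range(n)}

view = d.items()
materialized = list(d.items())

print(sys.getsizeof(view))          # a few tens of bytes, regardless of n
print(sys.getsizeof(materialized))  # several megabytes of tuple pointers
```

Iterating the view directly never allocates the intermediate list, which is the behavior six.iteritems() provides on Python 2.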


[openstack-dev] Removal of folks from kolla-drivers

2016-09-13 Thread Steven Dake (stdake)
Hey folks,

I removed several folks who appeared inactive during the Newton cycle from the
kolla-drivers team on Launchpad.  As a reminder, the reason we add people to
the kolla-drivers team is to do bug triage, move blueprint states around, and
distribute the process of handling the release.  If folks are inactive, they
need not have permissions to do these things.

If I removed anyone in error, please respond to this email off-line and I’ll 
add you back in. (It is possible I did such a thing).

Regards
-steve



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 5:26 PM, Liping Mao (limao)  wrote:

> Hi Ivan,
>
> It sounds cool!
>
> for security group and allowed address pair,
> Maybe we can disable port-security,because all the docker in one vm will
> share one security group on the vm port. I'm not sure how to use sg for
> each docker,maybe just disable port-security can be one of the choice. then
> do not need allowed address pairs in this case.
>
Vikas:

Can you please elaborate on "maybe just disable port-security can be one of
the choice. then do not need allowed address pairs in this case"?

Are you suggesting a solution where, by disabling port security, each
container can have its own security group? Would you mind explaining a bit
more for me?


>
> Regards,
> Liping Mao
>
> On September 12, 2016, at 19:31, Coughlan, Ivan  wrote:
>
>
>
> *Overview*
>
> Kuryr proposes to address the issues of double encapsulation and exposure
> of containers as neutron entities when containers are running within VMs.
>
> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
> propose to:
>
> -  Use allowed-address-pairs configuration for the VM neutron port
>
> -  Use IPVLAN for wiring the Containers within VM
>
>
>
> In this way:
>
> -  Achieve efficient data path to container within VM
>
> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
> features to accelerate the data path (more details below)
>
> -  Mitigate the risk of vlan-aware-vms not making neutron in time
>
> -  Provide a solution that works on existing and previous
> openstack releases
>
>
>
> This work should be done in a way permitting the user to optionally select
> this feature.
>
>
>
>
> *Required Changes*
>
> The four main changes we have identified in the current kuryr codebase are
> as follows:
>
> · Introduce an option of enabling “IPVLAN in VM” use case. This
> can be achieved by using a config file option or possibly passing a command
> line argument. The IPVLAN master interface must also be identified.
>
> · If using “IPVLAN in VM” use case, Kuryr should no longer create
> a new port in Neutron or the associated VEth pairs. Instead, Kuryr will
> create a new IPVLAN slave interface on top of the VM’s master interface and
> pass this slave interface to the Container netns.
>
> · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
> identified so we can associate the additional IPVLAN addresses with the
> port. This can be achieved by querying Neutron’s show-port function and
> passing the VMs IP address.
>
> · If using “IPVLAN in VM” use case, Kuryr should associate the
> additional IPVLAN addresses with the VMs port. This can be achieved using
> Neutron’s allowed-address-pairs flag in the port-update function. We
> intend to make use of Kuryr’s existing IPAM functionality to request these
> IPs from Neutron.
>
>
>
> *Asks*
>
> We wish to discuss the pros and cons.
>
> For example, containers exposure as proper neutron entities and the
> utility of neutron’s allowed-address-pairs is not yet well understood.
>
>
>
> We also wish to understand if this approach is acceptable for kuryr?
>
>
>
>
>
> *EPA*
>
> The Enhanced Platform Awareness initiative is a continuous program to
> enable fine-tuning of the platform for virtualized network functions.
>
> This is done by exposing the processor and platform capabilities through
> the management and orchestration layers.
>
> When a virtual network function is instantiated by an Enhanced Platform
> Awareness enabled orchestrator, the application requirements can be more
> efficiently matched with the platform capabilities.
>
> http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>
> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>
> https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo
>
>
>
>
>
> Regards,
>
> Ivan….
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> h

Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Terry Wilson
On Tue, Sep 13, 2016 at 6:31 PM, Jay Pipes  wrote:
> On 09/13/2016 01:40 PM, Terry Wilson wrote:
>>
>> On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague  wrote:
>>>
>>> On 06/11/2015 09:02 AM, Jay Pipes wrote:

 On 06/11/2015 01:16 AM, Robert Collins wrote:
>
> But again - where in OpenStack does this matter the slightest?


 Precisely. I can't think of a single case where we are iterating over
 anywhere near the number of dictionary items that we would see any
 impact whatsoever.
>>
>>
>> In neutron, the ovsdb native code iterates over fairly large
>> dictionaries since the underlying OVS library stores OVSDB tables
>> completely in memory as dicts. I just looked at the code I wrote and
>> it currently uses values() and I now want to switch it to
>> six.itervalues() :p.
>>
 Best,
 -jay
>>>
>>>
>>> +1.
>>>
>>> This is a massive premature optimization which just makes all the code
>>> gorpy for no real reason.
>>
>>
>> Premature optimization is about wasting a bunch of time trying to
>> optimize code before you know you need to, not about following the
>> accepted almost-always-faster/always-less-memory-using solution that
>> already exists. Memory-wise it's the difference between a constant
>> 88-byte iterator and the storage for an additional list of tuples. And
>> if Raymond Hettinger, in a talk called "Transforming Code Into
>> Beautiful Idiomatic Python" specifically mentions that people should
>> always use iteritems
>> (https://www.youtube.com/watch?v=OSGv2VnC0go&feature=youtu.be&t=21m24s),
>> I tend to believe him. Sure, it'd be much better if Python 3 and
>> Python 2 both returned iterators for items(), values(), keys(), etc.,
>> but it doesn't. Wasting memory for purely aesthetic reasons (they're
>> even both the same number of lines) is just a bad idea, IMNSHO.
>
>
> Is it wasted time to respond to a mailing list post from 18 months ago?
>
> -jay

Ha! Absolutely it is. Someone posted a Neutron patch haphazardly
converting all of the six.iteritems() calls to items() and it
struck a nerve. I searched for the thread in gmail not noticing the
date. My apologies! :)

Terry



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Vikas Choudhary
On Tue, Sep 13, 2016 at 8:35 PM, Hongbin Lu  wrote:

>
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>>> which is more desirable. I am looking forward to the PoC.
>>>
>>
>> Hongbin, can you please elaborate on how this will not require storing
>> neutron credentials?
>> For example in libnetwork case, neutron's commands like "show_port" and
>> "update_port" will still need to be invoked from inside VM.
>>
>
> In a typical COE cluster, there are master nodes and work (minion/slave)
> nodes. Regarding to credentials, the following is optimal:
> * Avoid storing credentials in work nodes. If credentials have to be
> stored, move them to master nodes if we can (containers are running in work
> nodes so credentials stored there have a higher risk). A question for you,
> neutron's commands like "show_port" and "update_port" need to be invoked
> from work nodes or master nodes?
>

VIKAS>> That will depend on kuryr configuration. There will be two choices:

   1. use 'rest_driver' for neutron communication (making calls directly
   where the libnetwork driver is running; it could be a VM or baremetal)
   2. use 'rpc_driver'. The flow that Toni described assumes that
   rpc_driver is used, so as he explained, kuryr-libnetwork in the VM will talk
   to the kuryr daemon over RPC for neutron services.

IMO, the above part will be common to both approaches, address-pairs based
or vlan-aware-vms based.


* If credentials have to be stored, scope them with least privilege (Magnum
> uses Keystone trust for this purpose).
>
>
>>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.
>>
>> -VikasC
>>
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan >> > wrote:
>>>


 *Overview*

 Kuryr proposes to address the issues of double encapsulation and
 exposure of containers as neutron entities when containers are running
 within VMs.

 As an alternative to the vlan-aware-vms and use of ovs within the VM,
 we propose to:

 -  Use allowed-address-pairs configuration for the VM neutron
 port

 -  Use IPVLAN for wiring the Containers within VM



 In this way:

 -  Achieve efficient data path to container within VM

 -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
 features to accelerate the data path (more details below)

 -  Mitigate the risk of vlan-aware-vms not making neutron in
 time

 -  Provide a solution that works on existing and previous
 openstack releases



 This work should be done in a way permitting the user to optionally
 select this feature.




 *Required Changes*

 The four main changes we have identified in the current kuryr codebase
 are as follows:

 · Introduce an option of enabling “IPVLAN in VM” use case.
 This can be achieved by using a config file option or possibly passing a
 command line argument. The IPVLAN master interface must also be identified.

 · If using “IPVLAN in VM” use case, Kuryr should no longer
 create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
 will create a new IPVLAN slave interface on top of the VM’s master
 interface and pass this slave interface to the Container netns.

 · If using “IPVLAN in VM” use case, the VM’s port ID needs to
 be identified so we can associate the additional IPVLAN addresses with the
 port. This can be achieved by querying Neutron’s show-port function and
 passing the VMs IP address.

 · If using “IPVLAN in VM” use case, Kuryr should associate the
 additional IPVLAN addresses with the VMs port. This can be achieved using
 Neutron’s allowed-address-pairs flag in the port-update function. We
 intend to make use of Kuryr’s existing IPAM functionality to request these
 IPs from Neutron.
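The last two changes above can be sketched concretely. The helper below is our own illustration (not Kuryr code): it builds the port-update body that appends a container's IPVLAN address to the VM port's allowed-address-pairs, assuming the standard Neutron request format. With IPVLAN the containers share the VM's MAC, so only the IP address normally needs to be listed.

```python
def add_allowed_address(vm_port, container_ip, container_mac=None):
    """Build a Neutron port-update body appending one container address
    to the VM port's allowed-address-pairs (hypothetical helper)."""
    pair = {'ip_address': container_ip}
    if container_mac is not None:  # usually omitted for IPVLAN (shared MAC)
        pair['mac_address'] = container_mac
    pairs = list(vm_port.get('allowed_address_pairs', [])) + [pair]
    return {'port': {'allowed_address_pairs': pairs}}

# A VM port as returned by Neutron's show-port, with no pairs yet:
vm_port = {'id': 'PORT_ID', 'allowed_address_pairs': []}
body = add_allowed_address(vm_port, '10.0.0.42')
print(body)
# {'port': {'allowed_address_pairs': [{'ip_address': '10.0.0.42'}]}}
```

The resulting body would be passed to Neutron's port-update call for the VM's port ID.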



 *Asks*

 We wish to discuss the pros and cons.

 For example, the exposure of containers as proper neutron entities and
 the utility of neutron’s allowed-address-pairs are not yet well understood.



 We also wish to understand whether this approach is acceptable for kuryr.





 *EPA*

 The Enhanced Platform Awareness initiative is a continuous program to
 enable fine-tuning of the platform for virtualized network functions.

 This is done by exposing the processor and platform capabilities
 through the management and orchestration layers.

 When a virtual network function is instantiated by an Enhanced Platform
 Awareness enabled orchestrator, the application requirements can be more
 efficiently matched with the platform capabilities.

[openstack-dev] [cinder] [oslo] privsep socket's client receive thread may crash without attention

2016-09-13 Thread TommyLike Hu
Hello everyone:

  I found an error when backing up a volume with oslo_privsep. It seems that
the client channel uses a thread to receive socket messages, but that thread
may crash on an unhandled inner exception while the main procedure
continues, so I reported a bug and uploaded a patch set to fix it:
https://review.openstack.org/#/c/369786/. What I did is simply ignore the
timeout exception when receiving a message.
  This change seems a little arbitrary, so I am starting this thread to
raise attention. Please add your advice or solutions; I will abandon this
change if a better solution is provided.
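For readers who haven't opened the patch: the idea can be shown with a standalone sketch (ours, not the oslo.privsep or patch code). The reader thread treats a socket timeout as transient and keeps looping, instead of dying on the unhandled exception:

```python
import queue
import socket
import threading
import time

def reader_main(sock, out_queue):
    """Sketch of a privsep-style reader loop: a transient socket.timeout
    must not kill the thread; only EOF or a real error ends it."""
    while True:
        try:
            data = sock.recv(4096)
        except socket.timeout:
            continue              # transient: keep the channel alive
        except OSError:
            break                 # real error: tear the channel down
        if not data:
            break                 # peer closed the socket
        out_queue.put(data)

a, b = socket.socketpair()
a.settimeout(0.05)                # force timeouts before data arrives
q = queue.Queue()
t = threading.Thread(target=reader_main, args=(a, q))
t.start()
time.sleep(0.2)                   # several timeouts fire; thread survives
b.sendall(b"payload")
b.close()
t.join()
msg = q.get()
print(msg)                        # b'payload'
```

Without the `except socket.timeout: continue` clause the thread dies silently on the first timeout, which matches the symptom described above.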


Thanks
TommyLike.Hu
-- 
It is not the mountain we conquer but ourselves.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-09-13 Thread Alex Xu
Hi,

We have weekly Nova API meeting tomorrow. The meeting is being held
Wednesday UTC1300 and irc channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for Ocata cycle

2016-09-13 Thread Vikram Hosakote (vhosakot)
Thanks a lot Steve for being a great PTL, leader and a mentor!

Regards,
Vikram Hosakote
IRC:  vhosakot

From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 12, 2016 at 1:04 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [kolla][ptl] Self non-nomination for Kolla PTL for 
Ocata cycle

To the OpenStack Community,

Consider this email my self non-nomination for PTL of Kolla for
the coming Ocata release.  I let the team know in our IRC team meeting
several months ago I was passing the on baton at the conclusion of Newton,
but I thought the broader OpenStack community would appreciate the information.

I am super proud of what our tiny struggling community produced starting
3 years ago with only 3 people to the strongly emergent system that is Kolla
with over 467 total contributors [1] since inception and closing in on 5,000
commits today.

In my opinion, the Kolla community is well on its way to conquering the last
great challenge OpenStack faces: Making operational deployment management (ODM)
of OpenStack cloud platforms straight-forward, easy, and most importantly
cost effective for the long term management of OpenStack.

The original objective the Kolla community set out to accomplish, deploying
OpenStack in containers at 100 node scale has been achieved as proven by this
review [2].  In these 12 scenarios, we were able to deploy with 3
controllers, 100 compute nodes, and 20 storage nodes using Ceph for all
storage and run rally as well as tempest against the deployment.

Kolla is _No_Longer_a_Toy_ and _has_not_been_ since Liberty 1.1.0.

I have developed a strong leadership pipeline and expect several candidates
to self-nominate.  I wish all of them the best in the future PTL elections.

Finally, I would like to thank all of the folks that have supported Kolla's
objectives.  If I listed the folks individually this email would be far too
long, but you know who you are :) Thank you for placing trust in my judgement.

It has been a pleasure to serve as your leader.

Regards
-steak

[1] http://stackalytics.com/report/contribution/kolla-group/2000
[2] https://review.openstack.org/#/c/352101/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] warning about PBR issue for kolla operators

2016-09-13 Thread Steven Dake (stdake)
pip install -e . is not viable either for a variety of reasons.  We expect 
developers to be able to find their way to the tools directory and operate the 
commands directly from there.

Regards
-steve


From: Clay Gerrard 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, September 13, 2016 at 5:11 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] warning about PBR issue for kolla operators

There's a note in the "for development" section [1] that notes the development 
instructions don't include anything that puts kolla in your sys.path or any bin 
scripts copied out anywhere into the PATH - i.e. it's not installed

That seems less than ideal for a developer - did I miss a `pip install -e .` 
somewhere?

-Clay

1. 
http://docs.openstack.org/developer/kolla/quickstart.html#installing-kolla-and-dependencies-for-development

On Tue, Sep 13, 2016 at 4:33 PM, Steven Dake (stdake) 
mailto:std...@cisco.com>> wrote:
Hey folks,

The quickstart guide was modified as a result of a lot of painful debugging 
over the last cycle approximately a month ago.  The only solution available to 
us was to split the workflow into an operator workflow (working on stable 
branches) and a developer workflow (working on master).  We recognize operators 
are developers and the docs indicate as much.  Many times operators want to 
work with master as they are evaluating Newton and planning to place it into 
production.

I’d invite folks using master with the pip install ./ method to have a re-read 
of the quickstart documentation. The documentation was changed in subtle ways 
(with warning and info boxes) but folks that have been using Kolla prior to the 
quickstart change may be using kolla in the same way the quickstart previously 
recommended.  Folks tend to get jammed up on this issue – we have helped 70-100 
people work past this problem before we finally sorted out a workable solution 
(via documentation).

The real issue lies in how PBR operates and pip interacts with Kolla and is 
explained in the quickstart.  From consulting with Doug Hellman and others in 
the release team, it appears the issue that impacts Kolla is not really 
solvable within PBR itself.  (I don’t mean to put words in Doug’s mouth, but 
that is how I parsed our four+ hour discussion) on the topic.

The documentation is located here:
http://docs.openstack.org/developer/kolla



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Ivan and Gary,

maybe we can use macvlan, as ipvlan needs a very new kernel.
allowed-address-pairs can also allow different MACs in the VM.
Should we consider macvlan here? Thanks.

Regards,
Liping Mao

From: Liping Mao mailto:li...@cisco.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: 2016年9月13日 星期二 下午9:09
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Gary,

I mean maybe that can be one choice in my mind.

A security group applies per neutron port. In this case, all the Docker
containers on one VM will share one neutron port (if I understand
correctly), so they will share the security group on that port. It is not
one security group per container, so I am not sure how to use security
groups in this case.

Regards,
Liping Mao

在 2016年9月13日,20:31,Loughnane, Gary 
mailto:gary.loughn...@intel.com>> 写道:

Hi Liping,

Thank you for the feedback!

Do you mean to have security groups disabled as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

For security groups and allowed address pairs: maybe we can disable
port-security, because all the Docker containers in one VM will share the
one security group on the VM port. I am not sure how to apply a security
group to each container, so simply disabling port-security could be one
choice; then allowed address pairs are not needed in this case.
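For reference, disabling port security on the VM port is a single port-update. The body below is a sketch based on the standard Neutron API (note that Neutron refuses to disable port security while security groups or allowed-address-pairs remain on the port, so they are cleared in the same request):

```python
def disable_port_security_body():
    """Neutron port-update body that turns port security off on the VM
    port (sketch, not Kuryr code). Security groups and allowed-address-
    pairs must be empty for port_security_enabled=False to be accepted."""
    return {'port': {'security_groups': [],
                     'allowed_address_pairs': [],
                     'port_security_enabled': False}}

print(disable_port_security_body()['port']['port_security_enabled'])  # False
```

The trade-off, as discussed above, is losing per-container (indeed, any) traffic filtering on that port.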


Regards,
Liping Mao

在 2016年9月12日,19:31,Coughlan, Ivan 
mailto:ivan.cough...@intel.com>> 写道:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

・ Introduce an option of enabling “IPVLAN in VM” use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

・ If using “IPVLAN in VM” use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM’s master interface and pass this slave 
interface to the Container netns.

・ If using “IPVLAN in VM” use case, the VM’s port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron’s show-port function and passing the 
VMs IP address.

・ If using “IPVLAN in VM” use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron’s allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr’s existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, the exposure of containers as proper neutron entities and the
utility of neutron’s allowed-address-pairs are not yet well understood.

We also wish to understand whether this approach is acceptable for kuryr.


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan….

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, Cou

[openstack-dev] [cinder]tempest test case for force detach volume

2016-09-13 Thread joehuang
Hello,

Is there any tempest test case for the "os-force_detach" action to force
detach a volume? I didn't find such a test case in either the repository
https://github.com/openstack/cinder/tree/master/cinder/tests/tempest
or https://github.com/openstack/tempest.

The API link is:
http://developer.openstack.org/api-ref-blockstorage-v2.html#forcedetachVolume
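In case someone picks this up, here is a sketch of the request body such a test would POST to the volume's action endpoint, following the v2 API reference above (the helper itself is hypothetical, not tempest code):

```python
def force_detach_body(attachment_id=None, connector=None):
    """Build the os-force_detach volume-action body (hypothetical helper;
    field names follow the v2 block storage API reference)."""
    action = {}
    if attachment_id is not None:
        action['attachment_id'] = attachment_id
    if connector is not None:
        action['connector'] = connector
    return {'os-force_detach': action}

print(force_detach_body(attachment_id='ATTACHMENT_ID'))
# {'os-force_detach': {'attachment_id': 'ATTACHMENT_ID'}}
```

A tempest test would send this body via the volumes client's action method and then assert the attachment is gone from the volume's detail view.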

Best Regards
Chaoyi Huang(joehuang)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][swg] Summary of today's SWG meeting/work items

2016-09-13 Thread Colette Alexander
Hi everyone,

I thought it might be a good idea to send a summary of the Stewardship
Working Group (SWG) meeting[0] to the -dev list today - both to
increase engagement around what we're up to, and to reach the (quite a
few) folks who couldn't make today's meeting because of conflicts:

1. We're in the process of refining our vision for what the SWG would
like accomplished in Barcelona[1]. Visioning is a concept we learned
about in training and is best summed up here:
https://www.zingtrain.com/content/why-and-how-visioning-works for
anyone who is curious.

The SWG already has a panel discussion scheduled in Barcelona[2], but
we discussed whether it might be good to add 2 more sessions:
a. A cross project themed session covering some of the recent
governance patches suggested by the TC[3]
b.  A general work session on Friday afternoon.

We're going to attempt to get both of those scheduled in the next week
or two, and we'd love to see anyone from the community who's
interested in stewardship/leadership there.

Also, we'd love feedback from the community (via ML or in
#openstack-swg ) on what they'd like to hear about/experience at any
of these sessions, so please chime in if you have anything you'd like
to see addressed by our group.

2. Our short list of work was reviewed[4]

We discussed the current governance changes in flight:
Goals for Ocata cycle & Principles

And some future goals listed:
A long term/overall vision for the Stewardship Working Group,
recommending that the TC write a vision

I floated the idea that having a vision for the SWG that we can all
align on would be the greatest selling point in proposing to the TC
that they have their own. It appears, based on our Barcelona vision,
that that is what we're targeting to start there.

Comments, questions, feedback most welcome!

Cheers,

-gothicmindfood


[0] Meeting Agenda: https://wiki.openstack.org/wiki/Meetings/SWGMeeting
[1] https://etherpad.openstack.org/p/BarcelonaSWGMagic
[2] 
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/15243/stewardship-bringing-more-leadership-and-vision-to-openstack
[3] https://review.openstack.org/#/c/357260/ and
https://review.openstack.org/#/c/349068/
[4] https://etherpad.openstack.org/p/swg-short-list-deliverables

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] warning about PBR issue for kolla operators

2016-09-13 Thread Clay Gerrard
There's a note in the "for development" section [1] that notes the
development instructions don't include anything that puts kolla in your
sys.path or any bin scripts copied out anywhere into the PATH - i.e. it's
not installed

That seems less than ideal for a developer - did I miss a `pip install -e
.` somewhere?

-Clay

1.
http://docs.openstack.org/developer/kolla/quickstart.html#installing-kolla-and-dependencies-for-development

On Tue, Sep 13, 2016 at 4:33 PM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
>
>
> The quickstart guide was modified as a result of a lot of painful
> debugging over the last cycle approximately a month ago.  The only solution
> available to us was to split the workflow into an operator workflow
> (working on stable branches) and a developer workflow (working on master).
> We recognize operators are developers and the docs indicate as much.  Many
> times operators want to work with master as they are evaluating Newton and
> planning to place it into production.
>
>
>
> I’d invite folks using master with the pip install ./ method to have a
> re-read of the quickstart documentation. The documentation was changed in
> subtle ways (with warning and info boxes) but folks that have been using
> Kolla prior to the quickstart change may be using kolla in the same way the
> quickstart previously recommended.  Folks tend to get jammed up on this
> issue – we have helped 70-100 people work past this problem before we
> finally sorted out a workable solution (via documentation).
>
>
>
> The real issue lies in how PBR operates and pip interacts with Kolla and
> is explained in the quickstart.  From consulting with Doug Hellman and
> others in the release team, it appears the issue that impacts Kolla is not
> really solvable within PBR itself.  (I don’t mean to put words in Doug’s
> mouth, but that is how I parsed our four+ hour discussion) on the topic.
>
>
>
> The documentation is located here:
>
> http://docs.openstack.org/developer/kolla
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] warning about PBR issue for kolla operators

2016-09-13 Thread Steven Dake (stdake)
Hey folks,

The quickstart guide was modified as a result of a lot of painful debugging 
over the last cycle approximately a month ago.  The only solution available to 
us was to split the workflow into an operator workflow (working on stable 
branches) and a developer workflow (working on master).  We recognize operators 
are developers and the docs indicate as much.  Many times operators want to 
work with master as they are evaluating Newton and planning to place it into 
production.

I’d invite folks using master with the pip install ./ method to have a re-read 
of the quickstart documentation. The documentation was changed in subtle ways 
(with warning and info boxes) but folks that have been using Kolla prior to the 
quickstart change may be using kolla in the same way the quickstart previously 
recommended.  Folks tend to get jammed up on this issue – we have helped 70-100 
people work past this problem before we finally sorted out a workable solution 
(via documentation).

The real issue lies in how PBR operates and pip interacts with Kolla and is 
explained in the quickstart.  From consulting with Doug Hellman and others in 
the release team, it appears the issue that impacts Kolla is not really 
solvable within PBR itself.  (I don’t mean to put words in Doug’s mouth, but 
that is how I parsed our four+ hour discussion) on the topic.

The documentation is located here:
http://docs.openstack.org/developer/kolla


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Jay Pipes

On 09/13/2016 01:40 PM, Terry Wilson wrote:

On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague  wrote:

On 06/11/2015 09:02 AM, Jay Pipes wrote:

On 06/11/2015 01:16 AM, Robert Collins wrote:

But again - where in OpenStack does this matter the slightest?


Precisely. I can't think of a single case where we are iterating over
anywhere near the number of dictionary items that we would see any
impact whatsoever.


In neutron, the ovsdb native code iterates over fairly large
dictionaries since the underlying OVS library stores OVSDB tables
completely in memory as dicts. I just looked at the code I wrote and
it currently uses values() and I now want to switch it to
six.itervalues() :p.


Best,
-jay


+1.

This is a massive premature optimization which just makes all the code
gorpy for no real reason.


Premature optimization is about wasting a bunch of time trying to
optimize code before you know you need to, not about following the
accepted almost-always-faster/always-less-memory-using solution that
already exists. Memory-wise it's the difference between a constant
88-byte iterator and the storage for an additional list of tuples. And
if Raymond Hettinger, in a talk called "Transforming Code Into
Beautiful Idiomatic Python" specifically mentions that people should
always use iteritems
(https://www.youtube.com/watch?v=OSGv2VnC0go&feature=youtu.be&t=21m24s),
I tend to believe him. Sure, it'd be much better if Python 3 and
Python 2 both returned iterators for items(), values(), keys(), etc.,
but they don't. Wasting memory for purely aesthetic reasons (they're
even both the same number of lines) is just a bad idea, IMNSHO.
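The memory point is easy to demonstrate standalone (illustration only: on Python 2, `d.items()` materializes the full list while `six.iteritems(d)` returns the lazy iterator; here Python 3's `list(...)` vs. the constant-size view stands in for that difference):

```python
import sys

d = {i: str(i) for i in range(100000)}

full_list = list(d.items())   # what Python 2's d.items() materializes
view = d.items()              # constant-size, like six.iteritems on Py2

# The materialized list costs memory proportional to the dict size;
# the view/iterator does not.
print(sys.getsizeof(full_list) > sys.getsizeof(view))  # True
```

For small dicts the difference is negligible, which is the other side of the argument in this thread.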


Is it wasted time to respond to a mailing list post from 18 months ago?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] FFE request for RBD replication

2016-09-13 Thread Walter A. Boring IV

+1

Since this is very isolated to the RBD driver and it's already passing.


Walt
On 09/09/2016 12:32 PM, Gorka Eguileor wrote:

Hi,

As some of you may know, Jon Bernard (jbernard on IRC) has been working
on the RBD v2.1 replication implementation [1] for a while, and we would
like to request a Feature Freeze Exception for that work, as we believe
it is a good candidate being a low risk change for the integrity of
the existing functionality in the driver:

- It's non intrusive if it's not enabled (enabled using
   replication_device configuration option).
- It doesn't affect existing deployments (disabled by default).
- Changes are localized to the driver itself (rbd.py) and the driver
   unit tests file (test_rbd.py).

Jon would have liked to make this request himself, but due to the
untimely arrival of his newborn baby this is not possible.

For obvious reasons Jon will not be available for a little while, but
this will not be a problem, as I am well acquainted with the code -and
I'll be able to reach Jon if necessary- and will be taking care of the
final steps of the review process of his patch: replying to comments in
a timely fashion, making changes to the code as required, and answering
pings on IRC regarding the patch.

Since some people may be interested in testing this functionality during
the reviewing process -or just for fun- I'll be publishing a post with
detailed explanation on how to deploy and test this feature as well as
an automated way to deploy 2 Ceph clusters -linked to be mirroring one
another-, and one devstack node with everything ready to test the
functionality (configuration and keys for the Ceph clusters, cinder
configuration, the latest upstream patch, and a volume type with the
right configuration).

Please, do not hesitate to ask if there are any questions to or concerns
related to this request.

Thank you for taking the time to evaluate this request.

Cheers,
Gorka.

[1]: https://review.openstack.org/333565

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] proposing adding Tony Breeds to "Release Managers" team

2016-09-13 Thread Tony Breeds
On Tue, Sep 13, 2016 at 11:00:40AM -0400, Doug Hellmann wrote:
> After a week with only positive responses, I've added Tony to both the
> "Release Managers" and "release-tools-core" groups in gerrit.
> 
> Welcome aboard, Tony!

Thanks so much Doug.  I'd like to thank you and the rest of the team for the
time you've spent mentoring me.  It's much appreciated.

I'd also like to thank the many people who have supported Doug's nomination
(both publicly and in private). It's a humbling experience.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is OVS implementation for supporting VLAN-Aware-VM completed?

2016-09-13 Thread Cathy Zhang
Ryan,

Thanks!

Cathy

From: Tidwell, Ryan [mailto:ryan.tidw...@hpe.com]
Sent: Tuesday, September 13, 2016 1:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Is OVS implementation for supporting 
VLAN-Aware-VM completed?

Cathy,

There are a few outstanding reviews to be wrapped up, including docs. However, 
this is mostly complete and the bulk of the functionality has merged and you 
can try it out.

Code Reviews: 
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/vlan-aware-vms
Docs: https://review.openstack.org/#/c/361776/

-Ryan

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Tuesday, September 13, 2016 11:25 AM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [neutron] Is OVS implementation for supporting 
VLAN-Aware-VM completed?

Hi All,

Sorry I lost track of this work. Is the implementation completed? Can we start 
using the OVS version of VLAN-Aware VMs?

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-09-13 16:07:46 -0500:
> On 09/13/2016 03:55 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-09-13 15:09:39 -0500:
> >> On 09/13/2016 02:37 PM, Doug Hellmann wrote:
> >>> Excerpts from Adam Young's message of 2016-09-13 15:23:08 -0400:
>  https://review.openstack.org/#/c/368530/
> 
>  This change is for Python >2.7 only, as python2.7 already supports the 
>  latest version of these libraries.  Back in the "just get python3 to 
>  work" days we cut our losses on Kerberos support, but now it is 
>  working.  Getting this restriction removed means we don't have to edit 
>  away the tests for Kerberos in python3.
> 
>  "The requests-kerberos package was marked as available for only python 
>  2.6 and python 2.7 because pykerberos did not support python 3. This has 
>  since been fixed, however we don't directly have a kerberos dependency 
>  we can increase so just leave this unbound."
> >>>
> >>> Do we need an FFE for this or can it wait for Ocata? AFAIK, we aren't
> >>> supporting deployment on Python 3, yet, right?
> >>>
> >>> Doug
> >>>
> >>> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> I think / thought keystone was supported on python 3 (maybe not all
> >> features in mitaka though).  I'm currently running it on python 3
> >> (through uwsgi).
> > 
> > Is there a gate job to back that up?
> > 
> > Doug
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> It looks like there's a python3.4 job running for this mitaka review at
> least.  https://review.openstack.org/369618
> 

Those are unit tests. Do we have the functional tests or integration
tests running under 3.4 or 3.5?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Matthew Thode
On 09/13/2016 03:55 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-09-13 15:09:39 -0500:
>> On 09/13/2016 02:37 PM, Doug Hellmann wrote:
>>> Excerpts from Adam Young's message of 2016-09-13 15:23:08 -0400:
 https://review.openstack.org/#/c/368530/

 This change is for Python >2.7 only, as python2.7 already supports the 
 latest version of these libraries.  Back in the "just get python3 to 
 work" days we cut our losses on Kerberos support, but now it is 
 working.  Getting this restriction removed means we don't have to edit 
 away the tests for Kerberos in python3.

 "The requests-kerberos package was marked as available for only python 
 2.6 and python 2.7 because pykerberos did not support python 3. This has 
 since been fixed, however we don't directly have a kerberos dependency 
 we can increase so just leave this unbound."
>>>
>>> Do we need an FFE for this or can it wait for Ocata? AFAIK, we aren't
>>> supporting deployment on Python 3, yet, right?
>>>
>>> Doug
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> I think / thought keystone was supported on python 3 (maybe not all
>> features in mitaka though).  I'm currently running it on python 3
>> (through uwsgi).
> 
> Is there a gate job to back that up?
> 
> Doug
> 
It looks like there's a python3.4 job running for this mitaka review at
least.  https://review.openstack.org/369618

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-09-13 15:09:39 -0500:
> On 09/13/2016 02:37 PM, Doug Hellmann wrote:
> > Excerpts from Adam Young's message of 2016-09-13 15:23:08 -0400:
> >> https://review.openstack.org/#/c/368530/
> >>
> >> This change is for Python >2.7 only, as python2.7 already supports the 
> >> latest version of these libraraies.  Back in the "just get pythoin3 to 
> >> work" days we cut our losses on Kerberos support, but now it is 
> >> working.  Getting this restriction removed means we don't have to edit 
> >> away the tests for Kerberos in python3.
> >>
> >> "The requests-kerberos package was marked as available for only python 
> >> 2.6 and python 2.7 because pykerberos did not support python 3. This has 
> >> since been fixed, however we don't directly have a kerberos dependency 
> >> we can increase so just leave this unbound."
> > 
> > Do we need an FFE for this or can it wait for Ocata? AFAIK, we aren't
> > supporting deployment on Python 3, yet, right?
> > 
> > Doug
> > 
> 
> I think / thought keystone was supported on python 3 (maybe not all
> features in mitaka though).  I'm currently running it on python 3
> (through uwsgi).

Is there a gate job to back that up?

Doug



Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Terry Wilson
On Thu, Jun 11, 2015 at 8:33 AM, Sean Dague  wrote:
> On 06/11/2015 09:02 AM, Jay Pipes wrote:
>> On 06/11/2015 01:16 AM, Robert Collins wrote:
>>> But again - where in OpenStack does this matter the slightest?
>>
>> Precisely. I can't think of a single case where we are iterating over
>> anywhere near the number of dictionary items that we would see any
>> impact whatsoever.

In neutron, the ovsdb native code iterates over fairly large
dictionaries since the underlying OVS library stores OVSDB tables
completely in memory as dicts. I just looked at the code I wrote and
it currently uses values() and I now want to switch it to
six.itervalues() :p.

>> Best,
>> -jay
>
> +1.
>
> This is a massive premature optimization which just makes all the code
> gorpy for no real reason.

Premature optimization is about wasting a bunch of time trying to
optimize code before you know you need to, not about following the
accepted almost-always-faster/always-less-memory-using solution that
already exists. Memory-wise it's the difference between a constant
88-byte iterator and the storage for an additional list of tuples. And
if Raymond Hettinger, in a talk called "Transforming Code Into
Beautiful Idiomatic Python" specifically mentions that people should
always use iteritems
(https://www.youtube.com/watch?v=OSGv2VnC0go&feature=youtu.be&t=21m24s),
I tend to believe him. Sure, it'd be much better if Python 3 and
Python 2 both returned iterators for items(), values(), keys(), etc.,
but they don't. Wasting memory for purely aesthetic reasons (they're
even both the same number of lines) is just a bad idea, IMNSHO.
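
To make the memory argument concrete, here is a minimal stdlib-only sketch
(illustrative only, not Kuryr or neutron code) contrasting the constant-size
iterator with the full list of tuples that Python 2's items() materialized:

```python
import sys

d = {i: i for i in range(100000)}

# Python 2's d.items() built a full list of (key, value) tuples;
# iteritems() returned a small, constant-size iterator. On Python 3,
# iter(d.items()) plays the iterator role over the items view.
items_list = list(d.items())   # what Python 2 items() materialized
items_iter = iter(d.items())   # what iteritems() amounted to

print(sys.getsizeof(items_iter))                              # a few dozen bytes
print(sys.getsizeof(items_list) > sys.getsizeof(items_iter))  # True
```

The list's size grows linearly with the dict, while the iterator stays
constant regardless of how many entries are traversed.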

Terry



Re: [openstack-dev] [neutron] Is OVS implementation for supporting VLAN-Aware-VM compeleted?

2016-09-13 Thread Tidwell, Ryan
Cathy,

There are a few outstanding reviews to be wrapped up, including docs. However, 
this is mostly complete and the bulk of the functionality has merged and you 
can try it out.

Code Reviews: 
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/vlan-aware-vms
Docs: https://review.openstack.org/#/c/361776/

-Ryan

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Tuesday, September 13, 2016 11:25 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron] Is OVS implementation for supporting 
VLAN-Aware-VM compeleted?

Hi All,

Sorry I lost track of this work. Is the implementation completed? Can we start 
using the OVS version of VLAN-Aware VMs ?

Thanks,
Cathy


Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Matthew Thode
On 09/13/2016 02:23 PM, Adam Young wrote:
> https://review.openstack.org/#/c/368530/
> 
> This change is for Python >2.7 only, as python2.7 already supports the
> latest version of these libraraies.  Back in the "just get pythoin3 to
> work" days we cut our losses on Kerberos support, but now it is
> working.  Getting this restriction removed means we don't have to edit
> away the tests for Kerberos in python3.
> 
> "The requests-kerberos package was marked as available for only python
> 2.6 and python 2.7 because pykerberos did not support python 3. This has
> since been fixed, however we don't directly have a kerberos dependency
> we can increase so just leave this unbound."
> 
> 
It looks like this has limited impact so I'm ok with this.  It (and the
keystonemiddleware ffe) are kinda late in the cycle though...

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [requirements][FFE][keystone][release] block keystonemiddleware 4.0.0

2016-09-13 Thread Matthew Thode
On 09/13/2016 02:53 PM, Steve Martinelli wrote:
> A bug was recently filed against keystone [1]. As of the Newton release
> we depend on a class being public -- BaseAuthProtocol instead of
> _BaseAuthProtocol [2]. Which was introduced in 4.1.0 [3].
> 
> The current requirement for keystonemiddleware is:
>   keystonemiddleware>=4.0.0,!=4.1.0,!=4.5.0
> 
> Blocking 4.0.0 would logically make it:
>   keystonemiddleware>=4.2.0,!=4.5.0
> 
> I've pushed a patch to the requirements repo for this change [4]. I'd
> like to know if blocking the lower value makes sense, I realize it's
> advertised, but we're up to 4.9.0 now. 
> 
> Unfortunately, many projects depend on keystonemiddleware, but (luckily
> ?) this should only be server side projects [5], most of which are going
> through their RC period now.
> 
> Thanks for reading,
> Steve
> 
> [1] https://bugs.launchpad.net/keystone/+bug/1623091
> [2] 
> https://github.com/openstack/keystone/blob/master/keystone/middleware/auth.py#L38
> [3] 
> https://github.com/openstack/keystonemiddleware/commit/54cba09855fd366875391cbd25c3b3c346ff7a1b
> [4] https://review.openstack.org/#/c/369624/2
> [5] 
> http://codesearch.openstack.org/?q=keystonemiddleware&i=nope&files=requirements.txt&repos=
> 
> 

Nothing in codesearch popped out at me so this looks fine.

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Matthew Thode
On 09/13/2016 02:37 PM, Doug Hellmann wrote:
> Excerpts from Adam Young's message of 2016-09-13 15:23:08 -0400:
>> https://review.openstack.org/#/c/368530/
>>
>> This change is for Python >2.7 only, as python2.7 already supports the 
>> latest version of these libraraies.  Back in the "just get pythoin3 to 
>> work" days we cut our losses on Kerberos support, but now it is 
>> working.  Getting this restriction removed means we don't have to edit 
>> away the tests for Kerberos in python3.
>>
>> "The requests-kerberos package was marked as available for only python 
>> 2.6 and python 2.7 because pykerberos did not support python 3. This has 
>> since been fixed, however we don't directly have a kerberos dependency 
>> we can increase so just leave this unbound."
> 
> Do we need an FFE for this or can it wait for Ocata? AFAIK, we aren't
> supporting deployment on Python 3, yet, right?
> 
> Doug
> 

I think / thought keystone was supported on python 3 (maybe not all
features in mitaka though).  I'm currently running it on python 3
(through uwsgi).

-- 
-- Matthew Thode (prometheanfire)





[openstack-dev] [requirements][FFE][keystone][release] block keystonemiddleware 4.0.0

2016-09-13 Thread Steve Martinelli
A bug was recently filed against keystone [1]. As of the Newton release we
depend on a class being public -- BaseAuthProtocol instead of
_BaseAuthProtocol [2] -- which was introduced in 4.1.0 [3].

The current requirement for keystonemiddleware is:
  keystonemiddleware>=4.0.0,!=4.1.0,!=4.5.0

Blocking 4.0.0 would logically make it:
  keystonemiddleware>=4.2.0,!=4.5.0

I've pushed a patch to the requirements repo for this change [4]. I'd like
to know if blocking the lower value makes sense; I realize it's advertised,
but we're up to 4.9.0 now.
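
The effect of the proposed bound change can be sanity-checked with the
`packaging` library (a sketch, assuming `packaging` is installed; the
version numbers are the ones quoted above):

```python
from packaging.specifiers import SpecifierSet

current = SpecifierSet(">=4.0.0,!=4.1.0,!=4.5.0")
proposed = SpecifierSet(">=4.2.0,!=4.5.0")

# 4.0.0 satisfies the current specifier but predates the public
# BaseAuthProtocol class; the proposed specifier excludes it.
print("4.0.0" in current)    # True
print("4.0.0" in proposed)   # False
print("4.2.0" in proposed)   # True
print("4.5.0" in proposed)   # False (still blocked)
```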

Unfortunately, many projects depend on keystonemiddleware, but (luckily ?)
this should only be server side projects [5], most of which are going
through their RC period now.

Thanks for reading,
Steve

[1] https://bugs.launchpad.net/keystone/+bug/1623091
[2]
https://github.com/openstack/keystone/blob/master/keystone/middleware/auth.py#L38
[3]
https://github.com/openstack/keystonemiddleware/commit/54cba09855fd366875391cbd25c3b3c346ff7a1b
[4] https://review.openstack.org/#/c/369624/2
[5]
http://codesearch.openstack.org/?q=keystonemiddleware&i=nope&files=requirements.txt&repos=


Re: [openstack-dev] [charms]Running two haproxy-using units on same machine?

2016-09-13 Thread James Page
Hi Neil

On Tue, 13 Sep 2016 at 20:43 Neil Jerram  wrote:

> Should it be possible to run two OpenStack charm units, that both use
> haproxy to load balance their APIs, on the same machine?  Or is there some
> doc somewhere that says that a case like that should use separate machines?
>
> (I'm asking in connection with the bug report at
> https://bugs.launchpad.net/openstack-charm-testing/+bug/1622697.)
>

No - that's not currently possible.  For example, if you try to place both
nova-cloud-controller and cinder units on the same machine, they both
assume sole control over haproxy.cfg and will happily trample each other's
changes.

There is a doc somewhere - I'll dig it out and add to the charm-guide on
docs.openstack.org.

Solution: use a LXC or LXD container for each service, assuring sole
control of the filesystem for each charm, avoiding said conflict.

Cheers

James


[openstack-dev] [charms]Running two haproxy-using units on same machine?

2016-09-13 Thread Neil Jerram
Should it be possible to run two OpenStack charm units, that both use
haproxy to load balance their APIs, on the same machine?  Or is there some
doc somewhere that says that a case like that should use separate machines?

(I'm asking in connection with the bug report at
https://bugs.launchpad.net/openstack-charm-testing/+bug/1622697.)

Thanks,
 Neil


Re: [openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Doug Hellmann
Excerpts from Adam Young's message of 2016-09-13 15:23:08 -0400:
> https://review.openstack.org/#/c/368530/
> 
> This change is for Python >2.7 only, as python2.7 already supports the 
> latest version of these libraraies.  Back in the "just get pythoin3 to 
> work" days we cut our losses on Kerberos support, but now it is 
> working.  Getting this restriction removed means we don't have to edit 
> away the tests for Kerberos in python3.
> 
> "The requests-kerberos package was marked as available for only python 
> 2.6 and python 2.7 because pykerberos did not support python 3. This has 
> since been fixed, however we don't directly have a kerberos dependency 
> we can increase so just leave this unbound."

Do we need an FFE for this or can it wait for Ocata? AFAIK, we aren't
supporting deployment on Python 3, yet, right?

Doug



[openstack-dev] [ironic] Extending python-dracclient to fetch System/iDrac resources

2016-09-13 Thread Anish Bhatt
I'm extending python-dracclient to pull in iDrac & System resources
(DCIM_iDRACCard* & DCIM_System*). The way DCIM_BIOS* attributes work right
now, only AttributeName, current_value, pending_value and read_only status
are parsed.

However, this approach does not work with iDRAC or System attributes, as the
AttributeName values are not unique and require at minimum
GroupID#AttributeName to be uniquely identified. A quick
look at racadm xml output shows this naming as well (see anything under the
System.Embedded.1 or iDRAC.Embedded.1 FQDDs as opposed to the
BIOS.Setup.1-1 FQDD).
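
A sketch of what group-qualified parsing could look like (the helper name
and dict layout here are illustrative, not actual python-dracclient code):

```python
# Key parsed attributes by "GroupID#AttributeName" so that non-unique
# iDRAC/System attribute names no longer clash, matching racadm's naming.

def attribute_key(attr, namespace):
    """Return a unique key for a parsed DCIM attribute.

    BIOS attributes (DCIM_BIOS*) already have unique names; iDRAC and
    System attributes need the GroupID qualifier.
    """
    name = attr["AttributeName"]
    if namespace == "bios":
        return name
    return "%s#%s" % (attr["GroupID"], name)

# Two iDRAC attributes with the same AttributeName but different groups:
a1 = {"GroupID": "Users.1", "AttributeName": "UserName"}
a2 = {"GroupID": "Users.2", "AttributeName": "UserName"}

parsed = {attribute_key(a, "idrac"): a for a in (a1, a2)}
print(sorted(parsed))  # ['Users.1#UserName', 'Users.2#UserName']
```

With plain AttributeName keys the second entry would silently overwrite
the first; the qualified keys keep both.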

Is parsing iDrac/System attributes differently from BIOS attributes the
correct approach here (this will also make it match racadm output), or
should I be changing all attributes to be parsed the same way?
This gist shows the clashing AttributeName values:
https://gist.github.com/anish/cef1e9d11f0ddd59f3c009fece0fb552

-Anish

-- 
One socket to bind them all


[openstack-dev] [keystone][oslo][release][requirements][FFE] global-requirements update for requests-kerberos

2016-09-13 Thread Adam Young

https://review.openstack.org/#/c/368530/

This change is for Python >2.7 only, as python2.7 already supports the 
latest version of these libraries.  Back in the "just get python3 to 
work" days we cut our losses on Kerberos support, but now it is 
working.  Getting this restriction removed means we don't have to edit 
away the tests for Kerberos in python3.


"The requests-kerberos package was marked as available for only python 
2.6 and python 2.7 because pykerberos did not support python 3. This has 
since been fixed, however we don't directly have a kerberos dependency 
we can increase so just leave this unbound."





[openstack-dev] [new][tripleo] diskimage-builder 1.19.0 release

2016-09-13 Thread no-reply
We are thrilled to announce the release of:

diskimage-builder 1.19.0: Golden Disk Image builder.

For more details, please see below.

Changes in diskimage-builder 1.18.2..1.19.0
---

49baaa4 Remove EPEL as hardcoded dependency of centos elements
01a0dbf Remove unnecessary dkms install from base
07e34f9 Fix mellanox element required kernel modules and user space packages
4b222b8 fix systemd resource deadlock
4b1a875 Disabled IPv6 privacy extensions
796dcd0 Generate and use upper-constraints for ironic-agent
ff234b6 Explain difference between two envvars
a1bfca6 Add tests for building *-minimal images
6a5da7e Spec for changing the block device handling
c3a036e Update portage only if needed
c1c2f71 Update GRUB_MKCONFIG for detecting what's installed
62efc03 Clarify OVERWRITE_OLD_IMAGE docs
16bad71 Add blurb about communication to docs landing page
1787754 Change DIB_IPA_CERT resulting file name
c1bac65 Allow to skip kernel cleanup
b59ae02 Add specs dir
944b4fe Add "audit"package to yum-minimal
5e957f6 secure_path in sudoers: deal with possible quotes
ffb8efd Optionally remove portage files
e49d347 Generalize logic for skipping final image generation
6ae60b2 add no_proxy when debootstrap trying to use proxy
b7d391f Fix proliant-tools dependencies
995c539 Revert "Revert "Pre-install pip/virtualenv packages""


Diffstat (except docs and test files)
-

bin/disk-image-create  |  18 +-
.../specs/v1/approved/block-device-overview.rst| 173 +++
elements/base/install.d/80-disable-rfc3041 |  29 
elements/base/install.d/99-dkms|   9 -
elements/base/pkg-map  |   5 -
elements/bootloader/finalise.d/50-bootloader   |   6 +-
elements/centos-minimal/element-deps   |   1 -
.../environment.d/10-centos-distro-name.bash   |   3 +
.../test-elements/build-succeeds/README.rst|   1 +
elements/centos/element-deps   |   1 -
elements/centos7/element-deps  |   1 -
elements/debian-minimal/README.rst |   4 +
elements/debian/README.rst |   4 +
elements/debootstrap/README.rst|   4 +-
elements/debootstrap/root.d/08-debootstrap |   3 +-
.../install.d/dhcp-interface@.service  |   1 -
.../test-elements/build-succeeds/README.rst|   1 +
.../environment.d/10-fedora-pin-mirror.bash|   2 +
elements/gentoo/README.rst |   4 +
elements/gentoo/bin/install-packages   |  12 +-
.../environment.d/00-gentoo-distro-name.bash   |   2 -
.../gentoo/environment.d/00-gentoo-envars.bash |   3 +
elements/gentoo/extra-data.d/gentoo-releng.gpg |  84 ++
elements/gentoo/gentoo-releng.gpg  |  84 --
elements/gentoo/post-install.d/99-cleanup  |   5 +-
elements/gentoo/root.d/10-gentoo-image |   4 +-
elements/ironic-agent/element-deps |   1 +
elements/ironic-agent/extra-data.d/15-ssl-ca-copy  |   2 +-
.../60-ironic-agent-install|   9 +-
elements/mellanox/element-deps |   2 +
elements/mellanox/init.d/01-mellanox   |   6 +-
elements/mellanox/install.d/65-mellanox|   8 +
elements/mellanox/install.d/mellanox-rules.udev|   5 +
elements/mellanox/package-installs.yaml|  26 +++
elements/mellanox/pkg-map  |  67 
elements/mellanox/udev.d/81-mellanox-drivers.rules |   5 +
elements/no-final-image/README.rst |  13 ++
.../01-install-pip |  49 +-
elements/proliant-tools/element-deps   |   1 +
elements/proliant-tools/package-installs.yaml  |   2 +
elements/proliant-tools/pkg-map|   7 +-
elements/redhat-common/README.rst  |   7 +
.../redhat-common/environment.d/50-redhat-common   |   1 +
.../redhat-common/finalise.d/01-clean-old-kernels  |   5 +
.../pre-install.d/00-usr-local-bin-secure-path |   2 +-
elements/ubuntu-minimal/README.rst |   4 +
.../test-elements/build-succeeds/README.rst|   1 +
elements/yum-minimal/package-installs.yaml |   1 +
54 files changed, 842 insertions(+), 135 deletions(-)






[openstack-dev] [magnum] PTL candidacy

2016-09-13 Thread Hongbin Lu
Hi,

I would like to announce my candidacy for re-election as Magnum PTL.

My involvement in Magnum began in December 2014, when the project was at
a very early stage. Since then, I have been working with the team to explore
the roadmap, implement and refine individual components, and gradually grow
the feature set. In the past release cycle, it was my honor to serve as
Magnum PTL and I am happy to continue the service in the next cycle.

Based on past discussion within the team, I believe Magnum will focus on the
following items in the new cycle:

* Life-cycle operations: Add support for several life-cycle operations for
  managing long running clusters. In the short-term, we could add support for
  basic operations like reboot, restart, upgrade, and rebuild. Additional
  operations can be considered in long-term.
* Heterogeneous cluster: Magnum currently doesn't support the ability to
  create a heterogeneous cluster, in which worker nodes are allowed to have
  different configurations (i.e. amount of cpu, memory, or I/O). I believe the
  problem needs to be addressed in the new cycle. As discussed within the team,
  we will introduce a new concept, called node group, to represent a collection
  of nodes with same configuration. As a result, a heterogeneous cluster can be
  represented by a collection of node groups.

In addition, I believe the items below are important and need attention in
the new cycle:

* Kuryr integration: Continue to collaborate with Kuryr team to deliver a
  networking solution.
* Ironic integration: We had great progress at Newton in this area and will
  continue the work in the new cycle.
* Quality assurance: Improve coverage of integration and unit tests.
* Documentation: Enhance documents for operators.
* Horizon UI: Enhance our Horizon plugin.
* Grow the community: Attract new contributors to Magnum.

Thank you for considering my PTL candidacy.


[openstack-dev] [neutron] Is OVS implementation for supporting VLAN-Aware-VM compeleted?

2016-09-13 Thread Cathy Zhang
Hi All,

Sorry I lost track of this work. Is the implementation completed? Can we start 
using the OVS version of VLAN-Aware VMs ?

Thanks,
Cathy


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Hongbin Lu
Sounds good!. Thanks for the clarification.

Best regards,
Hongbin

On Tue, Sep 13, 2016 at 1:43 PM, Antoni Segura Puimedon 
wrote:

> On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
> >
> >
> > On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
> >  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu 
> wrote:
> >>>
> >>> Ivan,
> >>>
> >>> Thanks for the proposal. From Magnum's point of view, this proposal
> >>> doesn't seem to require to store neutron/rabbitmq credentials in
> tenant VMs
> >>> which is more desirable. I am looking forward to the PoC.
> >>
> >>
> >> Hogbin, Can you please elaborate on this will not require to store
> neutron
> >> credentials?
> >> For example in libnetwork case, neutron's commands like "show_port" and
> >> "update_port" will still need to be invoked from inside VM.
> >
> >
> > In a typical COE cluster, there are master nodes and work (minion/slave)
> > nodes. Regarding to credentials, the following is optimal:
> > * Avoid storing credentials in work nodes. If credentials have to be
> stored,
> > move them to master nodes if we can (containers are running in work
> nodes so
> > credentials stored there have a higher risk). A question for you,
> neutron's
> > commands like "show_port" and "update_port" need to be invoked from work
> > nodes or master nodes?
> > * If credentials have to be stored, scope them with least privilege
> (Magnum
> > uses Keystone trust for this purpose).
>
> I think that with the ipvlan proposal you probably can do without having
> to call
> those two. IIUC the proposal the binding on the VM, taking libnetwork
> as an example
>  would be:
>
> 1. docker sends a request to kuryr-libnetwork running in container-in-vm
> mode.
> 2. kuryr-libnetwork forwards the request to a kuryr daemon that has
> the necessary
> credentials to talk to neutron (it could run either in the master node
> or in the compute
> node just like there is the dhcp agent, i.e., with one foot on the VM
> network and one
> on the underlay).
> 3. The kuryr daemon does the address pair proposal requests to Neutron
> and returns
> the result to the kuryr-libnetwork in the VM, at which point the VM
> port can already
> send and receive data for the container.
> 4. kuryr-libnetwork in the VM creates an ipvlan virtual device and
> puts it the IP
> returned by the kuryr daemon.
>
> >
> >>
> >>
> >> Overall I liked this approach given its simplicity over vlan-aware-vms.
> >>
> >> -VikasC
> >>>
> >>>
> >>> Best regards,
> >>> Hongbin
> >>>
> >>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan <
> ivan.cough...@intel.com>
> >>> wrote:
> 
> 
> 
>  Overview
> 
>  Kuryr proposes to address the issues of double encapsulation and
>  exposure of containers as neutron entities when containers are running
>  within VMs.
> 
>  As an alternative to the vlan-aware-vms and use of ovs within the VM,
> we
>  propose to:
> 
>  -  Use allowed-address-pairs configuration for the VM neutron
>  port
> 
>  -  Use IPVLAN for wiring the Containers within VM
> 
> 
> 
>  In this way:
> 
>  -  Achieve efficient data path to container within VM
> 
>  -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>  features to accelerate the data path (more details below)
> 
>  -  Mitigate the risk of vlan-aware-vms not making neutron in
>  time
> 
>  -  Provide a solution that works on existing and previous
>  openstack releases
> 
> 
> 
>  This work should be done in a way permitting the user to optionally
>  select this feature.
> 
> 
> 
> 
> 
>  Required Changes
> 
>  The four main changes we have identified in the current kuryr codebase
>  are as follows:
> 
>  · Introduce an option of enabling “IPVLAN in VM” use case.
> This
>  can be achieved by using a config file option or possibly passing a
> command
>  line argument. The IPVLAN master interface must also be identified.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should no longer
>  create a new port in Neutron or the associated VEth pairs. Instead,
> Kuryr
>  will create a new IPVLAN slave interface on top of the VM’s master
> interface
>  and pass this slave interface to the Container netns.
> 
>  · If using “IPVLAN in VM” use case, the VM’s port ID needs to
> be
>  identified so we can associate the additional IPVLAN addresses with
> the
>  port. This can be achieved by querying Neutron’s show-port function
> and
>  passing the VMs IP address.
> 
>  · If using “IPVLAN in VM” use case, Kuryr should associate the
>  additional IPVLAN addresses with the VMs port. This can be achieved
> using
>  Neutron’s allowed-address-pairs flag in the port-update function. We
> intend
>  to make use of Kuryr’s existing 

Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Antoni Segura Puimedon
On Tue, Sep 13, 2016 at 5:05 PM, Hongbin Lu  wrote:
>
>
> On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary
>  wrote:
>>
>>
>>
>> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>>>
>>> Ivan,
>>>
>>> Thanks for the proposal. From Magnum's point of view, this proposal
>>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>>> which is more desirable. I am looking forward to the PoC.
>>
>>
>> Hogbin, Can you please elaborate on this will not require to store neutron
>> credentials?
>> For example in libnetwork case, neutron's commands like "show_port" and
>> "update_port" will still need to be invoked from inside VM.
>
>
> In a typical COE cluster, there are master nodes and work (minion/slave)
> nodes. Regarding to credentials, the following is optimal:
> * Avoid storing credentials in work nodes. If credentials have to be stored,
> move them to master nodes if we can (containers are running in work nodes so
> credentials stored there have a higher risk). A question for you, neutron's
> commands like "show_port" and "update_port" need to be invoked from work
> nodes or master nodes?
> * If credentials have to be stored, scope them with least privilege (Magnum
> uses Keystone trust for this purpose).

I think that with the ipvlan proposal you can probably do without calling
those two. IIUC, under this proposal the binding on the VM, taking
libnetwork as an example, would be:

1. docker sends a request to kuryr-libnetwork running in container-in-vm
mode.
2. kuryr-libnetwork forwards the request to a kuryr daemon that has the
necessary credentials to talk to neutron (it could run either in the
master node or in the compute node, just like the dhcp agent, i.e., with
one foot on the VM network and one on the underlay).
3. The kuryr daemon makes the allowed-address-pair requests to Neutron
and returns the result to kuryr-libnetwork in the VM, at which point the
VM port can already send and receive data for the container.
4. kuryr-libnetwork in the VM creates an ipvlan virtual device and
assigns it the IP returned by the kuryr daemon.
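
A rough sketch of steps 2-3 with stand-in classes (FakeNeutron,
KuryrDaemon, and every method name below are hypothetical, not actual
Kuryr or Neutron APIs; step 4, creating the ipvlan slave device, needs
netlink/root access and is omitted):

```python
class FakeNeutron:
    """Stands in for the Neutron API the kuryr daemon would call."""
    def __init__(self):
        self.ports = {"vm-port-1": {"allowed_address_pairs": []}}

    def update_port(self, port_id, pair):
        # Record an allowed-address-pair on the VM port.
        self.ports[port_id]["allowed_address_pairs"].append(pair)


class KuryrDaemon:
    """Holds the Neutron credentials so the VM never has to (steps 2-3)."""
    def __init__(self, neutron):
        self.neutron = neutron
        self._next_ip = 10

    def request_container_ip(self, vm_port_id, vm_mac):
        # Hand out a container IP and associate it with the VM port via
        # allowed-address-pairs; ipvlan slaves share the VM's MAC.
        ip = "192.0.2.%d" % self._next_ip
        self._next_ip += 1
        self.neutron.update_port(
            vm_port_id, {"ip_address": ip, "mac_address": vm_mac})
        return ip


neutron = FakeNeutron()
daemon = KuryrDaemon(neutron)
# The credential-free VM side only asks the daemon for an address:
ip = daemon.request_container_ip("vm-port-1", "fa:16:3e:00:00:01")
print(ip)  # 192.0.2.10
print(neutron.ports["vm-port-1"]["allowed_address_pairs"])
```

The point of the split is that only the daemon side ever touches Neutron,
so no credentials need to live inside the tenant VM.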

>
>>
>>
>> Overall I liked this approach given its simplicity over vlan-aware-vms.
>>
>> -VikasC
>>>
>>>
>>> Best regards,
>>> Hongbin
>>>
>>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
>>> wrote:



 Overview

 Kuryr proposes to address the issues of double encapsulation and
 exposure of containers as neutron entities when containers are running
 within VMs.

 As an alternative to the vlan-aware-vms and use of ovs within the VM, we
 propose to:

 -  Use allowed-address-pairs configuration for the VM neutron
 port

 -  Use IPVLAN for wiring the Containers within VM



 In this way:

 -  Achieve efficient data path to container within VM

 -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
 features to accelerate the data path (more details below)

 -  Mitigate the risk of vlan-aware-vms not making neutron in
 time

 -  Provide a solution that works on existing and previous
 openstack releases



 This work should be done in a way permitting the user to optionally
 select this feature.





 Required Changes

 The four main changes we have identified in the current kuryr codebase
 are as follows:

 · Introduce an option of enabling “IPVLAN in VM” use case. This
 can be achieved by using a config file option or possibly passing a command
 line argument. The IPVLAN master interface must also be identified.

 · If using “IPVLAN in VM” use case, Kuryr should no longer
 create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
 will create a new IPVLAN slave interface on top of the VM’s master 
 interface
 and pass this slave interface to the Container netns.

 · If using “IPVLAN in VM” use case, the VM’s port ID needs to be
 identified so we can associate the additional IPVLAN addresses with the
 port. This can be achieved by querying Neutron’s show-port function and
 passing the VMs IP address.

 · If using “IPVLAN in VM” use case, Kuryr should associate the
 additional IPVLAN addresses with the VMs port. This can be achieved using
 Neutron’s allowed-address-pairs flag in the port-update function. We intend
 to make use of Kuryr’s existing IPAM functionality to request these IPs 
 from
 Neutron.
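
The allowed-address-pairs update in the last bullet can be sketched as follows. The helper below is illustrative and not part of kuryr, though the request-body shape matches Neutron's port-update API:

```python
def allowed_address_pairs_body(existing_pairs, container_ip, container_mac=None):
    """Build the request body for Neutron's update_port call that adds a
    container address to a VM port's allowed-address-pairs list.
    """
    pair = {"ip_address": container_ip}
    if container_mac:
        pair["mac_address"] = container_mac
    # Neutron replaces the whole list, so the existing pairs must be carried over.
    return {"port": {"allowed_address_pairs": existing_pairs + [pair]}}

# With python-neutronclient, the flow would look roughly like
# (vm_port_id and the address are hypothetical):
#   port = client.show_port(vm_port_id)["port"]
#   body = allowed_address_pairs_body(port["allowed_address_pairs"], "10.10.0.5")
#   client.update_port(vm_port_id, body)
body = allowed_address_pairs_body([], "10.10.0.5")
print(body)
```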



 Asks

 We wish to discuss the pros and cons.

 For example, containers exposure as proper neutron entities and the
 utility of neutron’s allowed-address-pairs is not yet well understood.



 We also wish to understand if this approach is acceptable for kuryr?





 EPA

 The Enhanced Platform Awareness initiative is a continuous program to
 enable fine-tuning of the platform for virtualized network functions.

Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Ed Leafe
On Sep 13, 2016, at 10:42 AM, Terry Wilson  wrote:

> All performance matters. All
> memory consumption matters. Being wasteful over a purely aesthetic few
> extra characters of code is silly.

import this


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Let's wrap up config option changes for Newton

2016-09-13 Thread Matt Riedemann
I've created a new blueprint for centralizing and cleaning up 
configuration options for Ocata:


https://blueprints.launchpad.net/nova/+spec/centralize-config-options-ocata

Please start moving existing changes, or create any new changes, in that 
blueprint rather than the newton blueprint:


https://blueprints.launchpad.net/nova/+spec/centralize-config-options-newton

--

Thanks,

Matt Riedemann




[openstack-dev] [new][murano] murano-pkg-check 0.1.0 release

2016-09-13 Thread no-reply
We are pleased to announce the release of:

murano-pkg-check 0.1.0: Murano package validator tool

This is the first release of murano-pkg-check.

With source available at:

http://git.openstack.org/cgit/openstack/murano-pkg-check

With package available at:

https://pypi.python.org/pypi/murano-pkg-check

Please report issues through launchpad:

http://bugs.launchpad.net/murano-pkg-check

For more details, please see below.

Changes in murano-pkg-check ec13fd263b80f4730fdd98f2f6a1f59f1cb0bad5..0.1.0
---

628b30f Bunch of bugfixes
ff9d851 Added errors registration
e95f43b Add i18n support
2f8fc0d Improve logging system
644e883 Improving tests coverage
5dd21fe Report yaml parsing errors instead of ignoring
0104da8 Added tools/cover.sh
3b4dfa5 Improve package version validation
a220ba4 Allow method name to start with "."
6e47825 Fix error report in version
09135dc Accept null description in manifest
e03bb65 Bunch of fixes for code_structure, base, manifest:
88c7813 Adding Package Validator
7ba204c Adding UI Validator
48dbf28 Adding MuranoPL validator
3504cb0 Improving Manifest require check
47fc634 Code Structure checker added with tests
3470094 Adding ManifestValidator
c59ed6d Added core and CLI
0a97def Commit project structure




Requirements updates


diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 000..8ce956a
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,11 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+
+pbr>=1.6 # Apache-2.0
+PyYAML>=3.1.0 # MIT
+yaql>=1.1.0 # Apache 2.0 License
+six>=1.9.0 # MIT
+stevedore>=1.16.0 # Apache-2.0
+semantic_version>=2.3.1 # BSD
+oslo.i18n>=2.1.0 # Apache-2.0
\ No newline at end of file
diff --git a/test-requirements.txt b/test-requirements.txt
new file mode 100644
index 000..a3fcd81
--- /dev/null
+++ b/test-requirements.txt
@@ -0,0 +1,17 @@
+# The order of packages is significant, because pip processes them in the order
+# of appearance. Changing the order has an impact on the overall integration
+# process, which may cause wedges in the gate later.
+
+hacking<0.12,>=0.11.0 # Apache-2.0
+
+coverage>=3.6 # Apache-2.0
+python-subunit>=0.0.18 # Apache-2.0/BSD
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+oslotest>=1.10.0 # Apache-2.0
+testrepository>=0.0.18 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+
+# releasenotes
+reno>=1.8.0 # Apache2





Re: [openstack-dev] [glance] [elections] Glance PTL non-candidacy

2016-09-13 Thread Nikhil Komawar
Thanks wangxiyuan! I'm happy to work with you and truly appreciate all
your good work in Glance. Very glad to hear that my reviews are helping you.


On 9/13/16 4:29 AM, 王玺源 wrote:
> Hi Nikhil,
>
> You are the first PTL I knew when I started to work with OpenStack
> community, we can see your hard work for Glance during these cycles. I
> have learned a lot from your review suggestion. Thanks very much!!! 
> And it's great to know that you still work on Glance.  Best wishes.
>
> 2016-09-13 13:33 GMT+08:00 Nikhil Komawar  >:
>
> Thank you Fei Long. It's always a pleasure working with you!
>
>
>
>
>
> On 9/13/16 12:59 AM, Fei Long Wang wrote:
>
> > Hi Nikhil,
>
> >
>
> > Thanks for your hard work as Glance PTL, you did a great job. And
>
> > happy to know you will still work in OpenStack, see you around ;)
>
> >
>
> > On 12/09/16 18:08, Nikhil Komawar wrote:
>
> >> Hi team,
>
> >>
>
> >>
>
> >> Just wanted to share my decision for not running for PTL for
> Glance.
>
> >> It's been great serving the community in this role however,
> there are
>
> >> some personal and family matters that I need to attend to over
> the next
>
> >> couple of months or so.
>
> >>
>
> >>
>
> >> I think the Glance team has done great and we've quite a bunch
> of bright
>
> >> developers who are helping push the project forward in the
> appropriate
>
> >> direction. With Glare becoming separate project and Ocata being
> short
>
> >> cycle, I anticipate the priorities being rather obvious to
> those who
>
> >> have stayed in touch. I will be available to do the rightful
> handoff to
>
> >> the incoming PTL (for Ocata) and update with Newton happenings once
>
> >> we're done with a few important bugs that are being targeted
> for RC1.
>
> >>
>
> >>
>
> >> I intend to stick around in Glance and related projects like
>
> >> Searchlight, Glare, etc. However, I am planning to take a more
> hands on
>
> >> role and see a few features through in Ocata.  Given more and more
>
> >> glance-cores time sharing with other projects, I think we need some
>
> >> throttle in our review system. So, I'd like to help any new
> developers
>
> >> get their reviews up, that in turn will help the Glance community.
>
> >>
>
> >>
>
> >> Last but not the least, I have thoroughly enjoyed working in
> this role
>
> >> with all my fellow stackers, particularly glancers! So, a BIG
> thank you
>
> >> for having worked with me in making Glance better over the last
> couple
>
> >> of years.
>
> >>
>
> >>
>
> >
>
>
>
> --
>
>
>
> Thanks,
>
> Nikhil
>
>
>
>
>
>
>


-- 

Thanks,
Nikhil





Re: [openstack-dev] [packaging-deb][PTL] candidacy

2016-09-13 Thread Haïkel
2016-09-12 21:10 GMT+02:00 Thomas Goirand :
> I am writing to submit my candidacy for re-election as the PTL for the
> packaging-deb project.
>
> The idea sparked in Vancouver (spring 2015). The project joined the
> big-tent about a year ago (in August 2015, it was approved by the TC)
> But it then took about a year to have it bootstrapped. This was a long and
> painful bootstrap, but today I can proudly announce that it is finally
> well launched. Right now, all of Oslo and the python-*client libraries are
> built, and it is a matter of days until all Newton services are completely
> built in OpenStack infra (Keystone is already there in its Newton b2 version).
>
> I'll do my best to continue to drive the project, and hope to gather
> more contribution every day. Every contributor counts.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> P.S: It may be a bit hard to find out who can vote, because only
> the debian/newton branch should count, and currently Stackalytics is
> counting master, which contains upstream commits. Hopefully, we can
> solve the issue before the elections.
>

Thomas has been very helpful in collaborating with other packaging
groups, such as the RPM one.
So I welcome his candidacy!

Regards,
H.



Re: [openstack-dev] [all][python3] use of six.iteritems()

2016-09-13 Thread Terry Wilson
On Wed, Jun 10, 2015 at 4:41 AM, Robert Collins
 wrote:
> On 10 June 2015 at 21:30, Ihar Hrachyshka  wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> On 06/10/2015 02:15 AM, Robert Collins wrote:
>>> I'm very glad folk are working on Python3 ports.
>>>
>>> I'd like to call attention to one little wart in that process: I
>>> get the feeling that folk are applying a massive regex to find
>>> things like d.iteritems() and convert that to six.iteritems(d).
>>>
>>> I'd very much prefer that such a regex approach move things to
>>> d.items(), which is much easier to read.
>>>
>>> Here's why. Firstly, very very very few of our dict iterations are
>>> going to be performance sensitive in the way that iteritems()
>>> matters. Secondly, no really - unless you're doing HUGE dicts, it
>>> doesn't matter. Thirdly. Really, it doesn't.
>>>
>>
>> Does it hurt though? ;)
>
> Yes.
>
> Its: harder to read. Its going to have to be removed eventually anyway
> (when we stop supporting 2.7). Its marginally slower on 3.x (it has a
> function and an iterator wrapping the actual thing). Its unidiomatic,
> and we get lots of programmers that are new to Python; we should be
> giving them as beautiful code as we can to help them learn.

If someone is so new they can't handle six.iteritems, they should stay
away from Neutron code. It'll eat them.

>>> At 1 million items the overhead is 54ms[1]. If we're doing inner
>>> loops on million item dictionaries anywhere in OpenStack today, we
>>> have a problem. We might want to in e.g. the scheduler... if it
>>> held in-memory state on a million hypervisors at once, because I
>>> don't really to to imagine it pulling a million rows from a DB on
>>> every action. But then, we'd be looking at a whole 54ms. I think we
>>> could survive, if we did that (which we don't).
>>>
>>> So - please, no six.iteritems().

Huge -1 from me. The "I like looking at d.items() more than I like
looking at six.iteritems(d) so make everything (even slightly) less
efficient" argument is insane to me. All performance matters. All
memory consumption matters. Being wasteful over a purely aesthetic few
extra characters of code is silly.
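
For context on the trade-off being argued here, a small Python 3 snippet (illustrative only, not from any OpenStack project) shows why six.iteritems() buys nothing on 3.x: dict.items() already returns a constant-size view rather than a copied list:

```python
import sys

d = {i: i for i in range(1000)}

# On Python 2, d.items() built a list of (key, value) tuples, while
# d.iteritems() returned a lazy iterator -- which is what six.iteritems(d)
# selected. On Python 3, d.items() is already a lightweight view.
items_view = d.items()

# The view is constant-sized regardless of the dict's length...
print(sys.getsizeof(items_view) < sys.getsizeof(list(d.items())))  # True

# ...and it reflects later mutations, since it is not a copy.
d[1000] = 1000
print(len(items_view))  # 1001
```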



[openstack-dev] [ironic] Marking untested drivers as unsupported

2016-09-13 Thread Jim Rollenhagen
Hi folks,

I submitted this patch[0] to mark any untested driver as deprecated. Please
review it to be sure I didn't miss anything or mark things that are tested
(I suspect SSH driver will be controversial, but from what I've seen, the
community is keen to stop supporting that).

To be clear, the drivers marked unsupported are any driver that uses:

  * SSHPower / SSHManagement
  * NativeIPMIPower / NativeIPMIManagement
  * seamicro.Power / seamicro.Management
  * IBootPower
  * SNMPPower
  * VirtualBoxPower / VirtualBoxManagement
  * AMTPower / AMTManagement
  * MSFTOCSPower / MSFTOCSManagement
  * WakeOnLanPower

[0] https://review.openstack.org/369503

// jim



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Hongbin Lu
On Tue, Sep 13, 2016 at 2:10 AM, Vikas Choudhary  wrote:

>
>
> On Mon, Sep 12, 2016 at 9:17 PM, Hongbin Lu  wrote:
>
>> Ivan,
>>
>> Thanks for the proposal. From Magnum's point of view, this proposal
>> doesn't seem to require to store neutron/rabbitmq credentials in tenant VMs
>> which is more desirable. I am looking forward to the PoC.
>>
>
> Hongbin, can you please elaborate on why this will not require storing
> neutron credentials?
> For example, in the libnetwork case, neutron commands like "show_port" and
> "update_port" will still need to be invoked from inside the VM.
>

In a typical COE cluster, there are master nodes and worker (minion/slave)
nodes. Regarding credentials, the following is optimal:
* Avoid storing credentials in worker nodes. If credentials have to be
stored, move them to master nodes if we can (containers run in worker
nodes, so credentials stored there carry a higher risk). A question for you:
do neutron commands like "show_port" and "update_port" need to be invoked
from worker nodes or master nodes?
* If credentials have to be stored, scope them with least privilege (Magnum
uses Keystone trust for this purpose).


>
> Overall I liked this approach given its simplicity over vlan-aware-vms.
>
> -VikasC
>
>>
>> Best regards,
>> Hongbin
>>
>> On Mon, Sep 12, 2016 at 7:29 AM, Coughlan, Ivan 
>> wrote:
>>
>>>
>>>
>>> *Overview*
>>>
>>> Kuryr proposes to address the issues of double encapsulation and
>>> exposure of containers as neutron entities when containers are running
>>> within VMs.
>>>
>>> As an alternative to the vlan-aware-vms and use of ovs within the VM, we
>>> propose to:
>>>
>>> -  Use allowed-address-pairs configuration for the VM neutron
>>> port
>>>
>>> -  Use IPVLAN for wiring the Containers within VM
>>>
>>>
>>>
>>> In this way:
>>>
>>> -  Achieve efficient data path to container within VM
>>>
>>> -  Better leverage OpenStack EPA(Enhanced Platform Awareness)
>>> features to accelerate the data path (more details below)
>>>
>>> -  Mitigate the risk of vlan-aware-vms not making it into neutron in
>>> time
>>>
>>> -  Provide a solution that works on existing and previous
>>> openstack releases
>>>
>>>
>>>
>>> This work should be done in a way permitting the user to optionally
>>> select this feature.
>>>
>>>
>>>
>>>
>>> *Required Changes*
>>>
>>> The four main changes we have identified in the current kuryr codebase
>>> are as follows:
>>>
>>> · Introduce an option of enabling “IPVLAN in VM” use case. This
>>> can be achieved by using a config file option or possibly passing a command
>>> line argument. The IPVLAN master interface must also be identified.
>>>
>>> · If using “IPVLAN in VM” use case, Kuryr should no longer
>>> create a new port in Neutron or the associated VEth pairs. Instead, Kuryr
>>> will create a new IPVLAN slave interface on top of the VM’s master
>>> interface and pass this slave interface to the Container netns.
>>>
>>> · If using “IPVLAN in VM” use case, the VM’s port ID needs to
>>> be identified so we can associate the additional IPVLAN addresses with the
>>> port. This can be achieved by querying Neutron’s show-port function and
>>> passing the VMs IP address.
>>>
>>> · If using “IPVLAN in VM” use case, Kuryr should associate the
>>> additional IPVLAN addresses with the VMs port. This can be achieved using
>>> Neutron’s allowed-address-pairs flag in the port-update function. We
>>> intend to make use of Kuryr’s existing IPAM functionality to request these
>>> IPs from Neutron.
>>>
>>>
>>>
>>> *Asks*
>>>
>>> We wish to discuss the pros and cons.
>>>
>>> For example, containers exposure as proper neutron entities and the
>>> utility of neutron’s allowed-address-pairs is not yet well understood.
>>>
>>>
>>>
>>> We also wish to understand if this approach is acceptable for kuryr?
>>>
>>>
>>>
>>>
>>>
>>> *EPA*
>>>
>>> The Enhanced Platform Awareness initiative is a continuous program to
>>> enable fine-tuning of the platform for virtualized network functions.
>>>
>>> This is done by exposing the processor and platform capabilities through
>>> the management and orchestration layers.
>>>
>>> When a virtual network function is instantiated by an Enhanced Platform
>>> Awareness enabled orchestrator, the application requirements can be more
>>> efficiently matched with the platform capabilities.
>>>
>>> http://itpeernetwork.intel.com/openstack-kilo-release-is-sha
>>> ping-up-to-be-a-milestone-for-enhanced-platform-awareness/
>>>
>>> https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
>>>
>>> https://www.brighttalk.com/webcast/12229/181563/epa-features
>>> -in-openstack-kilo
>>>
>>>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Ivan….
>>>
>>> --
>>> Intel Research and Development Ireland Limited
>>> Registered in Ireland
>>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>>> Registered Number: 308263
>>>
>>> This e-

Re: [openstack-dev] [release] proposing adding Tony Breeds to "Release Managers" team

2016-09-13 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-09-06 11:36:01 -0400:
> Team,
> 
> I would like to add Tony Breeds to the "Release Managers" team in
> gerrit. This would give him +2 permissions on openstack-infra/release-tools
> and on openstack/releases. I feel his reviews on both of those repos
> have already demonstrated a good attention to detail, especially
> of the release schedule and processes.
> 
> Please respond below with +1 or -1.
> 
> Thanks,
> Doug
> 

After a week with only positive responses, I've added Tony to both the
"Release Managers" and "release-tools-core" groups in gerrit.

Welcome aboard, Tony!

Doug



Re: [openstack-dev] [kuryr] Spec and devref placement

2016-09-13 Thread Ilya Chukhnakov
+1 for (b) or (d). devrefs should live where the related code is


> On 12 Sep 2016, at 18:18, Irena Berezovsky  wrote:
> 
> I am fine with option (b) as well.
> We can add option (d):
> Specs in openstack/kuryr-specs but devrefs in each specific project,
> i.e., the one that will end up with the implementation code.
> 
> 
> On Mon, Sep 12, 2016 at 2:38 PM, Antoni Segura Puimedon  > wrote:
> Hi Kuryrs!
> 
> On September 5th's weekly IRC meeting Irena Berezovsky suggested that
> we should take a decision regarding the location of specs and devrefs.
> 
> Currently we default to putting all the specs and devrefs for:
> - Kuryr
> - Kuryr-libnetwork
> - Kuryr-kubernetes
> 
> to openstack/kuryr. Fuxi is still being integrated and keeps its own doc.
> 
> The three proposals that came up where:
> a) All specs and devrefs to openstack/kuryr
> b) Specs in openstack/kuryr but devrefs in each specific project,
> i.e., the one that will end up with the implementation code.
> c) Both specs and devrefs in each separate Kuryr project.
> 
> I would like to advocate for option (b). It makes things easy for when
> specs involve multiple kuryr pieces and, at the same time, it keeps
> development information in the place where you'd expect, close to the
> code.
> 
> Please, weigh on this issue here in the ML or in the weekly IRC
> meeting today. The idea is to reach a decision by next week's weekly
> IRC meeting and then write it in each subproject's "how to contribute"
> 
> See you later in the weekly IRC,
> 
> Toni
> 


Re: [openstack-dev] [neutron] Skip-only tests

2016-09-13 Thread Rodrigo Duarte
On Thu, Sep 8, 2016 at 12:44 PM, Joe Hakim Rahme 
wrote:

> On Thu, 2016-09-08 at 10:36 +0200, Andreas Jaeger wrote:
> >
> > Toni, sorry, I wasn't clear. I'm not advocating for this, I wanted to
> > bring Joe into this discussion to see whether the solution that John
> > suggested will help them as well - I wanted to broaden the scope of
> > this since it's not only neutron that faces this.
> >
>
> Hi,
>
> Just to be clear, my intention with the repo I'm trying to create is
> to hold tests that are not acceptable in upstream Tempest by design
> (mainly whitebox testing). I think what John was referring to (correct
> me if I'm wrong) are tests that are acceptable Tempest tests but we
> lack the required infrastructure (hardware, images, ...) to run them
> in the gate.
>
> I realize now that the name I chose "nova-tempest-staging" might be
> misleading and I think it should be changed. Maybe something like
> "tempest-whitebox-plugin" might be more descriptive?
>

This sounds like a generic repo, not specific to or tied to any one service.
Not being service-specific sounds much better to me.

I liked the idea and would be willing to help out in the effort, but agree
it needs more thorough discussion.


>
> As for the topic of skip-only tests, I think that it'd be a shame to
> reject valid and valuable tests simply because the gate is not equipped
> to run them. There are use cases for Tempest outside of the upstream
> gate, so marking certain tests as valid-but-skipped-in-the-gate seems
> like an acceptable solution to me.
>
> (I'm not a maintainer of the Tempest repo so please take my opinion
> with a grain of salt).
>
> Cheers,
> Joe
>



-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
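
As a sketch of the "valid-but-skipped-in-the-gate" marking discussed above, here is how such a guard looks with the stdlib unittest module. Tempest itself uses testtools-based decorators, and the environment-variable name below is hypothetical:

```python
import os
import unittest

# Hypothetical flag; upstream gates would leave it unset, so the test is
# recorded as skipped rather than being dropped from the suite.
RUN_WHITEBOX = os.environ.get("RUN_WHITEBOX_TESTS") == "1"


class WhiteboxExample(unittest.TestCase):

    @unittest.skipUnless(RUN_WHITEBOX, "requires infrastructure the gate lacks")
    def test_needs_special_hardware(self):
        self.assertTrue(True)


# Run the suite quietly and count the skipped tests.
result = unittest.TextTestRunner(stream=open(os.devnull, "w")).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(WhiteboxExample))
print(len(result.skipped))  # 1 when RUN_WHITEBOX_TESTS is unset
```

The skip reason still shows up in the test report, so the test remains visible as valid even when the gate cannot run it.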


Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Anita Kuno

On 16-09-13 10:23 AM, Anita Kuno wrote:

On 16-09-13 09:52 AM, Nate Johnston wrote:

On Tue, Sep 13, 2016 at 07:48:06AM -0500, Sean McGinnis wrote:

On Tue, Sep 13, 2016 at 05:14:56PM +1000, Tony Breeds wrote:

On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:


Lastly, election officials are also reachable through the
#openstack-election Freenode channel.

I wonder if it's worth having all conversations in #openstack-dev?

There are pros and cons to both.

Having a single-purpose election channel certainly makes it easy to 
spot election activity.

Using openstack-dev increases the possibility of broader reach.

I don't have a strong opinion either way.

Yours Tony.

No strong opinion either, but I agree openstack-dev would get more
visibility.

Sean (smcginnis)

I am fine with #openstack-dev if that is the consensus choice, but we
may also want to list the direct contact info for the election officers


Officials, not officers. The role is to administrate as an appointee 
of the technical committee, not police.



so that people can ping directly in case their query happens to get
lost.  The conversation can then come back to #openstack-dev, but then
you would know you had someone's attention.


Well the method of communication up until now has been using the -dev 
mailing list with an invitation to email the election officials 
directly for private matters. That way communication reaches the 
broadest audience and is time insensitive.


Correction, timezone insensitive. The fact that email is time sensitive 
is also one of the features for using it.


Thanks,
Anita.


My experience is that has worked well in the past.

Thank you,
Anita.



But this is my first time helping out on an OpenStack election, so I may
be a bit paranoid about missing something.

--N.



[openstack-dev] [nova] Tracking newton-rc-potential fixes for RC1 on Thursday 9/15

2016-09-13 Thread Matt Riedemann

I've started an etherpad here:

https://etherpad.openstack.org/p/nova-newton-rc-potential

We'll use that for tracking the newton-rc-potential bugs and last few 
remaining placement fixes before we cut RC1 on Thursday.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Anita Kuno

On 16-09-13 09:52 AM, Nate Johnston wrote:

On Tue, Sep 13, 2016 at 07:48:06AM -0500, Sean McGinnis wrote:

On Tue, Sep 13, 2016 at 05:14:56PM +1000, Tony Breeds wrote:

On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:


Lastly, election officials are also reachable through the
#openstack-election Freenode channel.

I wonder if it's worth having all conversations in #openstack-dev?

There are pros and cons to both.

Having a single-purpose election channel certainly makes it easy to spot 
election activity.
Using openstack-dev increases the possibility of broader reach.

I don't have a strong opinion either way.

Yours Tony.

No strong opinion either, but I agree openstack-dev would get more
visibility.

Sean (smcginnis)

I am fine with #openstack-dev if that is the consensus choice, but we
may also want to list the direct contact info for the election officers


Officials, not officers. The role is to administrate as an appointee of 
the technical committee, not police.



so that people can ping directly in case their query happens to get
lost.  The conversation can then come back to #openstack-dev, but then
you would know you had someone's attention.


Well the method of communication up until now has been using the -dev 
mailing list with an invitation to email the election officials directly 
for private matters. That way communication reaches the broadest 
audience and is time insensitive. My experience is that has worked well 
in the past.


Thank you,
Anita.



But this is my first time helping out on an OpenStack election, so I may
be a bit paranoid about missing something.

--N.



[openstack-dev] [watcher] PTL candidacy

2016-09-13 Thread Antoine Cabot
Hello everyone,

I would like to announce my candidacy to continue as Watcher PTL for
the Ocata release. [1]

During the Newton cycle, our main achievement was to enter the
OpenStack big tent by demonstrating a great diversity of contributors
and the added value of an infrastructure optimization service for
OpenStack.

With around 20 active contributors during the cycle and 15 blueprints
implemented, I'm very proud to lead this project and welcome new
contributors every week.

Given that the Ocata development window is shorter, my main goal for
Ocata is to focus on the stability and usability of Watcher. We also
need to provide more optimization strategies that will demonstrate
the capabilities of Watcher. Additionally, I plan to work with Nova
on the placement API used for scheduling decisions.

I would also like to take this opportunity to thank all members of
the OpenStack community who helped our team during the Newton cycle.

Thank you,
Antoine

[1] https://review.openstack.org/#/c/369267/



Re: [openstack-dev] [tripleo] tripleo-test-cloud-rh1 and bastion host

2016-09-13 Thread Paul Belanger
On Tue, Sep 13, 2016 at 12:53:41PM +0100, Derek Higgins wrote:
> On 9 September 2016 at 16:38, Paul Belanger  wrote:
> > Greetings,
> >
> > I would like to start the discussions around the removal of the bastion host
> > that sits in front of tripleo-test-cloud-rh1.  It is my understanding, all
> > traffic from tripleo-test-cloud-rh1 flows through this linux box.  Obviously
> > this is problematic for a public cloud.
> >
> > I currently do not know the history of the bastion host, I am hoping this 
> > thread
> > will start discussions around it.
> >
> > However, my personal preference is to remove the bastion from the pipeline
> > between internet and tripleo-test-cloud-rh1. My main objection to the host, 
> > is
> > the fact we do packet filtering of traffic flowing between the internet and
> > tripleo-test-cloud-rh1.
> 
> Would it be enough to simply remove the traffic filtering? or are
> there other problems you are hoping to get rid of?
> 
I'm hoping to remove it so we don't have to worry about managing it.  Today, it
exists for a reason (which I am still trying to understand). As I understand it
today, it is used as a jump box to access some resources in
tripleo-test-cloud-rh1, presumably the controller and compute nodes.  But is that
all?

When things are working, there usually aren't problems. However, when problems
occur, it is another box that we have to log into to trace outbound
connections. I think the most recent issues with DNS were a result of something
throttling connections?

> >
> > Ideally tripleo-test-cloud-rh1 will simply have an unfiltered network drop 
> > on
> > the public web, this is how we do it today with the infracloud in
> > #openstack-infra.
> >
> > This will avoid the need to gain access to a private server (bastion) and 
> > need
> > to manipulate networking traffic.
> >
> > I'd like for us to try and establish a time frame to make this happen too.
> 
> I don't know how much work this would be and what problems we would
> hit, historically the upstream tripleo team have been hands off when
> it comes to this box (and the rack switch), from our point of view we
> use it as a jump host to get to the other hosts on which openstack
> runs. And all outside traffic goes through it, I suppose the
> alternative would be to route the traffic directly to the overcloud
> controller.
> 
Yes, I think that would be the ideal setup. Removing any unmanaged linux systems
between tripleo-test-cloud-rh1 and the internet is the goal. This gives tripleo
more control of the network and one less thing to depend on.

> We should be moving all our cloud usage onto RDO-Cloud some day, we
> should probably try and first get a timeline for when we are moving
> onto RDO-Cloud, if that is coming up soon perhaps we can just wait until
> this situation goes away.
> 
Pushing it off is one option, however I am offering up bandwidth to support this
effort.  Last I heard, RDO Cloud is still 3-6 months away.

> >
> > ---
> > Paul
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Nate Johnston
On Tue, Sep 13, 2016 at 07:48:06AM -0500, Sean McGinnis wrote:
> On Tue, Sep 13, 2016 at 05:14:56PM +1000, Tony Breeds wrote:
> > On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:
> > 
> > > Lastly, election officials are also reachable through the
> > > #openstack-election Freenode channel.
> > 
> > I wonder if it's worth having all conversations in #openstack-dev?
> > 
> > There are pros and cons to both.
> > 
> > Having a single-purpose election channel certainly makes it easy to spot 
> > election activity.
> > Using openstack-dev increases the possibility of broader reach.
> > 
> > I don't have a strong opinion either way.
> > 
> > Yours Tony.
> 
> No strong opinion either, but I agree openstack-dev would get more
> visibility.
> 
> Sean (smcginnis)

I am fine with #openstack-dev if that is the consensus choice, but we
may also want to list the direct contact info for the election officers
so that people can ping directly in case their query happens to get
lost.  The conversation can then come back to #openstack-dev, but then
you would know you had someone's attention.  

But this is my first time helping out on an OpenStack election, so I may
be a bit paranoid about missing something.

--N.



Re: [openstack-dev] [cinder] moving driver to open source

2016-09-13 Thread Duncan Thomas
On 13 September 2016 at 06:44, Ben Swartzlander 
wrote:

> On 09/09/2016 11:12 AM, Duncan Thomas wrote:
>
>> I don't care so much whether your CLI or API proxy in open or closed
>> source, but I really do care if I can create a distribution, even a
>> novel one, with that software in it, without hitting licensing issues.
>> That is, as I see it, a bare minimum - anything less than that and it
>> does not belong in the cinder source tree.
>>
>
> I don't understand how you can have this stance while tolerating the
> existence of such things as the VMware driver. That software (ESXi)
> absolutely requires a license to use or distribute.


In all honesty, I hadn't considered the situation in detail until the
recent IBM discussions - I've raised concerns before when specific
troublesome libraries appeared (the Netapp one, and rts-lib, both solved by
relicensing to apache) but never tried to audit the whole codebase. There's
an etherpad Walt linked to in the meeting that is collecting the dependency
info for various drivers, so hopefully we'll have an accurate assessment of
the current situation so that we can figure out what we're doing going
forward.

-- 
Duncan Thomas


[openstack-dev] [Nova] Live migration IRC meeting

2016-09-13 Thread Murray, Paul (HP Cloud)
There will be a meeting today – just amended the agenda:
https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

Paul


Re: [openstack-dev] [cinder] moving driver to open source

2016-09-13 Thread Jeremy Stanley
On 2016-09-12 23:44:10 -0400 (-0400), Ben Swartzlander wrote:
[...]
> I don't understand how you can have this stance while tolerating the
> existence of such things as the VMware driver. That software (ESXi)
> absolutely requires a license to use or distribute.

If it has to be installed on the same server as Nova, then I agree
that's a pretty sketchy situation. If you can keep Nova on separate
machines from your VMware "appliances" and have them communicate via
some sort of RPC, then it's not much different from orchestrating a
SAN or NAS over a hardware (network) interface boundary.
-- 
Jeremy Stanley



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Gary,

I mean maybe that can be one choice in my mind.

Security Group is for each neutron port. In this case, all the docker on one vm 
will share one neutron port (if I understand correctly), then they will share the 
security group on that port. It is not per-container per-security-group; not sure 
how to use security group in this case.

Regards,
Liping Mao

On 13 September 2016 at 20:31, Loughnane, Gary 
<gary.loughn...@intel.com> wrote:

Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the docker in one vm will share 
one security group on the vm port. I'm not sure how to use sg for each 
docker; maybe just disabling port-security can be one of the choices. Then we do not 
need allowed address pairs in this case.


Regards,
Liping Mao

On 12 September 2016 at 19:31, Coughlan, Ivan 
<ivan.cough...@intel.com> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling "IPVLAN in VM" use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using "IPVLAN in VM" use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM's master interface and pass this slave 
interface to the Container netns.

* If using "IPVLAN in VM" use case, the VM's port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron's show-port function and passing the 
VMs IP address.

* If using "IPVLAN in VM" use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron's allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr's existing IPAM functionality to request these IPs from 
Neutron.
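
The last two changes above reduce to a small amount of port-manipulation
logic. Below is a minimal, hypothetical sketch of that logic; the function
names and port dictionaries are invented for illustration (they mimic the
shape of Neutron's show-port / update-port payloads), and a real
implementation would issue the corresponding Neutron API calls instead.

```python
# Hypothetical sketch of changes 3 and 4: locate the VM's Neutron port by
# its fixed IP, then extend the port's allowed-address-pairs with the
# container's IPVLAN address.  Nothing here is actual Kuryr code.

def find_port_by_ip(ports, vm_ip):
    """Return the port whose fixed_ips include vm_ip, or None."""
    for port in ports:
        if any(fip['ip_address'] == vm_ip for fip in port.get('fixed_ips', [])):
            return port
    return None

def with_allowed_address_pair(port, container_ip):
    """Return the port's allowed_address_pairs extended with container_ip.

    IPVLAN slave interfaces share the master's MAC address, so only the
    extra IP needs to be whitelisted; no mac_address entry is required.
    """
    pairs = list(port.get('allowed_address_pairs', []))
    if not any(p.get('ip_address') == container_ip for p in pairs):
        pairs.append({'ip_address': container_ip})
    return pairs
```

A caller would then push the result back with something like
client.update_port(port['id'], {'port': {'allowed_address_pairs': pairs}}),
where "client" stands in for a python-neutronclient instance.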

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron's allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.

Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Sean McGinnis
On Tue, Sep 13, 2016 at 05:14:56PM +1000, Tony Breeds wrote:
> On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:
> 
> > Lastly, election officials are also reachable through the
> > #openstack-election Freenode channel.
> 
> I wonder if it's worth having all conversations in #openstack-dev?
> 
> There are pros and cons to both.
> 
> Having a single-purpose election channel certainly makes it easy to spot 
> election activity.
> Using openstack-dev increases the possibility of broader reach.
> 
> I don't have a strong opinion either way.
> 
> Yours Tony.

No strong opinion either, but I agree openstack-dev would get more
visibility.

Sean (smcginnis)


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Congress] Default devstack deployment config

2016-09-13 Thread Aimee Ukasick
I completely agree with a single process deployment for DevStack. I
ran into issues last week with the multiple-process configuration
while I was trying to pinpoint an error.  I was using the pdb command-line
debugger to step through Congress and Oslo Messaging, making a call
from the CLI. More than once the Congress API couldn't find the
Process Engine, which I'm sure was caused by uploading code and
stopping/starting services in the wrong order.  Deploying single
process Congress to DevStack would have saved me a lot of time.

aimee

On Tue, Sep 13, 2016 at 1:42 AM, Masahito MUROI
 wrote:
> Hi Congress folks,
>
> I'm in favor of single process for devstack default. It's easy to check
> logs and tests its feature.
>
> best regards,
> Masahito
>
> On 2016/09/13 11:00, Tim Hinrichs wrote:
>>
>> I'd agree with a single process version of Congress for devstack.  I'd
>> say we should even do that for Newton.
>>
>> Tim
>>
>> On Mon, Sep 12, 2016 at 6:34 PM Eric K > > wrote:
>>
>> Hi all,
>>
>> I want to get people’s thoughts regarding what we should set as
>> default devstack deployment config for Ocata.
>> At the moment, it is set to deploy three processes: API, policy, and
>> datasource-drivers.
>>
>> I see some potential arguments against that:
>>
>>  1. For most users installing via devstack, running Congress in
>> three processes brings little benefit, but rather a more complex
>> and less stable user experience. (Even if our code is perfect,
>> rabbitMQ will timeout every now and then)
>>  2. It’s not clear that we want to officially support separating the
>> API from the policy engine at this point. The supported
>> deployment options for HAHT do not need it.
>>
>> The main argument I see for deploying three processes by default is
>> that we may get more bug reports regarding the multi-process
>> deployment that way.
>>
>> Our main options for devstack default are:
>> 1. Single-process Congress (with in-mem transport).
>> 2. Two-process Congress API+Policy, datasource-drivers. (other
>> breakdowns between two processes are also possible)
>> 3. Three-process Congress.
>>
>> In the end, I think it’s a trade-off: potentially getting more bug
>> reports from users, at the expense of a more complex and less
>> polished user experience that could make a poor first impression.
>> What does everyone think?
>>
>> Personally, I slightly favor defaulting to single process Congress
>> because from a typical devstack user’s perspective, there is little
>> reason to run separate processes. In addition, because it is the
>> first time we’re releasing our complete architecture overhaul to the
>> wild, and it may be a good to default to the least complex
>> deployment for the first cycle of the new architecture.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> 室井 雅仁(Masahito MUROI)
> Software Innovation Center, NTT
> Tel: +81-422-59-4539
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Loughnane, Gary
Hi Liping,

Thank you for the feedback!

Do you mean to have disabled security groups as an optional configuration for 
Kuryr?
Do you have any opinion on the consequences/acceptability of disabling SG?

Regards,
Gary

From: Liping Mao (limao) [mailto:li...@cisco.com]
Sent: Tuesday, September 13, 2016 12:56 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the docker in one vm will share 
one security group on the vm port. I'm not sure how to use sg for each 
docker; maybe just disabling port-security can be one of the choices. Then we do not 
need allowed address pairs in this case.


Regards,
Liping Mao

On 12 September 2016 at 19:31, Coughlan, Ivan 
<ivan.cough...@intel.com> wrote:

Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling “IPVLAN in VM” use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using “IPVLAN in VM” use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM’s master interface and pass this slave 
interface to the Container netns.

* If using “IPVLAN in VM” use case, the VM’s port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron’s show-port function and passing the 
VMs IP address.

* If using “IPVLAN in VM” use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron’s allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr’s existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron’s allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan….

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.

Re: [openstack-dev] [tripleo] Overriding internal_api network name

2016-09-13 Thread Dmitry Tantsur

On 09/12/2016 11:08 PM, Steven Hardy wrote:

On Mon, Sep 12, 2016 at 04:21:43PM +0200, Dmitry Tantsur wrote:

Hi folks!

I'm looking into support for multiple overclouds with shared control plane.
I'm porting a downstream guide: https://review.openstack.org/368840.

However, this no longer works, probably because the "internal_api" network name
is hardcoded in ServiceNetMapDefaults: 
https://github.com/openstack/tripleo-heat-templates/blob/dfe74b211267cde7a1da4e1fe9430127eda234c6/network/service_net_map.yaml#L14.
So deployment fails with

CREATE_FAILED resources.RedisVirtualIP: Property error:
resources.VipPort.properties.network: Error validating value 'internal_api':
Unable to find network with name or id 'internal_api'

Is it a bug? Or is there another way to change the network name? I need it
to avoid overlap between networks from two overclouds. I'd prefer to avoid
overriding everything from ServiceNetMapDefaults in my network environment
file.


IMO this isn't a bug, but an RFE perhaps.


RFE filed: 
https://blueprints.launchpad.net/tripleo/+spec/override-network-name




The reason is that until a couple of weeks ago, you always had to fully
define all services in ServiceNetMap, so this is basically just a case
where the optimization introduced here (which allows you to partially
specify ServiceNetMap which is then merged with ServiceNetMapDefaults)
doesn't work:

https://review.openstack.org/#/c/353032/

I'd say overriding everything is an OK workaround, but we can definitely
discuss ways to do it more cleanly - I'll give it some thought (probably
we'll need another mapping that defines the network names that can be
easily overridden).
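
For reference, the "override everything" workaround amounts to an
environment file along these lines. This is an illustrative sketch only:
"internal_api2" and "tenant2" are invented names for the second overcloud's
networks, only a handful of service keys are shown, and a real override
must replicate every key from ServiceNetMapDefaults in
network/service_net_map.yaml for the release in use.

```yaml
# Hypothetical environment file for a second overcloud sharing a control
# plane: fully redefine ServiceNetMap so services bind to renamed networks.
parameter_defaults:
  ServiceNetMap:
    KeystoneAdminApiNetwork: ctlplane
    KeystonePublicApiNetwork: internal_api2
    HeatApiNetwork: internal_api2
    NeutronTenantNetwork: tenant2
    RedisNetwork: internal_api2
    # ...every remaining service key from ServiceNetMapDefaults...
```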


Overriding everything does allow me to move forward, thanks.



Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel-library] Nominate Stanislaw Bogatkin to fuel-library core

2016-09-13 Thread Aleksey Kasatkin
+1


Aleksey Kasatkin


On Mon, Sep 12, 2016 at 5:01 PM, Alex Schultz  wrote:

> +1
>
> On Wed, Sep 7, 2016 at 5:07 PM, Maksim Malchuk 
> wrote:
> > Hello,
> >
> > I would like to nominate Stanislaw Bogatkin to fuel-library core due to
> his
> > significant contribution to the project [1] and [2]. He is one of the top
> > reviewers and contributors in the project.
> >
> > [1]
> > http://stackalytics.com/?user_id=sbogatkin&project_type=all&release=all&metric=marks&module=fuel-library
> > [2] http://stackalytics.com/report/contribution/fuel-library/90
> >
> > --
> > Best Regards,
> > Maksim Malchuk,
> > Senior DevOps Engineer,
> > MOS: Product Engineering,
> > Mirantis, Inc
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Kuryr] IPVLAN data path proposal

2016-09-13 Thread Liping Mao (limao)
Hi Ivan,

It sounds cool!

for security group and allowed address pair,
Maybe we can disable port-security, because all the docker in one vm will share 
one security group on the vm port. I'm not sure how to use sg for each 
docker; maybe just disabling port-security can be one of the choices. Then we do not 
need allowed address pairs in this case.


Regards,
Liping Mao

On 12 September 2016 at 19:31, Coughlan, Ivan 
<ivan.cough...@intel.com> wrote:


Overview
Kuryr proposes to address the issues of double encapsulation and exposure of 
containers as neutron entities when containers are running within VMs.
As an alternative to the vlan-aware-vms and use of ovs within the VM, we 
propose to:

-  Use allowed-address-pairs configuration for the VM neutron port

-  Use IPVLAN for wiring the Containers within VM

In this way:

-  Achieve efficient data path to container within VM

-  Better leverage OpenStack EPA(Enhanced Platform Awareness) features 
to accelerate the data path (more details below)

-  Mitigate the risk of vlan-aware-vms not making neutron in time

-  Provide a solution that works on existing and previous openstack 
releases

This work should be done in a way permitting the user to optionally select this 
feature.


Required Changes
The four main changes we have identified in the current kuryr codebase are as 
follows:

* Introduce an option of enabling "IPVLAN in VM" use case. This can be 
achieved by using a config file option or possibly passing a command line 
argument. The IPVLAN master interface must also be identified.

* If using "IPVLAN in VM" use case, Kuryr should no longer create a new 
port in Neutron or the associated VEth pairs. Instead, Kuryr will create a new 
IPVLAN slave interface on top of the VM's master interface and pass this slave 
interface to the Container netns.

* If using "IPVLAN in VM" use case, the VM's port ID needs to be 
identified so we can associate the additional IPVLAN addresses with the port. 
This can be achieved by querying Neutron's show-port function and passing the 
VMs IP address.

* If using "IPVLAN in VM" use case, Kuryr should associate the 
additional IPVLAN addresses with the VMs port. This can be achieved using 
Neutron's allowed-address-pairs flag in the port-update function. We intend to 
make use of Kuryr's existing IPAM functionality to request these IPs from 
Neutron.

Asks
We wish to discuss the pros and cons.
For example, containers exposure as proper neutron entities and the utility of 
neutron's allowed-address-pairs is not yet well understood.

We also wish to understand if this approach is acceptable for kuryr?


EPA
The Enhanced Platform Awareness initiative is a continuous program to enable 
fine-tuning of the platform for virtualized network functions.
This is done by exposing the processor and platform capabilities through the 
management and orchestration layers.
When a virtual network function is instantiated by an Enhanced Platform 
Awareness enabled orchestrator, the application requirements can be more 
efficiently matched with the platform capabilities.
http://itpeernetwork.intel.com/openstack-kilo-release-is-shaping-up-to-be-a-milestone-for-enhanced-platform-awareness/
https://networkbuilders.intel.com/docs/OpenStack_EPA.pdf
https://www.brighttalk.com/webcast/12229/181563/epa-features-in-openstack-kilo


Regards,
Ivan

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



Re: [openstack-dev] [tripleo] tripleo-test-cloud-rh1 and bastion host

2016-09-13 Thread Derek Higgins
On 9 September 2016 at 16:38, Paul Belanger  wrote:
> Greetings,
>
> I would like to start the discussions around the removal of the bastion host
> that sits in front of tripleo-test-cloud-rh1.  It is my understanding, all
> traffic from tripleo-test-cloud-rh1 flows through this linux box.  Obviously
> this is problematic for a public cloud.
>
> I currently do not know the history of the bastion host, I am hoping this 
> thread
> will start discussions around it.
>
> However, my personal preference is to remove the bastion from the pipeline
> between internet and tripleo-test-cloud-rh1. My main objection to the host, is
> the fact we do packet filtering of traffic flowing between the internet and
> tripleo-test-cloud-rh1.

Would it be enough to simply remove the traffic filtering? or are
there other problems you are hoping to get rid of?

>
> Ideally tripleo-test-cloud-rh1 will simply have an unfiltered network drop on
> the public web, this is how we do it today with the infracloud in
> #openstack-infra.
>
> This will avoid the need to gain access to a private server (bastion) and need
> to manipulate networking traffic.
>
> I'd like for us to try and establish a time frame to make this happen too.

I don't know how much work this would be and what problems we would
hit, historically the upstream tripleo team have been hands off when
it comes to this box (and the rack switch), from our point of view we
use it as a jump host to get to the other hosts on which openstack
runs. And all outside traffic goes through it, I suppose the
alternative would be to route the traffic directly to the overcloud
controller.

We should be moving all our cloud usage onto RDO-Cloud some day, we
should probably try and first get a timeline for when we are moving
onto RDO-Cloud, if that is coming up soon perhaps we can just wait until
this situation goes away.

>
> ---
> Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] governance proposal worth a visit: Write down OpenStack principles

2016-09-13 Thread Amrith Kumar

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Tuesday, September 13, 2016 3:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] governance proposal worth a visit:
> Write down OpenStack principles
> 
> Clay Gerrard wrote:
> > This is why I always have and presumably always will support Thierry on
> > the TC.  His initial thinking *frequently* seems out of alignment with
> > me, but after observing others healthy debate and discussion [1] - I
> > always find we tend we both come around a little and seem to be pointing
> > in basically the same direction in the end.  Thierry is *reasonable*.
> > Throwing out old assumptions when new information is raised is an
> > absolute imperative - and here we see Thierry plainly and openly
> > offering concession to a reasonable counterpoint.
> 
> Thanks, Clay!
> 
> I'll add that while it may appear that Clay and I always disagree, I
> think he is profoundly honest in his feedback and I value it very much.
> 
> In particular, I agree with his recent blogpost: we need to listen to
> our users (operators, but also application developers), and focus on our
> mission more than on governance details. It is important for open source
> projects to have a strong governance model, but it is only the frame
> that holds the canvas and defines the space. The important part is the
> painting.
> 

[amrith] Clay, Thierry, well said. A few of us have opinions (just a few), an 
even smaller number have strong opinions :) It is therefore understandable that 
we will sometimes disagree. But it was refreshing to see the dialogue on topics 
with such stark disagreement conducted without becoming disagreeable.

> --
> Thierry Carrez (ttx)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Spec and devref placement

2016-09-13 Thread Liping Mao (limao)
+1,option b)

> On Sep 12, 2016, at 19:39, Antoni Segura Puimedon wrote:
> 
> Hi Kuryrs!
> 
> On September 5th's weekly IRC meeting Irena Berezovsky suggested that
> we should take a decision regarding the location of specs and devrefs.
> 
> Currently we default to putting all the specs and devrefs for:
> - Kuryr
> - Kuryr-libnetwork
> - Kuryr-kubernetes
> 
> to openstack/kuryr. Fuxi is still being integrated and keeps its own doc.
> 
> The three proposals that came up were:
> a) All specs and devrefs to openstack/kuryr
> b) Specs in openstack/kuryr but devrefs in each specific project,
> i.e., the one that will end up with the implementation code.
> c) Both specs and devrefs in each separate Kuryr project.
> 
> I would like to advocate for option (b). It makes things easy for when
> specs involve multiple kuryr pieces and, at the same time, it keeps
> development information in the place where you'd expect, close to the
> code.
> 
> Please, weigh on this issue here in the ML or in the weekly IRC
> meeting today. The idea is to reach a decision by next week's weekly
> IRC meeting and then write it in each subproject's "how to contribute"
> 
> See you later in the weekly IRC,
> 
> Toni
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] weekly subteam report

2016-09-13 Thread Jim Rollenhagen
On Tue, Sep 13, 2016 at 7:27 AM, Dmitry Tantsur  wrote:
> On 09/13/2016 12:52 PM, Pavlo Shchelokovskyy wrote:
>>
>> Hi all,
>>
>> On Mon, Sep 12, 2016 at 9:28 PM, Loo, Ruby wrote:
>>
>> Cross-project:
>> ==
>> - Infra insists on switching new jobs to Xenial
>>
>>
>> A small heads up. If we have any gate jobs that use PXE instead of iPXE,
>> those won't work on Xenial with current Ironic devstack plugin due to
>> some packaging changes made in Ubuntu since about 15.04.
>>
>> Bug: https://bugs.launchpad.net/ironic/+bug/1611850
>> Fix on review: https://review.openstack.org/#/c/326024/
>
>
> Thanks for bringing it up! The last time I checked we did use PXE, but I'm not
> sure that's still the case.

Yes, thank you!

We use iPXE on every job except for
'{pipeline}-tempest-dsvm-ironic-pxe_ssh{job-suffix}'

/me reviews the fix

// jim

>
>>
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com 
>>
>>
>>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] weekly subteam report

2016-09-13 Thread Dmitry Tantsur

On 09/13/2016 12:52 PM, Pavlo Shchelokovskyy wrote:

Hi all,

On Mon, Sep 12, 2016 at 9:28 PM, Loo, Ruby <ruby@intel.com> wrote:

Cross-project:
==
- Infra insists on switching new jobs to Xenial


A small heads up. If we have any gate jobs that use PXE instead of iPXE,
those won't work on Xenial with current Ironic devstack plugin due to
some packaging changes made in Ubuntu since about 15.04.

Bug: https://bugs.launchpad.net/ironic/+bug/1611850
Fix on review: https://review.openstack.org/#/c/326024/


Thanks for bringing it up! The last time I checked we did use PXE, but I'm
not sure that's still the case.




Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] weekly subteam report

2016-09-13 Thread Pavlo Shchelokovskyy
Hi all,

On Mon, Sep 12, 2016 at 9:28 PM, Loo, Ruby  wrote:

> Cross-project:
> ==
> - Infra insists on switching new jobs to Xenial
>

A small heads up. If we have any gate jobs that use PXE instead of iPXE,
those won't work on Xenial with current Ironic devstack plugin due to some
packaging changes made in Ubuntu since about 15.04.

Bug: https://bugs.launchpad.net/ironic/+bug/1611850
Fix on review: https://review.openstack.org/#/c/326024/

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] PTL non candidacy

2016-09-13 Thread Vladimir Kozhukalov
Dear colleagues,

The Newton cycle is getting to its end, and it is time to look at what we've
managed to achieve so far.

* Improved task based deployment engine (memory efficiency, code
  structure, graph sequences, noop run).
* Re-implemented OS provisioning as a graph. Now it is one of the graphs in
  the default deployment graph sequence.
* Improved graph API and UX. Now it is possible to upload/download
  custom graphs, run particular graphs, see per-task deployment
  progress.
* Aligned the functionality of the new version of fuelclient with the
  old one. Now all subcommands are available in `fuel2` and we are ready
  to deprecate old `fuel` command.
* We are on our way to get rid of ISO. (ISOless BVT is ready, review
  jobs are in progress).
* Improved LCM UX including IaC (using git repository as a source for
  cluster configuration).
* We began implementing the cluster upgrade procedure as a graph. In the
  future, in-place OpenStack cluster upgrades will be a native part of Fuel
  functionality.
* We also put some efforts to research container based deployment
  possibilities (K8s and Docker). We introduced a bunch of
  experimental repositories (fuel-ccp-*) where the team is now working
  on CI/CD like UX for containerized OpenStack deployment.

There are also many things that we were planning but didn't manage
to do. I'm not going to nominate myself as PTL for the Ocata cycle, but I'll
continue to contribute to Fuel to make it a perfect deployment
tool, and I'm looking forward to other Fuel team members running for the
PTL role.

Thanks.


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][devstack][devstack-gate][networking-generic-switch] Ironic multinode job

2016-09-13 Thread Vasyl Saienko
Hello Community,

I'm happy to announce that we now have a stable ironic multinode job. There
are a lot of patches (around 20) to different projects that need to be merged
before we can move this job to the check pipeline. That is why I'm writing
this email to openstack-dev. I'm kindly asking cores from devstack,
devstack-gate, networking-generic-switch and ironic to review the related
patches from the following list:

*devstack:*
"Fix common functions to work with V2" https://review.openstack.org/#/c/366922/
"Drop SERVICE_HOST=127.0.0.1 from setup_localrc()" https://review.openstack.org/#/c/368870/

*devstack-gate:*
"Add c-vol,c-bak on subnode when c-api enabled" https://review.openstack.org/#/c/352909
"Preparing multinode networking for Ironic" https://review.openstack.org/#/c/335981
"Setup ssh-key on subnodes for Ironic" https://review.openstack.org/#/c/364830
"Update ENABLED_SERVICE on subnode with ironic" https://review.openstack.org/#/c/368611
"Update local.conf for ironic-multinode case" https://review.openstack.org/#/c/352790/

*networking-generic-switch*:
"Setup multinode avare config" https://review.openstack.org/#/c/364848/

*ironic:*
"Configure clean network to provision network" https://review.openstack.org/#/c/356632
"Ignore required_services for multinode topology" https://review.openstack.org/#/c/352793
"Source openrc on subnode in multinode topology" https://review.openstack.org/#/c/353302/
"Skip db configuration on subnodes" https://review.openstack.org/#/c/353303
"Fix setting custom IRONIC_VM_NETWORK_BRIDGE" https://review.openstack.org/#/c/365116/
"Update devstack provision net config for multihost" https://review.openstack.org/#/c/368644/
"Update ironic node names for multinode case" https://review.openstack.org/#/c/368645/
"Skip some steps for multinode case" https://review.openstack.org/#/c/368646/
"Add devstack setup_vxlan_network()" https://review.openstack.org/#/c/368647
"Update iptables rules and services IPs for multinode" https://review.openstack.org/#/c/368648/
"Testing multinode stuff" https://review.openstack.org/#/c/368173/

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Ocata Design Summit - Proposed slot allocation

2016-09-13 Thread Thierry Carrez
Hi PTLs,

Here is the proposed slot allocation for project teams at the Ocata
Design Summit in Barcelona. This is based on the requests the current
PTLs have made, adjusted to account for limited space availability.

| fb: fishbowl 40-min slots (Wednesday afternoon - Friday morning)
| wr: workroom 40-min slots (Wednesday afternoon - Friday morning)
| cm: Friday afternoon contributors meetup

Neutron: 10fb cm

Nova: 13fb cm

Cinder: 4fb 4wr cm

Ironic: 4fb 4wr cm

Horizon: 1fb 5wr cm

Keystone: 4fb 7wr cm

Heat: 3fb 6wr cm

Murano: 1fb 3wr

Oslo: 3fb 5wr cm

Manila: 2fb 3wr cm

Monasca: 1fb 5wr

Telemetry: 2fb 3wr cm

Glance: 2fb 2wr cm

OpenStackClient: 1fb 1wr

Magnum: 5fb 5wr cm

Tacker: 1fb 3wr cm

Rally: 1fb 2wr

Sahara: 7wr cm

Swift: 2fb 10wr cm

Trove: 2fb 3wr cm

Dragonflow: 1fb 4wr cm

Senlin: 4wr

Designate: 1fb 5wr

Mistral: 1fb 2wr cm

Watcher: 1fb 3wr cm

Karbor: 1fb 5wr

Freezer: 1fb 3wr

Searchlight: 1fb 2wr

Kuryr: 1fb 6wr

Zaqar: 1fb 2wr

Barbican: 1fb 6wr

Vitrage: 1fb 3wr

Congress: 1fb 3wr

Solum: 1fb

Winstackers: 1wr

CloudKitty: 1fb

Astara: 1fb

EC2API: 1wr



Fuel: 2fb 4wr cm

TripleO: 2fb 4wr

OpenStackAnsible: 2fb 8wr cm

PuppetOpenStack: 1fb 2wr

Kolla: 3fb 10wr cm

OpenStack Charms: 2fb 2wr

RpmPackaging: 1fb 2wr

OpenStack Salt: 1wr

ChefOpenstack: 1fb 2wr

Packaging-deb: 1fb 2wr



Infrastructure: 1fb 5wr cm

Quality Assurance: 3fb 4wr cm

Documentation: 4fb 3wr cm

Release mgmt: 1fb 1wr cm*

Requirements: 1fb cm*

Security: 2fb 3wr

Stable maint: 2fb cm*

I18n: cm

App catalog: 1fb 1wr

OpenStack UX: 1fb 1wr

Refstack: 2wr


*: shared contributors meetup between Release Management, Stable
maintenance and Requirements teams

Kendall and I will start working on laying out those sessions over the
available rooms and time slots. Some of you have communicated
constraints together with their room requests (like Manila not wanting
overlap with Cinder sessions), and we'll try to accommodate them the
best we can. If you have extra constraints you haven't communicated yet,
please reply to me ASAP. Also if you don't plan to use all of your
allocated slots, let us know so that we can propose them to other teams.

Time to start thinking about the content you'd like to cover during
those sessions and warm up those Ocata etherpads !

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [elections] Glance PTL non-candidacy

2016-09-13 Thread 王玺源
Hi Nikhil,

You are the first PTL I got to know when I started working with the OpenStack
community, and we can all see your hard work for Glance during these cycles. I
have learned a lot from your review suggestions. Thanks very much!!! And
it's great to know that you will still work on Glance. Best wishes.

2016-09-13 13:33 GMT+08:00 Nikhil Komawar :

> Thank you Fei Long. It's always a pleasure working with you!
>
>
> On 9/13/16 12:59 AM, Fei Long Wang wrote:
> > Hi Nikhil,
> >
> > Thanks for your hard work as Glance PTL, you did a great job. And
> > happy to know you will still work in OpenStack, see you around ;)
> >
> > On 12/09/16 18:08, Nikhil Komawar wrote:
> >> Hi team,
> >>
> >>
> >> Just wanted to share my decision for not running for PTL for Glance.
> >> It's been great serving the community in this role however, there are
> >> some personal and family matters that I need to attend to over the next
> >> couple of months or so.
> >>
> >>
> >> I think the Glance team has done great and we've quite a bunch of bright
> >> developers who are helping push the project forward in the appropriate
> >> direction. With Glare becoming separate project and Ocata being short
> >> cycle, I anticipate the priorities being rather obvious to those who
> >> have stayed in touch. I will be available to do the rightful handoff to
> >> the incoming PTL (for Ocata) and update with Newton happenings once
> >> we're done with a few important bugs that are being targeted for RC1.
> >>
> >>
> >> I intend to stick around in Glance and related projects like
> >> Searchlight, Glare, etc. However, I am planning to take a more hands on
> >> role and see a few features through in Ocata.  Given more and more
> >> glance-cores time sharing with other projects, I think we need some
> >> throttle in our review system. So, I'd like to help any new developers
> >> get their reviews up, that in turn will help the Glance community.
> >>
> >>
> >> Last but not the least, I have thoroughly enjoyed working in this role
> >> with all my fellow stackers, particularly glancers! So, a BIG thank you
> >> for having worked with me in making Glance better over the last couple
> >> of years.
> >>
> >>
> >
>
> --
>
> Thanks,
> Nikhil
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] potential Ansible template issue you may meet

2016-09-13 Thread Daniel Comnea
thanks for the heads up.
suggest we add this to the kolla docs, as it is easy to forget about it
On Tue, Sep 13, 2016 at 3:15 AM, Jeffrey Zhang 
wrote:

> When using the ansible template module, you may see that the trailing
> newline after a block tag is stripped, like
>
> # template.j2
> a = {% if true %}1{% endif %}
> b = 2
>
> the render will be like
>
> a = 1b = 2
>
> The newline character after `a = 1` is stripped.
>
> The root cause comes from jinja2's trim_blocks feature. Ansible
> enabled this feature. If you want to disable it, just add `#jinja2:
> trim_blocks: False` to the j2 template file. This is a feature in
> Ansible, and I do not think they will fix/change this. But we need to
> take care of this when using the template module.
>
> More info please check[0][1]
>
> [0] https://github.com/ansible/ansible/issues/16344
> [1] http://jinja.pocoo.org/docs/dev/api/#jinja2.Environment
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Thierry Carrez
Tony Breeds wrote:
> On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:
> 
>> Lastly, election officials are also reachable through the
>> #openstack-election Freenode channel.
> 
> I wonder if it's worth having all conversations in #openstack-dev?
> 
> There are pros and cons to both.
> 
> Having a single-purpose election channel certainly makes it easy to spot
> election activity.
> Using openstack-dev increases the possibility of broader reach.
> 
> I don't have a strong opinion either way.

At the very least it should happen in a logged channel.

I think reusing #openstack-dev for this is fine -- it's an activity that
is limited in time (so doesn't need a permanent channel) and -dev is
pretty inactive those days.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] proposing adding Tony Breeds to "Release Managers" team

2016-09-13 Thread ChangBo Guo
+1, Tony Breeds is a nice guy; glad to see the release team make OpenStack
even better :-)

2016-09-13 12:53 GMT+08:00 Swapnil Kulkarni :

> On Tue, Sep 6, 2016 at 9:06 PM, Doug Hellmann 
> wrote:
> > Team,
> >
> > I would like to add Tony Breeds to the "Release Managers" team in
> > gerrit. This would give him +2 permissions on
> openstack-infra/release-tools
> > and on openstack/releases. I feel his reviews on both of those repos
> > have already demonstrated a good attention to detail, especially
> > of the release schedule and processes.
> >
> > Please respond below with +1 or -1.
> >
> > Thanks,
> > Doug
> >
> > 
>
>
>
> +1
>
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] governance proposal worth a visit: Write down OpenStack principles

2016-09-13 Thread Thierry Carrez
Clay Gerrard wrote:
> This is why I always have and presumably always will support Thierry on
> the TC.  His initial thinking *frequently* seems out of alignment with
me, but after observing others' healthy debate and discussion [1] - I
always find we both tend to come around a little and seem to be pointing
> in basically the same direction in the end.  Thierry is *reasonable*. 
> Throwing out old assumptions when new information is raised is an
> absolute imperative - and here we see Thierry plainly and openly
> offering concession to a reasonable counterpoint.

Thanks, Clay!

I'll add that while it may appear that Clay and I always disagree, I
think he is profoundly honest in his feedback and I value it very much.

In particular, I agree with his recent blogpost: we need to listen to
our users (operators, but also application developers), and focus on our
mission more than on governance details. It is important for open source
projects to have a strong governance model, but it is only the frame
that holds the canvas and defines the space. The important part is the
painting.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Spec and devref placement

2016-09-13 Thread Pablo Chacin
Agree with Irena's proposal. Also ok with option b.

On Mon, Sep 12, 2016 at 5:18 PM, Irena Berezovsky 
wrote:

> I am fine with option (b) as well.
> We can add option (d):
> Specs in openstack/*kuryr-specs* but devrefs in each specific project,
> i.e., the one that will end up with the implementation code.
>
>
> On Mon, Sep 12, 2016 at 2:38 PM, Antoni Segura Puimedon <
> celeb...@gmail.com> wrote:
>
>> Hi Kuryrs!
>>
>> On September 5th's weekly IRC meeting Irena Berezovsky suggested that
>> we should take a decision regarding the location of specs and devrefs.
>>
>> Currently we default to putting all the specs and devrefs for:
>> - Kuryr
>> - Kuryr-libnetwork
>> - Kuryr-kubernetes
>>
>> to openstack/kuryr. Fuxi is still being integrated and keeps its own doc.
>>
>> The three proposals that came up were:
>> a) All specs and devrefs to openstack/kuryr
>> b) Specs in openstack/kuryr but devrefs in each specific project,
>> i.e., the one that will end up with the implementation code.
>> c) Both specs and devrefs in each separate Kuryr project.
>>
>> I would like to advocate for option (b). It makes things easy for when
>> specs involve multiple kuryr pieces and, at the same time, it keeps
>> development information in the place where you'd expect, close to the
>> code.
>>
>> Please, weigh on this issue here in the ML or in the weekly IRC
>> meeting today. The idea is to reach a decision by next week's weekly
>> IRC meeting and then write it in each subproject's "how to contribute"
>>
>> See you later in the weekly IRC,
>>
>> Toni
>>
>> 
>>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Karbor] Thoughts on Decoupling Checkpoint API from the current Provider API

2016-09-13 Thread Edward Lee
hi, Leon:
 welcome to the Karbor project.
 This sounds like a good suggestion, but checkpoints belong to each provider
in the current design, and I am afraid such a change would impact many parts
of the whole framework.
 Did you do a design impact analysis?

2016-09-11 20:45 GMT+08:00 Leon Wang :

> I've added this item to the agenda of the next team meeting. link:
> https://wiki.openstack.org/wiki/Meetings/Karbor
>
>
> -- Original --
> *From: * "王辉";
> *Date: * Sun, Sep 11, 2016 08:24 PM
> *To: * "openstack-dev";
> *Cc: * "yuval.brik";
> *Subject: * [Karbor] Thoughts on Decoupling Checkpoint API from the
> current Provider API
>
> Hi Team,
>
> I'm new to Karbor and stumbled upon the Provider API recently. I think it
> would be more intuitive if we decoupled the Checkpoint API so that users
> could directly operate on Checkpoint-related actions.
>
> I've filled out a BP about it (https://blueprints.launchpad.
> net/karbor/+spec/checkpoint-decouple) and I'm looking forward to the
> coming weekly Karbor meeting.
>
>
>


-- 

Edward Lee
-
open source in China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Election Season, PTL and TC September/October 2016

2016-09-13 Thread Tony Breeds
On Fri, Sep 02, 2016 at 12:10:53PM +, Tristan Cacqueray wrote:

> Lastly, election officials are also reachable through the
> #openstack-election Freenode channel.

I wonder if it's worth having all conversations in #openstack-dev?

There are pros and cons to both.

Having a single-purpose election channel certainly makes it easy to spot
election activity.
Using openstack-dev increases the possibility of broader reach.

I don't have a strong opinion either way.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release] [FFE] two xstatic packages to be bumped in upper-constraints, please

2016-09-13 Thread Swapnil Kulkarni (coolsvap)
On Tue, Sep 13, 2016 at 6:09 AM, Richard Jones  wrote:
> Hi folks,
>
> We have two patches to upper-constraints up that we'd like to see merged for
> Newton. The package updates in question only changed meta-data, but they did
> so in a way that fixes issues for downstream, and it makes sense to keep
> upper-constraints in line with what they'll be packaging.
>
> The reviews are:
>
> update constraint for XStatic-Bootstrap-SCSS to new release 3.3.7.1
> https://review.openstack.org/#/c/368970/
>
> update constraint for XStatic-smart-table to new release 1.4.13.2
> https://review.openstack.org/#/c/366194/
>
>
> Thanks,
>
> Richard
>
>
>


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release] [FFE] two xstatic packages to be bumped in upper-constraints, please

2016-09-13 Thread Dirk Müller
On 13.09.2016 at 05:03, "Tony Breeds" wrote:

> > The reviews are:
> >
> > update constraint for XStatic-Bootstrap-SCSS to new release 3.3.7.1
> > https://review.openstack.org/#/c/368970/
> >
> > update constraint for XStatic-smart-table to new release 1.4.13.2
> > https://review.openstack.org/#/c/366194/

+1

Greetings,
Dirk
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for mentors and funding

2016-09-13 Thread Mahati C
Hello everyone,

An update on the Outreachy program, including a request for volunteer
mentors and funding. For those of you who are not aware, Outreachy helps
people from underrepresented groups get involved in free and open source
software  by matching interns with established mentors in the upstream
community. For more info, please visit:
https://wiki.openstack.org/wiki/Outreachy

We so far have a confirmation of one spot for OpenStack in this round of
Outreachy. But we are receiving more applicants who are interested in
contributing to different OpenStack projects. Interested mentors - please
publish your project ideas to this page
https://wiki.openstack.org/wiki/Internship_ideas. Here is a link that helps
you get acquainted with mentorship process:
https://wiki.openstack.org/wiki/Outreachy/Mentors

We are looking for additional sponsors to help support the increase in
OpenStack applicants. The sponsorship cost is 6,500 USD per intern, which
is used to provide them a stipend for the three-month program. You can
learn more about sponsorship here:
https://wiki.gnome.org/Outreachy/Admin/InfoForOrgs#Action

Outreachy has been one of the most important and effective diversity
efforts we've invested in. It has evidently been a way to retain new
contributors; we've had some amazing participants become long-term
contributors to our community.

Please help spread the word. If you are interested in becoming a mentor or
sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or
Victoria (victoria AT redhat.com).

Thanks,
Mahati
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Periodically checking Glance image files

2016-09-13 Thread Avishay Traeger
On Tue, Sep 13, 2016 at 7:16 AM, Nikhil Komawar 
wrote:
> Firstly, I'd like to mention that Glance is built-in (and if deployed
> correctly) is self-resilient in ensuring that you do NOT need an audit
> of such files. In fact, if any operator (particularly large scale
> operator) needs such a system we have a serious issue where potentially
> important /user/ data is likely to be lost resulting in legal issues (so
> please beware).

Can you please elaborate on how Glance is self-resilient?

Hey Sergio,
>
>
> Glad to know that you're not having any feature related issues (to me
> this is a good sign). Based on your answers, it makes sense to require a
> reliability solution for backend data (or some sort of health monitoring
> for the user data).
>

All backends will at some point lose some data.  The ask is for reflecting
the image's "health" to the user.


> So, I wonder what your thoughts are for such an audit system. At a first
> glance, this looks rather not scalable, at least if you plan to do the
> audit on all of the active images. Consider a deployment trying to run
> this for around 100-500K active image records. This will need to be run
> in batches, thus completing the list of records and saying that you've
> done a full audit of the active image -- is a NP-complete problem (new
> images can be introduced, some images can be updated in the meantime, etc.)
>

NP-complete?  Really?  Every storage system scrubs all data periodically to
protect from disk errors.  Glance images should be relatively static anyway.


> The failure rate is low, so a random (sparse check) on the image data
> won't help either. Would a cron job setup to do the audit for smaller
> deployments work? May be we can look into some known cron solutions to
> do the trick?
>

How about letting the backend report the health?  S3, for example, reports
an event on object loss.
The S3 driver could monitor those events and update status.  Swift performs
scrubbing to determine object health - I haven't checked if it reports an
event on object loss, but don't see any reason not to.  For local
filesystem, it would need its own scrubbing process (e.g., recalculate hash
for each object every N days).  On the other hand if it is a mount of some
filer, the filer should be able to report on health.

Thanks,
Avishay

-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

