Re: [openstack-dev] [tempest] Discussion on to enable "Citrix XenServer CI" to vote openstack/tempest

2016-06-13 Thread Jianghua Wang
Added the project prefix to the subject and looped in Masayuki and Ghanshyam,
who know the background as well. Thanks.

Jianghua

From: Jianghua Wang
Sent: Tuesday, June 14, 2016 12:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jianghua Wang
Subject: Discussion on to enable "Citrix XenServer CI" to vote openstack/tempest

Hi all,
   Recently the "Citrix XenServer CI" was broken by a bad commit [1] to
openstack/tempest. As the commit was merged on a Friday, which was a holiday
here, the CI had been failing for more than three days before we noticed and
fixed [2] the problem. Since this CI votes on openstack/nova, it kept voting
-1 until its voting was disabled.
   So I suggest we also enable this XenServer CI to vote on tempest changes to
avoid similar cases in the future. In this case, the tempest commit didn't
consider the different behaviour of type-1 hypervisors, so it broke the
XenServer tests. The "Citrix XenServer CI" actually verified that patch set
with a failure result, but the result was ignored because the CI does not vote
there. So let's enable voting to make life easier :)
Currently we have this CI voting on openstack/nova. Based on past experience,
it has been a stable CI (more stable than the Jenkins check) as long as no bad
commit breaks it.
Thanks for any comments.

[1] https://review.openstack.org/#/c/316672
[2] https://review.openstack.org/#/c/328836/

Regards,
Jianghua

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Initial oslo.privsep conversion?

2016-06-13 Thread Angus Lees
One of the challenges with nova (and I'm working from some earlier
conversations, not a first-hand reading of the code) is that we can't
restrict file operations to any particular corner of the filesystem,
because the location of the libvirt data is stored (only) in the database,
and the database is writeable by "unprivileged" nova code.  My
understanding is that it's considered a feature that the libvirt data
directory can be changed at some point, and old instances will continue to
operate in their old location just fine.

There's a number of ways to improve that (restrict to a list of configured
dirs; limit access to dirs owned by a particular (non-root) uid/gid; etc)
but any translation of the nova file-manipulation code to something more
secure will rapidly run up against this "so how do we work out what should
actually be allowed?" policy discussion.  The conclusion of that discussion
will probably require broader nova changes than simply adopting privsep -
just fyi.
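One of the mitigation options listed above, restricting file operations to a list of configured directories, could be sketched as follows; the ALLOWED_ROOTS value and the helper name are hypothetical illustrations for this discussion, not nova code:

```python
import os

# Hypothetical allowlist -- in a real deployment this would come from
# nova's configuration, not a hard-coded constant.
ALLOWED_ROOTS = ["/var/lib/nova/instances"]

def path_is_allowed(path):
    """Return True only if path resolves inside one of the allowed roots.

    realpath() is used so that symlinks and '..' components cannot be
    used to escape the allowed directories.
    """
    real = os.path.realpath(path)
    return any(real == root or real.startswith(root + os.sep)
               for root in ALLOWED_ROOTS)
```

This only addresses the mechanism; as noted above, the policy question of where the allowed roots come from (config vs. the database) is the harder part.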

 - Gus

On Fri, 10 Jun 2016 at 09:51 Tony Breeds  wrote:

> On Fri, Jun 10, 2016 at 08:24:34AM +1000, Michael Still wrote:
> > On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds 
> > wrote:
> >
> > > On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
> > >
> > > > Agreed, but it's the worked example part that we don't have yet,
> > > > chicken/egg. So we can drop the hammer on all new things until someone
> > > > does it, which sucks, or hope that someone volunteers to work the first
> > > > example.
> > >
> > > I'll work with gus to find a good example in nova and have patches up
> > > before the mid-cycle.  We can discuss next steps then.
> > >
> >
> > Sorry to be a pain, but I'd really like that example to be non-trivial if
> > possible. One of the advantages of privsep is that we can push the logic
> > down closer to the privileged code, instead of just doing something "close"
> > and then parsing. I think reinforcing that idea in the sample code is
> > important.
>
> I think *any* change will show that.  I wanted to pick something achievable
> in the short timeframe.
>
> The example I'm thinking of is nova/virt/libvirt/utils.py:update_mtime()
>
>  * It will provide a lot of the boilerplate
>  * Show that we can now replace an exec with pure python code.
>  * Show how you need to retrieve data from a trusted source on the
>    privileged side
>  * Migrate testing
>  * Remove an entry from compute.filters
>
> Once that's in place, chown() in the same file is probably a quick fix.
>
> Is it super helpful? Does it have a measurable impact on performance or
> security? The answer is probably "no".
>
> I still think it has value.
>
> Handling qemu-img is probably best done by creating os-qemu (or similar)
> and designing it from the ground up with privsep in mind.  Glance and
> Cinder would benefit from that also.  That however is way too big for this
> cycle.
>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
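The update_mtime() conversion Tony sketches above could look roughly like this; the entrypoint decorator below is a no-op stand-in for oslo.privsep's PrivContext.entrypoint (the real one dispatches the call to a separate privileged daemon), so this is only an illustration of the shape, not the actual nova patch:

```python
import os

def entrypoint(func):
    # Stand-in for oslo.privsep's PrivContext.entrypoint: the real decorator
    # runs the function in a privileged helper process instead of the
    # unprivileged caller.
    return func

@entrypoint
def update_mtime(path):
    """Touch a file's mtime in pure Python instead of execing
    'touch -c' through rootwrap (one fewer compute.filters entry)."""
    try:
        os.utime(path, None)  # None => set atime/mtime to the current time
    except OSError:
        pass  # like 'touch -c', silently ignore missing files
```

With the real decorator, the body runs with elevated privileges inside the privsep daemon, which is what lets the rootwrap filter entry be removed.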


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-13 Thread Na Zhu
Hi John,

OK, I also found that the "port_pairs" and "flow_classifiers" columns cannot
be written by the idl APIs; I will try to fix it.
If there is any update, I will send you an email.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, discuss , 
Srilatha Tangirala 
Date:   2016/06/14 12:17
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Trying to implement this today showed that this will not work for OVN. I 
am going back to RussellB's original model with port-chain as a child of 
lswitch. 

I can make this work and then we can evolve from there. It will require 
some re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu  wrote:

Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the
Logical_Switch table, but I am not clear on why. The port-pair ingress and
egress ports can be the same, or they can be different and in the same or
different networks, and the flow classifier is not per-network either. Can
you explain why you did that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM@IBMCN
To:John McDowall 
Cc:      Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, discuss
Date:2016/06/14 10:44
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

My github account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, I can update your WIP patch, and there is
no need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , Srilatha Tangirala <
srila...@us.ibm.com>, "OpenStack Development Mailing List (not for usage 
questions)" 
Date:2016/06/13 23:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Whatever is easiest for you – I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your github login I will 
add you as a collaborator to my private repo. 

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema – hopefully today or tomorrow. Most of the IDL is 
in and I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu 
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall 
Cc: discuss , Srilatha Tangirala <
srila...@us.ibm.com>, "OpenStack Development Mailing List (not for usage 
questions)" 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I know you have been busy recently; sorry to disturb you. I want to ask
whether I can submit patches to your private repo. I have tested your code
changes and found some minor errors, and I think we can work together to get
the debugging done faster; then you can submit the WIP patch.

What do you think? 




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM@IBMCN
To:John McDowall 
Cc:      Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>, discuss
Date:2016/06/09 16:18
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the
OVN driver is different. The OVS driver has to build the sfc flows and send
them to the ovs agent, while the OVN controller does not need to do that; it
only needs to send



Re: [openstack-dev] [Aodh] Ordering Alarm severity on context

2016-06-13 Thread Sanjana Pai Nagarmat

From: Sanjana Pai Nagarmat
Sent: Tuesday, June 14, 2016 9:46 AM
To: openstack-dev@lists.openstack.org
Subject: [Aodh]

Hi All,

This is with respect to bug https://launchpad.net/bugs/1452254, which asks
for alarm severity to be ordered based on context rather than
alphabetically. My code is out for review at
https://review.openstack.org/#/c/328230 .
I would like to know your opinion on the approach I have followed to solve
the bug.



Regards,
Sanjana

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-13 Thread John McDowall
Juno,

Added you to my private repos


Regards

John

Sent from my iPhone

On Jun 13, 2016, at 7:39 PM, Na Zhu wrote:

Hi John,

My github account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, I can update your WIP patch, and there is
no need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    John McDowall
To:      Na Zhu/China/IBM@IBMCN
Cc:      discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Date:2016/06/13 23:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno,

Whatever is easiest for you – I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your github login I will add 
you as a collaborator to my private repo.

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema – hopefully today or tomorrow. Most of the IDL is in and 
I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall
Cc: discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I know you have been busy recently; sorry to disturb you. I want to ask
whether I can submit patches to your private repo. I have tested your code
changes and found some minor errors, and I think we can work together to get
the debugging done faster; then you can submit the WIP patch.

What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    Na Zhu/China/IBM@IBMCN
To:      John McDowall
Cc:      Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date:2016/06/09 16:18
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the
OVN driver is different. The OVS driver has to build the sfc flows and send
them to the ovs agent, while the OVN controller does not need to do that; it
only needs to send the sfc parameters to the OVN northbound DB, and then
ovn-controller can build the sfc flows.

networking-sfc defines some common APIs for each driver (see
networking_sfc/services/sfc/drivers/base.py). I think for OVN we only need
to implement the port-chain create/update/delete methods and leave the other
methods empty. What do you think?
If you agree with me, the OVN sfc driver has to be refactored; do you want me
to do it?
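A minimal sketch of the driver shape described above, with only the port-chain hooks doing work; the class name, the nb_api object, and the context attribute are assumptions for illustration (the real driver would subclass networking_sfc.services.sfc.drivers.base.SfcDriverBase and talk to OVN through the idl):

```python
class OVNSfcDriver:
    """Sketch of an OVN networking-sfc driver: it only pushes port-chain
    parameters toward the OVN northbound DB and leaves flow building to
    ovn-controller, as described in the message above."""

    def __init__(self, nb_api):
        self.nb_api = nb_api  # stand-in for an OVN NB-DB idl connection

    def create_port_chain(self, context):
        # Push the chain parameters north; ovn-controller builds the flows.
        self.nb_api.add_port_chain(context.current)

    def update_port_chain(self, context):
        self.nb_api.update_port_chain(context.current)

    def delete_port_chain(self, context):
        self.nb_api.delete_port_chain(context.current['id'])

    def create_port_pair(self, context):
        pass  # intentionally a no-op: no flows are built on this side

    def delete_port_pair(self, context):
        pass
```

The no-op methods are the point of the design: unlike the OVS driver, nothing below the northbound DB needs to be computed here.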



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    John McDowall
To:      Amitabha Biswas
Cc:      Na Zhu/China/IBM@IBMCN, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date:2016/06/09 00:53
Subject:Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented 

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-13 Thread John McDowall
Juno,

Trying to implement this today showed that this will not work for OVN. I am 
going back to RussellB's original model with port-chain as a child of lswitch.

I can make this work and then we can evolve from there. It will require some 
re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu wrote:

Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the
Logical_Switch table, but I am not clear on why. The port-pair ingress and
egress ports can be the same, or they can be different and in the same or
different networks, and the flow classifier is not per-network either. Can
you explain why you did that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    Na Zhu/China/IBM@IBMCN
To:      John McDowall
Cc:      Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date:2016/06/14 10:44
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Hi John,

My github account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, I can update your WIP patch, and there is
no need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    John McDowall
To:      Na Zhu/China/IBM@IBMCN
Cc:      discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Date:2016/06/13 23:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Juno,

Whatever is easiest for you – I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your github login I will add 
you as a collaborator to my private repo.

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema – hopefully today or tomorrow. Most of the IDL is in and 
I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall
Cc: discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I know you have been busy recently; sorry to disturb you. I want to ask
whether I can submit patches to your private repo. I have tested your code
changes and found some minor errors, and I think we can work together to get
the debugging done faster; then you can submit the WIP patch.

What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:    Na Zhu/China/IBM@IBMCN
To:      John McDowall
Cc:      Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date:2016/06/09 16:18
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN




Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the
OVN driver is different. The OVS driver has to build the sfc flows and send
them to the ovs agent, while the OVN



Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-06-13 Thread Yuanying OTSUKA
Hi, Hongbin,

Yes, those urls are just information for our work.
We will create an etherpad page to collaborate.





On Sat, Jun 11, 2016 at 7:38, Hongbin Lu wrote:

> Yuanying,
>
>
>
> The etherpads you pointed to were a few years ago and the information
> looks a bit outdated. I think we can collaborate a similar etherpad with
> updated information (i.e. remove container runtimes that we don’t care, add
> container runtimes that we care). The existing etherpad can be used as a
> starting point. What do you think?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
> *Sent:* June-01-16 12:43 AM
> *To:* OpenStack Development Mailing List (not for usage questions); Sheel
> Rana Insaan
> *Cc:* adit...@nectechnologies.in; yanya...@cn.ibm.com;
> flw...@catalyst.net.nz; Qi Ming Teng; sitlani.namr...@yahoo.in; Yuanying;
> Chandan Kumar
> *Subject:* Re: [openstack-dev] [Higgins] Call for contribution for
> Higgins API design
>
>
>
> Just F.Y.I.
>
>
>
> When Magnum wanted to become “Container as a Service”,
>
> There were some discussion about API design.
>
>
>
> * https://etherpad.openstack.org/p/containers-service-api
>
> * https://etherpad.openstack.org/p/openstack-containers-service-api
>
>
>
>
>
>
>
> On Wed, Jun 1, 2016 at 12:09, Hongbin Lu wrote:
>
> Sheel,
>
>
>
> Thanks for taking the responsibility. Assigned the BP to you. As
> discussed, please submit a spec for the API design. Feel free to let us
> know if you need any help.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
> *Sent:* May-31-16 9:23 PM
> *To:* Hongbin Lu
> *Cc:* adit...@nectechnologies.in; vivek.jain.openst...@gmail.com;
> flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack
> Development Mailing List (not for usage questions); Chandan Kumar;
> hai...@xr.jp.nec.com; Qi Ming Teng; sitlani.namr...@yahoo.in; Yuanying;
> Kumari, Madhuri; yanya...@cn.ibm.com
> *Subject:* Re: [Higgins] Call for contribution for Higgins API design
>
>
>
> Dear Hongbin,
>
> I am interested in this.
> Thanks!!
>
> Best Regards,
> Sheel Rana
>
> On Jun 1, 2016 3:53 AM, "Hongbin Lu"  wrote:
>
> Hi team,
>
>
>
> As discussed in the last team meeting, we agreed to define core use cases
> for the API design. I have created a blueprint for that. We need an owner
> of the blueprint and it requires a spec to clarify the API design. Please
> let me know if you interest in this work (it might require a significant
> amount of time to work on the spec).
>
>
>
> https://blueprints.launchpad.net/python-higgins/+spec/api-design
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Gary Kotton
That is already supported in stable/mitaka – please see 
https://review.openstack.org/#/c/260700/
I agree with Kevin

From: Kevin Benton 
Reply-To: OpenStack List 
Date: Monday, June 13, 2016 at 11:59 PM
To: OpenStack List 
Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for 
wiring trunk ports

+1. Neutron should already be able to tell Nova which bridge to use for an OVS 
port.[1] For the Linux bridge implementation it's a matter of creating vlan 
interfaces and plugging them into bridges like regular VM ports, which is all 
the responsibility of the L2 agent. We shouldn't need any changes from Nova or 
os-vif from what I can see.



1. 
https://github.com/openstack/nova/blob/6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b/nova/network/neutronv2/api.py#L1618
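A small sketch of Kevin's point above that the bridge can already travel with the port; the 'bridge_name' key follows nova.network.model's VIF_DETAILS_BRIDGE_NAME, and the port dicts are made-up examples rather than real Neutron API output:

```python
DEFAULT_OVS_BRIDGE = 'br-int'

def bridge_for_port(port):
    # Neutron can report a per-port bridge through binding:vif_details;
    # a trunk port would carry its demux bridge here, while everything
    # else falls back to the integration bridge.
    details = port.get('binding:vif_details') or {}
    return details.get('bridge_name', DEFAULT_OVS_BRIDGE)

trunk_port = {'binding:vif_details': {'bridge_name': 'tbr-3f4a'}}
plain_port = {'binding:vif_details': {}}
```

If this is sufficient, the trunk-bridge wiring stays entirely on the Neutron/L2-agent side, which is exactly the argument being made in this message.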

On Mon, Jun 13, 2016 at 5:26 AM, Mooney, Sean K wrote:


> -----Original Message-----
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Monday, June 13, 2016 1:12 PM
> To: Armando M.
> Cc: Carl Baldwin; OpenStack Development Mailing List; Jay Pipes;
> Maxime Leroy; Moshe Levi; Russell Bryant; sahid; Mooney, Sean K
> Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk
> ports
>
> On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> > On 13 June 2016 at 10:35, Daniel P. Berrange wrote:
> >
> > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > > Hi,
> > > >
> > > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > > Neutron.  If not, there is a spec and a fair number of patches in
> > > > progress for this.  Essentially, the goal is to allow a VM to
> > > > connect to multiple Neutron networks by tagging traffic on a
> > > > single port with VLAN tags.
> > > >
> > > > This effort will have some effect on vif plugging because the
> > > > datapath will include some changes that will effect how vif
> > > > plugging is done today.
> > > >
> > > > The design proposal for trunk ports with OVS adds a new bridge for
> > > > each trunk port.  This bridge will demux the traffic and then
> > > > connect to br-int with patch ports for each of the networks.
> > > > Rawlin Peters has some ideas for expanding the vif capability to
> > > > include this wiring.
> > > >
> > > > There is also a proposal for connecting to linux bridges by using
> > > > kernel vlan interfaces.
> > > >
> > > > This effort is pretty important to Neutron in the Newton
> > > > timeframe.  I wanted to send this out to start rounding up the
> > > > reviewers and other participants we need to see how we can start
> > > > putting together a plan for nova integration of this feature (via
> > > > os-vif?).
> > >
> > > I've not taken a look at the proposal, but on the timing side of
> > > things it is really way too late to start this email thread asking
> > > for design input from os-vif or nova. We're way past the spec
> > > proposal deadline for Nova in the Newton cycle, so nothing is going
> > > to happen until the Ocata cycle no matter what Neutron wants in
> > > Newton.
> >
> >
> > For sake of clarity, does this mean that the management of the os-vif
> > project matches exactly Nova's, e.g. same deadlines and processes
> > apply, even though the core team and its release model are different
> > from Nova's?
> > I may have erroneously implied that it wasn't, also from past talks I
> > had with johnthetubaguy.
>
> No, we don't intend to force ourselves to only release at milestones
> like nova does. We'll release the os-vif library whenever there is new
> functionality in its code that we need to make available to
> nova/neutron.
> This could be as frequently as once every few weeks.
[Mooney, Sean K]
I have been tracking and contributing to the vlan aware vm work in
neutron since the Vancouver summit, so I am quite familiar with what would
have to be modified to support vlan trunking. Provided the modifications do
not delay 

[openstack-dev] [daisycloud-core] IRC weekly meeting logistics

2016-06-13 Thread hu . zhijiang
Hi Team,

Here is the IRC weekly meeting logistics: 
 

Weekly on Friday at 1200 UTC, You can check out your local time here: 
http://www.timeanddate.com/worldclock/fixedtime.html?hour=12=0=0
IRC channel: #daisycloud at freenode

So our first meeting will be on this Friday (Jun 17). The agenda mainly is:

1. Rollcall
2. Welcome Jaivish, and have everyone introduce him/herself for the very 
first time over this IRC channel.
3. Daisy status update
4. Daisy for NFV update



Could anyone please help me update the logistics info as well as the 
contributor info at https://wiki.openstack.org/wiki/Daisy ? I cannot log 
into https://wiki.openstack.org any more; I don't know why, but I think it 
is a problem with the wiki.openstack.org website. Every time I try to log 
in it shows me a blank page, and if I refresh it, it shows the error: "Nonce 
already used or out of range"


Our current active contributor list

-
Name             IRC Nick     Email
-------------------------------------------------------------------
Zhijiang Hu      huzhj        hu.zhiji...@zte.com.cn
Jaivish Kothari  janonymous   janonymous.codevult...@gmail.com
Wei Kong                      kong.w...@zte.com.cn
Yao Lu                        lu.yao...@zte.com.cn
Ya Zhou                       zhou...@zte.com.cn
Jing Sun                      sun.jin...@zte.com.cn


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-13 Thread Na Zhu
Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the
Logical_Switch table, but I am not clear on why. The port-pair ingress and
egress ports can be the same, or they can be different and in the same or
different networks, and the flow classifier is not per-network either. Can
you explain why you did that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Na Zhu/China/IBM@IBMCN
To: John McDowall 
Cc: Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date:   2016/06/14 10:44
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

My github account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, I can update your WIP patch, and there is
no need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , Srilatha Tangirala 
, "OpenStack Development Mailing List (not for usage 
questions)" 
Date:2016/06/13 23:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Whatever is easiest for you – I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your github login I will 
add you as a collaborator to my private repo. 

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema – hopefully today or tomorrow. Most of the IDL is 
in and I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu 
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall 
Cc: discuss , Srilatha Tangirala <
srila...@us.ibm.com>, "OpenStack Development Mailing List (not for usage 
questions)" 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

I know you have been busy recently; sorry to disturb you. I want to ask
whether I can submit patches to your private repo. I have tested your code
changes and found some minor errors, and I think we can work together to get
the debugging done faster; then you can submit the WIP patch.

What do you think? 




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:    Na Zhu/China/IBM@IBMCN
To:      John McDowall 
Cc:      Srilatha Tangirala , "OpenStack 
Development Mailing List \(not for usage questions\)" <
openstack-dev@lists.openstack.org>, discuss 
Date:    2016/06/09 16:18
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the OVN 
driver is different from the OVS driver. The OVS driver has to build the SFC 
flows and send them to the OVS agent, while the OVN driver does not need to 
do that: it only needs to send the SFC parameters to the OVN northbound DB, 
and then ovn-controller can build the SFC flows.

networking-sfc defines some common APIs for each driver (see 
networking_sfc/services/sfc/drivers/base.py). I think for OVN, we only need 
to implement the methods for port-chain create/update/delete and leave the 
other methods empty. What do you think? 
If you agree with me, the OVN SFC driver has to be refactored; do you want 
me to do it?
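To make the proposed split concrete, here is a hypothetical sketch (not the actual networking-sfc or networking-ovn code; the base class lives in networking_sfc/services/sfc/drivers/base.py, but the class and client names below are invented stand-ins) of a driver that only forwards port-chain events northbound and leaves the flow-building hooks as no-ops:

```python
# Hypothetical sketch of the driver split described above: the OVN driver
# pushes SFC parameters to the OVN northbound DB and lets ovn-controller
# build the flows. SfcDriverBase and NorthboundClient are illustrative
# stand-ins, not the real networking-sfc / OVN APIs.

class SfcDriverBase:
    """Stand-in for the common driver API networking-sfc defines."""
    def create_port_chain(self, context): raise NotImplementedError
    def update_port_chain(self, context): raise NotImplementedError
    def delete_port_chain(self, context): raise NotImplementedError
    def create_port_pair(self, context): raise NotImplementedError


class NorthboundClient:
    """Stand-in for a client that writes rows to the OVN northbound DB."""
    def __init__(self):
        self.rows = []

    def add_chain(self, params):
        self.rows.append(params)


class OvnSfcDriver(SfcDriverBase):
    """Only hands SFC parameters north; ovn-controller derives the flows."""

    def __init__(self, nb_client):
        self.nb = nb_client

    def create_port_chain(self, context):
        # Unlike the OVS driver, no flows are computed here -- just record
        # the chain parameters in the OVN northbound DB.
        self.nb.add_chain({"op": "create", "chain": context})

    def update_port_chain(self, context):
        self.nb.add_chain({"op": "update", "chain": context})

    def delete_port_chain(self, context):
        self.nb.add_chain({"op": "delete", "chain": context})

    def create_port_pair(self, context):
        pass  # intentionally empty: nothing to program from this side


if __name__ == "__main__":
    nb = NorthboundClient()
    driver = OvnSfcDriver(nb)
    driver.create_port_chain({"id": "pc1"})
    driver.create_port_pair({"id": "pp1"})  # no-op by design
    print(len(nb.rows))  # -> 1
```

The point of the sketch is the asymmetry: only the port-chain methods do real work, which is why leaving the remaining base-class methods empty is a reasonable refactoring target.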



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:    John McDowall 
To:      Amitabha Biswas 
Cc:      Na Zhu/China/IBM@IBMCN, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, discuss 
Date:    2016/06/09 00:53
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented 
it. I think it is a good solution as it minimizes impact on both 
networking-ovn and networking-sfc. I have updated my repos; if you have 
suggestions for improvements let me know.

[openstack-dev] "OpenStack-dev" mailing list

2016-06-13 Thread Sanjana Pai Nagarmat
Email : sanj...@hitachi.co.in

Thanks and Regards,
Sanjana


-Original Message-
From: openstack-dev-requ...@lists.openstack.org 
[mailto:openstack-dev-requ...@lists.openstack.org] 
Sent: Tuesday, June 14, 2016 8:52 AM
To: Sanjana Pai Nagarmat
Subject: Welcome to the "OpenStack-dev" mailing list

Welcome to the OpenStack-dev@lists.openstack.org mailing list!

To post to this list, send your email to:

  openstack-dev@lists.openstack.org

General information about the mailing list is at:

  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

If you ever want to unsubscribe or change your options (eg, switch to or from 
digest mode, change your password, etc.), visit your subscription page at:

  
http://lists.openstack.org/cgi-bin/mailman/options/openstack-dev/sanjana%40hitachi.co.in


You can also make such adjustments via email by sending a message to:

  openstack-dev-requ...@lists.openstack.org

with the word `help' in the subject or body (don't include the quotes), and you 
will get back a message with instructions.

You must know your password to change your options (including changing the 
password, itself) or to unsubscribe.  It is:

  Hitachi1

Normally, Mailman will remind you of your lists.openstack.org mailing list 
passwords once every month, although you can disable this if you prefer.  This 
reminder will also include instructions on how to unsubscribe or change your 
account options.  There is also a button on your options page that will email 
your current password to you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] vision on new modules

2016-06-13 Thread Matt Fischer
On Wed, Jun 8, 2016 at 2:42 PM, Emilien Macchi  wrote:

> Hi folks,
>
> Over the last months we've been creating more and more modules [1] [2]
> and I would like to take the opportunity to continue some discussion
> we had during the last Summits about the quality of our modules.
>
> [1] octavia, vitrage, ec2api, tacker, watcher, congress, magnum,
> mistral, zaqar, etc.
> [2] by the end of Newton, we'll have ~ 33 Puppet modules !
>
> Announce your work
> As a reminder, we have defined a process when adding new modules:
> http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html
> This process is really helpful to scale our project and easily add modules.
> If you're about to start a new module, I suggest you start with this
> process and avoid starting it on your personal GitHub, because you'll
> lose the valuable community review on your work.
>
> Iterate
> I've noticed some folks pushing 3000 LOC to Gerrit when adding the
> bits for new Puppet modules (after the first cookiecutter init).
> That's IMHO bad, because it makes reviews harder and slower, and
> exposes the risk of missing something during the review process. Please
> write modules bit by bit.
> Example: start with init.pp for common bits, then api.pp, etc.
> For each bit, add its unit tests & functional tests (beaker). It will
> allow us to write modules with good design, good tests and good code
> in general.
>
> Write tests
> A good Puppet module is one that we can use to successfully deploy an
> OpenStack service. For that, please add beaker tests when you're
> initiating a module. Not at the end of your work, but for every new
> class or feature.
> It helps to easily detect issues that we'll hit when running the Puppet
> catalog and to quickly fix them. It also helps the community to report
> feedback on packaging and Tempest, or to detect issues in our libraries.
> If you're not familiar with beaker, you'll see in existing modules
> that there is nothing complicated, we basically write a manifest that
> will deploy the service.
>
>
> If you're new in this process, please join our IRC channel on freenode
> #puppet-openstack and don't hesitate to poke us.
>
> Any feedback / comment is highly welcome,
> Thanks,
> --
> Emilien Macchi
>
>
I like the ideas, especially the one about 3000-line commits. I started with
your tips and added them to the docs:

 https://review.openstack.org/329253 Document Emilien's tips for new
modules
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-13 Thread Na Zhu
Hi John,

My GitHub account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, then I can update your WIP patch and there 
is no need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: discuss , Srilatha Tangirala 
, "OpenStack Development Mailing List (not for usage 
questions)" 
Date:   2016/06/13 23:55
Subject:    Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Juno,

Whatever is easiest for you - I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your GitHub login I will 
add you as a collaborator to my private repo. 

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema - hopefully today or tomorrow. Most of the IDL is 
in and I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu 
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall 
Cc: discuss , Srilatha Tangirala <
srila...@us.ibm.com>, "OpenStack Development Mailing List (not for usage 
questions)" 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN

Hi John,

I know you have been busy recently; sorry to disturb you. I want to ask 
whether I can submit patches to your private repo. I tested your code changes 
and found some minor errors; I think we can work together to finish the 
debugging faster, and then you can submit the WIP patch.

What do you think? 




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:    Na Zhu/China/IBM@IBMCN
To:      John McDowall 
Cc:      Srilatha Tangirala , "OpenStack 
Development Mailing List \(not for usage questions\)" <
openstack-dev@lists.openstack.org>, discuss 
Date:    2016/06/09 16:18
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the OVN 
driver is different from the OVS driver. The OVS driver has to build the SFC 
flows and send them to the OVS agent, while the OVN driver does not need to 
do that: it only needs to send the SFC parameters to the OVN northbound DB, 
and then ovn-controller can build the SFC flows.

networking-sfc defines some common APIs for each driver (see 
networking_sfc/services/sfc/drivers/base.py). I think for OVN, we only need 
to implement the methods for port-chain create/update/delete and leave the 
other methods empty. What do you think? 
If you agree with me, the OVN SFC driver has to be refactored; do you want 
me to do it?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:    John McDowall 
To:      Amitabha Biswas 
Cc:      Na Zhu/China/IBM@IBMCN, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, discuss 
Date:    2016/06/09 00:53
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented 
it. I think it is a good solution as it minimizes impact on both 
networking-ovn and networking-sfc. I have updated my repos, if you have 
suggestions for improvements let me know.

I agree that there needs to be some refactoring of the networking-sfc 
driver code. I think the team did a good job with it as it was easy for me 
to create the OVN driver ( copy and paste). As more drivers are created I 
think the model will get polished and refactored.

Regards

John

From: Amitabha Biswas 
Date: Tuesday, June 7, 2016 at 11:36 PM
To: John McDowall 
Cc: Na Zhu , Srilatha Tangirala , 
"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, discuss 
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN

Hi John, 

Looking at the code with Srilatha, it seems like the 

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

Thank you for your guidance. I’ll take a look at the related existing 
blueprints and patches first, to see whether there is any duplicated work.

Regards,
Gary

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Monday, June 13, 2016 10:58 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Gary,

It is hard to tell whether your change fits into Magnum upstream unless there 
are further details. I encourage you to upload your changes to Gerrit, so 
that we can review and discuss them inline. Also, keep in mind that the change 
might be rejected if it doesn’t fit upstream objectives or duplicates other 
existing work, but I hope that won’t discourage your contribution. If your 
change is related to Ironic, we might request that you coordinate your work 
with Spyros and/or others who are working on Ironic integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: June-13-16 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> wrote:
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model, 
along with other Python code (making Magnum find the correct Heat template), 
which is used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

This feature is useful to magnum and there is a blueprint for that:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the ironic integration, we should modify the existing templates, there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new YAML files or modify the existing kubemaster,
minion and cluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-Original Message-
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
definition do not need versioning, because we can get Heat to pass back the 
template which it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts – bay telemetry, bay monitoring, container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring health of individual nodes in the bay 
cluster and we decided to postpone work as more investigation is required on 

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Spyros,

Thank you for pointing out the blueprint and the patch.
What we have done is modify the existing kubecluster-ironic, 
kubemaster-ironic and kubeminion-ironic YAML files.
I will take a look at the blueprint and the patch you pointed out.

Regards,
Gary

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: Monday, June 13, 2016 3:59 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> wrote:
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model, 
along with other Python code (making Magnum find the correct Heat template), 
which is used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

This feature is useful to magnum and there is a blueprint for that:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the ironic integration, we should modify the existing templates, there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new YAML files or modify the existing kubemaster,
minion and cluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-Original Message-
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
definition do not need versioning, because we can get Heat to pass back the 
template which it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts – bay telemetry, bay monitoring, container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring health of individual nodes in the bay 
cluster and we decided to postpone work as more investigation is required on 
what this should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will be handled by Magnum simply 
by adding access to cAdvisor/Heapster, i.e. baking in access to cAdvisor by 
default.
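The bay-telemetry part above (notifications around bay/baymodel CRUD operations) can be sketched roughly as follows. This is a hedged illustration, not the actual Magnum code: in Magnum the notifications go through an oslo.messaging Notifier so Ceilometer can consume them, and the `Notifier` class, event-type strings, and payload fields below are invented stand-ins.

```python
# Illustrative sketch of the bay-telemetry pattern: emit a notification
# at the end of each bay CRUD operation. The Notifier here is a minimal
# stand-in for an oslo.messaging Notifier; event types and payload keys
# are assumptions for the sake of the example.

class Notifier:
    """Minimal stand-in for an oslo.messaging Notifier."""
    def __init__(self):
        self.emitted = []

    def info(self, ctxt, event_type, payload):
        # A real Notifier would publish on the message bus; here we just
        # record the event so the pattern is visible.
        self.emitted.append((event_type, payload))


def create_bay(notifier, ctxt, bay):
    # ... the actual bay creation via Heat would happen here ...
    notifier.info(ctxt, "magnum.bay.create.end",
                  {"bay_uuid": bay["uuid"], "name": bay["name"]})
    return bay


if __name__ == "__main__":
    n = Notifier()
    create_bay(n, {}, {"uuid": "1234", "name": "k8s-bay"})
    event_type, payload = n.emitted[0]
    print(event_type)  # -> magnum.bay.create.end
```

The design point is that telemetry is a side effect of the CRUD path itself, which is why it could land before the separate bay-monitoring and container-monitoring work.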

- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneous bay nodes (e.g. different 
availability zones, flavors), but the details need further elaboration.

The idea revolves around creating a heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.


Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-13 Thread Yuanying OTSUKA
+1

Thanks
-yuanying

On Tue, June 14, 2016 at 10:55, Kai Qiang Wu wrote:

> +1 Welcome to new one :)
>
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> 
> Follow your heart. You are miracle!
>
>
> From: 王华 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 13/06/2016 05:30 pm
> Subject: Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition,
> and removal notice
> --
>
>
>
>
> +1
>
> On Fri, Jun 10, 2016 at 5:32 PM, Shuu Mutou <*shu-mu...@rf.jp.nec.com*
> > wrote:
>
>Hi team,
>
>I propose the following changes to the magnum-ui core group.
>
>+ Thai Tran
>  *http://stackalytics.com/report/contribution/magnum-ui/90*
>
>  I'm so happy to propose Thai as a core reviewer.
>  His reviews have been extremely valuable for us.
>  And he is active Horizon core member.
>  I believe his help will lead us to the correct future.
>
>- David Lyle
>
>
> *http://stackalytics.com/?metric=marks_type=openstack=all=magnum-ui_id=david-lyle*
>
> 
>  No activities for Magnum-UI since Mitaka cycle.
>
>- Harsh Shah
>  *http://stackalytics.com/report/users/hshah*
>
>  No activities for OpenStack in this year.
>
>- Ritesh
>  *http://stackalytics.com/report/users/rsritesh*
>
>  No activities for OpenStack in this year.
>
>Please respond with your +1 votes to approve this change or -1 votes
>to oppose.
>
>Thanks,
>Shu
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>*openstack-dev-requ...@lists.openstack.org?subject:unsubscribe*
>
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-13 Thread Kai Qiang Wu
+1  Welcome to new one :)



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   13/06/2016 05:30 pm
Subject:Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition,
and removal notice



+1

On Fri, Jun 10, 2016 at 5:32 PM, Shuu Mutou 
wrote:
  Hi team,

  I propose the following changes to the magnum-ui core group.

  + Thai Tran
    http://stackalytics.com/report/contribution/magnum-ui/90
    I'm so happy to propose Thai as a core reviewer.
    His reviews have been extremely valuable for us.
    And he is active Horizon core member.
    I believe his help will lead us to the correct future.

  - David Lyle

  
http://stackalytics.com/?metric=marks_type=openstack=all=magnum-ui_id=david-lyle

    No activities for Magnum-UI since Mitaka cycle.

  - Harsh Shah
    http://stackalytics.com/report/users/hshah
    No activities for OpenStack in this year.

  - Ritesh
    http://stackalytics.com/report/users/rsritesh
    No activities for OpenStack in this year.

  Please respond with your +1 votes to approve this change or -1 votes to
  oppose.

  Thanks,
  Shu


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-06-13 Thread Yuanying OTSUKA
Hi, Spyros

I updated the Ironic Heat template and succeeded in booting a k8s bay with
Ironic. Could you test it?

Unfortunately there are some problems and requirements for testing, which I
describe below.

* The subnet which belongs to the private network should be set up with
dns_nameservers, like the following:

$ neutron subnet-update private-subnet --dns-nameserver 8.8.8.8

* modify ironic.nodes table

$ alter table ironic.nodes modify instance_info LONGTEXT;

* baymodel

$ magnum baymodel-create --name kubernetes --keypair-id default \
   --server-type bm \
   --external-network-id public \
   --fixed-network private \
   --image-id fedora-k8s \
   --flavor-id baremetal \
   --network-driver flannel \
   --coe kubernetes

* Fedora image
The following procedure depends on these diskimage-builder fixes:
https://review.openstack.org/#/c/247296/
https://review.openstack.org/#/c/320968/10/magnum/elements/kubernetes/README.md

* my local.conf to setup ironic env
http://paste.openstack.org/show/515877/


Thanks
-yuanying


On Wed, May 25, 2016 at 22:00, Yuanying OTSUKA wrote:

> Hi, Spyros
>
> I fixed a conflicts and upload following patch.
> * https://review.openstack.org/#/c/320968/
>
> But it isn’t tested yet, maybe it doesn’t work..
> If you have a question, please feel free to ask.
>
>
> Thanks
> -yuanying
>
>
>
> On Wed, May 25, 2016 at 17:56, Spyros Trigazis wrote:
>
>> Hi Yuanying,
>>
>> please upload your workaround. I can test it and try to fix the conflicts.
>> Even if it conflicts we can have some iterations on it.
>>
>> I'll upload later what worked for me on devstack.
>>
>> Thanks,
>> Spyros
>>
>> On 25 May 2016 at 05:13, Yuanying OTSUKA  wrote:
>>
>>> Hi, Hongbin, Spyros.
>>>
>>> I’m also interesting this work.
>>> I have workaround patch to support ironic.
>>> (but currently conflict with master.
>>> Is it helpful to upload it for initial step of the implementation?
>>>
>>> Thanks
>>> -yuanying
>>>
>>> On Wed, May 25, 2016 at 6:52, Hongbin Lu wrote:
>>>
 Hi all,



 One of the most important features that the Magnum team wants to deliver in
 Newton is full baremetal support. There is a blueprint [1] created for
 that and the blueprint was marked as “essential” (that is the highest
 priority). Spyros is the owner of the blueprint and he is looking for help
 from other contributors. For now, we immediately need help to fix the
 existing Ironic templates [2][3][4] that are used to provision a Kubernetes
 cluster on top of baremetal instances. These templates used to work, but
 they have become outdated. We need help to fix those Heat templates as an
 initial step of the implementation. Contributors are expected to follow
 the Ironic devstack guide to set up the environment. Then, exercise those
 templates in Heat.



 If you are interested in taking on this work, please contact Spyros or me and we
 will coordinate the efforts.



 [1]
 https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support

 [2]
 https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml

 [3]
 https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubemaster-fedora-ironic.yaml

 [4]
 https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubeminion-fedora-ironic.yaml



 Best regards,

 Hongbin

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Let me know if you have an approved spec but unapproved blueprint

2016-06-13 Thread joehuang
Hi, Matt,

Thank you for the clarification.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Monday, June 13, 2016 9:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Let me know if you have an approved spec 
but unapproved blueprint

On 6/12/2016 7:48 PM, joehuang wrote:
> Hello,
>
> This spec is not approved yet: 
> https://review.openstack.org/#/c/295595/
>
> But the BP is approved: 
> https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-a
> pi
>
> I don't know how to deal with the spec now. Is this spec killed? Should Nova 
> support application-level consistency snapshots for disaster recovery 
> purposes or not?
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
>
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Sunday, June 12, 2016 9:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova] Let me know if you have an approved 
> spec but unapproved blueprint
>
> I've come across several changes up for review that are tied to Newton 
> blueprints which have specs approved but the blueprints in launchpad are not 
> yet approved.
>
> If you have a spec that was approved for Newton but your blueprint in 
> launchpad isn't approved yet, please ping me (mriedem) in IRC or reply to 
> this thread to get it approved and tracked for the Newton release.
> It's important (at least to me) that we have an accurate representation of 
> how much work we're trying to get done this release, especially with 
> non-priority feature freeze coming up in three weeks.
>

Neither the spec nor the blueprint is approved. The blueprint was previously 
approved in mitaka but is not for newton, with reasons in the spec review for 
newton.

At this point we're past non-priority spec approval freeze, so this isn't going 
to get in for Newton. There are a lot of concerns about this one, so it's going 
to be tabled for at least this release; we can revisit in Ocata, but it adds a 
lot of complexity and it's more than we're willing to take on right now given 
everything else planned for this release.

-- 

Thanks,

Matt Riedemann




Re: [openstack-dev] [octavia] enabling new topologies

2016-06-13 Thread Stephen Balukoff
Hey Sergey,

In-line comments below:

On Sun, Jun 5, 2016 at 8:07 AM, Sergey Guenender  wrote:

>
> Hi Stephen, please find my reply next to your points below.
>
> Thank you,
> -Sergey.
>
>
> On 01/06/2016 20:23, Stephen Balukoff wrote:
> > Hey Sergey--
> >
> > Apologies for the delay in my response. I'm still wrapping my head
> > around your option 2 suggestion and the implications it might have for
> > the code base moving forward. I think, though, that I'm against your
> > option 2 proposal and in favor of option 1 (which, yes, is more work
> > initially) for the following reasons:
> >
> > A. We have a precedent in the code tree with how the stand-alone and
> > active-standby topologies are currently being handled. Yes, this does
> > entail various conditionals and branches in tasks and flows-- which is
> > not really that ideal, as it means the controller worker needs to have
> > more specific information on how topologies work than I think any of us
> > would like, and this adds some rigidity to the implementation (meaning
> > 3rd party vendors may have more trouble interfacing at that level)...
> > but it's actually "not that bad" in many ways, especially given we don't
> > anticipate supporting a large or variable number of topologies.
> > (stand-alone, active-standby, active-active... and then what? We've been
> > doing this for a number of years and nobody has mentioned any radically
> > new topologies they would like in their load balancing. Things like
> > auto-scale are just a specific case of active-active).
>
> Just as you say, two topologies are being handled as of now by only one
> set of flows. Option two goes along the same lines, instead of adding new
> flows for active-active it suggests that minor adjustments to existing
> flows can also satisfy active-active.
>

My point was that I think the distributor and amphora roles are different
enough that they ought to have separate drivers, separate flows, etc.
almost entirely. There's not much difference between a stand-alone amphora
and an amphora in an active-standby topology. However, there's a huge
difference between both of these and a distributor (which will have its own
back-end API, for example).


>
> > B. If anything Option 2 builds more less-obvious rigidity into the
> > implementation than option 1. For example, it makes the assumption that
> > the distributor is necessarily an amphora or service VM, whereas we have
> > already heard that some will implement the distributor as a pure network
> > routing function that isn't going to be managed the same way other
> > amphorae are.
>
> This is a good point. By looking at the code, I see there are comments
> mentioning the intent to share amphora between several load balancers.
> Although probably not straightforward to implement, it might be a good idea
> one day, but the fact is it looks like amphora has not been shared between
> load balancers for a few years.
>
> Personally, when developing something complex, I believe in taking baby
> steps. If the virtual, non-shared distributor (which is promised by the AA
> blueprint anyway) is the smallest step towards a working active-active,
> then I guess it should be considered as the first step to take.
>

The AA blueprint has yet to be approved (and there appear to be a *lot* of
comments on the latest revision). But yes-- in general you need to walk
before you can run. But instead of torturing analogies, let me say this:
Assumptions about design are reflected in the code. So, I generally like to
do my best getting the design right... and then any baby steps taken should
be evaluated against that end design to ensure they don't introduce
assumptions that will make it difficult to get there.



>
> Unless of course, it precludes implementing the following, more complex
> topologies.
>
> My belief is it doesn't have to. The proposed change alone (splitting
> amphorae into sub-clusters to be used by the many for-loops) doesn't force
> any special direction on its own. Any future topology may leave its
> "front-facing amphorae" set equal to its "back-facing amphorae" which
> brings it back to the current style of for-loops handling.
>

See, I disagree that an amphora and a distributor are even really similar.
The idea that a distributor is just a front-facing amphora I think is
fundamentally false-- Especially if distributors are implemented with a
direct-return topology (as the blueprint under evaluation describes) then
they're almost nothing alike. The distributor service VM will not be
running haproxy and will be running its own unique API specifically because
it's fulfilling a vastly different role in the topology than the amphorae
fill.
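To ground the terminology, the "sub-cluster" for-loop split from option 2 would amount to roughly the following. This is purely illustrative; the role names and fields are invented here and are not Octavia's actual data model:

```python
import collections

# Illustrative only: option 2 splits a load balancer's amphorae into
# front-facing and back-facing sets for the existing for-loops; when the
# two sets are equal, this reduces to today's stand-alone/active-standby
# handling.  Roles and fields are invented, not Octavia's model.
Amphora = collections.namedtuple('Amphora', ['id', 'role'])


def partition_amphorae(amphorae):
    front = [a for a in amphorae if a.role == 'DISTRIBUTOR']
    back = [a for a in amphorae if a.role != 'DISTRIBUTOR']
    if not front:            # no distributors: front set == back set,
        front = list(back)   # i.e. the current style of for-loops
    return front, back
```

Whether this reduction is the right abstraction is exactly what is being debated above: it assumes a distributor is just a "front-facing amphora", which the reply disputes.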


>
> > C. Option 2 seems like it's going to have a lot more permutations that
> > would need testing to ensure that code changes don't break existing /
> > potentially supported functionality. Option 1 keeps the distributor and
> > amphorae management code separate, which means tests should be 

Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Peters, Rawlin
On Monday, June 13, 2016 6:28 AM, Daniel P. Berrange wrote:
> 
> On Mon, Jun 13, 2016 at 07:39:29AM -0400, Assaf Muller wrote:
> > On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange
>  wrote:
> > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > >> Hi,
> > >>
> > >> You may or may not be aware of the vlan-aware-vms effort [1] in
> > >> Neutron.  If not, there is a spec and a fair number of patches in
> > >> progress for this.  Essentially, the goal is to allow a VM to
> > >> connect to multiple Neutron networks by tagging traffic on a single
> > >> port with VLAN tags.
> > >>
> > >> This effort will have some effect on vif plugging because the
> > >> datapath will include some changes that will effect how vif
> > >> plugging is done today.
> > >>
> > >> The design proposal for trunk ports with OVS adds a new bridge for
> > >> each trunk port.  This bridge will demux the traffic and then
> > >> connect to br-int with patch ports for each of the networks.
> > >> Rawlin Peters has some ideas for expanding the vif capability to
> > >> include this wiring.
> > >>
> > >> There is also a proposal for connecting to linux bridges by using
> > >> kernel vlan interfaces.
> > >>
> > >> This effort is pretty important to Neutron in the Newton timeframe.
> > >> I wanted to send this out to start rounding up the reviewers and
> > >> other participants we need to see how we can start putting together
> > >> a plan for nova integration of this feature (via os-vif?).
> > >
> > > I've not taken a look at the proposal, but on the timing side of
> > > things it is really way too late to start this email thread asking
> > > for design input from os-vif or nova. We're way past the spec
> > > proposal deadline for Nova in the Newton cycle, so nothing is going
> > > to happen until the Ocata cycle no matter what Neutron wants in
> > > Newton. For os-vif our focus right now is exclusively on getting
> > > existing functionality ported over, and integrated into Nova in
> > > Newton. So again we're not really looking to spend time on further os-vif
> design work right now.
> > >
> > > In the Ocata cycle we'll be looking to integrate os-vif into Neutron
> > > to let it directly serialize VIF objects and send them over to Nova,
> > > instead of using the ad-hoc port-binding dicts.  From the Nova side,
> > > we're not likely to want to support any new functionality that
> > > affects port-binding data until after Neutron is converted to
> > > os-vif. So Ocata at the earliest, but probably more like P,
> > > unless the Neutron conversion to os-vif gets completed unexpectedly
> quickly.
> >
> > In light of this feature being requested by the NFV, container and
> > baremetal communities, and that Neutron's os-vif integration work
> > hasn't begun, does it make sense to block Nova VIF work? Are we
> > comfortable, from a wider OpenStack perspective, to wait until
> > possibly the P release? I think it's our collective responsibility as
> > developers to find creative ways to meet deadlines, not serializing
> > work on features and letting processes block us.
> 
> Everyone has their own personal set of features that are their personal
> priority items. Nova evaluates all the competing demands and decides on
> what the project's priorities are for the given cycle. For Newton Nova's
> priority is to convert existing VIF functionality to use os-vif. Anything 
> else vif
> related takes a backseat to this project priority. This formal modelling of 
> VIFs
> and developing a plugin facility has already been strung out over at least 3
> release cycles now. We're finally in a position to get it completed, and we're
> not going to divert attention away from this, to other new features requests
> until its done as that'll increase the chances of it getting strung out for 
> yet
> another release which is in no one's interests.

I think we are all in agreement that integrating os-vif into Nova during the 
Newton cycle is the highest priority.

The question is, once os-vif has been integrated into Nova are we going to have 
any problem augmenting the current os-vif OvsPlugin in order to support 
vlan-aware-vms in the Newton release? Based upon the current Nova integration 
patch [1] I believe that any vif-plugging changes required to implement 
vlan-aware-vms could be entirely localized to the os-vif OvsPlugin, so Nova 
wouldn't directly need to be involved there.

That said, there are currently a couple of vif-plugging strategies we could go 
with for wiring trunk ports for OVS, each of them requiring varying levels of 
os-vif augmentation:
Strategy 1) When Nova is plugging a trunk port, it creates the OVS trunk 
bridge, attaches the tap to it, and creates one patch port pair from the trunk 
bridge to br-int.
Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this bridge 
name to create the OVS trunk bridge and attach the tap to it (no patch port 
pair plugging into br-int).
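As a rough sketch, Strategy 1's plugging amounts to something like the following ovs-vsctl sequence. The bridge and port names are invented for illustration; this is not the actual os-vif plugin code:

```python
def strategy1_commands(trunk_bridge, tap_dev, port_id8):
    """Build ovs-vsctl commands for Strategy 1: create the trunk bridge,
    attach the VM tap to it, and wire a patch-port pair to br-int.
    All names here are illustrative only."""
    patch_trunk = 'tpt-%s' % port_id8   # trunk-bridge side of the pair
    patch_int = 'tpi-%s' % port_id8     # br-int side of the pair
    return [
        ['ovs-vsctl', 'add-br', trunk_bridge],
        ['ovs-vsctl', 'add-port', trunk_bridge, tap_dev],
        ['ovs-vsctl', 'add-port', trunk_bridge, patch_trunk, '--',
         'set', 'Interface', patch_trunk, 'type=patch',
         'options:peer=%s' % patch_int],
        ['ovs-vsctl', 'add-port', 'br-int', patch_int, '--',
         'set', 'Interface', patch_int, 'type=patch',
         'options:peer=%s' % patch_trunk],
    ]
```

Strategy 2 would drop the last two commands (the patch pair to br-int), leaving that wiring to the L2 agent.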

Strategy 1 requires 

[openstack-dev] [neutron] Neutron Common Flow Classifier meeting 6/14 UTC 1700

2016-06-13 Thread Cathy Zhang
Hi everyone,

We will have the second meeting to discuss the Common Flow Classifier at 1700 
UTC in the openstack-meeting channel on 6/14/2016. 
Please refer to the wiki page for more meeting information. 

https://wiki.openstack.org/wiki/Neutron/CommonFlowClassifier

Thanks,
Cathy 


Re: [openstack-dev] [TripleO] Proposal for a new tool: dlrn-repo

2016-06-13 Thread Derek Higgins
On 13 June 2016 at 21:29, Ben Nemec  wrote:
> So our documented repo setup steps are three curls, a sed, and a
> multi-line bash command.  And the best part?  That's not even what we
> test.  The commands we actually use in tripleo.sh --repo-setup consist
> of the following: three curls, four seds, and (maybe) the same
> multi-line bash command.  Although whether that big list of packages in
> includepkgs is actually up to date with what we're testing is anybody's
> guess because without actually plugging both into a diff tool you
> probably can't visually find any differences.

Looking at the docs, I think we should remove the list of packages
altogether; what we document for people trying to use tripleo should
only include the current-tripleo and deps repositories, as we know
this has passed a periodic CI job. This would reduce the documented
process to just 2 curls. The only place we need to worry about
pulling certain packages from /current is in CI and for devs who need
the absolute most up-to-date tripleo packages; in these two cases
tripleo.sh should be used.


> What is my point?  That this whole process is overly complicated and
> error-prone.  If you miss one of those half dozen plus commands you're
> going to end up with a broken repo setup.  As one of the first things
> that a new user has to do in TripleO, this is a pretty poor introduction
> to the project.

Yup, couldn't agree more here, the simpler we can make things for a
new user the better

>
> My proposal is an rdo-release-esque project that will handle the repo
> setup for you, except that since dlrn doesn't really deal in releases I
> think the -repo name makes more sense.  Here's a first pass at such a
> tool: https://github.com/cybertron/dlrn-repo
>
> This would reduce the existing commands in tripleo.sh from:
> sudo sed -i -e 's%priority=.*%priority=30%' $REPO_PREFIX/delorean-deps.repo
> sudo curl -o $REPO_PREFIX/delorean.repo
> $DELOREAN_REPO_URL/$DELOREAN_REPO_FILE
> sudo sed -i -e 's%priority=.*%priority=20%' $REPO_PREFIX/delorean.repo
> sudo curl -o $REPO_PREFIX/delorean-current.repo
> http://trunk.rdoproject.org/centos7/current/delorean.repo
> sudo sed -i -e 's%priority=.*%priority=10%'
> $REPO_PREFIX/delorean-current.repo
> sudo sed -i 's/\[delorean\]/\[delorean-current\]/'
> $REPO_PREFIX/delorean-current.repo
> sudo /bin/bash -c "cat <<-EOF>>$REPO_PREFIX/delorean-current.repo
> includepkgs=diskimage-builder,instack,instack-undercloud,os-apply-config,os-cloud-config,os-collect-config,os-net-config,os-refresh-config,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tripleo,openstack-tripleo-puppet-elements
> EOF"
> sudo yum -y install yum-plugin-priorities
>
> to:
> sudo yum install -y http://tripleo.org/dlrn-repo.rpm # or wherever
> sudo dlrn-repo tripleo-current
>
> As you can see in the readme it also supports the stable branch repos or
> running against latest master of everything.
>
> Overall I think this is clearly a better user experience, and as an
> added bonus it would allow us to use the exact same code for repo
> management on the user side and in CI, which we can't have with a
> developer-specific tool like tripleo.sh.
>
> There's plenty left to do before this would be fully integrated (import
> to TripleO, package, update docs, update CI), so I wanted to solicit
> some broader input before pursuing it further.

I'm a little on the fence about this. I think the main problem you
bring up is the duplication of the includepkgs list, which I think we
can just remove from the docs, so what's left is the ugly blurb of
script in tripleo.sh --repo-setup. Using a tool to do this certainly
improves the code, but does the creation of a new project complicate
things in its own way?

If we do go ahead with this, the one suggestion I would have is
s/dlrn/trunk/g
Delorean is the tool used to create trunk repositories, but we shouldn't
care; it may even change some day. We are just dealing with trunk
repositories.
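For illustration, the priority-rewriting that the sed invocations above perform could live behind a small helper in such a tool. The function name and interface here are invented, not dlrn-repo's actual API:

```python
import re


def set_repo_priority(repo_text, priority):
    """Rewrite (or append) the priority= line in a yum .repo file body.

    This mirrors what the sed commands in tripleo.sh do; the helper is
    a sketch of what a repo-setup tool could encapsulate, not real code
    from dlrn-repo.
    """
    if re.search(r'^priority=.*$', repo_text, flags=re.M):
        return re.sub(r'^priority=.*$', 'priority=%d' % priority,
                      repo_text, flags=re.M)
    # No priority line present: append one.
    return repo_text.rstrip('\n') + '\npriority=%d\n' % priority
```

A tool built around helpers like this would let CI and users share the exact same repo-setup code path.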

>
> Thanks.
>
> -Ben
>


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-13 Thread Henry Nash
So, I think it depends on what level of compatibility we are aiming at. Let me 
articulate the levels, and we can agree which one we want:

C1) In all versions of our APIs today (v2 and v3.0 to v3.6), you have been 
able to issue an auth request which used project/tenant name as the scoping 
directive (with v3 you need a domain component as well, but that’s not relevant 
for this discussion). In these APIs, we absolutely expect that if you could 
issue an auth request to, say, project “test” in, say, v3.X, then you could 
absolutely issue the exact same command in v3.(X+1). This has remained true, 
even when we introduced project hierarchies, i.e.: if I create:

/development/myproject/test

...then I can still scope directly to the test project by simply specifying 
“test” as the project name (since, of course, all project names must still be 
unique in the domain). We never want to break this for so long as we formally 
support any APIs that once allowed this.

C2) To aid you issuing an auth request scoped by project (either name or id), 
we support a special API as part of the auth url (GET/auth/projects) that lists 
the projects the caller *could* scope to (i.e. those they have any kind of role 
on). You can take the “name” or “id” returned by this API and plug it directly 
into the auth request. Again for any API we currently support, we can’t break 
this.

C3) The name attribute of a project is its node-name in the hierarchy. If we 
decide to change this in a future API, we would not want a client using the 
existing API to get surprised and suddenly receive a path instead of just 
the node-name (e.g. what if this was a UI of some type). 

Given all the above, there is no solution that can keep all of the above true 
and allow more than one project of the same name in, say, v3.7 of the API. Even 
if we relaxed C2 and C3, C1 can never be guaranteed to still be supported. 
Neither of the original proposed solutions can address this (since it is a data 
modelling problem, not an API problem).

However, given that we will have, for the first time, the ability to 
microversion the Identity API starting with 3.7, there are things we can do to 
start us down this path. Let me re-articulate the options I am proposing:

Option 1A) In v3.7 we add a ‘path_name' attribute to a project entity, which is 
hence returned by any API that returns a project entity. The ‘path_name' 
attribute will contain the full path name, including the project itself. (Note 
that clients speaking 3.6 and earlier will not see this new attribute). 
Further, for clients speaking 3.7 and later, we add support to allow a 
‘path_name' (as an alternative to ‘name' or ‘id') to be used in auth scoping. 
We do not (yet) relax any uniqueness constraints, but mark API 3.6 and earlier 
as deprecated, as well as using the ‘name’ attribute in the auth request. (we 
still support all these, we just mark them as deprecated). At some time in the 
future (e.g. 3.8), we remove support for using ‘name’ for auth, insisting on 
the use of ‘path_name’ instead. Sometime later (e.g. 3.10) we remove support 
for 3.8 and earlier. Then and only then, do we relax the uniqueness constraint 
allowing projects with duplicate node-names (but with different parents).
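To make the ambiguity concrete, here is an illustrative (non-normative) sketch of why scoping by full path stays unique after the uniqueness constraint is relaxed, while scoping by a bare node-name does not. The data model and function are invented for illustration, not keystone's actual schema or API:

```python
def find_projects(projects, name=None, path_name=None):
    """Resolve an auth scope by bare node-name or by full path.

    `projects` maps full path -> project id; a purely illustrative
    model of the hierarchy, not keystone's real schema.
    """
    if path_name is not None:
        return [pid for path, pid in projects.items() if path == path_name]
    # Bare-name scoping matches only the last path component (node-name).
    return [pid for path, pid in projects.items()
            if path.rsplit('/', 1)[-1] == name]


# Once node-name uniqueness is relaxed, bare-name scoping becomes
# ambiguous while path_name scoping stays unique:
projects = {
    '/development/myproject/test': 'p1',
    '/staging/otherproject/test': 'p2',
}
```

With this data, `name='test'` matches two projects (ambiguous auth), while `path_name='/development/myproject/test'` matches exactly one, which is why the options above insist on pathed scoping before relaxing the constraint.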

Option 1B) The same as 1A, but we insist on path_name use in auth in v3.7 (i.e. 
no grace-period for still using just ’name', instead relying on the fact that 
3.6 clients will still work just fine). Then later (e.g. perhaps v3.9), we 
remove support for v3.6 and before…and relax the uniqueness constraint.

Option 2A) In 3.7 the meaning of the ‘name' attribute of project entity is 
redefined to be the full path name (note that clients speaking 3.6 will 
continue to see ’name’ as just the node-name without a path). We do not (yet) 
relax any uniqueness constraints. For clients speaking 3.7 and later, we return 
the full path name where the project entity ‘name’ attribute is returned. For 
auth, we allow a full path name to specified in the ‘name’ scope attribute - 
but we also still support just a name without a path (which we can guarantee to 
honour, since, so far, we haven’t relaxed the uniqueness constraint - however 
we mark that support as deprecated). At some time in the future (e.g. 3.8), we 
remove support for using an un-pathed ‘name’ for auth. Sometime later (e.g. 
3.10) we remove support for 3.8 and earlier. Then and only then, do we relax 
the uniqueness constraint allowing projects with duplicate node-names (but with 
different parents).

Option 2B) The same as 2A, but we insist on using ‘name’ with a full path use 
in auth in v3.7 (i.e. no grace-period for still using an un-pathed  ’name', 
instead relying on the fact that 3.6 clients will still work just fine). Then 
later (e.g. perhaps v3.9), we remove support for v3.6 and before…and relax the 
uniqueness constraint.

The downside for option 2A and 2B is that a client must do work to not be 
“surprised” by 3.7 (since the ‘name’ attribute of a 

[openstack-dev] The oslo-incubator and you

2016-06-13 Thread Joshua Harlow

Hi all,

I just wanted to let everyone know that the oslo-incubator[1, 2] 
repository is now officially closed for business (it has been deprecated 
for a long time now), and it would be much appreciated if any projects 
with remaining incubator code (or open reviews to remove that code) could 
merge those removals sooner rather than later.


A listing (thanks Ronald) that is being worked through (also by Ronald) 
is at https://etherpad.openstack.org/p/oslo-libraries-adoption 
(search for 'projects with incubator code'). It would be great if the 
remaining open reviews in that list could be merged (so it will not be 
a surprise to projects that the incubator no longer exists).


Thanks for your cooperation,

May the force be with u!

-Josh

[1]: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/
[2]: https://github.com/openstack/oslo-incubator/



[openstack-dev] [kolla] reclaimed the kolla namespace on docker

2016-06-13 Thread Steven Dake (stdake)
Hey folks,

Kuan Yi Ming @ Docker helped us reclaim the kolla namespace for our images.  If 
you are a core reviewer and would like access to push to the docker hub in the 
kolla namespace, please send me your email address offline with your interest.

Thanks
-steve




[openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] Stable check of openstack/ceilometer failed

2016-06-13 Thread Ian Cordasco
-Original Message-
From: A mailing list for the OpenStack Stable Branch test reports.

Reply: openstack-dev@lists.openstack.org 
Date: June 13, 2016 at 01:13:54
To: openstack-stable-ma...@lists.openstack.org

Subject:  [Openstack-stable-maint] Stable check of openstack/ceilometer failed

> Build failed.
>
> - periodic-ceilometer-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-ceilometer-docs-liberty/204fcec/
> : SUCCESS in 5m 31s
> - periodic-ceilometer-python27-liberty 
> http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-liberty/00f7474/
> : FAILURE in 6m 20s

Hey ceilometer stable maintainers,

The following tests have been failing in periodic jobs for the last 4 days:

ceilometer.tests.unit.alarm.evaluator.test_base.TestEvaluatorBaseClass

- test_base_time_constraints_by_month
- test_base_time_constraints_complex
- test_base_time_constraints
- test_base_time_constraints_timezone

ceilometer.tests.unit.alarm.evaluator.test_combination.TestEvaluate 

- test_no_state_change_outside_time_constraint
- test_state_change_inside_time_constraint

ceilometer.tests.unit.alarm.evaluator.test_gnocchi.TestGnocchiThresholdEvaluate

- test_no_state_change_outside_time_constraint

ceilometer.tests.unit.alarm.evaluator.test_threshold.TestEvaluate   

- test_no_state_change_outside_time_constraint
- test_state_change_inside_time_constraint

And this one has been failing every day for almost a week now
(starting on 7 June 2016)

ceilometer.tests.unit.test_messaging.MessagingTests.test_get_transport_optional

Is anyone looking into these?

--
Ian Cordasco



[openstack-dev] [stable] Re: [Openstack-stable-maint] Stable check of openstack/nova failed

2016-06-13 Thread Ian Cordasco
-Original Message-
From: A mailing list for the OpenStack Stable Branch test reports.

Reply: openstack-dev@lists.openstack.org 
Date: June 13, 2016 at 01:14:18
To: openstack-stable-ma...@lists.openstack.org

Subject:  [Openstack-stable-maint] Stable check of openstack/nova failed

> Build failed.
>
> - periodic-nova-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-nova-docs-liberty/2ede148/
> : SUCCESS in 7m 29s
> - periodic-nova-python27-db-liberty 
> http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-liberty/7aa69c6/
> : FAILURE in 4m 37s

Both yesterday's failure
(http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-liberty/828d832/console.html#_2016-06-12_06_06_45_127)
and today's 
(http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-liberty/7aa69c6/console.html#_2016-06-13_06_10_30_757)
are related to being unable to install python-ironicclient 0.8.2 which
is listed as having been published today
(https://pypi.python.org/pypi/python-ironicclient/0.8.2). I suspect
this will go away on its own now that the package is on PyPI.

> - periodic-nova-docs-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-nova-docs-mitaka/70ad3c7/
> : SUCCESS in 7m 17s
> - periodic-nova-python27-db-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-nova-python27-db-mitaka/a4028be/
> : SUCCESS in 7m 40s
>

--
Ian Cordasco



Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/neutron failed

2016-06-13 Thread Ian Cordasco
-Original Message-
From: A mailing list for the OpenStack Stable Branch test reports.

Reply: openstack-dev@lists.openstack.org 
Date: June 9, 2016 at 01:26:01
To: openstack-stable-ma...@lists.openstack.org

Subject:  [Openstack-stable-maint] Stable check of openstack/neutron failed

> Build failed.
>
> - periodic-neutron-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-neutron-docs-liberty/b33f495/
> : FAILURE in 6m 12s

This was merely a failure installing PBR from one of the PyPI mirrors.

> - periodic-neutron-python27-liberty 
> http://logs.openstack.org/periodic-stable/periodic-neutron-python27-liberty/4f62e93/
> : SUCCESS in 10m 53s
> - periodic-neutron-docs-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-neutron-docs-mitaka/24979f8/
> : SUCCESS in 3m 27s
> - periodic-neutron-python27-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-neutron-python27-mitaka/cf07c92/
> : SUCCESS in 12m 31s
>

--
Ian Cordasco



Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port isActive

2016-06-13 Thread Salvatore Orlando
As for the notifier proposed above, it is correct that neutron needs to be
changed. This should not be a massive amount of work. Today it works with
nova only, pretty much because nova is the only compute service it
interacts with.

The question brought up about ping vs operational status is a very good one.
In neutron, status=UP for a port only means that L2 wiring (at least for
most plugins) occurred on the port. Networking might not yet be fully ready.
I know some plugins - like ML2 - are adding (or have recently added)
mechanisms to improve this situation.

Pinging a port might seem the most reliable way of knowing whether a port
is up but this has issues:
- false positives (or negatives according to which event you are trying to
verify!)
- security groups getting in the way
- need to be able to reach container interfaces, which might lead to having
"health-checking agents" to implement this.

I think that if:
- you are not using DHCP
- you can clearly identify the sets of ports you are waiting on
- you are using the ML2-based reference implementation (or any other impl
which does not do round-trips to the backend on GET operations)

You should be OK with polling. I'm not sure, however, if a backoff mechanism
is applicable in this case.

Salvatore
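For what it's worth, the kind of jittered back-off polling being suggested could be sketched like this. The `get_port` callable and the status strings are stand-ins here, not the actual neutron client API:

```python
import random
import time


def wait_for_port_active(get_port, port_id, initial=0.5, factor=2.0,
                         max_delay=30.0, max_wait=300.0):
    """Poll until the port reports ACTIVE, backing off with jitter.

    get_port is assumed to return a dict with a 'status' key; both the
    callable and the status values are illustrative stand-ins.
    """
    deadline = time.monotonic() + max_wait
    delay = initial
    while time.monotonic() < deadline:
        if get_port(port_id).get('status') == 'ACTIVE':
            return True
        # Sleep a semi-random fraction of the current delay, then grow
        # the delay up to max_delay (avoids synchronized polling storms
        # when many clients poll at once).
        time.sleep(random.uniform(delay / 2.0, delay))
        delay = min(delay * factor, max_delay)
    return False
```

The randomized sleep is what protects against the "congestive collapse at scale" concern raised below: pollers started at the same time quickly de-synchronize.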




On 13 June 2016 at 21:00, Rick Jones  wrote:

> On 06/10/2016 03:13 PM, Kevin Benton wrote:
>
>> Polling should be fine. get_port operations a relatively cheap operation
>> for Neutron.
>>
>
> Just in principle, I would suggest this polling have a back-off built into
> it.  Poll once, see the port is not yet "up" - wait a semi-random short
> length of time,  poll again, see it is not yet "up" wait a longer
> semi-random length of time, lather, rinse, repeat until you've either
> gotten to the limits of your patience or the port has become "up."
>
> Fixed, short poll intervals can run the risk of congestive collapse "at
> scale."
>
> rick jones
>
>
>


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Kevin Benton
+1. Neutron should already be able to tell Nova which bridge to use for an
OVS port.[1] For the Linux bridge implementation it's a matter of creating
vlan interfaces and plugging them into bridges like regular VM ports, which
is all the responsibility of the L2 agent. We shouldn't need any changes
from Nova or os-vif from what I can see.



1.
https://github.com/openstack/nova/blob/6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b/nova/network/neutronv2/api.py#L1618
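The Linux bridge sequence described above (a kernel VLAN subinterface plugged into the network's bridge) can be sketched as a command builder. The device names and the use of `ip`/`brctl` here are illustrative, not the actual L2 agent code:

```python
def trunk_subport_commands(trunk_dev, vlan_id, bridge):
    """Build the shell commands that would wire one VLAN subport.

    trunk_dev (the VM's trunk tap), vlan_id (the subport's segmentation
    id) and bridge (the per-network Linux bridge) are all illustrative
    names; this is a sketch, not neutron's implementation.
    """
    vlan_dev = '%s.%d' % (trunk_dev, vlan_id)
    return [
        # Create a kernel VLAN interface on top of the trunk device.
        ['ip', 'link', 'add', 'link', trunk_dev, 'name', vlan_dev,
         'type', 'vlan', 'id', str(vlan_id)],
        ['ip', 'link', 'set', vlan_dev, 'up'],
        # Plug it into the network's bridge like a regular VM port.
        ['brctl', 'addif', bridge, vlan_dev],
    ]
```

Since all of this happens on the agent side after the tap exists, it supports the point that no Nova or os-vif changes are needed for the Linux bridge case.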

On Mon, Jun 13, 2016 at 5:26 AM, Mooney, Sean K 
wrote:

>
>
> > -Original Message-
> > From: Daniel P. Berrange [mailto:berra...@redhat.com]
> > Sent: Monday, June 13, 2016 1:12 PM
> > To: Armando M. 
> > Cc: Carl Baldwin ; OpenStack Development Mailing
> > List ; Jay Pipes
> > ; Maxime Leroy ; Moshe Levi
> > ; Russell Bryant ; sahid
> > ; Mooney, Sean K 
> > Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk
> > ports
> >
> > On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> > > On 13 June 2016 at 10:35, Daniel P. Berrange 
> > wrote:
> > >
> > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > > > Hi,
> > > > >
> > > > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > > > Neutron.  If not, there is a spec and a fair number of patches in
> > > > > progress for this.  Essentially, the goal is to allow a VM to
> > > > > connect to multiple Neutron networks by tagging traffic on a
> > > > > single port with VLAN tags.
> > > > >
> > > > > This effort will have some effect on vif plugging because the
> > > > > datapath will include some changes that will affect how vif
> > > > > plugging is done today.
> > > > >
> > > > > The design proposal for trunk ports with OVS adds a new bridge for
> > > > > each trunk port.  This bridge will demux the traffic and then
> > > > > connect to br-int with patch ports for each of the networks.
> > > > > Rawlin Peters has some ideas for expanding the vif capability to
> > > > > include this wiring.
> > > > >
> > > > > There is also a proposal for connecting to linux bridges by using
> > > > > kernel vlan interfaces.
> > > > >
> > > > > This effort is pretty important to Neutron in the Newton
> > > > > timeframe.  I wanted to send this out to start rounding up the
> > > > > reviewers and other participants we need to see how we can start
> > > > > putting together a plan for nova integration of this feature (via
> > os-vif?).
> > > >
> > > > I've not taken a look at the proposal, but on the timing side of
> > things it is really way too late to start this email thread asking
> > > > for design input from os-vif or nova. We're way past the spec
> > > > proposal deadline for Nova in the Newton cycle, so nothing is going
> > to happen until the Ocata cycle no matter what Neutron wants in
> > Newton.
> > >
> > >
> > > For sake of clarity, does this mean that the management of the os-vif
> > > project matches exactly Nova's, e.g. same deadlines and processes
> > > apply, even though the core team and its release model are different
> > from Nova's?
> > > I may have erroneously implied that it wasn't, also from past talks I
> > > had with johnthetubaguy.
> >
> > No, we don't intend to force ourselves to only release at milestones
> > like nova does. We'll release the os-vif library whenever there is new
> > functionality in its code that we need to make available to
> > nova/neutron.
> > This could be as frequently as once every few weeks.
> [Mooney, Sean K]
> I have been tracking and contributing to the vlan-aware-vms work in
> neutron since the Vancouver summit, so I am quite familiar with what would
> have to be modified to support the vlan trunking. Provided the modifications
> do not delay the conversion to os-vif in nova this cycle, I would be happy
> to review and help develop the code to support this use case.
>
> In the ovs case at least, which we have been discussing here
>
> https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst
> no changes should be required for nova and all changes will be confined to
> the ovs plugin. It is in essence: check if the bridge exists; if not,
> create it with the port id, then plug as normal.
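
That check-create-plug sequence can be sketched as follows (names are illustrative, not the actual agent convention; DRYRUN=1, the default, prints rather than executes):

```shell
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

PORT_ID=abcd1234
TRUNK_BR="tbr-$PORT_ID"

# Create the trunk bridge only if it does not already exist...
run ovs-vsctl --may-exist add-br "$TRUNK_BR"
# ...then plug the VM's tap device as normal, just into this bridge.
run ovs-vsctl --may-exist add-port "$TRUNK_BR" "tap$PORT_ID"
```

`--may-exist` makes both steps idempotent, which is what lets the plug path stay unchanged from Nova's point of view.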
>
> Again, though, I do agree that we should focus on completing the initial
> nova integration, but I don't think that means we have to exclude other
> feature enhancements as long as they do not prevent us achieving that goal.
>
>
> >
> > Regards,
> > Daniel
> > --
> > |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> > |: http://libvirt.org -o- http://virt-manager.org :|
> > |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> > |: 

[openstack-dev] [TripleO] Proposal for a new tool: dlrn-repo

2016-06-13 Thread Ben Nemec
So our documented repo setup steps are three curls, a sed, and a
multi-line bash command.  And the best part?  That's not even what we
test.  The commands we actually use in tripleo.sh --repo-setup consist
of the following: three curls, four seds, and (maybe) the same
multi-line bash command.  Although whether that big list of packages in
includepkgs is actually up to date with what we're testing is anybody's
guess because without actually plugging both into a diff tool you
probably can't visually find any differences.

What is my point?  That this whole process is overly complicated and
error-prone.  If you miss one of those half dozen plus commands you're
going to end up with a broken repo setup.  As one of the first things
that a new user has to do in TripleO, this is a pretty poor introduction
to the project.

My proposal is an rdo-release-esque project that will handle the repo
setup for you, except that since dlrn doesn't really deal in releases I
think the -repo name makes more sense.  Here's a first pass at such a
tool: https://github.com/cybertron/dlrn-repo

This would reduce the existing commands in tripleo.sh from:
sudo sed -i -e 's%priority=.*%priority=30%' $REPO_PREFIX/delorean-deps.repo
sudo curl -o $REPO_PREFIX/delorean.repo
$DELOREAN_REPO_URL/$DELOREAN_REPO_FILE
sudo sed -i -e 's%priority=.*%priority=20%' $REPO_PREFIX/delorean.repo
sudo curl -o $REPO_PREFIX/delorean-current.repo
http://trunk.rdoproject.org/centos7/current/delorean.repo
sudo sed -i -e 's%priority=.*%priority=10%'
$REPO_PREFIX/delorean-current.repo
sudo sed -i 's/\[delorean\]/\[delorean-current\]/'
$REPO_PREFIX/delorean-current.repo
sudo /bin/bash -c "cat <<-EOF>>$REPO_PREFIX/delorean-current.repo
includepkgs=diskimage-builder,instack,instack-undercloud,os-apply-config,os-cloud-config,os-collect-config,os-net-config,os-refresh-config,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tripleo,openstack-tripleo-puppet-elements
EOF"
sudo yum -y install yum-plugin-priorities

to:
sudo yum install -y http://tripleo.org/dlrn-repo.rpm # or wherever
sudo dlrn-repo tripleo-current

As you can see in the readme it also supports the stable branch repos or
running against latest master of everything.

Overall I think this is clearly a better user experience, and as an
added bonus it would allow us to use the exact same code for repo
management on the user side and in CI, which we can't have with a
developer-specific tool like tripleo.sh.

There's plenty left to do before this would be fully integrated (import
to TripleO, package, update docs, update CI), so I wanted to solicit
some broader input before pursuing it further.

Thanks.

-Ben



Re: [openstack-dev] [openstack-ansible] Mid-cycle date selection (need input!)

2016-06-13 Thread Major Hayden

On 06/09/2016 01:51 PM, Major Hayden wrote:
> Once we get that sorted out, we can fire up an etherpad for everyone to sign 
> up for a spot.

As promised, here's a link to the etherpad:

  https://etherpad.openstack.org/p/osa-midcycle-newton

Please add a +1 beside the dates you prefer and add your name to the bottom of 
the etherpad if you plan to attend.

I need this information by the end of the week to get the meeting room booked 
and arrange for a hotel discount! :)

--
Major Hayden



Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-13 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2016-06-13 20:11:57 +:
> On Fri, Jun 10, 2016 at 12:20 PM Clint Byrum  wrote:
> 
> > Excerpts from Henry Nash's message of 2016-06-10 14:37:37 +0100:
> > > On further reflection, it seems to me that we can never simply enable
> > either of these approaches in a single release. Even a v4.0 version of the
> > API doesn’t help - since presumably a server supporting v4 would want to be
> > able to support v3.x for a significant time; and, as already discussed, as
> > soon as you allow multiple node-names to have the same name, you can no
> > longer guarantee to support the current API.
> > >
> > > Hence the only thing I think we can do (if we really do want to change
> > the current functionality) is to do this over several releases with a
> > typical deprecation cycle, e.g.
> > >
> > > 1) At release 3.7 we allow you to (optionally) specify path names for
> > auth….but make no changes to the uniqueness constraints. We also change the
> > GET /auth/projects to return a path name. However, you can still auth
> > exactly the way we do today (since there will always only be a single
> > project of a given node-name). If however, you do auth without a path (to a
> > project that isn’t a top level project), we log a warning to say this is
> > deprecated (2 cycles, 4 cycles?)
> > > 2) If you connect with a 3.6 client, then you get the same as today for
> > GET /auth/projects and cannot use a path name to auth.
> > > 3) At sometime in the future, we deprecate the “auth without a path”
> > capability. We can debate as to whether this has to be a major release.
> > >
> > > If we take this gradual approach, I would be pushing for the “relax
> > project name constraints” approach…since I believe this leads to a cleaner
> > eventual solution (and there is no particular advantage with the
> > hierarchical naming approach) - and (until the end of the deprecation)
> > there is no break to the existing API.
> >
> >
> Please don't ever break the API - with or without a supposed "deprecation"
> period.
> 
> > This seems really complicated.
> >
> > Why don't users just start using paths in project names, if they want
> > paths in project names?
> >
> > And then in v3.7 you can allow them to specify paths relative to parent of
> > the user:
> >
> > So just allow this always:
> >
> > {"name": "finance/dev"}
> >
> > And then add this later once users are aware of what the / means:
> >
> > {"basename": "dev"}
> >
> > What breaks by adding that?
> >
> 
> if I'm following your approach, then I should point out that we already
> allow forward slashes in project names, so what breaks is any user that
> already has forward slashes in their project names, but has no awareness
> of, or intention to consume, hierarchical multitenancy.
> 

Pretty simple solution to that: they use the API they've always used,
which doesn't care about the hierarchy.



Re: [openstack-dev] [neutron][ovs] The way we deal with MTU

2016-06-13 Thread Terry Wilson
> So basically, as long as we try to plug ports with different MTUs into the 
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any 
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
> network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions, 
> including upgrade impact since it will obviously introduce a dataplane 
> downtime. That would be a huge shift in paradigm, probably too huge to 
> swallow. The latter option may not fly with vswitch folks. Any better ideas?

I know I've heard from people who'd like to be able to support both
DPDK and non-DPDK workloads on the same node. The current
implementation with a single br-int (and thus datapath) makes that
impossible to pull off with good performance. So there may be other
reasons to consider introducing multiple isolated bridges: MTUs,
datapath_types, etc.

Terry
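
The bridge/MTU interaction described in the quoted message below can be reproduced with a sketch like this (names hypothetical; DRYRUN=1, the default, prints the commands, since the real ones need root and a running ovs-vswitchd):

```shell
# Two internal ports of different MTU on one br-int-style bridge.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run ovs-vsctl add-br br-test
run ovs-vsctl add-port br-test p-small -- set Interface p-small type=internal
run ovs-vsctl add-port br-test p-big -- set Interface p-big type=internal
run ip link set p-small mtu 1450
# With p-small plugged in at 1450, the bridge inherits the lower MTU and
# the larger value below is not applied -- the symptom in the bug report.
run ip link set p-big mtu 9000
```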

On Mon, Jun 13, 2016 at 11:49 AM, Ihar Hrachyshka  wrote:
> Hi all,
>
> in Mitaka, we introduced a bunch of changes to the way we handle MTU in 
> Neutron/Nova, making sure that the whole instance data path, starting from 
> instance internal interface, thru hybrid bridge, into the br-int; as well as 
> router data path (qr) have proper MTU value set on all participating devices. 
> On hypervisor side, both Nova and Neutron take part in it, setting it with 
> ip-link tool based on what Neutron plugin calculates for us. So far so good.
>
> Turns out that for OVS, it does not work as expected in regards to br-int. 
> There was a bug reported lately: https://launchpad.net/bugs/1590397
>
> Briefly, when we try to set MTU on a device that is plugged into a bridge, 
> and if the bridge already has another port with lower MTU, the bridge itself 
> inherits MTU from that latter port, and Linux kernel (?) does not allow to 
> set MTU on the first device at all, making ip link calls ineffective.
>
> AFAIU this behaviour is consistent with Linux bridging rules: you can’t have 
> ports of different MTU plugged into the same bridge.
>
> Now, that’s a huge problem for Neutron, because we plug ports that belong to 
> different networks (and that hence may have different MTUs) into the same 
> br-int bridge.
>
> So I played with the code locally a bit and spotted that currently, we set 
> MTU for router ports before we move their devices into router namespaces. And 
> once the device is in a namespace, ip-link actually works. So I wrote a fix 
> with a functional test that proves the point: 
> https://review.openstack.org/#/c/327651/ The fix was validated by the 
> reporter of the original bug and seems to fix the issue for him.
>
> It’s suspicious that it works from inside a namespace but not when the device 
> is still in the root namespace. So I reached out to Jiri Benc from our local 
> Open vSwitch team, and here is a quote:
>
> ===
>
> "It's a bug in ovs-vswitchd. It doesn't see the interface that's in
> other netns and thus cannot enforce the correct MTU.
>
> We'll hopefully fix it and disallow incorrect MTU setting even across
> namespaces. However, it requires significant effort and rework of ovs
> name space handling.
>
> You should not depend on the current buggy behavior. Don't set MTU of
> the internal interfaces higher than the rest of the bridge, it's not
> supported. Hacking this around by moving the interface to a netns is
> exploiting of a bug.
>
> We can certainly discuss whether this limitation could be relaxed.
> Honestly, I don't know, it's for a discussion upstream. But as of now,
> it's not supported and you should not do it.”
>
> So basically, as long as we try to plug ports with different MTUs into the 
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any 
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
> network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions, 
> including upgrade impact since it will obviously introduce a dataplane 
> downtime. That would be a huge shift in paradigm, probably too huge to 
> swallow. The latter option may not fly with vswitch folks. Any better ideas?
>
> It’s also not clear whether we want to proceed with my immediate fix. Advice
> is welcome.
>
> Thanks,
> Ihar

Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-13 Thread Paul Michali
Hmm... I tried Friday and again today, and I'm not seeing the VMs being
evenly created on the NUMA nodes. Every Cirros VM is created on nodeid 0.

I have the m1.small flavor (2GB) selected and am using hw:numa_nodes=1 and
hw:mem_page_size=2048 flavor-key settings. Each VM is consuming 1024 huge
pages (of size 2MB), but is on nodeid 0 always. Also, it seems that when
1/2 of the total number of huge pages is used, libvirt gives an error
saying there is not enough memory to create the VM. Is it expected that the
huge pages are "allocated" to each NUMA node?
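
As I understand it, the per-VM page count is just flavor RAM over page size (integer hw:mem_page_size values are in KiB, so 2048 means 2 MiB pages), and huge pages are a per-NUMA-node resource. A quick sanity check:

```shell
# Flavor RAM divided by page size gives the pages one guest consumes,
# assuming its memory is fully backed by huge pages.
flavor_mb=2048   # m1.small
page_kb=2048     # hw:mem_page_size=2048 -> 2 MiB pages (value is in KiB)
pages=$(( flavor_mb * 1024 / page_kb ))
echo "$pages"    # 1024, matching the per-VM consumption observed above
```

Since the pages are pooled per node, a host with 32768 2 MiB pages split evenly has only 16384 per node, which would line up with creation failing once roughly half the host-wide total is consumed while every guest lands on node 0.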

I don't know why I cannot repeat what I did on 6/3, where I changed
hw:mem_page_size from "large" to "2048" and it worked, allocating to each
of the two NUMA nodes. :(

Regards,

PCM


On Fri, Jun 10, 2016 at 9:16 AM Paul Michali  wrote:

> Actually, I had mem_page_size set to "large" and not "1024". However, it
> seemed like it was using 1024 pages per (small) VM creation. Is there
> possibly some issue with "large" not using one of the supported values? I
> would have guessed it would have chosen 2M or 1G for the size.
>
> Any thoughts?
>
> PCM
>
> On Fri, Jun 10, 2016 at 9:05 AM Paul Michali  wrote:
>
>> Thanks Daniel and Chris! I think that was the problem, I had configured
>> Nova flavor with a mem_page_size of 1024, and it should have been one of
>> the supported values.
>>
>> I'll go through and check things out one more time, but I think that is
>> the problem. I still need to figure out what is going on with the neutron
>> port not being released - we have another person in my group who has seen
>> the same issue.
>>
>> Regards,
>>
>> PCM
>>
>> On Fri, Jun 10, 2016 at 4:41 AM Daniel P. Berrange 
>> wrote:
>>
>>> On Thu, Jun 09, 2016 at 12:35:06PM -0600, Chris Friesen wrote:
>>> > On 06/09/2016 05:15 AM, Paul Michali wrote:
>>> > > 1) On the host, I was seeing 32768 huge pages, of 2MB size.
>>> >
>>> > Please check the number of huge pages _per host numa node_.
>>> >
>>> > > 2) I changed mem_page_size from 1024 to 2048 in the flavor, and then
>>> when VMs
>>> > > were created, they were being evenly assigned to the two NUMA nodes.
>>> Each using
>>> > > 1024 huge pages. At this point I could create more than half, but
>>> when there
>>> > > were 1945 pages left, it failed to create a VM. Did it fail because
>>> the
>>> > > mem_page_size was 2048 and the available pages were 1945, even
>>> though we were
>>> > > only requesting 1024 pages?
>>> >
>>> > I do not think that "1024" is a valid page size (at least for x86).
>>>
>>> Correct, 4k, 2M and 1GB are valid page sizes.
>>>
>>> > Valid mem_page_size values are determined by the host CPU.  You do not
>>> need
>>> > a larger page size for flavors with larger memory sizes.
>>>
>>> Though note that the flavour mem size should be a multiple of the page
>>> size unless you want to waste memory, e.g. if you have a flavour with
>>> 750MB RAM, then you probably don't want to use 1GB pages as it wastes 250MB
>>>
>>> Regards,
>>> Daniel
>>> --
>>> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
>>> |: http://libvirt.org -o- http://virt-manager.org :|
>>> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
>>> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
>>>
>>>
>>>
>>


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-13 Thread Dolph Mathews
On Fri, Jun 10, 2016 at 12:20 PM Clint Byrum  wrote:

> Excerpts from Henry Nash's message of 2016-06-10 14:37:37 +0100:
> > On further reflection, it seems to me that we can never simply enable
> either of these approaches in a single release. Even a v4.0 version of the
> API doesn’t help - since presumably a server supporting v4 would want to be
> able to support v3.x for a significant time; and, as already discussed, as
> soon as you allow multiple node-names to have the same name, you can no
> longer guarantee to support the current API.
> >
> > Hence the only thing I think we can do (if we really do want to change
> the current functionality) is to do this over several releases with a
> typical deprecation cycle, e.g.
> >
> > 1) At release 3.7 we allow you to (optionally) specify path names for
> auth….but make no changes to the uniqueness constraints. We also change the
> GET /auth/projects to return a path name. However, you can still auth
> exactly the way we do today (since there will always only be a single
> project of a given node-name). If however, you do auth without a path (to a
> project that isn’t a top level project), we log a warning to say this is
> deprecated (2 cycles, 4 cycles?)
> > 2) If you connect with a 3.6 client, then you get the same as today for
> GET /auth/projects and cannot use a path name to auth.
> > 3) At sometime in the future, we deprecate the “auth without a path”
> capability. We can debate as to whether this has to be a major release.
> >
> > If we take this gradual approach, I would be pushing for the “relax
> project name constraints” approach…since I believe this leads to a cleaner
> eventual solution (and there is no particular advantage with the
> hierarchical naming approach) - and (until the end of the deprecation)
> there is no break to the existing API.
>
>
Please don't ever break the API - with or without a supposed "deprecation"
period.


> This seems really complicated.
>
> Why don't users just start using paths in project names, if they want
> paths in project names?
>
> And then in v3.7 you can allow them to specify paths relative to parent of
> the user:
>
> So just allow this always:
>
> {"name": "finance/dev"}
>
> And then add this later once users are aware of what the / means:
>
> {"basename": "dev"}
>
> What breaks by adding that?
>

if I'm following your approach, then I should point out that we already
allow forward slashes in project names, so what breaks is any user that
already has forward slashes in their project names, but has no awareness
of, or intention to consume, hierarchical multitenancy.


>
>
-- 
-Dolph


[openstack-dev] [Infra] Meeting Tuesday June 14th at 19:00 UTC

2016-06-13 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday June 14th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-07-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-07-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-06-07-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Ricardo Carrillo Cruz
Hi Doug

I've written a few Ansible OpenStack modules, for example:

https://github.com/ansible/ansible-modules-extras/pull/2240

Some pending merge, some others already merged.

Regards

2016-06-13 21:41 GMT+02:00 Doug Hellmann :

> Thanks!
>
> > On Jun 13, 2016, at 3:23 PM, Anita Kuno  wrote:
> >
> > On 06/13/2016 03:11 PM, Doug Hellmann wrote:
> >> I'm trying to pull together some information about contributions
> >> that OpenStack community members have made *upstream* of OpenStack,
> >> via code, docs, bug reports, or anything else to dependencies that
> >> we have.
> >>
> >> If you've made a contribution of that sort, I would appreciate a
> >> quick note.  Please reply off-list, there's no need to spam everyone,
> >> and I'll post the summary if folks want to see it.
> >>
> >> Thanks,
> >> Doug
> >>
> >>
> >>
> > Upstream gerrit.
> >
> > I think it was a doc whitespace patch or comment whitespace patch. It
> > was fairly insignificant but it exists.
> >
> > Thanks for asking the question,
> > Anita.
> >
> >
>
>
>


Re: [openstack-dev] [neutron][networking-sfc] Proposing Mohan Kumar for networking-sfc core

2016-06-13 Thread Stephen Wong
+1. Great addition to the team!

On Mon, Jun 13, 2016 at 11:48 AM, Henry Fourie 
wrote:

> +1
>
>
>
> *From:* Cathy Zhang
> *Sent:* Monday, June 13, 2016 11:36 AM
> *To:* openstack-dev@lists.openstack.org; Cathy Zhang
> *Subject:* [openstack-dev] [neutron][networking-sfc] Proposing Mohan
> Kumar for networking-sfc core
>
>
>
> Mohan has been working on networking-sfc project for over one year. He is
> a key contributor to the design/coding/testing of SFC CLI,  SFC Horizon, as
> well as ONOS controller support for SFC functionality. He has been great at
> helping out with bug fixes, testing, and reviews of all components of
> networking-sfc. He is also actively providing guidance to the users on
> their SFC setup, testing, and usage. Mohan showed a very good understanding
> of the networking-sfc design, code base, and its usage scenarios.
> Networking-sfc could use more cores as our user base and features have
> grown and I think he'd be a valuable addition.
>
>
>
> Please respond with your +1 votes to approve this change or -1 votes to
> oppose.
>
>
>
> Thanks,
>
> Cathy
>
>
>


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Doug Hellmann
Thanks!

> On Jun 13, 2016, at 3:23 PM, Anita Kuno  wrote:
> 
> On 06/13/2016 03:11 PM, Doug Hellmann wrote:
>> I'm trying to pull together some information about contributions
>> that OpenStack community members have made *upstream* of OpenStack,
>> via code, docs, bug reports, or anything else to dependencies that
>> we have.
>> 
>> If you've made a contribution of that sort, I would appreciate a
>> quick note.  Please reply off-list, there's no need to spam everyone,
>> and I'll post the summary if folks want to see it.
>> 
>> Thanks,
>> Doug
>> 
>> 
> Upstream gerrit.
> 
> I think it was a doc whitespace patch or comment whitespace patch. It
> was fairly insignificant but it exists.
> 
> Thanks for asking the question,
> Anita.
> 




Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-06-13 12:28:41 -0700:
> Just to clarify, upstream of openstack would be say in a library that 
> interacts with openstack (jclouds for example); or docs that jclouds has 
> about openstack or something like that?
>
> Or do you mean upstream of openstack to mean anything not openstack but
> openstack related (for example there are libraries under openstack, say 
> the retrying library) that I wouldn't technically call upstream but 
> could be called downstream (underneath?) of it.

I was thinking of something in our dependency chain, but you make
a good point.  What I'm curious about is what other open source
contributions folks have made that were in some way triggered or
related to their work on OpenStack.  That could be anything outside
of OpenStack that we use as a direct dependency (setuptools) or as
a tool (gerrit), or that "uses us" in some sense (jclouds).

> 
> Doug Hellmann wrote:
> > I'm trying to pull together some information about contributions
> > that OpenStack community members have made *upstream* of OpenStack,
> > via code, docs, bug reports, or anything else to dependencies that
> > we have.
> >
> > If you've made a contribution of that sort, I would appreciate a
> > quick note.  Please reply off-list, there's no need to spam everyone,
> > and I'll post the summary if folks want to see it.
> >
> > Thanks,
> > Doug
> >
> 



Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Joshua Harlow
Just to clarify, "upstream of openstack" would be, say, a library that
interacts with openstack (jclouds for example), or docs that jclouds has
about openstack, or something like that?


Or do you mean upstream of openstack to mean anything not openstack but
openstack related (for example there are libraries under openstack, say 
the retrying library) that I wouldn't technically call upstream but 
could be called downstream (underneath?) of it.


Doug Hellmann wrote:

I'm trying to pull together some information about contributions
that OpenStack community members have made *upstream* of OpenStack,
via code, docs, bug reports, or anything else to dependencies that
we have.

If you've made a contribution of that sort, I would appreciate a
quick note.  Please reply off-list, there's no need to spam everyone,
and I'll post the summary if folks want to see it.

Thanks,
Doug





[openstack-dev] [release] new mailing list for tracking release job failures

2016-06-13 Thread Doug Hellmann
The kind folks on the infra team are helping to set things up so that we
have notifications for release failures, so we don't have to manually
check or wait to see if a package we think was released doesn't get
uploaded to PyPI.

Members of the release team should subscribe to the release-job-failures
list [1]. Release liaisons may subscribe, and we can set up topics for
specific projects to make it so you are only sent messages about
failures for your projects.

Doug

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures



Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Anita Kuno
On 06/13/2016 03:11 PM, Doug Hellmann wrote:
> I'm trying to pull together some information about contributions
> that OpenStack community members have made *upstream* of OpenStack,
> via code, docs, bug reports, or anything else to dependencies that
> we have.
> 
> If you've made a contribution of that sort, I would appreciate a
> quick note.  Please reply off-list, there's no need to spam everyone,
> and I'll post the summary if folks want to see it.
> 
> Thanks,
> Doug
> 
> 
Upstream gerrit.

I think it was a doc whitespace patch or comment whitespace patch. It
was fairly insignificant but it exists.

Thanks for asking the question,
Anita.



Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Ben Nemec
On 06/13/2016 12:10 PM, Dmitry Tantsur wrote:
> On 06/13/2016 06:28 PM, Ben Nemec wrote:
>> On 06/13/2016 09:41 AM, Jiri Tomasek wrote:
>>> Hi all,
>>>
>>> As we are close to merging the initial Nodes Registration workflows and
>>> action [1, 2] using Mistral which successfully provides the current
>>> registration logic via common API, I'd like to start discussion on how
>>> to improve it so it satisfies GUI and CLI requirements. I'd like to try
>>> to describe the clients goals and define requirements, describe current
>>> workflow problems and propose a solution. I'd like to record the result
>>> of discussion to Blueprint [3] which Ryan already created.
>>>
>>>
>>> CLI goals and optimal workflow
>>> 
>>>
>>> CLI's main benefit is based on the fact that its commands can simply
>>> become part of a script, so it is important that the operation is
>>> idempotent. The optimal CLI workflow is:
>>>
>>> User runs 'openstack baremetal import' and provides instackenv.json file
>>> which includes all nodes information. When the registration fails at
>>> some point, user is notified about the error and re-runs the command
>>> with the same set of nodes. Rinse and repeat until all nodes are
>>> properly registered.
>>
>> I would actually not describe this as the optimal workflow for CLI
>> registration either.  It would be much better if the registration
>> completed for all the nodes that it can in the first place and then any
>> failed nodes can be cleaned up later.  There's no reason one bad node in
>> a file containing 100 nodes should block the entire deployment.
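A per-node loop along the lines Ben suggests might look like this. Everything here is a sketch: register_node() and notify() are hypothetical stand-ins for the Ironic registration call and the Zaqar notification, not actual tripleo-common APIs.

```python
# Register nodes one at a time so a single bad entry cannot block the batch.
# Failures are collected and reported per node instead of aborting the whole
# import; register_node() and notify() are hypothetical stand-ins.
def register_nodes(nodes_json, register_node, notify):
    registered, failed = [], []
    for node in nodes_json:
        try:
            register_node(node)
        except Exception as exc:
            failed.append((node, str(exc)))
            notify({"node": node.get("name"), "status": "FAILED",
                    "error": str(exc)})
        else:
            registered.append(node)
            notify({"node": node.get("name"), "status": "SUCCESS"})
    return registered, failed
```

Re-running the same loop with the failed subset gives the idempotent rinse-and-repeat workflow described above, while still letting the 99 good nodes of a 100-node file proceed.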
>>
>> On that note, the only part of your proposal that I'm not sure would be
>> useful for the CLI is the non-blocking part.  I don't know that a CLI
>> fire-and-forget mode makes a lot of sense, although if there's a way for
>> the CLI to check the status then that might be fine too.  However,
>> pretty much all of the other usability stuff you're talking about would
>> benefit the CLI too.
>>
>>>
>>>
>>> GUI goals and optimal workflow
>>> =
>>>
>>> GUI's main goal is to provide a user friendly way to register nodes,
>>> inform the user on the process, handle the problems and lets user fix
>>> them. GUI strives for being responsive and interactive.
>>>
>>> GUI allows user to add nodes specification manually one by one by
>>> provided form or allow user (in same manner as CLI) to provide the
>>> instackenv.json file which holds the nodes description. Importing the
>>> file (or adding node manually) will populate an array of nodes the user
>>> wants to register. User is able to browse these nodes and make
>>> corrections to their configuration. GUI provides client side validations
>>> to verify inputs (node name format, required fields, mac address, ip
>>> address format etc.)
>>>
>>> Then user triggers the registration. The nodes are moved to nodes table
>>> as they are being registered. If an error occurs during registration of
>>> any of the nodes, user is notified about the issue and can fix it in
>>> registration form and can re-trigger registration for failed nodes.
>>> Rinse and repeat until all nodes are successfully registered and in
>>> proper state (manageable).
>>>
>>> Such workflow keeps the GUI interactive, user does not have to look at
>>> the spinner for several minutes (in case of registering hundreds of
>>> nodes), hoping that something does not happen wrong. User is constantly
>>> informed about the progress, user is able to react to the situation as
>>> he wants, User is able to freely interact with the GUI while
>>> registration is happening on the background. User is able to register
>>> nodes in batches.
>>>
>>>
>>> Current solution
>>> =
>>>
>>> Current solution uses register_or_update_nodes workflow [1] which takes
>>> a nodes_json array and runs register_or_update_nodes and
>>> set_nodes_managed tasks. When the whole operation completes it sends
>>> Zaqar message notifying about the result of the registration of the
>>> whole batch of nodes.
>>>
>>> register_or_update_nodes runs tripleo.register_or_update_nodes action
>>> [2] which uses business logic in tripleo_common/utils/nodes.py
>>>
>>> utils.nodes.py module has been originally extracted from tripleoclient
>>> to get the business logic behind the common API. It does following:
>>>
>>> - converts the instackenv.json nodes format to appropriate ironic driver
>>> format (driver-info fields)
>>> - sets kernel and ramdisk ids defaults if they're not provided
>>> - for each node it tests if node already exists (finds nodes by mac
>>> addresses) and updates it or registers it as new based on the result.
>>>
>>>
>>> Current Problems:
>>> - no zaqar notification is sent for each node
>>> - nodes are registered in batch, registration fails when an error happens
>>> on a certain node, leaving already-registered nodes in an inconsistent state
>>> - workflow does not notify user about what nodes have been registered
>>> and 

[openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-13 Thread Doug Hellmann
I'm trying to pull together some information about contributions
that OpenStack community members have made *upstream* of OpenStack,
via code, docs, bug reports, or anything else to dependencies that
we have.

If you've made a contribution of that sort, I would appreciate a
quick note.  Please reply off-list, there's no need to spam everyone,
and I'll post the summary if folks want to see it.

Thanks,
Doug



Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Sean McGinnis
On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will be 
> around July 11 (probably July 6 - 8, but to be determined very soon).
> 
> The China teams will still focus on Neutron, Nova, Cinder, Heat, Magnum, 
> Rally, Ironic, Dragonflow and Watcher, etc. projects, so we need developers to 
> join and fix as many bugs as possible, and cores to be on site to moderate 
> the code changes and merges. Welcome to the bug smash at Hangzhou - 
> http://www.chinahighlights.com/hangzhou/attraction/.
> 
> The good news is that for the first two cores from the above projects who 
> respond to this invitation in my email inbox and copy the CC 
> list, the sponsors are pleased to sponsor your international travel, 
> including flight and hotel. Please simply reply to me.
> 
> Best regards,
> --
> China OpenStack Bug Smash Team
> 
> 

Glad to see this continuing!

I would like to participate in this event, but that current timeframe
would conflict with OpenStack Days India. If that does end up being the
final date, I will try to be online as much as possible to help with
reviews.

If it does end up being moved to another date, I would be interested in
participating in person to help mentor.

Thanks,
Sean



Re: [openstack-dev] [Kuryr] [Neutron] Waiting until Neutron Port is Active

2016-06-13 Thread Rick Jones

On 06/10/2016 03:13 PM, Kevin Benton wrote:

Polling should be fine. get_port is a relatively cheap operation
for Neutron.


Just in principle, I would suggest this polling have a back-off built 
into it.  Poll once, see the port is not yet "up" - wait a semi-random 
short length of time, poll again, see it is not yet "up", wait a longer 
semi-random length of time, lather, rinse, repeat until you've either 
gotten to the limits of your patience or the port has become "up."


Fixed, short poll intervals can run the risk of congestive collapse "at 
scale."
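Rick's suggestion can be sketched as "full jitter" exponential back-off. This is illustrative only: get_port is a stand-in for the Neutron client call, and the base/cap/deadline values are assumptions, not numbers from this thread.

```python
import random
import time

def wait_for_port_active(get_port, port_id, base=1.0, cap=30.0, deadline=300.0,
                         sleep=time.sleep, now=time.monotonic):
    """Poll until the port reports ACTIVE, backing off with jitter.

    Each retry sleeps a random interval in [0, min(cap, base * 2**attempt)],
    so many concurrent pollers do not hammer Neutron in lockstep.
    """
    start = now()
    attempt = 0
    while True:
        if get_port(port_id).get("status") == "ACTIVE":
            return True
        if now() - start > deadline:
            # Limit of our patience reached; the port never came up.
            return False
        sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        attempt += 1
```

The randomized, growing interval is what avoids the congestive-collapse failure mode at scale: retries spread out instead of synchronizing.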


rick jones




Re: [openstack-dev] [neutron][networking-sfc] Proposing Mohan Kumar for networking-sfc core

2016-06-13 Thread Henry Fourie
+1

From: Cathy Zhang
Sent: Monday, June 13, 2016 11:36 AM
To: openstack-dev@lists.openstack.org; Cathy Zhang
Subject: [openstack-dev] [neutron][networking-sfc] Proposing Mohan Kumar for 
networking-sfc core

Mohan has been working on networking-sfc project for over one year. He is a key 
contributor to the design/coding/testing of SFC CLI,  SFC Horizon, as well as 
ONOS controller support for SFC functionality. He has been great at helping out 
with bug fixes, testing, and reviews of all components of networking-sfc. He is 
also actively providing guidance to the users on their SFC setup, testing, and 
usage. Mohan showed a very good understanding of the networking-sfc design, 
code base, and its usage scenarios. Networking-sfc could use more cores as our 
user base and features have grown and I think he'd be a valuable addition.


Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,
Cathy


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


> -Original Message-
> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> Sent: June-13-16 1:43 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> 
> 
> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> Hi team,
> >>
> >> During the team meetings these weeks, we collaborated on the initial
> >> project roadmap. I summarized it below. Please review.
> >>
> >> * Implement a common container abstraction for different container
> >> runtimes. The initial implementation will focus on supporting basic
> >> container operations (i.e. CRUD).
> >
> > What COE's are being considered for the first implementation? Just
> > docker and kubernetes?
[Hongbin Lu] Container runtimes, docker in particular, are being considered for 
the first implementation. We discussed how to support COEs in Zun but could not 
reach an agreement on the direction. I will leave it for further discussion.

> >
> >> * Focus on non-nested containers use cases (running containers on
> >> physical hosts), and revisit nested containers use cases (running
> >> containers on VMs) later.
> >> * Provide two sets of APIs to access containers: The Nova APIs and the
> >> Zun-native APIs. In particular, the Zun-native APIs will expose full
> >> container capabilities, and Nova APIs will expose capabilities that
> >> are shared between containers and VMs.
> >
> > - Is the nova side going to be implemented in the form of a Nova
> > driver (like ironic's?)? What do you mean by APIs here?
[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova. The idea 
is similar to Ironic.

> >
> > - What operations are we expecting this to support (just CRUD
> > operations on containers?)?
[Hongbin Lu] We are working on finding the list of operations to support. There 
is a BP for tracking this effort: 
https://blueprints.launchpad.net/zun/+spec/api-design .

> >
> > I can see this driver being useful for specialized services like Trove
> > but I'm curious/concerned about how this will be used by end users
> > (assuming that's the goal).
[Hongbin Lu] I agree that end users might not be satisfied by basic container 
operations like CRUD. We will discuss how to offer more to make the service 
useful in production.

> >
> >
> >> * Leverage Neutron (via Kuryr) for container networking.
> >> * Leverage Cinder for container data volume.
> >> * Leverage Glance for storing container images. If necessary,
> >> contribute to Glance for missing features (i.e. support layer of
> >> container images).
> >
> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> This support is very minimalistic in nature, since it doesn't do
> anything beyond just storing a docker FS tar ball.
> I think it was felt that further support for docker FS was needed.
> While there were suggestions of a private docker registry, having
> something in-band (w.r.t. openstack) may be desirable.
[Hongbin Lu] Yes, Glance doesn't support layer of container images which is a 
missing feature.

> >> * Support enforcing multi-tenancy by doing the following:
> >> ** Add configurable options for scheduler to enforce neighboring
> >> containers belonging to the same tenant.
> >> ** Support hypervisor-based container runtimes.
> >>
> >> The following topics have been discussed, but the team cannot reach
> >> consensus on including them into the short-term project scope. We
> >> skipped them for now and might revisit them later.
> >> * Support proxying API calls to COEs.
> >
> > Any link to what this proxy will do and what service it'll talk to?
> > I'd generally advice against having proxy calls in services. We've
> > just done work in Nova to deprecate the Nova Image proxy.
[Hongbin Lu] Maybe "proxy" is not the right word. What I mean is to translate 
the request to API calls of COEs. For example, users request to create a 
container in Zun, then Zun creates a single-container pod in k8s.
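That translation can be sketched as follows. The field names on the request side are hypothetical, and nothing here reflects actual Zun code; only the output shape follows the Kubernetes v1 Pod schema.

```python
# Translate a minimal Zun-style container request into a single-container
# Kubernetes pod manifest, as described above. Input field names are
# hypothetical; the output follows the k8s v1 Pod schema.
def container_to_pod(container):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": container["name"]},
        "spec": {
            "containers": [{
                "name": container["name"],
                "image": container["image"],
                "command": container.get("command", []),
            }],
            "restartPolicy": "Never",
        },
    }
```

The point of the example is that this is a translation of one API call into another, not a blind proxying of requests.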

> >
> >> * Advanced container operations (i.e. keep container alive, load
> >> balancer setup, rolling upgrade).
> >> * Nested containers use cases (i.e. provision container hosts).
> >> * Container composition (i.e. support docker-compose like DSL).
> >>
> >> NOTE: I might forgot and mis-understood something. Please feel free
> >> to point out if anything is wrong or missing.
> >
> > It sounds you've got more than enough to work on for now, I think it's
> > fine to table these topics for now.
> >
> > just my $0.02
> > Flavio
> >
> >
> >
> >
> 
> 

[openstack-dev] [neutron][networking-sfc] Proposing Mohan Kumar for networking-sfc core

2016-06-13 Thread Cathy Zhang
Mohan has been working on networking-sfc project for over one year. He is a key 
contributor to the design/coding/testing of SFC CLI,  SFC Horizon, as well as 
ONOS controller support for SFC functionality. He has been great at helping out 
with bug fixes, testing, and reviews of all components of networking-sfc. He is 
also actively providing guidance to the users on their SFC setup, testing, and 
usage. Mohan showed a very good understanding of the networking-sfc design, 
code base, and its usage scenarios. Networking-sfc could use more cores as our 
user base and features have grown and I think he'd be a valuable addition.


Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,
Cathy


[openstack-dev] [nfv][tacker] Tacker sessions @ OPNFV Summit, Berlin

2016-06-13 Thread Sridhar Ramaswamy
For the folks going to OPNFV Summit @ Berlin next week, and interested in
NFV Orchestration + OpenStack, please consider stopping by at the following
OpenStack Tacker related sessions,

1) OPNFV Design Summit - Day 1/June 20th,
 https://wiki.opnfv.org/display/EVNT/Berlin+Design+Summit+Planning

2) OpenStack Tacker - Open Platform for NFV Orchestration - Thursday June
23 @ 3:30pm

thanks,
Sridhar


[openstack-dev] [ironic] weekly sub team status report

2016-06-13 Thread Loo, Ruby
Hi,

We are thrilled to present this week's subteam report for Ironic. As usual, 
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 6 June 2016)
- Ironic: 213 bugs (+9) + 182 wishlist items (+4). 15 new (+7), 147 in progress 
(+9), 0 critical, 34 high (-1) and 21 incomplete (-1)
- Inspector: 8 bugs + 19 wishlist items (-1). 0 new, 7 in progress, 0 critical, 
2 high (+1) and 0 incomplete
- Nova bugs with Ironic tag: 14 (-2). 0 new, 0 critical, 0 high

Upgrade (aka Grenade) testing (jlvillal/mgould):
================================================
- trello: https://trello.com/c/y127DhpD/3-ci-grenade-testing
- Grenade full job is now running in the check queue as non-voting! :)
- TODO: Setup the Grenade partial job (jlvillal)

Network isolation (Neutron/Ironic work) (jroll, TheJulia, devananda)
====================================================================
- trello: 
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- needs a rebase, but after that let's focus on reviewing this stuff
- please make sure it passes grenade :D

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
- removed old ramdisk support (not a direct improvement, but still)

Node search API (jroll, lintan, rloo)
=
- trello: https://trello.com/c/j35vJrSz/24-node-search-api

Node claims API (jroll, lintan)
===
- trello: https://trello.com/c/3ai8OQcA/25-node-claims-api

Multiple compute hosts (jroll, devananda)
=
- trello: https://trello.com/c/OXYBHStp/7-multiple-compute-hosts
- need to sync up with jaypipes and get a new spec up

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
- trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- Specification and dependent specification updated last week, reviews required.
- https://review.openstack.org/#/c/200496/
- https://review.openstack.org/#/c/294995/

Driver composition (dtantsur)
=
- trello: https://trello.com/c/fTya14y6/14-driver-composition
- No updates (please review the spec)

Inspector (dtantsur)
====================
- (milan) implementing grenade fork of Ironic in a slow pace; have got some 
progress on env setup (https://github.com/dparalen/devstack-gate-test )

.

Until the week after next,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard



Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Hongbin Lu


From: Antoni Segura Puimedon [mailto:toni+openstac...@midokura.com]
Sent: June-13-16 3:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: yanya...@cn.ibm.com; Qi Ming Teng; adit...@nectechnologies.in; 
sitlani.namr...@yahoo.in; flw...@catalyst.net.nz; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap



On Mon, Jun 13, 2016 at 12:10 AM, Hongbin Lu 
> wrote:
Hi team,

During the team meetings these weeks, we collaborated on the initial project 
roadmap. I summarized it below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two sets of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.

Great! Let us know anytime we can help

* Leverage Cinder for container data volume.
Have you considered fuxi?

https://github.com/openstack/fuxi
[Hongbin Lu] We discussed whether we should leverage Kuryr/Fuxi for storage, but we 
are unclear on what this project offers exactly and how it works. The maturity of 
the project is also a concern, but we will revisit it later.


* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (i.e. support layer of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable options for scheduler to enforce neighboring containers 
belonging to the same tenant.

What about making the scheduler pluggable instead of having a lot of 
configuration options?
[Hongbin Lu] For short-term, no. We will implement a very simple scheduler to 
start. For long-term, we will wait for the scheduler-as-a-service project: 
https://wiki.openstack.org/wiki/Gantt . I believe Gantt will have a pluggable 
architecture so that your requirement will be satisfied. If not, we will 
revisit it.
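A configurable same-tenant scheduler option of the kind listed in the roadmap could look something like the sketch below. The host model is deliberately simplified and hypothetical; nothing here reflects actual Zun or Gantt code.

```python
# Tenant-affinity filter sketch: a host passes only if every container
# already running on it belongs to the requesting tenant, so containers
# of different tenants never become neighbors. The host representation
# (name -> list of tenant ids of its containers) is a made-up model.
def filter_hosts(hosts, tenant_id):
    return [host for host, tenants in sorted(hosts.items())
            if all(t == tenant_id for t in tenants)]
```

Empty hosts pass the filter, so new capacity is always usable; the real scheduler would combine this with the usual resource filters.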


** Support hypervisor-based container runtimes.

Is that hyper.sh?
[Hongbin Lu] It could be, or Clear Container, or something similar.



The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

Will it have ordering primitives, i.e. this container won't start until this 
one is up and running?
I also wonder whether the Higgins container abstraction will have rich status 
reporting that can be used in ordering.
For example, whether it can differentiate started containers from those that 
are already listening on their exposed
ports.
[Hongbin Lu] I am open to that, but needs to discuss the idea further.


NOTE: I might forgot and mis-understood something. Please feel free to point 
out if anything is wrong or missing.

Best regards,
Hongbin




Re: [openstack-dev] [Group-based-policy] what does policy rule action redirect do

2016-06-13 Thread Sumit Naiksatam
On Mon, Jun 13, 2016 at 3:17 AM, yong sheng gong <18618199...@163.com> wrote:
> hi,
>
> I have followed the steps at
> https://github.com/openstack/group-based-policy/blob/master/gbpservice/tests/contrib/devstack/exercises/gbp_servicechain.sh
>
> and I can see the firewall and lb are created right.
>
> But I thought the vm client-1's traffic will be redirected to firewall, lb
> and last to web-vm-1 somehow.
>

The rendering of the REDIRECT action is specific to the configured
traffic plumber, which in turn would depend on the underlying network
technology that provides connectivity, and also the nature of the
services being chained.

> however, I cannot see how it is done, or whether the "redirect" action just
> helps to launch a firewall and LB and does nothing else.
>

I suspect you are using the default configuration, in which case a
traffic stitching plumber is used, and for the choice of Neutron FWaaS
firewall, and Neutron LBaaS LB, no traffic steering is required.
Creating the service instances in the appropriate context is enough to
get the traffic to flow through the services configured in the chain.

>
> any idea?
>
> thanks
> yong sheng gong
>
>
>
>
>
>
>



Re: [openstack-dev] [neutron][ovs] The way we deal with MTU

2016-06-13 Thread Eugene Nikanorov
That's interesting.


In our deployments we do something like br-ex (linux bridge, mtu 9000) -
OVSIntPort (mtu 65000) - br-floating (ovs bridge, mtu 1500) - br-int (ovs
bridge, mtu 1500).
qg ports then get created in br-int, traffic goes all the way, and that
altogether allows jumbo frames over external network.

For that reason I thought that mtu inside OVS doesn't really matter.
This, however is for ovs 2.4.1

I wonder if that behavior has changed and if the description is available
anywhere.

Thanks,
Eugene.

On Mon, Jun 13, 2016 at 9:49 AM, Ihar Hrachyshka 
wrote:

> Hi all,
>
> in Mitaka, we introduced a bunch of changes to the way we handle MTU in
> Neutron/Nova, making sure that the whole instance data path, starting from
> instance internal interface, thru hybrid bridge, into the br-int; as well
> as router data path (qr) have proper MTU value set on all participating
> devices. On hypervisor side, both Nova and Neutron take part in it, setting
> it with ip-link tool based on what Neutron plugin calculates for us. So far
> so good.
>
> Turns out that for OVS, it does not work as expected in regards to br-int.
> There was a bug reported lately: https://launchpad.net/bugs/1590397
>
> Briefly, when we try to set MTU on a device that is plugged into a bridge,
> and if the bridge already has another port with lower MTU, the bridge
> itself inherits MTU from that latter port, and Linux kernel (?) does not
> allow to set MTU on the first device at all, making ip link calls
> ineffective.
>
> AFAIU this behaviour is consistent with Linux bridging rules: you can’t
> have ports of different MTU plugged into the same bridge.
>
> Now, that’s a huge problem for Neutron, because we plug ports that belong
> to different networks (and that hence may have different MTUs) into the
> same br-int bridge.
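The constraint described above can be modelled in a few lines. This is a toy simulation only, not how the kernel or ovs-vswitchd enforce it, but it shows why ip-link calls on br-int ports become ineffective.

```python
# Toy model of the described behavior: a bridge inherits the minimum MTU
# of its ports, and a port cannot be raised above the minimum of its peers.
class Bridge:
    DEFAULT_MTU = 1500  # assumed default for an empty bridge

    def __init__(self):
        self.port_mtus = {}

    def plug(self, port, mtu):
        self.port_mtus[port] = mtu

    @property
    def mtu(self):
        # The bridge MTU is dragged down to the smallest port MTU.
        return min(self.port_mtus.values()) if self.port_mtus else self.DEFAULT_MTU

    def set_port_mtu(self, port, mtu):
        others = [v for p, v in self.port_mtus.items() if p != port]
        if others and mtu > min(others):
            # Raising one port above the bridge minimum is rejected,
            # mirroring why the ip-link calls on br-int were ineffective.
            return False
        self.port_mtus[port] = mtu
        return True
```

With ports of networks with different MTUs all plugged into one br-int, the first low-MTU port caps every later one, which is the heart of the problem described in the bug.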
>
> So I played with the code locally a bit and spotted that currently, we set
> MTU for router ports before we move their devices into router namespaces.
> And once the device is in a namespace, ip-link actually works. So I wrote a
> fix with a functional test that proves the point:
> https://review.openstack.org/#/c/327651/ The fix was validated by the
> reporter of the original bug and seems to fix the issue for him.
>
> It’s suspicious that it works from inside a namespace but not when the
> device is still in the root namespace. So I reached out to Jiri Benc from
> our local Open vSwitch team, and here is a quote:
>
> ===
>
> "It's a bug in ovs-vswitchd. It doesn't see the interface that's in
> other netns and thus cannot enforce the correct MTU.
>
> We'll hopefully fix it and disallow incorrect MTU setting even across
> namespaces. However, it requires significant effort and rework of ovs
> name space handling.
>
> You should not depend on the current buggy behavior. Don't set MTU of
> the internal interfaces higher than the rest of the bridge, it's not
> supported. Hacking this around by moving the interface to a netns is
> exploiting a bug.
>
> We can certainly discuss whether this limitation could be relaxed.
> Honestly, I don't know, it's for a discussion upstream. But as of now,
> it's not supported and you should not do it.”
>
> So basically, as long as we try to plug ports with different MTUs into the
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge
> per network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions,
> including upgrade impact since it will obviously introduce a dataplane
> downtime. That would be a huge shift in paradigm, probably too huge to
> swallow. The latter option may not fly with vswitch folks. Any better ideas?
>
> It’s also not clear whether we want to proceed with my immediate fix.
> Advice is welcome.
>
> Thanks,
> Ihar
>


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Sudipto Biswas



On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:

On 12/06/16 22:10 +, Hongbin Lu wrote:

Hi team,

During the team meetings these past weeks, we collaborated on the initial 
project roadmap. I have summarized it below. Please review.


* Implement a common container abstraction for different container 
runtimes. The initial implementation will focus on supporting basic 
container operations (i.e. CRUD).


What COEs are being considered for the first implementation? Just 
docker and kubernetes?
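For illustration only, a runtime-agnostic "common container abstraction" along these lines could be a small driver interface that each backend (docker, a hypervisor-based runtime, ...) implements. All names below are made up for the sketch; this is not Zun's actual API:

```python
import abc


class ContainerDriver(abc.ABC):
    """Hypothetical runtime-agnostic interface covering basic CRUD."""

    @abc.abstractmethod
    def create(self, name, image, **kwargs):
        """Create a container and return its id."""

    @abc.abstractmethod
    def show(self, container_id):
        """Return a dict describing the container."""

    @abc.abstractmethod
    def update(self, container_id, **kwargs):
        """Update mutable attributes of the container."""

    @abc.abstractmethod
    def delete(self, container_id, force=False):
        """Remove the container."""


class InMemoryDriver(ContainerDriver):
    """Toy backend that only demonstrates the contract; a real driver
    would translate these calls into docker/k8s API requests."""

    def __init__(self):
        self._containers = {}
        self._next_id = 0

    def create(self, name, image, **kwargs):
        self._next_id += 1
        cid = str(self._next_id)
        self._containers[cid] = dict(id=cid, name=name, image=image, **kwargs)
        return cid

    def show(self, container_id):
        return self._containers[container_id]

    def update(self, container_id, **kwargs):
        self._containers[container_id].update(kwargs)
        return self._containers[container_id]

    def delete(self, container_id, force=False):
        del self._containers[container_id]
```

A Nova driver for containers would then be one more consumer of the same interface, exposing only the subset of operations that VMs and containers share.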


* Focus on non-nested containers use cases (running containers on 
physical hosts), and revisit nested containers use cases (running 
containers on VMs) later.
* Provide two set of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full 
container capabilities, and Nova APIs will expose capabilities that 
are shared between containers and VMs.


- Is the Nova side going to be implemented in the form of a Nova driver 
(like Ironic's)? What do you mean by APIs here?

- What operations are we expecting this to support (just CRUD operations 
on containers?)?

I can see this driver being useful for specialized services like Trove, 
but I'm curious/concerned about how this will be used by end users 
(assuming that's the goal).



* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, 
contribute to Glance for missing features (i.e. support layer of 
container images).


Are you aware of https://review.openstack.org/#/c/249282/ ?
This support is very minimalistic in nature, since it doesn't do 
anything beyond just storing a docker FS tarball. I think it was felt 
that further support for the docker FS was needed. While there were 
suggestions of a private docker registry, having something in band 
(w.r.t. OpenStack) may be desirable.

* Support enforcing multi-tenancy by doing the following:
** Add configurable options for the scheduler to enforce that neighboring 
containers belong to the same tenant.

** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team could not reach 
consensus on including them in the short-term project scope. We 
skipped them for now and might revisit them later.

* Support proxying API calls to COEs.


Any link to what this proxy will do and what service it'll talk to? I'd 
generally advise against having proxy calls in services. We've just 
done work in Nova to deprecate the Nova image proxy.

* Advanced container operations (i.e. keep container alive, load 
balancer setup, rolling upgrade).

* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free 
to point out anything that is wrong or missing.


It sounds like you've got more than enough to work on for now; I think 
it's fine to table these topics.

just my $0.02
Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovs] The way we deal with MTU

2016-06-13 Thread Peters, Rawlin
Hi Ihar,

This reminds me of a mailing list thread from a while back about moving OVS 
ports between namespaces being considered harmful [1]. Do you know if that was 
ever resolved by the OVS folks? Or, is this MTU bug just further indication of 
this action being harmful?

Another comment inline.

Rawlin Peters

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-February/056834.html

On  Monday, June 13, 2016 10:50 AM, Ihar Hrachyshka wrote:
> 
> Hi all,
> 
> in Mitaka, we introduced a bunch of changes to the way we handle MTU in
> Neutron/Nova, making sure that the whole instance data path, starting from
> instance internal interface, thru hybrid bridge, into the br-int; as well as
> router data path (qr) have proper MTU value set on all participating devices.
> On hypervisor side, both Nova and Neutron take part in it, setting it with ip-
> link tool based on what Neutron plugin calculates for us. So far so good.
> 
> Turns out that for OVS, it does not work as expected in regards to br-int.
> There was a bug reported lately: https://launchpad.net/bugs/1590397
> 
> Briefly, when we try to set MTU on a device that is plugged into a bridge, and
> if the bridge already has another port with lower MTU, the bridge itself
> inherits MTU from that latter port, and Linux kernel (?) does not allow to set
> MTU on the first device at all, making ip link calls ineffective.
> 
> AFAIU this behaviour is consistent with Linux bridging rules: you can’t have
> ports of different MTU plugged into the same bridge.
> 
> Now, that’s a huge problem for Neutron, because we plug ports that belong
> to different networks (and that hence may have different MTUs) into the
> same br-int bridge.
> 
> So I played with the code locally a bit and spotted that currently, we set MTU
> for router ports before we move their devices into router namespaces. And
> once the device is in a namespace, ip-link actually works. So I wrote a fix 
> with
> a functional test that proves the point:
> https://review.openstack.org/#/c/327651/ The fix was validated by the
> reporter of the original bug and seems to fix the issue for him.
> 
> It’s suspicious that it works from inside a namespace but not when the
> device is still in the root namespace. So I reached out to Jiri Benc from our
> local Open vSwitch team, and here is a quote:
> 
> ===
> 
> "It's a bug in ovs-vswitchd. It doesn't see the interface that's in other 
> netns
> and thus cannot enforce the correct MTU.
> 
> We'll hopefully fix it and disallow incorrect MTU setting even across
> namespaces. However, it requires significant effort and rework of ovs name
> space handling.
> 
> You should not depend on the current buggy behavior. Don't set MTU of the
> internal interfaces higher than the rest of the bridge, it's not supported.
> Hacking this around by moving the interface to a netns is exploiting of a bug.
> 
> We can certainly discuss whether this limitation could be relaxed.
> Honestly, I don't know, it's for a discussion upstream. But as of now, it's 
> not
> supported and you should not do it.”
> 
> So basically, as long as we try to plug ports with different MTUs into the 
> same
> bridge, we are utilizing a bug in Open vSwitch, that may break us any time.
> 
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per
> network;
> - or talk to ovs folks on whether they may support that for us.
> 

It seems like another alternative would be to always use veth devices by 
default rather than internal OVS ports (i.e. ovs_use_veth = True), but that 
would likely mean taking a large performance hit that no one will be happy 
about.

> I understand the former option is too scary. It opens lots of questions,
> including upgrade impact since it will obviously introduce a dataplane
> downtime. That would be a huge shift in paradigm, probably too huge to
> swallow. The latter option may not fly with vswitch folks. Any better ideas?
> 
> It’s also not clear whether we want to proceed with my immediate fix.
> Advice is welcome.
> 
> Thanks,
> Ihar
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] Meeting time doodle

2016-06-13 Thread Michael Krotscheck
Productive meeting this morning, everyone. As agreed, we'll be meeting
weekly at 1400 UTC. I had to change the meeting time from Monday to
Wednesday, though, because twice a month there were no meeting channels
available. Since Wednesday at 1400 UTC was also available in the earlier
Doodle poll, I chose that date. If this doesn't work for you, please
comment to that effect on this code review:

https://review.openstack.org/#/c/329120/

Next meeting will be next week, Wednesday at 1400UTC, in
#openstack-meeting, and we'll be kicking it off with a design discussion
for the JavaScript OpenStack SDK. See you then!

Michael

On Fri, Jun 10, 2016 at 3:30 PM Michael Krotscheck wrote:

> Alright, the first meeting will be on monday, 1500UTC. For the first
> meeting we'll just meet in #openstack-javascript, and see which of the
> rooms are available at that time. The agenda is here, go ahead and add
> anything you'd think is pertinent:
>
> https://etherpad.openstack.org/p/javascript-meeting-agenda
>
> Michael
>
> On Mon, Jun 6, 2016 at 11:34 AM Michael Krotscheck wrote:
>
>> Between fuel, ironic, horizon, storyboard, the app ecosystem group, the
>> partridges, the pear trees, and the kitchen sinks, there's an awful lot of
>> JavaScript work happening in OpenStack. Enough so that it's a good idea to
>> actually start having regular meetings about it.
>>
>> I've tried to identify dates/times in the week when a meeting channel
>> might be open. If you work, consume, and/or want to contribute to
>> JavaScript in OpenStack, please fill out this doodle and let me know when
>> you can attend!
>>
>> http://doodle.com/poll/3hxubef6va5wzpkc
>>
>> Michael Krotscheck
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-13 Thread Ken Giusti
So is this horse dead now? 'Cuz I want a turn hitting it...

First of all, this thread brings up two separate messaging concepts:

1) at-least-once delivery
2) message acknowledgement

For #1 - oslo.messaging cannot guarantee that messages will not be
duplicated, specifically in the case of multiple consumers on the
same topic.  In that case, oslo.messaging can only dedup on a
per-consumer basis, because a consumer is unaware of what its peers
have received.  Therefore, if a re-transmit is sent to a different
consumer than the original transmit (think lost ack), both consumers
will regard the message as non-duplicate and process it.
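To make the per-consumer dedup point concrete, here is a toy model (pure Python, not oslo.messaging code) showing how a lost ack plus a retransmit to a different consumer yields a double execution:

```python
class Consumer:
    """Toy consumer with a per-consumer dedup window."""

    def __init__(self):
        self.seen_ids = set()
        self.processed = []

    def receive(self, msg_id, body):
        if msg_id in self.seen_ids:      # duplicate *for this consumer only*
            return
        self.seen_ids.add(msg_id)
        self.processed.append(body)


# Two consumers share the same topic.  The broker retransmits message 1
# because the original ack got lost, and the retransmit happens to land
# on the other consumer.
c1, c2 = Consumer(), Consumer()
c1.receive(1, 'create-volume')   # original delivery
c2.receive(1, 'create-volume')   # retransmit, different consumer

# Neither consumer saw a local duplicate, so the work ran twice.
total_runs = len(c1.processed) + len(c2.processed)
```

A broker-side dedup window or a shared dedup store would be needed to close this gap, which is exactly what a single consumer cannot do on its own.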

For #2, I'll go on the record and say that ack-before-process is
inherently broken.

The acknowledgment is used to inform the messaging subsystem (note I
didn't say 'sender') that the receiver has assumed ownership of the
message. It's a transfer-of-control thing.  The acknowledgment should
only be sent when the consuming application has completed processing
the message.  Can oslo.messaging assume that on behalf of the
consumer?  I don't think it should.  Acking a message that hasn't been
fully processed will negatively affect the message window maintained
by the message bus, possibly leading to over-delivery.

Having said that, a proper acking mechanism would allow for
asynchronous acking: sending the ack at a later time or from another
thread entirely.  As Mehdi pointed out, this would require some
significant changes to the oslo.messaging codebase.
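A similarly toy sketch of the ack-ordering argument (again, not oslo.messaging internals): with ack-before-process, a crash inside the handler silently loses the message; with ack-after-process, the broker can redeliver it:

```python
def consume_one(queue, handler, ack_before_process):
    """Pop one message and run the handler, which may 'crash'."""
    msg = queue.pop(0)               # delivery; pop == ack in this toy model
    try:
        handler(msg)
    except RuntimeError:
        if not ack_before_process:
            queue.insert(0, msg)     # no ack was sent: broker redelivers
        # with ack-before-process the broker has already forgotten the message


def crashing_handler(msg):
    raise RuntimeError('worker died mid-task')


q_early_ack = ['task-A']
consume_one(q_early_ack, crashing_handler, ack_before_process=True)
# q_early_ack is now empty: the task silently disappeared.

q_late_ack = ['task-A']
consume_one(q_late_ack, crashing_handler, ack_before_process=False)
# q_late_ack still holds 'task-A' for redelivery.
```

The price of the ack-after-process ordering is, of course, possible redelivery of an already-processed message, which brings the discussion back to point #1 above.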

my arm is tired - this is one big horse.

Thanks




On Tue, Jun 7, 2016 at 2:48 AM, Renat Akhmerov  wrote:
>
> On 04 Jun 2016, at 04:16, Doug Hellmann  wrote:
>
> Excerpts from Joshua Harlow's message of 2016-06-03 09:14:05 -0700:
>
> Deja, Dawid wrote:
>
> On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:
>
>
> On 05 May 2016, at 01:49, Mehdi Abaakouk wrote:
>
>
> Le 2016-05-04 10:04, Renat Akhmerov a écrit :
>
> No problem. Let’s not call it RPC (btw, I completely agree with that).
> But it’s one of the messaging patterns and hence should be under
> oslo.messaging I guess, no?
>
>
> Yes and no, we currently have two APIs (rpc and notification). And
> personally I regret to have the notification part in oslo.messaging.
>
> RPC and Notification are different beasts, and both are today limited
> in terms of feature because they share the same driver implementation.
>
> Our RPC errors handling is really poor, for example Nova just put
> instance in ERROR when something bad occurs in oslo.messaging layer.
> This enforces deployer/user to fix the issue manually.
>
> Our Notification system doesn't allow fine grain routing of message,
> everything goes into one configured topic/queue.
>
> And now we want to add a new one... I'm not against this idea,
> but I'm not a huge fan.
>
> Thoughts from folks (mistral and oslo)?
>
> Also, I was not at the Summit; should I conclude that the Tooz+taskflow
> approach (which ensures the idempotency of the application within the
> library API) has not been accepted by the Mistral folks?
>
> Speaking about idempotency, IMO it’s not a central question that we
> should be discussing here. Mistral users should have a choice: if they
> manage to make their actions idempotent it’s excellent, in many cases
> idempotency is certainly possible, btw. If no, then they know about
> potential consequences.
>
>
> You shouldn't mix the idempotency of the user task with the idempotency
> of a Mistral action (which will in the end run the user task).
> You can make your Mistral task-runner implementation idempotent and make
> the workflow behavior configurable for the case where the user task is
> interrupted or finishes badly, whether or not the user task itself is idempotent.
> This makes the thing very predictable. You will know for example:
> * if the user task has started or not,
> * if the error is due to a node power cut when the user task runs,
> * if you can safely retry a not idempotent user task on an other node,
> * you will not be impacted by rabbitmq restart or TCP connection issues,
> * ...
>
> With the oslo.messaging approach, everything will just end up in a
> generic MessageTimeout error.
>
> The RPC API already has this kind of issue. Applications have
> unfortunately dealt with that (and I think they want something better now).
> I'm just not convinced we should add a new "working queue" API in
> oslo.messaging for task scheduling that has the same issue we already
> have with RPC.
>
> Anyway, that's your choice; if you want to rely on this poor structure,
> I will not be against it, as I'm not involved in Mistral. I just want
> everybody to be aware of this.
>
> And even in this case there’s usually a number of measures that can be
> taken to mitigate those consequences (re-running workflows from certain
> points after manually fixing problems, rollback scenarios, etc.).
>
>
> taskflow allows to describe and 

Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Dmitry Tantsur

On 06/13/2016 06:28 PM, Ben Nemec wrote:

On 06/13/2016 09:41 AM, Jiri Tomasek wrote:

Hi all,

As we are close to merging the initial Nodes Registration workflows and
action [1, 2] using Mistral, which successfully provide the current
registration logic via the common API, I'd like to start a discussion on
how to improve it so that it satisfies both GUI and CLI requirements.
I'd like to describe the clients' goals and define requirements, describe
the current workflow's problems, and propose a solution. I'll record the
result of the discussion in the Blueprint [3] which Ryan already created.


CLI goals and optimal workflow


The CLI's main benefit is that its commands can simply
become part of a script, so it is important that the operation is
idempotent. The optimal CLI workflow is:

The user runs 'openstack baremetal import' and provides an instackenv.json
file which includes all node information. When the registration fails at
some point, the user is notified about the error and re-runs the command
with the same set of nodes. Rinse and repeat until all nodes are
properly registered.


I would actually not describe this as the optimal workflow for CLI
registration either.  It would be much better if the registration
completed for all the nodes that it can in the first place and then any
failed nodes can be cleaned up later.  There's no reason one bad node in
a file containing 100 nodes should block the entire deployment.
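The behavior described here could be sketched as a per-node loop that collects failures instead of aborting on the first one. This is illustrative only; `register_node` is a made-up stand-in for the real Ironic registration call:

```python
def register_nodes(nodes_json, register_node):
    """Attempt every node; return (registered, failed) instead of
    raising on the first bad entry."""
    registered, failed = [], []
    for node in nodes_json:
        try:
            registered.append(register_node(node))
        except Exception as exc:     # one bad node must not block the rest
            failed.append((node, str(exc)))
    return registered, failed


def fake_register(node):
    """Toy stand-in for the real registration call: it rejects nodes
    that have no mac address."""
    if not node.get('mac'):
        raise ValueError('node has no mac address')
    return node['mac']


nodes = [{'mac': 'aa:bb'}, {'name': 'broken'}, {'mac': 'cc:dd'}]
ok, bad = register_nodes(nodes, fake_register)
# ok  -> ['aa:bb', 'cc:dd']
# bad -> [({'name': 'broken'}, 'node has no mac address')]
```

With this shape, the failed list is exactly what a per-node Zaqar notification or a final summary message would carry back to either client.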

On that note, the only part of your proposal that I'm not sure would be
useful for the CLI is the non-blocking part.  I don't know that a CLI
fire-and-forget mode makes a lot of sense, although if there's a way for
the CLI to check the status then that might be fine too.  However,
pretty much all of the other usability stuff you're talking about would
benefit the CLI too.




GUI goals and optimal workflow
=

The GUI's main goal is to provide a user-friendly way to register nodes,
inform the user about the process, handle problems, and let the user fix
them. The GUI strives to be responsive and interactive.

The GUI allows the user to add node specifications manually, one by one,
via the provided form, or (in the same manner as the CLI) to provide an
instackenv.json file which holds the node descriptions. Importing the
file (or adding a node manually) populates an array of nodes the user
wants to register. The user is able to browse these nodes and make
corrections to their configuration. The GUI provides client-side
validations to verify inputs (node name format, required fields, MAC
address and IP address format, etc.)

Then the user triggers the registration. The nodes are moved to the nodes
table as they are registered. If an error occurs during registration of
any of the nodes, the user is notified about the issue, can fix it in the
registration form, and can re-trigger registration for the failed nodes.
Rinse and repeat until all nodes are successfully registered and in the
proper state (manageable).

Such a workflow keeps the GUI interactive: the user does not have to stare
at a spinner for several minutes (when registering hundreds of nodes)
hoping that nothing goes wrong. The user is constantly informed about the
progress, is able to react to the situation as desired, can freely
interact with the GUI while registration happens in the background, and
can register nodes in batches.


Current solution
=

The current solution uses the register_or_update_nodes workflow [1], which
takes a nodes_json array and runs the register_or_update_nodes and
set_nodes_managed tasks. When the whole operation completes, it sends a
Zaqar message notifying about the result of the registration of the
whole batch of nodes.

register_or_update_nodes runs tripleo.register_or_update_nodes action
[2] which uses business logic in tripleo_common/utils/nodes.py

The utils/nodes.py module was originally extracted from tripleoclient
to get the business logic behind the common API. It does the following:

- converts the instackenv.json nodes format to the appropriate Ironic
driver format (driver-info fields)
- sets kernel and ramdisk id defaults if they are not provided
- for each node, tests whether the node already exists (finding nodes by
MAC address) and updates it or registers it as new based on the result.
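That last decision could be sketched roughly like this (an illustration of the idea only, not the actual utils/nodes.py code; the function name and data shapes are made up):

```python
def register_or_update(existing_by_mac, node):
    """Return ('update', node_uuid) when any of the node's mac addresses
    is already known, otherwise ('register', None)."""
    for mac in node.get('macs', []):
        if mac in existing_by_mac:
            return 'update', existing_by_mac[mac]
    return 'register', None


# Toy lookup table: mac address -> Ironic node uuid.
existing = {'aa:bb:cc:dd:ee:01': 'uuid-1'}

known = register_or_update(existing, {'macs': ['aa:bb:cc:dd:ee:01']})
new = register_or_update(existing, {'macs': ['aa:bb:cc:dd:ee:99']})
# known -> ('update', 'uuid-1'), new -> ('register', None)
```

Keying purely on MAC address is what makes the operation re-runnable, but it is also the behavior questioned below, since a node whose MACs change will be re-registered rather than updated.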


Current Problems:
- no Zaqar notification is sent for each node
- nodes are registered in a batch; registration fails when an error
happens on a certain node, leaving already-registered nodes in an
inconsistent state
- the workflow does not notify the user about which nodes were registered
and which failed; the only thing the user gets is the relevant error message
- when the workflow succeeds, the registered_nodes list sent in the Zaqar
message has outdated information
- when nodes are updated via nodes registration, the workflow ends up
as failed, without any error output, although the nodes are updated
successfully

- utils/nodes.py decides whether the node should be created or updated
based on mac address 

Re: [openstack-dev] [release][infra] os-brick 1.4.0 not on our pypi mirrors

2016-06-13 Thread Davanum Srinivas
Thanks a ton Jeremy!

On Mon, Jun 13, 2016 at 9:46 AM, Jeremy Stanley  wrote:
> On 2016-06-11 21:08:14 -0400 (-0400), Davanum Srinivas wrote:
>> Can someone nudge the bandersnatch thingy please?
>>
>> Release in pypi:
>> https://pypi.python.org/pypi/os-brick/1.4.0
>>
>> Error in Nova:
>> http://logs.openstack.org/40/326940/4/check/gate-nova-python34-db/bd55893/console.html#_2016-06-12_00_50_13_521
>
> The sdist was never successfully uploaded. I think the release time
> (June 9) coincides with the denial of service attack against PyPI
> which was causing some uploads to partially fail.
>
> I have since manually uploaded the sdist for this to pypi and
> retriggered the release announcement job.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ovs] The way we deal with MTU

2016-06-13 Thread Ihar Hrachyshka
Hi all,

in Mitaka, we introduced a bunch of changes to the way we handle MTU in 
Neutron/Nova, making sure that the whole instance data path, starting from 
instance internal interface, thru hybrid bridge, into the br-int; as well as 
router data path (qr) have proper MTU value set on all participating devices. 
On hypervisor side, both Nova and Neutron take part in it, setting it with 
ip-link tool based on what Neutron plugin calculates for us. So far so good.

Turns out that for OVS, it does not work as expected in regards to br-int. 
There was a bug reported lately: https://launchpad.net/bugs/1590397

Briefly, when we try to set the MTU on a device that is plugged into a bridge, 
and the bridge already has another port with a lower MTU, the bridge itself 
inherits the MTU from that latter port, and the Linux kernel (?) does not allow 
setting the MTU on the first device at all, making the ip link calls ineffective.

AFAIU this behaviour is consistent with Linux bridging rules: you can’t have 
ports of different MTU plugged into the same bridge.

Now, that’s a huge problem for Neutron, because we plug ports that belong to 
different networks (and that hence may have different MTUs) into the same 
br-int bridge.
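For reference, the iproute2 calls in play can be sketched as argv builders (device and namespace names below are made up; the real code goes through Neutron's ip_lib wrappers). The only difference between the failing and the working case is whether the `ip link set ... mtu` runs before or after the device is moved into its namespace:

```python
def set_mtu_cmd(dev, mtu, namespace=None):
    """Build the `ip link set` argv, wrapped in `ip netns exec` when the
    device already lives in a namespace."""
    cmd = ['ip', 'link', 'set', 'dev', dev, 'mtu', str(mtu)]
    if namespace:
        cmd = ['ip', 'netns', 'exec', namespace] + cmd
    return cmd


# Root namespace: the call that currently fails for ports on br-int
# once the bridge has inherited a lower MTU.
root_cmd = set_mtu_cmd('qr-1234', 9000)

# Inside the router namespace: the ordering the proposed fix relies on.
ns_cmd = set_mtu_cmd('qr-1234', 9000, namespace='qrouter-abcd')
```

As the quote below explains, the in-namespace variant only works because ovs-vswitchd cannot see the device any more, so relying on it means relying on a bug.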

So I played with the code locally a bit and spotted that currently, we set MTU 
for router ports before we move their devices into router namespaces. And once 
the device is in a namespace, ip-link actually works. So I wrote a fix with a 
functional test that proves the point: https://review.openstack.org/#/c/327651/ 
The fix was validated by the reporter of the original bug and seems to fix the 
issue for him.

It’s suspicious that it works from inside a namespace but not when the device 
is still in the root namespace. So I reached out to Jiri Benc from our local 
Open vSwitch team, and here is a quote:

===

"It's a bug in ovs-vswitchd. It doesn't see the interface that's in
other netns and thus cannot enforce the correct MTU.

We'll hopefully fix it and disallow incorrect MTU setting even across
namespaces. However, it requires significant effort and rework of ovs
name space handling.

You should not depend on the current buggy behavior. Don't set MTU of
the internal interfaces higher than the rest of the bridge, it's not
supported. Hacking this around by moving the interface to a netns is
exploiting of a bug.

We can certainly discuss whether this limitation could be relaxed.
Honestly, I don't know, it's for a discussion upstream. But as of now,
it's not supported and you should not do it.”

So basically, as long as we try to plug ports with different MTUs into the same 
bridge, we are relying on a bug in Open vSwitch that may break us at any time.

I guess our alternatives are:
- either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
network;
- or talk to ovs folks on whether they may support that for us.

I understand the former option is too scary. It opens lots of questions, 
including upgrade impact since it will obviously introduce a dataplane 
downtime. That would be a huge shift in paradigm, probably too huge to swallow. 
The latter option may not fly with vswitch folks. Any better ideas?

It’s also not clear whether we want to proceed with my immediate fix. Advice 
is welcome.

Thanks,
Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][cinder] os-brick 1.4.0 release (newton)

2016-06-13 Thread Ihar Hrachyshka
This release broke gate and will be blocked in g-r:

https://bugs.launchpad.net/os-brick/+bug/1592043

> On 13 Jun 2016, at 18:42, no-re...@openstack.org wrote:
> 
> We are content to announce the release of:
> 
> os-brick 1.4.0: OpenStack Cinder brick library for managing local
> volume attaches
> 
> This release is part of the newton release series.
> 
> With source available at:
> 
>http://git.openstack.org/cgit/openstack/os-brick
> 
> With package available at:
> 
>https://pypi.python.org/pypi/os-brick
> 
> Please report issues through launchpad:
> 
>http://bugs.launchpad.net/os-brick
> 
> For more details, please see below.
> 
> Changes in os-brick 1.3.0..1.4.0
> 
> 
> 0582781 Copy encryptors from Nova to os-brick
> a2d38af Updated from global requirements
> bb19819 Mock time.sleep in ISCSIConnectorTestCase
> ba83221 Updated from global requirements
> 111688e Updated from global requirements
> 8d9e115 Updated from global requirements
> e7e6801 Ensure that the base connector is platform independent
> 934cdef Updated from global requirements
> 80703d3 os-brick refactor get_connector_properties
> 0e75bf6 Handle exception case with only target_portals
> 9640b73 Fix coverage generation
> dbf77fb Trivial rootwrap -> privsep replacement
> 9fc9cc4 Updated from global requirements
> b0b0c70 Updated from global requirements
> 
> Diffstat (except docs and test files)
> -
> 
> .coveragerc  |   4 +-
> .gitignore   |   1 +
> etc/os-brick/rootwrap.d/os-brick.filters | 105 +--
> os_brick/encryptors/__init__.py  |  99 +++
> os_brick/encryptors/base.py  |  65 
> os_brick/encryptors/cryptsetup.py| 124 
> os_brick/encryptors/luks.py  | 143 +
> os_brick/encryptors/nop.py   |  47 +++
> os_brick/exception.py|   9 +
> os_brick/executor.py |  11 +-
> os_brick/initiator/connector.py  | 423 +--
> os_brick/initiator/linuxfc.py|  10 -
> os_brick/initiator/linuxscsi.py  |  24 +-
> os_brick/local_dev/lvm.py|  27 +-
> os_brick/privileged/__init__.py  |  23 ++
> os_brick/privileged/rootwrap.py  |  82 ++
> os_brick/remotefs/remotefs.py|  11 +-
> os_brick/utils.py|  39 +++
> requirements.txt |  12 +-
> test-requirements.txt|   4 +-
> tox.ini  |   5 +-
> 31 files changed, 1700 insertions(+), 347 deletions(-)
> 
> 
> Requirements updates
> 
> 
> diff --git a/requirements.txt b/requirements.txt
> index cfde43e..2344258 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -6 +6 @@ pbr>=1.6 # Apache-2.0
> -Babel>=1.3 # BSD
> +Babel>=2.3.4 # BSD
> @@ -8 +8 @@ eventlet!=0.18.3,>=0.18.2 # MIT
> -oslo.concurrency>=3.5.0 # Apache-2.0
> +oslo.concurrency>=3.8.0 # Apache-2.0
> @@ -12,3 +12,4 @@ oslo.i18n>=2.1.0 # Apache-2.0
> -oslo.service>=1.0.0 # Apache-2.0
> -oslo.utils>=3.5.0 # Apache-2.0
> -requests!=2.9.0,>=2.8.1 # Apache-2.0
> +oslo.privsep>=1.5.0 # Apache-2.0
> +oslo.service>=1.10.0 # Apache-2.0
> +oslo.utils>=3.11.0 # Apache-2.0
> +requests>=2.10.0 # Apache-2.0
> @@ -16,0 +18 @@ six>=1.9.0 # MIT
> +castellan>=0.4.0 # Apache-2.0
> diff --git a/test-requirements.txt b/test-requirements.txt
> index dece983..cb2a1a5 100644
> --- a/test-requirements.txt
> +++ b/test-requirements.txt
> @@ -8 +8 @@ python-subunit>=0.0.18 # Apache-2.0/BSD
> -reno>=0.1.1 # Apache2
> +reno>=1.6.2 # Apache2
> @@ -15 +15 @@ testtools>=1.4.0 # MIT
> -os-testr>=0.4.1 # Apache-2.0
> +os-testr>=0.7.0 # Apache-2.0
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][cinder] os-brick 1.4.0 release (newton)

2016-06-13 Thread no-reply
We are content to announce the release of:

os-brick 1.4.0: OpenStack Cinder brick library for managing local
volume attaches

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-brick

With package available at:

https://pypi.python.org/pypi/os-brick

Please report issues through launchpad:

http://bugs.launchpad.net/os-brick

For more details, please see below.

Changes in os-brick 1.3.0..1.4.0


0582781 Copy encryptors from Nova to os-brick
a2d38af Updated from global requirements
bb19819 Mock time.sleep in ISCSIConnectorTestCase
ba83221 Updated from global requirements
111688e Updated from global requirements
8d9e115 Updated from global requirements
e7e6801 Ensure that the base connector is platform independent
934cdef Updated from global requirements
80703d3 os-brick refactor get_connector_properties
0e75bf6 Handle exception case with only target_portals
9640b73 Fix coverage generation
dbf77fb Trivial rootwrap -> privsep replacement
9fc9cc4 Updated from global requirements
b0b0c70 Updated from global requirements

Diffstat (except docs and test files)
-

.coveragerc  |   4 +-
.gitignore   |   1 +
etc/os-brick/rootwrap.d/os-brick.filters | 105 +--
os_brick/encryptors/__init__.py  |  99 +++
os_brick/encryptors/base.py  |  65 
os_brick/encryptors/cryptsetup.py| 124 
os_brick/encryptors/luks.py  | 143 +
os_brick/encryptors/nop.py   |  47 +++
os_brick/exception.py|   9 +
os_brick/executor.py |  11 +-
os_brick/initiator/connector.py  | 423 +--
os_brick/initiator/linuxfc.py|  10 -
os_brick/initiator/linuxscsi.py  |  24 +-
os_brick/local_dev/lvm.py|  27 +-
os_brick/privileged/__init__.py  |  23 ++
os_brick/privileged/rootwrap.py  |  82 ++
os_brick/remotefs/remotefs.py|  11 +-
os_brick/utils.py|  39 +++
requirements.txt |  12 +-
test-requirements.txt|   4 +-
tox.ini  |   5 +-
31 files changed, 1700 insertions(+), 347 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index cfde43e..2344258 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel>=2.3.4 # BSD
@@ -8 +8 @@ eventlet!=0.18.3,>=0.18.2 # MIT
-oslo.concurrency>=3.5.0 # Apache-2.0
+oslo.concurrency>=3.8.0 # Apache-2.0
@@ -12,3 +12,4 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.service>=1.0.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
-requests!=2.9.0,>=2.8.1 # Apache-2.0
+oslo.privsep>=1.5.0 # Apache-2.0
+oslo.service>=1.10.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
@@ -16,0 +18 @@ six>=1.9.0 # MIT
+castellan>=0.4.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index dece983..cb2a1a5 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ python-subunit>=0.0.18 # Apache-2.0/BSD
-reno>=0.1.1 # Apache2
+reno>=1.6.2 # Apache2
@@ -15 +15 @@ testtools>=1.4.0 # MIT
-os-testr>=0.4.1 # Apache-2.0
+os-testr>=0.7.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][infra] os-brick 1.4.0 not on our pypi mirrors

2016-06-13 Thread Jeremy Stanley
On 2016-06-11 21:08:14 -0400 (-0400), Davanum Srinivas wrote:
> Can someone nudge the bandersnatch thingy please?
> 
> Release in pypi:
> https://pypi.python.org/pypi/os-brick/1.4.0
> 
> Error in Nova:
> http://logs.openstack.org/40/326940/4/check/gate-nova-python34-db/bd55893/console.html#_2016-06-12_00_50_13_521

The sdist was never successfully uploaded. I think the release time
(June 9) coincides with the denial of service attack against PyPI
which was causing some uploads to partially fail.

I have since manually uploaded the sdist for this to pypi and
retriggered the release announcement job.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Ben Nemec
On 06/13/2016 09:41 AM, Jiri Tomasek wrote:
> Hi all,
> 
> As we are close to merging the initial Nodes Registration workflows and 
> action [1, 2] using Mistral which successfully provides the current 
> registration logic via common API, I'd like to start discussion on how 
> to improve it so it satisfies GUI and CLI requirements. I'd like to try 
> to describe the clients goals and define requirements, describe current 
> workflow problems and propose a solution. I'd like to record the result 
> of discussion to Blueprint [3] which Ryan already created.
> 
> 
> CLI goals and optimal workflow
> 
> 
> CLI's main benefit is based on the fact that its commands can simply 
> become part of a script, so it is important that the operation is 
> idempotent. The optimal CLI workflow is:
> 
> User runs 'openstack baremetal import' and provides instackenv.json file 
> which includes all nodes information. When the registration fails at 
> some point, user is notified about the error and re-runs the command 
> with the same set of nodes. Rinse and repeat until all nodes are 
> properly registered.

I would actually not describe this as the optimal workflow for CLI
registration either.  It would be much better if the registration
completed for all the nodes that it can in the first place and then any
failed nodes can be cleaned up later.  There's no reason one bad node in
a file containing 100 nodes should block the entire deployment.

On that note, the only part of your proposal that I'm not sure would be
useful for the CLI is the non-blocking part.  I don't know that a CLI
fire-and-forget mode makes a lot of sense, although if there's a way for
the CLI to check the status then that might be fine too.  However,
pretty much all of the other usability stuff you're talking about would
benefit the CLI too.

> 
> 
> GUI goals and optimal workflow
> =
> 
> GUI's main goal is to provide a user-friendly way to register nodes, 
> inform the user about the process, handle problems and let the user fix 
> them. GUI strives for being responsive and interactive.
> 
> GUI allows the user to add node specifications manually one by one via a 
> provided form, or (in the same manner as the CLI) to provide the 
> instackenv.json file which holds the nodes description. Importing the 
> file (or adding node manually) will populate an array of nodes the user 
> wants to register. User is able to browse these nodes and make 
> corrections to their configuration. GUI provides client side validations 
> to verify inputs (node name format, required fields, mac address, ip 
> address format etc.)
> 
> Then user triggers the registration. The nodes are moved to nodes table 
> as they are being registered. If an error occurs during registration of 
> any of the nodes, user is notified about the issue and can fix it in 
> registration form and can re-trigger registration for failed nodes. 
> Rinse and repeat until all nodes are successfully registered and in 
> proper state (manageable).
> 
> Such a workflow keeps the GUI interactive; the user does not have to look at 
> the spinner for several minutes (in case of registering hundreds of 
> nodes), hoping that nothing goes wrong. The user is constantly 
> informed about the progress, able to react to the situation as 
> needed, and able to freely interact with the GUI while 
> registration is happening on the background. User is able to register 
> nodes in batches.
> 
> 
> Current solution
> =
> 
> Current solution uses register_or_update_nodes workflow [1] which takes 
> a nodes_json array and runs register_or_update_nodes and 
> set_nodes_managed tasks. When the whole operation completes it sends 
> Zaqar message notifying about the result of the registration of the 
> whole batch of nodes.
> 
> register_or_update_nodes runs tripleo.register_or_update_nodes action 
> [2] which uses business logic in tripleo_common/utils/nodes.py
> 
> The utils/nodes.py module was originally extracted from tripleoclient 
> to get the business logic behind the common API. It does the following:
> 
> - converts the instackenv.json nodes format to appropriate ironic driver 
> format (driver-info fields)
> - sets kernel and ramdisk ids defaults if they're not provided
> - for each node it tests if node already exists (finds nodes by mac 
> addresses) and updates it or registers it as new based on the result.
> 
> 
> Current Problems:
> - no zaqar notification is sent for each node
> - nodes are registered in batch, registration fails when an error happens 
> on a certain node, leaving already registered nodes in inconsistent state
> - the workflow does not notify the user about which nodes have been registered 
> and which failed; the only thing the user gets is the relevant error message
> - when the workflow succeeds, the registered_nodes list sent by Zaqar 
> message has outdated information
> - when nodes are updated using nodes registration, the workflow ends up 
> as failed, without any error output, although the nodes are updated 
> successfully

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-13 Thread John McDowall
Juno,

Whatever is easiest for you – I can submit WIP patches today for 
networking-ovn and networking-ovs. If you send me your github login I will add 
you as a collaborator to my private repo.

I am currently working on getting the changes into ovs/ovn ovn-northd.c to 
support the new schema – hopefully today or tomorrow. Most of the IDL is in and 
I can get info from networking-sfc to ovs/ovn northd.

Regards

John
From: Na Zhu
Date: Monday, June 13, 2016 at 6:25 AM
To: John McDowall
Cc: discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

Hi John,

I know you have been busy recently, so sorry to disturb you. I want to ask whether I 
can submit patches to your private repo. I tested your code changes and found some 
minor errors; I think we can work together to get the debugging done faster, 
and then you can submit the WIP patch.

What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: Na Zhu/China/IBM@IBMCN
To: John McDowall
Cc: Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date: 2016/06/09 16:18
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN




Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the OVN 
driver is different. The OVS driver has to build the SFC flows and 
send them to the OVS agent, while the OVN driver does not need to do that; it 
only needs to send the SFC parameters to the OVN northbound DB, and 
ovn-controller can then build the SFC flows.

networking-sfc defines some common APIs for each driver, see 
networking_sfc/services/sfc/drivers/base.py. I think for OVN we only need to 
write the methods for port-chain create/update/delete and leave the other 
methods empty. What do you think?
If you agree, the OVN SFC driver has to be refactored - do you want me 
to do it?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Amitabha Biswas
Cc: Na Zhu/China/IBM@IBMCN, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date: 2016/06/09 00:53
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN




Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented it. 
I think it is a good solution as it minimizes impact on both networking-ovn and 
networking-sfc. I have updated my repos, if you have suggestions for 
improvements let me know.

I agree that there needs to be some refactoring of the networking-sfc driver 
code. I think the team did a good job with it as it was easy for me to create 
the OVN driver (copy and paste). As more drivers are created I think the model 
will get polished and refactored.

Regards

John

From: Amitabha Biswas
Date: Tuesday, June 7, 2016 at 11:36 PM
To: John McDowall
Cc: Na Zhu, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

Hi John,

Looking at the code with Srilatha, it seems like the 

Re: [openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Dmitry Tantsur

On 06/13/2016 04:41 PM, Jiri Tomasek wrote:

Hi all,

As we are close to merging the initial Nodes Registration workflows and
action [1, 2] using Mistral which successfully provides the current
registration logic via common API, I'd like to start discussion on how
to improve it so it satisfies GUI and CLI requirements. I'd like to try
to describe the clients' goals, define requirements, describe current
workflow problems and propose a solution. I'd like to record the result
of discussion to Blueprint [3] which Ryan already created.


Hi and thanks for writing this up. Just a few clarifying comments inline.




CLI goals and optimal workflow


CLI's main benefit is based on the fact that its commands can simply
become part of a script, so it is important that the operation is
idempotent. The optimal CLI workflow is:

User runs 'openstack baremetal import' and provides instackenv.json file
which includes all nodes information. When the registration fails at
some point, user is notified about the error and re-runs the command
with the same set of nodes. Rinse and repeat until all nodes are
properly registered.


Note that while in your example everything works, the command is not 
idempotent, and e.g. running it in the middle of deployment will 
probably cause funny things to happen.





GUI goals and optimal workflow
=

GUI's main goal is to provide a user-friendly way to register nodes,
inform the user about the process, handle problems and let the user fix
them. GUI strives for being responsive and interactive.

GUI allows the user to add node specifications manually one by one via a
provided form, or (in the same manner as the CLI) to provide the
instackenv.json file which holds the nodes description. Importing the
file (or adding node manually) will populate an array of nodes the user
wants to register. User is able to browse these nodes and make
corrections to their configuration. GUI provides client side validations
to verify inputs (node name format, required fields, mac address, ip
address format etc.)


It's worth noting that Ironic has an API to provide required and optional 
properties for all drivers. But of course not in instackenv format ;)




Then user triggers the registration. The nodes are moved to nodes table
as they are being registered. If an error occurs during registration of
any of the nodes, user is notified about the issue and can fix it in
registration form and can re-trigger registration for failed nodes.
Rinse and repeat until all nodes are successfully registered and in
proper state (manageable).

Such a workflow keeps the GUI interactive; the user does not have to look at
the spinner for several minutes (in case of registering hundreds of
nodes), hoping that nothing goes wrong. The user is constantly
informed about the progress, able to react to the situation as
needed, and able to freely interact with the GUI while
registration is happening on the background. User is able to register
nodes in batches.


Current solution
=

Current solution uses register_or_update_nodes workflow [1] which takes
a nodes_json array and runs register_or_update_nodes and
set_nodes_managed tasks. When the whole operation completes it sends
Zaqar message notifying about the result of the registration of the
whole batch of nodes.

register_or_update_nodes runs tripleo.register_or_update_nodes action
[2] which uses business logic in tripleo_common/utils/nodes.py

The utils/nodes.py module was originally extracted from tripleoclient
to get the business logic behind the common API. It does the following:

- converts the instackenv.json nodes format to appropriate ironic driver
format (driver-info fields)
- sets kernel and ramdisk ids defaults if they're not provided
- for each node it tests if node already exists (finds nodes by mac
addresses) and updates it or registers it as new based on the result.


Current Problems:
- no zaqar notification is sent for each node
- nodes are registered in batch, registration fails when an error happens
on a certain node, leaving already registered nodes in inconsistent state
- the workflow does not notify the user about which nodes have been registered
and which failed; the only thing the user gets is the relevant error message
- when the workflow succeeds, the registered_nodes list sent by Zaqar
message has outdated information
- when nodes are updated using nodes registration, the workflow ends up
as failed, without any error output, although the nodes are updated
successfully

- utils/nodes.py decides whether the node should be created or updated
based on MAC address, which is subject to change. It needs to be done by
UUID, which is fixed.


Well, only if a user/CLI/GUI generates this UUID.

Also, it does not use only MACs. We don't require MACs for non-virtual 
nodes, so it also uses some smart matching with BMC credentials in the 
same fashion as Ironic Inspector.



- utils/nodes.py uses instackenv.json nodes list format - the conversion 
should be done in the client

Re: [openstack-dev] [rally] "Failed to create the requested number of tenants" error

2016-06-13 Thread Andrey Kurilin
hi Nate,
It looks like I know the core of your issue. We will try to fix it as soon
as possible.

On Fri, Jun 10, 2016 at 6:05 PM, Nate Johnston wrote:

> Running 'rally deployment check' appears to come up with good results:
>
> > keystone endpoints are valid and following services are available:
> > +-------------+----------------+-----------+
> > | services    | type           | status    |
> > +-------------+----------------+-----------+
> > | __unknown__ | object-store2  | Available |
> > | __unknown__ | s3_cluster2    | Available |
> > | ceilometer  | metering       | Available |
> > | cinder      | volume         | Available |
> > | cloud       | cloudformation | Available |
> > | glance      | image          | Available |
> > | heat        | orchestration  | Available |
> > | keystone    | identity       | Available |
> > | neutron     | network        | Available |
> > | nova        | compute        | Available |
> > | s3          | s3             | Available |
> > | swift       | object-store   | Available |
> > +-------------+----------------+-----------+
>
> Running `openstack network list` as you said appears to give good
> results:
>
> > [root@osrally-wc-1d ~]# openstack network list | wc -l
> > 62
> > [root@osrally-wc-1d ~]# openstack network list | tail -4
> > | ed4e4ab4-0b3d-447a-8333-e6020221391f | sample_network_5_updated |
> 6496fffb-47d0-4a0a-8a52-5b12ac5f15fc
>  |
> > | f7d26120-057f-4dd5-a5b3-a684c4ce3350 | WayneNework  |
> b0234e85-6e19-4f2b-ac6a-aac875ed445f
>  |
> > | fd569510-3306-4b41-b97a-c4d337881128 | private-test-net |
> f3aa1c34-d08a-41ea-87cf-019e87805a2e
>  |
> >
> +--+--+--+
>
> Looking at `rally deployment config | grep auth_url` shows the correct
> value for the auth URL, which is the centralized keystone service.
>
> Thanks,
>
> --N.
>
> On Fri, Jun 10, 2016 at 05:42:47PM +0300, Aleksandr Maretskiy wrote:
> > Nate,
> >
> > please try to make this simple check to make sure that everything is set up
> > properly:
> >
> > 1) command "rally deployment check" should print an ascii-table with a list
> > of services available
> > 2) load rally auto-generated openrc file and run some OpenStack CLI command,
> > for example:
> >
> >   $ . ~/.rally/openrc
> >   $ openstack network list   # does this work as expected?
> >
> > Also, make sure that the value of "auth_url" in the Rally deployment
> > configuration (this can be obtained via command "rally deployment config")
> > is correct. Please use the correct deployment configuration as opposed to
> > envvars like OS_AUTH_URL while using Rally
> >
> >
> > On Fri, Jun 10, 2016 at 5:25 PM, Nate Johnston wrote:
> >
> > > Boris,
> > >
> > > We use a common Keystone across all of our production environments; I
> > > was running this against a new deployment we are working on making
> > > production-ready, so I had specified OS_AUTH_URL to be the common
> > > keystone.  There is no keystone deployed in this datacenter.
> > >
> > > Is there a specific way I need to tweak Rally for that kind of setup?
> > >
> > > Thanks,
> > >
> > > --N.
> > >
> > > P.S. Sending you the catalog under separate cover.
> > >
> > > Thu, Jun 09, 2016 at 10:15:09PM -0700, Boris Pavlovic wrote:
> > > > Nate,
> > > >
> > > > This looks quite strange. Could you share the information from keystone
> > > > catalog?
> > > >
> > > > Seems like you didn't setup admin endpoint for keystone in that region.
> > > >
> > > > Best regards,
> > > > Boris Pavlovic
> > > >
> > > > On Thu, Jun 9, 2016 at 12:41 PM, Nate Johnston <openstackn...@gmail.com>
> > > > wrote:
> > > >
> > > > > Rally folks,
> > > > >
> > > > > I am working with an engineer to get him up to speed on Rally on a new
> > > > > development.  He is trying out running a few tests from the samples
> > > > > directory, like samples/tasks/scenarios/nova/list-hypervisors.yaml - but
> > > > > he keeps getting the error "Completed: Exit context: `users`\nTask
> > > > > config is invalid: `Unable to setup context 'users': 'Failed to create
> > > > > the requested number of tenants.'`"
> > > > >
> > > > > This is against an Icehouse environment with Mitaka Rally; when I run
> > > > > Rally with debug logging I see:
> > > > >
> > > > > 2016-06-08 18:59:24.692 11197 ERROR rally.common.broker EndpointNotFound:
> > > > > admin endpoint for identity service in  region not found
> > > > >
> > > > > However I note that $OS_AUTH_URL is set in the Rally deployment... see
> > > > > http://paste.openstack.org/show/509002/ for the full log.
> > > > >
> > > > > Any ideas you could give me would be much appreciated.  Thanks!
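The EndpointNotFound above is easier to reason about with the catalog lookup spelled out: the client filters the Keystone catalog by service type, interface and region, so a deployment whose admin identity endpoint was registered under a different (or empty) region yields no match. A minimal sketch of that matching logic — not Rally's actual code; the catalog structure and function name are illustrative:

```python
def find_endpoint(catalog, service_type, interface, region):
    """Return the first endpoint URL matching all three filters."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for endpoint in service["endpoints"]:
            if (endpoint["interface"] == interface
                    and endpoint["region"] == region):
                return endpoint["url"]
    # This mirrors the shape of the error in the log above; with an
    # empty region string the message reads "... in  region not found".
    raise LookupError("%s endpoint for %s service in %s region not found"
                      % (interface, service_type, region))
```

With the deployment's auth_url pointing at the shared Keystone, the fix is to register an admin identity endpoint for the region the deployment actually uses (or set that region in the deployment configuration).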

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Hongbin Lu
Gary,

It is hard to tell whether your change fits into Magnum upstream without 
further details. I encourage you to upload your changes to Gerrit, so 
that we can review and discuss them inline. Also, keep in mind that the change 
might be rejected if it doesn’t fit upstream objectives or it 
duplicates other existing work, but I hope that won’t discourage your 
contribution. If your change is related to Ironic, we might ask you to 
coordinate your work with Spyros and/or others who are working on Ironic 
integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: June-13-16 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) wrote:
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model 
and other Python code (making Magnum find the correct Heat template), which is 
used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

This feature is useful to magnum and there is a blueprint for that:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the ironic integration, we should modify the existing templates, there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new yaml files, or did you modify the existing
kubemaster, minion and cluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-Original Message-
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges is listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, i.e. create a BP to remove requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
driver do not need versioning because we can get Heat to pass back 
the template it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts – bay telemetry, bay monitoring, container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring health of individual nodes in the bay 
cluster and we decided to postpone work as more investigation is required on 
what this should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will be completed by Magnum 
by adding access to cAdvisor/Heapster, with cAdvisor access baked in by 
default.

- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes 

[openstack-dev] [TripleO] Nodes Registration workflow improvements

2016-06-13 Thread Jiri Tomasek

Hi all,

As we are close to merging the initial Nodes Registration workflows and 
action [1, 2] using Mistral which successfully provides the current 
registration logic via common API, I'd like to start discussion on how 
to improve it so it satisfies GUI and CLI requirements. I'd like to try 
to describe the clients goals and define requirements, describe current 
workflow problems and propose a solution. I'd like to record the result 
of discussion to Blueprint [3] which Ryan already created.



CLI goals and optimal workflow


CLI's main benefit is based on the fact that its commands can simply 
become part of a script, so it is important that the operation is 
idempotent. The optimal CLI workflow is:


User runs 'openstack baremetal import' and provides instackenv.json file 
which includes all nodes information. When the registration fails at 
some point, user is notified about the error and re-runs the command 
with the same set of nodes. Rinse and repeat until all nodes are 
properly registered.



GUI goals and optimal workflow
=

GUI's main goal is to provide a user-friendly way to register nodes, 
inform the user about the process, handle problems and let the user fix 
them. GUI strives for being responsive and interactive.


GUI allows the user to add node specifications manually one by one via a 
provided form, or (in the same manner as the CLI) to provide the 
instackenv.json file which holds the nodes description. Importing the 
file (or adding node manually) will populate an array of nodes the user 
wants to register. User is able to browse these nodes and make 
corrections to their configuration. GUI provides client side validations 
to verify inputs (node name format, required fields, mac address, ip 
address format etc.)


Then user triggers the registration. The nodes are moved to nodes table 
as they are being registered. If an error occurs during registration of 
any of the nodes, user is notified about the issue and can fix it in 
registration form and can re-trigger registration for failed nodes. 
Rinse and repeat until all nodes are successfully registered and in 
proper state (manageable).


Such a workflow keeps the GUI interactive; the user does not have to look at 
the spinner for several minutes (in case of registering hundreds of 
nodes), hoping that nothing goes wrong. The user is constantly 
informed about the progress, able to react to the situation as 
needed, and able to freely interact with the GUI while 
registration is happening on the background. User is able to register 
nodes in batches.



Current solution
=

Current solution uses register_or_update_nodes workflow [1] which takes 
a nodes_json array and runs register_or_update_nodes and 
set_nodes_managed tasks. When the whole operation completes it sends 
Zaqar message notifying about the result of the registration of the 
whole batch of nodes.


register_or_update_nodes runs tripleo.register_or_update_nodes action 
[2] which uses business logic in tripleo_common/utils/nodes.py


The utils/nodes.py module was originally extracted from tripleoclient 
to get the business logic behind the common API. It does the following:


- converts the instackenv.json nodes format to appropriate ironic driver 
format (driver-info fields)

- sets kernel and ramdisk ids defaults if they're not provided
- for each node it tests if node already exists (finds nodes by mac 
addresses) and updates it or registers it as new based on the result.
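The register-or-update decision described above, together with the proposed UUID-first matching (MAC matching only as a fallback), can be sketched as follows. This is a simplified stand-in for tripleo_common/utils/nodes.py, not its real API: the node store is just a list of dicts.

```python
import uuid


def _find_existing(registered, node):
    """Match an incoming node against already-registered ones.

    Prefer the stable UUID when the caller supplies one; fall back to
    MAC-address matching, which is what the current code relies on and
    which breaks when a NIC is replaced.
    """
    if node.get("uuid"):
        for existing in registered:
            if existing.get("uuid") == node["uuid"]:
                return existing
        return None
    macs = set(node.get("macs", []))
    for existing in registered:
        if macs & set(existing.get("macs", [])):
            return existing
    return None


def register_or_update(registered, nodes_json):
    """Create new nodes or update existing ones; return per-node results."""
    results = []
    for node in nodes_json:
        existing = _find_existing(registered, node)
        if existing is not None:
            # Update in place, never overwriting the stable identifier.
            existing.update({k: v for k, v in node.items() if k != "uuid"})
            results.append((existing["uuid"], "updated"))
        else:
            new = dict(node)
            new.setdefault("uuid", str(uuid.uuid4()))
            registered.append(new)
            results.append((new["uuid"], "registered"))
    return results
```

Re-running the same instackenv.json input turns every node into an update instead of a duplicate registration, which is the idempotency property the CLI workflow depends on.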



Current Problems:
- no zaqar notification is sent for each node
- nodes are registered in batch, registration fails when an error happens 
on a certain node, leaving already registered nodes in inconsistent state
- the workflow does not notify the user about which nodes have been registered 
and which failed; the only thing the user gets is the relevant error message
- when the workflow succeeds, the registered_nodes list sent by Zaqar 
message has outdated information
- when nodes are updated using nodes registration, the workflow ends up 
as failed, without any error output, although the nodes are updated 
successfully


- utils/nodes.py decides whether the node should be created or updated 
based on MAC address, which is subject to change. It needs to be done by 
UUID, which is fixed.
- utils/nodes.py uses instackenv.json nodes list format - the conversion 
should be done in the client


- instackenv.json uses a nodes list format which is not compatible with 
Ironic, forcing us to do format conversions and limiting the Ironic 
driver support



Proposed changes
===

To satisfy the clients' requirements we need to:
- ensure the idempotency of running the nodes registration when providing 
the instackenv.json

- enable the workflow to track each node registration workflow separately
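A per-node flow satisfying both requirements could look roughly like this. It is a sketch, not the actual Mistral workflow: `register_one` and `notify` stand in for the Ironic registration action and a Zaqar queue post, and the message shapes are assumptions.

```python
def register_nodes(nodes, register_one, notify):
    """Register each node independently and emit a message per node.

    register_one: callable doing the actual registration (assumed).
    notify: callable posting a message to the client, e.g. a Zaqar queue.
    """
    succeeded, failed = [], []
    for node in nodes:
        try:
            register_one(node)
        except Exception as exc:
            # A per-node failure is recorded and reported; it is not
            # fatal to the rest of the batch.
            failed.append(node)
            notify({"node": node.get("name"), "status": "FAILED",
                    "message": str(exc)})
        else:
            succeeded.append(node)
            notify({"node": node.get("name"), "status": "SUCCESS"})
    # A final summary tells clients exactly which nodes to fix and retry.
    notify({"status": "SUCCESS" if not failed else "FAILED",
            "registered": [n.get("name") for n in succeeded],
            "failed": [n.get("name") for n in failed]})
    return succeeded, failed
```

The per-node messages give the GUI its incremental progress updates, while the summary gives the CLI the list of failed nodes to re-run — one bad node no longer blocks the other ninety-nine.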


The changes can be done in 2 steps:
1. refactor register_or_update_nodes workflow and utils/nodes.py

- register_or_update_nodes workflow calls 

Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-06-13 Thread Ildikó Váncsa
Hi All,

A friendly reminder that we will have the next Cinder-Nova API changes meeting 
in a few hours at 1700UTC on #openstack-meeting-cp.

For the ongoing work items and agenda details please see the following 
etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes

Thanks and Best Regards,
/Ildikó

> -Original Message-
> From: Ildikó Váncsa [mailto:ildiko.van...@ericsson.com]
> Sent: May 31, 2016 20:57
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly 
> IRC meetings
> 
> Hi All,
> 
> We skipped the Monday slot this week due to the holiday in the US. __Only 
> this week__ we will hold the meeting on __Thursday,
> 1700UTC__ on the __#openstack-meeting-cp__ channel.
> 
> Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> 
> Thanks and Best Regards,
> /Ildikó
> 
> > -Original Message-
> > From: Ildikó Váncsa [mailto:ildiko.van...@ericsson.com]
> > Sent: May 20, 2016 18:31
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly 
> > IRC meetings
> >
> > Hi All,
> >
> > We have now the approved slot for the Cinder-Nova interaction changes 
> > meeting series. The new slot is __Monday, 1700UTC__, it
> will
> > be on channel  __#openstack-meeting-cp__.
> >
> > Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> > Summary about ongoing items: 
> > http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html
> >
> > We will have one exception which is May 30 as it is a US holiday, I will 
> > announce a temporary slot for that week.
> >
> > Thanks,
> > /Ildikó
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][TripleO] Adding interfaces to environment files?

2016-06-13 Thread Ben Nemec
On 06/08/2016 07:00 AM, Jiri Tomasek wrote:
> 
On Wed, Jun 8, 2016 at 11:23 AM, Steven Hardy wrote:
> 
> On Tue, Jun 07, 2016 at 04:53:12PM -0400, Zane Bitter wrote:
> > On 07/06/16 15:57, Jay Dobies wrote:
> > > >
> > > > 1. Now that we support passing un-merged environment files to heat,
> > > > it'd be
> > > > good to support an optional description key for environments,
> > >
> > > I've never understood why the environment file doesn't have a
> > > description field itself. Templates have descriptions, and IMO it 
> makes
> > > sense for an environment to describe what its particular additions to
> > > the parameters/registry do.
> >
> > Just use a comment?
> 
> This doesn't work for any of the TripleO use-cases because you can't
> parse
> a comment.
> 
> The requirements are twofold:
> 
> 1. Prior to creating the stack, we need a way to present choices to the
> user about which environment files to enable.  This is made much
> easier if
> you can include a human-readable description about what the environment
> actually does.
> 
> 2. After creating the stack, we need a way to easily introspect the
> stack
> and see what environments were enabled.  Same as above, it'd be
> super-awesome if we could just then strip out the description of
> what they
> do, so we don't have to maintain hacks like this:
> 
> 
> https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities-map.yaml
> 
> The description is one potential easy win here, it just makes far more
> sense to keep the description of a thing inside the same file (just
> like we
> do already with HOT templates).
> 
> The next step beyond that is the need to express dependencies between
> things, which is what I was trying to address via the
> https://review.openstack.org/#/c/196656/ spec - that kinda stalled
> when it
> took 7 months to land so we'll probably need that capabilities_map
> for that
> unless we can revive that effort.
> 
> > > I'd be happy to write that patch, but I wanted to first double check
> > > that there wasn't a big philosophical reason why it shouldn't have a
> > > description.
> >
> > There's not much point unless you're also adding an API to retrieve
> > environment files like Steve mentioned. Comments get stripped when the 
> yaml
> > is parsed, but that's fairly academic if you don't have a way to get it 
> out
> > again.
> 
> Yup, I'm absolutely proposing we add an interface to retrieve the
> environment files (or, in fact, the entire stack files map, and a
> list of
> environment_files).
> 
> Steve
> 
> 
> 
> Hi, thanks for bringing this topic up. The capabilities map provides various
> information about environments. We definitely need to get rid of it in
> favor of having Heat provide this from the environment file metadata.
> How much additional work would it be to enable environments provide more
> metadata than just a description?
> 
> From the GUI point of view an information structure such as following
> would be much appreciated:
> 
> environments/environments/net-bond-with-vlans.yaml:
> 
> meta:
>   label: Net Bond with Vlans
>   description: >
> Configure each role to use a pair of bonded nics (nic2 and
> nic3) and configures an IP address on each relevant isolated network
> for each role. This option assumes use of Network Isolation.
>   requires:
> - environments/network-isolation.yaml
> - overcloud-resource-registry-puppet.yaml
>   alternatives:
> - environments/net-single-nic-with-vlans.yaml
>   group:
> - network-configuration
> 
> Grouping of environments is a bit problematic. We could introduce
> something like 'group' which could categorize the environments. Problem
> is that each group would eventually require its own entity to cover the
> group label and description.

This is why I actually don't think grouping information belongs in the
environment files at all.  I left some related thoughts in a response to
Steve on https://review.openstack.org/#/c/253638/ but mostly it boils
down to the fact that the group metadata is at a different level from
the environments so putting it in the environment is a bad fit.

Note that the same applies to alternatives.  Putting requirements in the
environments makes perfect sense, but making them be aware of all their
siblings too gets messy (consider that if we add a single new network
environment now all of the existing environments would have to be
updated as well).
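To make the `requires` idea concrete, here is a minimal sketch of how a client-side tool might resolve such metadata into an ordered list of environment files. The `requires` key and the resolution logic are hypothetical, following Jirka's example above; nothing like this exists in Heat today.

```python
# Hypothetical client-side resolution of a "requires" key in environment
# metadata (as proposed in this thread -- not an implemented Heat API).
# Requirements are listed before the environments that depend on them.

def resolve_environments(selected, metadata):
    resolved = []
    seen = set()

    def visit(env):
        if env in seen:
            return
        seen.add(env)
        # Environments with no metadata entry simply have no requirements.
        for dep in metadata.get(env, {}).get("requires", []):
            visit(dep)
        resolved.append(env)

    for env in selected:
        visit(env)
    return resolved

# Metadata mirroring the net-bond-with-vlans.yaml example above.
metadata = {
    "environments/net-bond-with-vlans.yaml": {
        "requires": [
            "environments/network-isolation.yaml",
            "overcloud-resource-registry-puppet.yaml",
        ],
    },
}

order = resolve_environments(
    ["environments/net-bond-with-vlans.yaml"], metadata)
print(order)
```

Whether a missing requirement should be auto-included (as above) or reported as an error to the user would be part of the interface design.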

> 
> 
> -- Jirka
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [nova] next notification subteam meeting

2016-06-13 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.06.14 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160614T17




Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-13 Thread Na Zhu
Hi John,

I know you have been busy recently, so sorry to disturb you. I want to ask 
whether I can submit patches to your private repo. I have tested your code 
changes and found some minor errors; I think we can work together to get the 
debugging done faster, and then you can submit the WIP patch.

What do you think? 




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Na Zhu/China/IBM@IBMCN
To: John McDowall 
Cc: Srilatha Tangirala , "OpenStack Development 
Mailing List \(not for usage questions\)" 
, discuss 
Date:   2016/06/09 16:18
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

I know most of the OVN driver code is copied from the OVS driver, but the 
OVN driver is different. The OVS driver has to build the sfc flows and send 
them to the ovs agent, while the OVN driver does not need to do that: it 
only needs to send the sfc parameters to the OVN northbound DB, and 
ovn-controller can then build the sfc flows itself.
 
networking-sfc defines some common APIs for each driver (see 
networking_sfc/services/sfc/drivers/base.py). I think for OVN we only need 
to write the methods for port-chain create/update/delete and leave the 
other methods empty. What do you think? 
If you agree with me, the OVN sfc driver has to be refactored; do you want 
me to do it?
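To illustrate the split being proposed, here is a rough sketch: only the port-chain hooks do real work, handing the parameters to an OVN northbound wrapper, while the remaining hooks are no-ops. The base class and the `nb_api` object are simplified stand-ins, not the real networking-sfc driver base or the OVN IDL.

```python
# Sketch of an OVN sfc driver that implements only the port-chain hooks.
# SfcDriverBase stands in for networking_sfc's real driver base class.

class SfcDriverBase:
    def create_port_chain(self, context): pass
    def update_port_chain(self, context): pass
    def delete_port_chain(self, context): pass
    def create_port_pair(self, context): pass
    def delete_port_pair(self, context): pass

class OVNSfcDriver(SfcDriverBase):
    """Push port-chain parameters to OVN NB; ovn-controller builds the flows."""

    def __init__(self, nb_api):
        # nb_api stands in for the OVSDB wrapper (e.g. sfc_ovn.py); here it
        # is just a list recording the calls that would be sent northbound.
        self.nb_api = nb_api

    def create_port_chain(self, context):
        self.nb_api.append(("create_port_chain", context))

    def update_port_chain(self, context):
        self.nb_api.append(("update_port_chain", context))

    def delete_port_chain(self, context):
        self.nb_api.append(("delete_port_chain", context))

    # Flow building is ovn-controller's job, so the port-pair hooks
    # inherited from the base class stay as no-ops.

nb_calls = []
driver = OVNSfcDriver(nb_calls)
driver.create_port_chain({"id": "pc1"})
driver.create_port_pair({"id": "pp1"})  # no-op for OVN
print(nb_calls)
```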



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Amitabha Biswas 
Cc:Na Zhu/China/IBM@IBMCN, Srilatha Tangirala 
, "OpenStack Development Mailing List (not for usage 
questions)" , discuss 

Date:2016/06/09 00:53
Subject:Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Amitabha,

Thanks for looking at it. I took the suggestion from Juno and implemented 
it. I think it is a good solution as it minimizes impact on both 
networking-ovn and networking-sfc. I have updated my repos, if you have 
suggestions for improvements let me know.

I agree that there needs to be some refactoring of the networking-sfc 
driver code. I think the team did a good job with it as it was easy for me 
to create the OVN driver ( copy and paste). As more drivers are created I 
think the model will get polished and refactored.

Regards

John

From: Amitabha Biswas 
Date: Tuesday, June 7, 2016 at 11:36 PM
To: John McDowall 
Cc: Na Zhu , Srilatha Tangirala , 
"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, discuss 
Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John, 

Looking at the code with Srilatha, it seems like the 
https://github.com/doonhammer/networking-ovn repo has gone down the path of 
having an sfc_ovn.py file in the networking-ovn/ovsdb directory. This file 
deals with the SFC-specific OVSDB transactions in OVN. So to answer your 
question of invoking OVS-IDL, we can import the sfc_ovn.py file from 
networking_sfc/services/src/drivers/ovn/driver.py and invoke calls into 
IDL.

Another aspect from a networking-sfc point of view is the duplication of 
code between networking_sfc/services/src/drivers/ovn/driver.py and 
networking_sfc/services/src/drivers/ovs/driver.py in the 
https://github.com/doonhammer/networking-sfc repo. There should be a 
mechanism to coalesce the common code and invoke the OVS and OVN specific 
parts separately.

Regards
Amitabha

On Jun 7, 2016, at 9:54 PM, John McDowall  
wrote:

Juno, Srilatha,

I need some help. I have fixed most of the obvious typos in the three 
repos and merged them with mainline. There is still a problem with the 
build, I think in mech_driver.py, but I will fix it asap in the morning.

However I am not sure of the best way to interface between sfc and ovn.

In networking_sfc/services/src/drivers/ovn/driver.py there is a function 
that creates a deep copy of the port-chain dict, 
create_port_chain(self, context, port_chain). 

Looking at networking-ovn I think it should use mech_driver.py so we can 
call the OVS-IDL to send the parameters to ovn. However I am not sure of 
the best way to do it. Could you make some suggestions or send me some 
sample code showing the best approach?

I will get the ovs/ovn cleaned up and ready. Also Louis from the 
networking-sfc has posted a draft 

Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-13 Thread Flavio Percoco

On 12/06/16 22:10 +, Hongbin Lu wrote:

Hi team,

During the team meetings these past weeks, we collaborated on the initial 
project roadmap. I summarized it as below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).


What COEs are being considered for the first implementation? Just Docker and 
Kubernetes?


* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two set of APIs to access containers: The Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and Nova APIs will expose capabilities that are shared between 
containers and VMs.


- Is the nova side going to be implemented in the form of a Nova driver (like
ironic's?)? What do you mean by APIs here?

- What operations are we expecting this to support (just CRUD operations on
containers?)?

I can see this driver being useful for specialized services like Trove but I'm
curious/concerned about how this will be used by end users (assuming that's the
goal).



* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (e.g. support for layered container images).


Are you aware of https://review.openstack.org/#/c/249282/ ?



* Support enforcing multi-tenancy by doing the following:
** Add configurable options for scheduler to enforce neighboring containers 
belonging to the same tenant.
** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team cannot reach consensus 
on including them into the short-term project scope. We skipped them for now 
and might revisit them later.
* Support proxying API calls to COEs.


Any link to what this proxy will do and what service it'll talk to? I'd
generally advise against having proxy calls in services. We've just done work in
Nova to deprecate the Nova Image proxy.


* Advanced container operations (i.e. keep container alive, load balancer 
setup, rolling upgrade).
* Nested containers use cases (i.e. provision container hosts).
* Container composition (i.e. support docker-compose like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free to 
point out anything that is wrong or missing.


It sounds like you've got more than enough to work on for now; I think it's
fine to table these topics.

just my $0.02
Flavio

--
@flaper87
Flavio Percoco





[openstack-dev] weekly meeting #85

2016-06-13 Thread Emilien Macchi
Hi Puppeteers!

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting-4.

Here's a first agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160614

Feel free to add more topics, and any outstanding bug and patch.

See you tomorrow!
Thanks,
-- 
Emilien Macchi



[openstack-dev] [Openstack] OpenStack Newton B1 for Ubuntu 16.04 LTS and Ubuntu 16.10

2016-06-13 Thread Corey Bryant
Hi All,

The Ubuntu OpenStack team is pleased to announce the general availability
of the OpenStack Newton B1 milestone in Ubuntu 16.10 and for Ubuntu 16.04
LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS


You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on
Ubuntu 16.04 installations by running the following commands:

echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu
xenial-updates/newton main" | sudo tee
/etc/apt/sources.list.d/newton-uca.list
sudo apt-get install -y ubuntu-cloud-keyring
sudo apt-get update

The Ubuntu Cloud Archive for Newton includes updates for Cinder, Designate,
Glance, Heat, Horizon, Keystone, Manila, Neutron, Neutron-FWaaS,
Neutron-LBaaS, Neutron-VPNaaS, Nova, and Swift (2.8.0).

You can check out the full list of packages and versions at [0].

Ubuntu 16.10
------------

No extra steps required; just start installing OpenStack!

Branch Package Builds
---------------------

We’ve resurrected the branch package builds of OpenStack projects that we
had in place a while back - if you want to try out the latest master branch
updates, or updates to stable branches, the following PPA’s are now
up-to-date and maintained:

   sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty
   sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
   sudo add-apt-repository ppa:openstack-ubuntu-testing/newton

Bear in mind these are built per-commit-ish (30-minute checks for new commits
at the moment), so YMMV from time to time.

Reporting bugs
--------------

Any issues please report bugs using the 'ubuntu-bug' tool:

  sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!

Regards,
Corey
(on behalf of the Ubuntu OpenStack team)

[0]
http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/newton_versions.html


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Mooney, Sean K


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Monday, June 13, 2016 1:12 PM
> To: Armando M. 
> Cc: Carl Baldwin ; OpenStack Development Mailing
> List ; Jay Pipes
> ; Maxime Leroy ; Moshe Levi
> ; Russell Bryant ; sahid
> ; Mooney, Sean K 
> Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk
> ports
> 
> On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> > On 13 June 2016 at 10:35, Daniel P. Berrange 
> wrote:
> >
> > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > > Hi,
> > > >
> > > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > > Neutron.  If not, there is a spec and a fair number of patches in
> > > > progress for this.  Essentially, the goal is to allow a VM to
> > > > connect to multiple Neutron networks by tagging traffic on a
> > > > single port with VLAN tags.
> > > >
> > > > This effort will have some effect on vif plugging because the
> > > > datapath will include some changes that will effect how vif
> > > > plugging is done today.
> > > >
> > > > The design proposal for trunk ports with OVS adds a new bridge for
> > > > each trunk port.  This bridge will demux the traffic and then
> > > > connect to br-int with patch ports for each of the networks.
> > > > Rawlin Peters has some ideas for expanding the vif capability to
> > > > include this wiring.
> > > >
> > > > There is also a proposal for connecting to linux bridges by using
> > > > kernel vlan interfaces.
> > > >
> > > > This effort is pretty important to Neutron in the Newton
> > > > timeframe.  I wanted to send this out to start rounding up the
> > > > reviewers and other participants we need to see how we can start
> > > > putting together a plan for nova integration of this feature (via
> os-vif?).
> > >
> > > I've not taken a look at the proposal, but on the timing side of
> > > things it is really way too late to start this email thread asking
> > > for design input from os-vif or nova. We're way past the spec
> > > proposal deadline for Nova in the Newton cycle, so nothing is going
> > > to happen until the Ocata cycle no matter what Neutron wants in
> Newton.
> >
> >
> > For sake of clarity, does this mean that the management of the os-vif
> > project matches exactly Nova's, e.g. same deadlines and processes
> > apply, even though the core team and its release model are different
> from Nova's?
> > I may have erroneously implied that it wasn't, also from past talks I
> > had with johnthetubaguy.
> 
> No, we don't intend to force ourselves to only release at milestones
> like nova does. We'll release the os-vif library whenever there is new
> functionality in its code that we need to make available to
> nova/neutron.
> This could be as frequently as once every few weeks.
[Mooney, Sean K] 
I have been tracking and contributing to the vlan-aware-VMs work in 
neutron since the Vancouver summit, so I am quite familiar with what would 
have to be modified to support vlan trunking. Provided the modifications do 
not delay the conversion to os-vif in nova this cycle, I would be happy to 
review and help develop the code to support this use case.

In the ovs case at least, which we have been discussing here
https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst
no changes should be required in nova and all changes will be confined to 
the ovs plugin. In essence: check if the bridge exists; if not, create it 
with the port id; then plug as normal.

Again, though, I do agree that we should focus on completing the initial 
nova integration, but I don't think that means we have to exclude other 
feature enhancements as long as they do not prevent us achieving that 
goal.
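As a sketch of that "create if missing, then plug" step, the commands below show what the plug sequence might issue. The bridge naming and exact arguments are illustrative only, not the actual os-vif or Neutron code; `--may-exist` is what makes the existence check implicit and the operation idempotent.

```python
# Illustrative ovs-vsctl call sequence for plugging a trunk port.
# --may-exist makes add-br/add-port idempotent: re-running a failed plug
# does not error out if the bridge or port was already created.

def trunk_plug_commands(port_id, vm_iface):
    # Hypothetical naming scheme: trunk bridge named after the port id.
    trunk_br = "tbr-%s" % port_id[:11]
    return [
        ["ovs-vsctl", "--may-exist", "add-br", trunk_br],
        ["ovs-vsctl", "--may-exist", "add-port", trunk_br, vm_iface],
        ["ovs-vsctl", "set", "Interface", vm_iface,
         "external-ids:iface-id=%s" % port_id],
    ]

cmds = trunk_plug_commands(
    "0a1b2c3d-4e5f-6789-abcd-ef0123456789", "tap0a1b2c3d-4e")
for cmd in cmds:
    print(" ".join(cmd))
```

The patch ports demuxing the trunk bridge onto br-int per network would be created the same way, one pair per subport VLAN.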


> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-
> manager.org :|
> |: http://autobuild.org   -o-
> http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-
> vnc :|


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 07:39:29AM -0400, Assaf Muller wrote:
> On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange  
> wrote:
> > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> >> Hi,
> >>
> >> You may or may not be aware of the vlan-aware-vms effort [1] in
> >> Neutron.  If not, there is a spec and a fair number of patches in
> >> progress for this.  Essentially, the goal is to allow a VM to connect
> >> to multiple Neutron networks by tagging traffic on a single port with
> >> VLAN tags.
> >>
> >> This effort will have some effect on vif plugging because the datapath
> >> will include some changes that will effect how vif plugging is done
> >> today.
> >>
> >> The design proposal for trunk ports with OVS adds a new bridge for
> >> each trunk port.  This bridge will demux the traffic and then connect
> >> to br-int with patch ports for each of the networks.  Rawlin Peters
> >> has some ideas for expanding the vif capability to include this
> >> wiring.
> >>
> >> There is also a proposal for connecting to linux bridges by using
> >> kernel vlan interfaces.
> >>
> >> This effort is pretty important to Neutron in the Newton timeframe.  I
> >> wanted to send this out to start rounding up the reviewers and other
> >> participants we need to see how we can start putting together a plan
> >> for nova integration of this feature (via os-vif?).
> >
> > I've not taken a look at the proposal, but on the timing side of things
> > it is really way too late to start this email thread asking for design
> > input from os-vif or nova. We're way past the spec proposal deadline
> > for Nova in the Newton cycle, so nothing is going to happen until the
> > Ocata cycle no matter what Neutron wants in Newton. For os-vif our
> > focus right now is exclusively on getting existing functionality ported
> > over, and integrated into Nova in Newton. So again we're not really looking
> > to spend time on further os-vif design work right now.
> >
> > In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
> > let it directly serialize VIF objects and send them over to Nova, instead
> > of using the ad-hoc port-binding dicts.  From the Nova side, we're not
> > likely to want to support any new functionality that affects port-binding
> > data until after Neutron is converted to os-vif. So Ocata at the earliest,
> > but probably more like P, unless the Neutron conversion to os-vif gets
> > completed unexpectedly quickly.
> 
> In light of this feature being requested by the NFV, container and
> baremetal communities, and that Neutron's os-vif integration work
> hasn't begun, does it make sense to block Nova VIF work? Are we
> comfortable, from a wider OpenStack perspective, to wait until
> possibly the P release? I think it's our collective responsibility as
> developers to find creative ways to meet deadlines, not serializing
> work on features and letting processes block us.

Everyone has their own personal set of features that are their personal
priority items. Nova evaluates all the competing demands and decides what
the project's priorities are for the given cycle. For Newton, Nova's
priority is to convert existing VIF functionality to use os-vif. Anything
else vif-related takes a backseat to this project priority. This formal
modelling of VIFs and development of a plugin facility has already been
strung out over at least 3 release cycles now. We're finally in a position
to get it completed, and we're not going to divert attention away from it
to other new feature requests until it's done, as that would increase the
chances of it getting strung out for yet another release, which is in no
one's interests.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Armando M.
On 13 June 2016 at 14:11, Daniel P. Berrange  wrote:

> On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> > On 13 June 2016 at 10:35, Daniel P. Berrange 
> wrote:
> >
> > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > > Hi,
> > > >
> > > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > > Neutron.  If not, there is a spec and a fair number of patches in
> > > > progress for this.  Essentially, the goal is to allow a VM to connect
> > > > to multiple Neutron networks by tagging traffic on a single port with
> > > > VLAN tags.
> > > >
> > > > This effort will have some effect on vif plugging because the
> datapath
> > > > will include some changes that will effect how vif plugging is done
> > > > today.
> > > >
> > > > The design proposal for trunk ports with OVS adds a new bridge for
> > > > each trunk port.  This bridge will demux the traffic and then connect
> > > > to br-int with patch ports for each of the networks.  Rawlin Peters
> > > > has some ideas for expanding the vif capability to include this
> > > > wiring.
> > > >
> > > > There is also a proposal for connecting to linux bridges by using
> > > > kernel vlan interfaces.
> > > >
> > > > This effort is pretty important to Neutron in the Newton timeframe.
> I
> > > > wanted to send this out to start rounding up the reviewers and other
> > > > participants we need to see how we can start putting together a plan
> > > > for nova integration of this feature (via os-vif?).
> > >
> > > I've not taken a look at the proposal, but on the timing side of things
> > > it is really way too late to start this email thread asking for design
> > > input from os-vif or nova. We're way past the spec proposal deadline
> > > for Nova in the Newton cycle, so nothing is going to happen until the
> > > Ocata cycle no matter what Neutron wants in Newton.
> >
> >
> > For sake of clarity, does this mean that the management of the os-vif
> > project matches exactly Nova's, e.g. same deadlines and processes apply,
> > even though the core team and its release model are different from
> Nova's?
> > I may have erroneously implied that it wasn't, also from past talks I had
> > with johnthetubaguy.
>
> No, we don't intend to force ourselves to only release at milestones
> like nova does. We'll release the os-vif library whenever there is new
> functionality in its code that we need to make available to nova/neutron.
> This could be as frequently as once every few weeks.
>

Thanks, but I could get this answer from [1]. I was asking about specs and
deadlines.

[1] https://governance.openstack.org/reference/projects/nova.html


> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>


Re: [openstack-dev] [Neutron] Random IP address allocations

2016-06-13 Thread Gary Kotton
I think for a major change like this we should at least have had a mailing 
list discussion.


On 6/10/16, 7:24 PM, "Carl Baldwin"  wrote:

>On Fri, Jun 10, 2016 at 2:40 AM, Gary Kotton  wrote:
>> Hi,
>> The patch https://review.openstack.org/#/c/292207/ has broken decomposed
>> plugins. I am not sure if we can classify this as a API change – basically
>
>Can you be more specific about how it "has broken decomposed plugins?"
> What's broken?  There are no links or anything.

The unit tests broke, and it took a lot of work to get them fixed. The reason 
is that the plugin has native DHCP, which means a port is created for DHCP. 
Tests would randomly fail because that port "may" have taken the IP that was 
configured for the test. There are a number of ways of addressing this.

>
>> the IP address allocation model has changed. So someone prior to the patch
>> could create a network and expect addresses A, B and C to be allocated in
>
>Nothing has ever been documented indicating that you can expect this.
>People just noticed a pattern and assumed they could count on it.
>Anyone depending on this has been depending on an implementation
>detail.  This has come up before [1].  I think we need flexibility to
>allocate IPs as we need to to scale well.  I don't think we should be
>restricted by defacto patterns in IP allocation that people have come
>to depend on.
>
>Any real world use should always take in to account the fact there
>there may be other users of the system trying to get IP allocations in
>parallel.  To them, the expected behavior doesn't change: they could
>get any address from a window of the next few available addresses.
>So, the problem here must be in tests running in a contrived setup
>making too many assumptions.
>
>> that order. Now random addresses will be allocated.
>
>After nearly 3 years of dealing with DB contention around IP
>allocation, this is the only way that we've been able to come up with
>to finally relieve it.  When IPAM gets busy, there is a lot of racing
>to get that next IP address which results in contention between worker
>processes.  Allocating from a random window relieves it considerably.
>
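For illustration, here is a toy version of the window-based allocation described above; it is a simplified stand-in for Neutron's IPAM code, not the actual implementation. Two workers racing for an address now collide only when they randomly pick the same member of the window, instead of always fighting over the single next free IP:

```python
# Toy window-based IP allocation: choose randomly among the next few free
# addresses instead of always taking the first one (simplified stand-in
# for Neutron's IPAM logic).
import ipaddress
import random

def allocate_ip(pool_start, pool_end, allocated, window=10):
    ip = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    free = []
    # Collect up to `window` free addresses from the pool.
    while ip <= end and len(free) < window:
        if str(ip) not in allocated:
            free.append(str(ip))
        ip += 1
    if not free:
        raise RuntimeError("address pool exhausted")
    choice = random.choice(free)  # real code would retry on a DB conflict
    allocated.add(choice)
    return choice

allocated = set()
first = allocate_ip("10.0.0.2", "10.0.0.254", allocated, window=5)
print(first)  # one of 10.0.0.2 .. 10.0.0.6
```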

What about locking? I know a lot of people wanted to discuss distributed 
locking. Just doing a random retry also looks very error prone. Have you 
tested this at scale?

In addition, it also seems like IPAM is called under a DB transaction, so 
that will break things when an IPAM driver is talking to an external 
service and there is a bit of load.

>> I think that this requires some discussion and we should consider reverting
>> the patch. Maybe I am missing something here but this may break people who
>> are working according the existing outputs of Neutron according to existing
>> behavior (which may have been wrong from the start).
>
>Some discussion was had [2][3] leading up to this change.  I didn't
>think we needed broader discussion because we've already established
>that IP allocation is an implementation detail [1].  The only contract
>in place for IP allocation is that an IP address will be allocated
>from within the allocation_pools defined on the subnet if available.
>
>I am against reverting this patch as I have stated on the review to
>revert it [4].
>
>Carl
>
>[1] https://review.openstack.org/#/c/58017/17
>[2] 
>http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-03-11.log.html#t2016-03-11T17:04:57
>[3] https://bugs.launchpad.net/neutron/+bug/1543094/comments/7
>[4] https://review.openstack.org/#/c/328342/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> On 13 June 2016 at 10:35, Daniel P. Berrange  wrote:
> 
> > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > Hi,
> > >
> > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > Neutron.  If not, there is a spec and a fair number of patches in
> > > progress for this.  Essentially, the goal is to allow a VM to connect
> > > to multiple Neutron networks by tagging traffic on a single port with
> > > VLAN tags.
> > >
> > > This effort will have some effect on vif plugging because the datapath
> > > will include some changes that will affect how vif plugging is done
> > > today.
> > >
> > > The design proposal for trunk ports with OVS adds a new bridge for
> > > each trunk port.  This bridge will demux the traffic and then connect
> > > to br-int with patch ports for each of the networks.  Rawlin Peters
> > > has some ideas for expanding the vif capability to include this
> > > wiring.
> > >
> > > There is also a proposal for connecting to linux bridges by using
> > > kernel vlan interfaces.
> > >
> > > This effort is pretty important to Neutron in the Newton timeframe.  I
> > > wanted to send this out to start rounding up the reviewers and other
> > > participants we need to see how we can start putting together a plan
> > > for nova integration of this feature (via os-vif?).
> >
> > I've not taken a look at the proposal, but on the timing side of things
> it is really way too late to start this email thread asking for design
> input from os-vif or nova. We're way past the spec proposal deadline
> for Nova in the Newton cycle, so nothing is going to happen until the
> Ocata cycle no matter what Neutron wants in Newton.
> 
> 
> For the sake of clarity, does this mean that the management of the os-vif
> project matches exactly Nova's, e.g. same deadlines and processes apply,
> even though the core team and its release model are different from Nova's?
> I may have erroneously implied that it wasn't, also from past talks I had
> with johnthetubaguy.

No, we don't intend to force ourselves to only release at milestones
like nova does. We'll release the os-vif library whenever there is new
functionality in its code that we need to make available to nova/neutron.
This could be as frequently as once every few weeks.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Armando M.
On 13 June 2016 at 10:35, Daniel P. Berrange  wrote:

> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > Hi,
> >
> > You may or may not be aware of the vlan-aware-vms effort [1] in
> > Neutron.  If not, there is a spec and a fair number of patches in
> > progress for this.  Essentially, the goal is to allow a VM to connect
> > to multiple Neutron networks by tagging traffic on a single port with
> > VLAN tags.
> >
> > This effort will have some effect on vif plugging because the datapath
> > will include some changes that will affect how vif plugging is done
> > today.
> >
> > The design proposal for trunk ports with OVS adds a new bridge for
> > each trunk port.  This bridge will demux the traffic and then connect
> > to br-int with patch ports for each of the networks.  Rawlin Peters
> > has some ideas for expanding the vif capability to include this
> > wiring.
> >
> > There is also a proposal for connecting to linux bridges by using
> > kernel vlan interfaces.
> >
> > This effort is pretty important to Neutron in the Newton timeframe.  I
> > wanted to send this out to start rounding up the reviewers and other
> > participants we need to see how we can start putting together a plan
> > for nova integration of this feature (via os-vif?).
>
> I've not taken a look at the proposal, but on the timing side of things
> it is really way too late to start this email thread asking for design
> input from os-vif or nova. We're way past the spec proposal deadline
> for Nova in the Newton cycle, so nothing is going to happen until the
> Ocata cycle no matter what Neutron wants in Newton.


For the sake of clarity, does this mean that the management of the os-vif
project matches exactly Nova's, e.g. same deadlines and processes apply,
even though the core team and its release model are different from Nova's?
I may have erroneously implied that it wasn't, also from past talks I had
with johnthetubaguy.

Perhaps the answer to this question is clearly stated somewhere else, but I
must have missed it. I want to make sure I ask explicitly now to avoid
future confusion.

> For os-vif our
> focus right now is exclusively on getting existing functionality ported
> over, and integrated into Nova in Newton. So again we're not really looking
> to spend time on further os-vif design work right now.
>
> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
> let it directly serialize VIF objects and send them over to Nova, instead
> of using the ad-hoc port-binding dicts.  From the Nova side, we're not
> likely to want to support any new functionality that affects port-binding
> data until after Neutron is converted to os-vif. So Ocata at the earliest,
> but probably more like P, unless the Neutron conversion to os-vif gets
> completed unexpectedly quickly.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Assaf Muller
On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange  wrote:
> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
>> Hi,
>>
>> You may or may not be aware of the vlan-aware-vms effort [1] in
>> Neutron.  If not, there is a spec and a fair number of patches in
>> progress for this.  Essentially, the goal is to allow a VM to connect
>> to multiple Neutron networks by tagging traffic on a single port with
>> VLAN tags.
>>
>> This effort will have some effect on vif plugging because the datapath
>> will include some changes that will affect how vif plugging is done
>> today.
>>
>> The design proposal for trunk ports with OVS adds a new bridge for
>> each trunk port.  This bridge will demux the traffic and then connect
>> to br-int with patch ports for each of the networks.  Rawlin Peters
>> has some ideas for expanding the vif capability to include this
>> wiring.
>>
>> There is also a proposal for connecting to linux bridges by using
>> kernel vlan interfaces.
>>
>> This effort is pretty important to Neutron in the Newton timeframe.  I
>> wanted to send this out to start rounding up the reviewers and other
>> participants we need to see how we can start putting together a plan
>> for nova integration of this feature (via os-vif?).
>
> I've not taken a look at the proposal, but on the timing side of things
> it is really way too late to start this email thread asking for design
> input from os-vif or nova. We're way past the spec proposal deadline
> for Nova in the Newton cycle, so nothing is going to happen until the
> Ocata cycle no matter what Neutron wants in Newton. For os-vif our
> focus right now is exclusively on getting existing functionality ported
> over, and integrated into Nova in Newton. So again we're not really looking
> to spend time on further os-vif design work right now.
>
> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
> let it directly serialize VIF objects and send them over to Nova, instead
> of using the ad-hoc port-binding dicts.  From the Nova side, we're not
> likely to want to support any new functionality that affects port-binding
> data until after Neutron is converted to os-vif. So Ocata at the earliest,
> but probably more like P, unless the Neutron conversion to os-vif gets
> completed unexpectedly quickly.

In light of this feature being requested by the NFV, container and
baremetal communities, and given that Neutron's os-vif integration work
hasn't begun, does it make sense to block the Nova VIF work? Are we
comfortable, from a wider OpenStack perspective, waiting until
possibly the P release? I think it's our collective responsibility as
developers to find creative ways to meet deadlines, not to serialize
work on features and let processes block us.

>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Group-based-policy] what does policy rule action redirect do

2016-06-13 Thread yong sheng gong
hi,


I have followed the steps at
https://github.com/openstack/group-based-policy/blob/master/gbpservice/tests/contrib/devstack/exercises/gbp_servicechain.sh


and I can see that the firewall and LB are created correctly.


But I thought that VM client-1's traffic would be redirected to the firewall,
then the LB, and finally to web-vm-1 somehow.


However, I cannot see how this is done. Or does the "redirect" action just
launch a firewall and an LB and do nothing else?




any idea?


thanks
yong sheng gong
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest unstable interfaces in plugins

2016-06-13 Thread Andrea Frittoli
On Sun, Jun 12, 2016 at 2:25 PM Assaf Muller  wrote:

> On Sat, Jun 11, 2016 at 4:04 PM, Ken'ichi Ohmichi 
> wrote:
> > 2016-06-10 17:01 GMT-07:00 Assaf Muller :
> >> On Fri, Jun 10, 2016 at 12:02 PM, Andrea Frittoli
> >>  wrote:
> >>> Dear all,
> >>>
> >>> I'm working on making the client manager in Tempest a stable
> interface, so
> >>> that in the future it may be used safely by plugins to easily gain
> >>> access to service clients [0].
> >>>
> >>> This work inevitably involves changing the current client manager
> (unstable)
> >>> interface.
> >>> Several tempest plugins in OpenStack already consume that interface
> (namely
> >>> the manager.Manager class) [1], so my work is likely to break them.
> >>>
> >>> I would ask the people maintaining the plugins to be careful about
> using
> >>> unstable interfaces, as they are likely to change, especially since
> we're
> >>> working on converting them to stable.
> >>>
> >>> If you maintain a plugin (in OpenStack or outside of OpenStack) that is
> >>> likely to be affected by my work, please keep an eye on my gerrit
> review
> >>> [0], leave a comment there or ping me on IRC (andreaf), I would very
> much
> >>> like to make sure the transition is as painless as possible for
> everyone.
> >>
> >> FWIW this doesn't seem to break Neutron:
> >> https://review.openstack.org/#/c/328398/.
> >>
> >> I would appreciate it if changes are made in a backwards compatible
> >> manner (Similar to this:
> >> https://review.openstack.org/#/c/322492/13/tempest/common/waiters.py)
> >> so that projects with Tempest plugins may adapt and not break voting
> >> jobs. The reason projects are using interfaces outside of tempest.lib
> >> is that that's all there is, and the alternative of copy/pasting into
> >> the repo isn't amazing.
> >
> > Yeah, copy/pasting of tempest code which is outside of tempest.lib is
> > not amazing.
> > However, that is a possible option to continue gate testing on each
> project.
> > We did that to pass Ceilometer gate as a workaround[1], then
> > we(QA-team) knew what lib code is necessary and are concentrating on
> > making the code as tempest.lib.
> > After finishing, we can remove the copy/pasting code from Ceilometer
> > by using new tempest.lib code.
> >
> > During this work, I feel it would be nice to add a new hacking rule to
> > block importing local tempest code from other projects.
> > From the viewpoint of those outside the QA team, it is difficult to know
> > how stable a given piece of tempest code is.
> > With such a rule in place, projects would know that, and could knowingly
> > ignore it where they understand the stability trade-off.
>
> I added a comment on the patch, but when I looked into this a couple
> of months ago, Neutron, Ironic and Heat all imported
> tempest.{test,manager}.
>

Within OpenStack alone, the list of plugins importing the client manager is
rather long, which is why I sent this message to begin with :)

I made a new patch set in [0] which keeps manager.Manager around while
the new stable manager is being prepared. This ensures backward
compatibility and emits a deprecation warning. Once the client manager moves
to the tempest.lib namespace I'll send another email asking folks to update
their plugins, and eventually remove the version in Tempest (after a grace
period).

[0] https://review.openstack.org/#/c/326683/
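As an aside for plugin maintainers, the usual shape of such a backward-compatible shim is a deprecated alias that warns on use. The sketch below is only illustrative, with hypothetical class names; the real change lives in [0]:

```python
import warnings


class ClientManager:
    """Stand-in for the new stable client manager (hypothetical name)."""

    def __init__(self, credentials=None):
        self.credentials = credentials


class Manager(ClientManager):
    """Old unstable entry point, kept so existing plugins keep working."""

    def __init__(self, *args, **kwargs):
        # Emit a warning instead of breaking imports outright, so plugins
        # get a grace period to migrate to the stable interface.
        warnings.warn(
            "manager.Manager is deprecated; switch to the stable client "
            "manager once it is published in tempest.lib",
            DeprecationWarning)
        super().__init__(*args, **kwargs)
```

Plugins importing the old name keep working through the grace period, while the warning surfaces in their test logs.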


> >
> > The hacking rule patch is https://review.openstack.org/#/c/328651/
> > And tempest itself needs to ignore that if merging the rule ;-) [2]
> >
> > Thanks
> > Ken Ohmichi
> > ---
> > [1]: https://review.openstack.org/#/c/325727/
> > [2]: https://review.openstack.org/#/c/328652/
> >
> >
> __
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Daniel P. Berrange
On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't*
want the event during that week. IOW, the 8th July is the latest you should
schedule it - don't let it slip into the next week starting July 11th, as
during the week of the n-2 milestone the teams' focus will be almost
exclusively on prep for that release, to the detriment of any bug smash
event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Tom Fifield

Hi,

Are there plans to follow the OpenStack events policy this time?

e.g. Commercial participants should have equal opportunity to sponsor and
support the activity. When the number of sponsorships is limited, a best 
practice is to publish a sponsorship prospectus online on a date known 
in advance with sponsorships filled on a "first to sign" basis.



Regards,


Tom

On 13/06/16 16:06, Wang, Shane wrote:

Hi, OpenStackers,
As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack
Bug Smash at Hangzhou, China.
The 1st China Bug Smash was at Shanghai, the 2nd was at Xi’an, and the
3rd was at Chengdu.
We are constructing the etherpad page for registration, and the date
will be around July 11 (probably July 6 – 8, but to be determined very
soon).
The China teams will still focus on the Neutron, Nova, Cinder, Heat, Magnum,
Rally, Ironic, Dragonflow, Watcher, etc. projects, so we need developers
to join and fix as many bugs as possible, and cores to be on site to
moderate the code changes and merges. Welcome to the bug smash at
Hangzhou - http://www.chinahighlights.com/hangzhou/attraction/.
The good news is, again, that for the first two cores from the above
projects who respond to this invitation in my email inbox and copy the
CC list, the sponsors are pleased to sponsor your international travel,
including flight and hotel. Please simply reply to me.
Best regards,
--
China OpenStack Bug Smash Team






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum-ui][magnum] Proposed Core addition, and removal notice

2016-06-13 Thread 王华
+1

On Fri, Jun 10, 2016 at 5:32 PM, Shuu Mutou  wrote:

> Hi team,
>
> I propose the following changes to the magnum-ui core group.
>
> + Thai Tran
>   http://stackalytics.com/report/contribution/magnum-ui/90
>   I'm so happy to propose Thai as a core reviewer.
>   His reviews have been extremely valuable for us,
>   and he is an active Horizon core member.
>   I believe his help will lead us in the right direction.
>
> - David Lyle
>
> http://stackalytics.com/?metric=marks_type=openstack=all=magnum-ui_id=david-lyle
>   No activities for Magnum-UI since Mitaka cycle.
>
> - Harsh Shah
>   http://stackalytics.com/report/users/hshah
>   No activities for OpenStack in this year.
>
> - Ritesh
>   http://stackalytics.com/report/users/rsritesh
>   No activities for OpenStack in this year.
>
> Please respond with your +1 votes to approve this change or -1 votes to
> oppose.
>
> Thanks,
> Shu
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-13 Thread Marios Andreou
On 09/06/16 17:03, Steven Hardy wrote:
> Hi all,
> 
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
> 
> We've agreed that rather than create another gerrit group, they can be
> added to tripleo-core and agree to restrict +A to this repo for the time
> being (hopefully they'll both continue to review more widely, and obviously
> Tomas is a former TripleO core anyway, so welcome back! :)

+1

> 
> If folks feel strongly we should create another group we can, but this
> seems like a low-overhead approach, and well aligned with the scope of the
> repo, let me know if you disagree.
> 
> Also, while reviewing the core group[2] I noticed the following members who
> are no longer active and should probably be removed:
> 
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum
> 
> I know Clint is still involved with DiB (which has a separate core group),
> but he's indicated he's no longer going to be directly involved in other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their contribution,
> we'll gladly add you back in the future should you wish to return :)
> 
> Please let me know if there are any concerns or objections, if there are
> none I will make these changes next week.
> 
> Thanks,
> 
> Steve
> 
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Daniel P. Berrange
On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> Hi,
> 
> You may or may not be aware of the vlan-aware-vms effort [1] in
> Neutron.  If not, there is a spec and a fair number of patches in
> progress for this.  Essentially, the goal is to allow a VM to connect
> to multiple Neutron networks by tagging traffic on a single port with
> VLAN tags.
> 
> This effort will have some effect on vif plugging because the datapath
> will include some changes that will affect how vif plugging is done
> today.
> 
> The design proposal for trunk ports with OVS adds a new bridge for
> each trunk port.  This bridge will demux the traffic and then connect
> to br-int with patch ports for each of the networks.  Rawlin Peters
> has some ideas for expanding the vif capability to include this
> wiring.
> 
> There is also a proposal for connecting to linux bridges by using
> kernel vlan interfaces.
> 
> This effort is pretty important to Neutron in the Newton timeframe.  I
> wanted to send this out to start rounding up the reviewers and other
> participants we need to see how we can start putting together a plan
> for nova integration of this feature (via os-vif?).

I've not taken a look at the proposal, but on the timing side of things
it is really way too late to start this email thread asking for design
input from os-vif or nova. We're way past the spec proposal deadline
for Nova in the Newton cycle, so nothing is going to happen until the
Ocata cycle no matter what Neutron wants in Newton. For os-vif our
focus right now is exclusively on getting existing functionality ported
over, and integrated into Nova in Newton. So again we're not really looking
to spend time on further os-vif design work right now.

In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
let it directly serialize VIF objects and send them over to Nova, instead
of using the ad-hoc port-binding dicts.  From the Nova side, we're not
likely to want to support any new functionality that affects port-binding
data until after Neutron is converted to os-vif. So Ocata at the earliest,
but probably more like P, unless the Neutron conversion to os-vif gets
completed unexpectedly quickly.
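To illustrate the difference between the ad-hoc port-binding dicts and serialized VIF objects, here is a toy sketch. It is not the real os-vif / oslo.versionedobjects API; all field and key names are illustrative only:

```python
from dataclasses import dataclass, asdict

# Today: Nova receives an ad-hoc, untyped port-binding dict from Neutron,
# whose keys are only a loose convention between the two projects.
legacy_binding = {"vif_type": "ovs", "vif_details": {"bridge_name": "br-int"}}


# With os-vif: a typed, versioned object is serialized into a well-defined
# primitive form instead (toy stand-in for a versioned object).
@dataclass
class VIFOpenVSwitch:
    id: str
    address: str
    bridge_name: str
    VERSION: str = "1.0"

    def obj_to_primitive(self) -> dict:
        # The versioned envelope lets the receiver validate the payload
        # and handle version upgrades explicitly.
        return {
            "versioned_object.name": type(self).__name__,
            "versioned_object.version": self.VERSION,
            "versioned_object.data": asdict(self),
        }


vif = VIFOpenVSwitch(id="port-uuid", address="fa:16:3e:00:00:01",
                     bridge_name="br-int")
primitive = vif.obj_to_primitive()
```

The design point is that the wire format becomes a contract owned by a shared library rather than an implicit agreement encoded in two codebases.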

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Dragonflow] - No IRC meeting today (06/13)

2016-06-13 Thread Gal Sagie
Hello all,

Sorry for the late notice but we will not have an IRC meeting today.
We will continue the IRC meetings next week.

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [sr-iov] rename *device* config options to *interface*

2016-06-13 Thread Andreas Scheuring
While reviewing [1] I got hung up on the terms "device" and "interface".
It seems that in the sr-iov agent they are used in a different manner than
in the linuxbridge agent. 

For example, the lb agent uses a config option
"physical_interface_mappings" (mapping between the bridge's uplink interface
and the physnet). A similar option in the sr-iov agent is named
"physical_device_mappings" (mapping between PF and physnet -> missing in the
config reference for some reason [2]). In the l2 agent context, a
variable named "device" typically refers to a port-specific device
(e.g. the tap device) and not to a shared host device (like eth0).

As patchset [1] now introduces a new agent extension for the lb & ovs agents,
including a new config option "shared_physical_device_mappings", I
got a bit confused during the review, as now in the lb context a
"device" is something different (namely a physical interface).

Would it make sense to rename the sr-iov options from *device* to
*interface* to stay consistent and to have a clear separation between
port-specific and shared host devices?

My proposal is to name
- shared host device: interface
- port specific devices: device
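To make the comparison concrete, a sketch of how the agent config sections would line up after such a rename. The first two options exist today; the *proposed* option name is hypothetical:

```ini
# linuxbridge agent: uplink *interface* per physnet (existing option)
[linux_bridge]
physical_interface_mappings = physnet1:eth3

# sr-iov agent: PF per physnet, today named with *device* (existing option)
[sriov_nic]
physical_device_mappings = physnet1:eth4

# proposed (hypothetical name): same wording as the linuxbridge agent,
# since eth4 is a shared host interface, not a per-port device
# physical_interface_mappings = physnet1:eth4
```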



[1] https://review.openstack.org/320562
[2] https://bugs.launchpad.net/openstack-manuals/+bug/1591906


-- 
-
Andreas 
IRC: andreas_s (formerly scheuran)





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-13 Thread Duncan Thomas
Hi

I would, once again, love to attend.

If you find that other cores apply and you'd rather have a new face, I
would be very understanding of the situation.

Regards

-- 
Duncan Thomas




On 13 June 2016 at 11:06, Wang, Shane  wrote:

> Hi, OpenStackers,
>
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack
> Bug Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi’an, and the 3rd
> was at Chengdu.
>
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 – 8, but to be determined very soon).
>
> The China teams will still focus on the Neutron, Nova, Cinder, Heat, Magnum,
> Rally, Ironic, Dragonflow, Watcher, etc. projects, so we need developers to
> join and fix as many bugs as possible, and cores to be on site to moderate
> the code changes and merges. Welcome to the bug smash at Hangzhou -
> http://www.chinahighlights.com/hangzhou/attraction/.
>
> The good news is, again, that for the first two cores from the above
> projects who respond to this invitation in my email inbox and copy the CC
> list, the sponsors are pleased to sponsor your international travel,
> including flight and hotel. Please simply reply to me.
>
> Best regards,
> --
> China OpenStack Bug Smash Team
>
>
>
>
>


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

