Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?

2015-05-19 Thread A, Keshava
Hi Vikram,



1.   What are the use cases of the “Dynamic Routing Framework”?

https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing

Are you thinking of running both an IGP and BGP in the same Neutron deployment?

In which kind of scenario do we need this? It would be better to have more information.

We also need to consider whether we really need an IGP within the cloud, or whether we 
only need BGP for external connectivity.

In that scenario, we may not need a routing framework at all, and should not complicate 
things too much.

If something works well with L2 within the cloud, let us not touch those areas. 
We should look at where the real problems are.



2.   What is the use case for “Prefix Clashing”? Are you thinking of 
running multiple routing protocols that will learn the same “prefix + route”?

https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol

In my opinion, we may not have such a deployment scenario within the cloud.


Let us not mix the underlay network with the overlay network. Each 
will be handled by a different solution provider, so a different business domain.

These are my thoughts.

keshava

From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Wednesday, May 06, 2015 10:45 AM
To: p...@michali.net; openstack-dev@lists.openstack.org
Cc: Kalyankumar Asangi
Subject: Re: [openstack-dev] [neutron] How should edge services APIs integrate 
into Neutron?

Hi Paul,

Thanks for starting this mail thread. We are also looking at supporting MP-BGP 
in Neutron and would like to actively participate in this discussion.
Please let me know which IRC channels we will be following for this 
discussion.

Currently, I am following the BPs below for this work.
https://blueprints.launchpad.net/neutron/+spec/edge-vpn
https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework
https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol

Moreover, a similar kind of work is being headed by Cathy for defining an 
intent framework which can be extended for various use cases. Currently it will be 
leveraged for SFC, but I feel the same can be used to provide an intent-based VPN use 
case.
https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining

Thanks
Vikram

From: Paul Michali [mailto:p...@michali.net]
Sent: 06 May 2015 01:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] How should edge services APIs integrate into 
Neutron?

There's been talk in VPN land about new services, like BGP VPN and DM VPN. I 
suspect there are similar things in other Advanced Services. I talked to 
Salvatore today, and he suggested starting a ML thread on this...

Can someone elaborate on how we should integrate these API extensions into 
Neutron, both today, and in the future, assuming the proposal that Salvatore 
has is adopted?

I could see two cases. The first, and simplest, is when a feature has an 
entirely new API that doesn't leverage off of an existing API.

The other case would be when the feature's API would dovetail into the existing 
service API. For example, one may use the existing vpn_service API to create 
the service, but then create BGP VPN or DM VPN connections for that service, 
instead of the IPSec connections we have today.

If there are already examples of how to extend an existing API extension, that 
would help in understanding how to do this.

I see that there are RESOURCE_ATTRIBUTE_MAPs with the information on the API, 
and I see that the plugin has a supported_extension_aliases, but beyond that, 
I'm not really sure how it all hooks up, and how to extend an existing 
extension.
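For readers unfamiliar with the pieces Paul mentions: a Neutron-style extension declares its resources in a RESOURCE_ATTRIBUTE_MAP, and a plugin advertises the extension's alias so it gets loaded. The sketch below is illustrative only; the resource name bgpvpn_connections, its attributes, and the plugin class are hypothetical examples, not an existing extension.

```python
# Hedged sketch of the RESOURCE_ATTRIBUTE_MAP convention Paul refers to.
# "bgpvpn_connections" and its attributes are hypothetical, not a real
# Neutron extension; only the dict layout mirrors the real convention.
RESOURCE_ATTRIBUTE_MAP = {
    'bgpvpn_connections': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'tenant_id': {'allow_post': True, 'allow_put': False, 'is_visible': True},
        'name': {'allow_post': True, 'allow_put': True, 'default': '',
                 'is_visible': True},
        'route_targets': {'allow_post': True, 'allow_put': True, 'default': [],
                          'is_visible': True},
    },
}


class HypotheticalVpnPlugin(object):
    # A plugin advertises the aliases of the extensions it implements so
    # Neutron exposes them; the names here are illustrative.
    supported_extension_aliases = ['vpnaas', 'bgpvpn']


def visible_attrs(resource):
    """Return the attribute names exposed through the API for a resource."""
    return sorted(k for k, v in RESOURCE_ATTRIBUTE_MAP[resource].items()
                  if v.get('is_visible'))
```

How the map, the alias, and the plugin actually hook together is exactly the open question of this thread.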

I'm assuming that the python-neutronclient would also need to be updated.


So... the intent here is to start some discussion on how we do this, such that 
we have some things figured out before the summit and can save some time.

Thanks in advance,

Paul Michali (pc_m)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] service chaining feature development meeting minutes for May 12 2015

2015-05-19 Thread A, Keshava
Hi Cathy,
Sorry I was not able to attend the meetings.
I have the below questions w.r.t. service chaining:


1.   Do we have a use case for the VM-migration scenario?

2.   This VM may be

a.   an end-host VM, where traffic terminates on that VM, or

b.  a service VM that acts as a transit network (NFV case: a routing 
VM).
Are we thinking of considering these scenarios?
If so, is there any possibility of unifying the solution for both 
scenarios, or do we treat them separately in the solution?
I am interested to know more about this.

keshava


From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, May 14, 2015 3:29 AM
To: Cathy Zhang; openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] service chaining feature development meeting 
minutes for May 12 2015

Hi,

I have added the meeting minutes for May 5 2015 to this feature's meeting link 
https://wiki.openstack.org/wiki/Meetings/Service_Chaining

I have a recording of the service chaining feature GoToMeeting for May 12 2015. 
The recording is about 50k. Does anyone know of an OpenStack link that I can 
upload this recording to?

Thanks,
Cathy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-16 Thread A, Keshava
Hi,
So are we going in a direction where OpenStack infrastructure features also 
move into service VMs?
Moving into service VMs mixes with the NFV world, where these 
tenant/NFV services are supposed to be outside the OpenStack infrastructure.
Let me know if my understanding is correct here.

keshava

From: Dean Troyer [mailto:dtro...@gmail.com]
Sent: Wednesday, April 15, 2015 10:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] openwrt VM as service

On Wed, Apr 15, 2015 at 2:37 AM, Guo, Ruijing ruijing@intel.com wrote:
I’d like to propose openwrt VM as service.

What’s openWRT VM as service:

a) Tenant can download an openWRT VM from http://downloads.openwrt.org/
b) Tenant can create a WAN interface from the external public network
c) Tenant can create a private network and create instances on that 
private network
d) Tenant can configure openWRT for several services, including DHCP, 
routing, QoS, ACL and VPNs.


So first off, I'll be the first one in line to promote using OpenWRT as the 
basis of appliances for this sort of thing.  I use it to overcome the 'joy' of 
VirtualBox's local networking and love what it can do in 64M RAM.

However, what you are describing are services, yes, but I think to focus on the 
OpenWRT part of it is missing the point.  For example, Neutron has a VPNaaS 
already, but I agree it can also be built using OpenWRT and OpenVPN.  I don't 
think it is a stand-alone service though, using a combination of 
Heat/{ansible|chef|puppet|salt}/any other deployment/orchestration can get you 
there.  I have a shell script somewhere for doing exactly that on AWS from way 
back.

What I've always wanted was an image builder that would customize the packages 
pre-installed.  This would be especially useful for disposable ramdisk-only or 
JFFS images that really can't install additional packages.  Such a front-end to 
the SDK/imagebuilder sounds like about half of what you are talking about above.

Also, FWIW, a while back I packaged up a micro cloud-init replacement[0] in 
shell that turns out to be really useful.  It's based on something I couldn't 
find again to give proper attribution so if anyone knows who originated this 
I'd be grateful.

dt

[0] https://github.com/dtroyer/openwrt-packages/tree/master/rc.cloud
--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openwrt VM as service

2015-04-16 Thread A, Keshava
Hi,
What purpose are these service VMs for? What are the differentiating factors 
between an “OpenStack service VM” (OSVM) and an NFV/tenant service VM (TSVM)?
OpenStack control (OSC):

o   Does the OSC manage these OSVMs?

o   Is each service inside an OSVM enabled by the OSC?

o   Can these OSVMs also process east-west packets, or is their scope only to 
program the OVS of the OpenStack infrastructure?

o   Can an OSVM also have an OVS?

o   If an OSVM processes packets at user level, what will be the impact on latency?

o   How do we protect an OSVM from tenants? How do we provide security for OSVM 
data so that, in a multi-tenant scenario, tenants cannot corrupt the OSVM?

Tenant controller (e.g., NFV VIM / others):

o   Is each TSVM service enabled by the tenant controller?

o   A TSVM can write into the OVS local to that VM.
Can the OSC controller write into the TSVM's OVS, if required, to optimize latency?
If required, can the OSC controller also program the OVS inside the OSVM? Are we 
heading in this direction?



Regards,
keshava


From: Zang, Rui [mailto:rui.z...@intel.com]
Sent: Thursday, April 16, 2015 1:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] openwrt VM as service

We have a “Tacker” project aiming to manage service VMs:
https://wiki.openstack.org/wiki/ServiceVM
Personally, I think all the advanced network services like firewall/LB/VPN can be 
provided by service VMs and eventually managed by Tacker.





[openstack-dev] [neutron] open Virtual Router support in OpenStack from Cloud-Router

2015-04-01 Thread A, Keshava
Hi,
I think it would be better to have an open-source virtual router in 
OpenStack, supported by CloudRouter.
With an open virtual router available, much of the L3/routing 
functionality can be easily integrated in the cloud for the overlay network.

Regards,
keshava
-Original Message-
From: A, Keshava 
Sent: Wednesday, April 01, 2015 7:15 PM
To: Chandrasekar Kannan; de...@lists.cloudrouter.org
Cc: A, Keshava
Subject: RE: [Devel] Cloudrouter as a L3 plugin in Openstack neutron ?

Hi,
I agree with this idea.
I think having an open-source router (like CloudRouter) available in 
OpenStack would be of great advantage.
Currently OpenStack is in need of such an open virtual router.
If CloudRouter is available on an OpenStack OVS-enabled server, much of the 
L3 functionality can be added/enhanced.
 

Regards,
keshava

-Original Message-
From: Devel [mailto:devel-boun...@lists.cloudrouter.org] On Behalf Of 
Chandrasekar Kannan
Sent: Wednesday, April 01, 2015 6:36 PM
To: de...@lists.cloudrouter.org
Subject: [Devel] Cloudrouter as a L3 plugin in Openstack neutron ?


This is an idea for now, but I was wondering if it makes sense to start 
pushing/promoting the inclusion of CloudRouter as a Neutron agent in OpenStack 
for routing needs, similar to this:


https://blueprints.launchpad.net/neutron/+spec/l3-plugin-brocade-vyatta-vrouter

Thoughts?
-Chandra


___
Devel mailing list
de...@lists.cloudrouter.org
https://lists.cloudrouter.org/mailman/listinfo/devel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the solution of the service chaining!

2015-01-07 Thread A, Keshava
Yes, I agree with Kyle's decision.

First we should define what a service is.
Is a service within the OpenStack infrastructure, or does a service belong to an NFV 
vNF/service VM?
Based on that, its chaining needs to be defined.
If it is chaining of vNFs (each being a service or a set of services), then it will be 
based on the IETF “service header insertion” at the ingress.
This header, listing the set of services that need to be executed across 
the vNFs, will be carried in each of the tenant's packets.

So it requires a coordinated effort along with the NFV/Telco working groups.
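The “service header insertion” idea alluded to above is what the IETF Network Service Header (NSH) work later standardized: a small header carried with each packet, holding a service path identifier plus a service index that each service function decrements. The sketch below is a simplified model of that idea, not the wire format; field widths follow the NSH service path header (24-bit path ID, 8-bit index), but everything else is deliberately stripped down.

```python
import struct


def pack_service_path(path_id: int, service_index: int) -> bytes:
    """Pack a 24-bit service path ID and an 8-bit service index into 4 bytes,
    loosely following the NSH service path header (simplified sketch)."""
    assert 0 <= path_id < (1 << 24) and 0 <= service_index < (1 << 8)
    return struct.pack('!I', (path_id << 8) | service_index)


def next_hop(header: bytes) -> bytes:
    """Each service function decrements the service index before forwarding,
    so the remaining chain length travels with the packet itself."""
    (word,) = struct.unpack('!I', header)
    path_id, idx = word >> 8, word & 0xFF
    assert idx > 0, "service chain exhausted"
    return pack_service_path(path_id, idx - 1)


# Example: a chain of 3 services (say DPI -> FW -> NAT) on path 42.
hdr = pack_service_path(42, 3)
hdr = next_hop(hdr)  # after the first service, the index drops to 2
```

The point of the encoding is exactly what the mail describes: the set of services to traverse rides in the tenant's packets rather than in per-node state.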

keshava

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, January 07, 2015 8:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][AdvancedServices] Confusion about the 
solution of the service chaining!

On Wed, Jan 7, 2015 at 6:25 AM, lv.erc...@zte.com.cn wrote:
Hi,

I want to confirm how the project about Neutron service insertion, 
chaining and steering is going. I found that all the code implementations for 
service insertion, service chaining and traffic steering listed in the JunoPlan were 
abandoned.

https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan

and I also found that we have a new project about GBP and 
group-based-policy-service-chaining located at:

https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-abstraction

https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

so I'm confused about the solution for service chaining.

We are developing the service chaining feature, so we need to know which one is 
Neutron's choice. Are the blueprints for service insertion, service 
chaining and traffic steering listed in the JunoPlan all abandoned?
Service chaining isn't in the plan for Kilo [1], but I expect it to be 
something we talk about in Vancouver for the Lxxx release. The NFV/Telco group 
has been talking about this as well. I'm hopeful we can combine efforts and 
come up with a coherent service chaining solution that solves a handful of 
useful use cases during Lxxx.

Thanks,
Kyle

[1] 
http://specs.openstack.org/openstack/neutron-specs/priorities/kilo-priorities.html

BR
Alan






ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework

2014-12-22 Thread A, Keshava
Vikram,

1.   In this solution, is it assumed that all the OpenStack services are 
available/enabled on all the CNs?

2.   Consider a scenario: for a particular tenant's traffic, the flows are 
chained across a set of CNs.

Then, if one of that tenant's VMs migrates to a new CN where the tenant was 
not present earlier, what will be the impact?

How do we control the chaining of flows in this kind of scenario, so that packets 
will reach that tenant VM on the new CN?



Here this tenant VM may be an NFV service VM (which should be transparent to 
OpenStack).

keshava



From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; A, Keshava; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest.
Please find the BP attached with this mail.

From: Murali B [mailto:mbi...@gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; 
keshav...@hp.com; stephen.kf.w...@gmail.com; Dhruv Dhody; 
Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide access to the full specification document?

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary 
vikram.choudh...@huawei.com wrote:
Hi Murali,

We have proposed service function chaining idea using open flow.
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

Will submit the same for review soon.

Thanks
Vikram

From: yuriy.babe...@telekom.de 
[mailto:yuriy.babe...@telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev@lists.openstack.org; 
stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

Von: A, Keshava [mailto:keshav...@hp.com]
Gesendet: Mittwoch, 10. Dezember 2014 19:06
An: stephen.kf.w...@gmail.com; OpenStack 
Development Mailing List (not for usage questions)
Betreff: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. the ‘Service-VM’ concept and how it should look from an 
NFV perspective.
In my opinion, it has not been decided what the Service-VM framework should be.
Depending on this, we at OpenStack will also see an impact on ‘Service Chaining’.
Please find attached the mail w.r.t. that discussion with NFV on ‘Service-VM + 
OpenStack OVS’.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B mbi...@gmail.com wrote:
Hi keshava,

We would like contribute towards service chain and NFV

Could you please share the document if you have any related to service VM

The service chain can be achieved if we are able to redirect the traffic to the 
service VM using OVS flows;

in this case we need not have routing enabled on the service VM (traffic is 
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding the 
OVS rules in OVS.


Thanks
-Murali
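Murali's L2 redirection idea can be sketched as a pair of OpenFlow rules that steer a tenant port's traffic through the service VM's port and back, with no routing on the service VM. The bridge name, port numbers and priority below are made-up placeholders; in a real deployment they would come from the Neutron port bindings.

```python
def redirect_flows(bridge, tenant_port, svc_port, priority=100):
    """Build ovs-ofctl commands that steer a tenant port's traffic through a
    service VM at L2.  Bridge and port numbers are hypothetical placeholders."""
    return [
        # Traffic entering from the tenant VM is sent to the service VM first.
        f'ovs-ofctl add-flow {bridge} '
        f'"priority={priority},in_port={tenant_port},actions=output:{svc_port}"',
        # Traffic coming back from the service VM resumes normal L2 forwarding.
        f'ovs-ofctl add-flow {bridge} '
        f'"priority={priority},in_port={svc_port},actions=NORMAL"',
    ]


for cmd in redirect_flows('br-int', tenant_port=5, svc_port=7):
    print(cmd)
```

This is only the simplest static form; Keshava's VM-migration question above is precisely about who rewrites these rules when the tenant VM moves to another CN.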




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-18 Thread A, Keshava
Hi Thomas,

Basically, as per your thought, extend the 'VPN label' to OVS itself,
so that when an MPLS-over-GRE packet comes from OVS, the incoming label 
is used to index the respective VPN table at the DC-edge side?

Questions:
1. Who tells OVS which label to use? 
        Are you thinking of having a BGP-VPN session between the DC edge and the 
Compute Node (OVS)? 
        So that OVS itself looks at the BGP-VPN table and, based on the 
destination, adds that VPN label as the MPLS label?
        OR
        Will ODL or an OpenStack controller dictate which VPN label to use to 
both the DC edge and the CN (OVS)?

2. How much will be the gain/advantage of generating the MPLS from OVS? 
(Compared with terminating VXLAN on the DC edge and then originating the MPLS from 
there?)


keshava

-Original Message-
From: Thomas Morin [mailto:thomas.mo...@orange.com] 
Sent: Tuesday, December 16, 2014 7:10 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

Hi Keshava,

2014-12-15 11:52, A, Keshava:
   I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN 
 scenario also.

   Below are my queries w.r.t. supporting MPLS from OVS:
   1. MPLS will be used even for VM-to-VM traffic across CNs, 
 generated by OVS?

If E-VPN is used only to interconnect outside of a Neutron domain, then MPLS 
does not have to be used for traffic between VMs.

If E-VPN is used inside one DC for VM-VM traffic, then MPLS is *one* of the 
possible encapsulation only: E-VPN specs have been defined to use VXLAN (handy 
because there is native kernel support), MPLS/GRE or MPLS/UDP are other 
possibilities.

   2. MPLS will be originated right from OVS and will be mapped at 
 the gateway (it may be an NN/hardware router) to the SP network?
   So MPLS will carry 2 labels? (One for hop-by-hop, and the 
 other one 
 to identify the end network?)

On “will carry 2 labels?”: this would be one possibility, but not the one we 
target.
We would actually favor MPLS/GRE (GRE used instead of what you call the MPLS 
hop-by-hop label) inside the DC -- this requires only one label.
At the DC edge gateway, depending on the interconnection techniques to connect 
the WAN, different options can be used (RFC4364 section 10): 
Option A with back-to-back VRFs (no MPLS label, but typically VLANs), or option 
B (with one MPLS label), a mix of A/B is also possible and sometimes called 
option D (one label) ;  option C also exists, but is not a good fit here.

Inside one DC, if vswitches see each other across an Ethernet segment, we can 
also use MPLS with just one label (the VPN label) without a GRE encap.

In a way, you can say that in Option B the labels are mapped at the DC/WAN 
gateway(s), but this is really just MPLS label swapping, not to be misunderstood 
as mapping a DC label space to a WAN label space (see below: the label space is 
local to each device).


   3. MPLS will go over even the network physical infrastructure 
  also ?

The use of MPLS/GRE means we are doing an overlay, just like your typical 
VXLAN-based solution, and the network physical infrastructure does not need to 
be MPLS-aware (it just needs to be able to carry IP
traffic)

   4. How the Labels will be mapped a/c virtual and physical world 
 ?

(I don't get the question, I'm not sure what you mean by mapping labels)

   5. Who manages the label space  ? Virtual world or physical 
 world or 
 both ? (OpenStack +  ODL ?)

In MPLS*, the label space is local to each device: a label is 
downstream-assigned, i.e. allocated by the receiving device for a specific 
purpose (e.g. forwarding in a VRF). It is then (typically) advertised in a 
routing protocol; the sender device will use this label to send traffic to the 
receiving device for this specific purpose.  As a result, a sender device may 
use label 42 to forward traffic in the context of VPN X to a receiving 
device A, the same label 42 to forward traffic in the context of another 
VPN Y to another receiving device B, and locally use label 42 to receive 
traffic for VPN Z.  There is no global label space to manage.
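That downstream-assignment property can be modelled in a few lines; the device names, VPN names and the starting label value below are invented purely for illustration.

```python
class Device:
    """Each device allocates labels from its own local space (downstream
    assignment): the receiver picks the label, advertises it (e.g. via BGP),
    and senders simply use whatever the receiver advertised."""

    def __init__(self, name):
        self.name = name
        self._next = 42          # arbitrary starting label, local to this device
        self.label_to_vpn = {}   # what each label means *on this device only*

    def allocate(self, vpn):
        label = self._next
        self._next += 1
        self.label_to_vpn[label] = vpn
        return label             # advertised to senders in a routing protocol


# Two receivers can reuse the same numeric label for different VPNs, because
# a label only has meaning on the device that allocated it.
a, b = Device('A'), Device('B')
assert a.allocate('VPN-X') == 42   # senders use 42 toward A for VPN X
assert b.allocate('VPN-Y') == 42   # senders use 42 toward B for VPN Y
assert a.label_to_vpn[42] == 'VPN-X' and b.label_to_vpn[42] == 'VPN-Y'
```

No coordination between A and B is needed, which is exactly why no global label space has to be managed.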

So, while you can design a solution where the label space is managed in a 
centralized fashion, this is not required.

You could design an SDN controller solution where the controller manages 
one label space common to all nodes, or all the label spaces of all forwarding 
devices, but I think it's hard to derive any interesting property from such a 
design choice.

In our BaGPipe distributed design (and this is also true in OpenContrail, for 
instance) the label space is managed locally on each compute node (or network 
node, if the BGP speaker is on a network node) -- more precisely, in the VPN 
implementation.

If you take a step back, the only naming space that has to be managed 
in BGP VPNs is the Route Target space. This is only in the control plane

Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-18 Thread A, Keshava
Hi Yuriy Babenko,

I am a little worried about the direction we need to take for service 
chaining.

Will OpenStack focus on service chaining of its own internal features (like 
FWaaS, LBaaS, VPNaaS, L2 Gateway aaS, ...)?
OR
will it consider service chaining of ‘service VMs’ also?

A. If we are considering ‘service-VM’ service chaining, I have the points below to 
mention:


1.   Does OpenStack need to worry about service-VM capabilities?

2.   Does OpenStack care whether the service VM also has an OVS or not?

3.   Does OpenStack care whether the service VM has its own routing instance 
running in it, which can reconfigure the OVS flows?

4.   Can a service VM configure the OpenStack infrastructure OVS?

5.   Can a service VM have multiple features in it? (For example DPI + FW + NAT.)

Is a service VM the same as a vNFC?

B. If we are thinking of service chaining of ‘OpenStack-only services’, 
I have the points below.

For a tenant:

1.   Can services be bound to a particular compute node (CN)?

2.   A tenant may not want to run/enable all the services on all CNs.

A tenant may want to run FWaaS and VPNaaS on different CNs so that the tenant gets 
better infrastructure performance.

Then are we considering chaining of services per tenant?

3.   If so, how do we control this? (Please consider that a tenant's VMs can be 
migrated to different CNs.)

Let me know others' opinions.

keshava

From: yuriy.babe...@telekom.de [mailto:yuriy.babe...@telekom.de]
Sent: Thursday, December 18, 2014 7:35 PM
To: openstack-dev@lists.openstack.org; stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

Von: A, Keshava [mailto:keshav...@hp.com]
Gesendet: Mittwoch, 10. Dezember 2014 19:06
An: stephen.kf.w...@gmail.commailto:stephen.kf.w...@gmail.com; OpenStack 
Development Mailing List (not for usage questions)
Betreff: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknows w.r.t ‘Service-VM’ and how it should from NFV 
perspective.
In my opinion it was not decided how the Service-VM framework should be.
Depending on this we at OpenStack also will have impact for ‘Service Chaining’.
Please find the mail attached w.r.t that discussion with NFV for ‘Service-VM + 
Openstack OVS related discussion”.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B 
mbi...@gmail.commailto:mbi...@gmail.com wrote:
Hi keshava,

We would like to contribute towards service chaining and NFV.

Could you please share any document you have related to the service VM?

The service chain can be achieved if we are able to redirect the traffic to 
the service VM using OVS flows.

In this case we do not need to have routing enabled on the service VM (traffic 
is redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding 
the OVS rules in OVS.
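
The L2 redirection idea above can be sketched in a few lines. This is only an 
illustrative sketch, not a tested configuration: the bridge name, port 
numbers, and priority are hypothetical placeholders, and a real deployment 
would also need per-tenant handling of return traffic.

```python
# Hypothetical sketch: build ovs-ofctl commands that steer a tenant VM's
# traffic through a service VM at L2 (no routing needed in the service VM).
def redirect_flows(bridge, tenant_port, service_port, priority=100):
    """Return ovs-ofctl commands redirecting the tenant port's traffic
    to the service VM port instead of normal L2 forwarding."""
    return [
        # Traffic entering from the tenant VM is sent to the service VM.
        f"ovs-ofctl add-flow {bridge} "
        f"priority={priority},in_port={tenant_port},actions=output:{service_port}",
        # Traffic coming back from the service VM is switched normally.
        f"ovs-ofctl add-flow {bridge} "
        f"priority={priority},in_port={service_port},actions=NORMAL",
    ]

for cmd in redirect_flows("br-int", tenant_port=5, service_port=7):
    print(cmd)
```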


Thanks
-Murali




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread A, Keshava
Mathieu,
I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN 
scenario also.

Below are my queries w.r.t. supporting MPLS from OVS:
1. Will MPLS be used even for VM-to-VM traffic across CNs, generated by OVS?
2. Will MPLS originate right from OVS and be mapped at the gateway (which may 
be an NN/hardware router) to the SP network? So MPLS will carry 2 labels 
(one for hop-by-hop transport, and the other to identify the end network)?
3. Will MPLS go over even the physical network infrastructure?
4. How will the labels be mapped between the virtual and physical worlds?
5. Who manages the label space? The virtual world, the physical world, or 
both (OpenStack + ODL)?
6. Will the labels be nested (i.e. end-to-end MPLS connectivity like an L3 
VPN)?
7. Or will it be label stitching between the virtual and physical networks? 
How will the end-to-end path be set up?
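
The two-label question in point 2 is easier to see with the label-stack 
encoding written out. A minimal sketch, with made-up label values (RFC 3032 
defines the 4-byte entry layout):

```python
import struct

def mpls_entry(label, tc=0, s=0, ttl=64):
    """Pack one 4-byte MPLS label-stack entry:
    label (20 bits) | TC (3 bits) | S bottom-of-stack (1 bit) | TTL (8 bits)."""
    return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

# Two-label stack as in question 2: an outer transport label for hop-by-hop
# forwarding plus an inner label identifying the end network (e.g. a VPN
# route). The bottom-of-stack bit is set only on the inner label.
transport_label, vpn_label = 16001, 300  # hypothetical values
stack = mpls_entry(transport_label) + mpls_entry(vpn_label, s=1)
print(len(stack))  # 8 bytes: two 4-byte entries
```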

Let me know your opinion for the same.

regards,
keshava


-Original Message-
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com] 
Sent: Monday, December 15, 2014 3:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

Hi Ryan,

We have been working on similar use cases to announce /32s with the BaGPipe 
BGP speaker that supports EVPN.
Please have a look at use case B in [1][2].
Note also that the L2population mechanism driver for ML2, which is compatible 
with OVS, Linuxbridge and the Ryu ofagent, is inspired by EVPN [3], and I'm 
sure it could help in your use case.

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYcsns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger ryan.cleven...@rackspace.com 
wrote:
 Hi,

 At Rackspace, we have a need to create a higher level networking 
 service primarily for the purpose of creating a Floating IP solution 
 in our environment. The current solutions for Floating IPs, being tied 
 to plugin implementations, does not meet our needs at scale for the following 
 reasons:

 1. Limited endpoint H/A, mainly targeting failover only and not 
 multi-active endpoints,
 2. Lack of noisy neighbor and DDOS mitigation,
 3. IP fragmentation (with cells, public connectivity is terminated 
 inside each cell leading to fragmentation and IP stranding when cell 
 CPU/Memory use doesn't line up with allocated IP blocks. Abstracting 
 public connectivity away from nova installations allows for much more 
 efficient use of those precious IPv4 blocks).
 4. Diversity in transit (multiple encapsulation and transit types on a 
 per floating ip basis).

 We realize that network infrastructures are often unique and such a 
 solution would likely diverge from provider to provider. However, we 
 would love to collaborate with the community to see if such a project 
 could be built that would meet the needs of providers at scale. We 
 believe that, at its core, this solution would boil down to 
 terminating north-south traffic temporarily at a massively 
 horizontally scalable centralized core and then encapsulating traffic 
 east-west to a specific host based on the association setup via the current 
 L3 router's extension's 'floatingips'
 resource.

 Our current idea involves using Open vSwitch for header rewriting and 
 tunnel encapsulation combined with a set of Ryu applications for management:

 https://i.imgur.com/bivSdcC.png

 The Ryu application uses Ryu's BGP support to announce up to the 
 Public Routing layer individual floating ips (/32's or /128's) which 
 are then summarized and announced to the rest of the datacenter. If a 
 particular floating ip is experiencing unusually large traffic (DDOS, 
 slashdot effect, etc.), the Ryu application could change the 
 announcements up to the Public layer to shift that traffic to 
 dedicated hosts setup for that purpose. It also announces a single /32 
 Tunnel Endpoint ip downstream to the TunnelNet Routing system which 
 provides transit to and from the cells and their hypervisors. Since 
 traffic from either direction can then end up on any of the FLIP 
 hosts, a simple flow table to modify the MAC and IP in either the SRC 
 or DST fields (depending on traffic direction) allows the system to be 
 completely stateless. We have proven this out (with static routing and
 flows) to work reliably in a small lab setup.
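
The stateless rewrite described above can be modeled in a few lines. This is 
a toy model of the idea, not Rackspace's implementation: all addresses and 
MACs are invented, and in practice the map would be programmed as OVS flows 
rather than a Python dict.

```python
# Toy model of the stateless FLIP scheme: a static map from floating IP to
# (backend MAC, fixed IP) lets any FLIP host rewrite packets in either
# direction with no per-connection state. All values are invented.
FLIP_MAP = {
    "203.0.113.10": ("fa:16:3e:00:00:01", "10.0.0.5"),
}

def rewrite(pkt):
    """Rewrite DST fields for inbound traffic, SRC fields for outbound."""
    if pkt.get("dst_ip") in FLIP_MAP:              # inbound: toward the VM
        mac, fixed = FLIP_MAP[pkt["dst_ip"]]
        return {**pkt, "dst_mac": mac, "dst_ip": fixed}
    for flip, (_mac, fixed) in FLIP_MAP.items():   # outbound: from the VM
        if pkt.get("src_ip") == fixed:
            return {**pkt, "src_ip": flip}
    return pkt

inbound = {"src_ip": "198.51.100.7", "dst_ip": "203.0.113.10",
           "dst_mac": "aa:aa:aa:aa:aa:aa"}
outbound = {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.7"}
assert rewrite(inbound)["dst_ip"] == "10.0.0.5"
assert rewrite(outbound)["src_ip"] == "203.0.113.10"
```

Because the mapping is the same for every FLIP host, traffic for either 
direction can land on any of them, which is what makes the layer horizontally 
scalable.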

 On the hypervisor side, we currently plumb networks into separate OVS 
 bridges. Another Ryu application would control the bridge that handles 
 overlay networking to selectively divert traffic destined for the 
 default gateway up to the FLIP NAT systems, taking into account any 
 configured logical routing and local L2 traffic to pass out

Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-10 Thread A, Keshava
Hi Murali,

There are many unknowns w.r.t. 'Service-VM' and how it should look from an NFV 
perspective.
In my opinion it has not been decided what the Service-VM framework should be.
Depending on this, we at OpenStack will also see an impact on 'Service Chaining'.
Please find attached the mail w.r.t. that discussion with NFV on 'Service-VM + 
OpenStack OVS'.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B mbi...@gmail.com wrote:
Hi keshava,

We would like to contribute towards service chaining and NFV.

Could you please share any document you have related to the service VM?

The service chain can be achieved if we are able to redirect the traffic to 
the service VM using OVS flows.

In this case we do not need to have routing enabled on the service VM (traffic 
is redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding 
the OVS rules in OVS.


Thanks
-Murali





---BeginMessage---
Some of my perspective inline, marked with [RP]:

From: A, Keshava keshav...@hp.com
Date: Wednesday, December 10, 2014 at 8:59 AM
To: Christopher Price christopher.pr...@ericsson.com, 
opnfv-tech-disc...@lists.opnfv.org, opnfv-...@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Service VM v/s its basic framework

Hi Chris,

Thanks for your reply.
In my opinion it is very important to have a common understanding of how the 
Service-VM should look.


1.   There are many questions coming in, like 'Can OVS also be part of the 
Service-VM?'

[RP] It can, but the only advantage is if it can process NSH headers.


If multiple features (services) are running within one Service-VM, use the 
'local OVS' to do service chaining.

(Of course it can also be handled by the internal routing table, by setting 
the next-hop as the next service running within that local VM, if routing is 
running in that VM.)

[RP] Then the local OVS becomes an SFF. This is fine, I see no issues with it. 
Your Service Path topology would include each (SFF, SF) tuple inside your 
service-VM as a service hop.



2.   OVS running in compute node:

a.   Can it be used to do 'service chaining across Service-VMs' running 
within the same compute node?

b.  Service-VMs running in different compute nodes can be chained by the 
Service Layer.

[RP] As soon as an OVS (irrespective of where it is) sends packets to Service 
Functions it becomes an SFF. If you think of it like that, everything becomes 
simpler.

   With both 1 + 2, and the 'Service Layer' running in NFV orchestration plus 
the 'Service topology', this 'Service Layer' will configure:

a.   the 'OpenStack Controller', to configure the OVS it manages for service 
chaining;

b.  the Service-VM, to chain the services within that VM itself.

[RP] I think OpenStack's current layer-2 hop-by-hop Service Path diverges 
considerably from IETF's proposal and consequently from the ODL 
implementation. I think this is a good opportunity to align everything.


3.   HA framework:

a.   Will Service VMs run in Active-Active mode or Active-Standby mode?

b.  How should the incoming packet be delivered?

c.   Should OpenStack deliver the packet only to the Active-VM?

     i.  Or to both the Active-VM and Standby-VM together?

     ii.  Or first to the Standby-VM, and from there deliver it to the 
Active-VM?

d.  Should the Active-VM control the Standby-VM?

[RP] Let's think about SFF and SF. The SFF controls where packets are sent, 
period. SFs have no say in it.

e.  Will the Active-VM control the network?

f.  Will the Active-VM be ahead of the Standby-VM as far as 'live network 
information' is concerned?

4.   Can the Service-VM export routing/forwarding information to the 
OpenStack infrastructure? Or should it stay within that Service-VM

[openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-09 Thread A, Keshava

I have some basic questions w.r.t. Service-VMs running NFV. (These Service-VMs 
can be vNAT, vFW, vDPI, vRouting, vCPE, etc.)


1.   When these Service-VMs run over the cloud (on OpenStack CNs), will each 
be treated as a routable entity in the network?

i.e., will each Service-VM run its own routing protocol so that it is a 
reachable entity in the network?



2.   Will there be any basic framework / basic elements that need to run in 
these VMs?

a.   Should each Service-VM also run its own routing instance + L3 forwarding?

If so, are they optional or mandatory?



3.   When these Service-VMs run (which may be part of a vCPE), will each 
service packet be carried all the way to the Service-VM, or will it be handled 
in the OVS of the compute node itself?

Then how will this be handled for a routed packet?


4.   If there are multiple 'features running within a Service-VM' (for 
example NAT, FW, IPSEC),

a.   then depending on the prefix (tenant/user traffic) they may need to be 
chained differently.

     i.  Example: for tenant-1 packet prefix P1, the service execution may be 
NAT -- FW -- IPSEC.

     ii.  For tenant-2 prefix P2, it may be NAT -- IPSEC -- AAA.



5.   How is the service chain execution, which may run across multiple 
Service-VMs, controlled?

a.   Is it controlled by configuring the OVS (Open vSwitch) running in the 
compute node?

b.  When the Service-VMs are running across different compute nodes, they 
need to be chained across OVSes.

Does this need to be controlled by the NFV Service Layer + OpenStack 
Controller?


In my opinion there should be some basic discussion on how these Service-VMs' 
framework should look and how they need to be chained, and which entities are 
mandatory for such a service to run over the cloud.

Please let me know if such a discussion has already happened. Let me know 
others' opinion on the same.
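
The per-prefix chaining asked about in point 4 boils down to a lookup from 
tenant prefix to an ordered service list. A hedged sketch: the prefixes and 
service names are illustrative only, and it assumes the list of completed 
services is a prefix of the configured chain.

```python
import ipaddress

# Each tenant prefix maps to an ordered service chain; the "next service"
# for a packet is then a pure lookup. All values are made up for illustration.
CHAINS = {
    ipaddress.ip_network("10.1.0.0/16"): ["NAT", "FW", "IPSEC"],   # tenant-1
    ipaddress.ip_network("10.2.0.0/16"): ["NAT", "IPSEC", "AAA"],  # tenant-2
}

def next_service(src_ip, completed):
    """Return the service that should handle the packet next, or None
    when the chain for the packet's prefix is finished."""
    ip = ipaddress.ip_address(src_ip)
    for prefix, chain in CHAINS.items():
        if ip in prefix:
            remaining = chain[len(completed):]
            return remaining[0] if remaining else None
    return None

assert next_service("10.1.4.2", []) == "NAT"
assert next_service("10.1.4.2", ["NAT", "FW"]) == "IPSEC"
assert next_service("10.2.9.9", ["NAT", "IPSEC", "AAA"]) is None
```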

Thanks & regards,
keshava





Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread A, Keshava
Stephen,

Interesting to know what an "ACTIVE-ACTIVE topology of load balancing VMs" is.
What is the scenario: is it a Service-VM (of NFV) or a tenant VM?
Curious to know the background of these thoughts.

keshava


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, December 09, 2014 7:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

For what it's worth, I know that the Octavia project will need something which 
can do more advanced layer-3 networking in order to deliver an ACTIVE-ACTIVE 
topology of load balancing VMs / containers / machines. That's still a down 
the road feature for us, but it would be great to be able to do more advanced 
layer-3 networking in earlier releases of Octavia as well. (Without this, we 
might have to go through back doors to get Neutron to do what we need it to, 
and I'd rather avoid that.)

I'm definitely up for learning more about your proposal for this project, 
though I've not had any practical experience with Ryu yet. I would also like to 
see whether it's possible to do the sort of advanced layer-3 networking you've 
described without using OVS. (We have found that OVS tends to be not quite 
mature / stable enough for our needs and have moved most of our clouds to use 
ML2 / standard linux bridging.)

Carl:  I'll also take a look at the two gerrit reviews you've linked. Is this 
week's L3 meeting not happening then? (And man-- I wish it were an hour or two 
later in the day. Coming at y'all from PST timezone here.)

Stephen

On Mon, Dec 8, 2014 at 11:57 AM, Carl Baldwin c...@ecbaldwin.net wrote:
Ryan,

I'll be traveling around the time of the L3 meeting this week.  My
flight leaves 40 minutes after the meeting and I might have trouble
attending.  It might be best to put it off a week or to plan another
time -- maybe Friday -- when we could discuss it in IRC or in a
Hangout.

Carl

On Mon, Dec 8, 2014 at 8:43 AM, Ryan Clevenger
ryan.cleven...@rackspace.com wrote:
 Thanks for getting back Carl. I think we may be able to make this weeks
 meeting. Jason Kölker is the engineer doing all of the lifting on this side.
 Let me get with him to review what you all have so far and check our
 availability.

 

 Ryan Clevenger
 Manager, Cloud Engineering - US
 m: 678.548.7261
 e: ryan.cleven...@rackspace.com

 
 From: Carl Baldwin [c...@ecbaldwin.net]
 Sent: Sunday, December 07, 2014 4:04 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation
 and collaboration

 Ryan,

 I have been working with the L3 sub team in this direction.  Progress has
 been slow because of other priorities but we have made some.  I have written
 a blueprint detailing some changes needed to the code to enable the
 flexibility to one day run floating IPs on an L3 routed network [1].  Jaime
 has been working on one that integrates Ryu (or other speakers) with Neutron
 [2].  DVR was also a step in this direction.

 I'd like to invite you to the l3 weekly meeting [3] to discuss further.  I'm
 very happy to see interest in this area and have someone new to collaborate.

 Carl

 [1] https://review.openstack.org/#/c/88619/
 [2] https://review.openstack.org/#/c/125401/
 [3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

 On Dec 3, 2014 4:04 PM, Ryan Clevenger 
 ryan.cleven...@rackspace.com
 wrote:

 Hi,

 At Rackspace, we have a need to create a higher level networking service
 primarily for the purpose of creating a Floating IP solution in our
 environment. The current solutions for Floating IPs, being tied to plugin
 implementations, does not meet our needs at scale for the following reasons:

 1. Limited endpoint H/A mainly targeting failover only and not
 multi-active endpoints,
 2. Lack of noisy neighbor and DDOS mitigation,
 3. IP fragmentation (with cells, public connectivity is terminated inside
 each cell leading to fragmentation and IP stranding when cell CPU/Memory use
 doesn't line up with allocated IP blocks. Abstracting public connectivity
 away from nova installations allows for much more efficient use of those
 precious IPv4 blocks).
 4. Diversity in transit (multiple encapsulation and transit types on a per
 floating ip basis).

 We realize that network infrastructures are often unique and such a
 solution would likely diverge from provider to provider. However, we would
 love to collaborate with the community to see if such a project could be
 built that would meet the needs of providers at scale. We believe that, at
 its core, this solution would boil down to terminating north-south traffic
 temporarily at a massively horizontally scalable centralized

Re: [openstack-dev] opnfv proposal on DR capability enhancement on OpenStack Nova

2014-11-13 Thread A, Keshava
Zhipeng Huang,

When multiple datacenters are interconnected over WAN/Internet and the remote 
datacenter goes down, do you expect the 'native VM status' to change 
accordingly?
Is this the requirement? Is this requirement from an NFV Service VM (like a 
routing VM)?
Then isn't it for the NFV routing (BGP/IGP) / MPLS signaling (LDP/RSVP) 
protocols to handle? Does OpenStack need to handle that?

Please correct me if my understanding on this problem  is not correct.

Thanks & regards,
keshava

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, November 12, 2014 6:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][DR][NFV] opnfv proposal on DR capability 
enhancement on OpenStack Nova

- Original Message -
 From: Zhipeng Huang zhipengh...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi Team,
 
 I knew we didn't propose this in the design summit and it is kinda 
 rude in this way to jam a topic into the schedule. We were really 
 stretched thin during the summit and didn't make it to the Nova 
 discussion. Full apologies here :)
 
 What we want to discuss here is that we proposed a project in opnfv ( 
 https://wiki.opnfv.org/collaborative_development_projects/rescuer), 
 which in fact is to enhance inter-DC DR capabilities in Nova. We hope 
 we could achieve this in the K cycle, since there is no HUGE changes 
 required to be done in Nova. We just propose to add certain DR status 
 in Nova so operators could see what DR state the OpenStack is 
 currently in, therefore when disaster occurs they won't cut off the wrong 
 stuff.
 
 Sorry again if we kinda barge in here, and we sincerely hope the Nova 
 community could take a look at our proposal. Feel free to contact me 
 if anyone got any questions :)
 
 --
 Zhipeng Huang

Hi Zhipeng,

I would just like to echo the comments from the opnfv-tech-discuss list (which 
I notice is still private?) in saying that there is very little detail on the 
wiki page describing what you actually intend to do. Given this, it's very hard 
to provide any meaningful feedback. A lot more detail is required, particularly 
if you intend to propose a specification based on this idea.

Thanks,

Steve

[1] https://wiki.opnfv.org/collaborative_development_projects/rescuer


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread A, Keshava

Hi Ian/Erik,

Suppose the Service-VM contains 'multiple features' in which packets need to 
be processed one after the other.
Example: when the packet enters the Service-VM from the external network via 
OpenStack, first it should be processed for vNAT, and after finishing that the 
packet should be processed for the DPI functionality.

How do we control the chaining of execution for each packet entering the NFV 
service VM?

1.   Should each feature execution in the Service-VM be controlled by 
OpenStack? By having nested Q-in-Q (where each Q maps to a corresponding 
feature in that Service/NFV VM)?

Or

2.   Will it be informed by the Service Layer to the Service-VM (outside 
OpenStack)? Then the execution chain should be handled "internally by that 
Service-VM" itself and be transparent to OpenStack?

Or is the thinking different here?

Thanks & regards,
Keshava

From: Erik Moe [mailto:erik@ericsson.com]
Sent: Monday, November 03, 2014 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 31 oktober 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe 
erik@ericsson.com wrote:


I thought Monday's network meeting agreed that "VLAN aware VMs", trunk network 
+ L2GW were different use cases.
Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples why bridging between Neutron internal networks using 
trunk network and L2GW IMO should be avoided. I am still fine with bridging to 
external networks.

Assuming a VM with a trunk port wants to use a floating IP on a specific VLAN. 
A router has to be created on a Neutron network behind the L2GW, since the 
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just 
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks for a valid port has to be able to traverse the L2GW. 
Handling of the VM's IP addresses will most likely be affected, since the VM 
port is connected to several broadcast domains. Alternatively a new API can be 
created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the 
same time, IMO, I think they should be encouraged to become Neutronified.
In “VLAN aware VMs” the trunk port MAC address has to be globally unique, 
since it can be connected to any network; other ports still only have to be 
unique per network. But for L2GW all MAC addresses have to be globally unique, 
since they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses 
among Neutron networks. But I guess this is configurable.
Also some implementations might not be able to take VID into account when doing 
mac address learning, forcing at least unique macs on a trunk network.
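
The VID-unaware learning point above can be shown with a toy learning table: 
if the learning key ignores the VLAN ID, a MAC reused on two VLANs of a trunk 
clobbers a single entry, which is why such implementations force unique MACs 
across the trunk. All addresses and ports below are invented.

```python
# Compare MAC learning keyed on MAC alone versus keyed on (VLAN, MAC).
def learn(table, key_fn, frames):
    """Learn source (vlan, mac) -> ingress port mappings into table."""
    for vlan, mac, port in frames:
        table[key_fn(vlan, mac)] = port
    return table

frames = [(100, "fa:16:3e:00:00:01", 1),   # same MAC seen on VLAN 100...
          (200, "fa:16:3e:00:00:01", 2)]   # ...and on VLAN 200, other port

vid_unaware = learn({}, lambda v, m: m, frames)
vid_aware = learn({}, lambda v, m: (v, m), frames)

assert len(vid_unaware) == 1   # second frame clobbered the first entry
assert len(vid_aware) == 2     # per-VLAN entries coexist
```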

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would

[openstack-dev] openstack-dev] [neutron] [nfv]

2014-11-04 Thread A, Keshava
Hi,
I am thinking out loud here about NFV Service VMs and the OpenStack 
infrastructure.
Please let me know whether the below scenario analysis makes sense.

NFV Service VMs are hosted on the cloud (OpenStack), where there are 2 tenants 
with different service orders of execution.
(The service order mentioned here is just an example.)

* Does OpenStack control the order of service execution for every packet?

* Will OpenStack have a different Service-Tag for each service?

* If there are multiple features within a Service-VM, how is service 
execution controlled in that VM?

* After completion of a particular service, how will the next service be 
invoked?

Will there be pre-configured flows from OpenStack to invoke the next service 
for a tagged packet from the Service-VM?



Thanks  regards,
keshava






Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread A, Keshava
Hi Ian,

I think we need to understand how VRRP and HSRP work and where they are used, 
and what the NFV problem domain is.

VRRP: is used to provide redundancy for an L2 network, where the routers are 
connected to last-mile L2 devices/an L2 network.
   Since L2 networks are stateless, the active entity going down and 
the standby entity taking control is simple.
HSRP: is a proprietary protocol anyway.

Here we are also talking about NFV, and its redundancy covers the L3 network 
as well (routing/signaling protocol redundancy).
For routing protocols, the "Active Routing Entity" always wants to have 
control over the "Standby Routing Entity", so that the standby is under the 
control of the active.
If a VRRP kind of approach is used for 'L3 Routing Redundancy', the active 
and standby will each run independently, which is not a good model and will 
not provide five-9s redundancy.
In order to provide five-9s (99.999%) redundancy for L3 network/routing NFV 
elements, it is required to run an Active and a Standby entity, where the 
standby is under the control of the active. VRRP will not be a good option 
for this.

Then it is required to run as I mentioned.
I hope you get the idea.
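
The contrast with VRRP can be made concrete. In VRRP, peers elect a master 
independently from advertised priorities (highest priority wins, higher IP 
breaks ties, per RFC 5798), with no channel for the active to drive the 
standby's state. A toy sketch with invented router names and priorities:

```python
# Toy VRRP master election: each peer decides independently from the
# advertised priorities; nothing lets the active synchronize routing state
# into the standby first, which is the gap argued above for stateful L3 NFV.
def vrrp_master(routers):
    """Pick the master: highest priority, higher IP as tie-break."""
    return max(routers, key=lambda r: (r["priority"],
                                       tuple(int(o) for o in r["ip"].split("."))))

peers = [
    {"name": "r1", "ip": "192.0.2.1", "priority": 200},
    {"name": "r2", "ip": "192.0.2.2", "priority": 100},
]
assert vrrp_master(peers)["name"] == "r1"
# If r1's advertisements stop, r2 simply promotes itself on its own.
assert vrrp_master(peers[1:])["name"] == "r2"
```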

Thanks & regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Saturday, November 01, 2014 4:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Go  read about HSRP and VRRP.  What you propose is akin to turning off one 
physical switch port and turning on another when you want to switch from an 
active physical server to a standby, and this is not how it's done in practice; 
instead, you connect the two VMs to the same network and let them decide which 
gets the primary address.

On 28 October 2014 10:27, A, Keshava keshav...@hp.com wrote:
Hi Alan and  Salvatore,

Thanks for response and I also agree we need to take small steps.
However I have below points to make.

It is very important how the Service VM will be deployed w.r.t. HA.
As per the current discussion, you are proposing something like the below 
kind of deployment for Carrier Grade HA.
Since there is a separate port for the Standby-VM also, the corresponding 
standby-VM interface address should be globally routable as well.
This means it may require the standby routing protocol to advertise its 
interface as the next-hop for the prefixes it routes.
However, the external world should not be aware of the standby routing 
running in the network.


Instead, if we can think of running the standby on the same stack with a 
passive port, then the external world will be unaware of the standby service 
routing running.
This may be a very basic requirement from the Service-VM (NFV HA perspective) 
for the routing/MPLS/packet-processing domain.
I am bringing this issue up now itself, because you are proposing to change 
the basic framework of packet delivery to VMs.
(Of course there may be other mechanisms of supporting redundancy; however, 
they will not be as efficient as handling it at the packet level.)



Thanks & regards,
Keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 6:48 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi Salvatore

Inline below.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: October-28-14 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Keshava,

I think the thread is now going a bit off its stated topic - which is to 
discuss the various proposed approaches to vlan trunking.
Regarding your last post, I'm not sure I saw either spec implying that at the 
data plane level every instance attached to a trunk will be implemented as a 
different network stack.
AK-- Agree
Also, quoting the principle earlier cited in this thread -  make the easy 
stuff easy and the hard stuff possible - I would say that unless five 9s is a 
minimum requirement for a NFV application, we might start worrying about it 
once we have the bare minimum set of tools for allowing a NFV application over 
a neutron network.
AK-- five 9's is a 100% must requirement for NFV, but let's ensure we don't 
mix up what the underlay service needs to guarantee and what OpenStack needs 
to do to ensure this type of service. I would agree we should focus more on 
having the right configuration sets for onboarding NFV, which is what 
OpenStack needs to ensure is exposed; what is used underneath to guarantee 
the five 9's is a separate matter.
I think Ian has done a good job in explaining that while both approaches 
considered here address trunking for NFV use cases, they propose alternative

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread A, Keshava
Hi,
What are the Service-VM (NFV element) HA architectures?
The HA architectures of L2 NFV and L3 NFV elements will be different.

Keeping in mind, we need to expose the underlying port/interface architecture 
from the OpenStack side to the Service-VMs.

Thanks & Regards,
Keshava

From: A, Keshava
Sent: Monday, November 03, 2014 2:16 PM
To: Ian Wells; OpenStack Development Mailing List (not for usage questions)
Cc: A, Keshava
Subject: RE: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi Ian,

I think we need to understand how VRRP and HSRP work and where they are used, 
and what the NFV problem domain is.

VRRP is used to provide redundancy for an L2 network, where the routers are 
connected to last-mile L2 devices/an L2 network.
   Since L2 networks are stateless, the active entity going down and the 
standby entity taking control is simple.
HSRP is a proprietary protocol anyway.

Here we are also talking about NFV, and its redundancy is needed for the L3 
network as well (routing/signaling protocol redundancy).
For routing protocols, the 'Active Routing Entity' always wants to have 
control over the 'Standby Routing Entity', so that the standby remains 
synchronized with the active.
If a VRRP-like approach is used for 'L3 Routing Redundancy', the active and 
standby will each run independently, which is not a good model and will not 
provide five-9s redundancy.
In order to provide five-9s redundancy for L3 network/routing NFV elements, 
it is required to run active and standby entities where the standby is under 
the control of the active. VRRP will not be a good option for this.

So it is required to run them as I mentioned.
I hope that is clear.

Thanks & regards,
Keshava
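The stateless-failover point above can be illustrated with a toy VRRP-style master election (highest priority wins; the standby takes over only when the master stops advertising). This is a minimal sketch of the RFC 5798 semantics with purely illustrative names, not a real implementation:

```python
from dataclasses import dataclass

@dataclass
class VrrpRouter:
    name: str
    priority: int              # 1-254; highest priority becomes master
    master_down: bool = False  # set when advertisements stop arriving

def elect_master(routers):
    """Pick the master among routers that are still advertising."""
    alive = [r for r in routers if not r.master_down]
    return max(alive, key=lambda r: r.priority)

active = VrrpRouter("vrouter-a", priority=200)
standby = VrrpRouter("vrouter-b", priority=100)
assert elect_master([active, standby]).name == "vrouter-a"

# The active router fails: the standby simply takes over. Nothing about
# the active's *state* (e.g. established routing-protocol sessions)
# survives, which is the objection above to using this model for L3.
active.master_down = True
assert elect_master([active, standby]).name == "vrouter-b"
```

The election is trivial precisely because it is stateless; a stateful L3 standby would additionally need the active to replicate routing-protocol session state to it.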

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Saturday, November 01, 2014 4:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Go  read about HSRP and VRRP.  What you propose is akin to turning off one 
physical switch port and turning on another when you want to switch from an 
active physical server to a standby, and this is not how it's done in practice; 
instead, you connect the two VMs to the same network and let them decide which 
gets the primary address.

On 28 October 2014 10:27, A, Keshava <keshav...@hp.com> wrote:
Hi Alan and Salvatore,

Thanks for the responses; I also agree we need to take small steps.
However, I have the points below to make.

It is very important how the Service VMs will be deployed w.r.t. HA.
As per the current discussion, you are proposing something like the below kind 
of deployment for carrier-grade HA.
Since there is a separate port for the standby VM as well, the corresponding 
standby-VM interface address must also be globally routable.
That means the standby routing protocol may need to advertise its interface as 
the next hop for the prefixes it routes.
However, the external world should not be aware of the standby routing running 
in the network.

[inline diagrams omitted]

Instead, if we can think of running the standby on the same stack with a 
passive port (as shown below), then the external world will be unaware of the 
standby service routing that is running.
This may be a very basic requirement from the Service-VM (NFV HA) perspective 
for the routing/MPLS/packet-processing domain.
I am bringing this issue up now because you are proposing to change the basic 
framework of packet delivery to VMs.
(Of course there may be other mechanisms for supporting redundancy, but they 
will not be as efficient as handling it at the packet level.)

[inline diagram omitted]


Thanks & regards,
Keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 6:48 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi Salvatore

Inline below.


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-31 Thread A, Keshava
Hi,
Agent upgrade support is a common requirement which we need to address as a 
priority.

Regards,
Keshava

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Wednesday, October 29, 2014 8:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? 
why and how avoid?

On Wed, Oct 29, 2014 at 7:25 AM, Hly <henry4...@gmail.com> wrote:


 Sent from my iPad

 On 2014-10-29, at 8:01 PM, Robert van Leeuwen 
 <robert.vanleeu...@spilgames.com> wrote:

 I find our current design removes all flows and then adds them back 
 entry by entry; this causes every network node to break all tunnels 
 to the other network nodes and all the compute nodes.
 Perhaps a way around this would be to add a flag on agent startup 
 which would have it skip reprogramming flows. This could be used for 
 the upgrade case.

 I hit the same issue last week and filed a bug here:
 https://bugs.launchpad.net/neutron/+bug/1383674

 From an operator's perspective this is VERY annoying, since you also cannot 
 push any config changes that require/trigger a restart of the agent.
 e.g. something simple like changing a log setting becomes a hassle.
 I would prefer the default behaviour to be to not clear the flows, or at the 
 least a config option to disable it.


 +1, we have also suffered from this, even when only a very small patch is 
 applied

I'd really like to get some input from the tripleo folks, because they were the 
ones who filed the original bug here and were hit by the agent NOT 
reprogramming flows on agent restart. It does seem fairly obvious that adding 
an option around this would be a good way forward, however.

Thanks,
Kyle
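The option Kyle mentions can be sketched as a simple startup flag: clear the flows only when explicitly requested, so a routine restart keeps tunnels up. The class below is purely illustrative (Neutron later gained a `drop_flows_on_start` option with this name, but this is not the real agent code):

```python
class OVSAgent:
    """Toy stand-in for the OVS agent's flow handling on startup."""

    def __init__(self, drop_flows_on_start=False):
        self.drop_flows_on_start = drop_flows_on_start
        self.flows = ["existing-tunnel-flow"]  # pretend state left in the bridge

    def start(self):
        if self.drop_flows_on_start:
            self.flows.clear()           # old behaviour: full reprogramming
        self.flows.append("new-flow")    # (re)apply flows for current config
        return self.flows

# Default: a restart preserves existing flows, so tunnels stay up.
assert "existing-tunnel-flow" in OVSAgent().start()

# Opt-in: start from a clean slate, e.g. for upgrades that change flow layout.
assert OVSAgent(drop_flows_on_start=True).start() == ["new-flow"]
```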


 Cheers,
 Robert van Leeuwen
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread A, Keshava
Hi,
Can VM migration happen across PODs (zones)?
If so, how is the reachability of the VM addressed dynamically without any 
packet loss?

Thanks & Regards,
keshava

-Original Message-
From: Wuhongning [mailto:wuhongn...@huawei.com] 
Sent: Thursday, October 30, 2014 7:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi keshava,

Thanks for your interest in Cascading. Here is a very simple explanation:

Basically, the datacenter is not in the 2-level tree of cascading. We use the 
term POD to represent a cascaded child OpenStack (the same meaning as your 
term "zone"?). There may be a single POD or multiple PODs in one datacenter, 
like below:

(A, B, C)  ...  (D, E)  ...  (F)  ...   (G)
Each character represents a POD (child OpenStack), while the parentheses 
represent a datacenter.

Each POD has a corresponding virtual host node in the parent OpenStack, so 
when the scheduler of any project (nova/neutron/cinder...) locates a host 
node, the resource POD is determined, and with it, as a side effect, its 
geo-located datacenter. Cascading doesn't schedule by datacenter directly; the 
DC is just an attribute of a POD (for example, we can configure a host 
aggregate to identify a DC with multiple PODs). The upper scale of a POD is 
fixed, maybe several hundred hosts, so a super-large DC with tens of thousands 
of servers can be built from modularized PODs, avoiding the difficulty of 
tuning and maintaining such a huge monolithic OpenStack.
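Wu's host-node mapping can be pictured with a toy scheduler: the parent only ever picks a virtual host node, and the POD and datacenter fall out of that choice. All names below are illustrative:

```python
# virtual host node in the parent -> (POD / child OpenStack, datacenter)
PODS = {
    "vhost-a": ("pod-a", "dc-1"),
    "vhost-b": ("pod-b", "dc-1"),   # dc-1 contains two PODs
    "vhost-f": ("pod-f", "dc-3"),
}

def schedule(host_node):
    """The parent scheduler picks a host node; POD and DC follow from it."""
    pod, datacenter = PODS[host_node]
    return {"host": host_node, "pod": pod, "datacenter": datacenter}

# Choosing a host node implicitly chooses the POD and, as a side effect,
# the geo-located datacenter -- the DC is just an attribute of the POD.
placement = schedule("vhost-f")
assert placement == {"host": "vhost-f", "pod": "pod-f", "datacenter": "dc-3"}
```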

Next, do you mean networking reachability? Given the limitations of a mail 
post I can only give a very simple idea: in the parent OpenStack, L2pop and 
DVR are used, so the L2/L3 agent-proxy in each virtual host node can get all 
the VM reachability information of the other PODs; it is then set in the local 
POD via the Neutron REST API. However, cascading depends on some features that 
do not exist yet in current Neutron, like L2GW, pluggable external networks, 
east-west FWaaS in DVR, centralized FIP in DVR... so we had to do some small 
patches up front. In the future, if these features are merged, this patch code 
can be removed.

Indeed, Neutron is the most challenging part of cascading; leaving aside the 
proxies in the parent OpenStack virtual host nodes, Neutron patches account 
for 85% or more of the LOC in the whole project.

Regards,
Wu

From: keshava [keshav...@hp.com]
Sent: Wednesday, October 29, 2014 2:22 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

This is a very interesting problem to solve.
I am curious to know how reachability is provided across different 
datacenters.
How do we know which VM is part of which datacenter?
A VM may be in a different zone under the same DC, or in a different DC 
altogether.

How is this problem solved?


Thanks & regards,
keshava



--
View this message in context: 
http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-tp54115p56323.html
Sent from the Developer mailing list archive at Nabble.com.



Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Hi,
w.r.t. 'VM packet forwarding' at the L3 level by enabling routing, I have the 
points below.
With the reference diagram below, when routing is enabled to detect the 
destination VM's compute node:


1.   How many route prefixes will be injected into each compute node?

2.   For each VM address, will there be a corresponding IP address in the 
'L3 Forwarding Tbl'?
When we have a large number of VMs, of the order of 50,000 to 1 million VMs in 
the cloud, does each compute node need to maintain 1 million route entries?

3.   Even with route aggregation, it is not guaranteed to be very 
efficient, because
a.   Tenants can span across computes.
b.   VM migration can happen, which may break the aggregation and allow the 
routing table to grow.

4.   If we try to run BGP across switches and aggregate, we will be 
introducing a hierarchical network.
On any change in topology, what will the convergence time be, and will there 
be any looping issues?
The cost of the L3 switch will go up with the capacity needed to support 
10,000+ routes.

5.   With this, do we want to break the classical L2 broadcast in the 
last-mile cloud?
I was under the impression that we want to keep the cloud network a simple L2 
broadcast domain, without adding any complexity like MPLS labels, routing, or 
aggregation.

6.   The whole purpose of bringing VXLAN into the datacenter cloud is to 
keep L2, and even to be able to extend L2 to a different datacenter.

7.   I have also seen some IETF drafts w.r.t. the implementation 
architecture of OpenStack.

Let me know your opinion w.r.t. this.
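The scaling concern in points 2 and 3 can be put in rough numbers. The sketch below is back-of-the-envelope arithmetic only; the function and its parameters are illustrative, not from any spec:

```python
def fib_entries(num_vms, aggregated_blocks=0, migrated_vms=0):
    """Route entries one compute node must hold.

    aggregated_blocks: summary prefixes covering VMs that never moved.
    migrated_vms: VMs whose host route must be re-announced after a
    migration, punching holes in the aggregates.
    """
    if aggregated_blocks == 0:
        return num_vms                       # one host route (/32) per VM
    return aggregated_blocks + migrated_vms  # aggregates plus exceptions

# Pure host routes: 1M VMs means 1M FIB entries on every compute node.
assert fib_entries(1_000_000) == 1_000_000

# Aggregation looks great (4,000 summary prefixes) until VMs migrate:
# 50,000 migrations re-inflate the table to 54,000 entries and counting.
assert fib_entries(1_000_000, aggregated_blocks=4_000, migrated_vms=50_000) == 54_000
```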


[inline diagram omitted]





Thanks & regards,

Keshava



-Original Message-
From: Fred Baker (fred) [mailto:f...@cisco.com]
Sent: Wednesday, October 29, 2014 5:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking





On Oct 28, 2014, at 4:59 PM, Angus Lees <g...@inodes.org> wrote:



 On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
 Agreed. The way I'm thinking about this is that tenants shouldn't
 care what the underlying implementation is - L2 or L3. As long as the
 connectivity requirements are met using the model/API, end users
 should be fine. The data center network design should be an
 administrators decision based on the implementation mechanism that has been 
 configured for OpenStack.

 I don't know anything about Project Calico, but I have been involved
 with running a large cloud network previously that made heavy use of L3 
 overlays.

 Just because these points weren't raised earlier in this thread:  In
 my experience, a move to L3 involves losing:

 - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
 that's a whole can of worms - so perhaps best to just say up front
 that this is a non-broadcast network.

 - support for other IP protocols.

 - various L2 games like virtual MAC addresses, etc that NFV/etc people like.

I'm a little confused. IP supports multicast. It requires a routing protocol, 
and you have to join the multicast group, but it's not out of the picture.

What other IP protocols do you have in mind? Are you thinking about 
IPX/CLNP/etc? Or are you thinking about new network layers?

I'm afraid the L2 games leave me a little cold. We have been there, such as 
with DECNET IV. I'd need to understand what you were trying to achieve before I 
would consider that a loss.

 We gain:

 - the ability to have proper hierarchical addressing underneath (which
 is a big one for scaling a single network).  This itself is a
 tradeoff however - an efficient/strict hierarchical addressing scheme
 means VMs can't choose their own IP addresses, and VM migration is 
 messy/limited/impossible.

It does require some variation on a host route, and it leads us to ask about 
renumbering. The hard part of VM migration is at the application layer, not the 
network, and is therefore pretty much the same.

 - hardware support for dynamic L3 routing is generally universal,
 through a small set of mostly-standard protocols (BGP, ISIS, etc).

 - can play various L3 games like BGP/anycast, which is super useful
 for geographically diverse services.

 It's certainly a useful tradeoff for many use cases.  Users lose some
 generality in return for more powerful cooperation with the provider
 around particular features, so I sort of think of it like a step
 halfway up the IaaS-PaaS stack - except for networking.

 - Gus



 Thanks

 Rohit



 From: Kevin Benton 
 blak...@gmail.commailto:blak...@gmail.commailto:blak...@gmail.com%3cmailto:blak...@gmail.com

 Reply-To: OpenStack Development Mailing List (not for usage questions)

 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.opensta

 ck.org

 Date: Tuesday, October 28, 2014 1:01 PM

 To: OpenStack Development Mailing List (not for usage

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Hi Cory,

Here, will the NFV app use the infrastructure's 'L3 route table' to make any 
decisions?
From an OpenStack perspective, is the NFV app (VM) not like any other 
tenant VM as far as delivering packets is concerned?
Is there any thinking of the NFV app (service router VM) inserting any routing 
information into the OpenStack infrastructure?


Thanks & Regards,
keshava

-Original Message-
From: Cory Benfield [mailto:cory.benfi...@metaswitch.com] 
Sent: Thursday, October 30, 2014 2:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 21:50:09, Carl Baldwin wrote:
 Many API users won't care about the L2 details.  This could be a 
 compelling alternative for them.  However, some do.  The L2 details 
 seem to matter an awful lot to many NFV use cases.  It might be that 
 this alternative is just not compelling for those.  Not to say it 
 isn't compelling overall though.

Agreed. This is a point worth emphasising: routed networking is not a panacea 
for everyone's networking woes. We've got a lot of NFV people and products at 
my employer, and while we're engaged in work to come up with L3 approaches to 
solve their use-cases, we'd like to draw a balance between adding complexity to 
the network layer to support legacy L2-based requirements and providing better 
native L3 solutions that NFV applications can use instead.  One of the key 
challenges with NFV is that it shouldn't just be a blind porting of existing 
codebases - you need to make sure you're producing something which takes 
advantage of the new environment.

Cory



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-30 Thread A, Keshava
OK,
You may need to think of bringing in BGP routing between PODs to support live 
migration.


Thanks & Regards,
keshava

-Original Message-
From: joehuang [mailto:joehu...@huawei.com] 
Sent: Thursday, October 30, 2014 4:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello, Keshava

Live migration is allowed inside one pod (one cascaded OpenStack instance); 
cross-pod live migration is not supported yet.

But cold migration can be done between pods, even across data centers.

Live migration across pods will be studied in the future.

Best Regards

Chaoyi Huang ( joehuang )




Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Agreed!

Regards,
keshava

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Friday, October 31, 2014 2:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

These are all important discussion topics, but we are getting pulled into 
implementation-specific details again. Routing aggregation and network topology 
is completely up to the backend implementation.

We should keep this thread focused on the user-facing abstractions and the 
changes required to Nova and Neutron to enable them. Then when it is time to 
implement the reference implementation in Neutron, we can have this discussion 
on optimal placement of BGP nodes, etc.



Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-29 Thread keshava
This is a very interesting problem to solve.
I am curious to know how reachability is provided across different 
datacenters.
How do we know which VM is part of which datacenter?
A VM may be in a different zone under the same DC, or in a different DC 
altogether.

How is this problem solved?


Thanks & regards,
keshava



--
View this message in context: 
http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-tp54115p56323.html
Sent from the Developer mailing list archive at Nabble.com.



Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi,
Current OpenStack was built as a flat network.
With the introduction of an L3 lookup (by inserting a routing table into the 
forwarding path) and a separate 'VIF Route Type' interface:

At what point in packet processing will the decision be made to look up the 
FIB? Will there be an additional FIB lookup for each packet?
What about the impact on 'inter-compute traffic' processed by DVR?

Are we here thinking of the OpenStack cloud as a hierarchical network instead 
of a flat network?

Thanks & regards,
Keshava

From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
Sent: Monday, October 27, 2014 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi

I'm interested as well in this model. Curious to understand the routing filters 
and their implementation that will enable isolation between tenant networks.
Also, having a BoF session on "Virtual Networking using L3" may be useful to 
get all the interested folks together at the Summit.


Thanks
Rohit

From: Kevin Benton <blak...@gmail.com>
Reply-To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Date: Friday, October 24, 2014 12:51 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more 
details for that available?
It seems like not providing L2 connectivity between members of the same Neutron 
network conflicts with assumptions ML2 will make about segmentation IDs, etc. 
So I am interested in seeing how exactly the ML2 driver will bind ports, 
segments, etc.


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield 
<cory.benfi...@metaswitch.com> wrote:
All,

Project Calico [1] is an open source approach to virtual networking based on L3 
routing as opposed to L2 bridging.  In order to accommodate this approach 
within OpenStack, we've just submitted 3 blueprints that cover

-  minor changes to nova to add a new VIF type [2]
-  some changes to neutron to add DHCP support for routed interfaces [3]
-  an ML2 mechanism driver that adds support for Project Calico [4].

We feel that allowing for routed network interfaces is of general use within 
OpenStack, which was our motivation for submitting [2] and [3].  We also 
recognise that there is an open question over the future of 3rd party ML2 
drivers in OpenStack, but until that is finally resolved in Paris, we felt 
submitting our driver spec [4] was appropriate (not least to provide more 
context on the changes proposed in [2] and [3]).

We're extremely keen to hear any and all feedback on these proposals from the 
community.  We'll be around at the Paris summit in a couple of weeks and would 
love to discuss with anyone else who is interested in this direction.

Regards,

Cory Benfield (on behalf of the entire Project Calico team)

[1] http://www.projectcalico.org
[2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
[4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver



Re: [openstack-dev] [neutron] vm can not transport large file under neutron ml2 + linux bridge + vxlan

2014-10-28 Thread A, Keshava
Hi,
Please find my reply inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] vm can not transport large file under 
neutron ml2 + linux bridge + vxlan

On 28 October 2014 00:18, A, Keshava keshav...@hp.com wrote:
Hi,

Does OpenStack currently have any framework to notify the Tenant/Service-VM of 
such events, based on the VM's interest?

It's possible to use DHCP or RA to notify a VM of the MTU but there are 
limitations (RAs don't let you increase the MTU, only decrease it, and 
obviously VMs must support the MTU element of DHCP) and Openstack doesn't 
currently use it.  You can statically configure the DHCP MTU number that DHCP 
transmits; this is useful to work around problems but not really the right 
answer to the problem.

A VM may be very much interested in such notifications, for example:

1.   Path MTU.
This will be correctly discovered from the ICMP PMTU exceeded message, and 
Neutron routers should certainly be expected to send that.  (In fact the 
namespace implementation of routers would do this if the router ever had 
different MTUs on its ports; it's in the kernel network stack.)  There's no 
requirement for a special notification, and indeed you couldn't do it that way 
anyway.
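As an aside, the discovery loop Ian describes can be sketched as a toy simulation (the hop MTU values here are invented for illustration; a real sender learns each bottleneck from an ICMP "fragmentation needed" reply rather than from a list):

```python
# Toy simulation of path MTU discovery: the sender shrinks its packet
# size each time a hop reports "fragmentation needed", exactly the
# in-band mechanism a VM relies on - no special notification channel.
def discover_path_mtu(link_mtus, initial=1500):
    """Shrink the probe size until every hop on the path accepts it."""
    size = initial
    while True:
        # First hop whose MTU is too small for the current packet size.
        bottleneck = next((mtu for mtu in link_mtus if mtu < size), None)
        if bottleneck is None:
            return size          # packet now fits the whole path
        size = bottleneck        # ICMP too-big carries the hop's MTU

print(discover_path_mtu([1500, 1450, 9000]))  # -> 1450
print(discover_path_mtu([1500, 1500]))        # -> 1500
```

If the path changes (e.g. a router fails over to a lower-MTU link), the same loop simply runs again when the next too-big reply arrives.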

In a network, an interface or router going down is a common scenario. In that 
case the packet will take a different path, which may have a different MTU.
The path MTU calculated at the source may then change and should be notified 
dynamically to the VM, so that the VM can originate packets of the required 
MTU size.
If there is no such notification mechanism (as per this reply):
If there is no dynamic path-MTU change notification to the VM, how can the VM 
change its packet size?
Or
Do we expect the ICMP "too big" message to reach all the way to the VM?
Or
Should the VM itself run path MTU discovery?


2.   Based on specific incoming tenant traffic, block/allow a particular 
traffic flow at the infrastructure level itself, instead of at the VM.
I don't see the relevance; and you appear to be describing security groups.

This may require OpenStack infrastructure notification support to 
Tenant/Service VM.
Not particularly, as MTU doesn't generally change, and I think we would forbid 
changing the MTU of a network after creation.  It's only an initial 
configuration thing, therefore.  It might involve better cloud-init support for 
network configuration, something that gets discussed periodically.

--
Ian.

…
Thanks & regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 11:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] vm can not transport large file under 
neutron ml2 + linux bridge + vxlan

Path MTU discovery works on a path - something with an L3 router in the way - 
where the outbound interface has a smaller MTU than the inbound one.  You're 
transmitting across an L2 network - no L3 routers present.  You send a 1500 
byte packet, the network fabric (which is not L3, has no address, and therefore 
has no means to answer you) does all that it can do with that packet - it drops 
it.  The sender retransmits, assuming congestion, but the same thing happens.  
Eventually the sender decides there's a network problem and times out.

This is a common problem with Openstack deployments, although various features 
of the virtual networking let you get away with it, with some configs and not 
others.  OVS used to fake a PMTU exceeded message from the destination if you 
tried to pass an overlarge packet - not in spec, but it hid the problem nicely. 
 I have a suspicion that some implementations will fragment the containing UDP 
packet, which is also not in spec and also solves the problem (albeit with poor 
performance).

The right answer for you is to set the MTU in your machines to the same MTU 
you've given the network, that is, 1450 bytes.  You can do this by setting a 
DHCP option for MTU, providing your VMs support that option (search the web for 
the solution, I don't have it offhand) or lower the MTU by hand or by script 
when you start your VM.
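For reference, the 1450-byte figure suggested above is simply the physical MTU minus the VXLAN encapsulation overhead; a quick sketch of the arithmetic (assuming an IPv4 underlay and no outer VLAN tag):

```python
# Why a 1450-byte guest MTU on a 1500-byte physical network:
# VXLAN wraps each guest frame in outer IPv4 + UDP + VXLAN headers,
# and carries the inner Ethernet header inside the tunnel.
OUTER_IPV4 = 20   # outer IPv4 header
OUTER_UDP = 8     # outer UDP header
VXLAN_HDR = 8     # VXLAN header
INNER_ETH = 14    # inner Ethernet header carried in the payload

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH
physical_mtu = 1500
guest_mtu = physical_mtu - overhead
print(overhead, guest_mtu)  # -> 50 1450
```

With an IPv6 underlay or an outer VLAN tag the overhead grows further, which is why the advertised guest MTU really needs to be computed per deployment rather than assumed.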
The right answer for everyone is to properly determine and advertise the 
network MTU to VMs (which, with provider networks, is not even consistent from 
one network to the next) and that's the spec Kyle is referring to.  We'll be 
fixing this in Kilo.
--
Ian.
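For the DHCP-option route Ian mentions, one hedged sketch (the file paths and wiring are deployment assumptions, not the only way to do it; option 26 is the standard DHCP interface-MTU option):

```ini
# Hypothetical /etc/neutron/dnsmasq-neutron.conf, referenced from
# dhcp_agent.ini via: dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
# Option 26 is the DHCP interface-MTU option; guests whose DHCP clients
# honour it will bring their interface up with a 1450-byte MTU.
dhcp-option-force=26,1450
```

Guests whose DHCP clients ignore option 26 still need the MTU lowered by hand, by script, or via cloud-init.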

On 27 October 2014 20:14, Li Tianqing jaze...@163.com wrote:



--
Best
Li Tianqing


At 2014-10-27 17:42:41, Ihar Hrachyshka ihrac...@redhat.com wrote:

On 27/10/14 02:18, Li Tianqing wrote:

 Hello, Right now, we test neutron under havana release. We

 configured network_device_mtu=1450

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Please find my reply inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava keshav...@hp.com wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of scenario, it is very much required to run the 'Service 
VM' in active and standby mode.
The standby is more of a passive entity and will not take any action toward the 
external network. It will be a passive consumer of packets/information.
In that scenario it would be very meaningful to have an
"active port" connected to the "active Service VM" and a
"standby port" connected to the "standby Service VM", which turns active when 
the old active VM goes down.

Let us know others' opinions about this concept.

 2.   Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In the case of IPv6 there can be multiple primary addresses configured; will 
this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required, can these ports be aggregated into a single one 
dynamically?
That's not really relevant to trunk ports or networks.

 4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?
For trunking ports, I don't believe anyone was considering it.




Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to are unaware 
of your existence. This IMO is OK when bridging a Neutron network to some remote 
network, but if you have a Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things get, let's say, non-streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation that on some clouds passing VLANs works and on 
others it doesn't, and that the tenant can't actually tell in advance which 
sort of cloud they're working on.
Trunk networks are a requirement for some use cases independent of the port 
awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and the hard 
stuff possible' we can't just say 'no Neutron network passes VLAN tagged 
packets'.  And even if we did, we're evading a problem that exists with exactly 
one sort of network infrastructure - VLAN tagging for network separation - 
while making it hard to use for all of the many other cases in which it would 
work just fine.

In summary, if we did port-based VLAN knowledge I would want to be able to use 
VLANs without having to use it (in much the same way that I would like, in 
certain circumstances, not to have to use Openstack's address allocation and 
DHCP - it's nice that I can, but I shouldn't be forced to).
My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, no new bottlenecks/single point of failure. Due to 
this and previous issues I implemented the L2 gateway in a distributed fashion 
and since

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Please find my reply inline.


Regards,
keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Please find my reply inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava keshav...@hp.com wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of scenario, it is very much required to run the 'Service 
VM' in active and standby mode.
AK-- We have a different view on this: the application runs as a pair, either 
active-active or active-standby. This has nothing to do with HA; it's down to 
the application and how it's provisioned and configured via OpenStack. So I 
agree with Ian on this.
The standby is more of a passive entity and will not take any action toward the 
external network. It will be a passive consumer of packets/information.
AK-- Why would we need to care?
In that scenario it would be very meaningful to have an
"active port" connected to the "active Service VM" and a
"standby port" connected to the "standby Service VM", which turns active when 
the old active VM goes down.
AK-- Can't you just have two VMs and then, via a controller, decide how to 
handle MAC + IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinions about this concept.
AK-- Perhaps I am misreading this, but I don't understand what this would 
provide as opposed to having two VMs instantiated and running. Why does Neutron 
need to care about the port state between these two VMs? Similarly, it's better 
to just have 2 or more VMs up, and the application will be able to handle 
failover when it occurs/is required. Let's keep it simple and not mix in what 
the apps do inside the containment.

Keshava:
Since this solution is more for a carrier-grade NFV Service VM, I have the 
points below.
Let us say the Service VM is running BGP, BGP-VPN, or 'MPLS + LDP + BGP-VPN'.
When such carrier-grade services are running, how do we provide five-nines HA?
In my opinion,
both (active/standby) Service VMs should hook into the same underlying 
OpenStack infrastructure stack (br-ext - br-int - qxx - VM).
However, the 'active VM' hooks to the 'active port' and the 'standby VM' hooks 
to the 'passive port' within the same stack.

Instead, if the active and standby VMs hook to 2 different stacks 
(br-ext1 - br-int1 - qxx1 - VM-active) and (br-ext2 - br-int2 - qxx2 - 
VM-standby), can those Service VMs achieve five-nines reliability?

Yes, I may be thinking in a somewhat complicated way from an OpenStack 
perspective.

 2.   Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In the case of IPv6 there can be multiple primary addresses configured; will 
this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required, can these ports be aggregated into a single one 
dynamically?
That's not really relevant to trunk ports or networks.

 4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?
For trunking ports, I don't believe anyone was considering it.





Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi Cory,

Yes, that is the basic question I have.

Is the OpenStack cloud ready to move away from a flat L2 network?

1. Will every packet require an L3 FIB lookup (radix-tree search) instead of 
the current L2 hash/index lookup?
2. Will there be a hierarchical network? How many of the routes will be 
imported from the external world?
3. Will there be a separate routing domain for the overlay network, or will it 
be mixed with the external/underlay network?
4. What will be the basic use case of this? Thinking of L3 switching to 
support the BGP-MPLS L3 VPN scenario right from the compute node?

Others can give their opinion also.

Thanks & Regards,
keshava

-Original Message-
From: Cory Benfield [mailto:cory.benfi...@metaswitch.com] 
Sent: Tuesday, October 28, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
 Hi,
 
 Current Open-stack was built as flat network.
 
 With the introduction of the L3 lookup (by inserting the routing table 
 in forwarding path) and separate 'VIF Route Type' interface:
 
 At what point in the packet processing will the decision be made to 
 look up the FIB? Will there be an additional FIB lookup for each 
 packet?
 
 How about the impact on 'inter-compute traffic', processed by DVR?
 Are we thinking of the OpenStack cloud as a hierarchical network 
 instead of a flat network?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.
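The per-packet FIB lookup being discussed is, semantically, a longest-prefix match; a toy sketch in Python (stdlib `ipaddress`, with a linear scan standing in for the kernel's radix trie; the prefixes and next-hop names are invented for illustration):

```python
import ipaddress

# A toy FIB: prefix -> next hop. Real implementations use a radix/LPM
# trie; this linear scan only shows the longest-prefix-match semantics.
fib = {
    ipaddress.ip_network("0.0.0.0/0"): "via-upstream",
    ipaddress.ip_network("10.65.0.0/16"): "via-fabric",
    ipaddress.ip_network("10.65.0.2/32"): "tap-vm-a",  # per-VM host route
}

def lookup(dst):
    """Return the next hop of the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return fib[best]

print(lookup("10.65.0.2"))   # -> tap-vm-a (the /32 host route wins)
print(lookup("10.65.9.9"))   # -> via-fabric
print(lookup("8.8.8.8"))     # -> via-upstream
```

In a purely routed model, each compute host's FIB carries a host route per local VM, so the most specific match delivers straight to the VM's tap device without any L2 broadcast domain.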

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-20 Thread A, Keshava
Hi,


1.   From where will the MPLS traffic be initiated?

2.   How will it be mapped?


Regards,
Keshava
From: Damon Wang [mailto:damon.dev...@gmail.com]
Sent: Friday, October 17, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

Good news, +1

2014-10-17 0:48 GMT+08:00 Mathieu Rohon mathieu.ro...@gmail.com:
Hi all,

as discussed during today's l3-meeting, we keep on working on BGPVPN
service plugin implementation [1].
MPLS encapsulation is now supported in OVS [2], so we would like to
submit a design to leverage OVS capabilities. A first design proposal,
based on the l3agent, can be found here:

https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit

this solution is based on bagpipe [3], and its capacity to manipulate
OVS, based on advertised and learned routes.

[1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
[2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
[3]https://github.com/Orange-OpenSource/bagpipe-bgp


Thanks

Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Private external network

2014-10-14 Thread A, Keshava
Hi,
Across these private external networks/tenants, can floating IPs be shared?


Keshava


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Tuesday, October 14, 2014 10:33 PM
To: Édouard Thuleau
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [neutron] Private external network

The blueprint was untargeted mostly because the analysis indicated that there 
was no easy solution, and that what we needed was a solution to do some RBAC on 
neutron resources.

I think this would be a good addition to the Neutron resource model, and it 
would be great if you could start the discussion on the mailing list exposing 
your thoughts.

Salvatore

On 14 October 2014 11:50, Édouard Thuleau thul...@gmail.com wrote:
Hi Salvatore,

I would like to propose a blueprint for the next Neutron release that permits 
dedicating an external network to a tenant. For that I thought to rethink the 
conjunction of the two attributes `shared`
and `router:external` of the network resource.

I saw that you already initiated work on that topic [1] and [2], but the bp was 
un-targeted in favour of an alternative approach which might be more complete. 
Was that alternative released, or is it work in progress? (To be sure not to 
duplicate work/effort.)

[1] 
https://blueprints.launchpad.net/neutron/+spec/sharing-model-for-external-networks
[2] https://wiki.openstack.org/wiki/Neutron/sharing-model-for-external-networks

Regards,
Édouard.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] High bandwidth routers

2014-06-23 Thread A, Keshava
Hi,
I think there has not been much consideration of the L3 forwarding capacity, of 
the order of 100G, in the Network Node (NN).
I am not sure the current software queues in the NN are capable of handling 
packet rates at 100G.
(Of course, for compute nodes there will be SR-IOV to speed this up.)

Instead, you can consider having multiple Network Nodes deployed, so that L3 
forwarding will be distributed across multiple NNs.
Make sure you have a separate public IP for each of these NNs, so that 
session-related issues will not arise in the NNs.


Regards,
Keshava.A.K.


From: CARVER, PAUL [mailto:pc2...@att.com]
Sent: Monday, June 23, 2014 6:51 PM
To: OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] High bandwidth routers

Is anyone using Neutron for high-bandwidth workloads? (For the sake of 
discussion, let's say high = 50 Gbps or greater.)

With routers being implemented as network namespaces within x86 servers it 
seems like Neutron networks would be pretty bandwidth constrained relative to 
real routers.

As we start migrating the physical connections on our physical routers from 
multiple of 10G to multiples of 100G, I'm wondering if Neutron has a clear 
roadmap towards networks where the bandwidth requirements exceed what an x86 
box can do.

Is the thinking that x86 boxes will soon be capable of 100G and multi-100G 
throughput? Or does DVR take care of this by spreading the routing function 
over a large number of compute nodes so that we don't need to channel 
multi-100G flows through single network nodes?

I'm mostly thinking about WAN connectivity here, video and big data 
applications moving huge amounts of traffic into and out of OpenStack based 
datacenters.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-06-21 Thread A, Keshava
Hi Thomas,

This is interesting.
I have some basic questions about the deployment model of using this BaGPipe 
BGP in a virtual cloud network.

1. Do we want MPLS to start right from the compute node as part of tenant traffic?
2. Do we want L3 VRF separation right on the compute nodes (or NN node)?
Tenant = VRF?
A tenant's span can be across multiple CN nodes; do we then have a full BGP 
mesh within the CNs?
3. How do we map E-VPN connectivity at NN/CN nodes?
Is there L2 VPN pseudowire thinking from the CN nodes themselves?
4. Is tenant traffic L2, L3, or MPLS? Where will L2 be terminated?

Help me understand the deployment model for this.



-Original Message-
From: Thomas Morin [mailto:thomas.mo...@orange.com] 
Sent: Thursday, June 19, 2014 9:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

Hi everyone,

Sorry, I couldn't make it in time for the IRC meeting.

Just saw in the logs:
15:19:12 yamamoto are orange folks here?  they might want to
 introduce their bgp speaker.

The best intro to BaGPipe BGP is the README on github:
https://github.com/Orange-OpenSource/bagpipe-bgp/blob/master/README.md

Beyond just speaking the BGP protocol on the wire, BaGPipe is an 
implementation of BGP VPNs (IP VPNs and E-VPNs) including the forwarding part. 
It can be run as a service exposing a REST API, or as a library inside an agent; 
it handles the lifecycle of VRFs and ports attached/detached from them, and 
appropriately circulates events to/from BGP peers based on VRF import/export 
rules and RTC publish/subscribe semantics.  It's complete enough to let us 
build neutron virtual networks with IP VPNs, and interconnect them with 
external VPNs; the parts for OpenStack integration are not on this github, I'm 
just mentioning this for the sake of illustrating the relative maturity.
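The VRF import/export semantics described above can be sketched in miniature; a simplified model (no BGP wire protocol, and the class names and route-target strings are invented for illustration):

```python
# Minimal model of BGP VPN route-target semantics: a route carries the
# export RTs of its source VRF, and another VRF imports it iff that
# set intersects its own import RTs.

class Route:
    def __init__(self, prefix, next_hop, route_targets):
        self.prefix = prefix
        self.next_hop = next_hop
        self.route_targets = set(route_targets)

class Vrf:
    def __init__(self, name, import_rts, export_rts):
        self.name = name
        self.import_rts = set(import_rts)
        self.export_rts = set(export_rts)
        self.rib = {}            # prefix -> next hop

    def export_route(self, prefix, next_hop):
        """Advertise a local route tagged with this VRF's export RTs."""
        return Route(prefix, next_hop, self.export_rts)

    def maybe_import(self, route):
        """Install the route only if an import RT matches."""
        if self.import_rts & route.route_targets:
            self.rib[route.prefix] = route.next_hop

# Two VRFs of the same VPN share RT 64512:100; a third VRF does not.
a = Vrf("vrf-a", {"64512:100"}, {"64512:100"})
b = Vrf("vrf-b", {"64512:100"}, {"64512:100"})
c = Vrf("vrf-c", {"64512:200"}, {"64512:200"})

advert = a.export_route("10.0.1.0/24", "192.0.2.1")
for vrf in (b, c):
    vrf.maybe_import(advert)

print(b.rib)  # -> {'10.0.1.0/24': '192.0.2.1'}
print(c.rib)  # -> {}
```

RTC (RT Constraint) adds a subscription layer on top of this, so a speaker only receives routes whose RTs it has expressed interest in, rather than filtering after delivery.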

Although it does not address plain IP, this would, I believe, be a really easy 
addition to make.

I'll do my best to attend next week's IRC meeting, but until then, feel free to 
ask.  We can also do a Q&A session on IRC if that sounds convenient.

Best,

-Thomas



2014-06-13, YAMAMOTO Takashi:
 an update after today's l3 meeting:
 here's a new version of ryu bgp api patch.
 http://sourceforge.net/p/ryu/mailman/message/32453021/

 it has been merged to the ryu master.
  https://github.com/osrg/ryu.git

 here's formatted documentation:
  http://ryu.readthedocs.org/en/latest/library_bgp_speaker.html
  http://ryu.readthedocs.org/en/latest/library_bgp_speaker_ref.html

 YAMAMOTO Takashi


 YAMAMOTO Takashi

 I have seen the Ryu team is involved and responsive to the community.
 That goes a long way to support it as the reference implementation 
 for BPG speaking in Neutron.  Thank you for your support.  I'll look 
 forward to the API and documentation refinement

 Let's be sure to document any work that needs to be done so that it 
 will support the features we need.  We can use the comparison page 
 for now [1] to gather that information (or links).  If Ryu is 
 lacking in any area, it will be good to understand the timeline on 
 which the features can be delivered and stable before we make a 
 formal decision on the reference implementation.

 Carl

 [1] https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison

 On Thu, Jun 5, 2014 at 10:36 AM, Jaume Devesa devv...@gmail.com wrote:
 After watching the documentation and the code of exabgp and Ryu, I 
 find the Ryu speaker much easier to integrate and more pythonic than 
 exabgp. I will use it as the reference implementation in the Dynamic 
 Routing bp.

 Regards,


 On 5 June 2014 18:23, Nachi Ueno na...@ntti3.com wrote:

 Yamamoto
 Cool! OK, I'll make ryu based bgpspeaker as ref impl for my bp.

 Yong
 Ya, we have already decided to have the driver architecture.
 IMO, this discussion is for reference impl.

 2014-06-05 0:24 GMT-07:00 Yongsheng Gong gong...@unitedstack.com:
 I think maybe we can device a kind of framework so that we can 
 plugin different BGP speakers.


 On Thu, Jun 5, 2014 at 2:59 PM, YAMAMOTO Takashi 
 yamam...@valinux.co.jp
 wrote:

 hi,

 ExaBGP was our first choice because we thought that running 
 something in library mode would be much easier to deal with 
 (especially the exceptions and corner cases) and the code would 
 be much cleaner. But it seems that Ryu BGP can also fit this 
 requirement. And having the help of a Ryu developer like you 
 turns it into a promising candidate!

 I'll start working now in a proof of concept to run the agent 
 with these implementations and see if we need more requirements 
 to compare between the speakers.

 we (ryu team) love to hear any suggestions and/or requests.
 we are currently working on our bgp api refinement and documentation.
 hopefully they will be available early next week.

 for both of bgp blueprints, it would be possible, and might be 
 desirable, to create reference 

[openstack-dev] [Neutron][L3] - L3 High availability blueprint v/s pure L3 packet

2014-06-05 Thread A, Keshava
Carl,

In the L3 High Availability blueprint I want to make the following observations:


1.   Just before the active goes down, if there was a fragmented packet being 
reassembled, how do we recover it?

If that packet is application-related, the app will resend those packets so 
they get reassembled on the new board.
Issue:
The issue is with packets which are pure L3-level packets,
for example IPsec packets (where there is recursive encapsulation),
Neighbor Discovery packets, and ICMP-related packets, which are pure L3 packets.
If these packets are lost during the process of 
reassembly, how do we recover them?
The worst thing is that no one is informed that these 
packets are lost, because there is no session-based socket for pure L3-level 
packets.


2.   Since this HA mechanism is stateless (a VRRP mechanism), what is the 
impact on those L3 protocols which have state?

How exactly are NAT sessions handled?


3.   I think we need to have a serious discussion on distributed 
functionality (like DVR) for a session-less high-availability solution.


Thanks & Regards,
Keshava

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Number of RabbitMQ connection : Some connection without consumers ?

2014-05-29 Thread A, Keshava
Hi,
It is observed that the number of RabbitMQ connections varies, and sometimes a 
connection has no consumers.
Scenario: an OpenStack controller connected to 2 compute nodes (each with a VM) 
and an NN node.


1.   The RabbitMQ connection count varies (like 6, 7, 8 connections) w.r.t. one
compute node.
For 'X + N' connections, are there X consumers?


2.   For 2-3 connections there are no consumers, but the connections 
continue to exist.


Questions:

A.
Is this the expected behavior? Is it because of stale connections?
Or is a message timing out (TTL) in the queue, so that once it has timed out it 
cannot be fetched and delivered to consumers, leaving the 
connection stale?
Or an uncleaned socket connection?

B. Who creates some of the reply queues (ex: 
reply_844c53cf0eb54b5b9abf1e3b4b3a0404), and at which point in time?
How to know this?
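A few diagnostic commands help answer these questions (a hedged sketch; the exact rabbitmqctl info items vary a little across RabbitMQ versions):

```shell
# Queues with their consumer counts: a queue stuck at 0 consumers is the
# "connection without consumers" symptom described above.
rabbitmqctl list_queues name consumers messages

# Map connections and consumers to see which connection owns which queue.
rabbitmqctl list_connections name peer_host peer_port state
rabbitmqctl list_consumers queue_name channel_pid

# The reply_* queues are created on demand by the RPC client side
# (the oslo messaging / common-rpc layer) to receive call responses;
# they normally disappear with the client that created them.
rabbitmqctl list_queues name | grep '^reply_'
```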


Below is the data collected from RabbitMQ.

Compute Node-1:- (100.10.10.51)
==

1. On the connection 100.10.10.51:40488 - 100.10.10.53:5672
   queue :

q-agent-notifier-network-delete_fanout_19cd5e739c1d49d5bae4b1c24026ae2b

q-agent-notifier-port-update_fanout_c970ccc903364b14a8b6ff44a8cb9f2f

q-agent-notifier-security_group-update_fanout_57f9d018da9e4f9b8b7f33259c60692b

q-agent-notifier-tunnel-update_fanout_5705b3b7de7d46cb9650706332405955

2. On the connection 100.10.10.51:40485 - 100.10.10.53:5672 (1)

Queue:
compute.sdn-keshava-cn1
compute_fanout_b092ea21d2dc425ca13b376d1731df9f

3. On the connection 100.10.10.51:40484 - 100.10.10.53:5672 (1)
queue:
There are no consumers.

4.On the Connection 100.10.10.51:40486 - 100.10.10.53:5672
queue:
reply_844c53cf0eb54b5b9abf1e3b4b3a0404


5.On the connection 100.10.10.51:40487 - 100.10.10.53:5672
queue:
reply_77584ebd2bc143db922830683df65c89


6.On the Connection 100.10.10.51:40489 - 100.10.10.53:5672

Queue:
There are  no consumers.


Thanks & regards,
Keshava.A

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Adding Routing v/s Topology as Service v/s exporting Topology info to ODL ..

2014-05-27 Thread A, Keshava
Hi,
I am observing that here there are overlapping functionality between Routing 
and Topology ..

1. Adding routing into OpenStack (via a link-state protocol such as OSPF) to 
learn prefixes of the underlay network and inject them into the overlay 
network. This builds the topology for compute nodes plus physical 
switches/routers.

2. Using the topology (built by OSPF) for any other purpose, including 
Traffic Engineering if required later.

3. Exporting this topology information to ODL (OpenDaylight) later, to build 
Service Chaining across the underlay and overlay networks.

So, in my opinion, adding routing in the underlay network and 'topology as a 
service' can be interlinked, and this needs to take many aspects into 
account, considering further upcoming requirements.




Thanks & regards,
Keshava.A

-Original Message-
From: Isaku Yamahata [mailto:isaku.yamah...@gmail.com] 
Sent: Monday, May 26, 2014 5:02 PM
To: OpenStack Development Mailing List
Cc: isaku.yamah...@intel.com; kpr...@yahoo.com; isaku.yamah...@gmail.com
Subject: [openstack-dev] [neutron][ironic] topology as a service (physical 
network topology): design summit follow up

Hi. As discussed at the summit[1], there is much demand for a topology 
service that stores/provides information about the physical network topology 
in one place.

In order to make progress, I'd like to discuss issues which were raised at the 
summit. I also created etherpad page and wiki page for this[2][3].

- IRC meeting
  For starters, how about having an IRC meeting?
  I propose this time slot
  June 4 Wednesday: 5:00am UTC- #openstack-meeting-3
  My time zone is JST(UTC+9)

- Should this service be a separated service from Neutron?
  Although I originally created blueprint/specs for neutron extension[4][5],
  it was argued that this service should be a separated service from neutron
  because it is useful not only for neutron, but also for nova, ironic, gantt
  and so on without neutron.

  To be honest I don't have a strong opinion on this, and I'm fine with
  starting the incubation process.
  Is there anyone who can help as a core reviewer? I need help with the
  incubation process. Otherwise I have no choice except to stay in
  Neutron.

- TripleO link aggregation
   TripleO has a need to configure link aggregation. Will this
   provide enough info/capability? - ChuckC
  Chuck, could you please elaborate on it and/or provide any pointers for it?

  http://lists.openstack.org/pipermail/openstack-dev/2014-February/026868.html
  I found only the pointer.
  As far as I understand from this link, what TripleO needs is something like:
  - a compute node has multiple NICs
  - those NICs are connected to the same switch
  - verification that the switch is configured properly (for aggregation)

- API and data models are under review as [5]
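As a purely illustrative sketch (not the data model actually under review in [5]), physical topology information can be modeled as a graph of (node, port) link endpoints; the TripleO link-aggregation question above then becomes a simple query over that graph:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Port:
    node: str   # e.g. a compute host or switch name
    name: str   # e.g. 'eth0', 'ge-0/0/1'

@dataclass
class Topology:
    # Each link is an ordered (host-side Port, switch-side Port) pair.
    links: list = field(default_factory=list)

    def add_link(self, a: Port, b: Port):
        self.links.append((a, b))

    def neighbors(self, node: str) -> set:
        """Return the set of nodes directly connected to `node`."""
        out = set()
        for a, b in self.links:
            if a.node == node:
                out.add(b.node)
            if b.node == node:
                out.add(a.node)
        return out

def nics_on_same_switch(topo: Topology, node: str) -> bool:
    """The TripleO check: multiple NICs on one node, all landing on the
    same switch (i.e. a candidate for link aggregation/bonding)."""
    peers = [b.node for a, b in topo.links if a.node == node]
    return len(peers) > 1 and len(set(peers)) == 1

topo = Topology()
topo.add_link(Port('compute-1', 'eth0'), Port('switch-A', 'ge-0/0/1'))
topo.add_link(Port('compute-1', 'eth1'), Port('switch-A', 'ge-0/0/2'))
print(nics_on_same_switch(topo, 'compute-1'))
```

The names (`Port`, `Topology`, `nics_on_same_switch`) are assumptions for illustration only; the real API/schema is whatever lands in review [5].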


[1] https://etherpad.openstack.org/p/hierarchical_network_topology
[2] https://wiki.openstack.org/wiki/Topology-as-a-service
[3] https://etherpad.openstack.org/p/topology-as-a-service
[4] https://blueprints.launchpad.net/neutron/+spec/physical-network-topology
[5] https://review.openstack.org/#/c/91275/

thanks,
--
Isaku Yamahata isaku.yamah...@gmail.com



Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project

2014-05-23 Thread A, Keshava
Hi,
Please find replies inline.

Thanks & regards,
Keshava.A

-Original Message-
From: Mike Grima [mailto:mike.r.gr...@gmail.com] 
Sent: Thursday, May 22, 2014 3:55 PM
To: A, Keshava
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research 
Thesis Applicability to the OpenStack Project

Hello,

Just to make sure I understand:

1.) I'm assuming that you can designate which policies apply to specific VMs 
within a group (is this correct?).  With regards to DENY permissions, they 
are handled specially.  In such a case, all other VMs are provided with ALLOW 
permissions for that rule, while the VM targeted by the DENY policy is 
provided with a DENY.
- Would you necessarily want to automatically provide all other VMs with an 
ALLOW privilege?  Not all VMs in that group may need access to that port...

Keshava: Yes that's correct 
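The DENY handling confirmed above can be sketched as follows. This is an illustration of the described semantics only, not the actual Group Policy code; the function and rule-dictionary shapes are assumptions.

```python
def expand_deny(vms, port, deny_vm):
    """Expand a group-level DENY aimed at one VM into per-VM rules.

    Per the behavior described above: the VM the DENY is destined for
    gets a DENY rule, while every other VM in the group gets an
    explicit ALLOW for the same port.
    """
    return {vm: {'port': port,
                 'action': 'DENY' if vm == deny_vm else 'ALLOW'}
            for vm in vms}

rules = expand_deny(['vm1', 'vm2', 'vm3'], port=80, deny_vm='vm2')
for vm, rule in sorted(rules.items()):
    print(vm, rule)
```

Mike's concern maps directly onto the `'ALLOW'` branch: instead of an unconditional ALLOW, the non-targeted VMs could fall back to the group's baseline policy for that port.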

2.) Group Policy does support a Hierarchy. (Is this correct?)

Keshava: Yes that's correct 

3.) On a separate note: Is the Group Policy feature exposed via a RESTful API 
akin to FWaaS?

Thank you,

Mike Grima, RHCE


On May 22, 2014, at 2:08 AM, A, Keshava keshav...@hp.com wrote:

 Hi,
 
 1. When the group policy is applied (across all the VMs), say DENY for a 
 specific TCP port = 80, but because of some special reason one of the VMs 
 needs to ALLOW that TCP port, how is this handled?
 When DENY is applied to any one VM in that group, does this framework take 
 care of individually breaking that up, applying ALLOW for the other VMs 
 automatically and DENY for that specific VM?
 
 2. Can there be a 'Hierarchy of Group Policy'?
 
 
 
 Thanks & regards,
 Keshava.A
 
 -Original Message-
 From: Michael Grima [mailto:mike.r.gr...@gmail.com] 
 Sent: Wednesday, May 21, 2014 5:00 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research 
 Thesis Applicability to the OpenStack Project
 
 Sumit,
 
 Unfortunately, I missed the IRC meeting on FWaaS (got the timezones screwed 
 up...).
 
 However, in the meantime, please review this section of my thesis on the 
 OpenStack project:
 https://docs.google.com/document/d/1DGhgtTY4FxYxOqhKvMSV20cIw5WWR-gXbaBoMMMA-f0/edit?usp=sharing
 
 Please let me know if it is missing anything, or contains any wrong 
 information.  Also, if you have some time, please review the questions I have 
 asked in the previous messages.
 
 Thank you,
 
 --
 Mike Grima, RHCE
 


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-23 Thread A, Keshava
Hi,
Please find the replies inline.

Thanks & regards,
Keshava.A

-Original Message-
From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com] 
Sent: Thursday, May 22, 2014 8:24 PM
To: OpenStack Development Mailing List (not for usage questions); Kyle Mestery
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

Hi 

Just wanted to comment on some points below inline.

/Alan

-Original Message-
From: A, Keshava [mailto:keshav...@hp.com]
Sent: May-22-14 2:25 AM
To: Kyle Mestery; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

Hi

In my opinion, the first and foremost requirement for NFV (which comes from 
the carrier-class world) is 99.999% ('five nines') reliability.
If we want the OpenStack architecture to scale to carrier class, below are 
the basic things we need to address.

1. There should be a framework in OpenStack to support five-nines reliability 
for Service/Tenant VMs (for example: carrier-class NAT service, SIP service, 
HLR/VLR service, BRAS service).
AK-- I believe what is important is for OpenStack to support various degrees 
of configuration for a given tenant network. The reliability of the network 
is outside of OpenStack, but where OpenStack plays a role here, imho, is in 
checking and validating the network once it has been provisioned and 
configured. Similarly for VMs, to ensure we have sufficient checks and 
validation (watchdogs/event callbacks etc.) so that we can expose faults and 
act on them.

Keshava: In order to provide reliability to the Service/Tenant VM, don't you 
agree the OpenStack network also has to be reliable?
   Without OpenStack having network reliability to the extent of five nines, 
can we offer the same to the Service/Tenant VM?

2. They should also be capable of In-Service Software Upgrade (ISSU), without 
service disruption.
AK-- Fully agree; it's imperative to be able to upgrade OpenStack without any 
service interruption.

3. OpenStack itself (its own compute node, L3/routing, and controller) should 
have five-nines reliability.
AK-- If we are referring to OpenStack controllers/agents/DBs etc., then yes, 
that makes perfect sense. I would, however, stop short of saying you can 
achieve five nines: there are various ways, and it's typically up to the 
vendors themselves how they want to implement this, even in OpenStack.

Keshava: I think we had better take one of the tenant VMs hosted on OpenStack 
as an example and discuss this more concretely, so that it will be clear and 
we will have a common language to speak.

If we can provide such an infrastructure for NFV, then we can think about 
adding the rest of the requirements.

Let me know other/NFV people's opinions on the same.



Thanks & regards,
Keshava.A

-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com]
Sent: Monday, May 19, 2014 11:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 I think the Service VM discussion resolved itself in a way that 
 reduces the problem to a form of NFV - there are standing issues using 
 VMs for services, orchestration is probably not a responsibility that 
 lies in Neutron, and as such the importance is in identifying the 
 problems with the plumbing features of Neutron that cause 
 implementation difficulties.  The end result will be that VMs 
 implementing tenant services and implementing NFV should be much the 
 same, with the addition of offering a multitenant interface to Openstack 
 users on the tenant service VM case.

 Geoff Arnold is dealing with the collating of information from people 
 that have made the attempt to implement service VMs.  The problem 
 areas should fall out of his effort.  I also suspect that the key 
 points of NFV that cause problems (for instance, dealing with VLANs 
 and trunking) will actually appear quite high up the service VM list as well.
 --
There is a weekly meeting for the Service VM project [1], I hope some 
representatives from the NFB sub-project can make it to this meeting and 
participate there.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/ServiceVM

 Ian.



 On 18 May 2014 20:01, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Sumit Naiksatam sumitnaiksa...@gmail.com
 
  Thanks for initiating this conversation. Unfortunately I was not 
  able to participate during the summit on account of overlapping sessions.
  As has been identified in the wiki and etherpad, there seem to be 
  obvious/potential touch points with the advanced services'
  discussion we are having in Neutron [1]. Our sub team, and I, will 
  track and participate in this NFV discussion. Needless to say, we 
  are definitely very keen to understand and accommodate the NFV 
  requirements.
 
  Thanks,
  ~Sumit.
  [1] https://wiki.openstack.org/wiki/Neutron/AdvancedServices

 Yes

Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-05-23 Thread A, Keshava
Hi,
I have one basic question: what does tunneled over to the network node mean? 
(At this point, the packet will go back out to br-int but tunneled over to 
the network node just like any other intra-network traffic.)
What kind of tunnel exists between the Compute and the Network Node during SNAT?
Why does tunneling happen during NAT?

Thanks & regards,
Keshava.A

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Thursday, May 22, 2014 3:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

Hi,

I found this message in my backlog from when I was at the summit.
Sorry for the delay in responding.

The default SNAT or dynamic SNAT use case is one of the last details being 
worked in the DVR subteam.  That may be why you do not see any code around this 
in the patches that have been submitted.
Outbound traffic that will use this SNAT address will first enter the IR on the 
compute host.  In the IR, it will not match against any of the static SNAT 
addresses for floating IPs.  At that point the packet will be redirected to 
another port belonging to the central component of the DVR.  This port has an 
IP address different from the default gateway address (e.g. 192.168.1.2 
instead of 192.168.1.1).  At this point, the packet will go back out to br-int 
but be tunneled over to the network node just like any other intra-network 
traffic.

Once the packet hits the central component of the DVR on the network node it 
will be processed very much like default SNAT traffic is processed in the 
current Neutron implementation.  Another interconnect subnet should not be 
needed here and would be overkill.
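Carl's two-step decision at the IR can be sketched as follows. This is an illustration of the explanation above only; the function name, the rule shapes, and the example addresses (192.168.1.2 as the central SNAT port, 203.0.113.5 as a floating IP) are assumptions, not the actual DVR implementation.

```python
def snat_next_hop(src_ip, floating_ips, central_snat_ip='192.168.1.2'):
    """Decide where outbound (north-south) traffic from src_ip goes.

    Per the explanation above: if the source has a static SNAT entry
    (a floating IP), the IR on the compute host NATs it locally;
    otherwise the packet is redirected to the central SNAT port of the
    DVR (distinct from the .1 default gateway) and tunneled to the
    network node like any other intra-network traffic.
    """
    if src_ip in floating_ips:
        return ('local static SNAT', floating_ips[src_ip])
    return ('redirect to network node', central_snat_ip)

fips = {'192.168.1.10': '203.0.113.5'}   # fixed IP -> floating IP
print(snat_next_hop('192.168.1.10', fips))  # has a floating IP
print(snat_next_hop('192.168.1.11', fips))  # default SNAT case
```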

I hope this helps.  Let me know if you have any questions.

Carl

On Fri, May 16, 2014 at 1:57 AM, Wuhongning wuhongn...@huawei.com wrote:
 Hi DVRers,

 I didn't see any detailed documents or source code on how to deal with 
 routing packets from a DVR node to the SNAT gw node. If the routing table 
 sees an outside IP, it should be matched with a default route; so for the 
 next hop, which interface will it select?

 Maybe another standalone interconnect subnet per DVR is needed, which 
 connects each DVR node and, optionally, the SNAT gw node. For packets from 
 the DVR node to the SNAT node, the interconnect subnet acts as the default 
 route for this host, and the next hop will be the SNAT node.



Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-05-23 Thread A, Keshava
Vivek,
Is the CN-to-NN VXLAN tunnel something the user/customer configures?
Or does DVR mandate this VXLAN tunnel to reach the NN from the CN?
That would mean the packets are encapsulated on the network even when they 
are not mandated to be.

If something is being done like this, then there should be a standard for it.


Thanks & regards,
Keshava.A

-Original Message-
From: Narasimhan, Vivekanandan 
Sent: Friday, May 23, 2014 2:49 AM
To: A, Keshava; OpenStack Development Mailing List (not for usage questions); 
Carl Baldwin
Cc: Grover, Rajeev
Subject: RE: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

Keshava,

Tunneled over to the network node means:

An OVS VXLAN tunnel is established between the compute node and the network 
node, and the packets flow through that OVS VXLAN tunnel.

NAT'ing and tunneling are not related here.  NAT'ing happens on the network 
node.  Packets that need to reach the external network are tunneled to the 
NN, where SNAT'ing puts them onto the external network.
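For context, the encapsulation described here wraps each tenant frame in an outer Ethernet/IP/UDP header plus the 8-byte VXLAN header defined in RFC 7348. A minimal sketch of building that header (illustrative only; OVS does this in the datapath, not in Python):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348) for a given VNI.

    Layout: flags byte 0x08 (the 'I' bit, marking the VNI as valid),
    24 reserved bits, then the 24-bit VNI in the upper bits of the
    final 32-bit word, with the last 8 bits reserved.
    """
    return struct.pack('!II', 0x08 << 24, vni << 8)

hdr = vxlan_header(42)
print(len(hdr), hdr.hex())
```

Every tenant packet crossing the CN-to-NN tunnel pays this overhead (outer headers plus these 8 bytes), which is the encapsulation cost Keshava is asking about above.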

--
Thanks,

Vivek

-Original Message-
From: A, Keshava
Sent: Friday, May 23, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions); Carl Baldwin
Cc: Narasimhan, Vivekanandan; Grover, Rajeev; A, Keshava
Subject: RE: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

Hi,
I have one basic question: what does tunneled over to the network node mean? 
(At this point, the packet will go back out to br-int but tunneled over to 
the network node just like any other intra-network traffic.) What kind of 
tunnel exists between the Compute and the Network Node during SNAT?
Why does tunneling happen during NAT?

Thanks & regards,
Keshava.A

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net]
Sent: Thursday, May 22, 2014 3:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

Hi,

I found this message in my backlog from when I was at the summit.
Sorry for the delay in responding.

The default SNAT or dynamic SNAT use case is one of the last details being 
worked in the DVR subteam.  That may be why you do not see any code around this 
in the patches that have been submitted.
Outbound traffic that will use this SNAT address will first enter the IR on the 
compute host.  In the IR, it will not match against any of the static SNAT 
addresses for floating IPs.  At that point the packet will be redirected to 
another port belonging to the central component of the DVR.  This port has an 
IP address different from the default gateway address (e.g. 192.168.1.2 
instead of 192.168.1.1).  At this point, the packet will go back out to br-int 
but be tunneled over to the network node just like any other intra-network 
traffic.

Once the packet hits the central component of the DVR on the network node it 
will be processed very much like default SNAT traffic is processed in the 
current Neutron implementation.  Another interconnect subnet should not be 
needed here and would be overkill.

I hope this helps.  Let me know if you have any questions.

Carl

On Fri, May 16, 2014 at 1:57 AM, Wuhongning wuhongn...@huawei.com wrote:
 Hi DVRers,

 I didn't see any detailed documents or source code on how to deal with 
 routing packets from a DVR node to the SNAT gw node. If the routing table 
 sees an outside IP, it should be matched with a default route; so for the 
 next hop, which interface will it select?

 Maybe another standalone interconnect subnet per DVR is needed, which 
 connects each DVR node and, optionally, the SNAT gw node. For packets from 
 the DVR node to the SNAT node, the interconnect subnet acts as the default 
 route for this host, and the next hop will be the SNAT node.



Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project

2014-05-22 Thread A, Keshava
Hi,

1. When the group policy is applied (across all the VMs), say DENY for a 
specific TCP port = 80, but because of some special reason one of the VMs 
needs to ALLOW that TCP port, how is this handled?
When DENY is applied to any one VM in that group, does this framework take 
care of individually breaking that up, applying ALLOW for the other VMs 
automatically and DENY for that specific VM?

2. Can there be a 'Hierarchy of Group Policy'?



Thanks & regards,
Keshava.A

-Original Message-
From: Michael Grima [mailto:mike.r.gr...@gmail.com] 
Sent: Wednesday, May 21, 2014 5:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research 
Thesis Applicability to the OpenStack Project

Sumit,

Unfortunately, I missed the IRC meeting on FWaaS (got the timezones screwed 
up...).

However, in the meantime, please review this section of my thesis on the 
OpenStack project:
https://docs.google.com/document/d/1DGhgtTY4FxYxOqhKvMSV20cIw5WWR-gXbaBoMMMA-f0/edit?usp=sharing

Please let me know if it is missing anything, or contains any wrong 
information.  Also, if you have some time, please review the questions I have 
asked in the previous messages.

Thank you,

--
Mike Grima, RHCE
