Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-11-06 Thread Rohit Agarwalla (roagarwa)
A reminder for folks interested that we'll have a BoF discussion on Routed
Network model (without L2) at 12.30 pm today.
I'll have the Neutron placard on one of the table outside Manet room (at
Le Meridien) for folks to find us.

https://etherpad.openstack.org/p/RoutedNetworking


Thanks
Rohit

On 10/28/14 2:50 PM, Carl Baldwin c...@ecbaldwin.net wrote:

On Tue, Oct 28, 2014 at 3:07 PM, Rohit Agarwalla (roagarwa)
roaga...@cisco.com wrote:
 Agreed. The way I'm thinking about this is that tenants shouldn't care
what
 the underlying implementation is - L2 or L3. As long as the connectivity
 requirements are met using the model/API, end users should be fine.
 The data center network design should be an administrators decision
based on
 the implementation mechanism that has been configured for OpenStack.

Many API users won't care about the L2 details.  This could be a
compelling alternative for them.  However, some do.  The L2 details
seem to matter an awful lot to many NFV use cases.  It might be that
this alternative is just not compelling for those.  Not to say it
isn't compelling overall though.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread Cory Benfield
On Tue, Oct 28, 2014 at 21:50:09, Carl Baldwin wrote:
 Many API users won't care about the L2 details.  This could be a
 compelling alternative for them.  However, some do.  The L2 details
 seem to matter an awful lot to many NFV use cases.  It might be that
 this alternative is just not compelling for those.  Not to say it
 isn't compelling overall though.

Agreed. This is a point worth emphasising: routed networking is not a panacea 
for everyone's networking woes. We've got a lot of NFV people and products at 
my employer, and while we're engaged in work to come up with L3 approaches to 
solve their use-cases, we'd like to draw a balance between adding complexity to 
the network layer to support legacy L2-based requirements and providing better 
native L3 solutions that NFV applications can use instead.  One of the key 
challenges with NFV is that it shouldn't just be a blind porting of existing 
codebases - you need to make sure you're producing something which takes 
advantage of the new environment.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Hi,
w.r.t.  ' VM packet forwarding' at L3 level by enabling routing I have below 
points.
With below  reference diagram , when the routing is enabled to detect the 
destination VM's compute node ..


1.   How many route prefix will be injected in each of the compute node ?



2.   For each of the VM address, there will be corresponding IP address in 
the 'L3 Forwarding Tbl' ?

When we have large number of VM's of the order 50,000/ 1 Million VM's in the 
cloud each compute node needs to maintain 1 Million Route Entries ?



3.   Even with route aggregations, it is not guaranteed to be very 
efficient because

a.   Tenants can span across computes.

b.  VM migration can happen which  may break the aggregation  and allow the 
growth  of routing table.



4.   Across Switch if we  try to run BGP and try to aggregate, we will be 
introducing the Hierarchical Network.

If any change in topology what will be convergence time and will there any 
looping issues ?

Cost of the L3 switch will go up as the capacity of that switch to support 
10,000 + routes.


5.   With this we want to break the classical L2 broadcast in the last mile 
Cloud ?

I was under the impression that the cloud network we want to keep simple L2 
broadcast domain, without adding any complexity like MPLS label, Routing, 
Aggregation .



6.   The whole purpose of brining VxLAN in datacenter cloud is to keep the 
L2 and even able to extend the L2 to different Datacenter.



7.   I also saw some ietf draft w.r.t implementation architecture of 
OpenStack !!!.


  Let me know the opinion w.r.t. this ?


[cid:image003.png@01CFF45F.51F2F0A0]





Thanks  regards,

Keshava



-Original Message-
From: Fred Baker (fred) [mailto:f...@cisco.com]
Sent: Wednesday, October 29, 2014 5:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking





On Oct 28, 2014, at 4:59 PM, Angus Lees 
g...@inodes.orgmailto:g...@inodes.org wrote:



 On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:

 Agreed. The way I'm thinking about this is that tenants shouldn't

 care what the underlying implementation is - L2 or L3. As long as the

 connectivity requirements are met using the model/API, end users

 should be fine. The data center network design should be an

 administrators decision based on the implementation mechanism that has been 
 configured for OpenStack.



 I don't know anything about Project Calico, but I have been involved

 with running a large cloud network previously that made heavy use of L3 
 overlays.



 Just because these points weren't raised earlier in this thread:  In

 my experience, a move to L3 involves losing:



 - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but

 that's a whole can of worms - so perhaps best to just say up front

 that this is a non-broadcast network.



 - support for other IP protocols.



 - various L2 games like virtual MAC addresses, etc that NFV/etc people like.



I'm a little confused. IP supports multicast. It requires a routing protocol, 
and you have to join the multicast group, but it's not out of the picture.



What other IP protocols do you have in mind? Are you thinking about 
IPX/CLNP/etc? Or are you thinking about new network layers?



I'm afraid the L2 games leave me a little cold. We have been there, such as 
with DECNET IV. I'd need to understand what you were trying to achieve before I 
would consider that a loss.



 We gain:



 - the ability to have proper hierarchical addressing underneath (which

 is a big one for scaling a single network).  This itself is a

 tradeoff however - an efficient/strict hierarchical addressing scheme

 means VMs can't choose their own IP addresses, and VM migration is 
 messy/limited/impossible.



It does require some variation on a host route, and it leads us to ask about 
renumbering. The hard part of VM migration is at the application layer, not the 
network, and is therefore pretty much the same.



 - hardware support for dynamic L3 routing is generally universal,

 through a small set of mostly-standard protocols (BGP, ISIS, etc).



 - can play various L3 games like BGP/anycast, which is super useful

 for geographically diverse services.





 It's certainly a useful tradeoff for many use cases.  Users lose some

 generality in return for more powerful cooperation with the provider

 around particular features, so I sort of think of it like a step

 halfway up the IaaS-

 PaaS stack - except for networking.



 - Gus



 Thanks

 Rohit



 From: Kevin Benton 
 blak...@gmail.commailto:blak...@gmail.commailto:blak...@gmail.com%3cmailto:blak...@gmail.com

 Reply-To: OpenStack Development Mailing List (not for usage questions)

 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.opensta

 ck.org

 Date: Tuesday, October 28, 2014 1:01 PM

 To: OpenStack Development Mailing List (not for usage

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Hi Cory,

Here NFV-Apps will use the infrastructure' L3 Route table' to make any decision 
?
From OpenStack perspective NFV-App(VM)  is not like any other Tennant-VM as 
for as delivering the packet is concerned ?
Is there any  thinking of NFV-App ( Service router VM) to insert any routing 
information in OpenStack infrastructure ?


Thanks  Regards,
keshava 

-Original Message-
From: Cory Benfield [mailto:cory.benfi...@metaswitch.com] 
Sent: Thursday, October 30, 2014 2:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 21:50:09, Carl Baldwin wrote:
 Many API users won't care about the L2 details.  This could be a 
 compelling alternative for them.  However, some do.  The L2 details 
 seem to matter an awful lot to many NFV use cases.  It might be that 
 this alternative is just not compelling for those.  Not to say it 
 isn't compelling overall though.

Agreed. This is a point worth emphasising: routed networking is not a panacea 
for everyone's networking woes. We've got a lot of NFV people and products at 
my employer, and while we're engaged in work to come up with L3 approaches to 
solve their use-cases, we'd like to draw a balance between adding complexity to 
the network layer to support legacy L2-based requirements and providing better 
native L3 solutions that NFV applications can use instead.  One of the key 
challenges with NFV is that it shouldn't just be a blind porting of existing 
codebases - you need to make sure you're producing something which takes 
advantage of the new environment.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread Kevin Benton
These are all important discussion topics, but we are getting pulled into
implementation-specific details again. Routing aggregation and network
topology is completely up to the backend implementation.

We should keep this thread focused on the user-facing abstractions and the
changes required to Nova and Neutron to enable them. Then when it is time
to implement the reference implementation in Neutron, we can have this
discussion on optimal placement of BGP nodes, etc.



On Thu, Oct 30, 2014 at 4:04 AM, A, Keshava keshav...@hp.com wrote:

  Hi,

 w.r.t.  ‘ VM packet forwarding’ at L3 level by enabling routing I have
 below points.

 With below  reference diagram , when the routing is enabled to detect the
 destination VM’s compute node ..



 1.   How many route prefix will be injected in each of the compute
 node ?



 2.   For each of the VM address, there will be corresponding IP
 address in the ‘L3 Forwarding Tbl’ ?

 When we have large number of VM’s of the order 50,000/ 1 Million VM’s in
 the cloud each compute node needs to maintain 1 Million Route Entries ?



 3.   Even with route aggregations, it is not guaranteed to be very
 efficient because

 a.   Tenants can span across computes.

 b.  VM migration can happen which  may break the aggregation  and
 allow the growth  of routing table.



 4.   Across Switch if we  try to run BGP and try to aggregate, we
 will be introducing the Hierarchical Network.

 If any change in topology what will be convergence time and will there any
 looping issues ?

 Cost of the L3 switch will go up as the capacity of that switch to support
 10,000 + routes.



 5.   With this we want to break the classical L2 broadcast in the
 last mile Cloud ?

 I was under the impression that the cloud network we want to keep simple
 L2 broadcast domain, without adding any complexity like MPLS label,
 Routing, Aggregation .



 6.   The whole purpose of brining VxLAN in datacenter cloud is to
 keep the L2 and even able to extend the L2 to different Datacenter.



 7.   I also saw some ietf draft w.r.t implementation architecture of
 OpenStack !!!.



   Let me know the opinion w.r.t. this ?









 Thanks  regards,

 Keshava



 -Original Message-
 From: Fred Baker (fred) [mailto:f...@cisco.com]
 Sent: Wednesday, October 29, 2014 5:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking





 On Oct 28, 2014, at 4:59 PM, Angus Lees g...@inodes.org wrote:



  On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:

  Agreed. The way I'm thinking about this is that tenants shouldn't

  care what the underlying implementation is - L2 or L3. As long as the

  connectivity requirements are met using the model/API, end users

  should be fine. The data center network design should be an

  administrators decision based on the implementation mechanism that has
 been configured for OpenStack.

 

  I don't know anything about Project Calico, but I have been involved

  with running a large cloud network previously that made heavy use of L3
 overlays.

 

  Just because these points weren't raised earlier in this thread:  In

  my experience, a move to L3 involves losing:

 

  - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but

  that's a whole can of worms - so perhaps best to just say up front

  that this is a non-broadcast network.

 

  - support for other IP protocols.

 

  - various L2 games like virtual MAC addresses, etc that NFV/etc people
 like.



 I’m a little confused. IP supports multicast. It requires a routing
 protocol, and you have to “join” the multicast group, but it’s not out of
 the picture.



 What other “IP” protocols do you have in mind? Are you thinking about
 IPX/CLNP/etc? Or are you thinking about new network layers?



 I’m afraid the L2 games leave me a little cold. We have been there, such
 as with DECNET IV. I’d need to understand what you were trying to achieve
 before I would consider that a loss.



  We gain:

 

  - the ability to have proper hierarchical addressing underneath (which

  is a big one for scaling a single network).  This itself is a

  tradeoff however - an efficient/strict hierarchical addressing scheme

  means VMs can't choose their own IP addresses, and VM migration is
 messy/limited/impossible.



 It does require some variation on a host route, and it leads us to ask
 about renumbering. The hard part of VM migration is at the application
 layer, not the network, and is therefore pretty much the same.



  - hardware support for dynamic L3 routing is generally universal,

  through a small set of mostly-standard protocols (BGP, ISIS, etc).

 

  - can play various L3 games like BGP/anycast, which is super useful

  for geographically diverse services.

 

 

  It's certainly a useful tradeoff for many use cases.  Users lose some

  generality in return for more powerful

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-30 Thread A, Keshava
Agreed !.

Regards,
keshava

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Friday, October 31, 2014 2:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

These are all important discussion topics, but we are getting pulled into 
implementation-specific details again. Routing aggregation and network topology 
is completely up to the backend implementation.

We should keep this thread focused on the user-facing abstractions and the 
changes required to Nova and Neutron to enable them. Then when it is time to 
implement the reference implementation in Neutron, we can have this discussion 
on optimal placement of BGP nodes, etc.



On Thu, Oct 30, 2014 at 4:04 AM, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi,
w.r.t.  ‘ VM packet forwarding’ at L3 level by enabling routing I have below 
points.
With below  reference diagram , when the routing is enabled to detect the 
destination VM’s compute node ..


1.   How many route prefix will be injected in each of the compute node ?



2.   For each of the VM address, there will be corresponding IP address in 
the ‘L3 Forwarding Tbl’ ?

When we have large number of VM’s of the order 50,000/ 1 Million VM’s in the 
cloud each compute node needs to maintain 1 Million Route Entries ?



3.   Even with route aggregations, it is not guaranteed to be very 
efficient because

a.   Tenants can span across computes.

b.  VM migration can happen which  may break the aggregation  and allow the 
growth  of routing table.



4.   Across Switch if we  try to run BGP and try to aggregate, we will be 
introducing the Hierarchical Network.

If any change in topology what will be convergence time and will there any 
looping issues ?

Cost of the L3 switch will go up as the capacity of that switch to support 
10,000 + routes.


5.   With this we want to break the classical L2 broadcast in the last mile 
Cloud ?

I was under the impression that the cloud network we want to keep simple L2 
broadcast domain, without adding any complexity like MPLS label, Routing, 
Aggregation .



6.   The whole purpose of brining VxLAN in datacenter cloud is to keep the 
L2 and even able to extend the L2 to different Datacenter.



7.   I also saw some ietf draft w.r.t implementation architecture of 
OpenStack !!!.


  Let me know the opinion w.r.t. this ?


[cid:image001.png@01CFF4E9.7DD2FB90]





Thanks  regards,

Keshava



-Original Message-
From: Fred Baker (fred) [mailto:f...@cisco.commailto:f...@cisco.com]
Sent: Wednesday, October 29, 2014 5:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking





On Oct 28, 2014, at 4:59 PM, Angus Lees 
g...@inodes.orgmailto:g...@inodes.org wrote:



 On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:

 Agreed. The way I'm thinking about this is that tenants shouldn't

 care what the underlying implementation is - L2 or L3. As long as the

 connectivity requirements are met using the model/API, end users

 should be fine. The data center network design should be an

 administrators decision based on the implementation mechanism that has been 
 configured for OpenStack.



 I don't know anything about Project Calico, but I have been involved

 with running a large cloud network previously that made heavy use of L3 
 overlays.



 Just because these points weren't raised earlier in this thread:  In

 my experience, a move to L3 involves losing:



 - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but

 that's a whole can of worms - so perhaps best to just say up front

 that this is a non-broadcast network.



 - support for other IP protocols.



 - various L2 games like virtual MAC addresses, etc that NFV/etc people like.



I’m a little confused. IP supports multicast. It requires a routing protocol, 
and you have to “join” the multicast group, but it’s not out of the picture.



What other “IP” protocols do you have in mind? Are you thinking about 
IPX/CLNP/etc? Or are you thinking about new network layers?



I’m afraid the L2 games leave me a little cold. We have been there, such as 
with DECNET IV. I’d need to understand what you were trying to achieve before I 
would consider that a loss.



 We gain:



 - the ability to have proper hierarchical addressing underneath (which

 is a big one for scaling a single network).  This itself is a

 tradeoff however - an efficient/strict hierarchical addressing scheme

 means VMs can't choose their own IP addresses, and VM migration is 
 messy/limited/impossible.



It does require some variation on a host route, and it leads us to ask about 
renumbering. The hard part of VM migration is at the application layer, not the 
network, and is therefore pretty much the same.



 - hardware support for dynamic L3 routing is generally universal

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Fred Baker (fred)
Some of us are looking at a different model. I’d be interested in your thoughts.

The premise in this is that a great deal of the complexity in OpenStack is 
basically working around the deficiencies of IPv4, especially its address space 
and issues in multicast deployment. IPv6 actually addresses a lot of that. So I 
would posit that we can use a label to isolate tenants, use IPv6 Multicast for 
the cases in which IPv4 and Ethernet use broadcast, and wind up with some much 
simpler and more scalable. 

https://tools.ietf.org/html/draft-baker-openstack-ipv6-model
  A Model for IPv6 Operation in OpenStack, Fred Baker, Chris Marino, Ian
  Wells, 2014-10-17

https://tools.ietf.org/html/draft-baker-openstack-rbac-federated-identity
  Federated Identity for IPv6 Role-base Access Control, Fred Baker,
  2014-10-17


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Cory Benfield
On Tue, Oct 28, 2014 at 20:01:40, Kevin Benton wrote:
 I think the simplest use case is just that a provider doesn't want to
 deal with extending L2 domains all over their datacenter.

This is the core motivation. As mentioned in Fred Baker's internet draft[0], 
extending layer 2 domains can be extremely challenging, and we've found it to 
be problematic to debug at setup time, let alone to operate.

If your tenants have no need of layer 2 networks (i.e. their workload is 
all-IP), the datacenter network becomes much simpler in a routed model. 
Generally speaking, simpler is better!

Cory

[0]: https://tools.ietf.org/html/draft-baker-openstack-ipv6-model-00
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Rohit Agarwalla (roagarwa)
I have also started to capture some of our discussions here.
https://etherpad.openstack.org/p/RoutedNetworking


Thanks
Rohit

On 10/29/14 1:32 AM, Cory Benfield cory.benfi...@metaswitch.com wrote:

On Tue, Oct 28, 2014 at 20:01:40, Kevin Benton wrote:
 I think the simplest use case is just that a provider doesn't want to
 deal with extending L2 domains all over their datacenter.

This is the core motivation. As mentioned in Fred Baker's internet
draft[0], extending layer 2 domains can be extremely challenging, and
we've found it to be problematic to debug at setup time, let alone to
operate.

If your tenants have no need of layer 2 networks (i.e. their workload is
all-IP), the datacenter network becomes much simpler in a routed model.
Generally speaking, simpler is better!

Cory

[0]: https://tools.ietf.org/html/draft-baker-openstack-ipv6-model-00
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Fred Baker (fred)

On Oct 28, 2014, at 12:44 AM, A, Keshava keshav...@hp.com wrote:

 Here thinking  OpenStack cloud as hierarchical network instead of Flat 
 network ?

A routed network has one lookup just like a bridged network. The difference is 
that the router operates as a host in the L2 domain - it only receives or 
operates on messages sent to its MAC address or to multicast addresses it 
accepts.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Cory Benfield
 Some of us are looking at a different model. I’d be interested in your 
 thoughts.

Fred,

Thanks for the link to the drafts. They look extremely similar to the 
approach we've been pursuing for Project Calico, and it's good to see 
that we're not the only people thinking in this direction.

It looks like the main differences between our approach and yours are 
that we've tried to come up with a model that works both for IPv4 and 
IPv6 (although we agree that moving the data center fabric to IPv6 has a 
lot of advantages - e.g. we are planning on using 464XLAT as the 
mechanism to handle IPv4 overlap).  Given this, we've focused our 
policy/security model on ACLs rather than flow labels.  An interesting 
derivative effect of that choice is that any policy or security model 
can be enforced (such as intra-tenant controls, extra-cloud controls, 
etc).

As a side note, we have been interested in using flow labels as 
namespace identifiers and for SFC.  Recently, we have moved away from 
that thinking given the guidance that the flow label should be not be 
modified in flight.  If you believe that such modifications will be 
acceptable, we would love to discuss that with you, and see where we can 
collaborate.

As it is, I believe our proposed changes to Nova and Neutron should be 
generic enough to provide a basis for implementing your approach as well 
as supporting our Project Calico ML2 driver. If they aren't, we should 
work together to make whatever changes we have to make to achieve that 
generality.

It might also be worth checking out our agent code[0]. It's in the 
middle of a rewrite at the minute so the code is unfinished, but it 
handles a lot of what you'd be doing with your proposed drafts. 
Hopefully it'd be a useful jumping off point.

Cory

[0]: https://github.com/Metaswitch/calico/tree/master/calico/felix

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-29 Thread Fred Baker (fred)
Certainly, let’s talk next week in Paris.

On Oct 29, 2014, at 12:11 PM, Cory Benfield cory.benfi...@metaswitch.com 
wrote:

 Some of us are looking at a different model. I’d be interested in your 
 thoughts.
 
 Fred,
 
 Thanks for the link to the drafts. They look extremely similar to the 
 approach we've been pursuing for Project Calico, and it's good to see 
 that we're not the only people thinking in this direction.
 
 It looks like the main differences between our approach and yours are 
 that we've tried to come up with a model that works both for IPv4 and 
 IPv6 (although we agree that moving the data center fabric to IPv6 has a 
 lot of advantages - e.g. we are planning on using 464XLAT as the 
 mechanism to handle IPv4 overlap).  Given this, we've focused our 
 policy/security model on ACLs rather than flow labels.  An interesting 
 derivative effect of that choice is that any policy or security model 
 can be enforced (such as intra-tenant controls, extra-cloud controls, 
 etc).
 
 As a side note, we have been interested in using flow labels as 
 namespace identifiers and for SFC.  Recently, we have moved away from 
 that thinking given the guidance that the flow label should be not be 
 modified in flight.  If you believe that such modifications will be 
 acceptable, we would love to discuss that with you, and see where we can 
 collaborate.
 
 As it is, I believe our proposed changes to Nova and Neutron should be 
 generic enough to provide a basis for implementing your approach as well 
 as supporting our Project Calico ML2 driver. If they aren't, we should 
 work together to make whatever changes we have to make to achieve that 
 generality.
 
 It might also be worth checking out our agent code[0]. It's in the 
 middle of a rewrite at the minute so the code is unfinished, but it 
 handles a lot of what you'd be doing with your proposed drafts. 
 Hopefully it'd be a useful jumping off point.
 
 Cory
 
 [0]: https://github.com/Metaswitch/calico/tree/master/calico/felix
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi,
Current Open-stack was built as flat network.
With the introduction of the L3 lookup (by inserting the routing table in 
forwarding path) and separate 'VIF Route Type' interface:

At what point of time in the packet processing  decision will be made to lookup 
FIB  during ? For each packet there will additional  FIB lookup ?
How about the  impact on  'inter compute traffic', processed by  DVR  ?

Here thinking  OpenStack cloud as hierarchical network instead of Flat network ?

Thanks  regards,
Keshava

From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
Sent: Monday, October 27, 2014 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi

I'm interested as well in this model. Curious to understand the routing filters 
and their implementation that will enable isolation between tenant networks.
Also, having a BoF session on Virtual Networking using L3 may be useful to 
get all interested folks together at the Summit.


Thanks
Rohit

From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, October 24, 2014 12:51 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more 
details for that available?
It seems like not providing L2 connectivity between members of the same Neutron 
network conflicts with assumptions ML2 will make about segmentation IDs, etc. 
So I am interested in seeing how exactly the ML2 driver will bind ports, 
segments, etc.


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield 
cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com wrote:
All,

Project Calico [1] is an open source approach to virtual networking based on L3 
routing as opposed to L2 bridging.  In order to accommodate this approach 
within OpenStack, we've just submitted 3 blueprints that cover

-  minor changes to nova to add a new VIF type [2]
-  some changes to neutron to add DHCP support for routed interfaces [3]
-  an ML2 mechanism driver that adds support for Project Calico [4].

We feel that allowing for routed network interfaces is of general use within 
OpenStack, which was our motivation for submitting [2] and [3].  We also 
recognise that there is an open question over the future of 3rd party ML2 
drivers in OpenStack, but until that is finally resolved in Paris, we felt 
submitting our driver spec [4] was appropriate (not least to provide more 
context on the changes proposed in [2] and [3]).

We're extremely keen to hear any and all feedback on these proposals from the 
community.  We'll be around at the Paris summit in a couple of weeks and would 
love to discuss with anyone else who is interested in this direction.

Regards,

Cory Benfield (on behalf of the entire Project Calico team)

[1] http://www.projectcalico.org
[2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
[4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Mathieu Rohon
Hi,

really interesting, thanks corry.
during l3 meeting we spoke about planning a POD session around bgp use cases.
at least 2 spec have bgp use cases :

https://review.openstack.org/#/c/125401/
https://review.openstack.org/#/c/93329/

It would be interesting that you join this POD, to share your view and
leverage bgp capabilities that will be introduced in kilo for the
calico project.

Mathieu


On Tue, Oct 28, 2014 at 8:44 AM, A, Keshava keshav...@hp.com wrote:
 Hi,

 Current Open-stack was built as flat network.

 With the introduction of the L3 lookup (by inserting the routing table in
 forwarding path) and separate ‘VIF Route Type’ interface:



 At what point of time in the packet processing  decision will be made to
 lookup FIB  during ? For each packet there will additional  FIB lookup ?

 How about the  impact on  ‘inter compute traffic’, processed by  DVR  ?



 Here thinking  OpenStack cloud as hierarchical network instead of Flat
 network ?



 Thanks  regards,

 Keshava



 From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
 Sent: Monday, October 27, 2014 12:36 AM
 To: OpenStack Development Mailing List (not for usage questions)

 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking



 Hi



 I'm interested as well in this model. Curious to understand the routing
 filters and their implementation that will enable isolation between tenant
 networks.

 Also, having a BoF session on Virtual Networking using L3 may be useful to
 get all interested folks together at the Summit.





 Thanks

 Rohit



 From: Kevin Benton blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Friday, October 24, 2014 12:51 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking



 Hi,



 Thanks for posting this. I am interested in this use case as well.



 I didn't find a link to a review for the ML2 driver. Do you have any more
 details for that available?

 It seems like not providing L2 connectivity between members of the same
 Neutron network conflicts with assumptions ML2 will make about segmentation
 IDs, etc. So I am interested in seeing how exactly the ML2 driver will bind
 ports, segments, etc.





 Cheers,

 Kevin Benton



 On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield
 cory.benfi...@metaswitch.com wrote:

 All,

 Project Calico [1] is an open source approach to virtual networking based on
 L3 routing as opposed to L2 bridging.  In order to accommodate this approach
 within OpenStack, we've just submitted 3 blueprints that cover

 -  minor changes to nova to add a new VIF type [2]
 -  some changes to neutron to add DHCP support for routed interfaces [3]
 -  an ML2 mechanism driver that adds support for Project Calico [4].

 We feel that allowing for routed network interfaces is of general use within
 OpenStack, which was our motivation for submitting [2] and [3].  We also
 recognise that there is an open question over the future of 3rd party ML2
 drivers in OpenStack, but until that is finally resolved in Paris, we felt
 submitting our driver spec [4] was appropriate (not least to provide more
 context on the changes proposed in [2] and [3]).

 We're extremely keen to hear any and all feedback on these proposals from
 the community.  We'll be around at the Paris summit in a couple of weeks and
 would love to discuss with anyone else who is interested in this direction.

 Regards,

 Cory Benfield (on behalf of the entire Project Calico team)

 [1] http://www.projectcalico.org
 [2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
 [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
 [4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Cory Benfield
On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
 Hi,
 
 Current Open-stack was built as flat network.
 
 With the introduction of the L3 lookup (by inserting the routing table
 in forwarding path) and separate 'VIF Route Type' interface:
 
 At what point of time in the packet processing  decision will be made
 to lookup FIB  during ? For each packet there will additional  FIB
 lookup ?
 
 How about the  impact on  'inter compute traffic', processed by  DVR  ?
 Here thinking  OpenStack cloud as hierarchical network instead of Flat
 network ?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread A, Keshava
Hi Cory,

Yes that is the basic question I have. 

OpenStack cloud  is ready to move away from Flat L2 network ?

1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2 
Hash/Index Lookup ? 
2. Will there be Hierarchical network ?  How much of the Routes will be 
imported from external world ?
3. Will there be  Separate routing domain for overlay network  ? Or it will be 
mixed with external/underlay network ?
4. What will be the basic use case of this ? Thinking of L3 switching to 
support BGP-MPLS L3 VPN Scenario right from compute node ?

Others can give their opinion also.

Thanks  Regards,
keshava

-Original Message-
From: Cory Benfield [mailto:cory.benfi...@metaswitch.com] 
Sent: Tuesday, October 28, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
 Hi,
 
 Current Open-stack was built as flat network.
 
 With the introduction of the L3 lookup (by inserting the routing table 
 in forwarding path) and separate 'VIF Route Type' interface:
 
 At what point of time in the packet processing  decision will be made 
 to lookup FIB  during ? For each packet there will additional  FIB 
 lookup ?
 
 How about the  impact on  'inter compute traffic', processed by  DVR  ?
 Here thinking  OpenStack cloud as hierarchical network instead of Flat 
 network ?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Kevin Benton
1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
Hash/Index Lookup ?
2. Will there be Hierarchical network ?  How much of the Routes will
be imported from external world ?
3. Will there be  Separate routing domain for overlay network  ? Or it
will be mixed with external/underlay network ?

These are all implementation specific details. Different deployments and
network backends can implement them however they want. What we need to
discuss now is how this model will look to the end-user and API.

4. What will be the basic use case of this ? Thinking of L3 switching to
support BGP-MPLS L3 VPN Scenario right from compute node ?

I think the simplest use case is just that a provider doesn't want to deal
with extending L2 domains all over their datacenter.

On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava keshav...@hp.com wrote:

 Hi Cory,

 Yes that is the basic question I have.

 OpenStack cloud  is ready to move away from Flat L2 network ?

 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
 Hash/Index Lookup ?
 2. Will there be Hierarchical network ?  How much of the Routes will
 be imported from external world ?
 3. Will there be  Separate routing domain for overlay network  ? Or it
 will be mixed with external/underlay network ?
 4. What will be the basic use case of this ? Thinking of L3 switching to
 support BGP-MPLS L3 VPN Scenario right from compute node ?

 Others can give their opinion also.

 Thanks  Regards,
 keshava

 -Original Message-
 From: Cory Benfield [mailto:cory.benfi...@metaswitch.com]
 Sent: Tuesday, October 28, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

 On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
  Hi,
 
  Current Open-stack was built as flat network.
 
  With the introduction of the L3 lookup (by inserting the routing table
  in forwarding path) and separate 'VIF Route Type' interface:
 
  At what point of time in the packet processing  decision will be made
  to lookup FIB  during ? For each packet there will additional  FIB
  lookup ?
 
  How about the  impact on  'inter compute traffic', processed by  DVR  ?
  Here thinking  OpenStack cloud as hierarchical network instead of Flat
  network ?

 Keshava,

 It's difficult for me to answer in general terms: the proposed specs are
 general enough to allow multiple approaches to building purely-routed
 networks in OpenStack, and they may all have slightly different answers to
 some of these questions. I can, however, speak about how Project Calico
 intends to apply them.

 For Project Calico, the FIB lookup is performed for every packet emitted
 by a VM and destined for a VM. Each compute host routes all the traffic
 to/from its guests. The DVR approach isn't necessary in this kind of
 network because it essentially already implements one: all packets are
 always routed, and no network node is ever required in the network.

 The routed network approach doesn't add any hierarchical nature to an
 OpenStack cloud. The difference between the routed approach and the
 standard OVS approach is that packet processing happens entirely at layer
 3. Put another way, in Project Calico-based networks a Neutron subnet no
 longer maps to a layer 2 broadcast domain.

 I hope that clarifies: please shout if you'd like more detail.

 Cory

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Rohit Agarwalla (roagarwa)
There isn't a mechanism for us to get a BoF scheduled in advance. So,
let's gather at the Neutron contributors meetup on Friday.
Hopefully, some of us would have already met each other at the Neutron
design sessions before Friday and we can figure out a good time slot that
works for everyone interested.

Thanks
Rohit

On 10/27/14 2:20 AM, Cory Benfield cory.benfi...@metaswitch.com wrote:

On Sun, Oct 26, 2014 at 19:05:43, Rohit Agarwalla (roagarwa) wrote:
 Hi
 
 I'm interested as well in this model. Curious to understand the routing
 filters and their implementation that will enable isolation between
 tenant networks.
 Also, having a BoF session on Virtual Networking using L3 may be
 useful to get all interested folks together at the Summit.

A BoF sounds great. I've also proposed a lightning talk for the summit.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Rohit Agarwalla (roagarwa)
Agreed. The way I'm thinking about this is that tenants shouldn't care what the 
underlying implementation is - L2 or L3. As long as the connectivity 
requirements are met using the model/API, end users should be fine.
The data center network design should be an administrators decision based on 
the implementation mechanism that has been configured for OpenStack.

Thanks
Rohit

From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, October 28, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2 
Hash/Index Lookup ?
2. Will there be Hierarchical network ?  How much of the Routes will be 
imported from external world ?
3. Will there be  Separate routing domain for overlay network  ? Or it will be 
mixed with external/underlay network ?

These are all implementation specific details. Different deployments and 
network backends can implement them however they want. What we need to discuss 
now is how this model will look to the end-user and API.

4. What will be the basic use case of this ? Thinking of L3 switching to 
support BGP-MPLS L3 VPN Scenario right from compute node ?

I think the simplest use case is just that a provider doesn't want to deal with 
extending L2 domains all over their datacenter.

On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi Cory,

Yes that is the basic question I have.

OpenStack cloud  is ready to move away from Flat L2 network ?

1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2 
Hash/Index Lookup ?
2. Will there be Hierarchical network ?  How much of the Routes will be 
imported from external world ?
3. Will there be  Separate routing domain for overlay network  ? Or it will be 
mixed with external/underlay network ?
4. What will be the basic use case of this ? Thinking of L3 switching to 
support BGP-MPLS L3 VPN Scenario right from compute node ?

Others can give their opinion also.

Thanks  Regards,
keshava

-Original Message-
From: Cory Benfield 
[mailto:cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com]
Sent: Tuesday, October 28, 2014 10:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
 Hi,

 Current Open-stack was built as flat network.

 With the introduction of the L3 lookup (by inserting the routing table
 in forwarding path) and separate 'VIF Route Type' interface:

 At what point of time in the packet processing  decision will be made
 to lookup FIB  during ? For each packet there will additional  FIB
 lookup ?

 How about the  impact on  'inter compute traffic', processed by  DVR  ?
 Here thinking  OpenStack cloud as hierarchical network instead of Flat
 network ?

Keshava,

It's difficult for me to answer in general terms: the proposed specs are 
general enough to allow multiple approaches to building purely-routed networks 
in OpenStack, and they may all have slightly different answers to some of these 
questions. I can, however, speak about how Project Calico intends to apply them.

For Project Calico, the FIB lookup is performed for every packet emitted by a 
VM and destined for a VM. Each compute host routes all the traffic to/from its 
guests. The DVR approach isn't necessary in this kind of network because it 
essentially already implements one: all packets are always routed, and no 
network node is ever required in the network.

The routed network approach doesn't add any hierarchical nature to an OpenStack 
cloud. The difference between the routed approach and the standard OVS approach 
is that packet processing happens entirely at layer 3. Put another way, in 
Project Calico-based networks a Neutron subnet no longer maps to a layer 2 
broadcast domain.

I hope that clarifies: please shout if you'd like more detail.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Carl Baldwin
On Tue, Oct 28, 2014 at 2:01 PM, Kevin Benton blak...@gmail.com wrote:
 I think the simplest use case is just that a provider doesn't want to deal
 with extending L2 domains all over their datacenter.

This is similar to a goal behind [1] and [2].  I'm trying to figure
out where the commonalities and differences are with our respective
approaches.  One obvious difference is that the approach that I
referenced deals only with external networks where typically only
routers connect their gateway interfaces whereas your approach is an
ML2 driver.  As an ML2 driver, it could handle tenant networks too.
I'm curious to know how it works.  I will read through the blueprint
proposals.  The first question that pops in my mind is how (or if) it
supports isolated overlapping L3 address spaces between tenant
networks.  I imagine that it will support a restricted networking
model.

I look forward to meeting and discussing this at Summit.  Look for me.

Carl

[1] https://blueprints.launchpad.net/neutron/+spec/pluggable-ext-net
[2] https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Carl Baldwin
On Tue, Oct 28, 2014 at 3:07 PM, Rohit Agarwalla (roagarwa)
roaga...@cisco.com wrote:
 Agreed. The way I'm thinking about this is that tenants shouldn't care what
 the underlying implementation is - L2 or L3. As long as the connectivity
 requirements are met using the model/API, end users should be fine.
 The data center network design should be an administrators decision based on
 the implementation mechanism that has been configured for OpenStack.

Many API users won't care about the L2 details.  This could be a
compelling alternative for them.  However, some do.  The L2 details
seem to matter an awful lot to many NFV use cases.  It might be that
this alternative is just not compelling for those.  Not to say it
isn't compelling overall though.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Clint Byrum
Excerpts from Cory Benfield's message of 2014-10-24 06:38:44 -0700:
 All,
 
 Project Calico [1] is an open source approach to virtual networking based on 
 L3 routing as opposed to L2 bridging.  In order to accommodate this approach 
 within OpenStack, we've just submitted 3 blueprints that cover
 
 -  minor changes to nova to add a new VIF type [2]
 -  some changes to neutron to add DHCP support for routed interfaces [3]
 -  an ML2 mechanism driver that adds support for Project Calico [4].
 
 We feel that allowing for routed network interfaces is of general use within 
 OpenStack, which was our motivation for submitting [2] and [3].  We also 
 recognise that there is an open question over the future of 3rd party ML2 
 drivers in OpenStack, but until that is finally resolved in Paris, we felt 
 submitting our driver spec [4] was appropriate (not least to provide more 
 context on the changes proposed in [2] and [3]).
 
 We're extremely keen to hear any and all feedback on these proposals from the 
 community.  We'll be around at the Paris summit in a couple of weeks and 
 would love to discuss with anyone else who is interested in this direction. 

I'm quite interested in this, as we've recently been looking at how to
scale OpenStack on bare metal servers beyond the limits of a single flat
L2 network. We have a blueprint for it in TripleO as well:

https://blueprints.launchpad.net/tripleo/+spec/l3-network-segmentation

Hopefully you will be at the summit and can attend our scheduled session,
which I believe will be some time on Wednesday.

We're basically just planning on having routers configured out of band
and just writing a Nova filter to ensure that the compute host has a
property which matches the network names requested with a server.

But if we could automate that too, that would make for an even more
automatic and scalable solution.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Angus Lees
On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
 Agreed. The way I'm thinking about this is that tenants shouldn't care what
 the underlying implementation is - L2 or L3. As long as the connectivity
 requirements are met using the model/API, end users should be fine. The
 data center network design should be an administrators decision based on
 the implementation mechanism that has been configured for OpenStack.

I don't know anything about Project Calico, but I have been involved with 
running a large cloud network previously that made heavy use of L3 overlays.  

Just because these points weren't raised earlier in this thread:  In my 
experience, a move to L3 involves losing:

- broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but that's 
a whole can of worms - so perhaps best to just say up front that this is a 
non-broadcast network.

- support for other IP protocols.

- various L2 games like virtual MAC addresses, etc that NFV/etc people like.


We gain:

- the ability to have proper hierarchical addressing underneath (which is a 
big one for scaling a single network).  This itself is a tradeoff however - 
an efficient/strict hierarchical addressing scheme means VMs can't choose their 
own IP addresses, and VM migration is messy/limited/impossible.

- hardware support for dynamic L3 routing is generally universal, through a 
small set of mostly-standard protocols (BGP, ISIS, etc).

- can play various L3 games like BGP/anycast, which is super useful for 
geographically diverse services.


It's certainly a useful tradeoff for many use cases.  Users lose some 
generality in return for more powerful cooperation with the provider around 
particular features, so I sort of think of it like a step halfway up the IaaS-
PaaS stack - except for networking.

 - Gus

 Thanks
 Rohit
 
 From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
  Date: Tuesday, October 28, 2014 1:01 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
 networking
 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
 Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
 of the Routes will be imported from external world ? 3. Will there be 
 Separate routing domain for overlay network  ? Or it will be mixed with
 external/underlay network ?
 These are all implementation specific details. Different deployments and
 network backends can implement them however they want. What we need to
 discuss now is how this model will look to the end-user and API.
 4. What will be the basic use case of this ? Thinking of L3 switching to
 support BGP-MPLS L3 VPN Scenario right from compute node ?
 I think the simplest use case is just that a provider doesn't want to deal
 with extending L2 domains all over their datacenter.
 
 On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
 keshav...@hp.commailto:keshav...@hp.com wrote: Hi Cory,
 
 Yes that is the basic question I have.
 
 OpenStack cloud  is ready to move away from Flat L2 network ?
 
 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
 Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
 of the Routes will be imported from external world ? 3. Will there be 
 Separate routing domain for overlay network  ? Or it will be mixed with
 external/underlay network ? 4. What will be the basic use case of this ?
 Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
 compute node ?
 
 Others can give their opinion also.
 
 Thanks  Regards,
 keshava
 
 -Original Message-
 From: Cory Benfield
 [mailto:cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com]
 Sent: Tuesday, October 28, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
 
 On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
  Hi,
  
  Current Open-stack was built as flat network.
  
  With the introduction of the L3 lookup (by inserting the routing table
  in forwarding path) and separate 'VIF Route Type' interface:
  
  At what point of time in the packet processing  decision will be made
  to lookup FIB  during ? For each packet there will additional  FIB
  lookup ?
  
  How about the  impact on  'inter compute traffic', processed by  DVR  ?
  Here thinking  OpenStack cloud as hierarchical network instead of Flat
  network ?
 
 Keshava,
 
 It's difficult for me to answer in general terms: the proposed specs are
 general enough to allow multiple approaches to building purely-routed
 networks in OpenStack, and they may all have slightly different answers to
 some of these questions. I can, however

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Fred Baker (fred)

On Oct 28, 2014, at 4:59 PM, Angus Lees g...@inodes.org wrote:

 On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
 Agreed. The way I'm thinking about this is that tenants shouldn't care what
 the underlying implementation is - L2 or L3. As long as the connectivity
 requirements are met using the model/API, end users should be fine. The
 data center network design should be an administrators decision based on
 the implementation mechanism that has been configured for OpenStack.
 
 I don't know anything about Project Calico, but I have been involved with 
 running a large cloud network previously that made heavy use of L3 overlays.  
 
 Just because these points weren't raised earlier in this thread:  In my 
 experience, a move to L3 involves losing:
 
 - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but that's 
 a whole can of worms - so perhaps best to just say up front that this is a 
 non-broadcast network.
 
 - support for other IP protocols.
 
 - various L2 games like virtual MAC addresses, etc that NFV/etc people like.

I’m a little confused. IP supports multicast. It requires a routing protocol, 
and you have to “join” the multicast group, but it’s not out of the picture.

What other “IP” protocols do you have in mind? Are you thinking about 
IPX/CLNP/etc? Or are you thinking about new network layers?

I’m afraid the L2 games leave me a little cold. We have been there, such as 
with DECNET IV. I’d need to understand what you were trying to achieve before I 
would consider that a loss.

 We gain:
 
 - the ability to have proper hierarchical addressing underneath (which is a 
 big one for scaling a single network).  This itself is a tradeoff however - 
 an efficient/strict hierarchical addressing scheme means VMs can't choose 
 their 
 own IP addresses, and VM migration is messy/limited/impossible.

It does require some variation on a host route, and it leads us to ask about 
renumbering. The hard part of VM migration is at the application layer, not the 
network, and is therefore pretty much the same.

 - hardware support for dynamic L3 routing is generally universal, through a 
 small set of mostly-standard protocols (BGP, ISIS, etc).
 
 - can play various L3 games like BGP/anycast, which is super useful for 
 geographically diverse services.
 
 
 It's certainly a useful tradeoff for many use cases.  Users lose some 
 generality in return for more powerful cooperation with the provider around 
 particular features, so I sort of think of it like a step halfway up the IaaS-
 PaaS stack - except for networking.
 
 - Gus
 
 Thanks
 Rohit
 
 From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Date: Tuesday, October 28, 2014 1:01 PM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
 networking
 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
 Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
 of the Routes will be imported from external world ? 3. Will there be 
 Separate routing domain for overlay network  ? Or it will be mixed with
 external/underlay network ?
 These are all implementation specific details. Different deployments and
 network backends can implement them however they want. What we need to
 discuss now is how this model will look to the end-user and API.
 4. What will be the basic use case of this ? Thinking of L3 switching to
 support BGP-MPLS L3 VPN Scenario right from compute node ?
 I think the simplest use case is just that a provider doesn't want to deal
 with extending L2 domains all over their datacenter.
 
 On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
 keshav...@hp.commailto:keshav...@hp.com wrote: Hi Cory,
 
 Yes that is the basic question I have.
 
 OpenStack cloud  is ready to move away from Flat L2 network ?
 
 1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
 Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
 of the Routes will be imported from external world ? 3. Will there be 
 Separate routing domain for overlay network  ? Or it will be mixed with
 external/underlay network ? 4. What will be the basic use case of this ?
 Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
 compute node ?
 
 Others can give their opinion also.
 
 Thanks  Regards,
 keshava
 
 -Original Message-
 From: Cory Benfield
 [mailto:cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com]
 Sent: Tuesday, October 28, 2014 10:35 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
 
 On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
 Hi

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Harshad Nakil
L3 routed network can support
1. broadcast/multicast
2. VRRP virtual MAC like technology

For example OpenContrail does support both of these in fully L3 routed
networks.

Regards
-Harshad

On Tue, Oct 28, 2014 at 4:59 PM, Angus Lees g...@inodes.org wrote:

 On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
  Agreed. The way I'm thinking about this is that tenants shouldn't care
 what
  the underlying implementation is - L2 or L3. As long as the connectivity
  requirements are met using the model/API, end users should be fine. The
  data center network design should be an administrators decision based on
  the implementation mechanism that has been configured for OpenStack.

 I don't know anything about Project Calico, but I have been involved with
 running a large cloud network previously that made heavy use of L3
 overlays.

 Just because these points weren't raised earlier in this thread:  In my
 experience, a move to L3 involves losing:

 - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
 that's
 a whole can of worms - so perhaps best to just say up front that this is a
 non-broadcast network.

 - support for other IP protocols.

 - various L2 games like virtual MAC addresses, etc that NFV/etc people
 like.


 We gain:

 - the ability to have proper hierarchical addressing underneath (which is a
 big one for scaling a single network).  This itself is a tradeoff
 however -
 an efficient/strict hierarchical addressing scheme means VMs can't choose
 their
 own IP addresses, and VM migration is messy/limited/impossible.

 - hardware support for dynamic L3 routing is generally universal, through a
 small set of mostly-standard protocols (BGP, ISIS, etc).

 - can play various L3 games like BGP/anycast, which is super useful for
 geographically diverse services.


 It's certainly a useful tradeoff for many use cases.  Users lose some
 generality in return for more powerful cooperation with the provider around
 particular features, so I sort of think of it like a step halfway up the
 IaaS-
 PaaS stack - except for networking.

  - Gus

  Thanks
  Rohit
 
  From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
  Reply-To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.orgmailto:
 openstack-dev@lists.openstack.org
   Date: Tuesday, October 28, 2014 1:01 PM
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.orgmailto:
 openstack-dev@lists.openstack.org
   Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
  networking
  1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
  Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How
 much
  of the Routes will be imported from external world ? 3. Will there be
  Separate routing domain for overlay network  ? Or it will be mixed with
  external/underlay network ?
  These are all implementation specific details. Different deployments and
  network backends can implement them however they want. What we need to
  discuss now is how this model will look to the end-user and API.
  4. What will be the basic use case of this ? Thinking of L3 switching to
  support BGP-MPLS L3 VPN Scenario right from compute node ?
  I think the simplest use case is just that a provider doesn't want to
 deal
  with extending L2 domains all over their datacenter.
 
  On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
  keshav...@hp.commailto:keshav...@hp.com wrote: Hi Cory,
 
  Yes that is the basic question I have.
 
  OpenStack cloud  is ready to move away from Flat L2 network ?
 
  1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
  Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How much
  of the Routes will be imported from external world ? 3. Will there be
  Separate routing domain for overlay network  ? Or it will be mixed with
  external/underlay network ? 4. What will be the basic use case of this ?
  Thinking of L3 switching to support BGP-MPLS L3 VPN Scenario right from
  compute node ?
 
  Others can give their opinion also.
 
  Thanks  Regards,
  keshava
 
  -Original Message-
  From: Cory Benfield
  [mailto:cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com
 ]
  Sent: Tuesday, October 28, 2014 10:35 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
 networking
 
  On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
   Hi,
  
   Current Open-stack was built as flat network.
  
   With the introduction of the L3 lookup (by inserting the routing table
   in forwarding path) and separate 'VIF Route Type' interface:
  
   At what point of time in the packet processing  decision will be made
   to lookup FIB  during ? For each packet there will additional  FIB
   lookup ?
  
   How about the  impact on  'inter compute traffic', processed by  DVR  ?
   Here thinking  OpenStack

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Angus Lees
On Wed, 29 Oct 2014 12:21:10 AM Fred Baker wrote:
 On Oct 28, 2014, at 4:59 PM, Angus Lees g...@inodes.org wrote:
  On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
  Agreed. The way I'm thinking about this is that tenants shouldn't care
  what
  the underlying implementation is - L2 or L3. As long as the connectivity
  requirements are met using the model/API, end users should be fine. The
  data center network design should be an administrators decision based on
  the implementation mechanism that has been configured for OpenStack.
  
  I don't know anything about Project Calico, but I have been involved with
  running a large cloud network previously that made heavy use of L3
  overlays.
  
  Just because these points weren't raised earlier in this thread:  In my
  experience, a move to L3 involves losing:
  
  - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
  that's a whole can of worms - so perhaps best to just say up front that
  this is a non-broadcast network.
  
  - support for other IP protocols.
  
  - various L2 games like virtual MAC addresses, etc that NFV/etc people
  like.
 I’m a little confused. IP supports multicast. It requires a routing
 protocol, and you have to “join” the multicast group, but it’s not out of
 the picture.

Agreed, you absolutely can do multicast and broadcast on an L3 overlay 
network.  I was just saying that IGMP support tends to be a lot more 
inconsistent and flaky across vendors compared with L2 multicast (which pretty 
much always works).

Further, if the goal of moving to routed L3 is to allow a network to span 
more geographically diverse underlying networks, then we might want to 
administratively prohibit broadcast due to its increased cost and it no longer 
being a hard requirement for basic functionality (no need for ARP/DHCP 
anymore!).

If we're foregoing an L2 abstraction and moving to L3, I was merely suggesting 
it might also be reasonable to say that broadcast/multicast are not supported 
and thus the requirements on the underlying infrastructure can be drastically 
reduced.  Non-broadcast L3 overlay networks are common and prove to be useful 
for just about every task except mDNS/WINS discovery, which everyone is rather 
happy to leave behind ;)

 What other “IP” protocols do you have in mind? Are you thinking about
 IPX/CLNP/etc? Or are you thinking about new network layers?

eg: If the underlying L3 network only supported IPv4, then it would be 
impossible to run IPv6 (without yet another overlay network).  With a L2 
abstraction, theoretically any IP protocol can be used.

 I’m afraid the L2 games leave me a little cold. We have been there, such as
 with DECNET IV. I’d need to understand what you were trying to achieve
 before I would consider that a loss.

Sure, just listing it as one of the changes for completeness.

Traditional network devices often use VRRP or similar for HA failover, and 
so NFV-on-L3 would need to use some alternative (failover via overlapping BGP 
advertisements, for example, is easy and works well, so this isn't hard - just 
different).

  We gain:
  
  - the ability to have proper hierarchical addressing underneath (which is
  a
  big one for scaling a single network).  This itself is a tradeoff
  however - an efficient/strict hierarchical addressing scheme means VMs
  can't choose their own IP addresses, and VM migration is
  messy/limited/impossible.
 
 It does require some variation on a host route, and it leads us to ask about
 renumbering. The hard part of VM migration is at the application layer, not
 the network, and is therefore pretty much the same.
  - hardware support for dynamic L3 routing is generally universal, through
  a
  small set of mostly-standard protocols (BGP, ISIS, etc).
  
  - can play various L3 games like BGP/anycast, which is super useful for
  geographically diverse services.
  
  
  It's certainly a useful tradeoff for many use cases.  Users lose some
  generality in return for more powerful cooperation with the provider
  around
  particular features, so I sort of think of it like a step halfway up the
  IaaS- 
  PaaS stack - except for networking.
  
  - Gus
  
  Thanks
  Rohit
  
  From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
  Reply-To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.o
  rg
  
  Date: Tuesday, October 28, 2014 1:01 PM
  
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.o
  rg
  
  Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
  
  networking
  
  1. Every packet L3 FIB Lookup : Radix Tree Search, instead of current L2
  Hash/Index Lookup ? 2. Will there be Hierarchical network ?  How
  much
  of the Routes will be imported from external world ? 3. Will there be
  Separate routing domain for overlay network  ? Or it will be mixed

Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-27 Thread Cory Benfield
On Fri, Oct 24, 2014 at 20:51:36, Kevin Benton wrote:
 Hi,
 
 Thanks for posting this. I am interested in this use case as well.
 
 I didn't find a link to a review for the ML2 driver. Do you have any
 more details for that available?

Sure. The ML2 driver itself isn't submitted for review yet because we're still 
working on it, but you can find the current code here[1]. If you think there's 
a risk of confusion with ML2 we'd love to hear about it.

Cory

[1]: 
https://github.com/Metaswitch/calico-neutron/blob/master/neutron/plugins/ml2/drivers/mech_calico.py

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-27 Thread Cory Benfield
On Sun, Oct 26, 2014 at 19:05:43, Rohit Agarwalla (roagarwa) wrote:
 Hi
 
 I'm interested as well in this model. Curious to understand the routing
 filters and their implementation that will enable isolation between
 tenant networks.
 Also, having a BoF session on Virtual Networking using L3 may be
 useful to get all interested folks together at the Summit.

A BoF sounds great. I've also proposed a lightning talk for the summit.

Cory

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-26 Thread Rohit Agarwalla (roagarwa)
Hi

I'm interested as well in this model. Curious to understand the routing filters 
and their implementation that will enable isolation between tenant networks.
Also, having a BoF session on Virtual Networking using L3 may be useful to 
get all interested folks together at the Summit.


Thanks
Rohit

From: Kevin Benton blak...@gmail.commailto:blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Friday, October 24, 2014 12:51 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking

Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more 
details for that available?
It seems like not providing L2 connectivity between members of the same Neutron 
network conflicts with assumptions ML2 will make about segmentation IDs, etc. 
So I am interested in seeing how exactly the ML2 driver will bind ports, 
segments, etc.


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield 
cory.benfi...@metaswitch.commailto:cory.benfi...@metaswitch.com wrote:
All,

Project Calico [1] is an open source approach to virtual networking based on L3 
routing as opposed to L2 bridging.  In order to accommodate this approach 
within OpenStack, we've just submitted 3 blueprints that cover

-  minor changes to nova to add a new VIF type [2]
-  some changes to neutron to add DHCP support for routed interfaces [3]
-  an ML2 mechanism driver that adds support for Project Calico [4].

We feel that allowing for routed network interfaces is of general use within 
OpenStack, which was our motivation for submitting [2] and [3].  We also 
recognise that there is an open question over the future of 3rd party ML2 
drivers in OpenStack, but until that is finally resolved in Paris, we felt 
submitting our driver spec [4] was appropriate (not least to provide more 
context on the changes proposed in [2] and [3]).

We're extremely keen to hear any and all feedback on these proposals from the 
community.  We'll be around at the Paris summit in a couple of weeks and would 
love to discuss with anyone else who is interested in this direction.

Regards,

Cory Benfield (on behalf of the entire Project Calico team)

[1] http://www.projectcalico.org
[2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
[4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-24 Thread Kevin Benton
Hi,

Thanks for posting this. I am interested in this use case as well.

I didn't find a link to a review for the ML2 driver. Do you have any more
details for that available?
It seems like not providing L2 connectivity between members of the same
Neutron network conflicts with assumptions ML2 will make about segmentation
IDs, etc. So I am interested in seeing how exactly the ML2 driver will bind
ports, segments, etc.


Cheers,
Kevin Benton

On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield cory.benfi...@metaswitch.com
 wrote:

 All,

 Project Calico [1] is an open source approach to virtual networking based
 on L3 routing as opposed to L2 bridging.  In order to accommodate this
 approach within OpenStack, we've just submitted 3 blueprints that cover

 -  minor changes to nova to add a new VIF type [2]
 -  some changes to neutron to add DHCP support for routed interfaces [3]
 -  an ML2 mechanism driver that adds support for Project Calico [4].

 We feel that allowing for routed network interfaces is of general use
 within OpenStack, which was our motivation for submitting [2] and [3].  We
 also recognise that there is an open question over the future of 3rd party
 ML2 drivers in OpenStack, but until that is finally resolved in Paris, we
 felt submitting our driver spec [4] was appropriate (not least to provide
 more context on the changes proposed in [2] and [3]).

 We're extremely keen to hear any and all feedback on these proposals from
 the community.  We'll be around at the Paris summit in a couple of weeks
 and would love to discuss with anyone else who is interested in this
 direction.

 Regards,

 Cory Benfield (on behalf of the entire Project Calico team)

 [1] http://www.projectcalico.org
 [2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
 [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
 [4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev