These are all important discussion topics, but we are getting pulled into
implementation-specific details again. Route aggregation and network
topology are entirely up to the backend implementation.

We should keep this thread focused on the user-facing abstractions and the
changes required in Nova and Neutron to enable them. Then, when it is time
to build the reference implementation in Neutron, we can have this
discussion about the optimal placement of BGP nodes, etc.



On Thu, Oct 30, 2014 at 4:04 AM, A, Keshava <keshav...@hp.com> wrote:

>  Hi,
>
> With respect to 'VM packet forwarding' at the L3 level by enabling routing,
> I have the points below.
>
> With the reference diagram below, when routing is enabled to locate the
> destination VM's compute node:
>
>
>
> 1.  How many route prefixes will be injected into each compute node?
>
>
>
> 2.  For each VM address, will there be a corresponding entry in the
> 'L3 Forwarding Tbl'?
>
> When we have a large number of VMs in the cloud, on the order of 50,000 to
> 1 million, does each compute node need to maintain 1 million route entries?
>
>
>
> 3.  Even with route aggregation, efficiency is not guaranteed, because:
>
> a.  Tenants can span multiple compute nodes.
>
> b.  VM migration can break the aggregation and allow the routing table to
> grow.
>
>
>
> 4.  If we try to run BGP across switches and aggregate, we will be
> introducing a hierarchical network.
>
> If the topology changes, what will the convergence time be, and will there
> be any looping issues?
>
> The cost of the L3 switches will also go up, since they will need the
> capacity to support 10,000+ routes.
>
>
>
> 5.  With this, do we want to break the classical L2 broadcast in the last
> mile of the cloud?
>
> I was under the impression that we want to keep the cloud network a simple
> L2 broadcast domain, without adding complexity such as MPLS labels,
> routing, or aggregation.
>
>
>
> 6.  The whole purpose of bringing VXLAN into the datacenter cloud is to
> keep L2, and even to be able to extend L2 to a different datacenter.
>
>
>
> 7.  I also saw an IETF draft regarding the implementation architecture of
> OpenStack.
>
>
>
> Please let me know your opinions on this.
>
>
>
>
>
>
>
>
>
> Thanks & regards,
>
> Keshava
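
A rough sketch of the scaling concern in points 2-4 above (illustrative
only: the addresses, block sizes, and VM counts are invented, and the real
numbers depend entirely on the backend). Per-VM /32 routes summarize cleanly
only while placement follows the addressing plan; each VM that migrates
while keeping its address costs one extra host route wherever that route is
carried:

    # Illustrative only: per-VM /32 host routes collapse back to an aggregate
    # while VMs stay on their "home" compute node; one migration adds a
    # more-specific route that cannot be summarized away.
    import ipaddress

    # Assume each compute node is delegated a contiguous /24 for its local VMs.
    vm_routes = [ipaddress.ip_network(f"10.65.1.{i}/32") for i in range(256)]
    print(list(ipaddress.collapse_addresses(vm_routes)))
    # -> [IPv4Network('10.65.1.0/24')]

    # A VM from another node (10.65.2.7) migrates here but keeps its address.
    vm_routes.append(ipaddress.ip_network("10.65.2.7/32"))
    print(list(ipaddress.collapse_addresses(vm_routes)))
    # -> [IPv4Network('10.65.1.0/24'), IPv4Network('10.65.2.7/32')]

    # Worst case with no aggregation at all: one /32 per VM in every FIB.
    for vm_count in (50_000, 1_000_000):
        print(f"{vm_count} VMs -> up to {vm_count} host routes without aggregation")
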
>
>
>
> -----Original Message-----
> From: Fred Baker (fred) [mailto:f...@cisco.com]
> Sent: Wednesday, October 29, 2014 5:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
>
>
>
>
> On Oct 28, 2014, at 4:59 PM, Angus Lees <g...@inodes.org> wrote:
>
>
>
> > On Tue, 28 Oct 2014 09:07:03 PM Rohit Agarwalla wrote:
>
> >> Agreed. The way I'm thinking about this is that tenants shouldn't
>
> >> care what the underlying implementation is - L2 or L3. As long as the
>
> >> connectivity requirements are met using the model/API, end users
>
> >> should be fine. The data center network design should be an
>
> >> administrator's decision based on the implementation mechanism that has
> been configured for OpenStack.
>
> >
>
> > I don't know anything about Project Calico, but I have been involved
>
> > with running a large cloud network previously that made heavy use of L3
> overlays.
>
> >
>
> > Just because these points weren't raised earlier in this thread:  In
>
> > my experience, a move to L3 involves losing:
>
> >
>
> > - broadcast/multicast.  It's possible to do L3 multicast/IGMP/etc, but
>
> > that's a whole can of worms - so perhaps best to just say up front
>
> > that this is a non-broadcast network.
>
> >
>
> > - support for other IP protocols.
>
> >
>
> > - various "L2 games" like virtual MAC addresses, etc that NFV/etc people
> like.
>
>
>
> I’m a little confused. IP supports multicast. It requires a routing
> protocol, and you have to “join” the multicast group, but it’s not out of
> the picture.
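
To make that point concrete: receivers get multicast traffic only after
explicitly joining the group, which triggers an IGMP membership report. A
minimal receiver sketch using the standard socket API (the group address and
port below are arbitrary examples):

    # Minimal multicast receiver sketch: the host signals group membership via
    # IGMP when IP_ADD_MEMBERSHIP is set; nothing is delivered without the join.
    import socket
    import struct

    GROUP = "239.1.1.1"   # arbitrary administratively-scoped group
    PORT = 5007           # arbitrary port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group on the default interface; the kernel emits the IGMP report.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(4096)
    print(sender, data)
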
>
>
>
> What other “IP” protocols do you have in mind? Are you thinking about
> IPX/CLNP/etc? Or are you thinking about new network layers?
>
>
>
> I’m afraid the L2 games leave me a little cold. We have been there, such
> as with DECNET IV. I’d need to understand what you were trying to achieve
> before I would consider that a loss.
>
>
>
> > We gain:
>
> >
>
> > - the ability to have proper hierarchical addressing underneath (which
>
> > is a big one for scaling a single "network").  This itself is a
>
> > tradeoff however - an efficient/strict hierarchical addressing scheme
>
> > means VMs can't choose their own IP addresses, and VM migration is
> messy/limited/impossible.
>
>
>
> It does require some variation on a host route, and it leads us to ask
> about renumbering. The hard part of VM migration is at the application
> layer, not the network, and is therefore pretty much the same.
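
The "variation on a host route" amounts to longest-prefix matching: a /32
injected for a moved VM wins over the aggregate that still points at its old
location. A hand-rolled sketch of that lookup (not any particular backend's
code; the addresses are invented):

    # Longest-prefix-match sketch: the /32 injected for a migrated VM overrides
    # the aggregate covering its old rack, so only one extra route is needed.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.65.1.0/24"): "compute-a",  # aggregate, rack A
        ipaddress.ip_network("10.65.2.0/24"): "compute-b",  # aggregate, rack B
        ipaddress.ip_network("10.65.2.7/32"): "compute-a",  # migrated VM, kept its IP
    }

    def lookup(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routes if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
        return routes[best]

    print(lookup("10.65.2.9"))   # compute-b (aggregate)
    print(lookup("10.65.2.7"))   # compute-a (host route overrides aggregate)
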
>
>
>
> > - hardware support for dynamic L3 routing is generally universal,
>
> > through a small set of mostly-standard protocols (BGP, ISIS, etc).
>
> >
>
> > - can play various "L3 games" like BGP/anycast, which is super useful
>
> > for geographically diverse services.
>
> >
>
> >
>
> > It's certainly a useful tradeoff for many use cases.  Users lose some
>
> > generality in return for more powerful cooperation with the provider
>
> > around particular features, so I sort of think of it like a step
>
> > halfway up the IaaS->PaaS stack - except for networking.
>
> >
>
> > - Gus
>
> >
>
> >> Thanks
>
> >> Rohit
>
> >>
>
> >> From: Kevin Benton <blak...@gmail.com>
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >> <openstack-dev@lists.openstack.org>
> >> Date: Tuesday, October 28, 2014 1:01 PM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> <openstack-dev@lists.openstack.org>
> >> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
> >> networking
>
> >>> 1. Every packet requires an L3 FIB lookup (radix tree search) instead of
> >>> the current L2 hash/index lookup?
> >>>
> >>> 2. Will there be a hierarchical network? How many of the routes will be
> >>> imported from the external world?
> >>>
> >>> 3. Will there be a separate routing domain for the overlay network, or
> >>> will it be mixed with the external/underlay network?
>
> >> These are all implementation specific details. Different deployments
>
> >> and network backends can implement them however they want. What we
>
> >> need to discuss now is how this model will look to the end-user and API.
>
> >>> 4. What will be the basic use case of this? Is the thinking L3 switching
> >>> to support the BGP-MPLS L3 VPN scenario right from the compute node?
>
> >> I think the simplest use case is just that a provider doesn't want to
>
> >> deal with extending L2 domains all over their datacenter.
>
> >>
>
> >> On Tue, Oct 28, 2014 at 12:39 PM, A, Keshava
>
> >> <keshav...@hp.com> wrote:
> >>
> >> Hi Cory,
>
> >>
>
> >> Yes that is the basic question I have.
>
> >>
>
> >> Is the OpenStack cloud ready to move away from a flat L2 network?
>
> >>
>
> >> 1. Every packet requires an L3 FIB lookup (radix tree search) instead of
> >> the current L2 hash/index lookup?
> >>
> >> 2. Will there be a hierarchical network? How many of the routes will be
> >> imported from the external world?
> >>
> >> 3. Will there be a separate routing domain for the overlay network, or
> >> will it be mixed with the external/underlay network?
> >>
> >> 4. What will be the basic use case of this? Is the thinking L3 switching
> >> to support the BGP-MPLS L3 VPN scenario right from the compute node?
>
> >>
>
> >> Others can give their opinion also.
>
> >>
>
> >> Thanks & Regards,
>
> >> keshava
>
> >>
>
> >> -----Original Message-----
>
> >> From: Cory Benfield
>
> >> [mailto:cory.benfi...@metaswitch.com]
>
> >> Sent: Tuesday, October 28, 2014 10:35 PM
>
> >> To: OpenStack Development Mailing List (not for usage questions)
>
> >> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed
>
> >> networking
>
> >>
>
> >> On Tue, Oct 28, 2014 at 07:44:48, A, Keshava wrote:
>
> >>> Hi,
>
> >>>
>
> >>> Current OpenStack was built as a flat network.
>
> >>>
>
> >>> With the introduction of the L3 lookup (by inserting the routing table
> >>> in the forwarding path) and a separate 'VIF Route Type' interface:
>
> >>>
>
> >>> At what point in the packet processing will the decision be made to
> >>> look up the FIB? Will there be an additional FIB lookup for each packet?
>
> >>>
>
> >>> What about the impact on inter-compute traffic processed by DVR?
> >>>
> >>> Are we here thinking of the OpenStack cloud as a hierarchical network
> >>> instead of a flat network?
>
> >>
>
> >> Keshava,
>
> >>
>
> >> It's difficult for me to answer in general terms: the proposed specs
>
> >> are general enough to allow multiple approaches to building
>
> >> purely-routed networks in OpenStack, and they may all have slightly
>
> >> different answers to some of these questions. I can, however, speak
>
> >> about how Project Calico intends to apply them.
>
> >>
>
> >> For Project Calico, the FIB lookup is performed for every packet
>
> >> emitted by a VM and destined for a VM. Each compute host routes all
>
> >> the traffic to/from its guests. The DVR approach isn't necessary in
>
> >> this kind of network because it essentially already implements one:
>
> >> all packets are always routed, and no network node is ever required in
> the network.
>
> >>
>
> >> The routed network approach doesn't add any hierarchical nature to an
>
> >> OpenStack cloud. The difference between the routed approach and the
>
> >> standard OVS approach is that packet processing happens entirely at
>
> >> layer 3. Put another way, in Project Calico-based networks a Neutron
>
> >> subnet no longer maps to a layer 2 broadcast domain.
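
To make the routed model Cory describes concrete: the dataplane on a compute
host is just the kernel routing table, so plugging a VM conceptually reduces
to installing a /32 toward its tap device and advertising it. A heavily
simplified sketch, assuming a hypothetical tap device name and an invented
address (Project Calico's actual per-host agent does considerably more than
this):

    # Simplified sketch of what a routed-network agent conceptually programs
    # when a VM comes up on this host: a /32 toward the VM's tap device.
    # Requires root; device name and address are invented for illustration.
    import subprocess

    VM_IP = "10.65.1.5"      # invented VM address
    TAP_DEV = "tapabc123"    # hypothetical tap device created for the VM

    subprocess.run(
        ["ip", "route", "replace", f"{VM_IP}/32", "dev", TAP_DEV],
        check=True,
    )

    # Traffic from other hosts reaches this /32 because the host advertises it
    # (e.g. over BGP); locally, every packet to or from the VM is an ordinary
    # kernel routing-table lookup rather than an L2 bridge/flood operation.
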
>
> >>
>
> >> I hope that clarifies: please shout if you'd like more detail.
>
> >>
>
> >> Cory
>
> >>
>
>
> >>
>
> >>
>
> >> --
>
> >> Kevin Benton
>
> >
>
>
>
>
>


-- 
Kevin Benton
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
