> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of
> Ivan Pepelnjak
> Sent: Tuesday, August 28, 2012 12:22 AM
> To: Stiliadis, Dimitrios (Dimitri)
> Cc: Black, David; [email protected]; Linda Dunbar
> Subject: Re: [nvo3] Let's refocus on real world (was: Comments on Live
> Migration and VLAN-IDs)
> 
> Dimitri,
> 
> We're more in agreement than it might seem. I might have my doubts
> about
> the operational viability of the OpenStack-to-baremetal use case you
> described below, but I'm positive someone will try to do that as well.
> 
> In any case, regardless of whether we're considering VMs or bare-metal
> servers, in the simplest scenario the server-to-NVE connection is a
> point-to-point link, usually without VLAN tagging.
> 
> In the VM/hypervisor case, NVE is implemented in the hypervisor soft
> switch; in the baremetal server case, it has to be implemented in the
> ToR switch.

  This is only today's restriction. If NVO3 takes off, there could
  certainly be a pseudo-driver in Linux that implements the NVE (like a
  VLAN driver) without much additional overhead.
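
  As a concrete illustration, here is a minimal sketch of how such a
  pseudo-driver might be configured from userspace, assuming a kernel
  that exposes a VXLAN-style netdev and the iproute2 "ip link ... type
  vxlan" syntax. The interface names, the VNI, and the UDP port below
  are illustrative assumptions, not a real driver:

    import subprocess

    def create_nve_interface(vni, underlay_dev="eth0"):
        """Create a VXLAN netdev bound to the underlay interface."""
        ifname = "vxlan%d" % vni
        subprocess.check_call(
            ["ip", "link", "add", ifname, "type", "vxlan",
             "id", str(vni),        # 24-bit virtual network identifier
             "dev", underlay_dev,   # underlay (physical) interface
             "dstport", "4789"])    # IANA-assigned VXLAN UDP port
        subprocess.check_call(["ip", "link", "set", ifname, "up"])
        return ifname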

> 
> It's important to keep in mind the limitations of the ToR switches to
> ensure whatever solution we agree upon will be implementable in ToR
> switches as well, but it makes absolutely no sense to assume NVE will
> not be in the hypervisor (because someone wants to support a customer
> having a decade-old VLAN-only hypervisor soft switch).
> 
> As for ToR switch capabilities, Dell has demonstrated NVGRE support and
> Arista is right now showing off a hardware VXLAN VTEP prototype, so I
> guess it's safe to assume next-generation merchant silicon will support
> GRE- and UDP-based encapsulations well before we agree on what the NVO3
> solution should be.
> 
> Finally, can at least some of us agree that the topology that makes
> the most sense is a direct P2P link between a (VM or bare-metal)
> server and the NVE, using VLAN tagging only when a server
> participating in multiple L2 closed user groups (CUGs) has interface
> limitations?
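
  If that server does have interface limitations, the fallback would be
  one 802.1Q sub-interface per closed user group, with the NVE mapping
  each VLAN tag to a virtual network. A hedged sketch using standard
  iproute2 VLAN sub-interfaces; the device name and tag values here are
  made up:

    import subprocess

    def attach_to_cugs(dev, vlan_ids):
        """One 802.1Q sub-interface per CUG on a single server NIC."""
        for vid in vlan_ids:
            sub = "%s.%d" % (dev, vid)
            subprocess.check_call(
                ["ip", "link", "add", "link", dev, "name", sub,
                 "type", "vlan", "id", str(vid)])
            subprocess.check_call(["ip", "link", "set", sub, "up"])

    # e.g. attach_to_cugs("eth0", [100, 200])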
> 
> Kind regards,
> Ivan
> 
> On 8/27/12 6:55 AM, Stiliadis, Dimitrios (Dimitri) wrote:
> > Ivan:
> >
> > I agree and at the same time disagree with some of the statements
> > below. I would like to understand your view.
> >
> > See inline:
> >
> >> On 8/25/12 8:22 AM, "Ivan Pepelnjak" <[email protected]> wrote:
> >
> >> On 8/24/12 11:11 PM, Linda Dunbar wrote:
> >> [...]
> >>
> >>> But most, if not all, data centers today don't have hypervisors
> >>> which can encapsulate the NVO3-defined header. The deployment of
> >>> 100% NVO3-header-based servers won't happen overnight. One thing
> >>> is for sure: you will see data centers with mixed types of
> >>> servers for a very long time.
> >>>
> >>> If NVEs are in the ToR, you will see a mixed scenario of blade
> >>> servers, servers with simple virtual switches, or even IEEE
> >>> 802.1Qbg's VEPA. So it is necessary for NVO3 to deal with the
> >>> "L2 Site" defined in this draft.
> >>
> >> There are two hypothetical ways of implementing NVO3: existing
> >> layer-2 technologies (with well-known scaling properties that
> >> prompted the creation of the NVO3 working group) or
> >> something-over-IP encapsulation.
> >>
> >> I might be myopic, but from what I see most data centers today (at
> >> least based on market shares of individual vendors) don't have ToR
> >> switches that would be able to encapsulate MAC frames or IP
> >> datagrams in UDP, GRE or MPLS envelopes. I am not familiar enough
> >> with the commonly used merchant silicon hardware to understand
> >> whether that's a software or hardware limitation. In any case, I
> >> wouldn't expect switch vendors to roll out NVO3-like
> >> something-over-IP solutions any time soon.
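
  To make the "something-over-IP" idea concrete, here is a
  back-of-the-envelope sketch of the kind of MAC-in-UDP envelope being
  discussed, using the VXLAN draft's 8-byte header layout (an I flag
  plus a 24-bit VNI) as the example. The addresses and VNI are made up;
  a real NVE does this in the datapath, not in Python:

    import socket
    import struct

    VXLAN_PORT = 4789  # IANA-assigned VXLAN UDP port

    def vxlan_encapsulate(inner_ethernet_frame, vni):
        """Prepend a VXLAN header: I flag set, 24-bit VNI, reserved
        bits zero. The outer IP/UDP envelope comes from the socket."""
        header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
        return header + inner_ethernet_frame

    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # sock.sendto(vxlan_encapsulate(frame, 5001), ("192.0.2.1", VXLAN_PORT))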
> >>
> >> On the hypervisor front, VXLAN has been shipping for months, NVGRE
> >> is included in the next version of Hyper-V, and MAC-over-GRE is
> >> available (with Open vSwitch) for both KVM and Xen. Open vSwitch is
> >> also part of the standard Linux kernel distribution and thus
> >> available to any other Linux-based hypervisor product.
> >>
> >> So: all major hypervisors have MAC-over-IP solutions, each one
> >> using a proprietary encapsulation because there's no standard way
> >> of doing it, and yet we're spending time discussing and documenting
> >> the history of the evolution of virtual networking. Maybe we should
> >> be a bit more forward-looking, acknowledge the world has changed,
> >> and come up with a relevant hypervisor-based solution.
> >
> > Correct, and here is where the IETF as a standards body fails. There
> > is no easy way (any time soon) for a VXLAN-based solution to talk to
> > an NVGRE, MAC/GRE, CloudStack MAC/GRE, or STT (you forgot this one)
> > based solution. Proprietary approaches drive enterprises to vendor
> > lock-ins. And instead of trying to address the first problem, which
> > is about "interoperability", we completely sweep it under the rug as
> > "not important". And by the time we are done with NVO3, there will
> > be a controller lock-in as well, and the death of interoperability.
> > If I were on the deployment side of the solution, that's the number
> > one flexibility I would like to see. I don't want to be forced to
> > buy all my hypervisors from a given vendor, given that not all
> > applications are served equally by all hypervisors (for several $$$
> > reasons, I might add, that can be related to the licensing options
> > of different OSes on top of different hypervisors).
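
  To make the interoperability point concrete: translating between two
  of these encapsulations means terminating one header and re-imposing
  another. A hedged sketch below, assuming the VXLAN and NVGRE draft
  header layouts (the NVGRE VSID rides in the upper 24 bits of the GRE
  key); real gateways would also have to reconcile control planes,
  which is exactly what a standard is for:

    import struct

    GRE_KEY_PRESENT = 0x2000
    ETH_P_TEB = 0x6558  # Transparent Ethernet Bridging

    def vxlan_to_nvgre(vxlan_payload):
        """Strip a VXLAN header and re-wrap the inner Ethernet frame
        in a keyed GRE header, carrying the VNI as the NVGRE VSID."""
        word1, word2 = struct.unpack("!II", vxlan_payload[:8])
        if not (word1 >> 24) & 0x08:
            raise ValueError("VXLAN I flag not set; no valid VNI")
        vni = word2 >> 8
        inner_frame = vxlan_payload[8:]
        gre = struct.pack("!HHI", GRE_KEY_PRESENT, ETH_P_TEB, vni << 8)
        return gre + inner_frame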
> >
> >>
> >> Furthermore, performing something-in-IP encapsulation in the
> >> hypervisors greatly simplifies the data center network, removes the
> >> need for bridging (each ToR switch can be an L3 switch) and all
> >> associated bridging kludges (including large-scale bridging
> >> solutions). Maybe we should remember that "Perfection is achieved,
> >> not when there is nothing more to add, but when there is nothing
> >> left to take away", along with a few lessons from RFC 3439.
> >>
> >> I am positive a decade from now we'll see ancient servers still
> >> using VLAN-only hypervisor switches (or untagged interfaces), so
> >> there will definitely be a need for an NVO3-to-VLAN gateway, but we
> >> shouldn't continuously focus our efforts on something that's
> >> probably going to be a rare corner case a few years from now.
> >
> > You are absolutely correct.
> > I think that if the gateways were trying to solve the problem of
> > interoperability with legacy servers, then they are obviously
> > doomed, since they have a limited lifetime, as you correctly point
> > out.
> >
> > But I would argue that the only reason for the gateways is rather
> > different, and it has to do with the point of separation of trust. I
> > believe that people have several use cases in mind where there is no
> > hypervisor involved. Some examples: ARM/low-power servers where the
> > unit of computation is the processor and there is no hypervisor;
> > offers of "bare-metal servers as a service", where the handover is
> > the physical wire and what the server puts on the wire cannot be
> > trusted; etc.
> >
> > Another real problem was described at the last OpenStack conference
> > by one of the panelists: "We want test&dev and QA systems to run
> > over VMs, and production systems to run over the same data center
> > network, but on bare metal."
> > Obviously, they want to scale bare-metal usage the same way as VMs,
> > depending on load, etc. (i.e., drop down the available test&dev
> > resources when production needs them).
> > The same separation problems, etc., exist.
> >
> > So yes, spending too much time worrying about VMs moving around and
> > doing encapsulations on ToRs is probably a waste of time. But
> > spending a lot of time to understand interoperability between
> > hypervisor-based environments and use cases such as the above that
> > require gateways addresses, I think, a real-world problem.
> >
> > Dimitri
> >
> >>
> >> ... or I may be completely wrong. Wouldn't be the first time.
> >> Ivan
> >
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
