Hi Thomas.

<[email protected]> writes:

> Hi all,

> > 3.2.  Benefits of Network Overlays
> >
> > [...]  Some examples of network overlays are tunnels such
> >    as IP GRE [RFC2784], LISP [I-D.ietf-lisp] or TRILL [RFC6325].

> Three comments on the above:
> - I don't think the "tunnel" term is helpful to designate LISP or TRILL, 
> and some possible uses of GRE
> - the encapsulation in itself does not define how you build an overlay, 
> so I don't think the example of GRE RFC2784 is an appropriate 
> example; first of all, you can put different payloads in GRE (e.g. IP, 
> Ethernet, MPLS, and of course many more), in some cases (e.g. MPLS in 
> GRE) there are even other possible payloads; second, the payload carried 
> is also not enough to describe how the overlay is built (e.g. you can 
> use GRE keys to separate tenants, or you can use something else; for 
> MPLS-in-GRE with an Ethernet payload, you could build the overlay with 
> VPLS (different flavors) or with E-VPN ).
>   - maybe it is wiser not to rush into the gap analysis, and not to 
> use some approaches as examples rather than others.

You make some good points, and the cited sentence needs
tightening. How about I replace it with:

          Examples of architectures based on network overlays include LISP
          <xref target="I-D.ietf-lisp"></xref>,
          TRILL <xref target="RFC6325"></xref>, and Shortest Path
          Bridging <xref target="SPB"></xref>.

I've dropped the reference to GRE: while it can be used to build
overlays (as NVGRE does), whether it does depends on how it is
used. You could say the same about any tunneling technology, for
that matter...
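To make the "depends on how it is used" point concrete, here is a rough sketch (helper names are invented; nothing here is from the draft) of how the same GRE encapsulation can frame different payloads, with the optional RFC 2890 key as one possible way to separate tenants:

```python
import struct

# Sketch only: GRE (RFC 2784) identifies its payload via an EtherType,
# and the optional 32-bit key (RFC 2890) is one way -- not the only way
# -- to separate tenants. Helper names are invented for illustration.
ETHERTYPE = {"ipv4": 0x0800, "ethernet": 0x6558, "mpls": 0x8847}

def gre_header(payload_type, key=None):
    """Build a minimal GRE header; setting a key also sets the K bit."""
    flags = 0x2000 if key is not None else 0x0000  # K bit = 0x2000
    hdr = struct.pack("!HH", flags, ETHERTYPE[payload_type])
    if key is not None:
        hdr += struct.pack("!I", key)  # RFC 2890 Key field
    return hdr

# Same encapsulation, different overlays: the header alone doesn't say
# whether the payload is routed IP, bridged Ethernet, or an MPLS VPN.
assert gre_header("ipv4") == b"\x00\x00\x08\x00"
assert len(gre_header("ethernet", key=42)) == 8  # key adds 4 bytes
```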

> >   The use of a large (e.g., 24-bit) VNID would allow 16 million
> >    distinct virtual networks within a single data center, eliminating
> >    current VLAN size limitations.  This VNID needs to be carried in the
> >    data plane along with the packet.  Adding an overlay header provides
> >    a place to carry this VNID.

> I find the above very misleading, since you can very much achieve the 
> same result without having a "large" 24-bit VNID in the data plane.

How about s/needs to be carried/can be carried/

I expect we agree that a globally significant VNID could be carried
end to end in this manner. And I say "can" rather than "must", as I
agree there are other possible approaches (e.g., a locally significant
one could be carried as well).
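For what it's worth, the "16 million" figure and the 24-bit packing are easy to check. A toy sketch (function names invented; the layout is merely VXLAN-like, not any specific proposal):

```python
import struct

# Back-of-the-envelope check of the numbers above, plus a toy packing of
# a 24-bit VNID into three bytes, roughly how VXLAN-like overlay headers
# lay it out. Function names are invented for illustration.
def pack_vnid(vnid):
    """Pack a 24-bit VNID into 3 network-order bytes."""
    if not 0 <= vnid < 2 ** 24:
        raise ValueError("VNID must fit in 24 bits")
    return struct.pack("!I", vnid << 8)[:3]  # top 3 bytes of a 32-bit word

def unpack_vnid(data):
    return int.from_bytes(data, "big")

assert 2 ** 24 == 16_777_216  # the "16 million" figure, vs. 4094 usable VLAN IDs
assert unpack_vnid(pack_vnid(0xABCDEF)) == 0xABCDEF
```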

> >    External communications (from a VM within a virtual network instance
> >    to a machine outside of any virtual network instance, e.g. on the
> >    Internet) is handled by having an ingress switch forward traffic to
> >    an external router, where an egress switch decapsulates a tunneled
> >    packet and delivers it to the router for normal processing. This
> >    router is external to the overlay, and behaves much like existing
> >    external facing routers in data centers today.

> If this is all we'll achieve with NVO3, then I certainly wouldn't put it 
> in a section called "benefits of overlays", but rather in a section 
> called "Drawbacks of NVO3"... ;-)

Well, the context of the text you quote was explaining how things
work. I.e., having an isolated VN unable to talk to the outside world
doesn't seem particularly useful...

That said, I can move all of this into 2.7, "Support Communication
Between Virtual and Traditional Networks".

> More seriously, beyond the fact that the paragraph above looks misplaced 
> in this section, I think the problem statement should emphasize the 
> feasibility of efficient interworking with external networks. Being 
> limited to an architecture where one box "decapsulates NVO3" to some 
> VLAN toward another box, which then has to map this VLAN to the proper 
> context, will actually be a pain to manage. The provisioning efficiency 
> brought by NVO3 is also needed for these interconnects, and the problem 
> statement should reflect this: it should include the ability to 
> terminate NVO3 directly on a router.

Re your last point, how about the following text for 2.7:

      <section title="Support Communication Between Virtual and
        Traditional Networks">
        <t>
          Not all communication will be between devices connected to
          virtualized networks.  Devices using overlays will continue
          to access devices and make use of services on traditional,
          non-virtualized networks, whether in the data center, the
          public Internet, or at remote/branch campuses.  Any virtual
          network solution must be capable of interoperating with
          existing routers, VPN services, load balancers, intrusion
          detection services, firewalls, etc.
        </t>
        
        <t>
          Communication between devices attached to a virtual network
          and devices connected to non-virtualized networks is handled
          architecturally by having specialized gateway devices that
          receive packets from a virtualized network, process them as
          appropriate, and forward them on to non-virtualized networks
          (and vice versa). A simple implementation could do as little
          as forward packets to/from a virtual network, while more
          sophisticated gateways could include full router
          functionality, load balancing or firewall support, etc., or
          have the overlay encapsulation/decapsulation functionality
          implemented with hardware assist for efficiency.
        </t>
      </section>
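As a purely illustrative aside (the 8-byte header layout and the names here are invented, not from any draft): the simple gateway behavior described in the second paragraph reduces to decapsulation in one direction and encapsulation in the other.

```python
# Purely illustrative sketch of the gateway role: a minimal gateway does
# little more than strip the overlay header and hand the inner packet to
# normal forwarding, and the reverse on the way in. The toy 8-byte
# header (3-byte VNID + 5 reserved bytes) is invented for this example.
OVERLAY_HDR_LEN = 8

def to_traditional(overlay_packet):
    """Strip the overlay header; the inner packet gets normal processing."""
    return overlay_packet[OVERLAY_HDR_LEN:]

def to_overlay(inner_packet, vnid):
    """Prepend a toy overlay header carrying the VNID."""
    return vnid.to_bytes(3, "big") + b"\x00" * 5 + inner_packet

pkt = b"inner-ip-packet"
assert to_traditional(to_overlay(pkt, 100)) == pkt  # round trip
```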
        
> >    Overlays are designed to allow a set of VMs to be placed within a
> >    single virtual network instance, whether that virtual network
> >    provides the bridged network or a routed network.

> (typo above: s/the bridged network/a bridged network/ ? )

Thanks.

Thomas

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
