Hi Thomas, 

>> I think the problem is a matter of perspective/experience.  If the
>> perspective of NVO3 is through the lens of VLAN/VXLAN/NVGRE/VNET/OTV
>> where the VNID has global/network/domain-wide scope and is used
>> as-is on the wire, then your questions are quite reasonable as they
>> are based on the limitations of these solutions.  However if the
>> perspective is through the lens of IPVPN and E-VPN then the
>> restrictions of one VN-per-VNI and one VNI-per-VN-per-NVE are not
>> reasonable since IPVPN/EVPN inherently have support for multiple
>> VN-per-VNI and multiple VNI-per-VN-per-NVE.
> 
> Let me respin the above a bit differently. If you approach the problem
> (as I did) from a DC centric view, where the basic connectivity model
> is all nodes on a specific VN have connectivity to other nodes on the
> VN, then the idea of some VNs not being able to communicate with other
> VNs (by policy) doesn't seem like a particularly important feature to
> have. Or, if useful, whether that can just be done outside of NVO3 or
> whether it needs to be part of the NVO3 framework.

Larry points out that VXLAN (a "one-VN-per-VNI" tunneling protocol) cannot 
support PVLAN, which is quite common in data centers.  As you are also alluding 
to, Larry suggests that the function of restricting connectivity between 
members of such a VN might belong to a firewall function rather than to NVO3 
solutions.  What he suggests is possible if you push the firewall function to 
every VAP/VIF.  Interestingly, we already see this happening with OpenFlow and 
Open vSwitch, which I suppose would be considered an NVE.  I happen to know of 
other commercial solutions that can do this as well.  Aside from the ARP issue, 
one could technically have a single giant subnet per tenant, with edge virtual 
firewalls not only enforcing the intended network-level reachability (CUGs) but 
going further and enforcing application-level reachability.  It sounds like the 
role of NVO3 would then boil down to dealing only with the overlapping address 
space issue between endpoints, with the firewalls doing the detail work.  
Enterprise clouds could just use one giant subnet with edge virtual firewalls, 
which might indeed be better than multiple smaller VNs connected by gateways or 
multiple interfaces on endpoints.  This might very well be how it would be done 
in the simple one-VN-per-VNI world.
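To make the "giant subnet with edge firewalls" idea concrete, here is a minimal 
Python sketch of CUG enforcement at a per-VAP firewall.  Everything here (the 
CUGS table, the addresses, the function name) is purely illustrative, not from 
any NVO3 draft:

```python
# Hypothetical sketch: one flat tenant subnet, with reachability (CUGs)
# enforced entirely by a firewall check at each VAP/VIF rather than by
# carving the tenant into multiple VNs.

# Closed user groups: an endpoint may only talk to endpoints with which
# it shares at least one CUG.  Membership data is made up for illustration.
CUGS = {
    "web": {"10.0.0.1", "10.0.0.2"},
    "db":  {"10.0.0.2", "10.0.0.3"},
}

def vap_firewall_permits(src_ip: str, dst_ip: str) -> bool:
    """Permit a packet only if src and dst share at least one CUG."""
    return any(src_ip in members and dst_ip in members
               for members in CUGS.values())

print(vap_firewall_permits("10.0.0.1", "10.0.0.2"))  # True (both in "web")
print(vap_firewall_permits("10.0.0.1", "10.0.0.3"))  # False (no shared CUG)
```

In this model the overlay only has to keep tenants' overlapping address spaces 
apart; all finer-grained reachability policy lives in the edge firewalls.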

> What I think may be happening here is an assumption that some (or all)
> of those existing BGP VPN features apply and/or are also required in a
> DC environment.  I don't think we should do that. For each feature, we
> need to make the case that it is useful and a requirement for NVO3, not
> just assume that NVO3 inherits all requirements from existing VPN
> solutions. We should not assume that DC based overlays have the same
> requirements as WAN-oriented VPN solutions.

I suppose in the DC, as I point out above, we could use firewalls to do what we 
do with VPNs in the WAN -- certainly where we need PVLAN.

>> This is indeed in my illustration.  I'm resending it.  If you look
>> at NVE1 and NVE2 you will see that the 2 L2VNI on each of these NVE
>> share a "DC<n> mesh" L2VN.  The illustration depicts both multiple
>> VN per VNI as well as multiple VNI on a single NVE being members of
>> the same VN.
> 
> I'm having a problem understanding the terminology being used here.
> 
> From the framework doc:
> 
>        VN: Virtual Network. This is a virtual L2 or L3 domain that belongs 
>        to a tenant.
>       
>        VNI: Virtual Network Instance. This is one instance of a virtual 
>        overlay network. Two Virtual Networks are isolated from one another 
>        and may use overlapping addresses. 
> 
> A VNI is one specific VN instance.
> 
> I do not understand how one can have multiple VNIs associated with a
> single VN, it sort of contradicts the definition of a VNI.
> 
> What do you mean when you use the term VNI?
> 

A VNI is a table populated with match-action rules.  Protocols such as OpenFlow 
would be used to add/remove/modify these rules.  A Virtual Network (VN) allows 
a group of endpoints (VMs, not NVEs) to reach one another.  The VN is created 
by match-forward rules in the VNIs to which those endpoints are attached.  Some 
might call this set of endpoints a CUG.  This is my view of VN and VNI.  
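To illustrate this view, here is a minimal Python sketch of a VNI as a 
match-action table.  The class, rule keys, and port names are all hypothetical, 
for illustration only; a real NVE would match on far more than destination MAC:

```python
# Sketch of the VNI-as-match-action-table view described above.
# Names are illustrative; matching is on destination MAC for simplicity.

class VNI:
    """A VNI modeled as a table of match-action rules.

    A control protocol (e.g. OpenFlow) would be what actually
    adds/removes/modifies these rules on the NVE.
    """

    def __init__(self):
        self.rules = {}  # match key (dst MAC) -> action (output port/tunnel)

    def add_rule(self, dst_mac, action):
        self.rules[dst_mac] = action

    def remove_rule(self, dst_mac):
        self.rules.pop(dst_mac, None)

    def lookup(self, dst_mac):
        # No matching rule -> drop: endpoints are reachable (i.e. in the
        # same VN/CUG) only if a match-forward rule makes them so.
        return self.rules.get(dst_mac, "drop")


# Two endpoints placed in the same VN by forward rules in this VNI:
vni = VNI()
vni.add_rule("aa:aa:aa:aa:aa:01", "vap1")
vni.add_rule("aa:aa:aa:aa:aa:02", "tunnel-to-nve2")

print(vni.lookup("aa:aa:aa:aa:aa:02"))  # tunnel-to-nve2
print(vni.lookup("bb:bb:bb:bb:bb:01"))  # drop (not a member of this VN)
```

Under this reading, nothing in the table itself ties a VNI to exactly one VN: 
which endpoints form a VN is simply a consequence of which rules are installed.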

Best -- aldrin
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
