Hi Linda,  comments inline. Best regards -- aldrin

On Tuesday, July 3, 2012, Linda Dunbar wrote:

 Aldrin,

Thanks for sharing the nice design figure.

A few comments on the figure:

1) What about VMs (or TESs) in a DC which are connected by IPSec? If
you purchase Private Virtual Networks from Amazon's EC2, you can only
connect to your VPC (VMs) via IPSec.

This is as Truman mentioned in an earlier email.  The idea is not to
restrict the entry point of the IPSec connection.  As a matter of fact, if
the tenant has multiple subnets/VNs, only a single IPSec tunnel is needed,
as the illustration depicts.

2) What about the VMs under your "Overlay Module" which are not
part of an L3VPN or L2VPN? Are they terminated by the GW or terminated at the
"Overlay Module"?

The far-left and far-right NVEs are NVE-only (no hypervisors), while the NVEs
at the bottom are combined hypervisor/NVEs.  All the VMs in the illustration
are connected to a VNI through a "bump-in-the-virtual-wire" virtual
firewall. All the VNIs are members of at least one VN. I'm not entirely
clear on what you mean by "terminated by the GW".  From a network
perspective, I see each end station as terminating at a VNI.


3) In your picture, the L3VNIs/L2VNIs are all terminated at the Overlay
Modules. Do you mean that L3VPN/L2VPN encapsulations are actually done by
the "overlay module"? If yes, then existing L3VPN/L2VPN mechanisms or
solutions are already defined. What else is needed then?

Here [I believe] I am using the models outlined in Marc's framework draft.
 The transport tunnel encapsulation is handled by the Overlay Tunnel.  The
Ethernet/IP encapsulation is done by the end station.  The VNI is simply a
forwarding table that implements some match+action rules.
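To make the "forwarding table with match+action rules" point concrete, here is a minimal sketch of a VNI as such a table. This is purely illustrative; the class name, the flat dict keyed on destination MAC, and the action tuples (`"local_port"`, `"tunnel"`, `"flood"`) are my assumptions, not anything from Marc's draft. Note the VNI itself only picks an action; the Overlay Tunnel would handle the transport encapsulation for the `"tunnel"` action, and the end station has already done the Ethernet/IP encapsulation.

```python
class VNI:
    """One tenant Virtual Network Instance on an NVE, modeled as a
    match+action forwarding table (illustrative sketch only)."""

    def __init__(self, vn_id):
        self.vn_id = vn_id
        self.table = {}  # match key (dest MAC) -> action tuple

    def learn(self, dest_mac, action):
        """Install a match+action rule (e.g. pushed by a control plane
        or learned from the data plane)."""
        self.table[dest_mac] = action

    def forward(self, dest_mac):
        """Match on destination MAC; the default action floods
        within the VN."""
        return self.table.get(dest_mac, ("flood", self.vn_id))


# Example: one locally attached VM and one VM behind a remote NVE
# reached via an overlay tunnel (names are made up for illustration).
vni = VNI(vn_id=5001)
vni.learn("00:aa:bb:cc:dd:01", ("local_port", "vport3"))
vni.learn("00:aa:bb:cc:dd:02", ("tunnel", "nve2"))

print(vni.forward("00:aa:bb:cc:dd:02"))   # ('tunnel', 'nve2')
print(vni.forward("ff:ff:ff:ff:ff:ff"))   # ('flood', 5001)
```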
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
