Thomas, Murari, David, Dinesh, and Lawrence,

I think the NVO3 problem statement draft has done a good job of capturing 
the problems facing modern data centers with massive numbers of VMs and the 
requirement to make them scale.

However, the "nvo3 problem statement draft" hasn't covered the problems 
introduced by VM mobility in an overlay environment.

To support tens of thousands of virtual networks, the local VID associated 
with the client payload under each NVE has to be locally significant. When 
VMs associated with Virtual Network X, which uses VID 120 under NVE1, are 
moved to NVE2, VID 120 might already be in use by other virtual networks 
under NVE2.
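To make the collision concrete, here is a minimal sketch of per-NVE local VID 
allocation (the class and policy are purely illustrative, not from any NVO3 
draft): each NVE keeps its own VID-to-VN table, so the same VID can denote 
different virtual networks on different NVEs, and a moving VM's VID cannot 
simply follow it.

```python
# Hypothetical sketch: each NVE maintains its own locally significant
# VID -> virtual-network mapping. Names and policy are illustrative only.

class NVE:
    def __init__(self, name):
        self.name = name
        self.vid_to_vn = {}  # locally significant VID -> VN identifier

    def attach(self, vn, preferred_vid):
        # Reuse the preferred VID only if it is still free on this NVE;
        # otherwise pick the lowest unused VID (simple policy for the sketch).
        if preferred_vid not in self.vid_to_vn:
            vid = preferred_vid
        else:
            vid = next(v for v in range(1, 4095) if v not in self.vid_to_vn)
        self.vid_to_vn[vid] = vn
        return vid

nve1, nve2 = NVE("NVE1"), NVE("NVE2")
nve1.attach("VN-X", 120)   # VN X uses VID 120 under NVE1
nve2.attach("VN-Y", 120)   # but VID 120 already belongs to VN Y under NVE2

# When a VN-X VM moves to NVE2, VID 120 cannot move with it;
# NVE2 must hand out a different local VID for VN X.
new_vid = nve2.attach("VN-X", 120)
```

The point of the sketch is only that some remapping function must exist at 
every NVE a VM can move to; the draft referenced below discusses the actual 
implications.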

The "nvo3 problem statement" hasn't addressed the scenario where data frames 
from VMs already carry a VID, which is not uncommon in data centers, e.g. for 
applications on different subnets. Those encoded VIDs move with the VMs, 
effectively making them globally significant.

If the ingress NVE simply encapsulates an outer header onto data frames 
received from VMs and forwards the encapsulated frames to the egress NVE via 
the underlay network, the egress NVE can't simply decapsulate the outer 
header and send the decapsulated frames to the attached VMs, as TRILL does.
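A small sketch of why TRILL-style decap-and-forward breaks down here 
(the Frame layout and mapping table are assumptions for illustration, not 
from any spec): because the inner VID is only locally significant, the 
egress NVE must also translate it, not just strip the outer header.

```python
# Hypothetical sketch: egress NVE processing when the inner frame
# carries a VID. Frame fields and the vn_to_local_vid table are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Frame:
    vn_id: int       # virtual-network identifier from the outer header
    inner_vid: int   # VID the VM encoded in the payload frame
    payload: bytes

def egress_decap(frame, vn_to_local_vid):
    # TRILL-style behavior would forward the inner frame unchanged.
    # With locally significant VIDs, the egress NVE must rewrite
    # inner_vid to the value this NVE assigned to the virtual network.
    local_vid = vn_to_local_vid[frame.vn_id]
    return Frame(frame.vn_id, local_vid, frame.payload)

# Egress NVE2 maps VN 5001 to its local VID 300, even though the sending
# VM (moved from under NVE1) still encodes VID 120 in its frames.
out = egress_decap(Frame(vn_id=5001, inner_vid=120, payload=b"data"),
                   {5001: 300})
```

The stale VID 120 from the VM is replaced by the egress NVE's local VID 300 
before the frame reaches the attached VMs.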


When a VM communicates with peers in different subnets, all the egress NVEs 
will be at the gateway router. That places a tremendous load on the gateway 
router's NVE, and all the local VIDs have to be terminated by the gateway 
routers. If multiple gateway routers are used, with each handling a subset of 
tenants in the data center, then each tenant's VMs are only reachable via 
their designated routers or router ports, which limits the range of VM 
mobility.

Therefore, I think that the problems identified by 
http://www.ietf.org/id/draft-dunbar-nvo3-overlay-mobility-issues-00.txt should 
be merged into the "nvo3 problem statement" draft.

Thanks, Linda Dunbar

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3