[... deleted ...]
* Convergence time requirements - obviously highly relevant to hot VM
mobility but not for the cold case;

[Linda] Can you be specific about which issues described by the draft are impacted by the 
"convergence time"?
        If VMs maintain the same IP addresses in the new location, then regardless of "Hot" or 
"Cold" migration with different convergence times, all the issues described in the draft still exist: 
e.g., usage of VLAN IDs, L2 extension, and optimal routing (in-bound & out-bound), etc.

Regular IP routing is more than good enough for cold VM migration. We might need something with guaranteed convergence time for hot VM migration (or decide on a technique where the source NVO3 node temporarily forwards packets to destination NVO3 node).
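The temporary-forwarding alternative mentioned above could be sketched roughly as follows: the source NVO3 node keeps a short-lived "tombstone" entry per moved VM and redirects traffic to the destination node until routing has converged. This is an illustrative model only; the class and method names are hypothetical, not from any draft.

```python
import time

class SourceNodeForwarder:
    """Sketch: after a hot VM move, the source NVO3 node keeps a
    short-lived tombstone that redirects traffic for the moved VM
    to the destination node, bounding the convergence window."""

    def __init__(self, tombstone_ttl=30.0):
        self.tombstone_ttl = tombstone_ttl
        self.tombstones = {}  # vm_mac -> (dest_node, expiry)

    def vm_moved(self, vm_mac, dest_node, now=None):
        now = time.monotonic() if now is None else now
        self.tombstones[vm_mac] = (dest_node, now + self.tombstone_ttl)

    def next_hop(self, vm_mac, now=None):
        """Return the node to forward to, or None once the tombstone
        has expired and regular routing is assumed to have converged."""
        now = time.monotonic() if now is None else now
        entry = self.tombstones.get(vm_mac)
        if entry is None:
            return None
        dest_node, expiry = entry
        if now > expiry:
            del self.tombstones[vm_mac]
            return None
        return dest_node
```

The point of the sketch is that the TTL, not routing-protocol convergence, bounds the traffic-loss window for hot moves; cold moves never need the tombstone at all.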

* ARP cache expiration - after a cold VM move, the hypervisor can
bounce the VM virtual LAN interface (the TCP sessions are gone anyway)
hopefully clearing the ARP cache.

[Linda] With Hot migration, most, if not all, hypervisors today send out a 
gratuitous ARP when the VMs are instantiated in the new place, so the cache 
will have been refreshed.
Besides, you can't depend on the various ARP cache timers in different OSes. Therefore you 
can't say that because it is "cold" migration there is no need to address 
stale cache entries.

The VM you're migrating has an ARP cache in its own TCP/IP stack. In the most generic case the hypervisor cannot influence the contents of that cache. Do I have to spell out the consequences?
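The asymmetry argued above can be made concrete with a toy model (illustrative only; the class and function names are invented): a gratuitous ARP from the moved VM refreshes its neighbours' caches, but the stale entries the moved VM itself holds about those neighbours are untouched, since the hypervisor cannot reach inside the guest's stack.

```python
class ArpCache:
    """Toy ARP cache keyed by IP, with per-entry expiry timestamps."""

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.entries = {}  # ip -> (mac, learned_at)

    def learn(self, ip, mac, now):
        self.entries[ip] = (mac, now)

    def lookup(self, ip, now):
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, learned_at = entry
        if now - learned_at > self.timeout:
            del self.entries[ip]  # entry aged out
            return None
        return mac

def gratuitous_arp(sender_ip, sender_mac, neighbour_caches, now):
    """A gratuitous ARP from the moved VM refreshes the *neighbours'*
    caches; it does nothing for the entries the moved VM itself holds."""
    for cache in neighbour_caches:
        cache.learn(sender_ip, sender_mac, now)
```

Until the guest's own entries age out (or the guest bounces its interface), the VM keeps resolving peers to possibly stale MAC addresses.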

2.1 Terminology
===============
You might want to make the ToR definition more precise in the case of
port/fabric extenders. A reference to the "controlling bridge" from EVB and
802.1BR would probably make the most sense.

[Linda] Do you refer to IEEE 802.1Qbg? The ToR in this draft doesn't mean the 
"controlling bridge". The ToR is the first external switch connecting to 
servers.
Of course, servers could sit behind an embedded blade switch or a virtual switch. The 
"controlling bridge" has a different meaning.

What is the ToR switch in a scenario where a server is connected to a Nexus 2000 Fabric Extender, which is in turn connected to a Nexus 5000? How about a virtual NIC that uses an 802.1Qbg S-component to access a switch?

Also, when defining L2 CUG, it would make sense to specify whether a VM
participating in multiple L2 CUGs has multiple (logical) interfaces
(one per L2 CUG), requiring simple VPNs, or one interface in multiple
L2 CUGs, requiring overlapping VPNs.

[Linda] The draft states " If a given VM is a member of more than one L2-based CUG, 
this VM would have multiple IP addresses, one per each such CUG.".

I was not asking about CUG-to-subnet mappings, I got that. I was asking about CUG-to-interface mappings.
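The distinction being asked about could be modelled as follows (purely illustrative; the names are invented): a VM either has one logical interface per L2 CUG, in which case disjoint, simple VPNs suffice, or a single interface that is a member of several CUGs, which forces overlapping VPNs.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interface:
    """A (logical) VM interface and the L2 CUGs reachable through it."""
    name: str
    cugs: List[str] = field(default_factory=list)

@dataclass
class VM:
    interfaces: List[Interface] = field(default_factory=list)

    def needs_overlapping_vpns(self):
        """One interface in several CUGs -> overlapping VPNs needed;
        one interface per CUG -> simple (disjoint) VPNs suffice."""
        return any(len(ifc.cugs) > 1 for ifc in self.interfaces)
```

The CUG-to-subnet mapping (one IP address per CUG) is orthogonal to this: both models above are compatible with the draft's statement.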

3.1 VLAN IDs
============
I would assume that most people familiar with network virtualization
come to an immediate conclusion that you either need multiple (non-
tagged) interfaces per VM or a VLAN-tagged VM interface, but these
conclusions might be worth documenting.

[Linda] Do you mean each VM has multiple (non-tagged) physical interfaces?

That's how VMs belonging to multiple subnets (load balancers, firewalls, web caches ...) are commonly deployed today. And unless you use hypervisor bypass (which is rarely done unless you work with SR-IOV or something similar), the VMs have virtual, not physical, interfaces.
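The two options for multi-subnet VMs can be sketched as a frame-demultiplexing decision in the virtual switch (a toy model; the port-configuration shape is invented for illustration): either the hypervisor presents one untagged access port per subnet, or it trunks a VLAN-tagged vNIC and the VM runs subinterfaces itself.

```python
def demux_frame(frame, port_config):
    """Sketch: map an incoming frame to a VM interface.

    port_config is either
      {"mode": "access", "iface": "eth0"}                # untagged vNIC
      {"mode": "trunk", "vlans": {10: "eth0.10", ...}}   # VLAN-tagged vNIC
    frame is a dict with an optional 'vlan' key.
    """
    if port_config["mode"] == "access":
        # One untagged interface per subnet: the vSwitch strips any
        # tagging before the VM ever sees the frame.
        return port_config["iface"]
    # Trunked vNIC: the VM itself demultiplexes on VLAN subinterfaces.
    return port_config["vlans"].get(frame.get("vlan"))
```

Either way the per-VM interface count or the per-interface VLAN handling has to be documented, which is the conclusion the draft could make explicit.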

Ivan
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3