That's not enough resolution

Cold migration means that I am not maintaining TCP sessions. It may mean I even 
have the opportunity to bring interfaces up and down to apply new 
ipaddr/netmask/gateway info. It may mean that I reboot the VM. It may also mean 
the VM comes up in a "quarantine" area for validation before promotion back to 
a production state.
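Those options can be sketched as a simple orchestration sequence. This is illustrative pseudocode only; every step and function name below is hypothetical, not a real hypervisor or orchestration API:

```python
# Hypothetical cold-migration sequence; all names are illustrative.

def cold_migrate(vm, validate=lambda vm: True):
    """Return the ordered steps a cold migration might take."""
    steps = [
        "shutdown",              # TCP sessions are not preserved
        "move_to_target_site",
        "reconfigure_network",   # apply new ipaddr/netmask/gateway
        "boot_in_quarantine",    # validate before promotion
    ]
    if validate(vm):
        steps.append("promote_to_production")
    return steps
```

The quarantine step is what buys the flexibility: the VM only gets promoted back to a production state after validation passes on the new site.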

And yes, it means my distance limitation goes from 10ms to 350ms, which is a 
big deal. But the above details matter very much, as they open up many options.

---> sidebar

As a side note, VMs would not be shut down to wait for a data sync. A data 
sync with no seed could take 12 hours or more.

An asynchronous copy/snapshot technology is employed to get "most" of the 
data to the new site while the VM is still running. Then you can do a 
sync->suspend->true-up->unsuspend-on-the-other-side maneuver, or a 
shutdown->true-up->bring-up-on-the-other-side. In many cases you will have been 
distributing snapshots to many locations for a long time, giving you the chance 
to just reappear somewhere else as a clone even if the VM fell into a black hole.
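A minimal sketch of that sync->suspend->true-up->unsuspend maneuver, modeling disks as block maps. Everything here is an illustrative assumption; the real work happens in the storage layer:

```python
# Illustrative model of asynchronous bulk copy plus a final "true-up".
# Disks are modeled as {block_number: data} dicts; names are hypothetical.

def migrate_storage(source_disk, writes_during_copy):
    # Phase 1: asynchronous bulk copy while the VM is still running;
    # this gets "most" of the data to the new site.
    dest_disk = dict(source_disk)

    # Writes that land during the bulk copy dirty some blocks on the source.
    for block, data in writes_during_copy:
        source_disk[block] = data

    # Phase 2: suspend the VM and "true up" only the dirty blocks.
    dirty_blocks = {block for block, _ in writes_during_copy}
    for block in dirty_blocks:
        dest_disk[block] = source_disk[block]

    # Phase 3: unsuspend on the other side; both disks now match.
    return dest_disk
```

The delta copied in phase 2 is tiny compared to a full sync with no seed, which is why the suspend window stays short.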

On Aug 28, 2012, at 8:11 AM, Linda Dunbar <[email protected]> wrote:

> Robert, 
> 
> The only difference I see between "Cold" migration and "Hot" migration is 
> the distance over which VMs can be moved from their original server racks. 
> 
> For "Cold" migration, VMs can be shut down, wait for their needed storage 
> to be moved to the new location, and then be re-instantiated. 
> 
> For "Hot" migration, the VMs can only be moved to racks which still have 
> access to their image and data. Most likely they are not too far. 
> 
> Linda Dunbar
> 
> 
> 
>> -----Original Message-----
>> From: Robert Raszuk [mailto:[email protected]]
>> Sent: Saturday, August 25, 2012 10:55 AM
>> To: Ivan Pepelnjak
>> Cc: Linda Dunbar; Black, David; [email protected]
>> Subject: Re: [nvo3] Let's refocus on real world
>> 
>> Ivan,
>> 
>>> ... or I may be completely wrong.
>> 
>> I think you are actually spot-on correct.
>> 
>> However I am afraid the authors of this document are not likely to
>> admit that TOR switches should be just basic IP nodes providing only
>> transport between servers.
>> 
>> Likewise they are not likely to admit that all logic of encapsulation
>> should happen on the hypervisors, as they are simply not in that
>> technology space.
>> 
>> Similarly, I very much agree with and support providing a clear
>> distinction between "cold" and "hot" VM mobility cases, and perhaps
>> even further providing a number of sub-classes of how hot VM mobility
>> can be accomplished today - clearly there is more than one way.
>> 
>> Also, as you have pointed out, the storage discussion cannot just be
>> swept under the carpet and addressed by the quote: "storage issues are
>> out of the scope".
>> 
>> While Linda was perhaps right to say that most storage today comes to
>> servers via the back end, this is what I would call a very inefficient
>> and legacy approach. If we are to think ahead, one needs to observe how
>> the industry is advancing storage virtualization via front-end IP, very
>> often not co-located with the compute racks.
>> 
>> In my view the network-related mobility discussion is not about the TOR
>> or about VLANs. It is about an IP layer above the IP transport which
>> would carry all necessary information about the actual location of the
>> VMs, and which in fact would play the main role in shortening or
>> eliminating the triangular routing problem.
>> 
>> Rgs,
>> R.
>> 
>> 
>> 
>>> On 8/24/12 11:11 PM, Linda Dunbar wrote:
>>> [...]
>>> 
>>>> But most, if not all, data centers today don't have the Hypervisors
>>>> which can encapsulate the NVo3-defined header. The deployment to 100%
>>>> NVo3-header-based servers won't happen overnight. One thing is for
>>>> sure: you will see data centers with mixed types of servers for a
>>>> very long time.
>>>> 
>>>> If NVEs are in the ToR, you will see a mixed scenario of blade servers,
>>>> servers with simple virtual switches, or even IEEE 802.1Qbg's VEPA. So
>>>> it is necessary for NVo3 to deal with the "L2 Site" defined in this
>>>> draft.
>>> 
>>> There are two hypothetical ways of implementing NVO3: existing layer-2
>>> technologies (with well-known scaling properties that prompted the
>>> creation of the NVO3 working group) or something-over-IP encapsulation.
>>> 
>>> I might be myopic, but from what I see most data centers today (at least
>>> based on market shares of individual vendors) don't have ToR switches
>>> that would be able to encapsulate MAC frames or IP datagrams in UDP, GRE
>>> or MPLS envelopes. I am not familiar enough with the commonly used
>>> merchant silicon hardware to understand whether that's a software or
>>> hardware limitation. In any case, I wouldn't expect switch vendors to
>>> roll out NVO3-like something-over-IP solutions any time soon.
>>> 
>>> On the hypervisor front, VXLAN has been shipping for months, NVGRE is
>>> included in the next version of Hyper-V, and MAC-over-GRE is available
>>> (with Open vSwitch) for both KVM and Xen. Open vSwitch is also part of
>>> the standard Linux kernel distribution and thus available to any other
>>> Linux-based hypervisor product.
>>> 
>>> So: all major hypervisors have MAC-over-IP solutions, each one using a
>>> proprietary encapsulation because there's no standard way of doing it,
>>> and yet we're spending time discussing and documenting the history of
>>> evolution of virtual networking. Maybe we should be a bit more
>>> forward-looking, acknowledge the world has changed, and come up with a
>>> relevant hypervisor-based solution.
>>> 
>>> Furthermore, performing something-in-IP encapsulation in the hypervisors
>>> greatly simplifies the data center network, removes the need for
>>> bridging (each ToR switch can be a L3 switch) and all associated
>>> bridging kludges (including large-scale bridging solutions). Maybe we
>>> should remember that "Perfection is achieved, not when there is nothing
>>> more to add, but when there is nothing left to take away" along with a
>>> few lessons from RFC 3439.
>>> 
>>> I am positive a decade from now we'll see ancient servers still using
>>> VLAN-only hypervisor switches (or untagged interfaces), so there might
>>> definitely be a need for an NVO3-to-VLAN gateway, but we shouldn't
>>> continuously focus our efforts on something that's probably going to be
>>> a rare corner case a few years from now.
>>> 
>>> ... or I may be completely wrong. Wouldn't be the first time.
>>> Ivan
>>> _______________________________________________
>>> nvo3 mailing list
>>> [email protected]
>>> https://www.ietf.org/mailman/listinfo/nvo3
>>> 
>>> 
> 
