Indeed, the save/restore functionality works only on Intel CPUs and
it's still experimental. We are working to extend save/restore so
that it can be enabled by default in the future, and we are working
on the file format, too.

The warm migration is indeed a wrapper around the save/restore
functionality. However, it doesn't use files; it sends the guest's
state (memory, kernel structures, emulated devices) over the network,
using a socket. Basically, on the source host it does the following:
pause the guest, use the save mechanism to extract the guest's state,
and send that state through the network. On the destination host, we
have the reverse process: receive the guest's state, use the restore
mechanism to restore it, and resume the guest.
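
To make the flow concrete, here is a minimal sketch in C of the
source-side steps, under stated assumptions: vm_pause() and
vm_snapshot_to_buf() are hypothetical stand-ins for bhyve's internal
save mechanism, not the real API, and the actual code in the review
also handles errors, chunking and per-device state. The destination
host would run the mirror image: read the length and the state from
the socket, call the restore mechanism, and resume the guest.

/*
 * Minimal warm-migration sketch, source side (NOT the actual bhyve
 * code). vm_pause() and vm_snapshot_to_buf() are hypothetical
 * stand-ins for bhyve's internal save mechanism.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int vm_pause(void);                              /* stop the guest's vCPUs */
int vm_snapshot_to_buf(void **buf, size_t *len); /* extract guest state */

static void
send_all(int s, const void *buf, size_t len)
{
    const char *p = buf;
    ssize_t n;

    while (len > 0) {
        if ((n = write(s, p, len)) <= 0)
            err(1, "write");
        p += n;
        len -= n;
    }
}

void
migrate_send(const char *dst_ip, uint16_t port)
{
    struct sockaddr_in sin;
    uint64_t hdr;
    void *state;
    size_t len;
    int s;

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        err(1, "socket");
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    sin.sin_addr.s_addr = inet_addr(dst_ip);
    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        err(1, "connect");

    vm_pause();                           /* 1. pause the guest */
    vm_snapshot_to_buf(&state, &len);     /* 2. extract its state */
    hdr = len;
    send_all(s, &hdr, sizeof(hdr));       /* 3. send length header... */
    send_all(s, state, len);              /*    ...then the state itself */
    free(state);
    close(s);
}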

Besides the guest's state, we also check whether the source and
destination hosts are compatible (e.g. they must have the same CPU;
otherwise, the resume on the destination host will fail).
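
As an illustration only (not the actual check from the patch), the
crudest form of such a test could compare a CPU identification string
from the two hosts before any state is sent, e.g. the hw.model sysctl
on FreeBSD; the real checks have to look at the actual CPU features
the saved state depends on, so a single string comparison like this
would not be sufficient in practice.

/*
 * Illustration only: the simplest possible compatibility check,
 * comparing the hosts' CPU model strings. The real checks must cover
 * the CPU features the saved guest state depends on; hw.model alone
 * would not be enough.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <string.h>

static void
get_cpu_model(char *buf, size_t buflen)
{
    size_t len = buflen;

    if (sysctlbyname("hw.model", buf, &len, NULL, 0) != 0)
        err(1, "sysctlbyname(hw.model)");
}

/* remote_model would be exchanged over the migration socket. */
static int
hosts_compatible(const char *remote_model)
{
    char local_model[128];

    get_cpu_model(local_model, sizeof(local_model));
    return (strcmp(local_model, remote_model) == 0);
}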

Elena

On Mon, 25 Jan 2021 at 22:03, Rob Wing <rob.fx...@gmail.com> wrote:
>
> The experimental suspend/resume feature for bhyve exists, but isn't turned on 
> by default. Also, in my testing, it only works with Intel gear.
>
> I understand the migration feature is a wrapper around the suspend/resume 
> functionality.
>
> A question: is the migration feature dependent upon the underlying file 
> format of suspend/resume? If so, something to consider is that there will be 
> additional work to fix the migration feature after the suspend/resume file 
> format is dialed in.
>
> My concern is that building experimental features on top of experimental 
> features could become problematic when the foundation
> of the experimental features is changed.
>
> When suspend/resume was brought in, the commit log made it very explicit that 
> the file format needs to be fleshed out a bit more before the feature is 
> enabled by default.
>
> -Rob
>
> p.s.
>
> part of me thinks some of this should happen at the process level (i.e. 
> suspend the bhyve process, migrate it, and resume it elsewhere) - easier said 
> than done.
>
> On Mon, Jan 25, 2021 at 10:43 AM Matt Churchyard <matt.churchy...@userve.net> 
> wrote:
>>
>> -----Original Message-----
>> From: Elena Mihailescu <elenamihailesc...@gmail.com>
>> Sent: 25 January 2021 14:25
>> To: Matt Churchyard <matt.churchy...@userve.net>
>> Cc: John-Mark Gurney <j...@funkthat.com>; freebsd-virtualization@freebsd.org
>> Subject: Re: Warm Migration feature for bhyve - review on Phabricator
>>
>> On Mon, 25 Jan 2021 at 13:26, Matt Churchyard <matt.churchy...@userve.net> 
>> wrote:
>> >
>> > -----Original Message-----
>> > From: John-Mark Gurney <j...@funkthat.com>
>> > Sent: 25 January 2021 06:21
>> > To: Matt Churchyard <matt.churchy...@userve.net>
>> > Cc: Elena Mihailescu <elenamihailesc...@gmail.com>;
>> > freebsd-virtualization@freebsd.org
>> > Subject: Re: Warm Migration feature for bhyve - review on Phabricator
>> >
>> > Matt Churchyard wrote this message on Fri, Jan 22, 2021 at 10:09 +0000:
>> > > > Hello, all,
>> > >
>> > > > We have recently opened a review on Phabricator for the warm migration 
>> > > > code for > bhyve [1]. Please take a look and let us know if it is 
>> > > > anything we can improve.
>> > >
>> > > > [1] https://reviews.freebsd.org/D28270
>> > >
>> > > > Thank you,
>> > > > Elena
>> > >
>> > > I appreciate that this isn't really related to the current review,
>> > > and commend the work being put into bhyve - it's an invaluable
>> > > addition to FreeBSD. I'm just wondering if any thought has been put
>> > > into the future possibility of transferring disk data during
>> > > migration (i.e. the equivalent of "Storage vMotion").
>> > >
>> > > The current process (from a mile-high overview) effectively seems to
>> > > be the following -
>> > >
>> > > * Start guest on host2 pointing at a shared disk, and halt any
>> > > execution
>> > > * use bhyvectl to pause and begin migration on host1
>> > > * Start the guest on host2
>> > >
>> > > How feasible would it be to run a process such as the following?
>> > > Obviously it would likely need to be orchestrated by
>> > > an external tool, but to me it seems the main requirement is really
>> > > just to be able to provide separate control over the pause and
>> > > migrate steps on host1 -
>> > >
>> > > * send a ZFS snapshot of the running machine to host2
>> > > * start the guest in migrate recv mode on host2
>> > > * pause the guest on host1
>> > > * send a new snapshot
>> > > * initiate the migration of memory/device data
>> > > * start guest on host2
>> > >
>> > > Are there any major complications here I'm not aware of other than the 
>> > > requirement to pause the guest and kick off the state migration as two 
>> > > separate calls?
>> >
>> > > There's also hastd, which can aid with this...
>> >
>> > Thanks for the reply. I've always been wary of the additional
>> > complexity of HAST and ZFS, as that combination doesn't seem to have
>> > widespread usage or support, and things get ugly fast when storage is
>> > involved.
>> >
>> > However, the idea of using HAST on top of zvols to provide
>> > network-mirrored storage for a guest is interesting. It adds a lot of
>> > extra complexity, though, and probably a performance impact, if it's
>> > just for the ability to move a guest between systems, something that
>> > may only happen every now and then. I'm also not sure it would help
>> > (or would at least be even more complex) if I have 4 hosts and want
>> > to be able to move guests anywhere.
>> >
>> > The main reason for the email was to explore my theory that, by
>> > leveraging ZFS, any bhyve user could roll their own storage migration
>> > ability with a few commands as long as the following two (seemingly
>> > minor) abilities were present:
>> >
>> > 1) the ability to suspend a guest without it being done as part of the
>> > migrate call. (I assume suspend/resume support via bhyvectl is planned
>> > anyway, if not already in place at this point)
>> > 2) modification of the migrate call to skip suspend if the guest is 
>> > already suspended.
>> >
>> > The main thing I'm not sure of is whether the migrate call has any
>> > specific reliance on doing the suspend itself (e.g. if it needs to do
>> > anything before the suspend, which will obviously be problematic if
>> > suspend & migrate are called separately), or whether there's something
>> > else I missed that means this is not feasible. I'm not really after
>> > massive changes to the current review to implement disk migration in
>> > bhyve itself.
>>
>> > Thank you for your input related to the disk migration. We were (still
>> > are, actually) focusing on the live migration feature for bhyve and did
>> > not take the disk migration into consideration. As Mihai said, the
>> > patches for the two (guest's state and guest's disk migration) are or
>> > will be quite big by themselves, and we want the review process to go
>> > as smoothly as possible.
>>
>> > After we have a pretty clear view of the live migration implementation
>> > (a patch for this feature will be uploaded on Phabricator in a couple
>> > of weeks) and of how we should improve it, we will look into the disk
>> > migration feature and how it can be implemented for bhyve.
>>
>> > As I said, our objective is to add live migration functionality to
>> > bhyve. It is based on the warm migration implementation, which in turn
>> > is based on the suspend option in bhyve (the snapshotting/suspend
>> > option was added upstream last year; the FreeBSD code must be compiled
>> > with the BHYVE_SNAPSHOT option for it to work).
>>
>> > Elena
>>
>> Thanks for the additional response. I was actually in the middle of
>> writing a followup message to the mailing list, as I may have confused
>> things by using the term "Storage vMotion". It appears that in VMware
>> land it refers to moving the disk data of a running VM and effectively
>> getting the existing running process to switch to the new path, which is
>> a slightly different process and obviously an additional feature of the
>> hypervisor (especially in VMware, seeing as it doesn't rely on any
>> backend file system manager and handles sending disk data and changed
>> blocks itself).
>>
>> I was simply intrigued by the possibility of using the code exactly as
>> it already exists in this review, and whether it would be possible to
>> actually move the data at the same time by squeezing in a zfs send
>> before and between the suspend/migrate steps. (Again, obviously via an
>> external tool; I wouldn't expect bhyve/bhyvectl to get involved with zfs
>> commands.)
>>
>> Matt
>>
>> >
>> > Matt
>> >
>> > --
>> >   John-Mark Gurney                              Voice: +1 415 225 5579
>> >
>> >      "All that I will do, has been done, All that I have, has not."