Re: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Elena Mihailescu
Indeed, the save/restore functionality works only for Intel CPUs and
it's experimental. We are working to extend the save/restore
functionality so it can be enabled by default in the future. We are
working on the file format, too.
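For reference, the experimental save/restore is driven from the command
line roughly as follows (a sketch only; the flag spellings below follow
the snapshot feature as merged upstream and may differ between versions):

    # save a running guest's state to a file and suspend it
    bhyvectl --vm=myguest --suspend=/vms/myguest.ckp
    # restore it later: start bhyve with the same device/slot layout, plus -r
    bhyve -c 2 -m 4G -s 0,hostbridge -s 2,virtio-blk,/vms/myguest.img \
        -s 31,lpc -l com1,stdio -r /vms/myguest.ckp myguest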

Warm migration is indeed a wrapper around the save/restore
functionality. However, it doesn't use files; it sends the guest's state
(memory, kernel structures, emulated devices) over the network, using
a socket. Basically, the work does the following on the source host:
pause the guest, use the save mechanism to extract the guest's
state, and send the state through the network. On the destination host,
we have the reverse process: receive the guest's state, use the restore
mechanism to restore it, and resume the guest.
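In other words, the intended usage looks roughly like this (the flag
names here are illustrative only; the exact command-line syntax is what
is being defined in the review):

    # destination host: start the guest in migration-receive mode
    bhyve -c 2 -m 4G ... -R 24983 myguest             # hypothetical receive flag

    # source host: pause the guest, stream its state, hand off execution
    bhyvectl --vm=myguest --migrate=dest-host,24983   # hypothetical flag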

Besides the guest's state, we check whether the source and destination
hosts are compatible (e.g. they have to have the same CPU; otherwise,
the resume on the destination host will fail).
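A rough pre-flight check an orchestrator could already do today
(illustrative only; the migration code does its own compatibility check):

    # the hosts should report the same CPU model, or the restore may fail
    src=$(ssh host1 sysctl -n hw.model)
    dst=$(ssh host2 sysctl -n hw.model)
    [ "$src" = "$dst" ] || echo "CPU mismatch: migration will likely fail"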

Elena

On Mon, 25 Jan 2021 at 22:03, Rob Wing  wrote:
>
> [...]

Re: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Rob Wing
The experimental suspend/resume feature for bhyve exists, but isn't turned
on by default. Also, in my testing, it only works with Intel gear.

I understand the migration feature is a wrapper around the suspend/resume
functionality.

A question, is the migration feature dependent upon the underlying file
format of suspend/resume? If so, something to consider is that there will
be additional work to fix the migration feature after the suspend/resume
file format is dialed in.

My concern is that building experimental features on top of experimental
features could become problematic when the foundation
of the experimental features is changed.

When suspend/resume was brought in, the commit log made it very explicit
that the file format needs to be fleshed out a bit more before being
enabled by default.

-Rob

p.s.

part of me thinks some of this should happen at the process level (i.e.
suspend the bhyve process, migrate it, and resume it elsewhere) - easier
said than done.

On Mon, Jan 25, 2021 at 10:43 AM Matt Churchyard 
wrote:

> [...]

RE: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Matt Churchyard
-Original Message-
From: Elena Mihailescu  
Sent: 25 January 2021 14:25
To: Matt Churchyard 
Cc: John-Mark Gurney ; freebsd-virtualization@freebsd.org
Subject: Re: Warm Migration feature for bhyve - review on Phabricator

[...]

Thanks for the additional response. 

[Bug 253004] Win 10 guest fails to restart unless I --destroy

2021-01-25 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=253004

--- Comment #1 from berger...@yahoo.co.uk ---
Yes, the bug mentioned above (and patched successfully more than a year ago, I
believe) happened on first boot and prevented bhyve from booting at all with
certain PCI cards.

This thing, however, only happens when trying to restart the bhyve guest (Win
10 in my case).



[Bug 253004] Win 10 guest fails to restart unless I --destroy

2021-01-25 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=253004

    Bug ID: 253004
   Summary: Win 10 guest fails to restart unless I --destroy
   Product: Base System
   Version: 13.0-STABLE
  Hardware: amd64
        OS: Any
    Status: New
  Severity: Affects Only Me
  Priority: ---
 Component: bhyve
  Assignee: virtualizat...@freebsd.org
  Reporter: berger...@yahoo.co.uk

In 13.0-ALPHA2 I still have this problem:
Running a Win 10 guest with 2 passthrough devices (USB controller + NIC), I
can't start the guest after shutdown. I get this error:

Assertion failed: (error == 0), function modify_bar_registration, file
/usr/src/usr.sbin/bhyve/pci_emul.c, line 501.
.twm/programs/./bhyve-hda.sh: line 13: 90467 Abort trap
bhyve -S -c sockets=1,cores=2,threads=2 -m 6G -H -w -s 0,hostbridge \
  -s 3,passthru,7/0/1 -s 4,virtio-blk,/bhyve/win-alt.img,sectorsize=512/4096 \
  -s 6,hda,rec=/dev/dsp,play=/dev/dsp -s 8,passthru,12/0/0 \
  -s 29,fbuf,tcp=0.0.0.0:5900,w=1600,h=900,wait -s 30,xhci,tablet -s 31,lpc \
  -l com1,stdio -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd windows-10

Interestingly, this particular error was reported and addressed by a
patch, so it shouldn't happen again. But it comes back when I restart the
guest after shutdown.

It was mentioned here, along with the patch:
http://freebsd.1045724.x6.nabble.com/Windows-10-guests-fail-to-boot-when-attempting-to-passthrough-network-card-td6330452.html

Solution: the only one that works is to run:
# bhyvectl --vm=$my_vm_name --destroy
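A small wrapper can automate that workaround until the bug is fixed
(sketch only; the VM name and script path are taken from the trace above):

    #!/bin/sh
    # clear the stale VM instance left behind by the previous run, then relaunch
    vm=windows-10
    bhyvectl --vm=$vm --destroy 2>/dev/null   # ignore error if nothing to destroy
    exec sh .twm/programs/bhyve-hda.sh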



Re: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread John-Mark Gurney
Matt Churchyard wrote this message on Mon, Jan 25, 2021 at 10:46 +:
> -Original Message-
> From: John-Mark Gurney  
> Sent: 25 January 2021 06:21
> To: Matt Churchyard 
> Cc: Elena Mihailescu ; 
> freebsd-virtualization@freebsd.org
> Subject: Re: Warm Migration feature for bhyve - review on Phabricator
> 
> Matt Churchyard wrote this message on Fri, Jan 22, 2021 at 10:09 +:
> > [...]
> 
> > There's also hastd which can aid with this...
> 
> Thanks for the reply. I've always been wary of the additional complexity of 
> HAST and ZFS, as it doesn't seem to have widespread usage or support, and 
> things get ugly fast when storage is involved.

Totally agree...

> However, the idea of using HAST on top of zvols to provide network mirrored 
> storage for a guest is interesting. It adds a lot of extra complexity, and 
> probably performance impact though if it's just for the ability to move a 
> guest between systems that may only happen every now and then. I'm also not 
> sure it would help (or would at least be even more complex) if I have 4 hosts 
> and want to be able to move guests anywhere.

gmirror + ggate is another option as well...
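For the record, that option would look roughly like this (host names,
the backing zvol path, and the export policy are all illustrative; see
ggated(8), ggatec(8) and gmirror(8) for the real details):

    # hostB (storage peer): export the backing zvol over the network
    echo "10.0.0.0/24 RW /dev/zvol/zroot/vms/guest" > /etc/gg.exports
    ggated

    # hostA (runs the guest): attach the remote disk, mirror it with the local one
    ggatec create -o rw hostB /dev/zvol/zroot/vms/guest   # creates /dev/ggate0
    gmirror label -v gm-guest /dev/zvol/zroot/vms/guest /dev/ggate0
    # then point bhyve at /dev/mirror/gm-guest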

> The main reason for the email was to explore my theory that, by leveraging 
> ZFS, any bhyve user could roll their own storage migration ability with a few 
> commands as long as the following two (seemingly minor) abilities were present
> 
> 1) the ability to suspend a guest without it being done as part of the 
> migrate call. (I assume suspend/resume support via bhyvectl is planned 
> anyway, if not already in place at this point)

Yeah, I'd hope that there would be the ability to insert custom commands
in between the suspend on one host and the resumption on the new host.
But it is good to explicitly state this.

> 2) modification of the migrate call to skip suspend if the guest is already 
> suspended.
> 
> The main thing I'm not sure of is whether the migrate call has any specific 
> reliance on doing the suspend itself (e.g. if it needs to do anything before 
> the suspend, which will obviously be problematic if suspend & migrate are 
> called separately). Or if there's something else I missed that means this is 
> not feasible. I'm not really after massive changes to the current review to 
> implement disk migration in bhyve itself.

+1

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."


Re: RHEL virtualization

2021-01-25 Thread John Kennedy
On Sat, Jan 23, 2021 at 03:14:53PM -0800, John Kennedy wrote:
> At work, we have RHEL (-ish; some RHEL, some CentOS, some OEL).  Mostly v7,
> some v8.  Since I'm doing the Covid work-from-home telecommute, I'm trying to
> recreate some of my work infrastructure while trying to plan a bit towards
> the future (migrating a lot of VMs to Azure).
> 
> What I'd like to recreate is my existing kickstart infrastructure, where I
> PXE boot the system, feed it anaconda goodness which dovetails into puppet
> and I can generate a clean system from a template.  Works great for VMWare
> and HyperV, not so much for Azure but if I can generate a snapshot disk
> image Azure can ingest, I'll be happy on that score.
> 
> I've been very happy with bhyve for FreeBSD.  I messed with VirtualBox for
> a while (a long time ago), but with my tendency to track stable (think:
> kernel modules) and keep very current on ports-from-source (frequent
> package updates, upon which VirtualBox has MANY dependencies) made that a
> poorer experience than I had with it on Windows.  I've been very happy with
> bhyve since it's basically baked right in.

  Let me restate some of this in a different way to maybe prompt some more
thinking.

  Using the BHYVE_UEFI.fd from uefi-edk2-bhyve, I can boot my OEL8 (RHEL8
clone).  That currently worries me because it has the big python-2.7 warning
on it (as does uefi-edk2-bhyve-csm).  On physical boxes, I've been able to
grab a PXEBOOT ISO when the firmware lacks PXE booting, but I haven't gotten
that to work yet for these guests.  Those python worries are basically what
is driving me to look elsewhere (like fighting with grub-bhyve, and away from
the only UEFI booting that I know about).
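For what it's worth, the usual grub-bhyve invocation, as far as I
understand it, looks roughly like this (paths, sizes, and the partition
the distro's grub lives on are guesses for an OEL8 image):

    # device.map tells grub which file backs (hd0)
    printf '(hd0) /vms/oel8.img\n' > /vms/oel8.map

    # load the guest's grub/kernel into the VM, then launch bhyve on the same disk
    grub-bhyve -m /vms/oel8.map -r hd0,1 -M 4096M oel8
    bhyve -c 2 -m 4096M -A -H -P -s 0,hostbridge -s 2,virtio-blk,/vms/oel8.img \
        -s 31,lpc -l com1,stdio oel8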


  I personally like PXE-booting a new system (and possibly making a gold image
from that, depending on what I'm doing) because it basically answers that
little auditor voice in the back of my head asking how, in the event of some
possible security problem, I know that my backups haven't been compromised.  In
all of those gigabytes, after all of the toxic recursive mindless non-logic,
how do you *know*?  My happy answer to myself is: "here is a configuration
file that I can review, all the binaries are on the vendor's site or
re-downloaded, here are the puppet customization rules, blam!  done!
10 minutes later I have a clean system."

  In any case, that is why I'm chasing PXE booting, although I'd be interested
in the way other people solve that problem.  That really doesn't work that
way in Azure, thus the gold images approach I'll probably have to take with
them in the future.



Re: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Elena Mihailescu
On Mon, 25 Jan 2021 at 13:26, Matt Churchyard
 wrote:
>
> [...]

Thank you for your input related to the disk migration. We were (still
are, actually) focused on the live migration feature for bhyve and
did not take disk migration into consideration. As Mihai said, the
patches for the two (guest's state and guest's disk migration) are or
will be quite big by themselves, and we want the review process to go
as smoothly as possible.

After we have a pretty clear view of the live migration implementation
(a patch on this feature will be uploaded on Phabricator in a couple
of weeks) and how we should improve it, we will look into the disk
migration feature and how it can be implemented for bhyve.

As I said, our objective is to add live migration functionality to
bhyve. It is based on the warm migration implementation, which in
turn is based on the suspend option in bhyve (the snapshotting/suspend
option was added upstream last year; the FreeBSD code must be
compiled with the BHYVE_SNAPSHOT option for it to work).
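For anyone who wants to try it, the build knob is, to my knowledge, the
WITH_BHYVE_SNAPSHOT setting in src.conf(5); a sketch:

    # /etc/src.conf
    WITH_BHYVE_SNAPSHOT=yes

    # then rebuild and reinstall as usual
    cd /usr/src
    make buildworld buildkernel
    make installkernel   # reboot, then:
    make installworld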

Elena

RE: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Matt Churchyard
-Original Message-
From: Miroslav Lachman <000.f...@quip.cz> 
Sent: 25 January 2021 10:37
To: Matt Churchyard 
Subject: Re: Warm Migration feature for bhyve - review on Phabricator

On 22/01/2021 11:09, Matt Churchyard wrote:

[...]

> Shared storage is great but becomes complex and expensive if you want high 
> performance and reliability (seeing as the storage quickly becomes a major 
> single point of failure without enterprise active/active kit). I suspect the 
> most common deployment of bhyve is independent hosts with local ZFS pools as 
> this is easy/cheap and gives great performance. Most hypervisors only had 
> "shared storage" migration for a long time but the big ones now also support 
> transferring disk data live. It would be great to be able to do this out of 
> the gate.
> 
> I did have a poor mans version of this in vm-bhyve, but obviously relied on 
> stopping and restarting the guest. I always had problems with the nc commands 
> not exiting cleanly though and hanging the process so never made it official.

> I don't know many details about your setup and your problems with nc
> (netcat), but I guess it is used for zfs send & receive. You can try to use
> mbuffer from ports instead of nc:
> https://everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
>
> It will add a dependency on another port, but if somebody wants storage
> migration with your great tool vm-bhyve, I think it is a small price to pay.
> (Another way is to properly set up SSH and not use nc or mbuffer.)
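The approach from the linked article, condensed (dataset names, the port,
and the buffer sizes are only placeholders):

    # receiving host: listen on a port and feed zfs receive
    mbuffer -s 128k -m 1G -I 9090 | zfs receive -F zroot/vms/guest

    # sending host: stream the snapshot through mbuffer
    zfs send zroot/vms/guest@mig | mbuffer -s 128k -m 1G -O hostB:9090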

Thanks for the reply. Yes, I was using it to send zfs snapshots between hosts. 
I think I actually have the same issue when doing it by hand. The nc call seems 
like it hangs once complete but will instantly return to the prompt when 
pressing enter.

As you say, a better solution would be to use ssh. It may even be possible to 
trigger the receive command directly from the sending host if I get a bit hacky 
and split off a second background task. I may have another look at it. It 
worked pretty well all in all even if it did have to stop/start the guest 
briefly.
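For example, something along these lines avoids the listener entirely
(dataset names are placeholders):

    zfs send -R zroot/vms/guest@mig | ssh hostB zfs receive -uF zroot/vms/guest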

Matt

> Kind regards
> Miroslav Lachman


RE: Warm Migration feature for bhyve - review on Phabricator

2021-01-25 Thread Matt Churchyard
-Original Message-
From: John-Mark Gurney  
Sent: 25 January 2021 06:21
To: Matt Churchyard 
Cc: Elena Mihailescu ; 
freebsd-virtualization@freebsd.org
Subject: Re: Warm Migration feature for bhyve - review on Phabricator

Matt Churchyard wrote this message on Fri, Jan 22, 2021 at 10:09 +:
> > Hello, all,
> 
> > We have recently opened a review on Phabricator for the warm migration code 
> > for bhyve [1]. Please take a look and let us know if it is anything we 
> > can improve.
> 
> > [1] https://reviews.freebsd.org/D28270
> 
> > Thank you,
> > Elena
> 
> I appreciate that this isn't really related to the current review, and 
> commend the work being put into bhyve - it's an invaluable addition to 
> FreeBSD. I'm just wondering if any thought has been put into the future 
> possibility for transfer of disk data during migration (i.e. the equivalent 
> of "storage vmotion")
> 
> The current process (from a mile high overview) effectively seems to be the 
> following -
> 
> * Start guest on host2 pointing at a shared disk, and halt any execution
> * use bhyvectl to pause and begin migration on host1
> * Start the guest on host2
> 
> What would be the feasibility of being able to run a process such as the 
> following? Obviously it would likely need to be orchestrated by an external 
> tool, but to me it seems the main requirement is really just to be able to 
> provide separate control over the pause and migrate steps on host1 -
> 
> * send a ZFS snapshot of the running machine to host2
> * start the guest in migrate recv mode on host2
> * pause the guest on host1
> * send a new snapshot
> * initiate the migration of memory/device data
> * start guest on host2
> 
> Are there any major complications here I'm not aware of other than the 
> requirement to pause the guest and kick off the state migration as two 
> separate calls?

> There's also hastd which can aid with this...

Thanks for the reply. I've always been wary of the additional complexity of 
HAST and ZFS, as it doesn't seem to have widespread usage or support, and 
things get ugly fast when storage is involved.

However, the idea of using HAST on top of zvols to provide network mirrored 
storage for a guest is interesting. It adds a lot of extra complexity, and 
probably performance impact though if it's just for the ability to move a guest 
between systems that may only happen every now and then. I'm also not sure it 
would help (or would at least be even more complex) if I have 4 hosts and want 
to be able to move guests anywhere.

The main reason for the email was to explore my theory that, by leveraging ZFS, 
any bhyve user could roll their own storage migration ability with a few 
commands as long as the following two (seemingly minor) abilities were present 
(a rough sketch follows the list):

1) the ability to suspend a guest without it being done as part of the migrate 
call. (I assume suspend/resume support via bhyvectl is planned anyway, if not 
already in place at this point)
2) modification of the migrate call to skip suspend if the guest is already 
suspended.
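Putting the two abilities together with ZFS, the whole flow would look
something like this (the standalone suspend and the skip-if-suspended
migrate behaviour are exactly the hypothetical parts being asked about):

    # 1. seed the bulk of the disk while the guest keeps running
    zfs snapshot zroot/vms/guest@pre
    zfs send zroot/vms/guest@pre | ssh host2 zfs recv -F zroot/vms/guest

    # 2. pause the guest on host1 (ability 1: a separate pause call)
    bhyvectl --vm=guest --suspend            # hypothetical standalone pause

    # 3. send only the blocks dirtied since the first snapshot
    zfs snapshot zroot/vms/guest@final
    zfs send -i @pre zroot/vms/guest@final | ssh host2 zfs recv zroot/vms/guest

    # 4. migrate memory/device state (ability 2: skip the built-in suspend)
    bhyvectl --vm=guest --migrate=host2      # hypothetical flag behaviour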

The main thing I'm not sure of is whether the migrate call has any specific 
reliance on doing the suspend itself (e.g. if it needs to do anything before 
the suspend, which will obviously be problematic if suspend & migrate are 
called separately). Or if there's something else I missed that means this is 
not feasible. I'm not really after massive changes to the current review to 
implement disk migration in bhyve itself.

Matt

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."