Hello,

sorry for the late response; I'm back from vacation now and have done some new 
tests.

I installed a new VM using the 
FreeBSD-11.0-ALPHA6-amd64-20160701-r302303-disc1.iso, then updated it to HEAD 
r303908 and installed xe-guest-utilities-6.2.0_2 from HEAD ports r420055. 
xenguest_enable="YES" is set in /etc/rc.conf.
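
For reference, the relevant /etc/rc.conf line and start command look like this
(assuming the rc script installed by the port is named xenguest):

    xenguest_enable="YES"
    # then: service xenguest start  (or reboot)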

I tested VM migration from XenCenter (XenMotion).

There are no more time issues after migration.

I did 4 migrations, and during 2 of them I noticed the following problems:

When the migration finishes, the VM console switches to the VGA BIOS screen 
(Plex86/Bochs VGABios ...), the console freezes, the VM is not reachable over 
the network, and in XenCenter I see the first 2 CPUs rising to 100 percent. 
Screenshots attached.

Then, after 10 minutes, the CPU usage goes down, the VM console is responsive 
again, and the machine is reachable over the network.

Regards,

Christian

> -----Original Message-----
> From: Roger Pau Monné [mailto:roger....@citrix.com]
> Sent: Friday, July 29, 2016 10:29 AM
> To: Wei Liu
> Cc: Karl Pielorz; Hoyer-Reuther, Christian; freebsd-xen@freebsd.org
> Subject: Re: 'Live' Migrate messes up NTP on FreeBSD domU - any suggestions?
> 
> On Mon, Jul 25, 2016 at 04:37:14PM +0100, Wei Liu wrote:
> > On Mon, Jul 25, 2016 at 04:43:43PM +0200, Roger Pau Monné wrote:
> > > Adding Wei to the Cc list since he added the multiqueue functionality.
> > >
> > > On Mon, Jul 25, 2016 at 02:59:02PM +0100, Karl Pielorz wrote:
> > > >
> > > > --On 22 July 2016 13:55 +0200 Roger Pau Monné <roger....@citrix.com>
> > > > wrote:
> > > >
> > > > > In my environment I've migrated a FreeBSD VM with 2 cpus for > 100
> > > > > consecutive times without seeing any issues (or freezes), although
> > > > > this was with OSS Xen and without xe-guest-utilities. Karl, have you
> > > > > tested HEAD recently?
> > > >
> > > > Ok, I have tested this with r303286 - it seems to work OK. The hosts
> > > > gain no time that I can see while migrating, and NTP stays happy.
> > > >
> > > > I did get a panic after about 40 migrations - but that seems to be
> > > > some network issue or something...
> > > >
> > > >   ('panic called with 0 available queues / dbt_trace_self_wrapper /
> > > >   vpanic / kassert_panic / xn_txq_mq_start / ether_output / udp_send /
> > > >   sosend_dgram / kern_sendit / sendit / sys_sendto / amd64_syscall /
> > > >   Xfast_syscall)
> > >
> > > I haven't been able to reproduce this, but I think it's possible that
> > > if you migrate an active netfront xn_txq_mq_start might be called during
> > > the migration, just in the middle of the setup_device reconfiguration
> > > (while info->num_queues is 0).
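
The scenario described above is where the "0 available queues" assertion comes
from: the transmit routine runs while the queue count is zero. For illustration
only, a minimal sketch of the kind of guard that would avoid it (names follow
the netfront driver, but this is not the actual content of the patches below):

    /*
     * Illustrative fragment in the style of the netfront driver, not a
     * real patch. If the queues have been torn down for migration
     * (num_queues == 0), drop the packet instead of asserting.
     */
    static int
    xn_txq_mq_start(struct ifnet *ifp, struct mbuf *m)
    {
            struct netfront_info *np = ifp->if_softc;

            if (np->num_queues == 0) {
                    m_freem(m);
                    return (ENOBUFS);
            }
            /* ... normal queue selection and enqueue follow ... */
    }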
> > >
> > > Wei, I think netif_disconnect_backend should set IFF_DRV_OACTIVE in
> > > order to notify the net subsystem that the queues are full, so no further
> > > calls to xn_txq_mq_start happen until the resume has finished, do you
> > > agree?
> > >
> >
> > Perhaps clear IFF_DRV_RUNNING and only set it when the device is ready?
> > Looking at the manpage it seems more appropriate to me semantically.
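
For illustration, the two options being discussed boil down to toggling the
interface driver flags around the reconnect, roughly like this (placement is
hypothetical, not the committed change):

    /* In netif_disconnect_backend(), before the queues are torn down: */
    ifp->if_drv_flags |= IFF_DRV_OACTIVE;    /* Roger: mark queues full  */
    /* or */
    ifp->if_drv_flags &= ~IFF_DRV_RUNNING;   /* Wei: mark device stopped */

    /* Once setup_device() has reconnected the queues: */
    ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
    ifp->if_drv_flags |= IFF_DRV_RUNNING;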
> 
> Hello Karl and Christian, I have the following patches that solve all the
> issues I've seen with live migration; with those I've been able to migrate a
> VM > 100 times without seeing any issues. Could you give them a try?
> 
> BTW, I haven't been able to reproduce Karl's crash ("called with 0 available
> queues"), but I've added a condition that should prevent it from triggering
> anyway. Patches are here:
> 
> https://reviews.freebsd.org/D7349
> https://reviews.freebsd.org/D7362
> https://reviews.freebsd.org/D7363
> 
> It doesn't really matter in which order you apply them as long as all 3 are
> applied. Ideally I would like to commit them on Monday, so that I can MFC
> them to stable/11 before the releng/11 branch; could you please provide some
> feedback before then?
> 
> Thanks, Roger.
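
One way to apply the three reviews above, assuming a HEAD checkout with
arcanist configured for reviews.freebsd.org (downloading the raw diffs from
the review pages and applying them with patch(1) works just as well):

    cd /usr/src
    arc patch D7349
    arc patch D7362
    arc patch D7363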
