My setup...

Hosts: two Dell R310s, each configured as follows: 32 GB RAM, L3480 CPU,
4 gigabit NICs, two 300 GB disks (mirrored), where the local disk holds
only XenServer 7.0 with all patches up through and including today. These
machines are 100% up to date on firmware, patches, BIOS, etc.

Network: two Dell PowerConnect 6224s, stacked via dual round-robin CX4
cables. There's an untagged management VLAN, an untagged data VLAN, and an
untagged iSCSI VLAN; identical ports are members on each switch. On each
host, the two built-in NICs (data) are connected one leg in each switch
(same VLAN). The two add-in ports (Intel Pro/1000) (iSCSI) on each host are
also connected one leg in each switch. The switches are 100% up to date on
firmware/OS. Client access is via a stack of Juniper EX2200s trunked back
to the 6224s.

Storage: OEM version of a Tyan 2U 12-bay SAS box, similar to the Tyan
S8226WGM3NR but with 7 gigabit NICs built in. 32 GB RAM, FreeNAS 9.10.1-U4,
and dual AMD C32s (12 cores total). One NIC is used for ILO, another for
management, another is unused, and the remaining four gigabit ports are
connected two legs in each switch (same VLAN, iSCSI only, MTU 9000).
Multipath is configured and active.
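For anyone who wants to verify the same thing on their own pool, the path state can be checked from dom0 on each XenServer host. The commands below are a sketch using the stock tools (open-iscsi, dm-multipath, xe) and assume an lvmoiscsi SR; exact output varies by setup:

```shell
# One iSCSI session per storage-box portal should be listed:
iscsiadm -m session

# dm-multipath should show one path per session for each LUN,
# all of them active:
multipath -ll

# Confirm the iSCSI SR is visible from this host:
xe sr-list type=lvmoiscsi params=name-label,uuid
```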

The storage box uses a ZFS pool of mirrored vdevs, 100% of which is
presented as an iSCSI target, so the box is doing nothing but iSCSI (plus
an NFS ISO share for installing VMs). In XenCenter there is one storage
repository containing all the NAS space; XenCenter then creates the vdisks
inside that for each VM to use.
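For reference, the equivalent layout built by hand would look roughly like this (FreeNAS does all of this through its GUI; the pool name, device names, and zvol size below are made up for illustration):

```shell
# Six mirrored pairs striped into one pool (hypothetical da0..da11):
zpool create tank \
  mirror da0 da1 mirror da2 da3 mirror da4  da5 \
  mirror da6 da7 mirror da8 da9 mirror da10 da11

# A zvol to back the iSCSI extent; FreeNAS then exports it via ctld:
zfs create -V 4T tank/xen-sr0
```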

FreeBSD 10.x doesn't seem to have this problem. FreeBSD 11 definitely does,
and apparently I'm not the only one who can see it. I should also point out
that Windows VMs (both Server 2012 R2 and 7 Pro, both 64-bit) have no
problem migrating to another host and then back. And FreeBSD can definitely
migrate to another host - just not back to the first, at least not
immediately. I haven't tried waiting an hour or so and then attempting the
migration back.
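The there-and-back test can be scripted from the pool master with the xe CLI rather than XenCenter; the VM and host names below are hypothetical stand-ins for my setup:

```shell
VM=fbsd11-test   # hypothetical FreeBSD 11 guest

# First hop succeeds:
xe vm-migrate vm="$VM" host=xen2 live=true

sleep 10

# Immediate migration back is where the hang shows up:
xe vm-migrate vm="$VM" host=xen1 live=true
```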

I also cannot select the source host as the destination in XenCenter. The
host servers are completely identical in every respect. All VM disks are on
iSCSI as above.
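XenCenter won't offer the source host, but the xe CLI accepts an explicit host UUID, which may be a way to attempt the local migration Roger asked about (the VM UUID below is a placeholder):

```shell
# Find the host the VM currently resides on, then ask xapi to
# migrate it back onto that same host:
HOST_UUID=$(xe vm-param-get uuid=<vm-uuid> param-name=resident-on)
xe vm-migrate uuid=<vm-uuid> host-uuid="$HOST_UUID" live=true
```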

I also have a completely separate architecture that is identical to the
above, except much larger, using XenServer 6.5, HP DL1000s, and Cisco 3750G
stacks. I have not yet tested FreeBSD 11 on that installation; I assumed it
wouldn't be much help since it runs older versions of all the code.

The smaller architecture above is not yet in production, so I can do testing
on it. The larger installation is in production, and I can't do much major
testing there.

Hopefully some of this info helps!

-----Original Message-----
From: []
On Behalf Of Hoyer-Reuther, Christian
Sent: Monday, December 12, 2016 4:40 AM
To: Roger Pau Monné <>;
Subject: Re: 11-RELEASE and live migration


I cannot select the source host as destination when I migrate from

All 3 hosts use the same hardware. Disk backend for all VMs is iSCSI.



> -----Original Message-----
> From: Roger Pau Monné []
> Sent: Monday, 12 December 2016 11:32
> To: Hoyer-Reuther, Christian;
> Cc:;
> Subject: Re: 11-RELEASE and live migration
> On Thu, Dec 08, 2016 at 10:33:37AM +0100, Hoyer-Reuther, Christian wrote:
> > I did some tests and I see the problem too.
> >
> > XenServer 6.5 SP1 with almost all patches (3 hosts in pool), FreeBSD
> > 11.0-RELEASE-p2, xe-guest-utilities-6.2.0_2 installed via pkg.
> >
> > First migration from host 3 to host 1 is ok.
> >
> > Some seconds later I start a new migration from host 1 to host 2, and
> > when the migration finishes (as seen in XenCenter) the VM switches to
> > the VGABios screen ("Plex86/Bochs VGABios (PCI) current-cvs 01 Sep 2016 ...
> > cirrus-compatible VGA is detected"). The VM seems to hang and does not
> > respond. In XenCenter I see that all the CPUs of the VM go up to 100
> > percent.
> >
> > Then after 17 minutes the VGABios screen disappears and I see the
> > console; the CPU usage as seen in XenCenter goes down. I logged in as
> > root before I started the first migration and root is still logged in,
> > so it was a hang and not a reboot.
> >
> > 20 minutes later I start a new migration from host 2 to host 3 and the
> > problem occurs again.
> >
> > This problem does not exist with 10.3-RELEASE on the same hosts.
> Hello,
>
> So far I've only tested local migration (migration using the same host
> as source and destination), and that seems to work fine (I've done 50+
> consecutive migrations with only 10s separation between them). Is there
> a chance that you could also try to reproduce this with local migration?
>
> Are you using the exact same hardware on the source and the
> destination? Are you all using iSCSI as the disk backend for these VMs?
>
> Thanks, Roger.
_______________________________________________ mailing list
To unsubscribe, send any mail to ""
