Re: 11-RELEASE and live migration

2016-12-14 Thread Hoyer-Reuther, Christian
> I'm able to see these glitches; as I said, the VM doesn't really hang, it
> just gets stuck for a long time (and I think that depends on the uptime
> difference between the source and destination hosts). In the meantime,
> could you please test whether changing the timecounter from XENTIMER to
> any other (like HPET or ACPI-fast) solves the issue?
> 
> # sysctl -w kern.timecounter.hardware=ACPI-fast

Hello Roger,

I changed the timecounter from XENTIMER to ACPI-fast and then did about 10 
migrations between our 3 hosts. I always started the next migration a few 
seconds after the previous migration finished. The VM didn't get stuck this 
time, and I didn't see the VGABios screen on the console.
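For reference, the sysctl change suggested earlier takes effect immediately but does not survive a reboot; a sketch of making it persistent via the stock FreeBSD sysctl.conf mechanism (ACPI-fast is an assumption here, any counter the guest kernel offers would do):

```shell
# /etc/sysctl.conf -- applied at boot by rc(8)
# Switch the timecounter away from XENTIMER as a live-migration workaround;
# ACPI-fast is only an example, pick any entry from kern.timecounter.choice.
kern.timecounter.hardware=ACPI-fast
```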

Regards,

Christian

___
freebsd-xen@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-xen
To unsubscribe, send any mail to "freebsd-xen-unsubscr...@freebsd.org"


Re: [PATCH]netfront: need release all resources after adding and removing NICs time and again

2016-12-14 Thread Liuyingdong
Hello Roger,
Thank you for the time and patience you devoted to reading my messages 
and e-mails. I really appreciate it.
I can't use git send-email, so I have attached the patches directly. In the 
0001 patch I introduce a suspend_cancel mechanism for frontend devices, and in 
the 0002 patch I release all resources after hot-unplugging NICs.

Note: These two patches are based on the origin/release/10.2.0 branch, 
and the 0002 patch applies on top of the 0001 patch.

-Original Message-
From: freebsd xen [mailto:roger@citrix.com] 
Sent: December 13, 2016 22:29
To: Liuyingdong
Cc: freebsd-xen@freebsd.org; Zhaojun (Euler); Suoben; Ouyangzhaowei (Charles)
Subject: Re: [PATCH]netfront: need release all resources after adding and removing 
NICs time and again

On Tue, Dec 13, 2016 at 02:03:08PM +0800, liuyingdong wrote:
> Hello Roger,
> I would like to know the status of this patch. Please let me know if you 
> have any questions.
> Thanks.

Hello,

Thanks for the patches! This one is looking fine; it's just that I'm a little 
bit busy at the moment, and there are some issues in HEAD related to Xen that I 
would like to fix before pushing anything new.

In any case, I see that you are sending the patches using Thunderbird, which is 
not ideal (MUAs tend to mangle patches). The preferred way of sending patches 
is using "git send-email"[0] directly. There are also several tutorials online 
that will help you set up git send-email correctly. If there's some reason why 
you can't use git send-email, I would recommend attaching the patches directly 
to your emails; that way they probably won't get mangled.

Roger.

[0] https://git-scm.com/docs/git-send-email
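For anyone hitting the same hurdle, a minimal git send-email configuration sketch might look like this. The SMTP host, port, and user below are placeholders, not values from this thread; adjust them for your mail provider:

```shell
# One-time SMTP configuration (placeholder server and user)
git config --global sendemail.smtpServer smtp.example.com
git config --global sendemail.smtpServerPort 587
git config --global sendemail.smtpEncryption tls
git config --global sendemail.smtpUser you@example.com

# Generate patches from the last two commits and send them to the list
git format-patch -2 --cover-letter
git send-email --to=freebsd-xen@freebsd.org *.patch
```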


0001-introduce-suspend-cancel-mechanism-for-frontend-devices.patch
Description: 0001-introduce-suspend-cancel-mechanism-for-frontend-devices.patch


0002-netfront-need-release-all-resources-after-hot-plug.patch
Description: 0002-netfront-need-release-all-resources-after-hot-plug.patch

Re: 11-RELEASE and live migration

2016-12-14 Thread 'Roger Pau Monné'
On Mon, Dec 12, 2016 at 08:33:48AM -0600, Jay West wrote:
> My setup...
> 
> Hosts: Two Dell R310's, each one as follows: 32gb ram, L3480 cpu, 4 gigabit
> nics, two 300gb disks (mirrored), where the local disk is used only to hold
> xenserver 7.0 with all patches up through and including today. These
> machines are 100% up to date on firmware, patches, bios, etc.
> 
> Network: Two Dell powerconnect 6224's, stacked via dual round-robin CX4
> cables. There's an untagged management vlan, an untagged data vlan, and an
> untagged ISCSI vlan - identical ports are members on each switch. On each
> host, the two builtin nics (data) are connected one leg in each switch (same
> vlan). The two addin ports (Intel Pro/1000) (iSCSI) on each host are also
> connected one leg in each switch. The switches are 100% up to date on
> firmware/OS. Client access to this is via a stack of Juniper EX2200's
> trunked back to the 6224's.
> 
> Storage: OEM version of a Tyan 2U 12 bay SAS box, similar to Tyan
> S8226WGM3NR but with 7 gigabit NICS builtin. 32gb ram, FreeNAS 9.10.1-U4,
> and dual AMD C32's (12 cores total). One nic is used for ILO, another for
> management, another is unused, and the remaining four gigabit ports are
> connected two legs in each switch (same vlan, iscsi only, 9000mtu).
> Multipath is configured and active.
> 
> The storage box is using mirrored vdev's with ZFS on top, 100% of which
> present an iSCSI target so the box is doing nothing but iscsi (and an NFS
> iso share for installing vm's). So in Xencenter... there is one storage
> repository containing all the NAS space. Xencenter then creates the vdisks
> inside that for each VM to use.
> 
> FreeBSD 10X doesn't seem to have this problem. FreeBSD11 definitely does,
> and apparently I'm not the only one who can see it. I should also point out
> that Windows VM's (both Server 2012 R2 and 7 pro - both 64bit) have no
> problem migrating to another host and then back. And FreeBSD can definitely
> migrate to another host - just not then back to the first (at least...
> immediately. I haven't tried waiting an hour or so and then trying the
> migration back).
> 
> I also cannot select the source host as the destination in Xencenter. The
> host servers are completely identical in every respect. All vm's disk is via
> iSCSI as above.
> 
> I also have a completely separate architecture that is identical to the
> above, except much larger, using xenserver 6.5, HP DL1000's, and cisco 3750G
> stacks. I have not yet tested freebsd11 on that installation; I assumed it
> wouldn't be much help as it's older versions of all the code.
> 
> The smaller architecture above is not yet in production, so I can do testing
> on it. The larger installation mentioned later above is production, and I
> can't do much major testing there.

Thanks for such an accurate description. I've now set up a similar environment 
and I'm able to see these glitches; as I said, the VM doesn't really hang, it 
just gets stuck for a long time (and I think that depends on the uptime 
difference between the source and destination hosts). In the meantime, could 
you please test whether changing the timecounter from XENTIMER to any other 
(like HPET or ACPI-fast) solves the issue?

# sysctl -w kern.timecounter.hardware=ACPI-fast
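Before switching, the set of timecounters the guest kernel actually offers can be inspected with the standard FreeBSD sysctl nodes (the exact list and quality values vary with the VM's virtual hardware, so the names below are illustrative):

```shell
# List the timecounters the kernel detected, with their quality ratings
sysctl kern.timecounter.choice
# Show which timecounter is currently in use
sysctl kern.timecounter.hardware
```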

Thanks, Roger.