Hi,
Has anyone here managed to get CPU masking working via libvirt? The
intention is to enable VM migration between hosts of a different CPU
generation.
Inside my XML I'm providing the model as well as a list of features to
specifically disable, but none of it seems to take effect. On booting
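For reference, the kind of <cpu> block I have in mind looks like this (the model name and feature list here are placeholders, not my actual config; libvirt's domain XML supports a policy='disable' attribute on <feature>):

```xml
<cpu match='exact'>
  <!-- Baseline model shared by all hosts in the migration pool -->
  <model fallback='forbid'>SandyBridge</model>
  <!-- Features present on newer hosts that should be masked off -->
  <feature policy='disable' name='avx'/>
  <feature policy='disable' name='aes'/>
</cpu>
```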
On 8/26/2014 4:52 PM, Nathan March wrote:
> Has anyone here managed to get cpu masking working via libvirt?
> Intention to enable VM migrations between hosts of a different CPU
> generation.
To add to this, I've tried using the boot options to set the cpu mask
instead:
xen_commandline
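For the record, a sketch of what such a boot line might look like (the mask values below are purely illustrative, not a working mask for any particular CPU pair; the cpuid_mask_* options are the relevant Xen command-line knobs):

```shell
# /boot/grub/grub.conf (CentOS 6 style); mask values are placeholders
kernel /xen.gz dom0_mem=2048M cpuid_mask_ecx=0xffffffff cpuid_mask_edx=0xffffffff
```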
http://cbs.centos.org/kojifiles/work/tasks/8801/8801/
If you could test those and let me know if they fix your problem, I'd
appreciate it. :-)
Confirmed, both issues are fixed. Thanks! Any plans to push those packages to
main mirrors?
- Nathan
Hi All,
I'm seeing tapdisk processes not being terminated after an HVM VM is shut down
or migrated away. I don't see this problem with Linux paravirt domUs, just
Windows HVM ones.
xl.cfg:
name = 'nathanwin'
memory = 4096
vcpus = 2
disk = [
So you're working from the command line tools in the EPEL 'cloud-init'
package, not the AWS GUI? Because when I tried expanding the size of
the base disk image in the GUI, I wound up with an 8 GB default
/dev/xvda1 on a 20 GB /dev/xvda. That's why I was looking at how to
resize
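For what it's worth, the command-line path I was looking at is roughly this (growpart comes from the cloud-utils-growpart package; device names are examples, and this assumes an ext4 root filesystem):

```shell
# Grow partition 1 of /dev/xvda to fill the disk...
growpart /dev/xvda 1
# ...then grow the ext4 filesystem to fill the partition
resize2fs /dev/xvda1
```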
Hi All,
Some more data on this, I've reproduced this on another host that's a
completely stock centos/xen deployment with a centos 6.6 domU.
Since I'm seeing the retransmissions on the VIF, I don't think it's related to
the network stack but just in case... Each host is connected via LACP with
that originally led me
down this path.
- Nathan
-Original Message-
From: centos-virt-boun...@centos.org [mailto:centos-virt-
boun...@centos.org] On Behalf Of Nathan March
Sent: Wednesday, April 15, 2015 1:13 PM
To: 'Discussion about the virtualization on CentOS'
Subject: Re: [CentOS-virt
Hi All,
I've tracked this down... We do rate limiting of our vms with a mix of
ebtables/tc.
Running these commands (replace vif1.0 with the correct vif for your VM) will
reproduce this:
ebtables -A FORWARD -i vif1.0 -j mark --set-mark 990 --mark-target CONTINUE
tc qdisc add dev bond0 root
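For completeness, the general shape of the ebtables+tc rate-limiting arrangement (the 100mbit rate is illustrative, not necessarily our exact production rules; the 990 mark matches the ebtables rule):

```shell
# Mark frames from the guest's vif so tc can classify them by fw mark
ebtables -A FORWARD -i vif1.0 -j mark --set-mark 990 --mark-target CONTINUE
# HTB root qdisc on the bond, plus a rate-limited class for mark 990
tc qdisc add dev bond0 root handle 1: htb
tc class add dev bond0 parent 1: classid 1:990 htb rate 100mbit
tc filter add dev bond0 parent 1: protocol ip handle 990 fw flowid 1:990
```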
Hi All,
I'm seeing clock issues with live migrations on the latest kernel packages;
migrating a VM from 3.10.68-11 to 3.18.17-13 results in the VM clock being off
by 7 hours (I'm PST, so it appears to be a timezone issue). This is also between
xen versions, but rolling the target back to 3.10
On 07/30/2015 06:38 AM, Johnny Hughes wrote:
> On 07/29/2015 11:38 AM, Nathan March wrote:
>> Hi All,
>> I'm seeing clock issues with live migrations on the latest kernel
>> packages, migrating a VM from 3.10.68-11 to 3.18.17-13 results in the
>> VM clock being off by 7 hours (I'm PST, so
If you'd like to extend that a little bit, here are example configs for
LACP and VLAN tagging on C6:
host network-scripts # cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
USERCTL=no
BOOTPROTO=none
IPV6INIT=no
MTU=1500
MASTER=bond0
SLAVE=yes
host network-scripts # cat ifcfg-eth1
DEVICE=eth1
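And a sketch of the bond and VLAN side (mode=4 is 802.3ad/LACP; the VLAN ID 100 and bridge name xenbr100 are placeholders, substitute your own):

```
host network-scripts # cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
USERCTL=no
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"

host network-scripts # cat ifcfg-bond0.100
DEVICE=bond0.100
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
BRIDGE=xenbr100
```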
> It seems the patch you mentioned was merged to upstream Linux here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=71472fa9c52b1da27663c275d416d8654b905f05
>
> and then reverted/removed here:
>
> > I have no issues rolling this patch in, while we wait on upstream, if
> > it makes our tree more stable.
> >
>
> I think we should do that.. What do others think?
>
I've had the patch deployed to a group of 32 hosts (with hundreds of VMs)
for about 10 days now, with no sign of any issues.
Hi,
I'm seeing numerous crashes on the xen 4.6.6-1 / 4.6.6-2 releases, on both
the 4.9.34-29 and 4.9.39-29 kernels.
I've attached a txt with two different servers outputs.
Xen-028: This crashed this morning while running 4.6.6-1 and 4.9.39-29
Xen-001: This crashed shortly after being
would almost always cause the oops.
Cheers,
Nathan
From: CentOS-virt [mailto:centos-virt-boun...@centos.org] On Behalf Of
Nathan March
Sent: Wednesday, August 23, 2017 3:32 PM
To: 'Discussion about the virtualization on CentOS' <centos-virt@centos.org>
Subject: Re: [CentOS-virt]
Hi,
It's been almost a week now since XSA-226 through XSA-230 were released, and
I'm just wondering when updated packages are expected to be posted.
https://cbs.centos.org/koji/packageinfo?packageID=88 has nothing for the
past month.
Thanks!
- Nathan
Since moving from 4.4 to 4.6, I've been seeing an increasing number of
stability issues on our hypervisors. I'm not clear if there's a singular
root cause here, or if I'm dealing with multiple bugs.
One of the more common ones I've seen is a VM that on shutdown will remain in
the null state and a
> -Original Message-
> From: CentOS-virt [mailto:centos-virt-boun...@centos.org] On Behalf Of
> Peter Peltonen
> Sent: Thursday, January 18, 2018 11:19 AM
> To: Discussion about the virtualization on CentOS
> Subject: Re: [CentOS-virt] Xen 4.6.6-9 (with XPTI
packages making their way to centos-virt-xen-testing
>
> On 01/24/2018 01:01 AM, Pasi Kärkkäinen wrote:
> > On Tue, Jan 23, 2018 at 06:20:39PM -0600, Kevin Stange wrote:
> >> On 01/23/2018 05:57 PM, Karl Johnson wrote:
> >>>
> >>>
> >>> On Tue, Ja
Just a heads up that I'm seeing major stability problems on these builds.
I didn't have console capture set up unfortunately, but I have seen my test
hypervisor hard lock twice over the weekend.
This is with xpti being used, rather than the shim.
Cheers,
Nathan
> -Original Message-
> From:
> Thanks for the heads-up. It's been running through XenServer's tests
> as well as the XenProject's "osstest" -- I haven't heard of any
> additional issues, but I'll ask.
Looks like I can reproduce this pretty easily; this happened upon ssh'ing
into the server while I had a VM migrating into
Hi,
> Hmm.. isn't this the ldisc bug that was discussed a few months ago on this
> list, and a patch was applied to the virt-sig kernel as well?
>
> Call trace looks similar..
Good memory! I'd forgotten about that despite being the one who ran into it.
Looks like that patch was just removed in