[libvirt] interactions between virDomainSetVcpusFlags and NUMA/pinning?

2018-07-25 Thread Chris Friesen
Hi, Just wondering about interactions between virDomainSetVcpusFlags() and virDomainPinVcpuFlags() and the domain XML. 1) If I add a vCPU to a domain, do I need to pin it after or does it respect the vCPU-to-pCPU mapping specified in the domain XML? 2) Are vCPUs added/removed in strict
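For context, a minimal sketch of the two calls in question through the python bindings; the domain name, vCPU count, and pCPU numbers here are hypothetical:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-domain')  # hypothetical name

    # Hot-add a third vCPU on the running domain (virDomainSetVcpusFlags).
    dom.setVcpusFlags(3, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Explicitly pin the new vCPU (index 2) to pCPU 5 on an 8-pCPU host
    # (virDomainPinVcpuFlags); the cpumap is one boolean per host pCPU.
    cpumap = tuple(i == 5 for i in range(8))
    dom.pinVcpuFlags(2, cpumap, libvirt.VIR_DOMAIN_AFFECT_LIVE)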

Re: [libvirt] anyone ever seen virDomainCreateWithFlags() essentially hang?

2018-04-05 Thread Chris Friesen
On 04/05/2018 12:17 PM, Jiri Denemark wrote: On Thu, Apr 05, 2018 at 12:00:44 -0600, Chris Friesen wrote: I'm investigating something weird with libvirt 1.2.17 and qemu 2.3.0. I'm using the python bindings, and I seem to have a case where libvirtmod.virDomainCreateWithFlags() hung rather than

[libvirt] anyone ever seen virDomainCreateWithFlags() essentially hang?

2018-04-05 Thread Chris Friesen
I'm investigating something weird with libvirt 1.2.17 and qemu 2.3.0. I'm using the python bindings, and I seem to have a case where libvirtmod.virDomainCreateWithFlags() hung rather than returned. Then, about 15min later a subsequent call to libvirtmod.virDomainDestroy() from a different
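For reference, the call pattern in question (domain name hypothetical); the report is that the underlying libvirtmod call blocked here instead of returning:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-domain')  # hypothetical name

    # Wraps virDomainCreateWithFlags(); per the report, this hung for
    # ~15min until a virDomainDestroy() from elsewhere intervened.
    dom.createWithFlags(0)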

Re: [libvirt] [PATCH] qemu: fix migration with local and VIR_STORAGE_TYPE_NETWORK disks

2018-02-09 Thread Chris Friesen
On 02/09/2018 04:15 AM, Daniel P. Berrangé wrote: On Thu, Feb 08, 2018 at 01:24:58PM -0600, Chris Friesen wrote: Given your comment above about "I don't want to see the semantics of that change", it sounds like you're suggesting: 1) If there are any non-shared non-readonly netw

Re: [libvirt] [PATCH] qemu: fix migration with local and VIR_STORAGE_TYPE_NETWORK disks

2018-02-08 Thread Chris Friesen
On 02/08/2018 03:07 AM, Daniel P. Berrangé wrote: On Wed, Feb 07, 2018 at 01:11:33PM -0600, Chris Friesen wrote: Are you okay with the other change? That part of the code was intended to be functionally identical to what QEMU's previous built-in storage migration code would do. I don't want

Re: [libvirt] [PATCH] qemu: fix migration with local and VIR_STORAGE_TYPE_NETWORK disks

2018-02-07 Thread Chris Friesen
On 02/07/2018 12:05 PM, Daniel P. Berrangé wrote: On Wed, Feb 07, 2018 at 11:57:19AM -0600, Chris Friesen wrote: In the current implementation of qemuMigrateDisk() the value of the "nmigrate_disks" parameter wrongly impacts the decision whether or not to migrate a disk that is no

[libvirt] [PATCH] qemu: fix migration with local and VIR_STORAGE_TYPE_NETWORK disks

2018-02-07 Thread Chris Friesen
…urce. The end result is that disks not in "migrate_disks" are treated uniformly regardless of the value of "nmigrate_disks". Signed-off-by: Chris Friesen <chris.frie...@windriver.com> --- src/qemu/qemu_migration.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff
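The patch touches the daemon-side disk selection; on the client side the disk list arrives via the "migrate_disks" typed parameter. A hedged sketch of how a caller supplies it (the URIs and disk target are hypothetical):

    import libvirt

    src = libvirt.open('qemu:///system')
    dst = libvirt.open('qemu+ssh://dest-host/system')  # hypothetical URI
    dom = src.lookupByName('example-domain')           # hypothetical name

    # Disks named here are copied; the patch fixes how disks *not* named
    # here are treated when the list is non-empty.
    params = {libvirt.VIR_MIGRATE_PARAM_MIGRATE_DISKS: ['vda']}
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_NON_SHARED_DISK
    dom.migrate3(dst, params, flags)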

Re: [libvirt] Redesigning Libvirt: Adopting use of a safe language

2017-11-21 Thread Chris Friesen
On 11/20/2017 09:25 AM, Daniel P. Berrange wrote: When I worked in OpenStack it was a constant battle to get people to consider enhancements to libvirt instead of reinventing it in Python. It was a hard sell because most python dev just didn't want to use C at all because it has a high curve to

Re: [libvirt] Redesigning Libvirt: Adopting use of a safe language

2017-11-17 Thread Chris Friesen
On 11/17/2017 06:37 AM, Daniel P. Berrange wrote: On Fri, Nov 17, 2017 at 01:34:54PM +0100, Markus Armbruster wrote: "Daniel P. Berrange" writes: [...] Goroutines are basically a union of the thread + coroutine concepts. The Go runtime will create N OS level threads,

Re: [libvirt] Redesigning Libvirt: Adopting use of a safe language

2017-11-16 Thread Chris Friesen
On 11/16/2017 03:55 PM, John Ferlan wrote: On 11/14/2017 12:27 PM, Daniel P. Berrange wrote: Part of the problem is that, despite Linux having very low overhead thread spawning, threads still consume non-trivial resources, so we try to constrain how many we use, which forces an M:N

Re: [libvirt] [PATCH v4 0/4] Implement migrate-getmaxdowntime command

2017-08-17 Thread Chris Friesen
On 08/17/2017 04:17 PM, Scott Garfinkle wrote: Currently, the maximum tolerable downtime for a domain being migrated is write-only. This patch implements a way to query that value nondestructively. I'd like to register my support for the concept in general. Seems odd to have something you can
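Through the python bindings the pairing looks like this (a sketch; the read-back assumes a libvirt new enough to carry this series):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-domain')  # hypothetical name

    # Until now the knob was write-only (value in milliseconds)...
    dom.migrateSetMaxDowntime(300, 0)

    # ...this series adds the nondestructive query.
    print(dom.migrateGetMaxDowntime(0))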

[libvirt] question about locking in qemuDomainObjBeginJobInternal()

2017-08-15 Thread Chris Friesen
Hi, I'm hitting a scenario (on libvirt 1.2.12, so yeah it's a bit old) where I'm attempting to create two domains at the same time, and they both end up erroring out with "cannot acquire state change lock": 2017-08-14T12:57:00.000 79674: warning : qemuDomainObjBeginJobInternal:1380 :
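The error surfaces to API clients as a libvirtError, so the usual (blunt) workaround is to retry; a sketch, with hypothetical retry counts:

    import time
    import libvirt

    def create_with_retry(dom, attempts=5, delay=2.0):
        # Retry while the daemon reports the per-domain job lock as busy.
        for _ in range(attempts):
            try:
                return dom.createWithFlags(0)
            except libvirt.libvirtError as e:
                if 'cannot acquire state change lock' not in str(e):
                    raise
                time.sleep(delay)
        raise RuntimeError('job lock still held after %d attempts' % attempts)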

Re: [libvirt] status of support for cache allocation technology?

2017-07-27 Thread Chris Friesen
On 07/27/2017 05:08 AM, Martin Kletzander wrote: Is the "[PATCH V10 00/12] Support cache tune in libvirt" patch series the most recent set of patches? No, then there were several RFCs and then patch series again, IIRC, but you can expect a new one written from scratch to be posted soon. I

[libvirt] status of support for cache allocation technology?

2017-07-26 Thread Chris Friesen
Hi, I'm just wondering what the current status is about exposing/controlling cache banks. Looking at the code, it appears that we report the banks as part of "virsh capabilities". Is it possible to associate a particular bank with a particular domain, or has that not yet merged? Is the
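The banks reported via "virsh capabilities" can also be read programmatically; a sketch, assuming a host and libvirt new enough to expose the cache element:

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open('qemu:///system')

    # Cache banks appear under <host><cache> in the capabilities XML.
    caps = ElementTree.fromstring(conn.getCapabilities())
    for bank in caps.findall('./host/cache/bank'):
        print(bank.attrib)  # id, level, type, size, unit, cpus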

Re: [libvirt] libvirtd not responding to virsh, results in virsh hanging

2017-03-31 Thread Chris Friesen
On 03/31/2017 11:30 AM, Chris Friesen wrote: On 03/31/2017 11:21 AM, Chris Friesen wrote: I ran tcpdump looking for TCP traffic between the two libvirtd processes, and was unable to see any after several minutes. So it doesn't look like there is any regular keepalive messaging going on (/etc

Re: [libvirt] libvirtd not responding to virsh, results in virsh hanging -- correction

2017-03-31 Thread Chris Friesen
On 03/31/2017 11:21 AM, Chris Friesen wrote: I ran tcpdump looking for TCP traffic between the two libvirtd processes, and was unable to see any after several minutes. So it doesn't look like there is any regular keepalive messaging going on (/etc/libvirt/libvirtd.conf doesn't specify any
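For reference, the daemon side is tuned with keepalive_interval/keepalive_count in libvirtd.conf; a client can also request its own keepalive, which needs a running event loop. A sketch with hypothetical timing values:

    import threading
    import libvirt

    # Keepalive requires the client event loop to be running.
    libvirt.virEventRegisterDefaultImpl()

    def event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=event_loop, daemon=True).start()

    conn = libvirt.open('qemu:///system')
    # Ping every 5s; declare the peer dead after 3 unanswered pings.
    conn.setKeepAlive(5, 3)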

Re: [libvirt] libvirtd not responding to virsh, results in virsh hanging

2017-03-31 Thread Chris Friesen
Hi, I finally got a chance to take another look at this issue. We've reproduced it in another test lab. New information below. On 03/18/2017 12:41 AM, Michal Privoznik wrote: On 17.03.2017 23:21, Chris Friesen wrote: Hi, We've recently run into an issue with libvirt 1.2.17 in the context

[libvirt] libvirtd not responding to virsh, results in virsh hanging

2017-03-17 Thread Chris Friesen
Hi, We've recently run into an issue with libvirt 1.2.17 in the context of an OpenStack deployment. Occasionally after doing live migrations from a compute node with libvirt 1.2.17 to a compute node with libvirt 2.0.0 we see libvirtd on the 1.2.17 side stop responding. When this happens,

Re: [libvirt] inconsistent handling of "qemu64" CPU model

2016-05-26 Thread Chris Friesen
On 05/26/2016 04:41 AM, Jiri Denemark wrote: The qemu64 CPU model contains svm and thus libvirt will always consider it incompatible with any Intel CPUs (which have vmx instead of svm). On the other hand, QEMU by default ignores features that are missing in the host CPU and has no problem using

[libvirt] inconsistent handling of "qemu64" CPU model

2016-05-25 Thread Chris Friesen
Hi, I'm not sure where the problem lies, hence the CC to both lists. Please copy me on the reply. I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a Celeron 2961Y CPU. (libvirt detects it as a Nehalem with a bunch of extra features.) Qemu gives version 2.2.0
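The libvirt side of the inconsistency can be reproduced with virConnectCompareCPU; a sketch (whether it reports INCOMPATIBLE depends on the host CPU, per the thread):

    import libvirt

    conn = libvirt.open('qemu:///system')

    # qemu64 carries the AMD 'svm' flag, which Intel hosts lack.
    cpu_xml = "<cpu match='exact'><arch>x86_64</arch><model>qemu64</model></cpu>"
    result = conn.compareCPU(cpu_xml, 0)
    print(result == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE)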

[libvirt] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
Hi, I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack. If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several hundred of them get dropped. I get the dropped packets, but I'm not sure why the

Re: [libvirt] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 10:50 AM, Paolo Bonzini wrote: On 26/01/2016 17:41, Chris Friesen wrote: I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack. If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several

Re: [libvirt] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 11:31 AM, Paolo Bonzini wrote: On 26/01/2016 18:21, Chris Friesen wrote: My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding? QEMU (or vhost) _are_ processing virtio traffic

Re: [libvirt] [Qemu-devel] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 10:45 AM, Daniel P. Berrange wrote: On Tue, Jan 26, 2016 at 10:41:12AM -0600, Chris Friesen wrote: My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding

Re: [libvirt] is there a notification when watchdog triggers?

2014-11-07 Thread Chris Friesen
On 11/07/2014 03:14 AM, Eric Blake wrote: On 11/06/2014 11:08 PM, Chris Friesen wrote: The libvirt.org docs say A virtual hardware watchdog device can be added to the guest via the watchdog element … Currently libvirt does not support notification when the watchdog fires. This feature
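For reference, libvirt does expose a watchdog lifecycle event; a minimal listener sketch (the callback body is illustrative):

    import libvirt

    def on_watchdog(conn, dom, action, opaque):
        # action is one of the VIR_DOMAIN_EVENT_WATCHDOG_* values
        # (NONE, PAUSE, RESET, POWEROFF, SHUTDOWN, DEBUG).
        print('watchdog fired on %s, action %d' % (dom.name(), action))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_WATCHDOG,
                                on_watchdog, None)
    while True:
        libvirt.virEventRunDefaultImpl()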

[libvirt] is there a notification when watchdog triggers?

2014-11-06 Thread Chris Friesen
The libvirt.org docs say A virtual hardware watchdog device can be added to the guest via the watchdog element … Currently libvirt does not support notification when the watchdog fires. This feature is planned for a future version of libvirt. Is that still accurate? Or does libvirt now

[libvirt] [bug] python-libvirt vcpus mismatch

2014-05-27 Thread Chris Friesen
I've got a libvirt-created instance where I've been messing with affinity, and now something is strange. I did the following in python: import libvirt conn = libvirt.open('qemu:///system') dom = conn.lookupByName('instance-0027') dom.vcpus() ([(0, 1, 52815000L, 2), (1, 1,
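Decoding the return value helps spot the mismatch; a sketch that unpacks both halves of the tuple, reusing the instance name from the report:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0027')

    # dom.vcpus() returns (per-vCPU info, per-vCPU pinning map).
    cpuinfo, cpumap = dom.vcpus()
    for number, state, cpu_time, pcpu in cpuinfo:
        print('vCPU %d: state=%d, time=%dns, running on pCPU %d'
              % (number, state, cpu_time, pcpu))
    for vcpu, pins in enumerate(cpumap):
        # pins holds one boolean per host pCPU.
        allowed = [i for i, ok in enumerate(pins) if ok]
        print('vCPU %d allowed on pCPUs %s' % (vcpu, allowed))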

[libvirt] [bug] problem with python interface, dom.vcpus() cpu info doesn't match cpu map

2014-05-20 Thread Chris Friesen
Hi, I was playing around with vcpupin and emulatorpin and managed to get into a strange state. From within python I get the following: (Pdb) dom = self._lookup_by_name(instance.name) (Pdb) dom.vcpus() ([(0, 1, 597000L, 2), (1, 1, 458000L, 3)], [(False, False, True, False), (False,

Re: [libvirt] [Qemu-devel] qemu leaving unix sockets behind after VM is shut down

2014-05-06 Thread Chris Friesen
On 05/06/2014 07:39 AM, Stefan Hajnoczi wrote: On Tue, Apr 01, 2014 at 02:34:58PM -0600, Chris Friesen wrote: When running qemu with something like this -device virtio-serial \ -chardev socket,path=/tmp/foo,server,nowait,id=foo \ -device virtserialport,chardev=foo,name=host.port.0 the VM

Re: [libvirt] why doesn't libvirt let qemu autostart live-migrated VMs?

2014-04-15 Thread Chris Friesen
On 04/15/2014 02:28 AM, Daniel P. Berrange wrote: On Mon, Apr 14, 2014 at 05:50:07PM -0600, Chris Friesen wrote: Hi, I've been digging through the libvirt code and something that struck me was that it appears that when using qemu libvirt will migrate the instance with autostart disabled

[libvirt] why doesn't libvirt let qemu autostart live-migrated VMs?

2014-04-14 Thread Chris Friesen
Hi, I've been digging through the libvirt code and something that struck me was that it appears that when using qemu libvirt will migrate the instance with autostart disabled, then sit on the source host periodically polling for migration completion, then once the host detects that migration

[libvirt] [bug?] unix sockets opened via chardev devices not being closed on shutdown

2014-04-01 Thread Chris Friesen
I have a case where I'm creating a virtio channel between the host and guest using something like this: <channel type='unix'> <source mode='bind' path='/path/in/host/instance_name'/> <target type='virtio' name='name_in_guest'/> </channel> When qemu is started up this gets created as expected,

[libvirt] virsh domstate output when kvm killed vs guest OS panic

2013-09-05 Thread Chris Friesen
Hi, If I kill a libvirt-managed kvm process with kill -9, running "virsh domstate --reason name" gives "shut off (crashed)". Looking at the code, that corresponds to VIR_DOMAIN_SHUTOFF/VIR_DOMAIN_SHUTOFF_CRASHED. The comment says that VIR_DOMAIN_SHUTOFF_CRASHED corresponds to "domain crashed".
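Programmatically the same state/reason pair comes from virDomainGetState; a sketch (domain name hypothetical):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('example-domain')  # hypothetical name

    state, reason = dom.state()
    if state == libvirt.VIR_DOMAIN_SHUTOFF:
        # The reason separates a kill -9 (VIR_DOMAIN_SHUTOFF_CRASHED) from
        # a clean guest shutdown (VIR_DOMAIN_SHUTOFF_SHUTDOWN), etc.
        print('crashed:', reason == libvirt.VIR_DOMAIN_SHUTOFF_CRASHED)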