Re: [libvirt] [PATCH] qemu: handle multicast overflow on macvtap for NIC_RX_FILTER_CHANGED
On Wed, Nov 21, 2018 at 10:04:56AM -0500, Jason Baron wrote:
> Guest network devices can set 'overflow' when there are a number of multicast
> ips configured. For virtio_net, the limit is only 64. In this case, the list
> of mac addresses is empty and the 'overflow' condition is set. Thus, the guest
> will currently receive no multicast traffic in this state.
>
> When 'overflow' is set in the guest, let's turn this into ALLMULTI on the
> host.
>
> Signed-off-by: Jason Baron

Good catch, thanks!

Acked-by: Michael S. Tsirkin

> ---
>  src/qemu/qemu_driver.c | 26 +++---
>  1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 7fb9102..ea36db8 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -4443,11 +4443,11 @@ static void
>  syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter,
>                           virNetDevRxFilterPtr hostFilter)
>  {
> -    if (hostFilter->multicast.mode != guestFilter->multicast.mode) {
> +    if (hostFilter->multicast.mode != guestFilter->multicast.mode ||
> +        guestFilter->multicast.overflow) {
>          switch (guestFilter->multicast.mode) {
>              case VIR_NETDEV_RX_FILTER_MODE_ALL:
>                  if (virNetDevSetRcvAllMulti(ifname, true)) {
> -
>                      VIR_WARN("Couldn't set allmulticast flag to 'on' for "
>                               "device %s while responding to "
>                               "NIC_RX_FILTER_CHANGED", ifname);
> @@ -4455,17 +4455,29 @@ syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter,
>                  break;
>
>              case VIR_NETDEV_RX_FILTER_MODE_NORMAL:
> -                if (virNetDevSetRcvMulti(ifname, true)) {
> +                if (guestFilter->multicast.overflow &&
> +                    (hostFilter->multicast.mode == VIR_NETDEV_RX_FILTER_MODE_ALL)) {
> +                    break;
> +                }
> +
> +                if (virNetDevSetRcvMulti(ifname, true)) {
>                      VIR_WARN("Couldn't set multicast flag to 'on' for "
>                               "device %s while responding to "
>                               "NIC_RX_FILTER_CHANGED", ifname);
>                  }
>
> -                if (virNetDevSetRcvAllMulti(ifname, false)) {
> -                    VIR_WARN("Couldn't set allmulticast flag to 'off' for "
> -                             "device %s while responding to "
> -                             "NIC_RX_FILTER_CHANGED", ifname);
> +                if (guestFilter->multicast.overflow == true) {
> +                    if (virNetDevSetRcvAllMulti(ifname, true)) {
> +                        VIR_WARN("Couldn't set allmulticast flag to 'on' for "
> +                                 "device %s while responding to "
> +                                 "NIC_RX_FILTER_CHANGED", ifname);
> +                    }
> +                } else {
> +                    if (virNetDevSetRcvAllMulti(ifname, false)) {
> +                        VIR_WARN("Couldn't set allmulticast flag to 'off' for "
> +                                 "device %s while responding to "
> +                                 "NIC_RX_FILTER_CHANGED", ifname);
> +                    }
>                  }
>                  break;
>
> --
> 2.7.4

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCHv2 01/16] qemu: Add KVM CPUs into cache only if KVM is present
On Wed, Nov 21, 2018 at 20:50:50 +0300, Roman Bolshakov wrote:
> On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
> > On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
> > > From: Roman Bolshakov
> > >
> > > virQEMUCapsFormatCache/virQEMUCapsLoadCache adds/reads KVM CPUs to/from
> > > capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired
> > > side-effects when KVM CPUs are present in the cache on a platform that
> > > doesn't support it, e.g. macOS or Linux without KVM support.
> > >
> > > Signed-off-by: Daniel P. Berrangé
> > > Signed-off-by: Roman Bolshakov
> >
> > This doesn't look like a patch written by Daniel so why did you include
> > the Signed-off-by line? Or did I miss anything?
>
> Daniel kindly helped to root cause an issue I had with
> qemucapabilitiestest in v1:
> https://www.redhat.com/archives/libvir-list/2018-November/msg00740.html
>
> and provided a diff that resolves the issue:
> https://www.redhat.com/archives/libvir-list/2018-November/msg00767.html

I see, I missed the diff.

> Should I remove his Signed-off-by tag?

Dunno, I guess it's up to Daniel. But if the final patch is going to
look very differently anyway, I don't see a reason to keep the tag.

> > > ---
> > >  src/qemu/qemu_capabilities.c | 18 --
> > >  1 file changed, 12 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> > > index fde27010e4..4ba8369e3a 100644
> > > --- a/src/qemu/qemu_capabilities.c
> > > +++ b/src/qemu/qemu_capabilities.c
> > > @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch,
> > >      }
> > >      VIR_FREE(str);
> > >
> > > -    if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
> > > +    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
> > > +         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
> > >          virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
> > >          goto cleanup;
> >
> > I don't think we should introduce these guards in all the places. All
> > the loading and formatting functions should return success if the
> > appropriate info is not available, so you should just make sure the
> > relevant info is NULL in qemuCaps.
>
> Do you mean the capabilities checks should be moved inside the
> functions?

virQEMUCapsLoadHostCPUModelInfo does (not literally, but effectively)

    hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt);
    if (!hostCPUNode)
        return 0;

virQEMUCapsLoadCPUModels does

    n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, );
    if (n == 0)
        return 0;

virQEMUCapsInitHostCPUModel always fills in something and your check
should probably remain in place for it.

virQEMUCapsFormatHostCPUModelInfo does

    virQEMUCapsHostCPUDataPtr cpuData = >kvmCPU;
    qemuMonitorCPUModelInfoPtr model = cpuData->info;

    if (!model)
        return;

virQEMUCapsFormatCPUModels

    cpus = qemuCaps->kvmCPUModels;
    if (!cpus)
        return;

So to me it looks like all functions are ready to see NULL pointers and
just do nothing if that's the case. Thus the only thing this patch
should need to do is to make sure virQEMUCapsInitHostCPUModel does not
set something non-NULL there.

Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCHv2 01/16] qemu: Add KVM CPUs into cache only if KVM is present
On Wed, Nov 21, 2018 at 05:04:07PM +0100, Jiri Denemark wrote:
> On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
> > From: Roman Bolshakov
> >
> > virQEMUCapsFormatCache/virQEMUCapsLoadCache adds/reads KVM CPUs to/from
> > capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired
> > side-effects when KVM CPUs are present in the cache on a platform that
> > doesn't support it, e.g. macOS or Linux without KVM support.
> >
> > Signed-off-by: Daniel P. Berrangé
> > Signed-off-by: Roman Bolshakov
>
> This doesn't look like a patch written by Daniel so why did you include
> the Signed-off-by line? Or did I miss anything?

Daniel kindly helped to root cause an issue I had with
qemucapabilitiestest in v1:
https://www.redhat.com/archives/libvir-list/2018-November/msg00740.html

and provided a diff that resolves the issue:
https://www.redhat.com/archives/libvir-list/2018-November/msg00767.html

Should I remove his Signed-off-by tag?

> > ---
> >  src/qemu/qemu_capabilities.c | 18 --
> >  1 file changed, 12 insertions(+), 6 deletions(-)
> >
> > diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> > index fde27010e4..4ba8369e3a 100644
> > --- a/src/qemu/qemu_capabilities.c
> > +++ b/src/qemu/qemu_capabilities.c
> > @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch,
> > >      }
> >      VIR_FREE(str);
> >
> > -    if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
> > +    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
> > +         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
> >          virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
> >          goto cleanup;
>
> I don't think we should introduce these guards in all the places. All
> the loading and formatting functions should return success if the
> appropriate info is not available, so you should just make sure the
> relevant info is NULL in qemuCaps.

Do you mean the capabilities checks should be moved inside the
functions? Either way they're needed to avoid loading KVM cpus into
QEMU caps cache on the hosts without KVM support.

> > @@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch,
> >      if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0)
> >          goto cleanup;
> >
> > -    virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
> > +    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
> > +        virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
>
> Please, follow our coding style, i.e., indent by 4 spaces.
>
> >      virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
> >
> >      ret = 0;
> ...

Will do, thank you for catching this!

Best regards,
Roman

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH go] Add back compat constants for iothreads
Signed-off-by: Daniel P. Berrangé
---
 domain_compat.h | 16
 1 file changed, 16 insertions(+)

Pushed as a build break fix for old libvirt

diff --git a/domain_compat.h b/domain_compat.h
index 371bcc4..19a3e24 100644
--- a/domain_compat.h
+++ b/domain_compat.h
@@ -917,4 +917,20 @@ struct _virDomainInterface {
 #define VIR_DOMAIN_SHUTOFF_DAEMON 8
 #endif

+#ifndef VIR_DOMAIN_STATS_IOTHREAD
+#define VIR_DOMAIN_STATS_IOTHREAD (1 << 7)
+#endif
+
+#ifndef VIR_DOMAIN_IOTHREAD_POLL_GROW
+#define VIR_DOMAIN_IOTHREAD_POLL_GROW "poll_grow"
+#endif
+
+#ifndef VIR_DOMAIN_IOTHREAD_POLL_SHRINK
+#define VIR_DOMAIN_IOTHREAD_POLL_SHRINK "poll_shrink"
+#endif
+
+#ifndef VIR_DOMAIN_IOTHREAD_POLL_MAX_NS
+#define VIR_DOMAIN_IOTHREAD_POLL_MAX_NS "poll_max_ns"
+#endif
+
 #endif /* LIBVIRT_GO_DOMAIN_COMPAT_H__ */
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH for-4.0 v2] virtio: Provide version-specific variants of virtio PCI devices
On Tue, 2018-11-20 at 14:14 -0500, Michael S. Tsirkin wrote:
> On Tue, Nov 20, 2018 at 01:27:05PM +0100, Andrea Bolognani wrote:
> > On Mon, 2018-11-19 at 14:14 -0500, Michael S. Tsirkin wrote:
> > > Well it works now - connect it to a bus and it figures out whether it
> > > should do transitional or not. You can force transitional in PCIe anyway
> > > but then you are limited to about 15 devices - probably sufficient for
> > > most people ...
> >
> > That's not how it works, though: current virtio-*-pci devices will
> > be transitional (and thus support older guest OS) or not based on
> > the kind of slot you plug them into.
> >
> > From the management point of view that's problematic, because libvirt
> > (which takes care of the virtual hardware, including assigning PCI
> > addresses to devices) has no knowledge of the guest OS running on
> > said hardware, and management apps (which know about the guest OS and
> > can figure out its capabilities using libosinfo) don't want to be in
> > the business of assigning PCI addresses themselves.
> >
> > Having separate transitional and non-transitional variants solves the
> > issue because now management apps can query libosinfo to figure out
> > whether the guest OS supports non-transitional virtio devices, and
> > based on that they can ask libvirt to use either the transitional or
> > non-transitional variant; from that, libvirt will be able to choose
> > the correct slot for the device.
> >
> > None of the above quite works if we have a single variant that
> > morphs based on the slot, as we have today.
>
> So can we get an ack on the patchset then?

Sure thing - whatever it might be worth :)

Acked-by: Andrea Bolognani

--
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCHv2 01/16] qemu: Add KVM CPUs into cache only if KVM is present
On Wed, Nov 21, 2018 at 17:01:44 +0300, Roman Bolshakov wrote:
> From: Roman Bolshakov
>
> virQEMUCapsFormatCache/virQEMUCapsLoadCache adds/reads KVM CPUs to/from
> capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired
> side-effects when KVM CPUs are present in the cache on a platform that
> doesn't support it, e.g. macOS or Linux without KVM support.
>
> Signed-off-by: Daniel P. Berrangé
> Signed-off-by: Roman Bolshakov

This doesn't look like a patch written by Daniel so why did you include
the Signed-off-by line? Or did I miss anything?

> ---
>  src/qemu/qemu_capabilities.c | 18 --
>  1 file changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> index fde27010e4..4ba8369e3a 100644
> --- a/src/qemu/qemu_capabilities.c
> +++ b/src/qemu/qemu_capabilities.c
> @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch,
>      }
>      VIR_FREE(str);
>
> -    if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
> +    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
> +         virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
>          virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
>          goto cleanup;

I don't think we should introduce these guards in all the places. All
the loading and formatting functions should return success if the
appropriate info is not available, so you should just make sure the
relevant info is NULL in qemuCaps.

> -    if (virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 ||
> +    if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) &&
> +         virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) ||
>          virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0)
>          goto cleanup;
>
> @@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch,
>      if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0)
>          goto cleanup;
>
> -    virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);
> +    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM))
> +        virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM);

Please, follow our coding style, i.e., indent by 4 spaces.

>      virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU);
>
>      ret = 0;
...

Jirka

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH] qemu: handle multicast overflow on macvtap for NIC_RX_FILTER_CHANGED
Guest network devices can set 'overflow' when there are a number of multicast
ips configured. For virtio_net, the limit is only 64. In this case, the list
of mac addresses is empty and the 'overflow' condition is set. Thus, the guest
will currently receive no multicast traffic in this state.

When 'overflow' is set in the guest, let's turn this into ALLMULTI on the
host.

Signed-off-by: Jason Baron
---
 src/qemu/qemu_driver.c | 26 +++---
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7fb9102..ea36db8 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4443,11 +4443,11 @@ static void
 syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter,
                          virNetDevRxFilterPtr hostFilter)
 {
-    if (hostFilter->multicast.mode != guestFilter->multicast.mode) {
+    if (hostFilter->multicast.mode != guestFilter->multicast.mode ||
+        guestFilter->multicast.overflow) {
         switch (guestFilter->multicast.mode) {
             case VIR_NETDEV_RX_FILTER_MODE_ALL:
                 if (virNetDevSetRcvAllMulti(ifname, true)) {
-
                     VIR_WARN("Couldn't set allmulticast flag to 'on' for "
                              "device %s while responding to "
                              "NIC_RX_FILTER_CHANGED", ifname);
@@ -4455,17 +4455,29 @@ syncNicRxFilterMultiMode(char *ifname, virNetDevRxFilterPtr guestFilter,
                 break;

             case VIR_NETDEV_RX_FILTER_MODE_NORMAL:
-                if (virNetDevSetRcvMulti(ifname, true)) {
+                if (guestFilter->multicast.overflow &&
+                    (hostFilter->multicast.mode == VIR_NETDEV_RX_FILTER_MODE_ALL)) {
+                    break;
+                }
+
+                if (virNetDevSetRcvMulti(ifname, true)) {
                     VIR_WARN("Couldn't set multicast flag to 'on' for "
                              "device %s while responding to "
                              "NIC_RX_FILTER_CHANGED", ifname);
                 }

-                if (virNetDevSetRcvAllMulti(ifname, false)) {
-                    VIR_WARN("Couldn't set allmulticast flag to 'off' for "
-                             "device %s while responding to "
-                             "NIC_RX_FILTER_CHANGED", ifname);
+                if (guestFilter->multicast.overflow == true) {
+                    if (virNetDevSetRcvAllMulti(ifname, true)) {
+                        VIR_WARN("Couldn't set allmulticast flag to 'on' for "
+                                 "device %s while responding to "
+                                 "NIC_RX_FILTER_CHANGED", ifname);
+                    }
+                } else {
+                    if (virNetDevSetRcvAllMulti(ifname, false)) {
+                        VIR_WARN("Couldn't set allmulticast flag to 'off' for "
+                                 "device %s while responding to "
+                                 "NIC_RX_FILTER_CHANGED", ifname);
+                    }
                 }
                 break;
--
2.7.4

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 15/16] docs: Note hvf support for domain elements
Many domain elements have "QEMU and KVM only" or "QEMU/KVM since x.y.z"
remarks. Most of the elements work for HVF domain, so it makes sense to
add respective notices for HVF domain. All the elements have been
manually tested.

Signed-off-by: Roman Bolshakov
---
 docs/formatdomain.html.in | 133 ++
 1 file changed, 77 insertions(+), 56 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 25dd4bbbd6..b1a64c7c74 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -158,10 +158,10 @@
 which is specified by absolute path, used to assist the
 domain creation process. It is used by Xen fully
 virtualized domains as well as setting the QEMU BIOS file
-path for QEMU/KVM domains. Xen since 0.1.0,
-QEMU/KVM since 0.9.12 Then, since
-1.2.8 it's possible for the element to have two
-optional attributes: readonly (accepted values are
+path for QEMU/KVM/HVF domains. Xen since 0.1.0,
+QEMU/KVM since 0.9.12, HVF since 4.10.0 Then, since 1.2.8 it's possible for the element to have
+two optional attributes: readonly (accepted values are
 yes and no) to reflect the fact that
 the image should be writable or read-only. The second
 attribute type accepts values rom and
@@ -680,7 +680,7 @@
 IOThreads are dedicated event loop threads for supported disk
 devices to perform block I/O requests in order to improve
 scalability especially on an SMP host/guest with many LUNs.
-Since 1.2.8 (QEMU only)
+QEMU/KVM since 1.2.8, HVF since 4.10.0
@@ -1603,12 +1603,13 @@
 Both host-model and host-passthrough modes
 make sense when a domain can run directly on the host CPUs (for
-example, domains with type kvm). The actual host CPU is
-irrelevant for domains with emulated virtual CPUs (such as domains with
-type qemu). However, for backward compatibility
-host-model may be implemented even for domains running on
-emulated CPUs in which case the best CPU the hypervisor is able to
-emulate may be used rather then trying to mimic the host CPU model.
+example, domains with type kvm or hvf). The
+actual host CPU is irrelevant for domains with emulated virtual CPUs
+(such as domains with type qemu). However, for backward
+compatibility host-model may be implemented even for
+domains running on emulated CPUs in which case the best CPU the
+hypervisor is able to emulate may be used rather then trying to mimic
+the host CPU model.
 model
@@ -1902,12 +1903,12 @@
-QEMU/KVM supports the on_poweroff and on_reboot
-events handling the destroy and restart actions.
-The preserve action for an on_reboot event
-is treated as a destroy and the rename-restart
-action for an on_poweroff event is treated as a
-restart event.
+QEMU/KVM/HVF domains support the on_poweroff and
+on_reboot events handling the destroy and
+restart actions. The preserve action for an
+on_reboot event is treated as a destroy and the
+rename-restart action for an on_poweroff event is
+treated as a restart event.
@@ -2043,7 +2044,7 @@
 to address more than 4 GB of memory.
 acpi
 ACPI is useful for power management, for example, with
-KVM guests it is required for graceful shutdown to work.
+KVM or HVF guests it is required for graceful shutdown to work.
 apic
 APIC allows the use of programmable IRQ
@@ -2286,7 +2287,8 @@
 vmcoreinfo
 Enable QEMU vmcoreinfo device to let the guest kernel save debug
-details. Since 4.4.0 (QEMU only)
+details. QEMU/KVM since 4.4.0, HVF since
+4.10.0
 htm
 Configure HTM (Hardware Transational Memory) availability for
@@ -3559,7 +3561,7 @@
 Copy-on-read avoids accessing the same backing file sectors
 repeatedly and is useful when the backing file is over a slow
 network. By default copy-on-read is off.
-Since 0.9.10 (QEMU and KVM only)
+QEMU/KVM since 0.9.10, HVF since 4.10.0
 The optional discard attribute controls whether
@@ -3567,7 +3569,7 @@
 ignored or passed to the filesystem. The value can be either
 "unmap" (allow the discard request to be passed) or "ignore"
 (ignore the discard request).
-Since 1.0.6 (QEMU and KVM only)
+QEMU/KVM since 1.0.6, HVF since 4.10.0
 The optional detect_zeroes attribute controls whether
@@ -3723,7 +3725,7 @@
 blockio
 If present, the blockio element allows
 to override any of the block device properties listed below.
-Since 0.10.2
[libvirt] [PATCHv2 16/16] docs: Add support page for libvirt on macOS
While at it, rename OS-X on index page to macOS.

Signed-off-by: Roman Bolshakov
---
 docs/docs.html.in  |   3 +
 docs/index.html.in |   3 +-
 docs/macos.html.in | 229 +
 3 files changed, 234 insertions(+), 1 deletion(-)
 create mode 100644 docs/macos.html.in

diff --git a/docs/docs.html.in b/docs/docs.html.in
index 40e0e3b82e..84a51a55fb 100644
--- a/docs/docs.html.in
+++ b/docs/docs.html.in
@@ -12,6 +12,9 @@
 Windows
 Downloads for Windows
+macOS
+Working with libvirt on macOS
+
 Migration
 Migrating guests between machines
diff --git a/docs/index.html.in b/docs/index.html.in
index b02802fdd9..34b491ec69 100644
--- a/docs/index.html.in
+++ b/docs/index.html.in
@@ -39,7 +39,8 @@
 LXC, BHyve and more
-targets Linux, FreeBSD, Windows and OS-X
+targets Linux, FreeBSD, Windows and
+ macOS
 is used by many applications
 Recent / forthcoming release changes
diff --git a/docs/macos.html.in b/docs/macos.html.in
new file mode 100644
index 00..54c93ea2fb
--- /dev/null
+++ b/docs/macos.html.in
@@ -0,0 +1,229 @@
+http://www.w3.org/1999/xhtml;>
+
+macOS support
+
+ Libvirt works both as client and server (for
+ "qemu" domain) on macOS High Sierra (10.13) and macOS Mojave (10.14)
+ since 4.7.0. Other macOS variants likely work but we neither tested nor
+ received reports for them.
+
+ "hvf" domain type adds support of https://developer.apple.com/documentation/hypervisor;>
+ Hypervisor.framework since 4.10.0. To use "hvf" domain, QEMU must
+ be at least 2.12 and macOS must be no less than Yosemite (10.10). "hvf"
+ domain type is similar to "kvm" but it has less features.
+
+ Hypervisor.framework is available on your machine if the sysctl command
+ returns 1:
+
+ sysctl -n kern.hv_support
+
+Installation
+
+ libvirt client (virsh), server (libvirtd) and development headers can be
+ installed from https://brew.sh;>homebrew:
+
+ brew install libvirt
+
+ http://virt-manager.org;>virt-manager and virt-viewer can be
+ installed from source via https://github.com/jeffreywildman/homebrew-virt-manager;>
+ Jeffrey Wildman's tap:
+
+ brew tap jeffreywildman/homebrew-virt-manager
+ brew install virt-manager virt-viewer
+
+Running libvirtd locally
+
+ The server can be started manually:
+ libvirtd
+ or on system boot:
+ brew services start libvirt
+
+ Once started, you can use virsh to work with libvirtd:
+ virsh define domain.xml
+ virsh start domain
+ virsh shutdown domain
+
+ For more details on virsh, please see virsh
+ command reference or built-in help:
+ virsh help
+
+ Domain XML examples can be found on QEMU
+ driver page. Full reference is available on domain XML format page.
+
+ You can use virt-manager to connect to libvirtd (connection URI must be
+ specified on the first connection, then it'll be possible to omit it):
+ virt-manager -c qemu:///session
+ or, if you only need an access to the virtual display of a VM you can use
+ virt-viewer:
+ virt-viewer -c qemu:///session
+
+Working with external hypervisors
+
+ Details on the example domain XML files, capabilities and connection
+ string syntax used for connecting to external hypervisors can be found
+ online on hypervisor specific driver
+ pages.
+
+TLS Certificates
+
+ TLS certificates must be placed in the correct locations, before you will
+ be able to connect to QEMU servers over TLS.
+
+ Information on generating TLS certificates can be found here:
+
+http://wiki.libvirt.org/page/TLSSetup;>http://wiki.libvirt.org/page/TLSSetup
+
+ The Certificate Authority (CA) certificate file must be placed in:
+
+ ~/.cache/libvirt/pki/CA/cacert.pem
+
+ The Client certificate file must be placed in:
+
+ ~/.cache/libvirt/pki/libvirt/clientcert.pem
+
+ The Client key file must be placed in:
+
+ ~/.cache/libvirt/pki/libvirt/private/clientkey.pem
+
+Known issues
+
+ This is a list of issues that can be easily fixed and provide
+ substantial improvement of user experience:
+
+virt-install doesn't work unless disks are created upfront. The reason
+is because VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA sets
+preallocate=falloc which is not supported by qemu-img on macOS.
+
+"hvf" is not default domain type when virt-install connects to the
+local libvirtd on macOS
+
+QXL VGA device and SPICE display cannot be used unless QEMU
[libvirt] [PATCHv2 13/16] news: Mention hvf domain type
Signed-off-by: Roman Bolshakov
---
 docs/news.xml | 12
 1 file changed, 12 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index 4406aeb775..90e378187d 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -68,6 +68,18 @@
 be viewed via the domain statistics.
+
+ qemu: Add hvf domain type for Hypervisor.framework
+
+ QEMU introduced experimental support of Hypervisor.framework
+ since 2.12.
+ It's supported on machines with Intel VT-x feature set that includes
+ Extended Page Tables (EPT) and Unrestricted Mode since macOS 10.10.
+
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 14/16] docs: Add hvf on QEMU driver page
It's worth making the domain type a little bit more visible than a row
in news. An example of hvf domain is available on QEMU driver page.

While at it, mention Hypervisor.framework on index page.

Signed-off-by: Roman Bolshakov
---
 docs/drvqemu.html.in | 49 +---
 docs/index.html.in   |  1 +
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/docs/drvqemu.html.in b/docs/drvqemu.html.in
index 0d14027646..7c511ce3b6 100644
--- a/docs/drvqemu.html.in
+++ b/docs/drvqemu.html.in
@@ -2,13 +2,16 @@
 http://www.w3.org/1999/xhtml;>
-KVM/QEMU hypervisor driver
+QEMU/KVM/HVF hypervisor driver
-
- The libvirt KVM/QEMU driver can manage any QEMU emulator from
- version 1.5.0 or later.
+ The libvirt QEMU driver can manage any QEMU emulator from
+ version 1.5.0 or later. It supports multiple QEMU accelerators: software
+ emulation also known as TCG, hardware-assisted virtualization on Linux
+ with KVM and hardware-assisted virtualization on macOS with
+ Hypervisor.framework (since 4.10.0).

 Project Links
@@ -21,6 +24,9 @@
 The https://wiki.qemu.org/Index.html;>QEMU emulator
+
+https://developer.apple.com/documentation/hypervisor;>Hypervisor.framework reference
+

 Deployment pre-requisites
@@ -41,6 +47,13 @@
 node. If both are found, then KVM fullyvirtualized, hardware
 accelerated guests will be available.
+
+Hypervisor.framework (HVF): The driver will probe
+sysctl for the presence of
+Hypervisor.framework. If it is found and QEMU is newer
+than 2.12, then it will be possible to create hardware accelerated
+guests.
+

 Connections to QEMU driver
@@ -640,5 +653,35 @@
 $ virsh domxml-to-native qemu-argv demo.xml
 /devices
 /domain
+
+HVF hardware accelerated guest on x86_64
+
+domain type='hvf'
+  namehvf-demo/name
+  uuid4dea24b3-1d52-d8f3-2516-782e98a23fa0/uuid
+  memory131072/memory
+  vcpu1/vcpu
+  os
+    type arch="x86_64"hvm/type
+  /os
+  features
+    acpi/
+  /features
+  clock sync="localtime"/
+  devices
+    emulator/usr/local/bin/qemu-system-x86_64/emulator
+    controller type='scsi' index='0' model='virtio-scsi'/
+    disk type='volume' device='disk'
+      driver name='qemu' type='qcow2'/
+      source pool='default' volume='myos'/
+      target bus='scsi' dev='sda'/
+    /disk
+    interface type='user'
+      mac address='24:42:53:21:52:45'/
+      model type='virtio'/
+    /interface
+    graphics type='vnc' port='-1'/
+  /devices
+/domain
+
diff --git a/docs/index.html.in b/docs/index.html.in
index 1f9f448399..b02802fdd9 100644
--- a/docs/index.html.in
+++ b/docs/index.html.in
@@ -32,6 +32,7 @@
 is accessible from C, Python, Perl, Java and more
 is licensed under open source licenses
 supports KVM,
+ Hypervisor.framework,
 QEMU, Xen, Virtuozzo, VMWare ESX,
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 10/16] qemu: Introduce virQEMUCapsAccelStr
This makes it possible to add more accelerators by touching less code
and reduces code duplication.

Signed-off-by: Roman Bolshakov
---
 src/qemu/qemu_capabilities.c | 21 ++---
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 1c6b79594d..1cee9a833b 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -658,6 +658,16 @@ virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps)
     return VIR_DOMAIN_VIRT_QEMU;
 }

+static const char *
+virQEMUCapsAccelStr(virDomainVirtType type)
+{
+    if (type == VIR_DOMAIN_VIRT_KVM) {
+        return "kvm";
+    } else {
+        return "tcg";
+    }
+}
+
 /* Checks whether a domain with @guest arch can run natively on @host.
  */
 bool
@@ -3670,7 +3680,7 @@ virQEMUCapsFormatHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
 {
     virQEMUCapsHostCPUDataPtr cpuData = virQEMUCapsGetHostCPUData(qemuCaps, type);
     qemuMonitorCPUModelInfoPtr model = cpuData->info;
-    const char *typeStr = type == VIR_DOMAIN_VIRT_KVM ? "kvm" : "tcg";
+    const char *typeStr = virQEMUCapsAccelStr(type);
     size_t i;

     if (!model)
@@ -3725,16 +3735,13 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps,
                            virDomainVirtType type)
 {
     virDomainCapsCPUModelsPtr cpus;
-    const char *typeStr;
+    const char *typeStr = virQEMUCapsAccelStr(type);
     size_t i;

-    if (virQEMUCapsTypeIsAccelerated(type)) {
-        typeStr = "kvm";
+    if (virQEMUCapsTypeIsAccelerated(type))
         cpus = qemuCaps->accelCPUModels;
-    } else {
-        typeStr = "tcg";
+    else
         cpus = qemuCaps->tcgCPUModels;
-    }

     if (!cpus)
         return;
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 11/16] qemu: Make error message accel-agnostic
With more acceleration types, KVM should be used only in error messages
related to KVM.

Signed-off-by: Roman Bolshakov
---
 src/qemu/qemu_capabilities.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 1cee9a833b..8a1fb2b5d9 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4993,8 +4993,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache,
     if (virQEMUCapsTypeIsAccelerated(virttype) &&
         !virQEMUCapsTypeIsAccelerated(capsType)) {
         virReportError(VIR_ERR_INVALID_ARG,
-                       _("KVM is not supported by '%s' on this host"),
-                       binary);
+                       _("the accel '%s' is not supported by '%s' on this host"),
+                       virQEMUCapsAccelStr(virttype), binary);
         goto cleanup;
     }
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 05/16] qemu: Expose hvf domain type if hvf is supported
Signed-off-by: Roman Bolshakov
Reviewed-by: Daniel P. Berrangé
---
 src/qemu/qemu_capabilities.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 5ebe3f1afe..645ce2c89e 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -859,6 +859,17 @@ virQEMUCapsInitGuestFromBinary(virCapsPtr caps,
         }
     }

+    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) {
+        if (virCapabilitiesAddGuestDomain(guest,
+                                          VIR_DOMAIN_VIRT_HVF,
+                                          NULL,
+                                          NULL,
+                                          0,
+                                          NULL) == NULL) {
+            goto cleanup;
+        }
+    }
+
     if ((ARCH_IS_X86(guestarch) || guestarch == VIR_ARCH_AARCH64) &&
         virCapabilitiesAddGuestFeature(guest, "acpi", true, true) == NULL) {
         goto cleanup;
--
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 07/16] qemu: Introduce virQEMUCapsTypeIsAccelerated
It replaces hardcoded checks that select accelCPU/accelCPUModels (formerly known as kvmCPU/kvmCPUModels) for KVM. It'll be cleaner to use the function when multiple accelerators are supported in qemu driver. Explicit KVM domain checks should be done only when a feature is available only for KVM. Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 28 +--- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index ad15d2853e..e302fbb48f 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -637,6 +637,11 @@ static const char *virQEMUCapsArchToString(virArch arch) return virArchToString(arch); } +static bool +virQEMUCapsTypeIsAccelerated(virDomainVirtType type) +{ +return type == VIR_DOMAIN_VIRT_KVM; +} /* Checks whether a domain with @guest arch can run natively on @host. */ @@ -1794,7 +1799,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, size_t i; virDomainCapsCPUModelsPtr cpus = NULL; -if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->accelCPUModels) +if (virQEMUCapsTypeIsAccelerated(type) && qemuCaps->accelCPUModels) cpus = qemuCaps->accelCPUModels; else if (type == VIR_DOMAIN_VIRT_QEMU && qemuCaps->tcgCPUModels) cpus = qemuCaps->tcgCPUModels; @@ -1803,7 +1808,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, if (!(cpus = virDomainCapsCPUModelsNew(count))) return -1; -if (type == VIR_DOMAIN_VIRT_KVM) +if (virQEMUCapsTypeIsAccelerated(type)) qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -1822,7 +1827,7 @@ virDomainCapsCPUModelsPtr virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { -if (type == VIR_DOMAIN_VIRT_KVM) +if (virQEMUCapsTypeIsAccelerated(type)) return qemuCaps->accelCPUModels; else return qemuCaps->tcgCPUModels; @@ -1833,7 +1838,7 @@ static virQEMUCapsHostCPUDataPtr virQEMUCapsGetHostCPUData(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { -if (type == VIR_DOMAIN_VIRT_KVM) +if 
(virQEMUCapsTypeIsAccelerated(type)) return >accelCPU; else return >tcgCPU; @@ -1889,7 +1894,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, switch (mode) { case VIR_CPU_MODE_HOST_PASSTHROUGH: -return type == VIR_DOMAIN_VIRT_KVM && +return virQEMUCapsTypeIsAccelerated(type) && virQEMUCapsGuestIsNative(caps->host.arch, qemuCaps->arch); case VIR_CPU_MODE_HOST_MODEL: @@ -1897,7 +1902,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, VIR_QEMU_CAPS_HOST_CPU_REPORTED); case VIR_CPU_MODE_CUSTOM: -if (type == VIR_DOMAIN_VIRT_KVM) +if (virQEMUCapsTypeIsAccelerated(type)) cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; @@ -3004,7 +3009,7 @@ virQEMUCapsInitHostCPUModel(virQEMUCapsPtr qemuCaps, virArchToString(qemuCaps->arch), virDomainVirtTypeToString(type)); goto error; -} else if (type == VIR_DOMAIN_VIRT_KVM && +} else if (virQEMUCapsTypeIsAccelerated(type) && virCPUGetHostIsSupported(qemuCaps->arch)) { if (!(fullCPU = virCPUGetHost(qemuCaps->arch, VIR_CPU_TYPE_GUEST, NULL, NULL))) @@ -3231,7 +3236,7 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, if (!(cpus = virDomainCapsCPUModelsNew(n))) goto cleanup; -if (type == VIR_DOMAIN_VIRT_KVM) +if (virQEMUCapsTypeIsAccelerated(type)) qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -3708,7 +3713,7 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps, const char *typeStr; size_t i; -if (type == VIR_DOMAIN_VIRT_KVM) { +if (virQEMUCapsTypeIsAccelerated(type)) { typeStr = "kvm"; cpus = qemuCaps->accelCPUModels; } else { @@ -4966,7 +4971,8 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache, if (virttype == VIR_DOMAIN_VIRT_NONE) virttype = capsType; -if (virttype == VIR_DOMAIN_VIRT_KVM && capsType == VIR_DOMAIN_VIRT_QEMU) { +if (virQEMUCapsTypeIsAccelerated(virttype) && +!virQEMUCapsTypeIsAccelerated(capsType)) { virReportError(VIR_ERR_INVALID_ARG, _("KVM is not supported by '%s' on this host"), binary); @@ -5106,7 +5112,7 @@ 
virQEMUCapsFillDomainCPUCaps(virCapsPtr caps, if (virCPUGetModels(domCaps->arch, &models) >= 0) { virDomainCapsCPUModelsPtr cpus; -if (domCaps->virttype == VIR_DOMAIN_VIRT_KVM) +if (virQEMUCapsTypeIsAccelerated(domCaps->virttype)) cpus = qemuCaps->accelCPUModels;
[libvirt] [PATCHv2 09/16] qemu: Introduce virQEMUCapsToVirtType
The function is needed to support multiple accelerators without cluttering the codebase with conditionals. At first glance this might cause an issue related to the order in which capabilities are checked on a system with many accelerators, but in the current code base it should be fine because virQEMUCapsGetHostCPUData is not interested in the exact type of accelerator. Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 16 +++- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index f80ee62019..1c6b79594d 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -649,6 +649,15 @@ virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps) return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM); } +static virDomainVirtType +virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps) +{ +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) +return VIR_DOMAIN_VIRT_KVM; +else +return VIR_DOMAIN_VIRT_QEMU; +} + /* Checks whether a domain with @guest arch can run natively on @host. */ bool @@ -2423,7 +2432,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps, virtType = VIR_DOMAIN_VIRT_QEMU; model = "max"; } else { -virtType = VIR_DOMAIN_VIRT_KVM; +virtType = virQEMUCapsToVirtType(qemuCaps); model = "host"; } @@ -4969,10 +4978,7 @@ virQEMUCapsCacheLookupDefault(virFileCachePtr cache, machine = virQEMUCapsGetPreferredMachine(qemuCaps); } -if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) -capsType = VIR_DOMAIN_VIRT_KVM; -else -capsType = VIR_DOMAIN_VIRT_QEMU; +capsType = virQEMUCapsToVirtType(qemuCaps); if (virttype == VIR_DOMAIN_VIRT_NONE) virttype = capsType; -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 03/16] qemu: Define hvf capability
Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 1 + src/qemu/qemu_capabilities.h | 1 + 2 files changed, 2 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 4ba8369e3a..0bbda80782 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -515,6 +515,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST, /* 320 */ "memory-backend-memfd.hugetlb", "iothread.poll-max-ns", + "hvf", ); diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index c2caaf6fe1..7d08e8d243 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -499,6 +499,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */ /* 320 */ QEMU_CAPS_OBJECT_MEMORY_MEMFD_HUGETLB, /* -object memory-backend-memfd.hugetlb */ QEMU_CAPS_IOTHREAD_POLLING, /* -object iothread.poll-max-ns */ +QEMU_CAPS_HVF, /* Whether Hypervisor.framework is available */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 06/16] qemu: Rename kvmCPU to accelCPU
QEMU supports a number of accelerators. It'd be good to have more generic name for kvmCPUModels and kvmCPU. Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 36 ++-- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 645ce2c89e..ad15d2853e 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -578,7 +578,7 @@ struct _virQEMUCaps { virArch arch; -virDomainCapsCPUModelsPtr kvmCPUModels; +virDomainCapsCPUModelsPtr accelCPUModels; virDomainCapsCPUModelsPtr tcgCPUModels; size_t nmachineTypes; @@ -589,7 +589,7 @@ struct _virQEMUCaps { virSEVCapability *sevCapabilities; -virQEMUCapsHostCPUData kvmCPU; +virQEMUCapsHostCPUData accelCPU; virQEMUCapsHostCPUData tcgCPU; }; @@ -1564,9 +1564,9 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) ret->arch = qemuCaps->arch; -if (qemuCaps->kvmCPUModels) { -ret->kvmCPUModels = virDomainCapsCPUModelsCopy(qemuCaps->kvmCPUModels); -if (!ret->kvmCPUModels) +if (qemuCaps->accelCPUModels) { +ret->accelCPUModels = virDomainCapsCPUModelsCopy(qemuCaps->accelCPUModels); +if (!ret->accelCPUModels) goto error; } @@ -1576,7 +1576,7 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) goto error; } -if (virQEMUCapsHostCPUDataCopy(>kvmCPU, >kvmCPU) < 0 || +if (virQEMUCapsHostCPUDataCopy(>accelCPU, >accelCPU) < 0 || virQEMUCapsHostCPUDataCopy(>tcgCPU, >tcgCPU) < 0) goto error; @@ -1623,7 +1623,7 @@ void virQEMUCapsDispose(void *obj) } VIR_FREE(qemuCaps->machineTypes); -virObjectUnref(qemuCaps->kvmCPUModels); +virObjectUnref(qemuCaps->accelCPUModels); virObjectUnref(qemuCaps->tcgCPUModels); virBitmapFree(qemuCaps->flags); @@ -1636,7 +1636,7 @@ void virQEMUCapsDispose(void *obj) virSEVCapabilitiesFree(qemuCaps->sevCapabilities); -virQEMUCapsHostCPUDataClear(>kvmCPU); +virQEMUCapsHostCPUDataClear(>accelCPU); virQEMUCapsHostCPUDataClear(>tcgCPU); } @@ -1794,8 +1794,8 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr 
qemuCaps, size_t i; virDomainCapsCPUModelsPtr cpus = NULL; -if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->kvmCPUModels) -cpus = qemuCaps->kvmCPUModels; +if (type == VIR_DOMAIN_VIRT_KVM && qemuCaps->accelCPUModels) +cpus = qemuCaps->accelCPUModels; else if (type == VIR_DOMAIN_VIRT_QEMU && qemuCaps->tcgCPUModels) cpus = qemuCaps->tcgCPUModels; @@ -1804,7 +1804,7 @@ virQEMUCapsAddCPUDefinitions(virQEMUCapsPtr qemuCaps, return -1; if (type == VIR_DOMAIN_VIRT_KVM) -qemuCaps->kvmCPUModels = cpus; +qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; } @@ -1823,7 +1823,7 @@ virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) -return qemuCaps->kvmCPUModels; +return qemuCaps->accelCPUModels; else return qemuCaps->tcgCPUModels; } @@ -1834,7 +1834,7 @@ virQEMUCapsGetHostCPUData(virQEMUCapsPtr qemuCaps, virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) -return >kvmCPU; +return >accelCPU; else return >tcgCPU; } @@ -1898,7 +1898,7 @@ virQEMUCapsIsCPUModeSupported(virQEMUCapsPtr qemuCaps, case VIR_CPU_MODE_CUSTOM: if (type == VIR_DOMAIN_VIRT_KVM) -cpus = qemuCaps->kvmCPUModels; +cpus = qemuCaps->accelCPUModels; else cpus = qemuCaps->tcgCPUModels; return cpus && cpus->nmodels > 0; @@ -2385,7 +2385,7 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr qemuCaps, if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) qemuCaps->tcgCPUModels = models; else -qemuCaps->kvmCPUModels = models; +qemuCaps->accelCPUModels = models; return 0; } @@ -3232,7 +3232,7 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, goto cleanup; if (type == VIR_DOMAIN_VIRT_KVM) -qemuCaps->kvmCPUModels = cpus; +qemuCaps->accelCPUModels = cpus; else qemuCaps->tcgCPUModels = cpus; @@ -3710,7 +3710,7 @@ virQEMUCapsFormatCPUModels(virQEMUCapsPtr qemuCaps, if (type == VIR_DOMAIN_VIRT_KVM) { typeStr = "kvm"; -cpus = qemuCaps->kvmCPUModels; +cpus = qemuCaps->accelCPUModels; } else { typeStr = "tcg"; cpus = qemuCaps->tcgCPUModels; @@ 
-5107,7 +5107,7 @@ virQEMUCapsFillDomainCPUCaps(virCapsPtr caps, virDomainCapsCPUModelsPtr cpus; if (domCaps->virttype == VIR_DOMAIN_VIRT_KVM) -cpus = qemuCaps->kvmCPUModels; +cpus = qemuCaps->accelCPUModels;
[libvirt] [PATCHv2 04/16] qemu: Query hvf capability on macOS
There's no QMP command for querying if hvf is supported, therefore we use the sysctl interface, which reports whether Hypervisor.framework is available on the host. Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 34 ++ 1 file changed, 34 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 0bbda80782..5ebe3f1afe 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -54,6 +54,10 @@ #include #include #include +#ifdef __APPLE__ +# include <sys/types.h> +# include <sys/sysctl.h> +#endif #define VIR_FROM_THIS VIR_FROM_QEMU @@ -2599,6 +2603,33 @@ virQEMUCapsProbeQMPKVMState(virQEMUCapsPtr qemuCaps, return 0; } +#ifdef __APPLE__ +static int +virQEMUCapsProbeHVF(virQEMUCapsPtr qemuCaps) +{ +int hv_support; +size_t len = sizeof(hv_support); +if (sysctlbyname("kern.hv_support", &hv_support, &len, NULL, 0)) +hv_support = 0; + +if (qemuCaps->version >= 2012000 && +ARCH_IS_X86(qemuCaps->arch) && +hv_support) { +virQEMUCapsSet(qemuCaps, QEMU_CAPS_HVF); +} + +return 0; +} +#else +static int +virQEMUCapsProbeHVF(virQEMUCapsPtr qemuCaps) +{ + (void) qemuCaps; + + return 0; +} +#endif + struct virQEMUCapsCommandLineProps { const char *option; const char *param; @@ -4150,6 +4181,9 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps, if (virQEMUCapsProbeQMPKVMState(qemuCaps, mon) < 0) goto cleanup; +if (virQEMUCapsProbeHVF(qemuCaps) < 0) +goto cleanup; + if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0) goto cleanup; if (virQEMUCapsProbeQMPDevices(qemuCaps, mon) < 0) -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 01/16] qemu: Add KVM CPUs into cache only if KVM is present
From: Roman Bolshakov virQEMUCapsFormatCache/virQEMUCapsLoadCache adds/reads KVM CPUs to/from capabilities cache regardless of QEMU_CAPS_KVM. That can cause undesired side-effects when KVM CPUs are present in the cache on a platform that doesn't support it, e.g. macOS or Linux without KVM support. Signed-off-by: Daniel P. Berrangé Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 18 -- 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index fde27010e4..4ba8369e3a 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -3467,11 +3467,13 @@ virQEMUCapsLoadCache(virArch hostArch, } VIR_FREE(str); -if (virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || +if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; -if (virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0 || +if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && + virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; @@ -3584,7 +3586,8 @@ virQEMUCapsLoadCache(virArch hostArch, if (virQEMUCapsParseSEVInfo(qemuCaps, ctxt) < 0) goto cleanup; -virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); ret = 0; @@ -3766,10 +3769,12 @@ virQEMUCapsFormatCache(virQEMUCapsPtr qemuCaps) virBufferAsprintf(, "%s\n", virArchToString(qemuCaps->arch)); -virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) + virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_KVM); 
virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_QEMU); -virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) + virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_KVM); virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_QEMU); for (i = 0; i < qemuCaps->nmachineTypes; i++) { @@ -4566,7 +4571,8 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch, qemuCaps->libvirtCtime = virGetSelfLastChanged(); qemuCaps->libvirtVersion = LIBVIR_VERSION_NUMBER; -virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) { -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 12/16] qemu: Correct CPU capabilities probing for hvf
With this change virsh domcapabilities reports correct CPU information for hvf. Signed-off-by: Roman Bolshakov Reviewed-by: Daniel P. Berrangé --- src/qemu/qemu_capabilities.c | 28 +--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 8a1fb2b5d9..4297a11b27 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -640,13 +640,15 @@ static const char *virQEMUCapsArchToString(virArch arch) static bool virQEMUCapsTypeIsAccelerated(virDomainVirtType type) { -return type == VIR_DOMAIN_VIRT_KVM; +return type == VIR_DOMAIN_VIRT_KVM || + type == VIR_DOMAIN_VIRT_HVF; } static bool virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps) { -return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM); +return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) || + virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF); } static virDomainVirtType @@ -654,6 +656,8 @@ virQEMUCapsToVirtType(virQEMUCapsPtr qemuCaps) { if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) return VIR_DOMAIN_VIRT_KVM; +else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) +return VIR_DOMAIN_VIRT_HVF; else return VIR_DOMAIN_VIRT_QEMU; } @@ -663,6 +667,8 @@ virQEMUCapsAccelStr(virDomainVirtType type) { if (type == VIR_DOMAIN_VIRT_KVM) { return "kvm"; +} else if (type == VIR_DOMAIN_VIRT_HVF) { +return "hvf"; } else { return "tcg"; } @@ -3109,6 +3115,8 @@ virQEMUCapsLoadHostCPUModelInfo(virQEMUCapsPtr qemuCaps, if (virtType == VIR_DOMAIN_VIRT_KVM) hostCPUNode = virXPathNode("./hostCPU[@type='kvm']", ctxt); +else if (virtType == VIR_DOMAIN_VIRT_HVF) +hostCPUNode = virXPathNode("./hostCPU[@type='hvf']", ctxt); else hostCPUNode = virXPathNode("./hostCPU[@type='tcg']", ctxt); @@ -3244,6 +3252,8 @@ virQEMUCapsLoadCPUModels(virQEMUCapsPtr qemuCaps, if (type == VIR_DOMAIN_VIRT_KVM) n = virXPathNodeSet("./cpu[@type='kvm']", ctxt, &nodes); +else if (type == VIR_DOMAIN_VIRT_HVF) +n = virXPathNodeSet("./cpu[@type='hvf']", ctxt, &nodes); else n = virXPathNodeSet("./cpu[@type='tcg']", ctxt, &nodes); @@ -3542,11 +3552,15 @@
virQEMUCapsLoadCache(virArch hostArch, if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || +(virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF) && + virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_HVF) < 0) || virQEMUCapsLoadHostCPUModelInfo(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; if ((virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM) && virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_KVM) < 0) || +(virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF) && + virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_HVF) < 0) || virQEMUCapsLoadCPUModels(qemuCaps, ctxt, VIR_DOMAIN_VIRT_QEMU) < 0) goto cleanup; @@ -3661,6 +3675,8 @@ virQEMUCapsLoadCache(virArch hostArch, if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_HVF); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); ret = 0; @@ -3841,10 +3857,14 @@ virQEMUCapsFormatCache(virQEMUCapsPtr qemuCaps) if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_HVF); virQEMUCapsFormatHostCPUModelInfo(qemuCaps, , VIR_DOMAIN_VIRT_QEMU); if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_HVF); virQEMUCapsFormatCPUModels(qemuCaps, , VIR_DOMAIN_VIRT_QEMU); for (i = 0; i < qemuCaps->nmachineTypes; i++) { @@ -4455,7 +4475,7 @@ virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr cmd, if (forceTCG) machine = "none,accel=tcg"; else -machine = "none,accel=kvm:tcg"; +machine = "none,accel=kvm:hvf:tcg"; VIR_DEBUG("Try to probe 
capabilities of '%s' via QMP, machine %s", cmd->binary, machine); @@ -4646,6 +4666,8 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch, if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_KVM); +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_HVF)) + virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_HVF); virQEMUCapsInitHostCPUModel(qemuCaps, hostArch, VIR_DOMAIN_VIRT_QEMU); if
[libvirt] [PATCHv2 08/16] qemu: Introduce virQEMUCapsHaveAccel
The function should be used to check if qemu capabilities include a hardware acceleration, i.e. accel is not TCG. Signed-off-by: Roman Bolshakov --- src/qemu/qemu_capabilities.c | 12 +--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index e302fbb48f..f80ee62019 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -643,6 +643,12 @@ virQEMUCapsTypeIsAccelerated(virDomainVirtType type) return type == VIR_DOMAIN_VIRT_KVM; } +static bool +virQEMUCapsHaveAccel(virQEMUCapsPtr qemuCaps) +{ +return virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM); +} + /* Checks whether a domain with @guest arch can run natively on @host. */ bool @@ -2387,7 +2393,7 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr qemuCaps, if (!(models = virQEMUCapsFetchCPUDefinitions(mon))) return -1; -if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) +if (tcg || !virQEMUCapsHaveAccel(qemuCaps)) qemuCaps->tcgCPUModels = models; else qemuCaps->accelCPUModels = models; @@ -2413,7 +2419,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps, if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION)) return 0; -if (tcg || !virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) { +if (tcg || !virQEMUCapsHaveAccel(qemuCaps)) { virtType = VIR_DOMAIN_VIRT_QEMU; model = "max"; } else { @@ -4528,7 +4534,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps, if (virQEMUCapsInitQMPMonitor(qemuCaps, cmd->mon) < 0) goto cleanup; -if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) { +if (virQEMUCapsHaveAccel(qemuCaps)) { virQEMUCapsInitQMPCommandAbort(cmd); if ((rc = virQEMUCapsInitQMPCommandRun(cmd, true)) != 0) { if (rc == 1) -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCHv2 02/16] conf: Add hvf domain type
QEMU supports Hypervisor.framework since 2.12 as hvf accel. Hypervisor.framework provides a lightweight interface to run a virtual cpu on macOS without the need to install third-party kernel extensions (KEXTs). It's supported since macOS 10.10 on machines with Intel VT-x feature set that includes Extended Page Tables (EPT) and Unrestricted Mode. Signed-off-by: Roman Bolshakov --- docs/formatdomain.html.in | 8 docs/schemas/domaincommon.rng | 1 + src/conf/domain_conf.c| 4 +++- src/conf/domain_conf.h| 1 + src/qemu/qemu_command.c | 4 5 files changed, 13 insertions(+), 5 deletions(-) diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 2af4960981..25dd4bbbd6 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -22,10 +22,10 @@ type specifies the hypervisor used for running the domain. The allowed values are driver specific, but - include "xen", "kvm", "qemu", "lxc" and "kqemu". The - second attribute is id which is a unique - integer identifier for the running guest machine. Inactive - machines have no id value. + include "xen", "kvm", "hvf" (since 4.10.0 and QEMU + 2.12), "qemu", "lxc" and "kqemu". The second attribute is + id which is a unique integer identifier for the running + guest machine. Inactive machines have no id value. 
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index 5ee727eefa..596e347eda 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -213,6 +213,7 @@ phyp vz bhyve +hvf diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 13874837c2..369d4bd634 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -119,7 +119,8 @@ VIR_ENUM_IMPL(virDomainVirt, VIR_DOMAIN_VIRT_LAST, "phyp", "parallels", "bhyve", - "vz") + "vz", + "hvf") VIR_ENUM_IMPL(virDomainOS, VIR_DOMAIN_OSTYPE_LAST, "hvm", @@ -15024,6 +15025,7 @@ virDomainVideoDefaultType(const virDomainDef *def) case VIR_DOMAIN_VIRT_HYPERV: case VIR_DOMAIN_VIRT_PHYP: case VIR_DOMAIN_VIRT_NONE: +case VIR_DOMAIN_VIRT_HVF: case VIR_DOMAIN_VIRT_LAST: default: return VIR_DOMAIN_VIDEO_TYPE_DEFAULT; diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 467785cd83..65f00692b7 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -245,6 +245,7 @@ typedef enum { VIR_DOMAIN_VIRT_PARALLELS, VIR_DOMAIN_VIRT_BHYVE, VIR_DOMAIN_VIRT_VZ, +VIR_DOMAIN_VIRT_HVF, VIR_DOMAIN_VIRT_LAST } virDomainVirtType; diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 23a6661c10..0fb796e15c 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7251,6 +7251,10 @@ qemuBuildMachineCommandLine(virCommandPtr cmd, virBufferAddLit(, ",accel=kvm"); break; +case VIR_DOMAIN_VIRT_HVF: +virBufferAddLit(, ",accel=hvf"); +break; + case VIR_DOMAIN_VIRT_KQEMU: case VIR_DOMAIN_VIRT_XEN: case VIR_DOMAIN_VIRT_LXC: -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
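As an illustration of how the series fits together (this example is not part of the patch; the guest name, memory size, and machine type are placeholders), a guest opts into Hypervisor.framework simply by declaring the new virt type in its domain XML:

```xml
<!-- Hypothetical guest definition; only the type attribute matters here. -->
<domain type='hvf'>
  <name>macos-test-guest</name>
  <memory unit='MiB'>2048</memory>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
</domain>
```

With such a definition, qemuBuildMachineCommandLine() emits ",accel=hvf" on the generated -machine argument instead of ",accel=kvm".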
[libvirt] [PATCHv2 00/16] Introduce hvf domain type for Hypervisor.framework
Hypervisor.framework provides a lightweight interface to run a virtual cpu on macOS without the need to install third-party kernel extensions (KEXTs). It's supported since macOS 10.10 on machines with Intel VT-x feature set that includes Extended Page Tables (EPT) and Unrestricted Mode. QEMU supports Hypervisor.framework since 2.12. The patch series adds an "hvf" domain type that uses Hypervisor.framework. v1: https://www.redhat.com/archives/libvir-list/2018-October/msg01090.html Changes since v1: - [x] Fixed unconditional addition of KVM CPU models into capabilities cache. That fixed a "make check" issue in qemucapabilitiestest on Linux. - [x] Fixed missing brace in virQEMUCapsFormatCPUModels in PATCH 6 - [x] Squashed patch 12 into the first patch (second one in the patch series) - [x] Added hvf domain definition to docs/formatdomain.html.in into the first patch (second in the patch series) - [x] Removed redundant argument in virQEMUCapsProbeHVF (patch 3) - [x] Added separate virQEMUCapsProbeHVF for non-apple platforms (patch 3) - [x] Added macOS support page - [x] Marked HVF support for all working domain elements I wasn't able to resolve the issues below, but I think they should go into separate patches/patch series: - [ ] To make qemucapabilitiestests work regardless of OS, accelerator probing should be done via QMP command. So, there's a need to add a new generic command to QEMU "query-accelerator accel=NAME" - [ ] VIRT_TEST_PRELOAD doesn't work on macOS. There are a few reasons: * DYLD_INSERT_LIBRARIES should be used instead of LD_PRELOAD * -module flag shouldn't be added to LDFLAGS in tests/Makefile.am. The flag instructs libtool to create bundles (MH_BUNDLE) instead of dynamic libraries (MH_DYLIB), and unlike dylibs they cannot be preloaded. * Either symbol interposing or flat namespaces should be used to perform overrides of the calls to the mocks. I've tried both but neither worked for me; I need to make a minimal example. 
I haven't completed the investigation as it looks like a separate work item. - [ ] Can't retrieve qemucapsprobe replies for macOS because qemucapsprobemock is not getting injected because of the issue with VIRT_TEST_PRELOAD - [ ] Can't add to tests/qemuxml2argvtest.c to illustrate the hvf example because qemucapsprobe doesn't work yet. Roman Bolshakov (16): qemu: Add KVM CPUs into cache only if KVM is present conf: Add hvf domain type qemu: Define hvf capability qemu: Query hvf capability on macOS qemu: Expose hvf domain type if hvf is supported qemu: Rename kvmCPU to accelCPU qemu: Introduce virQEMUCapsTypeIsAccelerated qemu: Introduce virQEMUCapsHaveAccel qemu: Introduce virQEMUCapsToVirtType qemu: Introduce virQEMUCapsAccelStr qemu: Make error message accel-agnostic qemu: Correct CPU capabilities probing for hvf news: Mention hvf domain type docs: Add hvf on QEMU driver page docs: Note hvf support for domain elements docs: Add support page for libvirt on macOS docs/docs.html.in | 3 + docs/drvqemu.html.in | 49 +++- docs/formatdomain.html.in | 141 - docs/index.html.in| 4 +- docs/macos.html.in| 229 ++ docs/news.xml | 12 ++ docs/schemas/domaincommon.rng | 1 + src/conf/domain_conf.c| 4 +- src/conf/domain_conf.h| 1 + src/qemu/qemu_capabilities.c | 201 + src/qemu/qemu_capabilities.h | 1 + src/qemu/qemu_command.c | 4 + 12 files changed, 534 insertions(+), 116 deletions(-) create mode 100644 docs/macos.html.in -- 2.19.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [sandbox PATCH 2/2] Use "/boot/vmlinuz-linux" as default kernel path
On Tue, Nov 20, 2018 at 07:10:20PM +, Radostin Stoyanov wrote: > On some linux distributions "/boot/vmlinuz-linux" is set as default > kernel path. If this file does not exist we fallback to the value > "/boot/vmlinuz-$KERNEL-VERSION" > > Signed-off-by: Radostin Stoyanov > --- > bin/virt-sandbox.c| 5 +++-- > libvirt-sandbox/libvirt-sandbox-builder-machine.c | 4 > 2 files changed, 7 insertions(+), 2 deletions(-) Reviewed-by: Daniel P. Berrangé Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [sandbox PATCH 1/2] builder: Use prefix '=> /' to identify lib path
On Tue, Nov 20, 2018 at 07:10:19PM +, Radostin Stoyanov wrote: > The output of ldd might contain a fully qualified path on the left > hand side of the '=>'. For example: > > (glibc 2.28) > $ ldd /usr/libexec/libvirt-sandbox-init-common | grep ld > /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 > (0x7fcdceb96000) > > (glibc 2.27) > $ ldd /usr/libexec/libvirt-sandbox-init-common | grep ld > /lib64/ld-linux-x86-64.so.2 (0x7f18135eb000) > > Signed-off-by: Radostin Stoyanov > --- > libvirt-sandbox/libvirt-sandbox-builder.c | 14 ++ > 1 file changed, 6 insertions(+), 8 deletions(-) > > diff --git a/libvirt-sandbox/libvirt-sandbox-builder.c > b/libvirt-sandbox/libvirt-sandbox-builder.c > index 8cfc2f4..7058112 100644 > --- a/libvirt-sandbox/libvirt-sandbox-builder.c > +++ b/libvirt-sandbox/libvirt-sandbox-builder.c > @@ -297,7 +297,7 @@ static gboolean gvir_sandbox_builder_copy_program(const > char *program, > /* Loop over the output lines to get the path to the libraries to copy */ > line = out; > while ((tmp = strchr(line, '\n'))) { > -gchar *start, *end; > +gchar *start, *end, *tmp2; > *tmp = '\0'; > > /* Search the line for the library path */ > @@ -308,22 +308,20 @@ static gboolean gvir_sandbox_builder_copy_program(const > char *program, > const gchar *newname = NULL; > *end = '\0'; > > +if ((tmp2 = strstr(start, "=> "))) > +start = tmp2 + 3; > + > /* There are countless different naming schemes for > * the ld-linux.so library across architectures. Pretty > * much the only thing in common is they start with > - * the two letters 'ld'. The LDD program prints it > - * out differently too - it doesn't include " => " > - * as this library is special - its actually a static > - * linked executable not a library. > + * the two letters 'ld'. > * > * To make life easier for libvirt-sandbox-init-{qemu,lxc} > * we just call the file 'ld.so' when we copy it into our > * scratch dir, no matter what it was called on the host. 
>               */
> -            if (!strstr(line, " => ") &&
> -                strstr(start, "/ld")) {
> +            if (strstr(start, "/ld"))
>                  newname = "ld.so";
> -            }
>
>              if (!gvir_sandbox_builder_copy_file(start, dest, newname, error))
>                  goto cleanup;

Reviewed-by: Daniel P. Berrangé

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

-- 
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [libvirt-go PATCH] Add virDomainSetIOThreadParams binding
On Tue, Nov 20, 2018 at 01:50:03PM -0500, John Ferlan wrote:
> Signed-off-by: John Ferlan
> ---
>
> Well I've given it a "go", hopefully it's (more or less) right. The build
> and test at least pass ;-)

Code looks good apart from a data type mixup

>  domain.go         | 52 +++
>  domain_wrapper.go | 20 ++
>  domain_wrapper.h  |  8 +
>  3 files changed, 80 insertions(+)
>
> diff --git a/domain.go b/domain.go
> index e011980..3a6811f 100644
> --- a/domain.go
> +++ b/domain.go
> @@ -769,6 +769,7 @@ const (
>      DOMAIN_STATS_INTERFACE = DomainStatsTypes(C.VIR_DOMAIN_STATS_INTERFACE)
>      DOMAIN_STATS_BLOCK     = DomainStatsTypes(C.VIR_DOMAIN_STATS_BLOCK)
>      DOMAIN_STATS_PERF      = DomainStatsTypes(C.VIR_DOMAIN_STATS_PERF)
> +    DOMAIN_STATS_IOTHREAD  = DomainStatsTypes(C.VIR_DOMAIN_STATS_IOTHREAD)
> )
>
> type DomainCoreDumpFlags int
>
> @@ -4207,6 +4208,57 @@ func (d *Domain) DelIOThread(id uint, flags DomainModificationImpact) error {
>      return nil
> }
>
> +// See also https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainSetIOThreadParams
> +
> +type DomainSetIOThreadParams struct {
> +    PollMaxNsSet  bool
> +    PollMaxNs     uint64
> +    PollGrowSet   bool
> +    PollGrow      uint
> +    PollShrinkSet bool
> +    PollShrink    uint64
> +}

In the QEMU driver code, MAX_NS is a uint64 but GROW and SHRINK are both
uints, so this type is wrong.

Incidentally the data types should be mentioned in the header file docs
comments for the constants.

> +
> +func getSetIOThreadParamsFieldInfo(params *DomainSetIOThreadParams) map[string]typedParamsFieldInfo {
> +    return map[string]typedParamsFieldInfo{
> +        C.VIR_DOMAIN_IOTHREAD_POLL_MAX_NS: typedParamsFieldInfo{
> +            set: &params.PollMaxNsSet,
> +            ul:  &params.PollMaxNs,
> +        },
> +        C.VIR_DOMAIN_IOTHREAD_POLL_GROW: typedParamsFieldInfo{
> +            set: &params.PollGrowSet,
> +            ui:  &params.PollGrow,
> +        },
> +        C.VIR_DOMAIN_IOTHREAD_POLL_SHRINK: typedParamsFieldInfo{
> +            set: &params.PollShrinkSet,
> +            ul:  &params.PollShrink,
> +        },

And here s/ul/ui/

> +    }
> +}

If that is fixed

Reviewed-by: Daniel P. Berrangé

Regards,
Daniel

-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [libvirt] Reporting of IP detected by network filter
Thanks for the comprehensive answer.

On Tue, Nov 20, 2018 at 4:49 PM Laine Stump wrote:
> On 11/20/18 10:17 AM, Daniel P. Berrangé wrote:
> > On Tue, Nov 20, 2018 at 04:05:43PM +0100, Marcin Mirecki wrote:
> >> Hello,
> >>
> >> The network filters feature has an option of automatically detecting the IP
> >> of a VM [1].
> >> Is it possible to retrieve this IP by any means?
> >
> > It is possibly visible in the live XML as a parameter.
>
> It would be kind of cool if it did, but alas it does not. As far as I
> can tell the variables/parameters in nwfilter rules are a one way street
> - stuff that's set automatically by the driver is not reflected back to
> the hypervisor for its dumpxml (nor to nwfilter-binding-dumpxml).
>
> >> If not, would you consider adding such a feature?
> >
> > We should make it visible via the API for fetching guest IP addrs.
>
> It would be neat, but I don't know how much info it would actually give
> us (see below).
>
> > The snooping code should be moved out of nwfilter and into the
> > QEMU driver.
>
> That would:
>
> 1) only work when the domain is defined in qemu:///system (emphasis on
> the _system_ part), so in our future utopia where qemu:///session
> domains have access to all of the same networking as qemu:///system,
> this code would not work.
>
> 2) only work when the domain is defined in qemu:///system (emphasis on
> the _qemu_ part), so xen, libxl, etc, would be left out in the cold.
>
> For those reasons, I think it would be better suited to
> network:///system or nwfilter:///system.
>
> Also, due to the extra overhead in having pcap examine every packet, we
> don't want to ever actually set up the pcap socket for this unless there
> is an nwfilter that uses it.
> Finally, we should look at the trustability of this information, and
> what are the cases where the info wouldn't be available from somewhere else:
>
> 1) in the case of nwfilter snooping ARP packets, the results of all of
> those can be found by examining the ARP cache on the host, and there is
> already a mode of virsh domifaddr that looks at the ARP cache ("virsh
> domifaddr --source arp").
>
> 2) for guests that are doing DHCP on a libvirt virtual network, the
> results of that are already available from "virsh domifaddr --source
> lease".

Unfortunately it's not likely that libvirt dhcp will be used in the
solution. We have one vm per libvirt instance (it's kubevirt), and the
interfaces will rather be managed by some sdn solution like OVN.

> 3) for guests that are connected to a host bridge that's directly
> connected to the physical network, and getting a DHCP address from an
> external DHCP server, those results can also be seen in the ARP cache
> ("virsh domifaddr --source arp").

The vm is connected to a bridge on the host, with no L3 traffic to the
host, so the arp tables on the host don't have the required entries.

> > The QEMU driver should simply update the nwfilter
> > binding with the IP once it has snooped it.
> >
> >> It would be very useful for use cases where there is no guest agent.
> >
> > NB, there are potentially trust issues when using a snooped IP addr.
> >
> > eg if snooping DHCP responses, a malicious guest could act as a DHCP
> > server and send bogus responses. If snooping ARPs a malicious
> > guest can send gratuitous ARPs. Thus for nwfilter we tend to recommend
> > setting explicit IP addrs, or using filters that block guests from
> > sending bogus DHCP responses.
>
> Agreed. Of course the info in the ARP cache can be poisoned with
> incorrect data, but so can the results that come from snooping the tap
> device for ARP packets (both of them in the same manner, actually).
> So (to get back to my suggestion above) I don't think it would be
> lowering security at all to use results from the ARP cache vs results
> from snooping dhcp/arp packets from the tap device.