Re: [libvirt] [PATCH 11/41] qemu: Separate guest CPU validation from command line creation
On 08/12/2016 09:33 AM, Jiri Denemark wrote:
> qemu_command.c should deal with translating our domain definition into a
> QEMU command line and nothing else.
>
> Signed-off-by: Jiri Denemark
> ---
>  src/qemu/qemu_command.c  |  71
>  src/qemu/qemu_process.c  | 105 +++
>  tests/qemuxml2argvtest.c |   8 ++--
>  3 files changed, 117 insertions(+), 67 deletions(-)

[...]

> diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
> index f9ed6f5..2b94ff0 100644
> --- a/tests/qemuxml2argvtest.c
> +++ b/tests/qemuxml2argvtest.c
> @@ -1394,13 +1394,13 @@ mymain(void)
>      DO_TEST_FAILURE("cpu-host-passthrough", NONE);
>      DO_TEST_FAILURE("cpu-qemu-host-passthrough", QEMU_CAPS_KVM);
>
> -    driver.caps->host.cpu = cpuHaswell;
> +    qemuTestSetHostCPU(driver.caps, cpuHaswell);
>      DO_TEST("cpu-Haswell", QEMU_CAPS_KVM);
>      DO_TEST("cpu-Haswell2", QEMU_CAPS_KVM);
>      DO_TEST("cpu-Haswell3", QEMU_CAPS_KVM);
>      DO_TEST("cpu-Haswell-noTSX", QEMU_CAPS_KVM);
>      DO_TEST("cpu-host-model-cmt", NONE);
> -    driver.caps->host.cpu = cpuDefault;
> +    qemuTestSetHostCPU(driver.caps, NULL);

Adding this to the previous patch seems to work, but...

>      DO_TEST("encrypted-disk", NONE);
>      DO_TEST("encrypted-disk-usage", NONE);
> @@ -1947,14 +1947,14 @@ mymain(void)
>                       QEMU_CAPS_KVM, QEMU_CAPS_MACHINE_OPT,
>                       QEMU_CAPS_MACH_VIRT_GIC_VERSION);
>
> -    driver.caps->host.cpu->arch = VIR_ARCH_AARCH64;
> +    qemuTestSetHostArch(driver.caps, VIR_ARCH_AARCH64);
>      DO_TEST("aarch64-kvm-32-on-64",
>              QEMU_CAPS_NODEFCONFIG, QEMU_CAPS_DEVICE_VIRTIO_MMIO,
>              QEMU_CAPS_KVM, QEMU_CAPS_CPU_AARCH64_OFF);
>      DO_TEST_FAILURE("aarch64-kvm-32-on-64",
>                      QEMU_CAPS_NODEFCONFIG, QEMU_CAPS_DEVICE_VIRTIO_MMIO,
>                      QEMU_CAPS_KVM);
> -    driver.caps->host.cpu->arch = cpuDefault->arch;
> +    qemuTestSetHostArch(driver.caps, VIR_ARCH_NONE);

Adding this one causes failure... Perhaps due to the:

    +if (cpu)
    +    caps->host.arch = cpu->arch;

in the previous patch? Or perhaps some shortly subsequent patch...
It's not so much an objection to the patch, but by simply adding those
APIs in patch 10 I guess I would have expected them to just work, but
trial and error proves otherwise. Of course reading future patches
perhaps sheds even more light (or black magic, voodoo) ;-).

>      DO_TEST("kvm-pit-device", QEMU_CAPS_KVM_PIT_TICK_POLICY);
>      DO_TEST("kvm-pit-delay", QEMU_CAPS_NO_KVM_PIT);

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 20/41] domcaps: Add CPU usable flag
On 08/12/2016 09:33 AM, Jiri Denemark wrote: > In case a hypervisor is able to tell us a list of supported CPU models > and whether each CPU models can be used on the current host, we can > propagate this to domain capabilities. This is a better alternative > to calling virConnectCompareCPU for each supported CPU model. > > Signed-off-by: Jiri Denemark> --- > docs/formatdomaincaps.html.in | 10 ++-- > docs/schemas/domaincaps.rng| 7 +++ > src/conf/domain_capabilities.c | 50 +- > src/conf/domain_capabilities.h | 16 +- > src/libvirt_private.syms | 2 + > src/qemu/qemu_capabilities.c | 15 -- > tests/domaincapsschemadata/full.xml| 6 +-- > tests/domaincapsschemadata/qemu_1.7.0.x86_64.xml | 48 - > .../qemu_2.6.0-gicv2-virt.aarch64.xml | 60 > +++--- > .../qemu_2.6.0-gicv3-virt.aarch64.xml | 60 > +++--- > tests/domaincapsschemadata/qemu_2.6.0.aarch64.xml | 60 > +++--- > tests/domaincapsschemadata/qemu_2.6.0.ppc64le.xml | 4 +- > tests/domaincapsschemadata/qemu_2.6.0.x86_64.xml | 56 ++-- > tests/domaincapstest.c | 9 ++-- > 14 files changed, 229 insertions(+), 174 deletions(-) > So 11-19 with at least some adjustments have an ACK from me. Since you're fixing issues the point I noted in patch 11 is probably something that patches 10-12 resolve, so no big deal. The coverity issue in 15 would be nice to resolve, but since it already exists afaict it's doable afterwards. My note from patch 19 probably needs to be handled as I imagine it'll be an "issue" on "some" platform with "some" compiler. I'll hold off on an ACK for this one. I'm going to stop here for today and pick it up again tomorrow. I do have some notations below. I have some questions, but perhaps they'll be answered in future changes. 
John > diff --git a/docs/formatdomaincaps.html.in b/docs/formatdomaincaps.html.in > index 15a13c1..49ccbfc 100644 > --- a/docs/formatdomaincaps.html.in > +++ b/docs/formatdomaincaps.html.in > @@ -156,9 +156,9 @@ > mode name='host-passthrough' supported='yes'/ > mode name='host-model' supported='yes'/ > mode name='custom' supported='yes' > - modelBroadwell/model > - modelBroadwell-noTSX/model > - modelHaswell/model > + model usable='no'Broadwell/model > + model usable='yes'Broadwell-noTSX/model > + model usable='no'Haswell/model >... > /mode >/cpu > @@ -183,6 +183,10 @@ > > The mode element contains a list of supported CPU > models, each described by a dedicated model element. > +The usable attribute specifies whether the model can > +be used on the host. A special value unknown says s/says/indicates/ > +libvirt does not have enough information to provide the usability > +data. > > > > diff --git a/docs/schemas/domaincaps.rng b/docs/schemas/domaincaps.rng > index 9f3d225..5a605a7 100644 > --- a/docs/schemas/domaincaps.rng > +++ b/docs/schemas/domaincaps.rng > @@ -105,6 +105,13 @@ > > > > + > + > + yes > + no > + unknown > + > + Why is this not optional? Hmm... Seems to be output only - still an odd construct though. Not sure I have a better idea (yet). 
> > > > diff --git a/src/conf/domain_capabilities.c b/src/conf/domain_capabilities.c > index 6f9f7e7..c9e3a28 100644 > --- a/src/conf/domain_capabilities.c > +++ b/src/conf/domain_capabilities.c > @@ -29,6 +29,9 @@ > > #define VIR_FROM_THIS VIR_FROM_CAPABILITIES > > +VIR_ENUM_IMPL(virDomainCapsCPUUsable, VIR_DOMCAPS_CPU_USABLE_LAST, > + "unknown", "yes", "no"); > + > static virClassPtr virDomainCapsClass; > static virClassPtr virDomainCapsCPUModelsClass; > > @@ -157,7 +160,9 @@ virDomainCapsCPUModelsCopy(virDomainCapsCPUModelsPtr old) > return NULL; > > for (i = 0; i < old->count; i++) { > -if (virDomainCapsCPUModelsAdd(cpuModels, old->models[i].name, -1) < > 0) > +if (virDomainCapsCPUModelsAdd(cpuModels, > + old->models[i].name, -1, > + old->models[i].usable) < 0) > goto error; > } > > @@ -184,7 +189,8 @@ virDomainCapsCPUModelsFilter(virDomainCapsCPUModelsPtr > old, > continue; > > if (virDomainCapsCPUModelsAdd(cpuModels, > - old->models[i].name, -1) < 0) > + old->models[i].name, -1, > + old->models[i].usable) < 0) > goto error; > } > > @@
Re: [libvirt] [PATCH 15/41] qemu: Introduce virQEMUCapsGuestIsNative
On 08/12/2016 09:33 AM, Jiri Denemark wrote: > To have a single place where we decide whether a guest can run natively > on a host. > > Signed-off-by: Jiri Denemark> --- > src/qemu/qemu_capabilities.c | 43 +++ > 1 file changed, 27 insertions(+), 16 deletions(-) > > diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c > index 58a96d4..aeea3a3 100644 > --- a/src/qemu/qemu_capabilities.c > +++ b/src/qemu/qemu_capabilities.c > @@ -428,6 +428,29 @@ static const char *virQEMUCapsArchToString(virArch arch) > return virArchToString(arch); > } > > + > +/* Checks whether a domain with @guest arch can run natively on @host. > + */ > +static bool > +virQEMUCapsGuestIsNative(virArch host, > + virArch guest) > +{ > +if (host == guest) > +return true; > + > +if (host == VIR_ARCH_X86_64 && guest == VIR_ARCH_I686) > +return true; > + > +if (host == VIR_ARCH_AARCH64 && guest == VIR_ARCH_ARMV7L) > +return true; > + > +if (ARCH_IS_PPC64(host) && ARCH_IS_PPC64(guest)) > +return true; > + > +return false; > +} > + > + > /* Given a host and guest architectures, find a suitable QEMU target. 
> * > * This is meant to be used as a second attempt if qemu-system-$guestarch > @@ -441,12 +464,8 @@ virQEMUCapsFindTarget(virArch hostarch, > if (ARCH_IS_PPC64(guestarch)) > guestarch = VIR_ARCH_PPC64; > > -/* armv7l guests on aarch64 hosts can use the aarch64 target > - * i686 guests on x86_64 hosts can use the x86_64 target */ > -if ((guestarch == VIR_ARCH_ARMV7L && hostarch == VIR_ARCH_AARCH64) || > -(guestarch == VIR_ARCH_I686 && hostarch == VIR_ARCH_X86_64)) { > -return hostarch; > -} > +if (virQEMUCapsGuestIsNative(hostarch, guestarch)) > +guestarch = hostarch; > > return guestarch; > } > @@ -807,7 +826,6 @@ virQEMUCapsInitGuest(virCapsPtr caps, > char *binary = NULL; > virQEMUCapsPtr qemubinCaps = NULL; > virQEMUCapsPtr kvmbinCaps = NULL; > -bool native_kvm, x86_32on64_kvm, arm_32on64_kvm, ppc64_kvm; > int ret = -1; > > /* Check for existence of base emulator, or alternate base > @@ -829,14 +847,7 @@ virQEMUCapsInitGuest(virCapsPtr caps, > * - hostarch is aarch64 and guest arch is armv7l (needs -cpu > aarch64=off) > * - hostarch and guestarch are both ppc64* > */ > -native_kvm = (hostarch == guestarch); > -x86_32on64_kvm = (hostarch == VIR_ARCH_X86_64 && > -guestarch == VIR_ARCH_I686); > -arm_32on64_kvm = (hostarch == VIR_ARCH_AARCH64 && > -guestarch == VIR_ARCH_ARMV7L); > -ppc64_kvm = (ARCH_IS_PPC64(hostarch) && ARCH_IS_PPC64(guestarch)); > - > -if (native_kvm || x86_32on64_kvm || arm_32on64_kvm || ppc64_kvm) { > +if (virQEMUCapsGuestIsNative(hostarch, guestarch)) { > const char *kvmbins[] = { > "/usr/libexec/qemu-kvm", /* RHEL */ > "qemu-kvm", /* Fedora */ > @@ -852,7 +863,7 @@ virQEMUCapsInitGuest(virCapsPtr caps, > * arm is different in that 32-on-64 _only_ works with > * qemu-system-aarch64. 
So we have to add it to the kvmbins list > */ > -if (arm_32on64_kvm) > +if (hostarch == VIR_ARCH_AARCH64 && guestarch == VIR_ARCH_ARMV7L) > kvmbins[3] = "qemu-system-aarch64"; > > for (i = 0; i < ARRAY_CARDINALITY(kvmbins); ++i) { > Noted by Coverity in this module - existing I think as well, it's just that the change piqued Coverity's interest in analyzing things... At the top of this function we have a: /* Ignore binary if extracting version info fails */ if (binary) { if (!(qemubinCaps = virQEMUCapsCacheLookup(cache, binary))) { virResetLastError(); ... Then there's the replace if condition w/ virQEMUCapsGuestIsNative followed by a: ret = virQEMUCapsInitGuestFromBinary(caps, binary, qemubinCaps, kvmbin, kvmbinCaps, guestarch); where it's noted that virQEMUCapsInitGuestFromBinary will dereference qemubinCaps in the call to virQEMUCapsGetMachineTypesCaps and it's possible that qemubinCaps is NULL if "binary" is set. The analysis doesn't go into the virQEMUCapsGuestIsNative condition. It's also notable that if !binary is checked in virQEMUCapsInitGuestFromBinary, so this is somewhat of an "edge" condition. Whether this is even possible - I'm not even sure! -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 17/41] cpu: Special case models == NULL in cpuGetModels
On 08/12/2016 09:33 AM, Jiri Denemark wrote: > Some CPU drivers (such as arm) do not provide list of CPUs libvirt > supports and just pass any CPU model from domain XML directly to QEMU. > Such driver need to return models == NULL and success from cpuGetModels. > > Signed-off-by: Jiri Denemark> --- > src/cpu/cpu.c | 15 +-- > src/libvirt-host.c | 3 ++- > tools/virsh-host.c | 10 +++--- > 3 files changed, 18 insertions(+), 10 deletions(-) > [...] > diff --git a/src/libvirt-host.c b/src/libvirt-host.c > index 2a3de03..335798a 100644 > --- a/src/libvirt-host.c > +++ b/src/libvirt-host.c > @@ -1005,7 +1005,8 @@ virConnectCompareCPU(virConnectPtr conn, > * > * Get the list of supported CPU models for a specific architecture. > * > - * Returns -1 on error, number of elements in @models on success. > + * Returns -1 on error, number of elements in @models on success (0 means > + * libvirt accepts any CPU model). > */ > int > virConnectGetCPUModelNames(virConnectPtr conn, const char *arch, char > ***models, > diff --git a/tools/virsh-host.c b/tools/virsh-host.c > index 57f0c0e..7ac2812 100644 > --- a/tools/virsh-host.c > +++ b/tools/virsh-host.c > @@ -1127,9 +1127,13 @@ cmdCPUModelNames(vshControl *ctl, const vshCmd *cmd) > return false; > } > > -for (i = 0; i < nmodels; i++) { > -vshPrint(ctl, "%s\n", models[i]); > -VIR_FREE(models[i]); > +if (nmodels == 0) { > +vshPrint(ctl, "%s\n", _("all CPU models are accepted")); > +} else { > +for (i = 0; i < nmodels; i++) { > +vshPrint(ctl, "%s\n", models[i]); > +VIR_FREE(models[i]); > +} I seem to recall some recent work around the -q[uiet] switch - is that something that this code needs to be concerned with? (as silly as that seems while I'm typing it!) > } > VIR_FREE(models); > > -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 16/41] qemu: Fill in CPU domain capabilities
On 08/12/2016 09:33 AM, Jiri Denemark wrote: > Signed-off-by: Jiri Denemark> --- > src/qemu/qemu_capabilities.c | 24 +- > src/qemu/qemu_capabilities.h | 3 +- > src/qemu/qemu_driver.c | 7 +- > tests/domaincapsschemadata/qemu_1.7.0.x86_64.xml | 31 +- > .../qemu_2.6.0-gicv2-virt.aarch64.xml | 37 +- > .../qemu_2.6.0-gicv3-virt.aarch64.xml | 37 +- > tests/domaincapsschemadata/qemu_2.6.0.aarch64.xml | 37 +- > tests/domaincapsschemadata/qemu_2.6.0.ppc64le.xml | 437 > - > tests/domaincapsschemadata/qemu_2.6.0.x86_64.xml | 35 +- > tests/domaincapstest.c | 56 ++- > 10 files changed, 682 insertions(+), 22 deletions(-) > > diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c > index aeea3a3..7a7ddb8 100644 > --- a/src/qemu/qemu_capabilities.c > +++ b/src/qemu/qemu_capabilities.c > @@ -4194,6 +4194,26 @@ virQEMUCapsFillDomainOSCaps(virDomainCapsOSPtr os, > > > static int > +virQEMUCapsFillDomainCPUCaps(virCapsPtr caps, > + virQEMUCapsPtr qemuCaps, > + virDomainCapsPtr domCaps) > +{ > + > +if (domCaps->virttype == VIR_DOMAIN_VIRT_KVM && > +virQEMUCapsGuestIsNative(caps->host.arch, qemuCaps->arch)) Can caps be NULL? > +domCaps->cpu.hostPassthrough = true; > + > +if (qemuCaps->cpuDefinitions && caps && caps->host.cpu) Since you check here So far as this patch it doesn't seem so and if I check at the end the two callers to virQEMUCapsFillDomainCaps have failure paths for !caps prior to the calls. Perhaps adding a NONNULL to the prototype will just help clear things up. > +domCaps->cpu.hostModel = virQEMUCapsGuestIsNative(caps->host.arch, > + qemuCaps->arch); > + > +domCaps->cpu.custom = virObjectRef(qemuCaps->cpuDefinitions); > + > +return 0; > +} > + > + [...] -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH V2] virpci: support driver_override sysfs interface
Hi Laine, Did you have a chance to look at V2 of this patch? As discussed in V1, I left the existing code untouched and added new functions for the driver_override interface. Thanks! Regards, Jim Jim Fehlig wrote: > libvirt uses the new_id PCI sysfs interface to bind a PCI stub driver > to a PCI device. The new_id interface is known to be buggy and racey, > hence a more deterministic interface was introduced in the 3.12 kernel: > driver_override. For more details see > > https://www.redhat.com/archives/libvir-list/2016-June/msg02124.html > > This patch adds support for the driver_override interface by > > - adding new virPCIDevice{BindTo,UnbindFrom}StubWithOverride functions > that use the driver_override interface > - renames the existing virPCIDevice{BindTo,UnbindFrom}Stub functions > to virPCIDevice{BindTo,UnbindFrom}StubWithNewid to perserve existing > behavior on new_id interface > - changes virPCIDevice{BindTo,UnbindFrom}Stub function to call one of > the above depending on availability of driver_override > > The patch includes a bit of duplicate code, but allows for easily > dropping the new_id code once support for older kernels is no > longer desired. > > Signed-off-by: Jim Fehlig> --- > > V1: > > https://www.redhat.com/archives/libvir-list/2016-July/msg00370.html > > Changes since V1: > - drop patch1 > - change patch2 to preserve the existing new_id code and add new > functions to implement the driver_override interface > > src/util/virpci.c | 151 > +- > 1 file changed, 149 insertions(+), 2 deletions(-) > > diff --git a/src/util/virpci.c b/src/util/virpci.c > index 132948d..6c8174a 100644 > --- a/src/util/virpci.c > +++ b/src/util/virpci.c > @@ -1089,8 +1089,54 @@ virPCIDeviceUnbind(virPCIDevicePtr dev) > return ret; > } > > +/* > + * Bind a PCI device to a driver using driver_override sysfs interface. > + * E.g. 
> + * > + * echo driver-name > /sys/bus/pci/devices/:03:00.0/driver_override > + * echo :03:00.0 > /sys/bus/pci/devices/:03:00.0/driver/unbind > + * echo :03:00.0 > /sys/bus/pci/drivers_probe > + * > + * An empty driverName will cause the device to be bound to its > + * preferred driver. > + */ > static int > -virPCIDeviceUnbindFromStub(virPCIDevicePtr dev) > +virPCIDeviceBindWithDriverOverride(virPCIDevicePtr dev, > + const char *driverName) > +{ > +int ret = -1; > +char *path; > + > +if (!(path = virPCIFile(dev->name, "driver_override"))) > +return -1; > + > +if (virFileWriteStr(path, driverName, 0) < 0) { > +virReportSystemError(errno, > + _("Failed to add driver '%s' to driver_override > " > + " interface of PCI device '%s'"), > + driverName, dev->name); > +goto cleanup; > +} > + > +if (virPCIDeviceUnbind(dev) < 0) > +goto cleanup; > + > +if (virFileWriteStr(PCI_SYSFS "drivers_probe", dev->name, 0) < 0) { > +virReportSystemError(errno, > + _("Failed to trigger a probe for PCI device > '%s'"), > + dev->name); > +goto cleanup; > +} > + > +ret = 0; > + > + cleanup: > +VIR_FREE(path); > +return ret; > +} > + > +static int > +virPCIDeviceUnbindFromStubWithNewid(virPCIDevicePtr dev) > { > int result = -1; > char *drvdir = NULL; > @@ -1191,9 +1237,41 @@ virPCIDeviceUnbindFromStub(virPCIDevicePtr dev) > return result; > } > > +static int > +virPCIDeviceUnbindFromStubWithOverride(virPCIDevicePtr dev) > +{ > +if (!dev->unbind_from_stub) { > +VIR_DEBUG("Unbind from stub skipped for PCI device %s", dev->name); > +return 0; > +} > + > +return virPCIDeviceBindWithDriverOverride(dev, "\n"); > +} > + > +static int > +virPCIDeviceUnbindFromStub(virPCIDevicePtr dev) > +{ > +int ret; > +char *path; > + > +/* > + * Prefer using the device's driver_override interface, falling back > + * to the unpleasant new_id interface. 
> + */
> +if (!(path = virPCIFile(dev->name, "driver_override")))
> +return -1;
> +
> +if (virFileExists(path))
> +ret = virPCIDeviceUnbindFromStubWithOverride(dev);
> +else
> +ret = virPCIDeviceUnbindFromStubWithNewid(dev);
> +
> +VIR_FREE(path);
> +return ret;
> +}
>
> static int
> -virPCIDeviceBindToStub(virPCIDevicePtr dev)
> +virPCIDeviceBindToStubWithNewid(virPCIDevicePtr dev)
> {
> int result = -1;
> bool reprobe = false;
> @@ -1345,6 +1423,75 @@ virPCIDeviceBindToStub(virPCIDevicePtr dev)
> return result;
> }
>
> +static int
> +virPCIDeviceBindToStubWithOverride(virPCIDevicePtr dev)
> +{
> +int ret = -1;
> +const char *stubDriverName;
> +char *stubDriverPath = NULL;
> +char *driverLink = NULL;
> +
> +/* Check the device is configured
Re: [libvirt] [PATCH 22/41] qemu: Propagate virCapsPtr to virQEMUCapsNewForBinaryInternal
On 08/12/2016 09:33 AM, Jiri Denemark wrote: > Signed-off-by: Jiri Denemark> --- > src/qemu/qemu_capabilities.c | 25 +++-- > src/qemu/qemu_capabilities.h | 6 -- > src/qemu/qemu_capspriv.h | 3 ++- > src/qemu/qemu_domain.c | 13 - > src/qemu/qemu_driver.c | 9 ++--- > src/qemu/qemu_process.c | 14 +++--- > tests/qemucapsprobe.c| 2 +- > 7 files changed, 47 insertions(+), 25 deletions(-) > OK I lied slightly... 21 ACK ... build breaks here - probably because of new function qemuDomainDefValidate... luckily caps is passed as ATTRIBUTE_UNUSED, so it seems to be easily handled... John > diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c > index 232ae1f..8f55fcc 100644 > --- a/src/qemu/qemu_capabilities.c > +++ b/src/qemu/qemu_capabilities.c > @@ -837,7 +837,7 @@ virQEMUCapsInitGuest(virCapsPtr caps, > > /* Ignore binary if extracting version info fails */ > if (binary) { > -if (!(qemubinCaps = virQEMUCapsCacheLookup(cache, binary))) { > +if (!(qemubinCaps = virQEMUCapsCacheLookup(caps, cache, binary))) { > virResetLastError(); > VIR_FREE(binary); > } > @@ -877,7 +877,7 @@ virQEMUCapsInitGuest(virCapsPtr caps, > if (!kvmbin) > continue; > > -if (!(kvmbinCaps = virQEMUCapsCacheLookup(cache, kvmbin))) { > +if (!(kvmbinCaps = virQEMUCapsCacheLookup(caps, cache, kvmbin))) > { > virResetLastError(); > VIR_FREE(kvmbin); > continue; > @@ -1986,7 +1986,7 @@ int virQEMUCapsGetDefaultVersion(virCapsPtr caps, > return -1; > } > > -qemucaps = virQEMUCapsCacheLookup(capsCache, capsdata->emulator); > +qemucaps = virQEMUCapsCacheLookup(caps, capsCache, capsdata->emulator); > VIR_FREE(capsdata); > if (!qemucaps) > return -1; > @@ -3783,7 +3783,8 @@ virQEMUCapsLogProbeFailure(const char *binary) > > > virQEMUCapsPtr > -virQEMUCapsNewForBinaryInternal(const char *binary, > +virQEMUCapsNewForBinaryInternal(virCapsPtr caps ATTRIBUTE_UNUSED, > +const char *binary, > const char *libDir, > const char *cacheDir, > uid_t runUid, > @@ -3861,13 +3862,14 @@ 
virQEMUCapsNewForBinaryInternal(const char *binary, > } > > static virQEMUCapsPtr > -virQEMUCapsNewForBinary(const char *binary, > +virQEMUCapsNewForBinary(virCapsPtr caps, > +const char *binary, > const char *libDir, > const char *cacheDir, > uid_t runUid, > gid_t runGid) > { > -return virQEMUCapsNewForBinaryInternal(binary, libDir, cacheDir, > +return virQEMUCapsNewForBinaryInternal(caps, binary, libDir, cacheDir, > runUid, runGid, false); > } > > @@ -3960,7 +3962,9 @@ virQEMUCapsCacheNew(const char *libDir, > const char *qemuTestCapsName; > > virQEMUCapsPtr > -virQEMUCapsCacheLookup(virQEMUCapsCachePtr cache, const char *binary) > +virQEMUCapsCacheLookup(virCapsPtr caps, > + virQEMUCapsCachePtr cache, > + const char *binary) > { > virQEMUCapsPtr ret = NULL; > > @@ -3980,7 +3984,7 @@ virQEMUCapsCacheLookup(virQEMUCapsCachePtr cache, const > char *binary) > if (!ret) { > VIR_DEBUG("Creating capabilities for %s", >binary); > -ret = virQEMUCapsNewForBinary(binary, cache->libDir, > +ret = virQEMUCapsNewForBinary(caps, binary, cache->libDir, >cache->cacheDir, >cache->runUid, cache->runGid); > if (ret) { > @@ -4000,11 +4004,12 @@ virQEMUCapsCacheLookup(virQEMUCapsCachePtr cache, > const char *binary) > > > virQEMUCapsPtr > -virQEMUCapsCacheLookupCopy(virQEMUCapsCachePtr cache, > +virQEMUCapsCacheLookupCopy(virCapsPtr caps, > + virQEMUCapsCachePtr cache, > const char *binary, > const char *machineType) > { > -virQEMUCapsPtr qemuCaps = virQEMUCapsCacheLookup(cache, binary); > +virQEMUCapsPtr qemuCaps = virQEMUCapsCacheLookup(caps, cache, binary); > virQEMUCapsPtr ret; > > if (!qemuCaps) > diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h > index 9fd38d9..df49809 100644 > --- a/src/qemu/qemu_capabilities.h > +++ b/src/qemu/qemu_capabilities.h > @@ -445,9 +445,11 @@ void virQEMUCapsSetGICCapabilities(virQEMUCapsPtr > qemuCaps, > virQEMUCapsCachePtr virQEMUCapsCacheNew(const char *libDir, > const char *cacheDir, > uid_t uid, gid_t
Re: [libvirt] [PATCH v5 0/8] perf: add more perf events support
Hi John,

How about this patch series?

Thanks,
Qiaowei

> -----Original Message-----
> From: John Ferlan [mailto:jfer...@redhat.com]
> Sent: Thursday, August 4, 2016 6:31 AM
> To: libvir-list@redhat.com
> Cc: Ren, Qiaowei
> Subject: [PATCH v5 0/8] perf: add more perf events support
>
> v4: http://www.redhat.com/archives/libvir-list/2016-July/msg00607.html
>
> Reworked/reworded the series slightly. The end result is mostly the same as the
> original, but with the suggested tweaks.
>
> John Ferlan (3):
>   virsh: Add a forward reference to perf command from domstats --perf
>   virsh: Rework the perf event names into a table.
>   util: Move virPerfNew and virPerfFree
>
> Qiaowei Ren (5):
>   perf: rename qemuDomainGetStatsPerfRdt()
>   perf: Remove the switch from qemuDomainGetStatsPerf
>   util: Add some comment details for virPerfEventType
>   perf: Adjust the perf initialization
>   perf: add more perf events support
>
>  docs/formatdomain.html.in                   |  24 +++
>  docs/schemas/domaincommon.rng               |   4 +
>  include/libvirt/libvirt-domain.h            |  39 +
>  src/libvirt-domain.c                        |   9 ++
>  src/qemu/qemu_driver.c                      |  23 ++-
>  src/util/virperf.c                          | 230 +---
>  src/util/virperf.h                          |  14 +-
>  tests/genericxml2xmlindata/generic-perf.xml |   4 +
>  tools/virsh.pod                             |  40 +++--
>  9 files changed, 275 insertions(+), 112 deletions(-)
>
> --
> 2.7.4
Re: [libvirt] RFC: Qemu/SeaBIOS/VGA/PXE upgrades vs. longterm-snapshot/migration
On 29/08/2016 08:57, Philipp Hahn wrote:
> Hello,
>
> If my understanding of migration/snapshots is correct, the
> target/loading VM must be a clone of the source/saving VM, that is have
> the same devices, RAM/PCI layout, etc. In the past I have had several
> issues with Qemu, when the distribution (Debian) updates their packaging
> of SeaBIOS/iPXE/VgaROM, which changes the next/previous power-of-2
> size and thus changes the PCI layout, preventing me from loading old
> snapshots.
>
> As the BIOS file is provided externally, it is prone to such
> (accidental) updates. As (AFAIK) libvirt does not track the Qemu
> version, BIOS files inside its XML data, there's always the danger of
> making snapshots invalid by updating Qemu and the BIOS files.

Right. For the BIOS we distribute two versions, one which stays under
128k and one which stays under 256k. For iPXE we also distribute two
versions. The basic version has no UEFI support and is under 64k (under
128k for pxe-e1000.rom); the one with UEFI support is under 256k. VGA is
always under 64k.

There is quite some leeway, but it's possible for distributions to screw
up. One possibility would be to add "expected" sizes to QEMU and fail to
start if the files don't fit in the expected size.

> 1. Does Qemu use ROM shadowing, that is copying the ROM to RAM? This
> makes sense on real HW as ROMs are a lot slower than RAM. In that case
> the content of the "ROM" would already be contained inside the VM-State
> and it would be enough to provide an "empty ROM file of the right size".

QEMU does ROM shadowing, but it's not enough to provide an empty ROM
file because the unshadowed ROM is available through BAR6.

> 2. If Qemu does no ROM shadowing, what happens if I suspend a VM while
> executing ROM or SMM code and then doing a SeaBIOS update? (my
> DOS-assembler knowledge is quite old nowadays)

Even though QEMU does do ROM shadowing, this is a good question.
The contents of the ROM are migrated from the source to the destination, so the destination sees exactly the same contents. This means that an old BIOS might run on a new QEMU. Occasionally this may cause bugs, but we try to test migration and detect them before the release. > 3. How do others handle long-term snapshots? Just say "good-bye" to your > old snapshots when upgrading to a newer Qemu version or keeping the old, > probably unmaintained and vulnerable Qemu/BIOS binaries until the > end-of-time? It's up to the distros to ensure that compatibility ROM files (bios.bin and pxe-*.rom) have the right size (128k for bios.bin and pxe-e1000.rom, 64k for the others). If they don't, complain. You can have the right size even if you compile from newer sources, thus any vulnerabilities get fixed the next time you start the VM with a fresh QEMU installation. Paolo -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] qemu-guest-agent windows
Hi Michal

well no luck :(

br.
Umar

On Mon, Aug 29, 2016 at 11:55 AM, Michal Privoznik wrote:
> On 29.08.2016 08:32, Umar Draz wrote:
> > Hi Michal
> >
> > Thanks for your reply, how can I upgrade my guest agent on windows?
>
> I think if you download the msi corresponding to your architecture from here
>
> https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-qemu-ga/qemu-ga-win-7.3.2-1/
>
> and run it it'll do the trick.
>
> Michal

--
Umar Draz
Network Architect
Re: [libvirt] RFC: Limited dynamic ownership
On Tue, Aug 23, 2016 at 07:25:56PM -0400, Daniel P. Berrange wrote: On Tue, Aug 23, 2016 at 06:17:44PM -0400, Martin Kletzander wrote: On Tue, Aug 23, 2016 at 05:54:29PM -0400, Daniel P. Berrange wrote: > On Tue, Aug 23, 2016 at 05:06:20PM -0400, Martin Kletzander wrote: > > Hi everyone, > > > > so there was an idea about limiting the relabelling of images that > > libvirt does. And I'm taking the liberty of pitching my idea how to > > approach this. I feel like it's pretty simple thing and there's not > > much to talk about, but a) I could've missed something and b) you might > > hate the way I approach it. > > > > The idea is to extend the seclabel XML, for example: > > > > > >/var/lib/libvirt/images > >/data/virt-stuff > > > > > > Either we allow 'relabel' to be set to 'whitelist' or add a new > > attribute with a name like 'mode' or something, which will control how > > we relabel the files (actually relabel='no' can mean 'whitelist' and > > relabel='yes' can mean blacklist without adding anything there). After > > that you can specify what paths are (dis)allowed to be labelled. > > > > Actually thinking about it I like the following the most: > > > > > > > > > > > > > > which I believe is pretty explanatory. Feel free to ask if it's not. > > And let me know what you think. > > I don't think we need to get involved in the configuration. > I forgot to mention that I like that because you would be able to specify this per-disk as well as for the whole VM. Of course using info in directly achieves the same thing > We've already got the ability in the config to provide the > full backing chain explicitly. If a management app provides a full > backing chain in the XML, we could validate the app provided chain > against the chain we probe and report error if they mis-match. (Maybe > we in fact already report this?) > Not yet: "It is currently ignored on input and only used for output to describe the detected backing chains of running domains." 
It would help with this, but I don't feel like it's that usable. Also I feel like everyone will overuse that while misunderstanding what it actually does. Also if you do some block-merge or whatever, you need to update the backing chain and everyone will just re-probe it because no management layer likes keeping more information than needed. If you provide the whitelist though, you're essentially going to want to provide the same info that you would provide with the data, as the whitelist you're providing there is essentially trying to whitelist only the expected backing file. I don't feel like we should be inventing new seclabel elements to duplicate info we could already provide via existing backing data elements. It just introduces more complexity that might result in more bugs, but yes, it is information duplication. > Thinking bout this in the context of a recent OpenStack Nova CVE, > where Nova mistakenly set format=qcow2, instead of format=raw, allowing > a malicious guest to write a qcow2 header with backing file. If Nova > had provided the full backing chain it expected (no backing chain at > all), then libvirt would have seen the maliciously created backing > chain, and caught the problem despite Nova giving the wrong format. > > > Separately from this, in the context of storage pools, when giving > a in the domain XML, we could do validation to > ensure the backing file of the volume always pointed to a volume > that was also inside a storage pool. This would allow us to have > backing files pointing to volumes in different storage pools, but > would prevent them pointing to arbitrary files that were not in > storage pools at all (eg /etc/password, or /dev/root, etc). > That is good idea, but it won't cover all the cases, for example if you're not using storage pools. Yep, at least from OpenStack POV our goal is to switch 100% to using storage pools. 
For non-storage pool deployments though, I think it is common that the mgmt app will have a specific place where it will always put disks, e.g. on OpenStack file-based disks will always live under /var/lib/nova/instances. So if there was a qemu.conf setting to whitelist allowed disk image locations, we could lock down where we permit relabelling for the OpenStack host as a whole, and not need to change anything on a per-guest basis. I like the per-guest approach better, but qemu.conf is satisfactory enough, I guess. It could be nice to get feedback from upper mgmt layers, any idea where else to post these questions? oVirt might have feedback, I guess. Cc'd ovirt-devel, even though I don't think they use it (or at least not yet). If you feel like it, don't hesitate to Cc the appropriate OpenStack list as well (if that's not too much cross-posting). Martin Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :|
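The inline XML fragments in this thread did not survive archiving. As a purely illustrative sketch of the per-domain whitelist idea described above (element and attribute names here are hypothetical placeholders; no such syntax was agreed on or merged), it might look something like:

```
<seclabel type='dynamic' model='selinux' relabel='yes'>
  <!-- hypothetical elements, reconstructed from the prose description:
       only files under these directories may be relabelled -->
  <allow path='/var/lib/libvirt/images'/>
  <allow path='/data/virt-stuff'/>
</seclabel>
```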
Re: [libvirt] [PATCHv5 11/13] Add compatibility attribute to video
Hi Ján, I apply your patches of virtio compatibility attribute. And I find virtio video and input devices using virtio modern when set compatibility as legacy. The xml: Check virtio version in guest: # for i in `find /sys/devices -name 'virtio[0-9]*'`;do > echo -e $(lspci -s $(echo $i|egrep -o > '[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[01][0-9a-fA-F]\.[0-7]'))'\t'$(cat > $i/features|cut -c 33) > done 00:02.0 VGA compatible controller: Red Hat, Inc Virtio GPU (rev 01)1 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device1 00:05.0 Communication controller: Red Hat, Inc Virtio console1 00:07.0 SCSI storage controller: Red Hat, Inc Virtio block device1 00:08.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon1 00:09.0 Mouse controller: Red Hat, Inc Virtio input (rev 01)1 00:0a.0 Keyboard controller: Red Hat, Inc Virtio input (rev 01)1 For the issue I got Gerd's reply on virt-devel: > Right now the default is "disable-legacy=off,disable-modern=on" (with > the exception of virtio-input and virtio-gpu, those two are newer than > virtio 1.0 and thus don't support legacy virtio). So, maybe we should make compatibility='legacy' invalid in virtio video and input devices in case of confusion. - Original Message - From: "Ján Tomko"To: libvir-list@redhat.com Sent: Tuesday, August 23, 2016 6:20:53 PM Subject: [libvirt] [PATCHv5 11/13] Add compatibility attribute to video https://bugzilla.redhat.com/show_bug.cgi?id=1227354 --- docs/formatdomain.html.in | 8 docs/schemas/domaincommon.rng | 5 + src/conf/domain_conf.c | 17 +++-- src/conf/domain_conf.h | 1 + tests/qemuxml2argvdata/qemuxml2argv-virtio-revision.xml | 1 + .../qemuxml2xmlout-virtio-revision.xml | 1 + 6 files changed, 31 insertions(+), 2 deletions(-) diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in index 41a9325..d640e8d 100644 --- a/docs/formatdomain.html.in +++ b/docs/formatdomain.html.in @@ -5641,6 +5641,14 @@ qemu-kvm -net nic,model=? 
/dev/null The optional address sub-element can be used to tie the video device to a particular PCI slot. + + driver + +The compatibility attribute can be used to specify the +compatibility of virtio devices. Allowed values are legacy, +transitional and modern. +Since 2.2.0. + Consoles, serial, parallel channel devices diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index ed00ace..f250cbc 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -3080,6 +3080,11 @@ + + + + + diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 166f5fb..03c856c 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -12605,11 +12605,13 @@ virDomainVideoAccelDefParseXML(xmlNodePtr node) static virDomainVideoDefPtr virDomainVideoDefParseXML(xmlNodePtr node, + xmlXPathContextPtr ctxt, const virDomainDef *dom, unsigned int flags) { virDomainVideoDefPtr def; xmlNodePtr cur; +xmlNodePtr saved = ctxt->node; char *type = NULL; char *heads = NULL; char *vram = NULL; @@ -12618,6 +12620,8 @@ virDomainVideoDefParseXML(xmlNodePtr node, char *vgamem = NULL; char *primary = NULL; +ctxt->node = node; + if (VIR_ALLOC(def) < 0) return NULL; @@ -12719,7 +12723,12 @@ virDomainVideoDefParseXML(xmlNodePtr node, if (virDomainDeviceInfoParseXML(node, NULL, >info, flags) < 0) goto error; +if (virDomainDriverCompatibilityParseXML(ctxt, >compatibility) < 0) +goto error; + cleanup: +ctxt->node = saved; + VIR_FREE(type); VIR_FREE(ram); VIR_FREE(vram); @@ -13414,7 +13423,7 @@ virDomainDeviceDefParse(const char *xmlStr, goto error; break; case VIR_DOMAIN_DEVICE_VIDEO: -if (!(dev->data.video = virDomainVideoDefParseXML(node, def, flags))) +if (!(dev->data.video = virDomainVideoDefParseXML(node, ctxt, def, flags))) goto error; break; case VIR_DOMAIN_DEVICE_HOSTDEV: @@ -17084,7 +17093,7 @@ virDomainDefParseXML(xmlDocPtr xml, virDomainVideoDefPtr video; ssize_t insertAt = -1; -if (!(video = virDomainVideoDefParseXML(nodes[i], def, flags))) 
+if (!(video = virDomainVideoDefParseXML(nodes[i], ctxt, def, flags))) goto error; if (video->primary) { @@ -21896,6 +21905,10 @@
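Per the docs and schema hunks in this patch, the proposed syntax attaches a compatibility attribute to the video device's driver element. This is the series' proposed syntax, not a released libvirt feature, so treat the fragment as a sketch:

```
<video>
  <model type='virtio' heads='1'/>
  <!-- 'legacy' is the value the reporter found ineffective here, since
       virtio-gpu and virtio-input are modern-only devices -->
  <driver compatibility='legacy'/>
</video>
```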
Re: [libvirt] [PATCH v2] tests: fix segfault in objecteventtest
On 25.08.2016 07:50, Roman Bogorodskiy wrote: > Test 12 from objecteventtest (createXML add event) segfaults on FreeBSD > with a bus error. > > At some point it calls testNodeDeviceDestroy() from the test driver. And > it fails when it tries to unlock the device in the "out:" label of this > function. > > Unlocking fails because the previous step was a call to > virNodeDeviceObjRemove from conf/node_device_conf.c. This function > removes the given device from the device list and cleans up the object, > including destroying its mutex. However, it does not nullify the pointer > that was given to it. > > As a result, we end up in testNodeDeviceDestroy() here: > > out: > if (obj) > virNodeDeviceObjUnlock(obj); > > And instead of skipping this, we try to do the unlock and fail because of > the destroyed mutex. > > Change virNodeDeviceObjRemove to use a double pointer and set the pointer to > NULL. > --- > src/conf/node_device_conf.c | 13 +++-- > src/conf/node_device_conf.h | 2 +- > src/node_device/node_device_hal.c | 4 ++-- > src/test/test_driver.c| 2 +- > 4 files changed, 11 insertions(+), 10 deletions(-) Almost. You've forgotten about udev (perhaps due to BSD? :-P) ACK if you squash this in: diff --git a/src/node_device/node_device_udev.c b/src/node_device/node_device_udev.c index ddf3d88..520269f 100644 --- a/src/node_device/node_device_udev.c +++ b/src/node_device/node_device_udev.c @@ -1044,7 +1044,7 @@ static int udevRemoveOneDevice(struct udev_device *device) VIR_DEBUG("Removing device '%s' with sysfs path '%s'", dev->def->name, name); -virNodeDeviceObjRemove(&driver->devs, dev); +virNodeDeviceObjRemove(&driver->devs, &dev); ret = 0; cleanup: Safe for freeze. Michal -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
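The bug class being fixed here, a removal helper that frees an object but leaves the caller's pointer dangling, and the double-pointer remedy can be shown in isolation. The names below are hypothetical, not libvirt code:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int locked;
} demo_dev;

static demo_dev *demo_dev_new(void)
{
    return calloc(1, sizeof(demo_dev));
}

/* Free *devp AND null the caller's pointer. A shared cleanup path such
 * as "if (dev) demo_dev_unlock(dev);" is then skipped, instead of
 * operating on freed memory (the bus error seen on FreeBSD). */
static void demo_dev_remove(demo_dev **devp)
{
    free(*devp);
    *devp = NULL;   /* this assignment is what the patch adds */
}
```

The caller passes `&dev` rather than `dev`, which is exactly the one-character change the udev squash above makes.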
Re: [libvirt] [PATCH] bhyve: fix disks address allocation
Laine Stump wrote: > On 08/28/2016 08:45 AM, Roman Bogorodskiy wrote: > > As bhyve currently doesn't use controller addressing and simply > > uses 1 implicit controller for 1 disk device, the scheme looks like the > > following: > > > > pci address -> (implicit controller) -> disk device > > > > So in fact we identify disk devices by the pci address of the implicit > > controller and just pass it this way to bhyve in the form: > > > > -s pci_addr,ahci-(cd|hd),/path/to/disk > > > > Therefore, we cannot use virDeviceInfoPCIAddressWanted() because it > > does not expect that disk devices might need PCI address assignment. > > > > As a result, if a disk was specified without an address, it will not be > > generated and the domain will fail to start. > > > > Until proper controller addressing is not implemented in the bhyve > > s/not// :-) > > > driver, force each disk to have a PCI address generated if it was not > > specified by the user. > > --- > > src/bhyve/bhyve_device.c | 3 ++- > > 1 file changed, 2 insertions(+), 1 deletion(-) > > > > ACK. Pushed with the commit message fixed, thanks! Roman Bogorodskiy
Re: [libvirt] qemu-guest-agent windows
On 28.08.2016 16:53, Umar Draz wrote: > Hello All, > > I have installed the qemu guest agent on Windows 10, but I am unable to get the IP > address using this command: > > virsh qemu-agent-command myvm '{ "execute": "guest-network-get-interfaces" }' > > I am getting the following error on the above command: > > libvirt: QEMU Driver error : internal error: unable to execute QEMU agent > command 'guest-network-get-interfaces': this feature or command is not > currently supported Right. The Windows implementation for this qemu-ga command was introduced in this commit: http://git.qemu.org/?p=qemu.git;a=commitdiff;h=d6c5528b0 and should be contained in the 2.4 release. So updating your guest agent should help you. On the other hand, I don't have Windows 10 installed anywhere (nor a VM) to try this out. Michal
Re: [libvirt] qemu-guest-agent windows
On 29.08.2016 08:32, Umar Draz wrote: > Hi Michal > > Thanks for your reply, how can I upgrade my guest agent on Windows? I think if you download the MSI corresponding to your architecture from here https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-qemu-ga/qemu-ga-win-7.3.2-1/ and run it, it'll do the trick. Michal
[libvirt] RFC: Qemu/SeaBIOS/VGA/PXE upgrades vs. longterm-snapshot/migration
Hello, If my understanding of migration/snapshots is correct, the target/loading VM must be a clone of the source/saving VM, that is, have the same devices, RAM/PCI layout, etc. In the past I have had several issues with Qemu when the distribution (Debian) updates their packaging of SeaBIOS/iPXE/VgaROM, which changes the next/previous power-of-2 size and thus changes the PCI layout, preventing me from loading old snapshots. As the BIOS file is provided externally, it is prone to such (accidental) updates. As (AFAIK) libvirt does not track the Qemu version or BIOS files inside its XML data, there's always the danger of making snapshots invalid by updating Qemu and the BIOS files. Some questions: 1. Does Qemu use ROM shadowing, that is, copying the ROM to RAM? This makes sense on real HW as ROMs are a lot slower than RAM. In that case the content of the "ROM" would already be contained inside the VM state and it would be enough to provide an "empty ROM file of the right size". 2. If Qemu does no ROM shadowing, what happens if I suspend a VM while it is executing ROM or SMM code and then do a SeaBIOS update? (my DOS-assembler knowledge is quite old nowadays) 3. How do others handle long-term snapshots? Just say "good-bye" to your old snapshots when upgrading to a newer Qemu version, or keep the old, probably unmaintained and vulnerable Qemu/BIOS binaries until the end of time? Thanks in advance Philipp -- Philipp Hahn Open Source Software Engineer Univention GmbH be open. Mary-Somerville-Str. 1 D-28359 Bremen Tel.: +49 421 22232-0 Fax : +49 421 22232-99 h...@univention.de http://www.univention.de/ Geschäftsführer: Peter H. Ganten HRB 20755 Amtsgericht Bremen Steuer-Nr.: 71-597-02876
Re: [libvirt] qemu-guest-agent windows
Hi Michal, Thanks for your reply. How can I upgrade my guest agent on Windows? Right now this is running on Windows ("version":"0.12.1") Br. Umar On Mon, Aug 29, 2016 at 11:22 AM, Michal Privoznik wrote: > On 28.08.2016 16:53, Umar Draz wrote: > > Hello All, > > > > I have installed the qemu guest agent on Windows 10, but I am unable to get the IP > > address using this command: > > > > virsh qemu-agent-command myvm '{ "execute": "guest-network-get-interfaces" > > }' > > > > I am getting the following error on the above command: > > > > libvirt: QEMU Driver error : internal error: unable to execute QEMU agent > > command 'guest-network-get-interfaces': this feature or command is not > > currently supported > > Right. The Windows implementation for this qemu-ga command was > introduced in this commit: > > http://git.qemu.org/?p=qemu.git;a=commitdiff;h=d6c5528b0 > > and should be contained in the 2.4 release. So updating your guest agent > should help you. On the other hand, I don't have Windows 10 installed > anywhere (nor a VM) to try this out. > > Michal > -- Umar Draz Network Architect
Re: [libvirt] qemu-guest-agent windows
On 29.08.2016 11:48, Umar Draz wrote: > Hi Michal, > > well after the upgrade I am still getting the old version: > > {"return":{"version":"0.12.1","supported_commands":[{"enabled":true,"name":"guest-set-user-password"},{"enabled":true,"name":"guest-set-vcpus"},{"enabled":true,"name":"guest-get-vcpus"},{"enabled":true,"name":"guest-network-get-interfaces"},{"enabled":true,"name":"guest-suspend-hybrid"},{"enabled":true,"name":"guest-suspend-ram"},{"enabled":true,"name":"guest-suspend-disk"},{"enabled":true,"name":"guest-fstrim"},{"enabled":true,"name":"guest-fsfreeze-thaw"},{"enabled":true,"name":"guest-fsfreeze-freeze"},{"enabled":true,"name":"guest-fsfreeze-status"},{"enabled":true,"name":"guest-file-flush"},{"enabled":true,"name":"guest-file-seek"},{"enabled":true,"name":"guest-file-write"},{"enabled":true,"name":"guest-file-read"},{"enabled":true,"name":"guest-file-close"},{"enabled":true,"name":"guest-file-open"},{"enabled":true,"name":"guest-shutdown"},{"enabled":true,"name":"guest-info"},{"enabled":true,"name":"guest-set-time"},{"enabled":true,"name":"guest-get-time"},{"enabled":true,"name":"guest-ping"},{"enabled":true,"name":"guest-sync"},{"enabled":true,"name":"guest-sync-delimited"}]}} > > I even installed this qemu-ga on a newly installed Windows 10, but that > also shows the same version as above. Then you need to build qemu-ga from the sources. Michal
Re: [libvirt] qemu-guest-agent windows
On 29.08.2016 12:11, Umar Draz wrote: > Ok then I will check that as well, > > Ok I have some FreeBSD VMs as well, now I want qemu-guest-agent on them as > well, is it possible? > Yes. There are two implementations of qemu-ga: one for POSIX-like systems (to which *BSD belongs) and one for Windows. So you should be able to find qemu-ga for *BSD too. Michal
Re: [libvirt] qemu-guest-agent windows
Hi Michal, Thanks, I have installed qemu-ga on FreeBSD. Now I just need some help regarding what the XML for this FreeBSD guest will be. Br. Umar On Mon, Aug 29, 2016 at 3:30 PM, Michal Privoznik wrote: > On 29.08.2016 12:11, Umar Draz wrote: > > Ok then I will check that as well, > > > > Ok I have some FreeBSD VMs as well, now I want qemu-guest-agent on them as > > well, is it possible? > > Yes. There are two implementations of qemu-ga: one for POSIX-like > systems (to which *BSD belongs) and one for Windows. So you should be > able to find qemu-ga for *BSD too. > > Michal > -- Umar Draz Network Architect
Re: [libvirt] qemu-guest-agent windows
Hi Michal, well after the upgrade I am still getting the old version: {"return":{"version":"0.12.1","supported_commands":[{"enabled":true,"name":"guest-set-user-password"},{"enabled":true,"name":"guest-set-vcpus"},{"enabled":true,"name":"guest-get-vcpus"},{"enabled":true,"name":"guest-network-get-interfaces"},{"enabled":true,"name":"guest-suspend-hybrid"},{"enabled":true,"name":"guest-suspend-ram"},{"enabled":true,"name":"guest-suspend-disk"},{"enabled":true,"name":"guest-fstrim"},{"enabled":true,"name":"guest-fsfreeze-thaw"},{"enabled":true,"name":"guest-fsfreeze-freeze"},{"enabled":true,"name":"guest-fsfreeze-status"},{"enabled":true,"name":"guest-file-flush"},{"enabled":true,"name":"guest-file-seek"},{"enabled":true,"name":"guest-file-write"},{"enabled":true,"name":"guest-file-read"},{"enabled":true,"name":"guest-file-close"},{"enabled":true,"name":"guest-file-open"},{"enabled":true,"name":"guest-shutdown"},{"enabled":true,"name":"guest-info"},{"enabled":true,"name":"guest-set-time"},{"enabled":true,"name":"guest-get-time"},{"enabled":true,"name":"guest-ping"},{"enabled":true,"name":"guest-sync"},{"enabled":true,"name":"guest-sync-delimited"}]}} I even installed this qemu-ga on a newly installed Windows 10, but that also shows the same version as above. Br. Umar On Mon, Aug 29, 2016 at 2:45 PM, Michal Privoznik wrote: > On 29.08.2016 10:27, Umar Draz wrote: > > Hi Michal > > > > well no luck :( > > What do you mean? I need more info. The update was successful, but the > qemu-ga is still not returning any IP addresses? What's the new version > of qemu-ga then? > What might work is to clone qemu.git and compile the qemu-ga from there > on your own. > > Michal > -- Umar Draz Network Architect
Re: [libvirt] [PATCH] Check for --live flag for postcopy-after-precopy migration
On 08/27/2016 02:21 AM, Jiri Denemark wrote: > On Fri, Aug 26, 2016 at 21:41:31 +0200, Michal Privoznik wrote: > > On 26.08.2016 11:25, Kothapally Madhu Pavan wrote: > > > Unlike postcopy migration there is no --live flag check for > > > postcopy-after-precopy. > > > Signed-off-by: Kothapally Madhu Pavan > > > --- > > > tools/virsh-domain.c | 6 ++ > > > 1 file changed, 6 insertions(+) > > ACKed and pushed. > This doesn't make any sense. First, post-copy migration is enabled with the --postcopy option to the migrate command, and --postcopy-after-precopy is just an additional flag for post-copy migration. So if virsh were to report such an error, it should check for the --postcopy option. But such a check doesn't belong in libvirt at all; the appropriate libvirt driver is supposed to check the flags and report invalid combinations. I proposed this patch because the qemu driver doesn't have a postcopy-after-precopy flag and this bug can be fixed by minimal changes in libvirt. If we have to check for invalid combinations in the appropriate libvirt drivers, we need to create a flag for postcopy-after-precopy migration. I will be happy to send another patch if that is what is needed. Thanks, Madhu Pavan.
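For context, the flags under discussion combine on the virsh command line roughly as follows; the domain name and destination URI are placeholders:

```
# --postcopy enables post-copy migration; --postcopy-after-precopy makes
# virsh switch to post-copy automatically once the first pre-copy pass
# completes; --live is the flag the patch wanted to require alongside them
virsh migrate --live --postcopy --postcopy-after-precopy \
    mydomain qemu+ssh://desthost/system
```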
Re: [libvirt] qemu-guest-agent windows
On 29.08.2016 10:27, Umar Draz wrote: > Hi Michal > > well no luck :( What do you mean? I need more info. The update was successful, but the qemu-ga is still not returning any IP addresses? What's the new version of qemu-ga then? What might work is to clone qemu.git and compile the qemu-ga from there on your own. Michal
Re: [libvirt] qemu-guest-agent windows
Ok then I will check that as well. Ok, I have some FreeBSD VMs as well, now I want qemu-guest-agent on them as well, is it possible? Br. Umar On Mon, Aug 29, 2016 at 2:54 PM, Michal Privoznik wrote: > On 29.08.2016 11:48, Umar Draz wrote: > > Hi Michal, > > > > well after the upgrade I am still getting the old version: > > > > {"return":{"version":"0.12.1","supported_commands":[{"enabled":true,"name":"guest-set-user-password"},{"enabled":true,"name":"guest-set-vcpus"},{"enabled":true,"name":"guest-get-vcpus"},{"enabled":true,"name":"guest-network-get-interfaces"},{"enabled":true,"name":"guest-suspend-hybrid"},{"enabled":true,"name":"guest-suspend-ram"},{"enabled":true,"name":"guest-suspend-disk"},{"enabled":true,"name":"guest-fstrim"},{"enabled":true,"name":"guest-fsfreeze-thaw"},{"enabled":true,"name":"guest-fsfreeze-freeze"},{"enabled":true,"name":"guest-fsfreeze-status"},{"enabled":true,"name":"guest-file-flush"},{"enabled":true,"name":"guest-file-seek"},{"enabled":true,"name":"guest-file-write"},{"enabled":true,"name":"guest-file-read"},{"enabled":true,"name":"guest-file-close"},{"enabled":true,"name":"guest-file-open"},{"enabled":true,"name":"guest-shutdown"},{"enabled":true,"name":"guest-info"},{"enabled":true,"name":"guest-set-time"},{"enabled":true,"name":"guest-get-time"},{"enabled":true,"name":"guest-ping"},{"enabled":true,"name":"guest-sync"},{"enabled":true,"name":"guest-sync-delimited"}]}} > > > > I even installed this qemu-ga on a newly installed Windows 10, but that > > also shows the same version as above. > > Then you need to build qemu-ga from the sources. > > Michal > -- Umar Draz Network Architect
Re: [libvirt] [PATCH v2] tests: fix segfault in objecteventtest
Michal Privoznik wrote: > On 25.08.2016 07:50, Roman Bogorodskiy wrote: > > Test 12 from objecteventtest (createXML add event) segaults on FreeBSD > > with bus error. > > > > At some point it calls testNodeDeviceDestroy() from the test driver. And > > it fails when it tries to unlock the device in the "out:" label of this > > function. > > > > Unlocking fails because the previous step was a call to > > virNodeDeviceObjRemove from conf/node_device_conf.c. This function > > removes the given device from the device list and cleans up the object, > > including destroying of its mutex. However, it does not nullify the pointer > > that was given to it. > > > > As a result, we end up in testNodeDeviceDestroy() here: > > > > out: > > if (obj) > > virNodeDeviceObjUnlock(obj); > > > > And instead of skipping this, we try to do Unlock and fail because of > > malformed mutex. > > > > Change virNodeDeviceObjRemove to use double pointer and set pointer to > > NULL. > > --- > > src/conf/node_device_conf.c | 13 +++-- > > src/conf/node_device_conf.h | 2 +- > > src/node_device/node_device_hal.c | 4 ++-- > > src/test/test_driver.c| 2 +- > > 4 files changed, 11 insertions(+), 10 deletions(-) > > Almost. You've forgotten about udev (perhaps due to BSD? :-P) Sorry, should have tested this on Linux as well. :( Thanks for catching that! > > ACK if you squash this in: > > diff --git a/src/node_device/node_device_udev.c > b/src/node_device/node_device_udev.c > index ddf3d88..520269f 100644 > --- a/src/node_device/node_device_udev.c > +++ b/src/node_device/node_device_udev.c > @@ -1044,7 +1044,7 @@ static int udevRemoveOneDevice(struct udev_device > *device) > > VIR_DEBUG("Removing device '%s' with sysfs path '%s'", >dev->def->name, name); > -virNodeDeviceObjRemove(>devs, dev); > +virNodeDeviceObjRemove(>devs, ); > > ret = 0; > cleanup: > > > Safe for freeze. Pushed with the udev fix squashed in, thanks! 
Roman Bogorodskiy
Re: [libvirt] qemu-guest-agent windows
Well, here is the XML of my Linux virtual machines for qemu-guest-agent; now what will it be for FreeBSD? Br. Umar On Mon, Aug 29, 2016 at 6:06 PM, Michal Privoznik wrote: > On 29.08.2016 13:59, Umar Draz wrote: > > Hi Michal, > > > > Thanks, I have installed qemu-ga on FreeBSD. > > > > Now I just need some help regarding what the XML for this FreeBSD guest will be. > > What do you mean? Domain XML? Domain XML is guest OS agnostic. So > whatever qemu-ga config you have in a domain that is working for you, > you can copy to BSD domains too. > > Michal > -- Umar Draz Network Architect
Re: [libvirt] qemu-guest-agent windows
Hi, Thanks. Got it. On Mon, Aug 29, 2016 at 6:36 PM, Vasiliy Tolstov wrote: > 2016-08-29 16:20 GMT+03:00 Umar Draz: > > Well, here is the XML of my Linux virtual machines for qemu-guest-agent > > > > now what will it be for FreeBSD? > This is virtio-serial; on FreeBSD it does not exist. You can use a > serial channel and run qemu-ga with this ttyS0. > Some time ago we discussed on the list the ability to assign some id to > a serial channel to use it via libvirt for qemu-ga communication, but it > did not end with a final decision. > > -- > Vasiliy Tolstov, > e-mail: v.tols...@selfip.ru -- Umar Draz Network Architect
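For reference, the usual virtio-serial agent channel that works for Linux and Windows guests (and which, per the reply above, is unavailable to FreeBSD guests) is configured in the domain XML like this; the socket path is illustrative:

```
<channel type='unix'>
  <!-- host side: libvirt creates this UNIX socket for the agent -->
  <source mode='bind' path='/var/lib/libvirt/qemu/mydomain.agent'/>
  <!-- guest side: a virtio-serial port with the well-known qemu-ga name -->
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```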
Re: [libvirt] qemu-guest-agent windows
On 29.08.2016 13:59, Umar Draz wrote: > Hi Michal, > > Thanks, I have installed qemu-ga on FreeBSD. > > Now I just need some help regarding what the XML for this FreeBSD guest will be. What do you mean? Domain XML? Domain XML is guest OS agnostic. So whatever qemu-ga config you have in a domain that is working for you, you can copy to BSD domains too. Michal
[libvirt] [PATCH] libxl: advertise support for migration V3
The libxl driver has long supported migration V3 but has never indicated so in the connectSupportsFeature API. As a result, apps such as virt-manager that use the more generic virDomainMigrate API fail with libvirtError: this function is not supported by the connection driver: virDomainMigrate Add VIR_DRV_FEATURE_MIGRATION_V3 to the list of features marked as supported in the connectSupportsFeature API. Signed-off-by: Jim Fehlig --- src/libxl/libxl_driver.c | 1 + 1 file changed, 1 insertion(+) diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index a573c82..3ffaa74 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -5677,6 +5677,7 @@ libxlConnectSupportsFeature(virConnectPtr conn, int feature) return -1; switch (feature) { +case VIR_DRV_FEATURE_MIGRATION_V3: case VIR_DRV_FEATURE_TYPED_PARAM_STRING: case VIR_DRV_FEATURE_MIGRATION_PARAMS: case VIR_DRV_FEATURE_MIGRATION_P2P: -- 2.9.2
Re: [libvirt] [PATCH 03/41] qemu: Use virDomainCapsCPUModels for cpuDefinitions
On 08/12/2016 09:32 AM, Jiri Denemark wrote: > The list of supported CPU models in domain capabilities is stored in > virDomainCapsCPUModels. Let's use the same object for storing CPU models > in QEMU capabilities. > > Signed-off-by: Jiri Denemark> --- > src/qemu/qemu_capabilities.c | 162 > +++ > src/qemu/qemu_capabilities.h | 10 +-- > src/qemu/qemu_command.c | 8 ++- > src/qemu/qemu_monitor.c | 12 +++- > src/qemu/qemu_monitor.h | 10 ++- > src/qemu/qemu_monitor_json.c | 24 +-- > src/qemu/qemu_monitor_json.h | 2 +- > tests/qemumonitorjsontest.c | 8 +-- > tests/qemuxml2argvtest.c | 18 ++--- > 9 files changed, 164 insertions(+), 90 deletions(-) > [..] > > > -size_t virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, > -char ***names) > +int > +virQEMUCapsGetCPUDefinitions(virQEMUCapsPtr qemuCaps, > + char ***names, > + size_t *count) > { > +size_t i; > +char **models = NULL; > + > +*count = 0; > if (names) > -*names = qemuCaps->cpuDefinitions; > -return qemuCaps->ncpuDefinitions; > +*names = NULL; > + > +if (!qemuCaps->cpuDefinitions) > +return 0; > + > +if (names && VIR_ALLOC_N(models, qemuCaps->cpuDefinitions->count) < 0) > +return -1; > + So if we don't have names, we don't get models and the following loop will only add to models if we've allocated it because we have names, right? Thus could there be an optimization to set/return ->count if !names prior to this check? e.g. if (!names) { *count = qemuCaps->cpuDefinitions->count; return 0; } This works, but it's tough to read. > +for (i = 0; i < qemuCaps->cpuDefinitions->count; i++) { > +virDomainCapsCPUModelPtr cpu = qemuCaps->cpuDefinitions->models + i; > +if (models && VIR_STRDUP(models[i], cpu->name) < 0) > +goto error; > +} > + > +if (names) > +*names = models; > +*count = qemuCaps->cpuDefinitions->count; > +return 0; > + > + error: > +virStringFreeListCount(models, i); > +return -1; > } > [...] 
> --- a/src/qemu/qemu_monitor_json.c > +++ b/src/qemu/qemu_monitor_json.c > @@ -4855,14 +4855,15 @@ int qemuMonitorJSONGetMachines(qemuMonitorPtr mon, > } > > > -int qemuMonitorJSONGetCPUDefinitions(qemuMonitorPtr mon, > - char ***cpus) > +int > +qemuMonitorJSONGetCPUDefinitions(qemuMonitorPtr mon, > + qemuMonitorCPUDefInfoPtr **cpus) > { > int ret = -1; > virJSONValuePtr cmd; > virJSONValuePtr reply = NULL; > virJSONValuePtr data; > -char **cpulist = NULL; > +qemuMonitorCPUDefInfoPtr *cpulist = NULL; > int n = 0; > size_t i; > > @@ -4898,13 +4899,18 @@ int qemuMonitorJSONGetCPUDefinitions(qemuMonitorPtr > mon, > goto cleanup; > } > > -/* null-terminated list */ > -if (VIR_ALLOC_N(cpulist, n + 1) < 0) > +if (VIR_ALLOC_N(cpulist, n) < 0) > goto cleanup; > > for (i = 0; i < n; i++) { > virJSONValuePtr child = virJSONValueArrayGet(data, i); > const char *tmp; > +qemuMonitorCPUDefInfoPtr cpu; > + > +if (VIR_ALLOC(cpu) < 0) > +goto cleanup; > + > +cpulist[i] = cpu; > > if (!(tmp = virJSONValueObjectGetString(child, "name"))) { > virReportError(VIR_ERR_INTERNAL_ERROR, "%s", > @@ -4912,7 +4918,7 @@ int qemuMonitorJSONGetCPUDefinitions(qemuMonitorPtr mon, > goto cleanup; > } > > -if (VIR_STRDUP(cpulist[i], tmp) < 0) > +if (VIR_STRDUP(cpu->name, tmp) < 0) > goto cleanup; > } > > @@ -4921,7 +4927,11 @@ int qemuMonitorJSONGetCPUDefinitions(qemuMonitorPtr > mon, > cpulist = NULL; > > cleanup: > -virStringFreeList(cpulist); > +if (ret < 0 && cpulist) { I think "ret < 0" is superfluous... Coverity whines too > +for (i = 0; i < n; i++) > +qemuMonitorCPUDefInfoFree(cpulist[i]); > +VIR_FREE(cpulist); > +} > virJSONValueFree(cmd); > virJSONValueFree(reply); > return ret; [...] -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 02/41] domcaps: Add support for listing supported CPU models
On 08/12/2016 09:32 AM, Jiri Denemark wrote: > The patch adds element to domain capabilities XML: > > > > > > Broadwell > Broadwell-noTSX > ... > > > > Applications can use it to inspect what CPU configuration modes are > supported for a specific combination of domain type, emulator binary, > guest architecture and machine type. > > Signed-off-by: Jiri Denemark> --- > docs/formatdomaincaps.html.in | 44 ++ > docs/schemas/domaincaps.rng| 43 ++ > src/conf/domain_capabilities.c | 150 > + > src/conf/domain_capabilities.h | 33 + > src/libvirt_private.syms | 4 + > tests/domaincapsschemadata/basic.xml | 5 + > tests/domaincapsschemadata/full.xml| 9 ++ > tests/domaincapsschemadata/libxl-xenfv-usb.xml | 5 + > tests/domaincapsschemadata/libxl-xenfv.xml | 5 + > tests/domaincapsschemadata/libxl-xenpv-usb.xml | 5 + > tests/domaincapsschemadata/libxl-xenpv.xml | 5 + > tests/domaincapsschemadata/qemu_1.7.0.x86_64.xml | 5 + > .../qemu_2.6.0-gicv2-virt.aarch64.xml | 5 + > .../qemu_2.6.0-gicv3-virt.aarch64.xml | 5 + > tests/domaincapsschemadata/qemu_2.6.0.aarch64.xml | 5 + > tests/domaincapsschemadata/qemu_2.6.0.ppc64le.xml | 5 + > tests/domaincapsschemadata/qemu_2.6.0.x86_64.xml | 5 + > tests/domaincapstest.c | 9 ++ > 18 files changed, 347 insertions(+) [...] > diff --git a/src/conf/domain_capabilities.c b/src/conf/domain_capabilities.c > index 1676f0e..c07f995 100644 > --- a/src/conf/domain_capabilities.c > +++ b/src/conf/domain_capabilities.c [...] > + > +int > +virDomainCapsCPUModelsAddSteal(virDomainCapsCPUModelsPtr cpuModels, > + char **name) > +{ > +if (VIR_RESIZE_N(cpuModels->models, cpuModels->alloc, > + cpuModels->count, 1) < 0) > +return -1; > + > +cpuModels->models[cpuModels->count++].name = *name; > +*name = NULL; Similar to VIR_STEAL_PTR with the added count++ twist... 
> +return 0; > +} > + > + > +int > +virDomainCapsCPUModelsAdd(virDomainCapsCPUModelsPtr cpuModels, > + const char *name, > + ssize_t nameLen) > +{ > +char *copy = NULL; > + > +if (VIR_STRNDUP(copy, name, nameLen) < 0) > +goto error; > + > +if (virDomainCapsCPUModelsAddSteal(cpuModels, ) < 0) > +goto error; > + > +return 0; > + > + error: > +VIR_FREE(copy); > +return -1; > +} > + > + > int > virDomainCapsEnumSet(virDomainCapsEnumPtr capsEnum, > const char *capsEnumName, > @@ -234,6 +338,51 @@ virDomainCapsOSFormat(virBufferPtr buf, > } > > static void [...] > diff --git a/src/conf/domain_capabilities.h b/src/conf/domain_capabilities.h > index 492a9cf..509c306 100644 > --- a/src/conf/domain_capabilities.h > +++ b/src/conf/domain_capabilities.h > @@ -102,6 +102,30 @@ struct _virDomainCapsFeatureGIC { > virDomainCapsEnum version; /* Info about virGICVersion */ > }; > > +typedef struct _virDomainCapsCPUModel virDomainCapsCPUModel; > +typedef virDomainCapsCPUModel *virDomainCapsCPUModelPtr; > +struct _virDomainCapsCPUModel { > +char *name; > +}; > + > +typedef struct _virDomainCapsCPUModels virDomainCapsCPUModels; > +typedef virDomainCapsCPUModels *virDomainCapsCPUModelsPtr; > +struct _virDomainCapsCPUModels { > +virObject parent; > + > +size_t alloc; > +size_t count; Easier for me to read/see "->nmodels_max" and "->nmodels"... Not sure how pervasive it is to change... Not a requirement, your call... > +virDomainCapsCPUModelPtr models; > +}; > + > +typedef struct _virDomainCapsCPU virDomainCapsCPU; > +typedef virDomainCapsCPU *virDomainCapsCPUPtr; > +struct _virDomainCapsCPU { > +bool hostPassthrough; > +bool hostModel; > +virDomainCapsCPUModelsPtr custom; > +}; > + [...] -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 10/41] testutilsqemu: Helpers for changing host CPU and arch
On 08/12/2016 09:33 AM, Jiri Denemark wrote:
> Changing a host architecture or a CPU is not as easy as assigning a new
> value to the appropriate element in virCaps since there is a relation
> between the CPU and host architecture (we don't really want to test
> anything on an AArch64 host with core2duo CPU). This patch introduces
> qemuTestSetHostArch and qemuTestSetHostCPU helpers which will make sure
> the host architecture matches the host CPU.
>
> Signed-off-by: Jiri Denemark
> ---
>  tests/testutilsqemu.c | 51 ++-
>  tests/testutilsqemu.h |  8 ++--
>  2 files changed, 52 insertions(+), 7 deletions(-)
>
> diff --git a/tests/testutilsqemu.c b/tests/testutilsqemu.c
> index 11dd20e..9b66101 100644
> --- a/tests/testutilsqemu.c
> +++ b/tests/testutilsqemu.c
> @@ -16,6 +16,7 @@
>
>  virCPUDefPtr cpuDefault;
>  virCPUDefPtr cpuHaswell;
> +virCPUDefPtr cpuPower8;

^^ Nothing in the commit message indicates that we're also adding a
POWER8 type. Thus it would seem this could be two commits - I would
think one adding the cpuPower8Data [1]

Also, should each of these be initialized to NULL?

>
>  static virCPUFeatureDef cpuDefaultFeatures[] = {
>      { (char *) "lahf_lm", -1 },
> @@ -92,6 +93,15 @@ static virCPUDef cpuHaswellData = {
>      cpuHaswellFeatures, /* features */
>  };
>
> +static virCPUDef cpuPower8Data = {
> +    .type = VIR_CPU_TYPE_HOST,
> +    .arch = VIR_ARCH_PPC64,
> +    .model = (char *) "POWER8",
> +    .sockets = 1,
> +    .cores = 8,
> +    .threads = 8,
> +};
> +
>  static virCapsGuestMachinePtr *testQemuAllocMachines(int *nmachines)
>  {
>      virCapsGuestMachinePtr *machines;
> @@ -331,7 +341,8 @@ virCapsPtr testQemuCapsInit(void)
>          goto cleanup;
>
>      if (!(cpuDefault = virCPUDefCopy(&cpuDefaultData)) ||
> -        !(cpuHaswell = virCPUDefCopy(&cpuHaswellData)))
> +        !(cpuHaswell = virCPUDefCopy(&cpuHaswellData)) ||
> +        !(cpuPower8 = virCPUDefCopy(&cpuPower8Data)))
>          goto cleanup;
>
>      caps->host.cpu = cpuDefault;

Should this change to a qemuTestSetHostCPU(caps, NULL) to work properly
on power8?
> @@ -440,15 +451,45 @@ virCapsPtr testQemuCapsInit(void)
>
>   cleanup:
>      virCapabilitiesFreeMachines(machines, nmachines);
> -    if (caps->host.cpu != cpuDefault)
> -        virCPUDefFree(cpuDefault);
> -    if (caps->host.cpu != cpuHaswell)
> -        virCPUDefFree(cpuHaswell);
> +    caps->host.cpu = NULL;
> +    virCPUDefFree(cpuDefault);
> +    virCPUDefFree(cpuHaswell);
> +    virCPUDefFree(cpuPower8);

[1] Since we can get to cleanup without ever initializing any of these...

ACK 1-10 - just be sure to check/consider thoughts left in 2, 3, and
here. Of course we're post freeze now, so I guess there's a bit more
waiting to do anyway.

John

>      virObjectUnref(caps);
>      return NULL;
>  }
>
>
> +void
> +qemuTestSetHostArch(virCapsPtr caps,
> +                    virArch arch)
> +{
> +    if (arch == VIR_ARCH_NONE)
> +        arch = VIR_ARCH_X86_64;
> +    caps->host.arch = arch;
> +    qemuTestSetHostCPU(caps, NULL);
> +}
> +
> +
> +void
> +qemuTestSetHostCPU(virCapsPtr caps,
> +                   virCPUDefPtr cpu)
> +{
> +    virArch arch = caps->host.arch;
> +
> +    if (!cpu) {
> +        if (ARCH_IS_X86(arch))
> +            cpu = cpuDefault;
> +        else if (ARCH_IS_PPC64(arch))
> +            cpu = cpuPower8;
> +    }
> +
> +    if (cpu)
> +        caps->host.arch = cpu->arch;
> +    caps->host.cpu = cpu;
> +}
> +
> +
>  virQEMUCapsPtr
>  qemuTestParseCapabilities(const char *capsFile)
>  {
> diff --git a/tests/testutilsqemu.h b/tests/testutilsqemu.h
> index f2b71e9..6d35116f 100644
> --- a/tests/testutilsqemu.h
> +++ b/tests/testutilsqemu.h
> @@ -19,8 +19,12 @@ virQEMUCapsPtr qemuTestParseCapabilities(const char *capsFile);
>
>  extern virCPUDefPtr cpuDefault;
>  extern virCPUDefPtr cpuHaswell;
> -void testQemuCapsSetCPU(virCapsPtr caps,
> -                        virCPUDefPtr hostCPU);
> +extern virCPUDefPtr cpuPower8;
> +
> +void qemuTestSetHostArch(virCapsPtr caps,
> +                         virArch arch);
> +void qemuTestSetHostCPU(virCapsPtr caps,
> +                        virCPUDefPtr cpu);
>
>  int qemuTestDriverInit(virQEMUDriver *driver);
>  void qemuTestDriverFree(virQEMUDriver *driver);