Re: [PATCH 1/2] qemu_migration: set bandwidth in priv during migration

2021-10-08 Thread Jiri Denemark
On Fri, Oct 08, 2021 at 10:19:04 +0200, Kristina Hanicova wrote:
> We did not set priv->migMaxBandwidth if '--bandwidth' was
> specified as an option in the 'migrate' virsh command. This
> caused the wrong value to be printed if the virsh command
> 'migrate-getspeed' was called during the migration. This patch
> first sets the value to the given bandwidth (if one was
> specified) and restores the previous value after the migration.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1806856

I was wondering whether we should just read the bandwidth back from QEMU
instead when migration is running. But I guess this is good enough.
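
As an illustration, the save/override/restore pattern described in the
commit message could look like this as a standalone C sketch (not libvirt
code; the struct and field name only mirror priv->migMaxBandwidth for
readability):

#include <stdio.h>

typedef struct {
    unsigned long migMaxBandwidth;  /* value reported by migrate-getspeed */
} DomainPrivate;

static void
migrateWithBandwidth(DomainPrivate *priv, unsigned long bandwidth)
{
    unsigned long saved = priv->migMaxBandwidth;

    if (bandwidth)
        priv->migMaxBandwidth = bandwidth;   /* --bandwidth override */

    printf("migrating, current speed limit: %lu MiB/s\n",
           priv->migMaxBandwidth);

    priv->migMaxBandwidth = saved;           /* restore the previous value */
}

int
main(void)
{
    DomainPrivate priv = { .migMaxBandwidth = 100 };

    migrateWithBandwidth(&priv, 32);
    printf("after migration: %lu MiB/s\n", priv.migMaxBandwidth);
    return 0;
}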

Reviewed-by: Jiri Denemark 



Re: [libvirt PATCH v2 1/2] qemu: remove use of (+|-)name syntax for -cpu features

2021-10-07 Thread Jiri Denemark
On Thu, Oct 07, 2021 at 11:09:19 +0100, Daniel P. Berrangé wrote:
> On Thu, Oct 07, 2021 at 11:53:00AM +0200, Jiri Denemark wrote:
> > On Thu, Oct 07, 2021 at 10:05:12 +0100, Daniel P. Berrangé wrote:
> > > The -cpu arg gained support for feature=on|off syntax for the x86
> > > emulator in 2.4.0
> > > 
> > >   commit 38e5c119c2925812bd441450ab9e5e00fc79e662
> > >   Author: Eduardo Habkost 
> > >   Date:   Mon Mar 23 17:29:32 2015 -0300
> > > 
> > > target-i386: Register QOM properties for feature flags
> > > 
> > > Most other targets gained this syntax even earlier in 1.4.1
> > > 
> > >   commit 1590bbcb02921dfe8e3cf66e3a3aafd31193babf
> > >   Author: Andreas Färber 
> > >   Date:   Mon Mar 3 23:33:51 2014 +0100
> > > 
> > > cpu: Implement CPUClass::parse_features() for the rest of CPUs
> > > 
> > > CPUs who do not provide their own implementation of feature parsing
> > > will treat each option as a QOM property and set it to the supplied
> > > value.
> > > 
> > > There appears no reason to keep supporting "+|-feature" syntax,
> > > given the current minimum QEMU version.
> > > 
> > > Signed-off-by: Daniel P. Berrangé 
> > > ---
> > >  src/qemu/qemu_command.c   | 34 ++-
> > >  tests/qemuxml2argvdata/cpu-Haswell2.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-Haswell3.args  |  2 +-
> > >  .../qemuxml2argvdata/cpu-cache-disable3.args  |  2 +-
> > >  .../cpu-check-default-partial.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-eoi-disabled.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-eoi-enabled.args   |  2 +-
> > >  tests/qemuxml2argvdata/cpu-exact1.args|  2 +-
> > >  .../cpu-exact2-nofallback.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-exact2.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-fallback.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-host-kvmclock.args |  2 +-
> > >  .../qemuxml2argvdata/cpu-host-model-cmt.args  |  2 +-
> > >  .../cpu-host-model-fallback.args  |  2 +-
> > >  .../cpu-host-model-vendor.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-host-model.args|  2 +-
> > >  .../cpu-host-passthrough-features.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-kvmclock.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-minimum1.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-minimum2.args  |  2 +-
> > >  tests/qemuxml2argvdata/cpu-strict1.args   |  2 +-
> > >  .../cpu-translation.x86_64-latest.args|  2 +-
> > >  tests/qemuxml2argvdata/cpu-tsc-frequency.args |  2 +-
> > >  .../eoi-disabled.x86_64-latest.args   |  2 +-
> > >  .../eoi-enabled.x86_64-latest.args|  2 +-
> > >  .../graphics-spice-timeout.args   |  2 +-
> > >  .../kvmclock+eoi-disabled.x86_64-latest.args  |  2 +-
> > >  tests/qemuxml2argvdata/kvmclock.args  |  2 +-
> > >  .../pci-bridge-many-disks.args|  2 +-
> > >  .../pv-spinlock-disabled.x86_64-latest.args   |  2 +-
> > >  .../pv-spinlock-enabled.x86_64-latest.args|  2 +-
> > >  31 files changed, 41 insertions(+), 53 deletions(-)
> > > 
> > > diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
> > > index eaa1e0deb9..0f1cdd9372 100644
> > > --- a/src/qemu/qemu_command.c
> > > +++ b/src/qemu/qemu_command.c
> > > @@ -6242,21 +6242,6 @@ qemuBuildGlobalControllerCommandLine(virCommand *cmd,
> > >  }
> > >  
> > >  
> > > -static void
> > > -qemuBuildCpuFeature(virQEMUCaps *qemuCaps,
> > > -virBuffer *buf,
> > > -const char *name,
> > > -bool state)
> > > -{
> > > -name = virQEMUCapsCPUFeatureToQEMU(qemuCaps, name);
> > > -
> > > -if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION))
> > > -virBufferAsprintf(buf, ",%s=%s", name, state ? "on" : "off");
> > > -else
> > > -virBufferAsprintf(buf, ",%c%s", state ? '+' : '-', name);
> > > -}
> > 
> > I guess it would have been easier and perhaps clearer to just remove the
> > else branch and the test itself, because...
> > 
> > > @@ -6433,13 +6420,14 @@ qemuBuildCpuCommandLine(virCommand *cmd,

Re: [libvirt PATCH v2 1/2] qemu: remove use of (+|-)name syntax for -cpu features

2021-10-07 Thread Jiri Denemark
On Thu, Oct 07, 2021 at 10:05:12 +0100, Daniel P. Berrangé wrote:
> The -cpu arg gained support for feature=on|off syntax for the x86
> emulator in 2.4.0
> 
>   commit 38e5c119c2925812bd441450ab9e5e00fc79e662
>   Author: Eduardo Habkost 
>   Date:   Mon Mar 23 17:29:32 2015 -0300
> 
> target-i386: Register QOM properties for feature flags
> 
> Most other targets gained this syntax even earlier in 1.4.1
> 
>   commit 1590bbcb02921dfe8e3cf66e3a3aafd31193babf
>   Author: Andreas Färber 
>   Date:   Mon Mar 3 23:33:51 2014 +0100
> 
> cpu: Implement CPUClass::parse_features() for the rest of CPUs
> 
> CPUs who do not provide their own implementation of feature parsing
> will treat each option as a QOM property and set it to the supplied
> value.
> 
> There appears no reason to keep supporting "+|-feature" syntax,
> given the current minimum QEMU version.
> 
> Signed-off-by: Daniel P. Berrangé 
> ---
>  src/qemu/qemu_command.c   | 34 ++-
>  tests/qemuxml2argvdata/cpu-Haswell2.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-Haswell3.args  |  2 +-
>  .../qemuxml2argvdata/cpu-cache-disable3.args  |  2 +-
>  .../cpu-check-default-partial.args|  2 +-
>  tests/qemuxml2argvdata/cpu-eoi-disabled.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-eoi-enabled.args   |  2 +-
>  tests/qemuxml2argvdata/cpu-exact1.args|  2 +-
>  .../cpu-exact2-nofallback.args|  2 +-
>  tests/qemuxml2argvdata/cpu-exact2.args|  2 +-
>  tests/qemuxml2argvdata/cpu-fallback.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-host-kvmclock.args |  2 +-
>  .../qemuxml2argvdata/cpu-host-model-cmt.args  |  2 +-
>  .../cpu-host-model-fallback.args  |  2 +-
>  .../cpu-host-model-vendor.args|  2 +-
>  tests/qemuxml2argvdata/cpu-host-model.args|  2 +-
>  .../cpu-host-passthrough-features.args|  2 +-
>  tests/qemuxml2argvdata/cpu-kvmclock.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-minimum1.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-minimum2.args  |  2 +-
>  tests/qemuxml2argvdata/cpu-strict1.args   |  2 +-
>  .../cpu-translation.x86_64-latest.args|  2 +-
>  tests/qemuxml2argvdata/cpu-tsc-frequency.args |  2 +-
>  .../eoi-disabled.x86_64-latest.args   |  2 +-
>  .../eoi-enabled.x86_64-latest.args|  2 +-
>  .../graphics-spice-timeout.args   |  2 +-
>  .../kvmclock+eoi-disabled.x86_64-latest.args  |  2 +-
>  tests/qemuxml2argvdata/kvmclock.args  |  2 +-
>  .../pci-bridge-many-disks.args|  2 +-
>  .../pv-spinlock-disabled.x86_64-latest.args   |  2 +-
>  .../pv-spinlock-enabled.x86_64-latest.args|  2 +-
>  31 files changed, 41 insertions(+), 53 deletions(-)
> 
> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
> index eaa1e0deb9..0f1cdd9372 100644
> --- a/src/qemu/qemu_command.c
> +++ b/src/qemu/qemu_command.c
> @@ -6242,21 +6242,6 @@ qemuBuildGlobalControllerCommandLine(virCommand *cmd,
>  }
>  
>  
> -static void
> -qemuBuildCpuFeature(virQEMUCaps *qemuCaps,
> -virBuffer *buf,
> -const char *name,
> -bool state)
> -{
> -name = virQEMUCapsCPUFeatureToQEMU(qemuCaps, name);
> -
> -if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION))
> -virBufferAsprintf(buf, ",%s=%s", name, state ? "on" : "off");
> -else
> -virBufferAsprintf(buf, ",%c%s", state ? '+' : '-', name);
> -}

I guess it would have been easier and perhaps clearer to just remove the
else branch and the test itself, because...

> @@ -6433,13 +6420,14 @@ qemuBuildCpuCommandLine(virCommand *cmd,
>  }
>  
>  if (def->apic_eoi) {
> -qemuBuildCpuFeature(qemuCaps, &buf, "kvm_pv_eoi",
> -def->apic_eoi == VIR_TRISTATE_SWITCH_ON);
> +virBufferAsprintf(&buf, ",kvm_pv_eoi=%s", def->apic_eoi ==
> +  VIR_TRISTATE_SWITCH_ON ? "on" : "off");

This is affected by the same issue spotted by Peter in v1. In other
words, when replacing qemuBuildCpuFeature you need to make sure the
feature name goes through virQEMUCapsCPUFeatureToQEMU().
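
The underscore vs. dash spelling is the whole point here: libvirt accepts
names like "kvm_pv_unhalt" while QEMU's QOM properties use "kvm-pv-unhalt",
which is why the test hunk further below regresses to "kvm_pv_unhalt=off".
A standalone C sketch of such a translation step (the mapping table is
illustrative, not libvirt's actual implementation):

#include <stdio.h>
#include <string.h>

static const char *
cpuFeatureToQEMU(const char *name)
{
    static const char *map[][2] = {
        { "kvm_pv_eoi", "kvm-pv-eoi" },
        { "kvm_pv_unhalt", "kvm-pv-unhalt" },
    };
    size_t i;

    for (i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
        if (strcmp(name, map[i][0]) == 0)
            return map[i][1];
    }
    return name;  /* most names need no translation */
}

int
main(void)
{
    /* prints ",kvm-pv-unhalt=off" */
    printf(",%s=%s\n", cpuFeatureToQEMU("kvm_pv_unhalt"), "off");
    return 0;
}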

...
> diff --git a/tests/qemuxml2argvdata/pv-spinlock-disabled.x86_64-latest.args 
> b/tests/qemuxml2argvdata/pv-spinlock-disabled.x86_64-latest.args
> index a7a107f4b8..2188ff477d 100644
> --- a/tests/qemuxml2argvdata/pv-spinlock-disabled.x86_64-latest.args
> +++ b/tests/qemuxml2argvdata/pv-spinlock-disabled.x86_64-latest.args
> @@ -11,7 +11,7 @@ XDG_CONFIG_HOME=/tmp/lib/domain--1-QEMUGuest1/.config \
>  -S \
>  -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/tmp/lib/domain--1-QEMUGuest1/master-key.aes"}' \
>  -machine pc,accel=tcg,usb=off,dump-guest-core=off,memory-backend=pc.ram \
> --cpu qemu64,kvm-pv-unhalt=off \
> +-cpu qemu64,kvm_pv_unhalt=off \
>  -m 214 \
>  -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":224395264}' \
>  -overcommit 

Release of libvirt-7.8.0

2021-10-01 Thread Jiri Denemark
The 7.8.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing feedback. Your work is greatly
appreciated.

* New features

  * nodedev: Add ability to automatically start mediated devices

The autostart status of persistent mediated devices can be managed with
the new APIs ``virNodeDeviceSetAutostart()`` and
``virNodeDeviceGetAutostart()``. The corresponding virsh command is
``nodedev-autostart``. In addition, two new APIs were added to get
additional information about node devices: ``virNodeDeviceIsPersistent()``
checks whether the device is persistently defined, and
``virNodeDeviceIsActive()`` checks whether the node device is currently
active. This information can also be retrieved with the new virsh command
``nodedev-info``.
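
A minimal client-side sketch of the new calls using the public libvirt C
API (error handling trimmed; the mdev device name is just a placeholder;
build with: gcc example.c $(pkg-config --cflags --libs libvirt)):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virNodeDevicePtr dev;
    int autostart = 0;

    if (!conn)
        return 1;

    dev = virNodeDeviceLookupByName(conn, "mdev_example_0");
    if (dev) {
        /* the two new introspection calls */
        printf("persistent: %d, active: %d\n",
               virNodeDeviceIsPersistent(dev),
               virNodeDeviceIsActive(dev));

        /* enable autostart and read the setting back */
        if (virNodeDeviceSetAutostart(dev, 1) == 0 &&
            virNodeDeviceGetAutostart(dev, &autostart) == 0)
            printf("autostart: %d\n", autostart);

        virNodeDeviceFree(dev);
    }

    virConnectClose(conn);
    return 0;
}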

Enjoy.

Jirka



libvirt-7.8.0 release candidate 2

2021-09-29 Thread Jiri Denemark
I have just tagged v7.8.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Entering freeze for libvirt-7.8.0

2021-09-27 Thread Jiri Denemark
I have just tagged v7.8.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Plans for the next release

2021-09-21 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Oct 01 I suggest entering the freeze on Monday Sep 27 and
tagging RC2 on Wednesday Sep 29.

I hope this works for everyone.

Jirka



Re: [libvirt PATCH] gitlab: remove obsolete job rules for TEMPORARILY_DISABLED variable

2021-09-20 Thread Jiri Denemark
On Mon, Sep 20, 2021 at 13:41:07 +0100, Daniel P. Berrangé wrote:
> We previously had a 'rules:' entry that caused a job to be skipped if
> the variable "TEMPORARILY_DISABLED" was set. This is no longer needed
> since we can set a similar flag in ci/manifest.yml and re-generate
> to temporarily skip a job.
> 
> Unfortunately the 'rules:' entry had an unexpected side-effect on
> the pipelines that was never previously noticed. Instead of only
> running pipelines on push, the mere existence of the 'rules:' entry
> caused triggering of pipelines on merge requests too.
> 
> The newly auto-generated ci/gitlab.yml file does not have a 'rules:'
> for the container job template, and thus only runs on git push.
> 
> The result is that build jobs try to run on merge requests and the
> container jobs they depend on don't exist. This breaks the entire
> pipeline with a message that the config is invalid due to broken
> job dependencies.
> 
> This fixes a regression introduced in
> 
>   commit ccc7a44adbea003d6a0dc2f156adb2856c28bd4c
>   Author: Daniel P. Berrangé 
>   Date:   Thu Sep 9 14:49:01 2021 +0100
> 
> ci: re-generate containers/gitlab config from manifest
> 
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .gitlab-ci.yml | 8 
>  1 file changed, 8 deletions(-)

Reviewed-by: Jiri Denemark 



[libvirt PATCH] virsh: Make code flow in cmdManagedSaveRemove more straightforward

2021-09-15 Thread Jiri Denemark
By doing so we can get rid of the code which violates our coding style
guidelines.

Signed-off-by: Jiri Denemark 
---
 tools/virsh-domain.c | 21 +++--
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index e5bd1fdd75..ca56c3b33d 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -4788,18 +4788,19 @@ cmdManagedSaveRemove(vshControl *ctl, const vshCmd *cmd)
 return false;
 }
 
-if (hassave) {
-if (virDomainManagedSaveRemove(dom, 0) < 0) {
-vshError(ctl, _("Failed to remove managed save image for domain '%s'"),
- name);
-return false;
-}
-else
-vshPrintExtra(ctl, _("Removed managedsave image for domain '%s'"), name);
-}
-else
+if (hassave == 0) {
 vshPrintExtra(ctl, _("Domain '%s' has no manage save image; removal 
skipped"),
   name);
+return true;
+}
+
+if (virDomainManagedSaveRemove(dom, 0) < 0) {
+vshError(ctl, _("Failed to remove managed save image for domain '%s'"),
+ name);
+return false;
+}
+
+vshPrintExtra(ctl, _("Removed managedsave image for domain '%s'"), name);
 
 return true;
 }
-- 
2.33.0



Re: [PATCH 0/2] virsh: Fix fallback code path for vcpuinfo

2021-09-15 Thread Jiri Denemark
On Wed, Sep 15, 2021 at 15:21:10 +0200, Peter Krempa wrote:
> Peter Krempa (2):
>   virshDomainGetVcpuBitmap: Return bitmap when taking the fallback path
>   virshDomainGetVcpuBitmap: Refactor cleanup
> 
>  tools/virsh-domain.c | 15 +++
>  1 file changed, 7 insertions(+), 8 deletions(-)

Reviewed-by: Jiri Denemark 



Release of libvirt-7.7.0

2021-09-01 Thread Jiri Denemark
The 7.7.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* New features

  * Add support for Fibre Channel VMID

New VM element ``<fibrechannel appid='...'/>`` was added to allow users to set
their ``appid`` for each VM, which will be used by the kernel to create a Fibre
Channel VMID. This allows various QoS levels, access control or collecting
telemetry data per VM.

* Improvements

  * virsh: Allow XML validation for define of: storage pool, network, secret,
nwfilter, interface

* Add flag ``VIR_STORAGE_POOL_DEFINE_VALIDATE`` to validate storage pool
  input xml. For virsh, users can use it as ``virsh pool-define --validate``.
* Add flag ``VIR_NETWORK_DEFINE_VALIDATE`` to validate network input xml. For
  virsh, users can use it as ``net-define --validate``.
* Add flag ``VIR_SECRET_DEFINE_VALIDATE`` to validate secret input xml. For
  virsh, users can use it as ``secret-define --validate``.
* Add flag ``VIR_NWFILTER_DEFINE_VALIDATE`` to validate nwfilter input xml.
  For virsh, users can use it as ``nwfilter-define --validate``.
* Add flag ``VIR_INTERFACE_DEFINE_VALIDATE`` to validate interface input xml.
  For virsh, users can use it as ``iface-define --validate``.

  * Add SecurityManager APIs for labeling network devices

New ``virSecurityManagerSetNetdevLabel`` and ``virSecurityManagerRestoreNetdevLabel``
APIs are introduced and implemented in the Apparmor security driver.
The qemu driver uses the APIs to label vhostuser ports on hotplug and
restore labeling on unplug.

  * vmx: Parse vm.genid and support super wide SCSI bus

The genid attribute is now reported for VMX guests. Libvirt can now
properly process super wide SCSI bus (64 units).

  * qemu: Lifecycle action (``on_poweroff``/``on_reboot``) handling improvements

The handling of lifecycle actions was fixed and improved in multiple ways:

- ``restart-rename`` action was forbidden

  The action was never properly implemented in the qemu driver and didn't
  actually result in a restart of the VM but rather termination. The qemu
  driver now rejects such configurations.

- ``preserve`` action was forbidden

  Similarly to the previous case this never worked as the intended semantics
  of the actions dictate. It's better to not allow it at all until there's a
  proper implementation.

- ``reboot`` action of ``on_poweroff`` now actually works

  The guest OS is now rebooted instead of terminating the VM when the
  ``reboot`` action is used and the guest OS powers down. Note that it's
  incompatible with ``on_reboot`` set to ``destroy``.

- Changes in the ``on_reboot`` action are now updated with qemu

  Libvirtd can now properly update the ``on_reboot`` action in qemu which
  allows proper handling when changing between ``reboot`` and ``destroy``
  actions. In addition, switching from ``reboot`` to ``destroy`` was
  forbidden for older qemus which don't support the update API as the guest
  could still reboot and execute some instructions until it was terminated.

* Bug fixes

  * qemu: Open chardev logfile on behalf of QEMU

Guests with a logfile configured for their chardevs are now able to start
even when no virtlogd is configured.

  * virhostmem: Handle numactl-less build in hugepages allocation/reporting

Some architectures don't have notion of NUMA (e.g. s390x) but do support
hugepages. Libvirt silently ignored requests to allocate/report hugepage
pool when built without numactl. This is now fixed and the pool can be
allocated/reported on properly.

  * qemu: Record proper ``backing`` format for overlays of qcow2+luks images

Libvirt would record ``luks`` instead of ``qcow2`` into the metadata. In
practice this is a problem only when inspecting images manually via
``qemu-img`` as with libvirt users must use full specification of the
backing chain in the domain XML which supersedes information recorded in
the image metadata.

Enjoy.

Jirka



Re: [libvirt PATCH] qemu, xen: add missing deps on virtlockd/virtlogd sockets

2021-08-31 Thread Jiri Denemark
On Tue, Aug 31, 2021 at 12:04:14 +0100, Daniel P. Berrangé wrote:
> The QEMU driver uses both virtlogd and virtlockd, while the Xen driver
> uses virtlockd. The libvirtd.service unit contains deps on the socket
> units for these services, but these deps were missed in the modular
> daemons. As a result the virtlockd/virtlogd sockets are not started
> when the virtqemud/virtxend daemons are started.
> 
> Signed-off-by: Daniel P. Berrangé 
> ---
>  src/libxl/virtxend.service.in | 2 ++
>  src/qemu/virtqemud.service.in | 4 
>  2 files changed, 6 insertions(+)

Reviewed-by: Jiri Denemark 



Re: [libvirt PATCH] rpm: fix typo in post transaction scriptlet name

2021-08-31 Thread Jiri Denemark
On Tue, Aug 31, 2021 at 12:03:33 +0100, Daniel P. Berrangé wrote:
> Signed-off-by: Daniel P. Berrangé 
> ---
>  libvirt.spec.in | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/libvirt.spec.in b/libvirt.spec.in
> index b01379d242..624b0e0302 100644
> --- a/libvirt.spec.in
> +++ b/libvirt.spec.in
> @@ -1441,7 +1441,7 @@ fi
>  %preun daemon-driver-secret
>  %libvirt_daemon_systemd_preun virtsecretd
>  
> -%posttranstrans daemon-driver-secret
> +%posttrans daemon-driver-secret
>  %libvirt_daemon_perform_restart virtsecretd

Reviewed-by: Jiri Denemark 



libvirt-7.7.0 release candidate 2

2021-08-30 Thread Jiri Denemark
I have just tagged v7.7.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Entering freeze for libvirt-7.7.0

2021-08-26 Thread Jiri Denemark
I have just tagged v7.7.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



[libvirt PATCH] util: Fix memory leak when clearing Open vSwitch QoS

2021-08-12 Thread Jiri Denemark
No need to overwrite vmid_ex_id with a pointer to another copy of the
same string when the original is still alive.

Signed-off-by: Jiri Denemark 
---
 src/util/virnetdevopenvswitch.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/util/virnetdevopenvswitch.c b/src/util/virnetdevopenvswitch.c
index 7a64a8dbe6..f6a8ed4a31 100644
--- a/src/util/virnetdevopenvswitch.c
+++ b/src/util/virnetdevopenvswitch.c
@@ -824,7 +824,6 @@ virNetDevOpenvswitchInterfaceClearQos(const char *ifname,
 /* find queue */
 virCommandFree(cmd);
 cmd = virNetDevOpenvswitchCreateCmd();
-vmid_ex_id = g_strdup_printf("external-ids:vm-id=\"%s\"", vmuuidstr);
 virCommandAddArgList(cmd, "--no-heading", "--columns=_uuid", "find", "queue", vmid_ex_id, NULL);
 virCommandSetOutputBuffer(cmd, _uuid);
 if (virCommandRun(cmd, NULL) < 0) {
-- 
2.32.0



Plans for the next release

2021-08-12 Thread Jiri Denemark
I'm sending this early as I'll be out until Aug 25, but don't worry I'll
be back in time to handle the whole release process.

To aim for the release on Sep 01 I suggest entering the freeze on
Thursday Aug 26 and tagging RC2 on Monday Aug 30.

I hope this works for everyone.

Jirka



Re: [libvirt PATCH 0/3] Invalidate the cpu flags cache on changes of kernel command line

2021-08-11 Thread Jiri Denemark
On Fri, Aug 06, 2021 at 18:12:21 +0100, Daniel P. Berrangé wrote:
> On Fri, Aug 06, 2021 at 05:07:45PM +0200, Jiri Denemark wrote:
> > On Thu, Aug 05, 2021 at 14:50:51 +0100, Daniel P. Berrangé wrote:
> > > On Thu, Aug 05, 2021 at 03:36:37PM +0200, Tim Wiederhake wrote:
> > > > The kernel command line can contain settings affecting the availability
> > > > of cpu features, eg. "tsx=on". This series adds the kernel command line
> > > > to the cpu flags cache and declares the cache invalid if the current
> > > > kernel command line differs.
> > > 
> > > Multiple things can change the CPU features. kernel version,
> > > microcode version, bios settings change, kernel command line. We've
> > > been playing whack-a-mole in cache invalidation for ages adding ever
> > > more criteria for things which have side effects on CPU features
> > > available.
> > > 
> > > Running the CPUID instruction is cheap. Could we directly query the
> > > set of host CPUID leaves we care about, and compare that, and
> > > potentially even get rid of some of the other checks we have ?
> > 
> > I guess it could help in some cases, but we wouldn't be able to drop
> > some of the existing checks anyway. Because the settings usually do not
> > result in the CPU dropping a particular bit from CPUID, the feature just
> > becomes unusable by reporting a failure when used. So the settings would
> > only be reflected in what features QEMU can enable on the host. Although
> > checking CPUID might be enough for TSX, checking the command line is
> > helpful in other cases.
> 
> Would that be reflected by the answer to KVM_GET_SUPPORTED_CPUID
> which is the intersection of physical CPUID and what KVM is actally
> willing to enable ?  That ioctl would be cheap too.

I don't know, to be honest. I guess it should work unless QEMU does some
additional processing/filtering of the results it gets from KVM.

Eduardo, do you know if KVM_GET_SUPPORTED_CPUID would be sufficient to
check any configuration changes (bios settings, kernel command line,
module options, ...) that affect usable CPU features?

Jirka



Re: [libvirt PATCH 0/3] Invalidate the cpu flags cache on changes of kernel command line

2021-08-06 Thread Jiri Denemark
On Thu, Aug 05, 2021 at 14:50:51 +0100, Daniel P. Berrangé wrote:
> On Thu, Aug 05, 2021 at 03:36:37PM +0200, Tim Wiederhake wrote:
> > The kernel command line can contain settings affecting the availability
> > of cpu features, eg. "tsx=on". This series adds the kernel command line
> > to the cpu flags cache and declares the cache invalid if the current
> > kernel command line differs.
> 
> Multiple things can change the CPU features. kernel version,
> microcode version, bios settings change, kernel command line. We've
> been playing whack-a-mole in cache invalidation for ages adding ever
> more criteria for things which have side effects on CPU features
> available.
> 
> Running the CPUID instruction is cheap. Could we directly query the
> set of host CPUID leaves we care about, and compare that, and
> potentially even get rid of some of the other checks we have ?

I guess it could help in some cases, but we wouldn't be able to drop
some of the existing checks anyway. Because the settings usually do not
result in the CPU dropping a particular bit from CPUID, the feature just
becomes unusable by reporting a failure when used. So the settings would
only be reflected in what features QEMU can enable on the host. Although
checking CPUID might be enough for TSX, checking the command line is
helpful in other cases.
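
For reference, reading CPUID directly really is cheap; a standalone
x86-only sketch checking the TSX bits (HLE and RTM in leaf 7) with the
cpuid.h shipped by GCC/clang. As noted above, this only shows what the
hardware advertises, not whether the feature is actually usable:

#include <stdio.h>
#include <cpuid.h>

int
main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 7 not supported\n");
        return 1;
    }

    /* CPUID.(EAX=7,ECX=0):EBX, bit 4 = HLE, bit 11 = RTM */
    printf("HLE: %s\n", (ebx & (1u << 4))  ? "yes" : "no");
    printf("RTM: %s\n", (ebx & (1u << 11)) ? "yes" : "no");
    return 0;
}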

I'm afraid the only 100% correct solution would be to stop caching CPU
data at all and always probe QEMU (for host CPU model expansion only and
drop all the cache if the set of features changes), but this might be
quite expensive. On the other hand, we would need to do it only for KVM,
which means a single (or two for 32b vs 64b) on a host..

Jirka



Release of libvirt-7.6.0

2021-08-02 Thread Jiri Denemark
The 7.6.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* Security

  * storage: Unlock pool objects on ACL check failures in 
``storagePoolLookupByTargetPath`` (CVE-2021-3667)

A logic bug in ``storagePoolLookupByTargetPath`` where the storage pool
object was left locked after a failure of the ACL check could potentially
be abused by users who don't have access to deprive legitimate users of
access to a storage pool object.

* New features

  * qemu: Incremental backup support via ``virDomainBackupBegin``

libvirt-7.6 along with the unreleased qemu-6.1 will fully support the change
block tracking features (block-dirty-bitmaps) to be able to do incremental
backups and management of the checkpoint states via the appropriate APIs.

  * qemu: Add support for launch security type s390-pv

Specifying s390-pv as launch security type in an s390 domain prepares for
running the guest in protected virtualization secure mode, also known as
IBM Secure Execution. This simplifies the definition and reduces the risk
of an incorrect definition, e.g. by forgetting to specify ``iommu=on`` on
all virtio devices.

  * domstats: Add haltpolling time statistic interface

Domstats now provide the data of cpu haltpolling time. This feature relies
on statistics available after kernel version 5.8. This will allow the user
to get more accurate CPU usage information if needed.
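
A client-side sketch of reading the per-domain CPU statistics group, which
is where the new haltpolling counters should show up alongside the other
cpu.* fields on hosts with a new enough kernel (public libvirt API; only
unsigned long long parameters are printed to keep the example short):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainStatsRecordPtr *records = NULL;
    int nrecords;
    int i, j;

    if (!conn)
        return 1;

    nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_CPU_TOTAL,
                                           &records, 0);
    for (i = 0; i < nrecords; i++) {
        printf("domain: %s\n", virDomainGetName(records[i]->dom));
        for (j = 0; j < records[i]->nparams; j++) {
            virTypedParameterPtr p = &records[i]->params[j];

            if (p->type == VIR_TYPED_PARAM_ULLONG)
                printf("  %s = %llu\n", p->field, p->value.ul);
        }
    }

    if (records)
        virDomainStatsRecordListFree(records);
    virConnectClose(conn);
    return 0;
}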

* Bug fixes

  * qemu: Fix migration with ``VIR_MIGRATE_NON_SHARED_INC``

libvirt 7.3.0 introduced a bug where ``VIR_MIGRATE_NON_SHARED_INC`` would
not actually migrate the contents of the disk due to broken logic and at
the same time could trigger migration of storage when
``VIR_MIGRATE_TUNNELLED`` is requested. This release fixes the bug.

  * qemu: Don't emit ``VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD`` twice when 
registered with index

When registering the threshold event with the index notation (e.g.
``vda[3]``) libvirt would emit the event also for ``vda`` if the image is
in the top layer. The intention was to emit two events only when the
original registration was done without the index.

  * qemu: Pass discard requests for disks with ``copy_on_read='on'``

When a disk using the ``copy_on_read='on'`` option is configured also with
``discard='unmap'`` the discard requests will now be passed to the
underlying image freeing up the space.

Enjoy.

Jirka



Re: [PATCH] qemu_migration: Unregister close callback only if connection still exists

2021-07-21 Thread Jiri Denemark
On Tue, Jul 20, 2021 at 18:19:54 +0200, Michal Privoznik wrote:
> When doing a peer-to-peer migration it may happen that the
> connection to the destination disappears. If that happens,
> there's no point in trying to unregister the close callback
> because the connection is closed already. It results only in
> polluting logs with this message:
> 
>   error : virNetSocketReadWire:1814 : End of file while reading data: : 
> Input/output error
> 
> and the reason for that is unregistering a connection callback
> results in RPC (among other things).
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1918211
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_migration.c | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index a4f44b465d..4d651aeb1a 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -5214,9 +5214,11 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriver *driver,
>  
>   cleanup:
>  virErrorPreserveLast(&orig_err);
> -qemuDomainObjEnterRemote(vm);
> -virConnectUnregisterCloseCallback(dconn, qemuMigrationSrcConnectionClosed);
> -ignore_value(qemuDomainObjExitRemote(vm, false));
> +if (dconn && virConnectIsAlive(dconn) == 1) {
> +qemuDomainObjEnterRemote(vm);
> +virConnectUnregisterCloseCallback(dconn, qemuMigrationSrcConnectionClosed);
> +ignore_value(qemuDomainObjExitRemote(vm, false));
> +}
>  virErrorRestore(&orig_err);
>  return ret;
>  }

Reviewed-by: Jiri Denemark 
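
For context, a minimal standalone client sketch of the APIs involved, with
the same virConnectIsAlive() guard the patch adds before unregistering (the
destination URI is only a placeholder):

#include <stdio.h>
#include <libvirt/libvirt.h>

static void
closeCallback(virConnectPtr conn, int reason, void *opaque)
{
    (void)conn;
    (void)opaque;
    fprintf(stderr, "connection closed, reason=%d\n", reason);
}

int
main(void)
{
    virConnectPtr dconn = virConnectOpen("qemu+ssh://dst.example.com/system");

    if (!dconn)
        return 1;

    if (virConnectRegisterCloseCallback(dconn, closeCallback, NULL, NULL) < 0)
        fprintf(stderr, "failed to register close callback\n");

    /* ... migration work against the destination would happen here ... */

    /* Unregistering results in RPC, so skip it once the connection has
     * already gone away, which is exactly the guard added above. */
    if (virConnectIsAlive(dconn) == 1)
        virConnectUnregisterCloseCallback(dconn, closeCallback);

    virConnectClose(dconn);
    return 0;
}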



Plans for the next release

2021-07-20 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Aug 02 I suggest entering the freeze on Tuesday Jul 27 and
tagging RC2 on Thursday Jul 29.

I'll be on PTO next week and thus I won't be able to make the RC
releases, Pavel Hrdina volunteered to make them while I'm away. This
means RC1 and RC2 releases won't be signed by my PGP key, but it
shouldn't be a big deal as the final release is what matters and I will
be back in time to handle it myself.

Thanks Pavel for backing me up.

Jirka



Re: [libvirt PATCH] qemu: Signal domain condition in qemuProcessStop a bit later

2021-07-19 Thread Jiri Denemark
On Mon, Jul 19, 2021 at 09:37:01 +0200, Michal Prívozník wrote:
> On 7/16/21 5:06 PM, Jiri Denemark wrote:
> > Signaling the condition before vm->def->id is reset to -1 is dangerous:
> > in case a waiting thread wakes up, it does not see anything interesting
> > (the domain is still marked as running) and just enters virDomainObjWait
> > where it waits forever because the condition will never be signalled
> > again.
> > 
> > Originally it was impossible to get into such situation because the vm
> > object was locked all the time between signaling the condition and
> > resetting vm->def->id, but after commit 860a999802 released in 6.8.0,
> > qemuDomainObjStopWorker called in qemuProcessStop between
> > virDomainObjBroadcast and setting vm->def->id to -1 unlocks the vm
> > object giving other threads a chance to wake up and possibly hang.
> > 
> > In real world, this can be easily reproduced by killing, destroying, or
> > just shutting down (from the guest OS) a domain while it is being
> > migrated somewhere else. The migration job would never finish.
> > 
> > We can't fix this by reseting vm->def->id earlier because other
> > functions (such as qemuProcessKill) do nothing when the domain is
> > already marked as inactive. So let's make sure we delay signaling the
> 
> Not true really. The VIR_QEMU_PROCESS_KILL_NOCHECK flag is passed to
> qemuProcessKill which means the check for domain state (aka vm->def->id
> == -1) is skipped.

Heh, I completely missed this flag when looking at the code :-/

I dropped this part of the commit message.

Jirka



[libvirt PATCH] qemu: Signal domain condition in qemuProcessStop a bit later

2021-07-16 Thread Jiri Denemark
Signaling the condition before vm->def->id is reset to -1 is dangerous:
in case a waiting thread wakes up, it does not see anything interesting
(the domain is still marked as running) and just enters virDomainObjWait
where it waits forever because the condition will never be signalled
again.

Originally it was impossible to get into such situation because the vm
object was locked all the time between signaling the condition and
resetting vm->def->id, but after commit 860a999802 released in 6.8.0,
qemuDomainObjStopWorker called in qemuProcessStop between
virDomainObjBroadcast and setting vm->def->id to -1 unlocks the vm
object giving other threads a chance to wake up and possibly hang.

In real world, this can be easily reproduced by killing, destroying, or
just shutting down (from the guest OS) a domain while it is being
migrated somewhere else. The migration job would never finish.

We can't fix this by resetting vm->def->id earlier because other
functions (such as qemuProcessKill) do nothing when the domain is
already marked as inactive. So let's make sure we delay signaling the
domain condition to the point when a woken up thread can detect the
domain is not active anymore.
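
The ordering can be sketched with plain pthreads standing in for
virDomainObjWait/virDomainObjBroadcast: the predicate (here domain_id)
must change before the broadcast, otherwise a waiter that wakes up
re-checks a stale predicate and goes back to sleep forever:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int domain_id = 42;              /* -1 means "not running" */

static void *
waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (domain_id != -1)             /* the predicate the waiter checks */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    printf("waiter saw the domain stop\n");
    return NULL;
}

int
main(void)
{
    pthread_t thr;

    pthread_create(&thr, NULL, waiter, NULL);

    pthread_mutex_lock(&lock);
    domain_id = -1;                     /* update the state first ...  */
    pthread_cond_broadcast(&cond);      /* ... then wake the waiters   */
    pthread_mutex_unlock(&lock);

    pthread_join(thr, NULL);
    return 0;
}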

https://bugzilla.redhat.com/show_bug.cgi?id=1949869

Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_process.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index c972c90801..914f936e45 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -7852,9 +7852,6 @@ void qemuProcessStop(virQEMUDriver *driver,
 if (!!g_atomic_int_dec_and_test(&driver->nactive) && driver->inhibitCallback)
 driver->inhibitCallback(false, driver->inhibitOpaque);
 
-/* Wake up anything waiting on domain condition */
-virDomainObjBroadcast(vm);
-
 if ((timestamp = virTimeStringNow()) != NULL) {
 qemuDomainLogAppendMessage(driver, vm, "%s: shutting down, reason=%s\n",
timestamp,
@@ -7925,6 +7922,9 @@ void qemuProcessStop(virQEMUDriver *driver,
 
 vm->def->id = -1;
 
+/* Wake up anything waiting on domain condition */
+virDomainObjBroadcast(vm);
+
 virFileDeleteTree(priv->libDir);
 virFileDeleteTree(priv->channelTargetDir);
 
-- 
2.32.0



Release of libvirt-7.5.0

2021-07-01 Thread Jiri Denemark
The 7.5.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* Security

  * svirt: fix MCS label generation (CVE-2021-3631)

A flaw in the way MCS labels were generated could result in a VM's
resource not being fully protected from access by another VM were
it to be compromised. https://gitlab.com/libvirt/libvirt/-/issues/153

* Removed features

  * xen: Remove support for Xen < 4.9

In accordance with our platform support policy, the oldest supported Xen
version is now bumped from 4.6 to 4.9.

* Improvements

  * docs: Document disk serial truncation status quo

Disk ``<serial>`` is being truncated by QEMU before being passed to the guest.
Since it's impossible to fix it without running into further regressions
the documentation was improved to document the intricacies.

* Bug fixes

  * qemu: Fixed validation of disk ``iothread`` configuration

The validation of ``iothread`` config was previously moved to a place where
it caused bogus errors when address wasn't allocated when hotplugging a
disk. The check is now removed as it wasn't actually necessary at all.

Enjoy.

Jirka



libvirt-7.5.0 release candidate 2

2021-06-29 Thread Jiri Denemark
I have just tagged v7.5.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Entering freeze for libvirt-7.5.0

2021-06-25 Thread Jiri Denemark
I have just tagged v7.5.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: [PATCH] tests: qemucapabilities: Bump test data for qemu-6.1 on x86_64

2021-06-24 Thread Jiri Denemark
On Tue, Jun 22, 2021 at 13:06:11 +0200, Peter Krempa wrote:
> Update the caps data for the upcoming qemu version.
> 
> Notable changes are:
> 
> - 'query-sev-attestation-report' command added
> - 'sample-pages' members for dirty rate calculation added
> - 'qtest' device added
> - 'share' member added to query-memdev and 'reserve' members added to
>   query-memdev/memory-backend-[file,memfd,ram]
> - 'qemu-vdagent' chardev added
> - 'mptcp' toggle added to inet servers
> - 'zstd' compression for qcow2
> - new cpu models: - "Snowridge-v3"
>   - "Skylake-Server-v5"
>   - "Skylake-Client-v4"
>   - "Icelake-Server-v5"
>   - "Icelake-Client-v3"
>   - "Dhyana-v2"
>   - "Denverton-v3"
>   - "Cooperlake-v2"
>   - "Cascadelake-Server-v5"
> - 'avx-vnni' added to some existing cpu models
> - 'model-id' is now being reported as the host cpu again rather than
>   QEMU TCG as I've noted in previous bump
> 
> Signed-off-by: Peter Krempa 
> ---
>  .../caps_6.1.0.x86_64.replies     | 3663 ++---
>  .../caps_6.1.0.x86_64.xml |  366 +-
>  2 files changed, 2525 insertions(+), 1504 deletions(-)

Reviewed-by: Jiri Denemark 



[libvirt PATCH] spec: Drop libiscsi support in RHEL-9

2021-06-24 Thread Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1975677

Signed-off-by: Jiri Denemark 
---
 libvirt.spec.in | 4 
 1 file changed, 4 insertions(+)

diff --git a/libvirt.spec.in b/libvirt.spec.in
index b8a698e81e..c1ccd2f74e 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -85,6 +85,10 @@
 %endif
 
 %define with_storage_iscsi_direct 0%{!?_without_storage_iscsi_direct:1}
+# libiscsi has been dropped in RHEL-9
+%if 0%{?rhel} > 8
+%define with_storage_iscsi_direct 0
+%endif
 
 # Other optional features
 %define with_numactl  0%{!?_without_numactl:1}
-- 
2.32.0



Plans for the next release

2021-06-18 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Jul 01 I suggest entering the freeze on Friday Jun 25 and
tagging RC2 on Tuesday Jun 29.

I hope this works for everyone.

Jirka



Release of libvirt-7.4.0

2021-06-01 Thread Jiri Denemark
The 7.4.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* Removed features

  * qemu: Remove support for QEMU < 2.11

In accordance with our platform support policy, the oldest supported QEMU
version is now bumped from 1.5 to 2.11.

* New features

  * qemu: Add support for hotplugging ``<transient/>`` disks

The disk hotplug code in the qemu driver now can handle hotplug of disks
with automatically added overlay.

  * qemu: Add support for sharing base image of ``<transient>`` disks

Users can use ``<transient shareBacking='yes'>`` to tell the qemu driver to
never open the base image in write mode thus multiple VMs can share the
same image. Note that the disk will be hotplugged during startup.
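
A client-side sketch of hotplugging such a disk through the public API (the
domain name, image path and target are placeholders; whether the hypervisor
accepts the combination depends on the qemu version):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;
    const char *diskXML =
        "<disk type='file' device='disk'>"
        "  <driver name='qemu' type='qcow2'/>"
        "  <source file='/var/lib/libvirt/images/scratch.qcow2'/>"
        "  <target dev='vdb' bus='virtio'/>"
        "  <transient/>"
        "</disk>";

    if (!conn)
        return 1;

    dom = virDomainLookupByName(conn, "example-domain");
    if (dom) {
        if (virDomainAttachDeviceFlags(dom, diskXML, VIR_DOMAIN_AFFECT_LIVE) < 0)
            fprintf(stderr, "hotplug failed\n");
        virDomainFree(dom);
    }

    virConnectClose(conn);
    return 0;
}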

* Improvements

  * Add win-dmp crashdump format

New ``win-dmp`` format for ``virDomainCoreDumpWithFormat`` API and/or virsh
``dump --format`` was introduced.

* Bug fixes

  * Allow 0 offset in XML schema for ``<slice offset='0' size='321'/>``

Having a 0 offset so that the size of the image can be limited is a
valid configuration so it was allowed in the XML schema.

Enjoy.

Jirka



Re: [PATCH] NEWS: disks: Mention improvements and XML fix

2021-06-01 Thread Jiri Denemark
On Tue, Jun 01, 2021 at 09:29:29 +0200, Peter Krempa wrote:
> Signed-off-by: Peter Krempa 
> ---
>  NEWS.rst | 15 +++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/NEWS.rst b/NEWS.rst
> index 8358cbd369..66ce95d7cb 100644
> --- a/NEWS.rst
> +++ b/NEWS.rst
> @@ -20,6 +20,17 @@ v7.4.0 (unreleased)
> 
>  * **New features**
> 
> +  * qemu: Add support for hotplugging ``<transient/>`` disks
> +
> +The disk hotplug code in the qemu driver now can handle hotplug of disks
> +with automatically added overlay.
> +
> +  * qemu: Add support for sharing base image of ``<transient>`` disks
> +
> +Users can use ``<transient shareBacking='yes'>`` to tell the qemu driver to
> +never open the base image in write mode thus multiple VMs can share the
> +same image. Note that the disk will be hotplugged during startup.
> +
>  * **Improvements**
> 
>* Add win-dmp crashdump format
> @@ -29,6 +40,10 @@ v7.4.0 (unreleased)
> 
>  * **Bug fixes**
> 
> +  * Allow 0 offset in XML schema for ``<slice offset='0' size='321'/>``
> +
> +Having a 0 offset so that the size of the image can be limited is a
> +valid configuration so it was allowd in the XML schema.

s/allowd/allowed/

Reviewed-by: Jiri Denemark 



libvirt-7.4.0 release candidate 2

2021-05-27 Thread Jiri Denemark
I have just tagged v7.4.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: Entering freeze for libvirt-7.4.0

2021-05-27 Thread Jiri Denemark
On Wed, May 26, 2021 at 11:37:48 -0500, Jonathon Jongsma wrote:
> On Tue, May 25, 2021 at 10:12 AM Jiri Denemark  wrote:
> >
> > I have just tagged v7.4.0-rc1 in the repository and pushed signed
> > tarballs and source RPMs to https://libvirt.org/sources/
> >
> > Please give the release candidate some testing and in case you find a
> > serious issue which should have a fix in the upcoming release, feel
> > free to reply to this thread to make sure the issue is more visible.
> >
> > If you have not done so yet, please update NEWS.rst to document any
> > significant change you made since the last release.
> >
> > Thanks,
> >
> > Jirka
> >
> 
> 
> I just posted a couple of patches to revert an XML change that Daniel
> Berrange objected to. We should get this reverted before release so
> that we don't need to support it going forward.

Sure, I will make rc2 only after this revert is pushed.

Jirka



Entering freeze for libvirt-7.4.0

2021-05-25 Thread Jiri Denemark
I have just tagged v7.4.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Plans for the next release

2021-05-19 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Jun 01 I suggest entering the freeze on Tuesday May 25 and
tagging RC2 on Thursday May 27.

I hope this works for everyone.

Jirka



Re: [PATCH 4/4] tests: qemucapabilities: Add test-data for the qemu-6.1 cycle

2021-05-17 Thread Jiri Denemark
On Mon, May 17, 2021 at 11:16:35 +0200, Peter Krempa wrote:
> Add test data based on qemu commit v6.0.0-540-g6005ee07c3.
> 
> Notable changes are the removal of 'sheepdog' disk storage protocol.
> 
> Additionally the cpu model reported when probing seems to have changed
> from:
> 
> "model-id": "AMD Ryzen 9 3900X 12-Core Processor"
> 
> to:
> 
> "model-id": "QEMU TCG CPU version 2.5+"

This is indeed strange as KVM used to report the host's CPU name
directly, but it shouldn't cause any harm except for confusing humans
looking at the libvirtd logs or capabilities cache. In other words, it's
worth fixing unless it was an intentional change with a very good reason
behind it.

> despite building on the same machine. This probably also results in the
> 2 test changes in the CPU definition which popped up in this update.

I'm not sure if it's related (i.e., caused by the same QEMU changes),
but it's definitely not caused by the difference in model-id. Libvirt
does not even look at model-id, we only check individual CPU properties.

QEMU just reports svm, npt, and nrip-save are disabled... might be worth
reporting to them to check whether it is expected.

Jirka



Release of libvirt-7.3.0

2021-05-03 Thread Jiri Denemark
The 7.3.0 release of both libvirt and libvirt-python is tagged, and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* New features

  * xen: Support domains with more than 4TB

The xen driver now supports domains with more than 4TB of memory with
xen >= 4.13.

  * qemu: add socket for virtiofs filesystems

Libvirt now supports ``filesystem`` devices that connect to
a ``virtiofsd`` daemon launched outside of libvirtd, via the
``socket`` attribute of the ``source`` element.

  * nodedev: Add ability to manage persistent mediated devices

Persistent mediated devices can now be managed with libvirt.
``virNodeDeviceDefineXML()`` defines a new device,
``virNodeDeviceUndefine()`` removes an existing definition, and
``virNodeDeviceCreate()`` starts a device definition that is currently
inactive. Corresponding virsh commands ``nodedev-define``,
``nodedev-undefine``, and ``nodedev-start`` were also added.
``nodedev-list`` only lists active devices by default. Inactive device
definitions can be shown with the new ``--inactive`` and ``--all`` flags.

  * qemu: Allow use of qemu's ``-compat`` option

Curious developers or testers now can enable certain ``-compat`` modes which
allow them to notice use of deprecated commands and options as qemu will use
the selected method to notify the user. The new behaviour can be requested
using either the ``deprecation_behavior`` option in ``qemu.conf`` for all VMs
or using ``<qemu:deprecation behavior='...'/>`` in the VM XML.

* Improvements

  * virsh: Improve errors with ``virsh snapshot-create-as``

The XML document constructed by virsh was forced through XML schema
validation which yielded unintelligible error messages in cases such as
when the path to the new image did not start with a slash. XML documents
are no longer validated as the XML parser actually has better error
messages which allow users to figure the problem out quickly.

  * qemu: Terminate backing store when doing a full-chain block pull

When pulling everything into the overlay image the chain can be terminated
since we know that it won't depend on any backing image and thus can prevent
attempts to probe the backing chain.

  * qemu: Expose disk serial in virDomainGetGuestInfo()

The ``virDomainGetGuestInfo()`` reports disk serial number among with other
disk information.
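
A client-side sketch of querying this information through the public API
(requires a guest agent in the VM; the domain name is a placeholder and
parameter names are printed as returned, with the serial number appearing
among the disk.* fields):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int i;

    if (!conn)
        return 1;

    dom = virDomainLookupByName(conn, "example-domain");
    if (dom &&
        virDomainGetGuestInfo(dom, VIR_DOMAIN_GUEST_INFO_DISKS,
                              &params, &nparams, 0) == 0) {
        for (i = 0; i < nparams; i++) {
            if (params[i].type == VIR_TYPED_PARAM_STRING)
                printf("%s = %s\n", params[i].field, params[i].value.s);
        }
        virTypedParamsFree(params, nparams);
    }

    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}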

* Bug fixes

  * qemu: Fix crash of libvirt on full block pull of a disk

When the persistent definition contains a compatible disk (meaning the
definition of the running and persistent config match) a block pull job
would leave a dangling pointer in the config definition which resulted
in a crash.

  * qemu: Use proper job cancelling command

Libvirt's API contract for aborting a block copy job in 'ready' state
declares that the destination image of the copy will contain a consistent
image of the disk from the time when the block job was aborted. This
requires that libvirt uses the proper cancelling qemu command to ensure
that the data is consistent which was not the case.

  * qemu: Don't attempt storage migration when there are no migratable disks

Due to a logic bug introduced in the previous release libvirt would attempt
to migrate disks in case when no disks are selected/eligible for migration.

  * qemu: Fix very rare race when two block job 'ready' events are delivered

In certain high-load scenarios, qemu might deliver the 'ready' event twice
and if it's delivered when pivoting to the destination during a block copy
job, libvirt would get confused and execute the code as if the job were
aborted.

  * lxc: Fix container destroy with CGroupsV2

When an LXC container was started and the host used CGroupsV2 it might have
had created nested controllers under the container's scope. Libvirt was
unaware and thus destroying the container failed with a cryptic error:
``failed to get cgroup backend for 'pathOfController'``. The CGroup removal
code was reworked and is now capable of dealing with such scenario.

  * bash-completion: Fix argument passing to $1

Due to a bug in bash completion script, the auto completion did not work
properly when a connection URI or read only flag were specified on
``virsh`` or ``virt-admin`` command line.

Enjoy.

Jirka



Re: [PATCH for 7.3] conf: Fix heap corruption when hot-adding a lease

2021-05-03 Thread Jiri Denemark
On Sun, May 02, 2021 at 12:13:50 +0200, Peter Krempa wrote:
> Commit 28a86993162f7d2f ( v6.9.0-179-g28a8699316 ) incorrectly replaced
> VIR_EXPAND_N by g_renew.
> 
> VIR_EXPAND_N has these two extra effects apart from reallocating memory:
> 
> 1) The newly allocated memory is zeroed out
> 2) The number of elements in the array which is passed to VIR_EXPAND_N
>is increased.
> 
> This comes into play when used with virDomainLeaseInsertPreAlloced,
> which expects that the array element count already includes the space
> for the added 'lease', by plainly just assigning to 'leases[nleases - 1'

s/1/1]/

> 
> Since g_renew does not increase the number of elements in the array
> any existing code which calls virDomainLeaseInsertPreAlloced thus either
> overwrites a lease definition or corrupts the heap if there are no
> leases to start with.
> 
> To preserve existing functionality we revert the code back to using
> VIR_EXPAND_N which at this point doesn't return any value, so other
> commits don't need to be reverted.

The second point could have been solved by passing ++def->nleases to
g_renew. But using VIR_EXPAND_N instead solves both issues and we have a
lot places with VIR_EXPAND_N so we can fix them all at some point if we
want to drop this wrapper for some reason.
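
The difference can be sketched with plain realloc() standing in for both
helpers (this is not libvirt's actual macro): a g_renew()-style call only
resizes, while a VIR_EXPAND_N-style helper also zeroes the new tail and
bumps the caller's element count:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char *path; } Lease;

static int
expandLeases(Lease **leases, size_t *nleases, size_t add)
{
    Lease *tmp = realloc(*leases, (*nleases + add) * sizeof(**leases));

    if (!tmp)
        return -1;

    memset(tmp + *nleases, 0, add * sizeof(*tmp));  /* zero the new slots */
    *leases = tmp;
    *nleases += add;                                /* bump the count too */
    return 0;
}

int
main(void)
{
    Lease *leases = NULL;
    size_t nleases = 0;

    if (expandLeases(&leases, &nleases, 1) == 0) {
        /* code like virDomainLeaseInsertPreAlloced can now safely fill
         * leases[nleases - 1] */
        printf("nleases = %zu, new slot zeroed: %s\n",
               nleases, leases[0].path == NULL ? "yes" : "no");
    }

    free(leases);
    return 0;
}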

Reviewed-by: Jiri Denemark 



[libvirt PATCH] virnetdevbridge: Ignore EEXIST when adding an entry to fdb

2021-04-30 Thread Jiri Denemark
When updating entries in a bridge forwarding database (i.e., when
macTableManager='libvirt' is configured for the bridge), we may end up
in a situation when the entry we want to add is already present. Let's
just ignore the error in such a case.

This fixes an error to resume a domain when fdb entries were not
properly removed when the domain was paused:

virsh # resume test
error: Failed to resume domain test
error: error adding fdb entry for vnet2: File exists

For some reason, fdb entries are only removed when libvirt explicitly
stops CPUs, but nothing happens when we just get STOP event from QEMU.
An alternative approach would be to make sure we always remove the
entries regardless on why a domain was paused (e.g., during migration),
but that would be a significantly more disruptive change with possible
side effects.

Signed-off-by: Jiri Denemark 
---
 src/util/virnetdevbridge.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/src/util/virnetdevbridge.c b/src/util/virnetdevbridge.c
index 7b5ea4fe1d..4fe84cc162 100644
--- a/src/util/virnetdevbridge.c
+++ b/src/util/virnetdevbridge.c
@@ -1063,9 +1063,13 @@ virNetDevBridgeFDBAddDel(const virMacAddr *mac, const char *ifname,
 if (resp->nlmsg_len < NLMSG_LENGTH(sizeof(*err)))
 goto malformed_resp;
 if (err->error) {
-virReportSystemError(-err->error,
- _("error adding fdb entry for %s"), ifname);
-return -1;
+if (isAdd && -err->error == EEXIST) {
+VIR_DEBUG("fdb entry for %s already exists", ifname);
+} else {
+virReportSystemError(-err->error,
+ _("error adding fdb entry for %s"), 
ifname);
+return -1;
+}
 }
 break;
 case NLMSG_DONE:
-- 
2.31.1



libvirt-7.3.0 release candidate 2

2021-04-29 Thread Jiri Denemark
I have just tagged v7.3.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Entering freeze for libvirt-7.3.0

2021-04-27 Thread Jiri Denemark
I have just tagged v7.3.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



[libvirt PATCH] spec: Do not build qemu driver for Power on RHEL-9

2021-04-21 Thread Jiri Denemark
https://bugzilla.redhat.com/show_bug.cgi?id=1946529

Signed-off-by: Jiri Denemark 
---
 libvirt.spec.in | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/libvirt.spec.in b/libvirt.spec.in
index f9af330186..be74964b7b 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -19,7 +19,11 @@
 
 %define arches_qemu_kvm %{ix86} x86_64 %{power64} %{arm} aarch64 s390x
 %if 0%{?rhel}
-%define arches_qemu_kvm x86_64 %{power64} aarch64 s390x
+%if 0%{?rhel} <= 8
+%define arches_qemu_kvm x86_64 %{power64} aarch64 s390x
+%else
+%define arches_qemu_kvm x86_64 aarch64 s390x
+%endif
 %endif
 
 %define arches_64bit x86_64 %{power64} aarch64 s390x riscv64
-- 
2.31.1



Plans for the next release

2021-04-19 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on May 03 I suggest entering the freeze on Tuesday Apr 27 and
tagging RC2 on Thursday Apr 29 in the afternoon.

I hope this works for everyone.

Jirka



[libvirt PATCH v2] downloads.html: Add a link to GPG key used signing releases

2021-04-09 Thread Jiri Denemark
While the key is available on public GPG key servers, having it locally
at https://libvirt.org/sources/gpg_key.asc is even better.

Signed-off-by: Jiri Denemark 
---

Notes:
Version 2:
- moved the new sentence to a dedicated paragraph as suggested by Andrea

 docs/downloads.html.in | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/docs/downloads.html.in b/docs/downloads.html.in
index ca14b3ecba..0187062cef 100644
--- a/docs/downloads.html.in
+++ b/docs/downloads.html.in
@@ -615,6 +615,12 @@ pub  4096R/10084C9C 2020-07-20 Jiří Denemark jdene...@redhat.com
 Fingerprint=453B 6531 0595 5628 5547  1199 CA68 BE80 1008 4C9C
 
 
+
+  It can be downloaded from
+  <a href="https://libvirt.org/sources/gpg_key.asc">this site</a> or from
+  public GPG key servers.
+
+
 
   Releases prior to libvirt-6.6 were signed with the following GPG key:
 
-- 
2.31.1



Re: [libvirt PATCH] downloads.html: Add a link to GPG key used for signing releases

2021-04-09 Thread Jiri Denemark
On Thu, Apr 01, 2021 at 20:18:33 +0200, Ján Tomko wrote:
> On a Thursday in 2021, Jiri Denemark wrote:
> >While the key is available on public GPG key servers, having it locally
> >at https://libvirt.org/sources/gpg_key.asc is even better.
> >

Oops, I completely forgot I have this patch in queue :-)

> I don't remember where but I think someone was trying to find the
> key used to sign libvirt-glib. Also, Pavel uses his key to sign
> libvirt-dbus releases.

I guess such keys could be stored in the subdirectories associated with
these projects.

Jirka



[libvirt PATCH] downloads.html: Add a link to GPG key used for signing releases

2021-04-01 Thread Jiri Denemark
While the key is available on public GPG key servers, having it locally
at https://libvirt.org/sources/gpg_key.asc is even better.

Signed-off-by: Jiri Denemark 
---
 docs/downloads.html.in | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/downloads.html.in b/docs/downloads.html.in
index ca14b3ecba..90a0cf7717 100644
--- a/docs/downloads.html.in
+++ b/docs/downloads.html.in
@@ -608,7 +608,9 @@ git clone git://libvirt.org/[module name].git
   on this project site are signed with a GPG signature. You should always
   verify the package signature before using the source to compile binary
   packages. The following key is currently used to generate the GPG
-  signatures:
+  signatures and it can be
+  <a href="https://libvirt.org/sources/gpg_key.asc">downloaded from this
+  site</a> or from public GPG key servers:
 
 
 pub  4096R/10084C9C 2020-07-20 Jiří Denemark jdene...@redhat.com
-- 
2.31.1



Release of libvirt-7.2.0

2021-04-01 Thread Jiri Denemark
The 7.2.0 release of both libvirt and libvirt-python is tagged and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* New features

  * qemu: Implement domain memory dirty rate calculation API

New API ``virDomainStartDirtyRateCalc()`` and virsh command
``domdirtyrate-calc`` are added to start calculating a live domain's
memory dirty rate (a short client sketch follows this list).

  * qemu: Support reporting memory dirty rate stats

The memory dirty rate stats can be obtained through ``virsh domstats
--dirtyrate`` via the virConnectGetAllDomainStats API.

  * qemu: Full disk backups via ``virDomainBackupBegin``

The qemu hypervisor driver now allows taking full disk backups via the
``virDomainBackupBegin`` API and the corresponding virsh wrapper.

In future releases the feature will be extended to also support incremental
backups (where only the difference since the last backup is copied) when
qemu adds the required functionality.

  * Add support for audio backend specific settings

With this release a new ``<audio>`` element is introduced that allows
users to configure audio output for their guests.
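
For illustration, here is a minimal client sketch using the new dirty rate
API mentioned above. The domain name is a placeholder, error handling is
kept to a minimum, and the sketch assumes the
``virDomainStartDirtyRateCalc(domain, seconds, flags)`` signature introduced
in this release::

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virDomainPtr dom = NULL;
      int ret = 1;

      /* "demo-guest" is a placeholder domain name. */
      if (!conn || !(dom = virDomainLookupByName(conn, "demo-guest"))) {
          fprintf(stderr, "failed to connect or to find the domain\n");
          goto cleanup;
      }

      /* Measure the dirty rate over a 5 second window; flags are 0 here.
       * The result is reported later via virConnectGetAllDomainStats()
       * (which is what "virsh domstats --dirtyrate" uses). */
      if (virDomainStartDirtyRateCalc(dom, 5, 0) < 0) {
          fprintf(stderr, "failed to start dirty rate calculation\n");
          goto cleanup;
      }

      ret = 0;

   cleanup:
      if (dom)
          virDomainFree(dom);
      if (conn)
          virConnectClose(conn);
      return ret;
  }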

* Improvements

  * qemu: Compatibility with QEMU 6.0 for certain hot-(un)-plug operations

Libvirt 7.2.0 is required for compatibility with the upcoming QEMU 6.0
release for hotplug and hotunplug of certain devices and helpers, such as
iothreads, chardevs, RNG devices, disks with secret, ...

  * qemu: Various improvements to embedded mode

Embedded mode for the QEMU driver, as well as the ``virt-qemu-run`` tool
saw improvements in handling of domain life cycle, temporary directories
creation (important when using disk secrets) and other minor fixes.

  * Documentation of split daemon related config files

Split daemons read configuration files upon their start. These were never
documented though.

* Bug fixes

  * Check host CPU for forbidden features

CPU feature policy did not work as expected with ``host-passthrough`` and
features supported by physical host. CPU features were not filtered out
when ``@check`` was set to ``full``.

  * Fix virNetworkUpdate() to work with split daemons

Due to a bug in our code, virNetworkUpdate() did not work with split daemon
unless management application connected to virtnetworkd directly.

  * qemu: increase locked memory limit when a vDPA device is present

Just like VFIO devices, vDPA devices may need to have all guest memory
pages locked/pinned in order to operate properly. These devices are now
included when calculating the limit for memory lock.

  * Don't log error if SRIOV PF has no associated netdev

Some SRIOV PFs don't have a netdev associated with them in which case
libvirtd reported an error and refused to start. This is now fixed.

  * qemu: Only raise memlock limit if necessary

Attempting to set the memlock limit might fail if we're running
in a containerized environment where ``CAP_SYS_RESOURCE`` is not
available, and if the limit is already high enough there's no
point in trying to raise it anyway.

  * Restore security context of swtpm.log

If a guest with emulated TPM was started and the daemon was restarted
afterwards, the security context of the per-domain ``swtpm.log`` file was
not restored on domain shutdown leaving it unable to be started again.

  * virtlogd|virtlockd: Fixed crash when upgrading the daemons in-place

A bug preventing the in-place upgrade of ``virtlogd`` and ``virtlockd``
daemons was fixed, so they can again be upgraded without dropping the log
file descriptors or locks on files.

Enjoy.

Jirka



libvirt-7.2.0 release candidate 2

2021-03-30 Thread Jiri Denemark
I have just tagged v7.2.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: [PATCH] qemu_driver: Acquire MODIFY job in qemuDomainStartDirtyRateCalc()

2021-03-29 Thread Jiri Denemark
On Mon, Mar 29, 2021 at 14:49:36 +0200, Michal Privoznik wrote:
> This API talks to QEMU and changes its internal state. Therefore,
> it should acquire QEMU_JOB_MODIFY instead of QEMU_JOB_QUERY.
> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_driver.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 69dc704a44..af015f0625 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -20425,7 +20425,7 @@ qemuDomainStartDirtyRateCalc(virDomainPtr dom,
>  if (virDomainStartDirtyRateCalcEnsureACL(dom->conn, vm->def) < 0)
>  goto cleanup;
>  
> -if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
> +if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
>  goto cleanup;
>  
>  if (!virDomainObjIsActive(vm)) {

Reviewed-by: Jiri Denemark 



Entering freeze for libvirt-7.2.0

2021-03-26 Thread Jiri Denemark
I have just tagged v7.2.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: [libvirt PATCH v3 0/3] qemuProcessUpdateGuestCPU: Check host cpu for forbidden features

2021-03-26 Thread Jiri Denemark
On Tue, Mar 23, 2021 at 11:01:52 +0100, Tim Wiederhake wrote:
> V1: https://listman.redhat.com/archives/libvir-list/2021-February/msg01275.html
> V2: https://listman.redhat.com/archives/libvir-list/2021-February/msg01289.html
> 
> Changes since V2:
> * Factored out into seperate function in src/cpu/cpu.c
> * Made virCPUDefFindFeature work on const pointers
> 
> Tim Wiederhake (3):
>   virCPUDefFindFeature: Make first argument const ptr
>   cpu: Introduce virCPUCheckForbiddenFeatures
>   qemuProcessUpdateGuestCPU: Check host cpu for forbidden features
> 
>  src/conf/cpu_conf.c  |  2 +-
>  src/conf/cpu_conf.h  |  2 +-
>  src/cpu/cpu.c| 37 +
>  src/cpu/cpu.h|  6 ++
>  src/libvirt_private.syms |  1 +
>  src/qemu/qemu_process.c  | 10 ++
>  6 files changed, 56 insertions(+), 2 deletions(-)
> 
> --
> 2.26.2

Reviewed-by: Jiri Denemark 

and pushed, thanks.



Re: [libvirt PATCH] qemu: Update asyncOwnerAPI when entering async job phase

2021-03-22 Thread Jiri Denemark
On Mon, Mar 22, 2021 at 11:17:12 +0100, Michal Privoznik wrote:
> On 3/19/21 10:42 PM, Jiri Denemark wrote:
> > In case an async job spans multiple APIs (e.g., incoming migration) the
> > API that started the job is recorded as the asyncOwnerAPI even though it
> > is no longer running and the owner thread is updated properly to the one
> > currently handling the job. Let's also update asyncOwnerAPI to make it
> > more obvious which is the current (or the most recent) API involved in
> > the job.
> > 
> > Signed-off-by: Jiri Denemark 
> > ---
> >   src/qemu/qemu_domainjob.c | 4 +++-
> >   1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
> > index b58d6837ad..50cfc45f5b 100644
> > --- a/src/qemu/qemu_domainjob.c
> > +++ b/src/qemu/qemu_domainjob.c
> > @@ -711,7 +711,9 @@ qemuDomainObjSetJobPhase(virQEMUDriverPtr driver,
> > qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
> > qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase));
> >   
> > -if (priv->job.asyncOwner && me != priv->job.asyncOwner) {
> > +if (priv->job.asyncOwner == 0) {
> > +priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
> > +} else if (me != priv->job.asyncOwner) {
> >   VIR_WARN("'%s' async job is owned by thread %llu",
> >qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
> >priv->job.asyncOwner);
> > 
> 
> Could this be related to a problem I've reviewed earlier?
> 
> https://listman.redhat.com/archives/libvir-list/2021-March/msg00933.html
> 
> Or maybe not. IDK.

I don't think so. This is just making debug and error messages a bit more
logical as asyncOwnerAPI is not really used for anything but showing
which thread is "hanging" in a job in case another thread times out
waiting for a job.

Jirka



[libvirt PATCH 6/7] Do not check return value of VIR_REALLOC_N

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/bhyve/bhyve_parse_command.c |  8 +++---
 src/conf/capabilities.c |  3 +--
 src/conf/domain_conf.c  | 11 
 src/conf/storage_conf.c |  3 +--
 src/conf/virinterfaceobj.c  |  2 +-
 src/conf/virnetworkobj.c|  4 +--
 src/conf/virnodedeviceobj.c |  2 +-
 src/conf/virsecretobj.c |  2 +-
 src/conf/virstorageobj.c|  2 +-
 src/esx/esx_stream.c|  3 +--
 src/interface/interface_backend_netcf.c |  2 +-
 src/interface/interface_backend_udev.c  |  2 +-
 src/libxl/libxl_capabilities.c  |  3 +--
 src/libxl/libxl_conf.c  | 14 --
 src/libxl/libxl_driver.c| 15 ---
 src/qemu/qemu_agent.c   |  4 +--
 src/qemu/qemu_firmware.c|  5 ++--
 src/qemu/qemu_hotplug.c | 36 +
 src/qemu/qemu_monitor.c |  4 +--
 src/qemu/qemu_monitor_json.c| 12 ++---
 src/qemu/qemu_process.c | 25 ++---
 src/rpc/virnetclient.c  |  3 +--
 src/rpc/virnetmessage.c | 12 +++--
 src/storage/storage_backend_disk.c  |  4 +--
 src/storage/storage_backend_logical.c   |  3 +--
 src/storage/storage_backend_rbd.c   |  3 +--
 src/util/virarptable.c  |  3 +--
 src/util/vircommand.c   |  5 ++--
 src/util/virdnsmasq.c   | 13 +++--
 src/util/virfile.c  |  5 +---
 src/util/virjson.c  | 13 +++--
 src/util/virnuma.c  | 14 +-
 src/util/virstring.c|  3 +--
 src/util/virsysinfo.c   |  3 +--
 src/vbox/vbox_common.c  |  4 +--
 tests/domaincapstest.c  |  8 ++
 tests/qemublocktest.c   |  3 +--
 tests/viralloctest.c|  9 +++
 tools/virsh-console.c   | 22 ++-
 tools/virt-login-shell-helper.c |  3 +--
 tools/vsh.c |  6 ++---
 41 files changed, 96 insertions(+), 205 deletions(-)

diff --git a/src/bhyve/bhyve_parse_command.c b/src/bhyve/bhyve_parse_command.c
index 8d9a21e671..70f5ac42a0 100644
--- a/src/bhyve/bhyve_parse_command.c
+++ b/src/bhyve/bhyve_parse_command.c
@@ -219,8 +219,8 @@ bhyveCommandLineToArgv(const char *nativeConfig,
  * Otherwise, later argument lists may be assigned to _argv without
  * freeing the earlier ones. */
 if (!_bhyve_argv && STREQ(arglist[0], "/usr/sbin/bhyve")) {
-if ((VIR_REALLOC_N(_bhyve_argv, args_count + 1) < 0)
-|| (!bhyve_argc))
+VIR_REALLOC_N(_bhyve_argv, args_count + 1);
+if (!bhyve_argc)
 goto error;
 for (j = 0; j < args_count; j++)
 _bhyve_argv[j] = arglist[j];
@@ -228,8 +228,8 @@ bhyveCommandLineToArgv(const char *nativeConfig,
 *bhyve_argc = args_count-1;
 VIR_FREE(arglist);
 } else if (!_loader_argv) {
-if ((VIR_REALLOC_N(_loader_argv, args_count + 1) < 0)
-|| (!loader_argc))
+VIR_REALLOC_N(_loader_argv, args_count + 1);
+if (!loader_argc)
 goto error;
 for (j = 0; j < args_count; j++)
 _loader_argv[j] = arglist[j];
diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c
index bc9035afae..573ac4e975 100644
--- a/src/conf/capabilities.c
+++ b/src/conf/capabilities.c
@@ -1476,8 +1476,7 @@ virCapabilitiesGetNUMASiblingInfo(int node,
 tmp_size++;
 }
 
-if (VIR_REALLOC_N(tmp, tmp_size) < 0)
-goto cleanup;
+VIR_REALLOC_N(tmp, tmp_size);
 
 *nsiblings = tmp_size;
 *siblings = g_steal_pointer(&tmp);
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index afbd6fc7c1..95602ae57e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -3433,8 +3433,7 @@ virDomainIOThreadIDDefArrayInit(virDomainDefPtr def,
def->iothreadids[i]->iothread_id));
 
 /* resize array */
-if (VIR_REALLOC_N(def->iothreadids, iothreads) < 0)
-return -1;
+VIR_REALLOC_N(def->iothreadids, iothreads);
 
 /* Populate iothreadids[] using the set bit number from thrmap */
 while (def->niothreadids < iothreads) {
@@ -17587,7 +17586,8 @@ virDomainChrPreAlloc(virDomainDefPtr vmdef,
   , ) < 0)
 return -1;
 
-return VIR_REALLOC_N(*arrPtr, *cntPtr + 1);
+VIR_REALLOC_N(*arrPtr, *cntPtr + 1);
+return 0;
 }
 
 void
@@ -21442,8 +21442,9 @@ virDomainDefParseXML(xmlDocPtr xml,
 /* analysis of the host devices */
 if ((n = virXPathNodeSet("./devices/hostdev", ctxt, &nodes)) < 0)

[libvirt PATCH 4/7] Do not check return value of VIR_EXPAND_N

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/access/viraccessdriverstack.c |  3 +-
 src/conf/backup_conf.c|  3 +-
 src/conf/capabilities.c   |  4 +-
 src/conf/domain_addr.c|  6 +--
 src/conf/domain_conf.c|  3 +-
 src/conf/nwfilter_conf.c  |  6 +--
 src/conf/nwfilter_params.c|  8 +---
 src/esx/esx_driver.c  |  4 +-
 src/hyperv/hyperv_wmi.c   |  3 +-
 src/hypervisor/domain_driver.c|  3 +-
 src/hypervisor/virclosecallbacks.c|  2 +-
 src/libxl/libxl_conf.c|  9 ++--
 src/libxl/xen_common.c|  6 +--
 src/locking/lock_driver_lockd.c   |  4 +-
 src/lxc/lxc_controller.c  | 18 ++--
 src/lxc/lxc_native.c  | 33 +--
 src/qemu/qemu_agent.c |  7 +---
 src/qemu/qemu_capabilities.c  |  3 +-
 src/qemu/qemu_conf.c  |  6 +--
 src/rpc/virnetclient.c| 16 +--
 src/rpc/virnetdaemon.c|  3 +-
 src/rpc/virnetlibsshsession.c |  8 +---
 src/rpc/virnetserver.c| 19 ++---
 src/rpc/virnetsocket.c|  3 +-
 src/rpc/virnetsshsession.c|  8 +---
 src/storage/storage_backend_rbd.c |  3 +-
 .../storage_file_backend_gluster.c|  3 +-
 .../storage_source_backingstore.c |  3 +-
 src/util/vircommand.c |  2 +-
 src/util/virfile.c| 14 ++-
 src/util/virfirewall.c|  3 +-
 src/util/virlockspace.c   |  7 +---
 src/util/virprocess.c | 17 +---
 src/util/virresctrl.c | 42 ---
 src/util/virstring.c  |  6 +--
 src/util/virsysinfo.c | 34 ---
 src/util/virsystemd.c |  3 +-
 src/util/virthreadpool.c  |  3 +-
 src/util/virtypedparam.c  |  3 +-
 src/vbox/vbox_snapshot_conf.c | 29 -
 tests/qemudomaincheckpointxml2xmltest.c   |  3 +-
 tests/qemumonitortestutils.c  | 12 ++
 tests/securityselinuxlabeltest.c  |  7 +---
 tests/viralloctest.c  |  3 +-
 tools/virsh-domain.c  | 27 +++-
 45 files changed, 106 insertions(+), 306 deletions(-)

diff --git a/src/access/viraccessdriverstack.c 
b/src/access/viraccessdriverstack.c
index 238caef115..0f0858e96e 100644
--- a/src/access/viraccessdriverstack.c
+++ b/src/access/viraccessdriverstack.c
@@ -40,8 +40,7 @@ int virAccessDriverStackAppend(virAccessManagerPtr manager,
 {
 virAccessDriverStackPrivatePtr priv = 
virAccessManagerGetPrivateData(manager);
 
-if (VIR_EXPAND_N(priv->managers, priv->managersLen, 1) < 0)
-return -1;
+VIR_EXPAND_N(priv->managers, priv->managersLen, 1);
 
 priv->managers[priv->managersLen-1] = child;
 
diff --git a/src/conf/backup_conf.c b/src/conf/backup_conf.c
index ba58b2e322..2d6d2d99f4 100644
--- a/src/conf/backup_conf.c
+++ b/src/conf/backup_conf.c
@@ -554,8 +554,7 @@ virDomainBackupAlignDisks(virDomainBackupDefPtr def,
 backup_all = true;
 
 ndisks = def->ndisks;
-if (VIR_EXPAND_N(def->disks, def->ndisks, dom->ndisks - def->ndisks) < 0)
-return -1;
+VIR_EXPAND_N(def->disks, def->ndisks, dom->ndisks - def->ndisks);
 
 for (i = 0; i < dom->ndisks; i++) {
 virDomainBackupDiskDefPtr backupdisk = NULL;
diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c
index 14f0d48930..bc9035afae 100644
--- a/src/conf/capabilities.c
+++ b/src/conf/capabilities.c
@@ -580,9 +580,7 @@ 
virCapabilitiesHostSecModelAddBaseLabel(virCapsHostSecModelPtr secmodel,
 if (type == NULL || label == NULL)
 return -1;
 
-if (VIR_EXPAND_N(secmodel->labels, secmodel->nlabels, 1) < 0)
-return -1;
-
+VIR_EXPAND_N(secmodel->labels, secmodel->nlabels, 1);
 secmodel->labels[secmodel->nlabels - 1].type = g_strdup(type);
 secmodel->labels[secmodel->nlabels - 1].label = g_strdup(label);
 
diff --git a/src/conf/domain_addr.c b/src/conf/domain_addr.c
index 8b927689f8..9167b489d1 100644
--- a/src/conf/domain_addr.c
+++ b/src/conf/domain_addr.c
@@ -728,8 +728,7 @@ virDomainPCIAddressSetGrow(virDomainPCIAddressSetPtr addrs,
 
 i = addrs->nbuses;
 
-if (VIR_EXPAND_N(addrs->buses, addrs->nbuses, add) < 0)
-return -1;
+VIR_EXPAND_N(addrs->buses, addrs->nbuses, add);
 
 if (needDMIToPCIBridge) {
 /* first of the new buses is dmi-to-pci-bridge, the
@@ -1

[libvirt PATCH 3/7] util: Make virResizeN return void

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/util/viralloc.c | 17 +
 src/util/viralloc.h |  4 ++--
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/src/util/viralloc.c b/src/util/viralloc.c
index 1317537c8a..cd770eb601 100644
--- a/src/util/viralloc.c
+++ b/src/util/viralloc.c
@@ -98,13 +98,13 @@ int virExpandN(void *ptrptr,
  * failure, 'ptrptr' and 'allocptr' are not changed. Any newly
  * allocated memory in 'ptrptr' is zero-filled.
  *
- * Returns zero on success, aborts on OOM
+ * Aborts on OOM
  */
-int virResizeN(void *ptrptr,
-   size_t size,
-   size_t *allocptr,
-   size_t count,
-   size_t add)
+void virResizeN(void *ptrptr,
+size_t size,
+size_t *allocptr,
+size_t count,
+size_t add)
 {
 size_t delta;
 
@@ -112,12 +112,13 @@ int virResizeN(void *ptrptr,
 abort();
 
 if (count + add <= *allocptr)
-return 0;
+return;
 
 delta = count + add - *allocptr;
 if (delta < *allocptr / 2)
 delta = *allocptr / 2;
-return virExpandN(ptrptr, size, allocptr, delta);
+
+virExpandN(ptrptr, size, allocptr, delta);
 }
 
 /**
diff --git a/src/util/viralloc.h b/src/util/viralloc.h
index d656d4ba0e..878c9485cf 100644
--- a/src/util/viralloc.h
+++ b/src/util/viralloc.h
@@ -38,7 +38,7 @@ int virReallocN(void *ptrptr, size_t size, size_t count)
 ATTRIBUTE_NONNULL(1);
 int virExpandN(void *ptrptr, size_t size, size_t *count, size_t add)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
-int virResizeN(void *ptrptr, size_t size, size_t *alloc, size_t count, size_t 
desired)
+void virResizeN(void *ptrptr, size_t size, size_t *alloc, size_t count, size_t 
desired)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 void virShrinkN(void *ptrptr, size_t size, size_t *count, size_t toremove)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
@@ -102,7 +102,7 @@ int virDeleteElementsN(void *ptrptr, size_t size, size_t 
at, size_t *countptr,
  *
  * This macro is safe to use on arguments with side effects.
  *
- * Returns 0 on success, aborts on OOM
+ * Aborts on OOM
  */
 #define VIR_RESIZE_N(ptr, alloc, count, add) \
 virResizeN(&(ptr), sizeof(*(ptr)), &(alloc), count, add)
-- 
2.31.0



[libvirt PATCH 1/7] util: Drop G_GNUC_WARN_UNUSED_RESULT from reallocation APIs

2021-03-19 Thread Jiri Denemark
Our reallocation APIs already abort on OOM and thus can only return 0.
There's no need to force callers to check the result.

Signed-off-by: Jiri Denemark 
---
 src/util/viralloc.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/util/viralloc.h b/src/util/viralloc.h
index e3027622c4..d656d4ba0e 100644
--- a/src/util/viralloc.h
+++ b/src/util/viralloc.h
@@ -35,11 +35,11 @@
 
 /* Don't call these directly - use the macros below */
 int virReallocN(void *ptrptr, size_t size, size_t count)
-G_GNUC_WARN_UNUSED_RESULT ATTRIBUTE_NONNULL(1);
+ATTRIBUTE_NONNULL(1);
 int virExpandN(void *ptrptr, size_t size, size_t *count, size_t add)
-G_GNUC_WARN_UNUSED_RESULT ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 int virResizeN(void *ptrptr, size_t size, size_t *alloc, size_t count, size_t 
desired)
-G_GNUC_WARN_UNUSED_RESULT ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 void virShrinkN(void *ptrptr, size_t size, size_t *count, size_t toremove)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 int virInsertElementsN(void *ptrptr, size_t size, size_t at, size_t *countptr,
-- 
2.31.0



[libvirt PATCH 2/7] Do not check return value of VIR_RESIZE_N

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/bhyve/bhyve_capabilities.c  |  5 +
 src/bhyve/bhyve_parse_command.c | 11 ++
 src/conf/capabilities.c | 36 -
 src/conf/cpu_conf.c | 11 ++
 src/conf/domain_capabilities.c  |  5 ++---
 src/esx/esx_driver.c|  3 +--
 src/hyperv/hyperv_driver.c  |  3 +--
 src/qemu/qemu_domain.c  | 13 +---
 src/util/virbitmap.c|  5 ++---
 src/util/vircommand.c   | 16 +++
 src/util/virfile.c  |  3 +--
 src/util/virfirewall.c  |  5 +
 src/util/virnetlink.c   |  7 +++
 src/util/virtypedparam-public.c | 24 --
 src/util/virtypedparam.c|  6 ++
 src/util/viruri.c   |  8 +---
 src/util/virutil.c  | 14 -
 tests/viralloctest.c|  3 +--
 18 files changed, 53 insertions(+), 125 deletions(-)

diff --git a/src/bhyve/bhyve_capabilities.c b/src/bhyve/bhyve_capabilities.c
index e9b4e5176d..efd6a59aec 100644
--- a/src/bhyve/bhyve_capabilities.c
+++ b/src/bhyve/bhyve_capabilities.c
@@ -154,10 +154,7 @@ virBhyveDomainCapsBuild(bhyveConnPtr conn,
 
 if (virDirOpenIfExists(&dir, firmware_dir) > 0) {
 while ((virDirRead(dir, &entry, firmware_dir)) > 0) {
-if (VIR_RESIZE_N(firmwares->values,
-firmwares_alloc, firmwares->nvalues, 1) < 0)
-goto cleanup;
-
+VIR_RESIZE_N(firmwares->values, firmwares_alloc, 
firmwares->nvalues, 1);
 firmwares->values[firmwares->nvalues] = g_strdup_printf("%s/%s",
 firmware_dir, 
entry->d_name);
 firmwares->nvalues++;
diff --git a/src/bhyve/bhyve_parse_command.c b/src/bhyve/bhyve_parse_command.c
index 2762b7e921..8d9a21e671 100644
--- a/src/bhyve/bhyve_parse_command.c
+++ b/src/bhyve/bhyve_parse_command.c
@@ -153,10 +153,7 @@ bhyveCommandLineToArgv(const char *nativeConfig,
 else
 line = g_strdup(curr);
 
-if (VIR_RESIZE_N(lines, lines_alloc, line_count, 2) < 0) {
-VIR_FREE(line);
-goto error;
-}
+VIR_RESIZE_N(lines, lines_alloc, line_count, 2);
 
 if (*line)
 lines[line_count++] = line;
@@ -203,11 +200,7 @@ bhyveCommandLineToArgv(const char *nativeConfig,
 if (next && (*next == '\'' || *next == '"'))
 next++;
 
-if (VIR_RESIZE_N(arglist, args_alloc, args_count, 2) < 0) {
-VIR_FREE(arg);
-goto error;
-}
-
+VIR_RESIZE_N(arglist, args_alloc, args_count, 2);
 arglist[args_count++] = arg;
 arglist[args_count] = NULL;
 
diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c
index f610975ae5..14f0d48930 100644
--- a/src/conf/capabilities.c
+++ b/src/conf/capabilities.c
@@ -280,10 +280,8 @@ int
 virCapabilitiesAddHostFeature(virCapsPtr caps,
   const char *name)
 {
-if (VIR_RESIZE_N(caps->host.features, caps->host.nfeatures_max,
- caps->host.nfeatures, 1) < 0)
-return -1;
-
+VIR_RESIZE_N(caps->host.features, caps->host.nfeatures_max,
+ caps->host.nfeatures, 1);
 caps->host.features[caps->host.nfeatures] = g_strdup(name);
 caps->host.nfeatures++;
 
@@ -301,10 +299,8 @@ int
 virCapabilitiesAddHostMigrateTransport(virCapsPtr caps,
const char *name)
 {
-if (VIR_RESIZE_N(caps->host.migrateTrans, caps->host.nmigrateTrans_max,
- caps->host.nmigrateTrans, 1) < 0)
-return -1;
-
+VIR_RESIZE_N(caps->host.migrateTrans, caps->host.nmigrateTrans_max,
+ caps->host.nmigrateTrans, 1);
 caps->host.migrateTrans[caps->host.nmigrateTrans] = g_strdup(name);
 caps->host.nmigrateTrans++;
 
@@ -447,9 +443,7 @@ virCapabilitiesAddGuest(virCapsPtr caps,
 guest->arch.defaultInfo.emulator = g_strdup(emulator);
 guest->arch.defaultInfo.loader = g_strdup(loader);
 
-if (VIR_RESIZE_N(caps->guests, caps->nguests_max,
- caps->nguests, 1) < 0)
-goto error;
+VIR_RESIZE_N(caps->guests, caps->nguests_max, caps->nguests, 1);
 caps->guests[caps->nguests++] = guest;
 
 if (nmachines) {
@@ -458,10 +452,6 @@ virCapabilitiesAddGuest(virCapsPtr caps,
 }
 
 return guest;
-
- error:
-virCapabilitiesFreeGuest(guest);
-return NULL;
 }
 
 
@@ -493,9 +483,8 @@ virCapabilitiesAddGuestDomain(virCapsGuestPtr guest,
 dom->info.emulator = g_strdup(emulator);
 dom->info.loader = g_strdup(loader);
 
-if (VIR_RESIZE_N(guest->arch.domains, guest->arch.ndomains_max,
- guest->arch.ndomains, 1) < 0)
-

[libvirt PATCH 0/7] Change reallocation APIs to return void

2021-03-19 Thread Jiri Denemark
They can never return anything but zero anyway.
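
For illustration, a typical caller (this one is from viraccessdriverstack.c
in patch 4/7) goes from

    if (VIR_EXPAND_N(priv->managers, priv->managersLen, 1) < 0)
        return -1;

to just

    VIR_EXPAND_N(priv->managers, priv->managersLen, 1);

since allocation failure already aborts inside the helpers.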

Jiri Denemark (7):
  util: Drop G_GNUC_WARN_UNUSED_RESULT from reallocation APIs
  Do not check return value of VIR_RESIZE_N
  util: Make virResizeN return void
  Do not check return value of VIR_EXPAND_N
  util: Make virExpandN return void
  Do not check return value of VIR_REALLOC_N
  util: Make virReallocN return void

 src/access/viraccessdriverstack.c |  3 +-
 src/bhyve/bhyve_capabilities.c|  5 +--
 src/bhyve/bhyve_parse_command.c   | 19 +++-
 src/conf/backup_conf.c|  3 +-
 src/conf/capabilities.c   | 43 +-
 src/conf/cpu_conf.c   | 11 +
 src/conf/domain_addr.c|  6 +--
 src/conf/domain_capabilities.c|  5 +--
 src/conf/domain_conf.c| 14 +++---
 src/conf/nwfilter_conf.c  |  6 +--
 src/conf/nwfilter_params.c|  8 +---
 src/conf/storage_conf.c   |  3 +-
 src/conf/virinterfaceobj.c|  2 +-
 src/conf/virnetworkobj.c  |  4 +-
 src/conf/virnodedeviceobj.c   |  2 +-
 src/conf/virsecretobj.c   |  2 +-
 src/conf/virstorageobj.c  |  2 +-
 src/esx/esx_driver.c  |  7 +--
 src/esx/esx_stream.c  |  3 +-
 src/hyperv/hyperv_driver.c|  3 +-
 src/hyperv/hyperv_wmi.c   |  3 +-
 src/hypervisor/domain_driver.c|  3 +-
 src/hypervisor/virclosecallbacks.c|  2 +-
 src/interface/interface_backend_netcf.c   |  2 +-
 src/interface/interface_backend_udev.c|  2 +-
 src/libxl/libxl_capabilities.c|  3 +-
 src/libxl/libxl_conf.c| 23 --
 src/libxl/libxl_driver.c  | 15 +++
 src/libxl/xen_common.c|  6 +--
 src/locking/lock_driver_lockd.c   |  4 +-
 src/lxc/lxc_controller.c  | 18 ++--
 src/lxc/lxc_native.c  | 33 +-
 src/qemu/qemu_agent.c | 11 ++---
 src/qemu/qemu_capabilities.c  |  3 +-
 src/qemu/qemu_conf.c  |  6 +--
 src/qemu/qemu_domain.c| 13 +++---
 src/qemu/qemu_firmware.c  |  5 +--
 src/qemu/qemu_hotplug.c   | 36 +--
 src/qemu/qemu_monitor.c   |  4 +-
 src/qemu/qemu_monitor_json.c  | 12 +
 src/qemu/qemu_process.c   | 25 +--
 src/rpc/virnetclient.c| 19 ++--
 src/rpc/virnetdaemon.c|  3 +-
 src/rpc/virnetlibsshsession.c |  8 +---
 src/rpc/virnetmessage.c   | 12 ++---
 src/rpc/virnetserver.c| 19 ++--
 src/rpc/virnetsocket.c|  3 +-
 src/rpc/virnetsshsession.c|  8 +---
 src/storage/storage_backend_disk.c|  4 +-
 src/storage/storage_backend_logical.c |  3 +-
 src/storage/storage_backend_rbd.c |  6 +--
 .../storage_file_backend_gluster.c|  3 +-
 .../storage_source_backingstore.c |  3 +-
 src/util/viralloc.c   | 44 +--
 src/util/viralloc.h   | 18 
 src/util/virarptable.c|  3 +-
 src/util/virbitmap.c  |  5 +--
 src/util/vircommand.c | 23 +-
 src/util/virdnsmasq.c | 13 ++
 src/util/virfile.c| 22 +++---
 src/util/virfirewall.c|  8 +---
 src/util/virjson.c| 13 ++
 src/util/virlockspace.c   |  7 +--
 src/util/virnetlink.c |  7 ++-
 src/util/virnuma.c| 14 +++---
 src/util/virprocess.c | 17 +--
 src/util/virresctrl.c | 42 +++---
 src/util/virstring.c  |  9 ++--
 src/util/virsysinfo.c | 37 
 src/util/virsystemd.c |  3 +-
 src/util/virthreadpool.c  |  3 +-
 src/util/virtypedparam-public.c   | 24 --
 src/util/virtypedparam.c  |  9 ++--
 src/util/viruri.c |  8 +---
 src/util/virutil.c| 14 ++
 src/vbox/vbox_common.c|  4 +-
 src/vbox/vbox_snapshot_conf.c | 29 +++-
 tests/domaincapstest.c|  8 +---
 tests/qemublocktest.c |  3 +-
 tests

[libvirt PATCH 5/7] util: Make virExpandN return void

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/util/viralloc.c | 14 ++
 src/util/viralloc.h |  4 ++--
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/src/util/viralloc.c b/src/util/viralloc.c
index cd770eb601..81f5ba9a09 100644
--- a/src/util/viralloc.c
+++ b/src/util/viralloc.c
@@ -66,12 +66,12 @@ int virReallocN(void *ptrptr,
  * allocated memory. On failure, 'ptrptr' and 'countptr' are not
  * changed. Any newly allocated memory in 'ptrptr' is zero-filled.
  *
- * Returns zero on success, aborts on OOM
+ * Aborts on OOM
  */
-int virExpandN(void *ptrptr,
-   size_t size,
-   size_t *countptr,
-   size_t add)
+void virExpandN(void *ptrptr,
+size_t size,
+size_t *countptr,
+size_t add)
 {
 if (*countptr + add < *countptr)
 abort();
@@ -80,7 +80,6 @@ int virExpandN(void *ptrptr,
 abort();
 memset(*(char **)ptrptr + (size * *countptr), 0, size * add);
 *countptr += add;
-return 0;
 }
 
 /**
@@ -192,8 +191,7 @@ virInsertElementsN(void *ptrptr, size_t size, size_t at,
 if (inPlace) {
 *countptr += add;
 } else {
-if (virExpandN(ptrptr, size, countptr, add) < 0)
-abort();
+virExpandN(ptrptr, size, countptr, add);
 }
 
 /* memory was successfully re-allocated. Move up all elements from
diff --git a/src/util/viralloc.h b/src/util/viralloc.h
index 878c9485cf..6051c91913 100644
--- a/src/util/viralloc.h
+++ b/src/util/viralloc.h
@@ -36,7 +36,7 @@
 /* Don't call these directly - use the macros below */
 int virReallocN(void *ptrptr, size_t size, size_t count)
 ATTRIBUTE_NONNULL(1);
-int virExpandN(void *ptrptr, size_t size, size_t *count, size_t add)
+void virExpandN(void *ptrptr, size_t size, size_t *count, size_t add)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
 void virResizeN(void *ptrptr, size_t size, size_t *alloc, size_t count, size_t 
desired)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
@@ -78,7 +78,7 @@ int virDeleteElementsN(void *ptrptr, size_t size, size_t at, 
size_t *countptr,
  *
  * This macro is safe to use on arguments with side effects.
  *
- * Returns 0 on success, aborts on OOM
+ * Aborts on OOM
  */
 #define VIR_EXPAND_N(ptr, count, add) virExpandN(&(ptr), sizeof(*(ptr)), 
&(count), add)
 
-- 
2.31.0



[libvirt PATCH 7/7] util: Make virReallocN return void

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/util/viralloc.c | 13 +
 src/util/viralloc.h |  4 ++--
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/src/util/viralloc.c b/src/util/viralloc.c
index 81f5ba9a09..cd7ae9e7d1 100644
--- a/src/util/viralloc.c
+++ b/src/util/viralloc.c
@@ -45,12 +45,11 @@ VIR_LOG_INIT("util.alloc");
  *
  * Returns zero on success, aborts on OOM
  */
-int virReallocN(void *ptrptr,
-size_t size,
-size_t count)
+void virReallocN(void *ptrptr,
+ size_t size,
+ size_t count)
 {
 *(void **)ptrptr = g_realloc_n(*(void**)ptrptr, size, count);
-return 0;
 }
 
 /**
@@ -76,8 +75,7 @@ void virExpandN(void *ptrptr,
 if (*countptr + add < *countptr)
 abort();
 
-if (virReallocN(ptrptr, size, *countptr + add) < 0)
-abort();
+virReallocN(ptrptr, size, *countptr + add);
 memset(*(char **)ptrptr + (size * *countptr), 0, size * add);
 *countptr += add;
 }
@@ -136,8 +134,7 @@ void virResizeN(void *ptrptr,
 void virShrinkN(void *ptrptr, size_t size, size_t *countptr, size_t toremove)
 {
 if (toremove < *countptr) {
-if (virReallocN(ptrptr, size, *countptr -= toremove) < 0)
-abort();
+virReallocN(ptrptr, size, *countptr -= toremove);
 } else {
 g_free(*((void **)ptrptr));
 *((void **)ptrptr) = NULL;
diff --git a/src/util/viralloc.h b/src/util/viralloc.h
index 6051c91913..553d2951cf 100644
--- a/src/util/viralloc.h
+++ b/src/util/viralloc.h
@@ -34,7 +34,7 @@
  */
 
 /* Don't call these directly - use the macros below */
-int virReallocN(void *ptrptr, size_t size, size_t count)
+void virReallocN(void *ptrptr, size_t size, size_t count)
 ATTRIBUTE_NONNULL(1);
 void virExpandN(void *ptrptr, size_t size, size_t *count, size_t add)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3);
@@ -61,7 +61,7 @@ int virDeleteElementsN(void *ptrptr, size_t size, size_t at, 
size_t *countptr,
  *
  * This macro is safe to use on arguments with side effects.
  *
- * Returns 0 on success, aborts on OOM
+ * Aborts on OOM
  */
 #define VIR_REALLOC_N(ptr, count) virReallocN(&(ptr), sizeof(*(ptr)), (count))
 
-- 
2.31.0



[libvirt PATCH] qemu: Drop redundant checks for qemuCaps before virQEMUCapsGet

2021-03-19 Thread Jiri Denemark
virQEMUCapsGet checks for qemuCaps itself, no need to do it explicitly.

Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_domain.c  | 5 +
 src/qemu/qemu_process.c | 3 +--
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 5c98f8ff1a..fff67481a7 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -5289,7 +5289,6 @@ 
qemuDomainDeviceDiskDefPostParseRestoreSecAlias(virDomainDiskDefPtr disk,
 g_autofree char *encalias = NULL;
 
 if (!(parseFlags & VIR_DOMAIN_DEF_PARSE_STATUS) ||
-!qemuCaps ||
 virStorageSourceIsEmpty(disk->src) ||
 !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_SECRET))
 return 0;
@@ -5475,7 +5474,6 @@ 
qemuDomainDeviceHostdevDefPostParseRestoreSecAlias(virDomainHostdevDefPtr hostde
 g_autofree char *authalias = NULL;
 
 if (!(parseFlags & VIR_DOMAIN_DEF_PARSE_STATUS) ||
-!qemuCaps ||
 !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_SECRET))
 return 0;
 
@@ -5520,8 +5518,7 @@ 
qemuDomainDeviceHostdevDefPostParseRestoreBackendAlias(virDomainHostdevDefPtr ho
 if (!(parseFlags & VIR_DOMAIN_DEF_PARSE_STATUS))
 return 0;
 
-if (!qemuCaps ||
-hostdev->mode != VIR_DOMAIN_HOSTDEV_MODE_SUBSYS ||
+if (hostdev->mode != VIR_DOMAIN_HOSTDEV_MODE_SUBSYS ||
 hostdev->source.subsys.type != VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI ||
 !virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV_HOSTDEV_SCSI))
 return 0;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 0b79dde2c3..b86afe4daa 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8349,8 +8349,7 @@ qemuProcessReconnect(void *opaque)
 if (qemuHostdevUpdateActiveDomainDevices(driver, obj->def) < 0)
 goto error;
 
-if (priv->qemuCaps &&
-virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_CHARDEV_FD_PASS))
+if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_CHARDEV_FD_PASS))
 retry = false;
 
 if (qemuDomainObjStartWorker(obj) < 0)
-- 
2.31.0



[libvirt PATCH 1/2] qemu: Use g_autofree in qemuMigrationJobCheckStatus

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_migration.c | 15 +--
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 79dcb4a15d..ba2ee4f081 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1703,9 +1703,8 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver,
 {
 qemuDomainObjPrivatePtr priv = vm->privateData;
 qemuDomainJobInfoPtr jobInfo = priv->job.current;
-char *error = NULL;
+g_autofree char *error = NULL;
 bool events = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_EVENT);
-int ret = -1;
 
 if (!events ||
 jobInfo->stats.mig.status == QEMU_MONITOR_MIGRATION_STATUS_ERROR) {
@@ -1719,18 +1718,18 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver,
 case QEMU_DOMAIN_JOB_STATUS_NONE:
 virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"),
qemuMigrationJobName(vm), _("is not active"));
-goto cleanup;
+return -1;
 
 case QEMU_DOMAIN_JOB_STATUS_FAILED:
 virReportError(VIR_ERR_OPERATION_FAILED, _("%s: %s"),
qemuMigrationJobName(vm),
error ? error : _("unexpectedly failed"));
-goto cleanup;
+return -1;
 
 case QEMU_DOMAIN_JOB_STATUS_CANCELED:
 virReportError(VIR_ERR_OPERATION_ABORTED, _("%s: %s"),
qemuMigrationJobName(vm), _("canceled by client"));
-goto cleanup;
+return -1;
 
 case QEMU_DOMAIN_JOB_STATUS_COMPLETED:
 case QEMU_DOMAIN_JOB_STATUS_ACTIVE:
@@ -1741,11 +1740,7 @@ qemuMigrationJobCheckStatus(virQEMUDriverPtr driver,
 break;
 }
 
-ret = 0;
-
- cleanup:
-VIR_FREE(error);
-return ret;
+return 0;
 }
 
 
-- 
2.31.0



[libvirt PATCH 2/2] qemu: Use g_autoptr in qemuMonitorJSONSetCapabilities

2021-03-19 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_monitor_json.c | 19 ---
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index b669630bc8..5e7f425495 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -1644,23 +1644,20 @@ qemuMonitorJSONHumanCommand(qemuMonitorPtr mon,
 int
 qemuMonitorJSONSetCapabilities(qemuMonitorPtr mon)
 {
-int ret = -1;
-virJSONValuePtr cmd = qemuMonitorJSONMakeCommand("qmp_capabilities", NULL);
-virJSONValuePtr reply = NULL;
-if (!cmd)
+g_autoptr(virJSONValue) cmd = NULL;
+g_autoptr(virJSONValue) reply = NULL;
+
+if (!(cmd = qemuMonitorJSONMakeCommand("qmp_capabilities",
+   NULL)))
 return -1;
 
 if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
-goto cleanup;
+return -1;
 
 if (qemuMonitorJSONCheckError(cmd, reply) < 0)
-goto cleanup;
+return -1;
 
-ret = 0;
- cleanup:
-virJSONValueFree(cmd);
-virJSONValueFree(reply);
-return ret;
+return 0;
 }
 
 
-- 
2.31.0



[libvirt PATCH 0/2] qemu: Use g_auto* in a few more places

2021-03-19 Thread Jiri Denemark
I'll be touching these functions soon, let's modernize them first.

Jiri Denemark (2):
  qemu: Use g_autofree in qemuMigrationJobCheckStatus
  qemu: Use g_autoptr in qemuMonitorJSONSetCapabilities

 src/qemu/qemu_migration.c| 15 +--
 src/qemu/qemu_monitor_json.c | 19 ---
 2 files changed, 13 insertions(+), 21 deletions(-)

-- 
2.31.0



[libvirt PATCH] qemu: Update asyncOwnerAPI when entering async job phase

2021-03-19 Thread Jiri Denemark
In case an async job spans multiple APIs (e.g., incoming migration) the
API that started the job is recorded as the asyncOwnerAPI even though it
is no longer running and the owner thread is updated properly to the one
currently handling the job. Let's also update asyncOwnerAPI to make it
more obvious which is the current (or the most recent) API involved in
the job.

Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_domainjob.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index b58d6837ad..50cfc45f5b 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -711,7 +711,9 @@ qemuDomainObjSetJobPhase(virQEMUDriverPtr driver,
   qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
   qemuDomainAsyncJobPhaseToString(priv->job.asyncJob, phase));
 
-if (priv->job.asyncOwner && me != priv->job.asyncOwner) {
+if (priv->job.asyncOwner == 0) {
+priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
+} else if (me != priv->job.asyncOwner) {
 VIR_WARN("'%s' async job is owned by thread %llu",
  qemuDomainAsyncJobTypeToString(priv->job.asyncJob),
  priv->job.asyncOwner);
-- 
2.31.0



Plans for the next release

2021-03-19 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Apr 01 I suggest entering the freeze on Friday Mar 26 and
tagging RC2 on Tuesday Mar 30.

I hope this works for everyone.

Jirka



Re: [PATCH 6/6] lib: Drop internal virXXXPtr typedefs

2021-03-12 Thread Jiri Denemark
On Thu, Mar 11, 2021 at 18:47:37 +, Daniel P. Berrangé wrote:
> On Thu, Mar 11, 2021 at 06:54:20PM +0100, Michal Privoznik wrote:
> > Historically, we declared pointer type to our types:
> > 
> >   typedef struct _virXXX virXXX;
> >   typedef virXXX *virXXXPtr;
> > 
> > But usefulness of such declaration is questionable, at best.
> > Unfortunately, we can't drop every such declaration - we have to
> > carry some over, because they are part of public API (e.g.
> > virDomainPtr). But for internal types - we can do drop them and
> > use what every other C project uses 'virXXX *'.
> > 
> > This change was generated by a very ugly shell script that
> > generated sed script which was then called over each file in the
> > repository. For the shell script refer to the cover letter:
> > 
> > $URL
> > 
> > Signed-off-by: Michal Privoznik 
> > ---
> >  docs/advanced-tests.rst   |2 +-
> >  docs/api_extension.html.in|2 +-
> >  docs/coding-style.rst |2 +-
> 
> snip
> 
> >  tools/virt-login-shell-helper.c   |4 +-
> >  tools/vsh-table.c |   36 +-
> >  tools/vsh-table.h |   12 +-
> >  tools/vsh.c   |4 +-
> >  732 files changed, 29237 insertions(+), 30131 deletions(-)
> 
> Converting every single file at the same time in one commit is
> guaranting backporting conflict hell.
> 
> eg if I'm backporting a patch from src/qemu, it is much
> saner if I can cherry-pick the "Ptr" conversion from src/qemu
> and only deal with conflicts in that, not the entire soure
> tree.

I don't think it's a good idea to try to backport anything from this
patch. There's no need to do so, upstream will strictly use types
without Ptr and all of them already exist upstream. Backporting any
future changes to code which still has *Ptr should not cause any
issues, except for patches moving big parts of code (but such a change
would likely conflict for other reasons too) and trivial context
conflicts when a patch touches lines close to virSomethingPtr.

Jirka



Re: RFC: do we want/need the "Ptr" typedefs for internal code ?

2021-03-09 Thread Jiri Denemark
On Tue, Mar 09, 2021 at 17:44:16 +, Daniel P. Berrangé wrote:
...
> We can't do anything about the use "Ptr" in the include/ files because
> that is public ABI. We can potentially eliminate "Ptr" types everywhere
> else in the codebase, even the src/libvirt*.c files corresponding to
> the public includes.
> 
> Does anyone have suggestions for how these "Ptr" typedefs are
> benefiting libvirt ? Would anyone miss them ?

Oh yes, please go ahead and remove them. I learnt to use them as part of
libvirt coding style when I started working on libvirt, but never really
understood the reason behind them.

Jirka



Re: [PATCH] domaincapstest: Return EXIT_SUCCESS / EXIT_FAILURE instead of -1

2021-03-09 Thread Jiri Denemark
On Tue, Mar 09, 2021 at 12:54:55 +0100, Peter Krempa wrote:
> The value is used as return value for the process itself.
> 
> Signed-off-by: Peter Krempa 
> ---
>  tests/domaincapstest.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tests/domaincapstest.c b/tests/domaincapstest.c
> index 7a082705c6..65d9f4c635 100644
> --- a/tests/domaincapstest.c
> +++ b/tests/domaincapstest.c
> @@ -475,7 +475,7 @@ mymain(void)
>  DO_TEST_BHYVE("fbuf", "/usr/sbin/bhyve", _caps, 
> VIR_DOMAIN_VIRT_BHYVE);
>  #endif /* WITH_BHYVE */
> 
> -return ret;
> +return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
>  }
> 
>  #if WITH_QEMU

Reviewed-by: Jiri Denemark 



Re: [PATCH] qemuMigrationSrcRun: Don't jump to 'exit_monitor' from outside of the monitor

2021-03-09 Thread Jiri Denemark
On Tue, Mar 09, 2021 at 12:51:04 +0100, Peter Krempa wrote:
> Failure of 'qemuMigrationSetDBusVMState' would jump to 'exit_monitor'
> but the function isn't called inside of the monitor context.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index e44931dcfa..79dcb4a15d 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -4156,7 +4156,7 @@ qemuMigrationSrcRun(virQEMUDriverPtr driver,
>  }
> 
>  if (qemuMigrationSetDBusVMState(driver, vm) < 0)
> -goto exit_monitor;
> +goto error;
> 
>  /* Before EnterMonitor, since already qemuProcessStopCPUs does that */
>  if (!(flags & VIR_MIGRATE_LIVE) &&

Reviewed-by: Jiri Denemark 



Re: [libvirt PATCH v2 1/1] qemuProcessUpdateGuestCPU: Check host cpu for forbidden features

2021-03-05 Thread Jiri Denemark
On Thu, Feb 25, 2021 at 14:23:06 +0100, Tim Wiederhake wrote:
> See https://bugzilla.redhat.com/show_bug.cgi?id=1840770
> 
> Signed-off-by: Tim Wiederhake 
> ---
>  src/qemu/qemu_process.c | 27 +++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index bfa742577f..cecf606312 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -6149,6 +6149,33 @@ qemuProcessUpdateGuestCPU(virDomainDefPtr def,
>  if (virCPUConvertLegacy(hostarch, def->cpu) < 0)
>  return -1;
>  
> +if (def->cpu->check != VIR_CPU_CHECK_NONE) {
> +virCPUDefPtr host;
> +size_t i;
> +
> +host = virQEMUCapsGetHostModel(qemuCaps, def->virtType,
> +   VIR_QEMU_CAPS_HOST_CPU_FULL);
> +
> +for (i = 0; i < def->cpu->nfeatures; ++i) {
> +virCPUFeatureDefPtr feature;
> +
> +if (def->cpu->features[i].policy != VIR_CPU_FEATURE_FORBID)
> +continue;
> +
> +feature = virCPUDefFindFeature(host, def->cpu->features[i].name);

This would crash in case virQEMUCapsGetHostModel returned NULL, which
may happen quite easily, especially on non-x86 architectures.

> +if (!feature)
> +continue;
> +
> +if (feature->policy == VIR_CPU_FEATURE_DISABLE)
> +continue;
> +
> +virReportError(VIR_ERR_CPU_INCOMPATIBLE,
> +   _("Host CPU provides forbidden feature '%s'"),
> +   def->cpu->features[i].name);
> +return -1;
> +}
> +}
> +

This new code is a good candidate for separation into a new function. I
believe we don't need architecture specific implementation for this
function, but something like virCPUCheckForbiddenFeatures in
src/cpu/cpu.c seems like a logical choice.
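
Just to illustrate what I mean, a rough sketch only (the name, the exact
signature, and the error reporting details are of course up for discussion):

int
virCPUCheckForbiddenFeatures(virCPUDefPtr guest, virCPUDefPtr host)
{
    size_t i;

    /* No host CPU model available (quite possible, especially on
     * non-x86 architectures): nothing we can check here. */
    if (!host)
        return 0;

    for (i = 0; i < guest->nfeatures; i++) {
        virCPUFeatureDefPtr feature;

        if (guest->features[i].policy != VIR_CPU_FEATURE_FORBID)
            continue;

        feature = virCPUDefFindFeature(host, guest->features[i].name);
        if (!feature || feature->policy == VIR_CPU_FEATURE_DISABLE)
            continue;

        virReportError(VIR_ERR_CPU_INCOMPATIBLE,
                       _("Host CPU provides forbidden feature '%s'"),
                       guest->features[i].name);
        return -1;
    }

    return 0;
}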

>  /* nothing to update for host-passthrough / maximum */
>  if (def->cpu->mode != VIR_CPU_MODE_HOST_PASSTHROUGH &&
>  def->cpu->mode != VIR_CPU_MODE_MAXIMUM) {

Jirka



[libvirt PATCH] cpu_map: Fix spelling of svme-addr-chk feature

2021-03-03 Thread Jiri Denemark
Commit a208176ca1d9eedf8aa6bf12fde6a7a9579ab549 introduced this feature
with an incorrect "svme-addr-check" spelling.

Signed-off-by: Jiri Denemark 
---
 src/cpu_map/sync_qemu_i386.py  | 2 +-
 src/cpu_map/x86_EPYC-Milan.xml | 2 +-
 src/cpu_map/x86_features.xml   | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/cpu_map/sync_qemu_i386.py b/src/cpu_map/sync_qemu_i386.py
index 3febecdfd1..1a98fa70d7 100755
--- a/src/cpu_map/sync_qemu_i386.py
+++ b/src/cpu_map/sync_qemu_i386.py
@@ -142,7 +142,7 @@ def translate_feature(name):
 "CPUID_SS": "ss",
 "CPUID_SVM_NPT": "npt",
 "CPUID_SVM_NRIPSAVE": "nrip-save",
-"CPUID_SVM_SVME_ADDR_CHK": "svme-addr-check",
+"CPUID_SVM_SVME_ADDR_CHK": "svme-addr-chk",
 "CPUID_TSC": "tsc",
 "CPUID_VME": "vme",
 "CPUID_XSAVE_XGETBV1": "xgetbv1",
diff --git a/src/cpu_map/x86_EPYC-Milan.xml b/src/cpu_map/x86_EPYC-Milan.xml
index 53f0cd6aac..3055e175fa 100644
--- a/src/cpu_map/x86_EPYC-Milan.xml
+++ b/src/cpu_map/x86_EPYC-Milan.xml
@@ -76,7 +76,7 @@
 
 
 
-
+
 
 
 
diff --git a/src/cpu_map/x86_features.xml b/src/cpu_map/x86_features.xml
index c43520d08f..e98b84f3ef 100644
--- a/src/cpu_map/x86_features.xml
+++ b/src/cpu_map/x86_features.xml
@@ -554,7 +554,7 @@
   
 
   
-  
+  
 
   
 
-- 
2.30.0



[libvirt PATCH] cpu_map: Install x86_EPYC-Milan.xml

2021-03-02 Thread Jiri Denemark
Signed-off-by: Jiri Denemark 
---
 src/cpu_map/meson.build | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/cpu_map/meson.build b/src/cpu_map/meson.build
index 48f69f623c..013fc62a02 100644
--- a/src/cpu_map/meson.build
+++ b/src/cpu_map/meson.build
@@ -34,6 +34,7 @@ cpumap_data = [
   'x86_Dhyana.xml',
   'x86_EPYC-IBPB.xml',
   'x86_EPYC.xml',
+  'x86_EPYC-Milan.xml',
   'x86_EPYC-Rome.xml',
   'x86_features.xml',
   'x86_Haswell-IBRS.xml',
-- 
2.30.0



[libvirt PATCH] cpu_map: Add EPYC-Milan x86 CPU model

2021-03-01 Thread Jiri Denemark
Introduced in QEMU 6.0.0 by 623972ceae091b31331ae4a1dc94fe5cbb891937

Signed-off-by: Jiri Denemark 
---
 src/cpu_map/index.xml  |  1 +
 src/cpu_map/x86_EPYC-Milan.xml | 92 ++
 2 files changed, 93 insertions(+)
 create mode 100644 src/cpu_map/x86_EPYC-Milan.xml

diff --git a/src/cpu_map/index.xml b/src/cpu_map/index.xml
index 2e0685df68..ffe1fa91e5 100644
--- a/src/cpu_map/index.xml
+++ b/src/cpu_map/index.xml
@@ -68,6 +68,7 @@
 
 
 
+
 
 
 
diff --git a/src/cpu_map/x86_EPYC-Milan.xml b/src/cpu_map/x86_EPYC-Milan.xml
new file mode 100644
index 00..53f0cd6aac
--- /dev/null
+++ b/src/cpu_map/x86_EPYC-Milan.xml
@@ -0,0 +1,92 @@
+
+  
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+  
+
-- 
2.30.0



Re: [libvirt PATCH 0/5] cpu_map: Sync with Qemu

2021-03-01 Thread Jiri Denemark
On Mon, Feb 22, 2021 at 13:20:05 +0100, Tim Wiederhake wrote:
> Qemu updated its list of known cpu features. Follow suit.
> 
> Tim Wiederhake (5):
>   cpu_map/sync_qemu_i386.py: Add mapping for amd-ssbd
>   cpu_map/sync_qemu_i386.py: Add mapping for ibrs
>   cpu_map/sync_qemu_i386.py: Add mapping for svme-addr-check
>   cpumap: Add support for ibrs CPU feature
>   cpumap: Add support for svme-addr-check CPU feature
> 
>  src/cpu_map/sync_qemu_i386.py  | 3 +++
>  src/cpu_map/x86_features.xml   | 6 ++
>  tests/cputestdata/x86_64-cpuid-EPYC-7502-32-Core-guest.xml | 1 +
>  tests/cputestdata/x86_64-cpuid-EPYC-7502-32-Core-host.xml  | 1 +
>  4 files changed, 11 insertions(+)

Reviewed-by: Jiri Denemark 

and pushed, thanks.



Release of libvirt-7.1.0

2021-03-01 Thread Jiri Denemark
The 7.1.0 release of both libvirt and libvirt-python is tagged and
signed tarballs and source RPMs are available at

https://libvirt.org/sources/
https://libvirt.org/sources/python/

Thanks everybody who helped with this release by sending patches,
reviewing, testing, or providing any other feedback. Your work is
greatly appreciated.

* Portability

  * Implement Apple Silicon support

libvirt now runs on the ARM-based Apple Silicon Macs.

* New features

  * Introduce virtio-pmem ``<memory>`` model

The virtio-pmem is a virtio variant of NVDIMM and just like NVDIMM
virtio-pmem also allows accessing host pages bypassing guest page cache.

  * Introduce ``<boot order>`` for ``<filesystem>``

Booting is possible from virtiofs filesystems. Introduce an option
to control the boot order, like we do for other bootable devices.

  * hyperv: implement new APIs

The ``virDomainUndefine()``, ``virDomainUndefineFlags()``,
``virDomainDefineXML()``, ``virDomainAttachDevice()``, and
``virDomainAttachDeviceFlags()``, ``virConnectListAllNetworks()``,
``virConnectNumOfNetworks()``, ``virNetworkLookupByName()``,
``virNetworkLookupByUUID()``, ``virConnectNumOfDefinedNetworks()``,
``virConnectListDefinedNetworks()``, ``virNetworkGetAutostart()``,
``virNetworkIsActive()``, ``virNetworkIsPersistent()``,
``virNetworkGetXMLDesc()``, and ``virDomainScreenshot()``, APIs have been
implemented in the Hyper-V driver.

  * Support ``<teaming>`` element in plain ``<hostdev>`` devices

This is useful when libvirt doesn't have the privileges necessary
to set the hostdev device's MAC address (which is a necessary
part of the alternate ``<interface>``).

  * Introduce ``vhost-user-blk`` support

Introduces support for QEMU vhost-user-blk device that can be used
to access storage exported via the vhost-user protocol by daemons such
as the ``qemu-storage-daemon``.

* Bug fixes

  * qemu: Fix disk quiescing rollback when creating external snapshots

   If the qemu guest agent call to freeze filesystems failed when creating
   an external snapshot with ``VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE`` flag the
   filesystems would be unconditionally thawed. This could cause problems when
   the filesystems were frozen by an explicit call to ``virDomainFSFreeze``
   since the guest agent then rejects any further freeze attempts once are
   filesystems frozen, an explicit freeze followed by a quiesced snapshot
   would fail and thaw filesystems.

   Users are also encouraged to use ``virDomainFSFreeze/Thaw`` manually instead
   of relying on ``VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE`` if they need finer
   grained control.

  * cgroups: Fix how we setup and configure cgroups on hosts with systemd

When libvirt is running on host with systemd we register every VM with
machined which creates the VM root cgroup for us as well. Before this fix
we were directly modifying files in the VM root cgroup which was incorrect
because all the files are managed by systemd. The implication was that any
change done by libvirt to cgroup attributes supported by systemd could be
removed which happens for example by running ``systemctl daemon-reload``.

To fix the issue libvirt now uses DBus calls for some of the cgroup
attributes that distribute the resources proportionally to the cgroup
siblings and for the rest we have a new sub-cgroup that libvirt can
managed directly.

For more details why this is necessary see
`systemd cgroup `_ documentation.

  * qemu: Fix swtpm device with aarch64

The TPM TIS device name for x86 is ``tpm-tis``, whereas for aarch64 it is
``tpm-tis-device``. Fix the use of TPM TIS device with aarch64 by using
the proper device name when building the QEMU command line.

  * libxl: Fix domain shutdown

Commit fa30ee04a2 introduced the possibility of a race between the
shutdown and death threads used to process domain shutdown and death
events from libxl. On normal domain shutdown the shutdown thread handles
all aspects of shutting down and cleaning up the domain. The death
thread is only used to handle out-of-band domain destruction and is
inhibited when domain shutdown is under libvirt's control. The race is
avoided by also inhibiting the death thread when libvirt starts the
shutdown thread.

Enjoy.

Jirka



libvirt-7.1.0 release candidate 2

2021-02-25 Thread Jiri Denemark
I have just tagged v7.1.0-rc2 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: [libvirt PATCH] qemuProcessUpdateGuestCPU: Check cpu if check=full is set

2021-02-25 Thread Jiri Denemark
On Thu, Feb 25, 2021 at 08:17:09 +0100, Tim Wiederhake wrote:
> libvirt performs cpu checking if "check" is set to "partial", but skips
> checking the cpu if "check" is set to "full".

This is intentional because QEMU knows better. I wish we had no CPU
comparison in libvirt at all, but we can't do that for backward
compatibility...

The real problem here is that unlike all other feature policies in our
CPU definition 'forbid' cannot be checked via QEMU.

> See https://bugzilla.redhat.com/show_bug.cgi?id=1840770
> 
> Signed-off-by: Tim Wiederhake 
> ---
>  src/qemu/qemu_process.c | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index bfa742577f..5b8c1397ef 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -6149,6 +6149,14 @@ qemuProcessUpdateGuestCPU(virDomainDefPtr def,
>  if (virCPUConvertLegacy(hostarch, def->cpu) < 0)
>  return -1;
>  
> +if (def->cpu->check == VIR_CPU_CHECK_FULL) {
> +virCPUDefPtr host = virQEMUCapsGetHostModel(qemuCaps, def->virtType,
> +
> VIR_QEMU_CAPS_HOST_CPU_FULL);
> +
> +if (virCPUCompare(hostarch, host, def->cpu, true) < 0)
> +return -1;
> +}
> +

I believe this should be replaced with a more targeted approach to only
check forbidden features. And I guess we can do so for check != none.

Jirka



[libvirt PATCH] qemu_domainjob: Make copy of owner API

2021-02-24 Thread Jiri Denemark
Using the job owner API name directly works fine as long as it is a
static string or the owner's thread is still running. However, this is
not always the case. For example, when the owner API name is filled in for a
job while we are reconnecting to existing domains after a daemon restart,
the dynamically allocated owner name disappears together with the
reconnecting thread. Any follow-up use of the pointer will read random
memory.

Signed-off-by: Jiri Denemark 
---
 src/qemu/qemu_domainjob.c | 12 ++--
 src/qemu/qemu_process.c   |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 3c2c6b9179..b58d6837ad 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -190,7 +190,7 @@ qemuDomainObjResetJob(qemuDomainJobObjPtr job)
 {
 job->active = QEMU_JOB_NONE;
 job->owner = 0;
-job->ownerAPI = NULL;
+g_clear_pointer(&job->ownerAPI, g_free);
 job->started = 0;
 }
 
@@ -200,7 +200,7 @@ qemuDomainObjResetAgentJob(qemuDomainJobObjPtr job)
 {
 job->agentActive = QEMU_AGENT_JOB_NONE;
 job->agentOwner = 0;
-job->agentOwnerAPI = NULL;
+g_clear_pointer(&job->agentOwnerAPI, g_free);
 job->agentStarted = 0;
 }
 
@@ -210,7 +210,7 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
 {
 job->asyncJob = QEMU_ASYNC_JOB_NONE;
 job->asyncOwner = 0;
-job->asyncOwnerAPI = NULL;
+g_clear_pointer(&job->asyncOwnerAPI, g_free);
 job->asyncStarted = 0;
 job->phase = 0;
 job->mask = QEMU_JOB_DEFAULT_MASK;
@@ -890,7 +890,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
   obj, obj->def->name);
 priv->job.active = job;
 priv->job.owner = virThreadSelfID();
-priv->job.ownerAPI = virThreadJobGet();
+priv->job.ownerAPI = g_strdup(virThreadJobGet());
 priv->job.started = now;
 } else {
 VIR_DEBUG("Started async job: %s (vm=%p name=%s)",
@@ -901,7 +901,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
 priv->job.current->status = QEMU_DOMAIN_JOB_STATUS_ACTIVE;
 priv->job.asyncJob = asyncJob;
 priv->job.asyncOwner = virThreadSelfID();
-priv->job.asyncOwnerAPI = virThreadJobGet();
+priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
 priv->job.asyncStarted = now;
 priv->job.current->started = now;
 }
@@ -917,7 +917,7 @@ qemuDomainObjBeginJobInternal(virQEMUDriverPtr driver,
   qemuDomainAsyncJobTypeToString(priv->job.asyncJob));
 priv->job.agentActive = agentJob;
 priv->job.agentOwner = virThreadSelfID();
-priv->job.agentOwnerAPI = virThreadJobGet();
+priv->job.agentOwnerAPI = g_strdup(virThreadJobGet());
 priv->job.agentStarted = now;
 }
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index bfa742577f..398f63282e 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3732,7 +3732,7 @@ qemuProcessRecoverJob(virQEMUDriverPtr driver,
 /* Restore the config of the async job which is not persisted */
 priv->jobs_queued++;
 priv->job.asyncJob = QEMU_ASYNC_JOB_BACKUP;
-priv->job.asyncOwnerAPI = virThreadJobGet();
+priv->job.asyncOwnerAPI = g_strdup(virThreadJobGet());
 priv->job.asyncStarted = now;
 
 qemuDomainObjSetAsyncJobMask(vm, (QEMU_JOB_DEFAULT_MASK |
-- 
2.30.0



Entering freeze for libvirt-7.1.0

2021-02-22 Thread Jiri Denemark
I have just tagged v7.1.0-rc1 in the repository and pushed signed
tarballs and source RPMs to https://libvirt.org/sources/

Please give the release candidate some testing and in case you find a
serious issue which should have a fix in the upcoming release, feel
free to reply to this thread to make sure the issue is more visible.

If you have not done so yet, please update NEWS.rst to document any
significant change you made since the last release.

Thanks,

Jirka



Re: [PATCH v2 14/15] qemu: migration: Migrate block dirty bitmaps corresponding to checkpoints

2021-02-19 Thread Jiri Denemark
On Fri, Feb 19, 2021 at 12:58:26 +0100, Peter Krempa wrote:
> Preserve block dirty bitmaps after migration with
> QEMU_MONITOR_MIGRATE_NON_SHARED_(DISK|INC).
> 
> This patch implements functions which offer the bitmaps to the
> destination, check for eligibility on destination and then configure
> source for the migration.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 333 +-
>  1 file changed, 331 insertions(+), 2 deletions(-)

Reviewed-by: Jiri Denemark 



Re: [PATCH v2 07/15] qemu: migration_params: Add infrastructure for 'dirty-bitmaps' migration feature

2021-02-19 Thread Jiri Denemark
On Fri, Feb 19, 2021 at 12:58:19 +0100, Peter Krempa wrote:
> Add the migration capability flag and the propagation of the
> corresponding mapping configuration. The mapping will be produced from
> the bitmaps on disk depending on both sides of the migration and the
> necessity to perform merges.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration_params.c | 29 +
>  src/qemu/qemu_migration_params.h |  5 +
>  2 files changed, 34 insertions(+)

Reviewed-by: Jiri Denemark 



Re: [PATCH v2 04/15] qemu: migration: Create qcow2 v3 images for VIR_MIGRATE_NON_SHARED_DISK

2021-02-19 Thread Jiri Denemark
On Fri, Feb 19, 2021 at 12:58:16 +0100, Peter Krempa wrote:
> Use the new format when pre-creating the image for the user. Users
> wishing to use the legacy format can always provide their own images or
> use hared storage.

s/hared/shared/

> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index e8e35c1c7c..94b9b34ca0 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -180,6 +180,7 @@ qemuMigrationDstPrecreateDisk(virConnectPtr *conn,
>  char *volStr = NULL;
>  g_auto(virBuffer) buf = VIR_BUFFER_INITIALIZER;
>  const char *format = NULL;
> +const char *compat = NULL;
>  unsigned int flags = 0;
> 
>  VIR_DEBUG("Precreate disk type=%s", 
> virStorageTypeToString(disk->src->type));
> @@ -212,8 +213,11 @@ qemuMigrationDstPrecreateDisk(virConnectPtr *conn,
>  if (!(pool = virStoragePoolLookupByTargetPath(*conn, basePath)))
>  goto cleanup;
>  format = virStorageFileFormatTypeToString(disk->src->format);
> -if (disk->src->format == VIR_STORAGE_FILE_QCOW2)
> +if (disk->src->format == VIR_STORAGE_FILE_QCOW2) {
>  flags |= VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA;
> +/* format qcow2v3 image */
> +compat = "1.1";
> +}
>  break;
> 
>  case VIR_STORAGE_TYPE_VOLUME:

And how about creating v3 only when the source offered bitmaps for
migration? Although automatic creation of disk images during migration
has always been magic and anyone caring about the exact image format on
the destination needs to precreate the images manually.

So I guess this should be good enough.

Reviewed-by: Jiri Denemark 



Re: [PATCH v2 01/15] qemucapabilitiesdata: Update test data for qemu-6.0 on x86_64

2021-02-19 Thread Jiri Denemark
On Fri, Feb 19, 2021 at 12:58:13 +0100, Peter Krempa wrote:
> Include the 'transform' member of 'block-bitmap-mapping'. This is based
> on qemu commit v5.2.0-2208-gc79f01c945
> 
> Signed-off-by: Peter Krempa 
> ---
>  .../caps_6.0.0.x86_64.replies | 741 ++
>  .../caps_6.0.0.x86_64.xml |  22 +-
>  2 files changed, 441 insertions(+), 322 deletions(-)

Reviewed-by: Jiri Denemark 



Re: [PATCH 18/19] qemu: migration: Migrate block dirty bitmaps corresponding to checkpoints

2021-02-18 Thread Jiri Denemark
On Thu, Feb 18, 2021 at 16:50:43 +0100, Peter Krempa wrote:
> On Thu, Feb 18, 2021 at 15:54:37 +0100, Jiri Denemark wrote:
> > On Thu, Feb 11, 2021 at 16:37:57 +0100, Peter Krempa wrote:
> > > Preserve block dirty bitmaps after migration with
> > > QEMU_MONITOR_MIGRATE_NON_SHARED_(DISK|INC).
> > > 
> > > This patch implements functions which offer the bitmaps to the
> > > destination, check for eligibility on destination and then configure
> > > source for the migration.
> > > 
> > > Signed-off-by: Peter Krempa 
> > > ---
> > >  src/qemu/qemu_migration.c | 333 +-
> > >  1 file changed, 331 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> > > index 36424f8493..16bfad0390 100644
> > > --- a/src/qemu/qemu_migration.c
> > > +++ b/src/qemu/qemu_migration.c
> > ...
> > > @@ -2528,6 +2619,92 @@ qemuMigrationDstPrepare(virDomainObjPtr vm,
> > >   migrateFrom, fd, NULL);
> > >  }
> > > 
> > > +
> > > +/**
> > > + * qemuMigrationDstPrepareAnyBlockDirtyBitmaps:
> > > + * @vm: domain object
> > > + * @mig: migration cookie
> > > + * @migParams: migration parameters
> > > + * @flags: migration flags
> > > + *
> > > + * Checks whether block dirty bitmaps offered by the migration source are
> > > + * to be migrated (e.g. they don't exist, the destination is compatible 
> > > etc)
> > > + * and sets up destination qemu for migrating the bitmaps as well as 
> > > updates the
> > > + * list of eligible bitmaps in the migration cookie to be sent back to 
> > > the
> > > + * source.
> > > + */
> > > +static int
> > > +qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObjPtr vm,
> > > +qemuMigrationCookiePtr mig,
> > > +qemuMigrationParamsPtr 
> > > migParams,
> > > +unsigned int flags)
> > > +{
> > > +qemuDomainObjPrivatePtr priv = vm->privateData;
> > > +g_autoptr(virJSONValue) mapping = NULL;
> > > +g_autoptr(GHashTable) blockNamedNodeData = NULL;
> > > +GSList *nextdisk;
> > > +
> > > +if (!mig->nbd ||
> > > +!mig->blockDirtyBitmaps ||
> > > +!(flags & (VIR_MIGRATE_NON_SHARED_DISK | 
> > > VIR_MIGRATE_NON_SHARED_INC)) ||
> > > +!virQEMUCapsGet(priv->qemuCaps, 
> > > QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
> > > +return 0;
> > 
> > Shouldn't we report an error in case the source sent bitmaps, but local
> > QEMU does not support QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING?
> 
> See below.
> 
> > 
> > > +
> > > +if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, 
> > > mig->blockDirtyBitmaps) < 0)
> > > +return -1;
> > > +
> > > +if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, 
> > > QEMU_ASYNC_JOB_MIGRATION_IN)))
> > > +return -1;
> > > +
> > > +for (nextdisk = mig->blockDirtyBitmaps; nextdisk; nextdisk = 
> > > nextdisk->next) {
> > > +qemuMigrationBlockDirtyBitmapsDiskPtr disk = nextdisk->data;
> > > +qemuBlockNamedNodeDataPtr nodedata;
> > > +GSList *nextbitmap;
> > > +
> > > +if (!(nodedata = virHashLookup(blockNamedNodeData, 
> > > disk->nodename))) {
> > > +virReportError(VIR_ERR_INTERNAL_ERROR,
> > > +   _("failed to find data for block node '%s'"),
> > > +   disk->nodename);
> > > +return -1;
> > > +}
> > > +
> > > +/* don't migrate bitmaps into non-qcow2v3+ images */
> > 
> > How about "Bitmaps can only be migrated to qcow2 v3+"?
> > 
> > > +if (disk->disk->src->format != VIR_STORAGE_FILE_QCOW2 ||
> > > +nodedata->qcow2v2) {
> > > +disk->skip = true;
> > 
> > Is skipping the disk the right thing to do? Should we report an error
> > and abort migration instead? Just checking, maybe we can't do so for
> > backward compatibility...
> > 
> > > +continue;
> > > +}
> > > +
> > > +for (nextbitmap = disk->bitmaps; nextbitmap; nextbitmap = 
> > > nextbitmap->next) {
> > > +qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap = 
> > > nextbitmap->data;
> > > +size_t k;
> > > +
> > > +/* don't migrate into existing bitmaps */
> > > +for (k = 0; k < nodedata->nbitmaps; k++) {
> > > +if (STREQ(bitmap->bitmapname, 
> > > nodedata->bitmaps[k]->name)) {
> > > +bitmap->skip = true;
> > 
> > And similar questions for bitmaps here.
> 
> That would require that we have the users explicitly enable this feature
> rather than doing it implicitly. The disk format can be changed during
> the migration.
> 
> If we want to do it explicitly only I'll need to add a migration flag
> and all the infra for it.

I see. I agree copying the bitmaps whenever possible is a good idea.

Reviewed-by: Jiri Denemark 



Re: [PATCH 19/19] qemu: capabilities: Enable QEMU_CAPS_INCREMENTAL_BACKUP

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:58 +0100, Peter Krempa wrote:
> For incremental backup we need QEMU_CAPS_BLOCKDEV,
> QEMU_CAPS_BLOCKDEV_REOPEN, QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_capabilities.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> index 38555dde98..7cc2532954 100644
> --- a/src/qemu/qemu_capabilities.c
> +++ b/src/qemu/qemu_capabilities.c
> @@ -5164,8 +5164,10 @@ virQEMUCapsInitQMPVersionCaps(virQEMUCapsPtr qemuCaps)
>  void
>  virQEMUCapsInitProcessCapsInterlock(virQEMUCapsPtr qemuCaps)
>  {
> -if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV))
> -virQEMUCapsClear(qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP);
> +if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV) &&
> +virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV_REOPEN) &&
> +virQEMUCapsGet(qemuCaps, 
> QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
> +virQEMUCapsSet(qemuCaps, QEMU_CAPS_INCREMENTAL_BACKUP);
> 
>  if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_USB_STORAGE) &&
>  !virQEMUCapsGet(qemuCaps, QEMU_CAPS_USB_STORAGE_WERROR)) {

Reviewed-by: Jiri Denemark 



Re: [PATCH 18/19] qemu: migration: Migrate block dirty bitmaps corresponding to checkpoints

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:57 +0100, Peter Krempa wrote:
> Preserve block dirty bitmaps after migration with
> QEMU_MONITOR_MIGRATE_NON_SHARED_(DISK|INC).
> 
> This patch implements functions which offer the bitmaps to the
> destination, check for eligibility on destination and then configure
> source for the migration.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 333 +-
>  1 file changed, 331 insertions(+), 2 deletions(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index 36424f8493..16bfad0390 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
...
> @@ -2528,6 +2619,92 @@ qemuMigrationDstPrepare(virDomainObjPtr vm,
>   migrateFrom, fd, NULL);
>  }
> 
> +
> +/**
> + * qemuMigrationDstPrepareAnyBlockDirtyBitmaps:
> + * @vm: domain object
> + * @mig: migration cookie
> + * @migParams: migration parameters
> + * @flags: migration flags
> + *
> + * Checks whether block dirty bitmaps offered by the migration source are
> + * to be migrated (e.g. they don't exist, the destination is compatible etc)
> + * and sets up destination qemu for migrating the bitmaps as well as updates 
> the
> + * list of eligible bitmaps in the migration cookie to be sent back to the
> + * source.
> + */
> +static int
> +qemuMigrationDstPrepareAnyBlockDirtyBitmaps(virDomainObjPtr vm,
> +qemuMigrationCookiePtr mig,
> +qemuMigrationParamsPtr migParams,
> +unsigned int flags)
> +{
> +qemuDomainObjPrivatePtr priv = vm->privateData;
> +g_autoptr(virJSONValue) mapping = NULL;
> +g_autoptr(GHashTable) blockNamedNodeData = NULL;
> +GSList *nextdisk;
> +
> +if (!mig->nbd ||
> +!mig->blockDirtyBitmaps ||
> +!(flags & (VIR_MIGRATE_NON_SHARED_DISK | 
> VIR_MIGRATE_NON_SHARED_INC)) ||
> +!virQEMUCapsGet(priv->qemuCaps, 
> QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING))
> +return 0;

Shouldn't we report an error in case the source sent bitmaps, but local
QEMU does not support QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING?

> +
> +if (qemuMigrationCookieBlockDirtyBitmapsMatchDisks(vm->def, 
> mig->blockDirtyBitmaps) < 0)
> +return -1;
> +
> +if (!(blockNamedNodeData = qemuBlockGetNamedNodeData(vm, 
> QEMU_ASYNC_JOB_MIGRATION_IN)))
> +return -1;
> +
> +for (nextdisk = mig->blockDirtyBitmaps; nextdisk; nextdisk = 
> nextdisk->next) {
> +qemuMigrationBlockDirtyBitmapsDiskPtr disk = nextdisk->data;
> +qemuBlockNamedNodeDataPtr nodedata;
> +GSList *nextbitmap;
> +
> +if (!(nodedata = virHashLookup(blockNamedNodeData, disk->nodename))) 
> {
> +virReportError(VIR_ERR_INTERNAL_ERROR,
> +   _("failed to find data for block node '%s'"),
> +   disk->nodename);
> +return -1;
> +}
> +
> +/* don't migrate bitmaps into non-qcow2v3+ images */

How about "Bitmaps can only be migrated to qcow2 v3+"?

> +if (disk->disk->src->format != VIR_STORAGE_FILE_QCOW2 ||
> +nodedata->qcow2v2) {
> +disk->skip = true;

Is skipping the disk the right thing to do? Should we report an error
and abort migration instead? Just checking, maybe we can't do so for
backward compatibility...

> +continue;
> +}
> +
> +for (nextbitmap = disk->bitmaps; nextbitmap; nextbitmap = 
> nextbitmap->next) {
> +qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap = 
> nextbitmap->data;
> +size_t k;
> +
> +/* don't migrate into existing bitmaps */
> +for (k = 0; k < nodedata->nbitmaps; k++) {
> +if (STREQ(bitmap->bitmapname, nodedata->bitmaps[k]->name)) {
> +bitmap->skip = true;

And similar questions for bitmaps here.

> +break;
> +}
> +}
> +
> +if (bitmap->skip)
> +continue;
> +}
> +}
> +
> +if (qemuMigrationCookieBlockDirtyBitmapsToParams(mig->blockDirtyBitmaps,
> + &mapping) < 0)
> +return -1;
> +
> +if (!mapping)
> +return 0;
> +
> +qemuMigrationParamsSetBlockDirtyBitmapMapping(migParams, &mapping);
> +mig->flags |= QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS;
> +return 0;
> +}
> +
> +
>  static int
>  qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
> virConnectPtr dconn,
...
> +static int
> +qemuMigrationSrcRunPrepareBlockDirtyBitmaps(virDomainObjPtr vm,
> +qemuMigrationCookiePtr mig,
> +qemuMigrationParamsPtr migParams,
> +unsigned int flags)
> +
> +{
> +

Re: [PATCH 17/19] qemu: migration: Clean up temporary bitmaps when cancelling a migration

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:56 +0100, Peter Krempa wrote:
> In case when the block migration job required temporary bitmaps for
> merging the appropriate checkpoints we need to clean them up when
> cancelling the job. On success we don't need to do that though as the
> bitmaps are just temporary thus are not written to disk.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 31 +++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index 37f0d43d24..36424f8493 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -834,6 +834,32 @@ qemuMigrationSrcNBDCopyCancel(virQEMUDriverPtr driver,
>  }
> 
> 
> +static int
> +qemuMigrationSrcCancelRemoveTempBitmaps(virDomainObjPtr vm,
> +qemuDomainAsyncJob asyncJob)
> +{
> +qemuDomainObjPrivatePtr priv = vm->privateData;
> +virQEMUDriverPtr driver = priv->driver;
> +qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
> +GSList *next;
> +
> +if (!jobPriv->migTempBitmaps)
> +return 0;

This check is pretty much redundant as the loop will do exactly the
same.

> +
> +for (next = jobPriv->migTempBitmaps; next; next = next->next) {
> +qemuDomainJobPrivateMigrateTempBitmapPtr t = next->data;
> +
> +if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
> +return -1;
> +qemuMonitorBitmapRemove(priv->mon, t->nodename, t->bitmapname);
> +if (qemuDomainObjExitMonitor(driver, vm) < 0)
> +return -1;
> +}
> +
> +return 0;
> +}
> +
> +
>  static virStorageSourcePtr
>  qemuMigrationSrcNBDStorageCopyBlockdevPrepareSource(virDomainDiskDefPtr disk,
>  const char *host,
...

Reviewed-by: Jiri Denemark 



Re: [PATCH 14/19] qemu: domain: Store list of temporary bitmaps for migration in status XML

2021-02-18 Thread Jiri Denemark
;
> +
> +if (!bitmapname || !nodename) {
> +virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> +   _("malformed "));
> +return -1;
> +}

Right, something similar is needed in patch 12/19. Although you could
extend the error message here and mention that the error happened in
a status XML.

And you could even get away without the temp variables by marking bmp
for autoclean and stealing its content when adding to the list.
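
Something like this (just a sketch; it assumes a g_autoptr() cleanup function
is declared for qemuDomainJobPrivateMigrateTempBitmap, and the node variable
and error wording are illustrative):

  g_autoptr(qemuDomainJobPrivateMigrateTempBitmap) bmp =
      g_new0(qemuDomainJobPrivateMigrateTempBitmap, 1);

  bmp->nodename = virXMLPropString(nodes[i], "nodename");
  bmp->bitmapname = virXMLPropString(nodes[i], "name");

  if (!bmp->nodename || !bmp->bitmapname) {
      virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                     _("malformed temporary bitmap entry in status XML"));
      return -1;
  }

  bitmaps = g_slist_prepend(bitmaps, g_steal_pointer(&bmp));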

> +
> +bmp = g_new0(qemuDomainJobPrivateMigrateTempBitmap, 1);
> +bmp->nodename = g_steal_pointer(&nodename);
> +bmp->bitmapname = g_steal_pointer(&bitmapname);
> +
> +bitmaps = g_slist_prepend(bitmaps, bmp);
> +}
> +
> +jobPriv->migTempBitmaps = g_slist_reverse(g_steal_pointer(&bitmaps));
> +return 0;
> +}
> +
> +
>  static int
>  qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt,
>qemuDomainJobObjPtr job,
> @@ -277,6 +358,9 @@ qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt,
>  if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0)
>  return -1;
> 
> +if (qemuDomainParseJobPrivateXMLMigrationBlockDirtyBitmapsTemp(priv, 
> ctxt) < 0)
> +return -1;
> +
>  if (qemuMigrationParamsParse(ctxt, >migParams) < 0)
>  return -1;
> 
...

No matter whether you decide to change the function names...

Reviewed-by: Jiri Denemark 



Re: [PATCH 13/19] qemu: migration_cookie: Add helpers for transforming the cookie into migration params

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:52 +0100, Peter Krempa wrote:
> 'qemuMigrationCookieBlockDirtyBitmapsMatchDisks' maps the bitmaps from
> the migration cookie to actual disk objects definition pointers.
> 
> 'qemuMigrationCookieBlockDirtyBitmapsToParams' converts the bitmap
> definitions from the migration cookie into parameters for the
> 'block-bitmap-mapping' migration parameter.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration_cookie.c | 115 +++
>  src/qemu/qemu_migration_cookie.h |   7 ++
>  2 files changed, 122 insertions(+)
...
> diff --git a/src/qemu/qemu_migration_cookie.h 
> b/src/qemu/qemu_migration_cookie.h
> index 8636f955da..f8393e0ce0 100644
> --- a/src/qemu/qemu_migration_cookie.h
> +++ b/src/qemu/qemu_migration_cookie.h
> @@ -226,3 +226,10 @@ qemuMigrationCookieXMLFormat(virQEMUDriverPtr driver,
>   virQEMUCapsPtr qemuCaps,
>   virBufferPtr buf,
>   qemuMigrationCookiePtr mig);
> +
> +int qemuMigrationCookieBlockDirtyBitmapsMatchDisks(virDomainDefPtr def,
> +   GSList *disks);
> +
> +int
> +qemuMigrationCookieBlockDirtyBitmapsToParams(GSList *disks,
> + virJSONValuePtr *mapping);

A little bit of consistency would not hurt :-) Either

int
qemuMigrationCookieBlockDirtyBitmapsMatchDisks(...

or

    int qemuMigrationCookieBlockDirtyBitmapsToParams(...


Reviewed-by: Jiri Denemark 



Re: [PATCH 12/19] qemu: migration_cookie: Add XML handling for setting up bitmap migration

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:51 +0100, Peter Krempa wrote:
> In cases where we are copying the storage we need to ensure that also
> bitmaps are copied properly. This patch adds migration cookie XML
> infrastructure which will allow the migration sides reach consensus on
> which bitmaps to migrate.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration_cookie.c | 134 +++
>  src/qemu/qemu_migration_cookie.h |  34 
>  2 files changed, 168 insertions(+)
> 
> diff --git a/src/qemu/qemu_migration_cookie.c 
> b/src/qemu/qemu_migration_cookie.c
> index 6f2b1b2f57..94ba9c83d0 100644
> --- a/src/qemu/qemu_migration_cookie.c
> +++ b/src/qemu/qemu_migration_cookie.c
...
> @@ -758,6 +795,48 @@ 
> qemuMigrationCookieNBDXMLFormat(qemuMigrationCookieNBDPtr nbd,
>  }
> 
> 
> +static void
> +qemuMigrationCookieBlockDirtyBitmapsFormat(virBufferPtr buf,
> +   GSList *bitmaps)
> +{
> +g_auto(virBuffer) disksBuf = VIR_BUFFER_INIT_CHILD(buf);
> +GSList *nextdisk;
> +
> +for (nextdisk = bitmaps; nextdisk; nextdisk = nextdisk->next) {
> +qemuMigrationBlockDirtyBitmapsDiskPtr disk = nextdisk->data;
> +g_auto(virBuffer) diskAttrBuf = VIR_BUFFER_INITIALIZER;
> +g_auto(virBuffer) diskChildBuf = VIR_BUFFER_INIT_CHILD(&disksBuf);
> +bool hasBitmaps = false;
> +GSList *nextbitmap;
> +
> +if (disk->skip || !disk->bitmaps)
> +continue;
> +
> +for (nextbitmap = disk->bitmaps; nextbitmap; nextbitmap = 
> nextbitmap->next) {
> +qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap = 
> nextbitmap->data;
> +
> +if (bitmap->skip)
> +continue;
> +
> +virBufferAsprintf(&diskChildBuf,
> +  "<bitmap name='%s' alias='%s'/>\n",
> +  bitmap->bitmapname, bitmap->alias);
> +
> +hasBitmaps = true;
> +}
> +
> +if (!hasBitmaps)
> +continue;

You could drop hasBitmaps and call virBufferUse instead, but not a big
deal.
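
I.e. something like (sketch):

  if (virBufferUse(&diskChildBuf) == 0)
      continue;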

> +
> +virBufferAsprintf(&diskAttrBuf, " target='%s'", disk->target);
> +virXMLFormatElement(&disksBuf, "disk", &diskAttrBuf, &diskChildBuf);
> +}
> +
> +
> +virXMLFormatElement(buf, "blockDirtyBitmaps", NULL, &disksBuf);
> +}
> +
> +
>  int
>  qemuMigrationCookieXMLFormat(virQEMUDriverPtr driver,
>   virQEMUCapsPtr qemuCaps,
> @@ -829,6 +908,9 @@ qemuMigrationCookieXMLFormat(virQEMUDriverPtr driver,
>  if (mig->flags & QEMU_MIGRATION_COOKIE_CAPS)
>  qemuMigrationCookieCapsXMLFormat(buf, mig->caps);
> 
> +if (mig->flags & QEMU_MIGRATION_COOKIE_BLOCK_DIRTY_BITMAPS)
> +qemuMigrationCookieBlockDirtyBitmapsFormat(buf, 
> mig->blockDirtyBitmaps);
> +
>  virBufferAdjustIndent(buf, -2);
>  virBufferAddLit(buf, "\n");
>  return 0;
> @@ -1132,6 +1214,53 @@ 
> qemuMigrationCookieXMLParseMandatoryFeatures(xmlXPathContextPtr ctxt,
>  }
> 
> 
> +static int
> +qemuMigrationCookieBlockDirtyBitmapsParse(xmlXPathContextPtr ctxt,
> +  qemuMigrationCookiePtr mig)
> +{
> +g_autoslist(qemuMigrationBlockDirtyBitmapsDisk) disks = NULL;
> +g_autofree xmlNodePtr *disknodes = NULL;
> +int ndisknodes;
> +size_t i;
> +VIR_XPATH_NODE_AUTORESTORE(ctxt)
> +
> +if ((ndisknodes = virXPathNodeSet("./blockDirtyBitmaps/disk", ctxt, 
> &disknodes)) < 0)
> +return -1;
> +
> +for (i = 0; i < ndisknodes; i++) {
> +GSList *bitmaps = NULL;
> +qemuMigrationBlockDirtyBitmapsDiskPtr disk;
> +g_autofree xmlNodePtr *bitmapnodes = NULL;
> +int nbitmapnodes;
> +size_t j;
> +
> +ctxt->node = disknodes[i];
> +
> +if ((nbitmapnodes = virXPathNodeSet("./bitmap", ctxt, &bitmapnodes)) 
> < 0)
> +return -1;
> +
> +for (j = 0; j < nbitmapnodes; j++) {
> +qemuMigrationBlockDirtyBitmapsDiskBitmapPtr bitmap;
> +
> +bitmap = g_new0(qemuMigrationBlockDirtyBitmapsDiskBitmap, 1);
> +bitmap->bitmapname = virXMLPropString(bitmapnodes[j], "name");
> +bitmap->alias = virXMLPropString(bitmapnodes[j], "alias");

So what if the attributes do not exist? And virXMLPropString does not
abort on OOM. You should check for the result being non-NULL here.

> +bitmaps = g_slist_prepend(bitmaps, bitmap);
> +}
> +
> +disk = g_new0(qemuMigrationBlockDirtyBitmapsDisk, 1);
> +disk->target = virXMLPropString(disknodes[i], "target");

And here as well.
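
E.g. (sketch only, the exact error wording is up to you):

  disk->target = virXMLPropString(disknodes[i], "target");
  if (!disk->target) {
      virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                     _("missing 'target' attribute in migration cookie"));
      return -1;
  }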

> +disk->bitmaps = g_slist_reverse(bitmaps);
> +
> +disks = g_slist_prepend(disks, disk);
> +}
> +
> +mig->blockDirtyBitmaps = g_slist_reverse(g_steal_pointer(&disks));
> +
> +return 0;
> +}
> +
> +
>  static int
>  qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig,
>  virQEMUDriverPtr driver,
...

With the checks for virXMLPropString result added

Reviewed-by: Jiri Denemark 



Re: [PATCH 16/19] tests: qemumigrationcookie: Add testing for block dirty bitmap migration

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:55 +0100, Peter Krempa wrote:
> Test the XML infrastructure for <blockDirtyBitmaps> migration cookie
> element as well as the conversion to migration parameters for QMP schema
> validation.
> 
> Signed-off-by: Peter Krempa 
> ---
>  tests/meson.build |   2 +-
>  .../nbd-bitmaps-xml2xml-in.xml|  52 ++
>  .../nbd-bitmaps-xml2xml-migparams.json|  25 +++
>  .../nbd-bitmaps-xml2xml-out.xml   |  51 ++
>  tests/qemumigrationcookiexmltest.c| 166 +++---
>  5 files changed, 269 insertions(+), 27 deletions(-)
>  create mode 100644 
> tests/qemumigrationcookiexmldata/nbd-bitmaps-xml2xml-in.xml
>  create mode 100644 
> tests/qemumigrationcookiexmldata/nbd-bitmaps-xml2xml-migparams.json
>  create mode 100644 
> tests/qemumigrationcookiexmldata/nbd-bitmaps-xml2xml-out.xml

Reviewed-by: Jiri Denemark 



Re: [PATCH 15/19] tests: qemustatusxml2xml: Add status XML from migration with bitmaps

2021-02-18 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:54 +0100, Peter Krempa wrote:
> The XML sample shows the status XML when migrating with bitmaps
> including the  element added in previous commit.
> 
> It will also be used for the migration cookie test.
> 
> Signed-off-by: Peter Krempa 
> ---
>  .../migration-out-nbd-bitmaps-in.xml  | 574 ++
>  .../migration-out-nbd-bitmaps-out.xml |   1 +
>  tests/qemustatusxml2xmltest.c |   1 +
>  3 files changed, 576 insertions(+)
>  create mode 100644 
> tests/qemustatusxml2xmldata/migration-out-nbd-bitmaps-in.xml
>  create mode 120000 
> tests/qemustatusxml2xmldata/migration-out-nbd-bitmaps-out.xml

Reviewed-by: Jiri Denemark 



Plans for the next release

2021-02-18 Thread Jiri Denemark
We are getting close to the next release of libvirt. To aim for the
release on Mar 01 I suggest entering the freeze on Monday Feb 22 and
tagging RC2 on Thursday Feb 25.

I hope this works for everyone.

Jirka



Re: [PATCH 08/19] storage: Format qcow2v3 volumes by default

2021-02-12 Thread Jiri Denemark
On Fri, Feb 12, 2021 at 12:13:58 +, Daniel P. Berrangé wrote:
> On Fri, Feb 12, 2021 at 11:55:36AM +0100, Peter Krempa wrote:
> > On Fri, Feb 12, 2021 at 10:49:02 +, Daniel Berrange wrote:
> > > On Thu, Feb 11, 2021 at 04:37:47PM +0100, Peter Krempa wrote:
> > > > Format the new volumes with 'compat=1.1' since the minimum supported
> > > > qemu version is now 1.5 rather than the pre-historic compat=0.10.
> > > 
> > > I understand the desire to do this, but this is none the less a
> > > semantic change to the behaviour of the APIs. It is the same
> > > situation as arbitrarily changing the defaults for any part of
> > > the domain XML.
> > 
> > I'm aware of that, but at certain point it IMO doesn't make sense to try
> > to stick with a prehistoric format just for the sake of it and IMO this
> > is it.
> 
> Well that's a policy decision and it is upto the user or mgmt app to
> decide when they wish to drop compatibility with old distros. RHEL
> had continued to publish its cloud images with old format until very
> recently for sake of compat. I don't think libvirt should be forcing
> that decision onto people as it sabotages libvirt's value of providing
> long term stable behaviour to applications.

The oldest QEMU release we support understands v3 so there's no reason
to use an older format in this respect. But you're right there might be
other use cases which would still need v2 format for compatibility. I
guess I was too focused on our usage and libvirt/QEMU compatibility when
reviewing the patch.

Jirka



Re: [PATCH 08/19] storage: Format qcow2v3 volumes by default

2021-02-12 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:47 +0100, Peter Krempa wrote:
> Format the new volumes with 'compat=1.1' since the minimum supported
> qemu version is now 1.5 rather than the pre-historic compat=0.10.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/storage/storage_util.c  | 2 +-
>  .../storagevolxml2argvdata/luks-convert-encrypt2fileqcow2.argv  | 2 +-
>  tests/storagevolxml2argvdata/qcow2-compat.argv  | 2 +-
>  tests/storagevolxml2argvdata/qcow2-from-logical-compat.argv | 2 +-
>  tests/storagevolxml2argvdata/qcow2-luks-convert-encrypt.argv| 2 +-
>  .../qcow2-luks-convert-encrypt2fileqcow2.argv   | 2 +-
>  tests/storagevolxml2argvdata/qcow2-luks.argv| 2 +-
>  .../qcow2-nobacking-convert-prealloc-compat.argv| 2 +-
>  .../storagevolxml2argvdata/qcow2-nobacking-prealloc-compat.argv | 2 +-
>  .../qcow2-nocapacity-convert-prealloc.argv  | 2 +-
>  tests/storagevolxml2argvdata/qcow2-nocapacity.argv  | 2 +-
>  tests/storagevolxml2argvdata/qcow2-nocow-compat.argv| 2 +-
>  tests/storagevolxml2argvdata/qcow2-zerocapacity.argv| 2 +-
>  13 files changed, 13 insertions(+), 13 deletions(-)

Reviewed-by: Jiri Denemark 



Re: [PATCH 07/19] storagevolxml2argvdata: Rewrap all output files

2021-02-12 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:46 +0100, Peter Krempa wrote:
> Use scripts/test-wrap-argv.py to rewrap the output files so that any
> further changes don't introduce churn since we are rewrapping the output
> automatically now.
> 
> Signed-off-by: Peter Krempa 
> ---
>  tests/storagevolxml2argvdata/iso-input.argv   |  6 +++--
>  tests/storagevolxml2argvdata/iso.argv |  4 +++-
>  .../logical-from-qcow2.argv   |  6 +++--
>  tests/storagevolxml2argvdata/luks-cipher.argv |  8 ---
>  .../luks-convert-encrypt.argv | 23 ---
>  .../luks-convert-encrypt2fileqcow2.argv   | 19 ++-
>  .../luks-convert-encrypt2fileraw.argv | 20 ++--
>  .../luks-convert-qcow2.argv   | 21 +++--
>  .../storagevolxml2argvdata/luks-convert.argv  | 20 ++--
>  tests/storagevolxml2argvdata/luks.argv|  8 ---
>  tests/storagevolxml2argvdata/qcow2-1.1.argv   |  8 ---
>  .../storagevolxml2argvdata/qcow2-compat.argv  |  8 ---
>  .../qcow2-from-logical-compat.argv|  8 ---
>  tests/storagevolxml2argvdata/qcow2-lazy.argv  |  9 +---
>  ...ow2-nobacking-convert-prealloc-compat.argv | 10 +---
>  .../qcow2-nobacking-prealloc-compat.argv  |  8 ---
>  .../qcow2-nocapacity-convert-prealloc.argv|  9 +---
>  .../qcow2-nocapacity.argv |  6 ++---
>  .../qcow2-nocow-compat.argv   |  9 +---
>  tests/storagevolxml2argvdata/qcow2-nocow.argv |  9 +---
>  .../qcow2-zerocapacity.argv   |  5 +++-
>  21 files changed, 147 insertions(+), 77 deletions(-)

Reviewed-by: Jiri Denemark 



Re: [PATCH 06/19] testutils: virTestRewrapFile: Rewrap also '.argv' files

2021-02-12 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:45 +0100, Peter Krempa wrote:
> The suffix is used for output files of 'storagevolxml2argvtest.
> 
> Signed-off-by: Peter Krempa 
> ---
>  tests/testutils.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/tests/testutils.c b/tests/testutils.c
> index 7ecf7923b8..8734790457 100644
> --- a/tests/testutils.c
> +++ b/tests/testutils.c
> @@ -332,6 +332,7 @@ virTestRewrapFile(const char *filename)
>  g_autoptr(virCommand) cmd = NULL;
> 
>  if (!(virStringHasSuffix(filename, ".args") ||
> +  virStringHasSuffix(filename, ".argv") ||
>virStringHasSuffix(filename, ".ldargs")))
>  return 0;
> 

Reviewed-by: Jiri Denemark 



Re: [PATCH 05/19] qemuMigrationSrcPerformPeer2Peer3: Don't leak 'dom_xml' on cleanup

2021-02-12 Thread Jiri Denemark
On Thu, Feb 11, 2021 at 16:37:44 +0100, Peter Krempa wrote:
> Use g_autofree for 'dom_xml' to free it on some of the (unlikely) code
> paths jumping to cleanup prior to the deallocation which is done right
> after it's not needed any more since it's a big string.
> 
> Noticed when running under valgrind:
> 
> ==2204780== 8,192 bytes in 1 blocks are definitely lost in loss record 2,539 
> of 2,551
> ==2204780==at 0x483BCE8: realloc (vg_replace_malloc.c:834)
> ==2204780==by 0x4D890DF: g_realloc (in /usr/lib64/libglib-2.0.so.0.6600.4)
> ==2204780==by 0x4DA3AF0: g_string_append_vprintf (in 
> /usr/lib64/libglib-2.0.so.0.6600.4)
> ==2204780==by 0x4917293: virBufferAsprintf (virbuffer.c:307)
> ==2204780==by 0x49B0B75: virDomainChrDefFormat (domain_conf.c:26109)
> ==2204780==by 0x49E25EF: virDomainDefFormatInternalSetRootName 
> (domain_conf.c:28956)
> ==2204780==by 0x15F81D24: qemuDomainDefFormatBufInternal 
> (qemu_domain.c:6204)
> ==2204780==by 0x15F8270D: qemuDomainDefFormatXMLInternal 
> (qemu_domain.c:6229)
> ==2204780==by 0x15F8270D: qemuDomainDefFormatLive (qemu_domain.c:6279)
> ==2204780==by 0x15FD8100: qemuMigrationSrcBeginPhase 
> (qemu_migration.c:2395)
> ==2204780==by 0x15FE0F0D: qemuMigrationSrcPerformPeer2Peer3 
> (qemu_migration.c:4640)
> ==2204780==by 0x15FE0F0D: qemuMigrationSrcPerformPeer2Peer 
> (qemu_migration.c:5093)
> ==2204780==by 0x15FE0F0D: qemuMigrationSrcPerformJob 
> (qemu_migration.c:5168)
> ==2204780==by 0x15FE280E: qemuMigrationSrcPerform (qemu_migration.c:5372)
> ==2204780==by 0x15F9BA3D: qemuDomainMigratePerform3Params 
> (qemu_driver.c:11841)
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_migration.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> index f44d31c971..37f0d43d24 100644
> --- a/src/qemu/qemu_migration.c
> +++ b/src/qemu/qemu_migration.c
> @@ -4347,7 +4347,7 @@ qemuMigrationSrcPerformPeer2Peer3(virQEMUDriverPtr 
> driver,
>  char *uri_out = NULL;
>  char *cookiein = NULL;
>  char *cookieout = NULL;
> -char *dom_xml = NULL;
> +g_autofree char *dom_xml = NULL;
>  int cookieinlen = 0;
>  int cookieoutlen = 0;
>  int ret = -1;

Oh wow, the leak has been with us for 10 years since v3 migration
protocol was introduced...

Reviewed-by: Jiri Denemark 


