Re: [libvirt] [PATCH RESEND v2 0/4] re-introduce

2022-04-11 Thread Ani Sinha
On Tue, Apr 12, 2022 at 9:50 AM Ani Sinha  wrote:
>
> On Tue, Mar 8, 2022 at 10:28 PM Michael S. Tsirkin  wrote:
> >
> > On Tue, Mar 08, 2022 at 10:15:49PM +0530, Ani Sinha wrote:
> > >
> > > Change log:
> > > v2: rebased the patchset. Laine's response is appended at the end.
> > >
> > > I am re-introducing the patchset for  which got
> > > reverted here a few months back:
> > >
> > > https://www.spinics.net/linux/fedora/libvir/msg224089.html
> > >
> > > The reason for the reversal was that there seemed to be some
> > > instability/issues around the use of the qemu commandline which this
> > > patchset tries to support. In particular, some guest operating systems
> > > did not like the way QEMU was trying to disable native hotplug on pcie
> > > root ports. Subsequently, in QEMU 6.2, we have changed the mechanism
> > > by which we disable native hotplug. As I understand, we do not have
> > > any reported issues so far in 6.2 around this area. QEMU will enter a
> > > soft feature freeze in the first week of March in prep for the 7.0 release.
> >
> > Right. But unfortunately we did not yet really work on
> > a sane interface for this.
> >
> > The way I see it, at a high level we conceivably need two flags:
> > - disable ACPI hotplug
> > - enable native hotplug (maybe separately for pci and pcie?)
>
> PCI does not have native hotplug, so this would be applicable only to
> q35. For i440fx we already have two separate flags to disable acpi
> hotplug, one for the root bus and another for bridges.
>
> >
> > and with both enabled guests actually can switch between
> > the two.
> >
> > This will at least reflect the hardware, so has a chance to be
> > stable.
> >
> > The big question however would be what is the actual use-case.
> > Without that this begs the question of why do we bother at all.
>
> To me the main motivation is as I have described here:
> https://listman.redhat.com/archives/libvir-list/2021-October/msg00068.html
>
> One concrete example of why one might still want to use native hotplug with
> the pcie-root-port controller is the fact that we are still discovering
> issues with acpi hotplug on PCIE. One such issue is:
> https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg02146.html
> Another reason is that users have been using native hotplug on pcie root ports
> up until now. They have built and tested their systems based on native hotplug.
> They may not want to suddenly move to acpi based hotplug just because it is now
> the default in qemu. Supporting the option to choose one or the other through
> libvirt makes things simpler for end users.

Essentially what I do not like is that we are imposing acpi hotplug on
q35 for the entire community without giving them a choice to revert
back to native hotplug through libvirt.
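
To make the missing knob concrete: with QEMU 6.2's new mechanism (see the
commit quoted further down in this thread), a user can revert to native
hotplug with "-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off".
A libvirt-level switch would ultimately just have to emit that option when
building the command line. A minimal sketch, assuming a hypothetical helper
in the qemu command-line builder (virCommand/virCommandAddArgList are
existing libvirt utilities; the helper name and where it would hook in are
illustrative):

/* Illustrative only: what a "prefer native hotplug on q35" switch would
 * boil down to on the QEMU command line. */
#include "vircommand.h"

static void
exampleDisableAcpiPciHotplug(virCommand *cmd)
{
    /* Turns ACPI PCI hotplug support off again on ICH9/q35, so the guest
     * falls back to PCIe native hotplug, per the QEMU commit quoted below. */
    virCommandAddArgList(cmd,
                         "-global",
                         "ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off",
                         NULL);
}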

>
> > To allow hotplug of bridges? If it is really necessary for us then
> > we should think hard about questions that surround this:
> >
> > - how does one hotplug a pcie switch?
> > - any way to use e.g. dynamic ACPI to support hotplug of bridges?
> > - do we want to bite the bullet and create an option for management
> >   to fully control guest memory layout including all pci devices?
> >
> >
> >
> > > Libvirt is also entering a new release cycle phase. Hence, I am
> > > introducing this patchset early enough in the release cycles so that if
> > > we do see any issues on the qemu side during the rc0, rc1 cycles and if
> > > reversal of this patchset is again required, it can be done in time
> > > before the next libvirt release at the end of March.
> > >
> > > All the patches in this series had been previously reviewed. Some
> > > subsequent fixes were made after my initial patches were pushed. I have
> > > squashed all those fixes and consolidated them into four patches. I have
> > > also updated the documentation to reflect the new changes from the QEMU
> > > side and rebased my changes fixing the tests in the process.
> > >
> > > What changed in QEMU post version 6.1?
> > > =======================================
> > >
> > > We have made basically two major changes in QEMU. First is this change:
> > >
> > > (1) commit 211afe5c69b597acf85fdd577eb497f5be1ffbd8
> > > Author: Julia Suvorova 
> > > Date:   Fri Nov 12 06:08:56 2021 -0500
> > >
> > > hw/i386/acpi-build: Deny control on PCIe Native Hot-plug in _OSC
> > >
> > > There are two ways to enable ACPI PCI Hot-plug:
> > >
> > > * Disable the Hot-plug Capable bit on PCIe slots.
> > >
> > > This was the first approach which led to regression [1-2], as
> > > I/O space for a port is allocated only when it is hot-pluggable,
> > > which is determined by HPC bit.
> > >
> > > * Leave the HPC bit on and disable PCIe Native Hot-plug in _OSC method.
> > >
> > > This removes the (future) ability of hot-plugging switches with PCIe
> > > Native hotplug since ACPI PCI Hot-plug only works with cold-plugged
> > > bridges. If the user wants to explicitly use this 

Re: [libvirt] [PATCH RESEND v2 0/4] re-introduce

2022-04-11 Thread Ani Sinha
On Tue, Mar 8, 2022 at 10:28 PM Michael S. Tsirkin  wrote:
>
> On Tue, Mar 08, 2022 at 10:15:49PM +0530, Ani Sinha wrote:
> >
> > Change log:
> > v2: rebased the patchset. Laine's response is appended at the end.
> >
> > I am re-introducing the patchset for  which got
> > reverted here a few months back:
> >
> > https://www.spinics.net/linux/fedora/libvir/msg224089.html
> >
> > The reason for the reversal was that there seemed to be some
> > instability/issues around the use of the qemu commandline which this
> > patchset tries to support. In particular, some guest operating systems
> > did not like the way QEMU was trying to disable native hotplug on pcie
> root ports. Subsequently, in QEMU 6.2, we have changed the mechanism
> by which we disable native hotplug. As I understand, we do not have
> any reported issues so far in 6.2 around this area. QEMU will enter a
> soft feature freeze in the first week of March in prep for the 7.0 release.
>
> Right. But unfortunately we did not yet really work on
> a sane interface for this.
>
> The way I see it, at a high level we conceivably need two flags:
> - disable ACPI hotplug
> - enable native hotplug (maybe separately for pci and pcie?)

PCI does not have native hotplug, so this would be applicable only to
q35. For i440fx we already have two separate flags to disable acpi
hotplug, one for the root bus and another for bridges.

>
> and with both enabled guests actually can switch between
> the two.
>
> This will at least reflect the hardware, so has a chance to be
> stable.
>
> The big question however would be what is the actual use-case.
> Without that this begs the question of why do we bother at all.

To me the main motivation is as I have described here:
https://listman.redhat.com/archives/libvir-list/2021-October/msg00068.html

One concrete example of why one might still want to use native hotplug with
pcie-root-port controller is the fact that we are still discovering issues with
acpi hotplug on PCIE. One such issue is:
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg02146.html
Another reason is that users have been using native hotplug on pcie root ports
up until now. They have built and tested their systems based on native hotplug.
They may not want to suddenly move to acpi based hotplug just because it is now
the default in qemu. Supporting the option to choose one or the other through
libvirt makes things simpler for end users.

> To allow hotplug of bridges? If it is really necessary for us then
> we should think hard about questions that surround this:
>
> - how does one hotplug a pcie switch?
> - any way to use e.g. dynamic ACPI to support hotplug of bridges?
> - do we want to bite the bullet and create an option for management
>   to fully control guest memory layout including all pci devices?
>
>
>
> > Libvirt is also entering a new release cycle phase. Hence, I am
> > introducing this patchset early enough in the release cycles so that if
> > we do see any issues on the qemu side during the rc0, rc1 cycles and if
> > reversal of this patchset is again required, it can be done in time
> > before the next libvirt release at the end of March.
> >
> > All the patches in this series had been previously reviewed. Some
> > subsequent fixes were made after my initial patches were pushed. I have
> > squashed all those fixes and consolidated them into four patches. I have
> > also updated the documentation to reflect the new changes from the QEMU
> > side and rebased my changes fixing the tests in the process.
> >
> > What changed in QEMU post version 6.1?
> > ======================================
> >
> > We have made basically two major changes in QEMU. First is this change:
> >
> > (1) commit 211afe5c69b597acf85fdd577eb497f5be1ffbd8
> > Author: Julia Suvorova 
> > Date:   Fri Nov 12 06:08:56 2021 -0500
> >
> > hw/i386/acpi-build: Deny control on PCIe Native Hot-plug in _OSC
> >
> > There are two ways to enable ACPI PCI Hot-plug:
> >
> > * Disable the Hot-plug Capable bit on PCIe slots.
> >
> > This was the first approach which led to regression [1-2], as
> > I/O space for a port is allocated only when it is hot-pluggable,
> > which is determined by HPC bit.
> >
> > * Leave the HPC bit on and disable PCIe Native Hot-plug in _OSC
> >   method.
> >
> > This removes the (future) ability of hot-plugging switches with PCIe
> > Native hotplug since ACPI PCI Hot-plug only works with cold-plugged
> > bridges. If the user wants to explicitly use this feature, they can
> > disable ACPI PCI Hot-plug with:
> > --global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off
> >
> > Change the bit in _OSC method so that the OS selects ACPI PCI Hot-plug
> > instead of PCIe Native.
> >
> > [1] https://gitlab.com/qemu-project/qemu/-/issues/641
> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=2006409
> >
> > Signed-off-by: Julia Suvorova 
> > Signed-off-by: Igor 

Re: [libvirt RFC] virFile: new VIR_FILE_WRAPPER_BIG_PIPE to improve performance

2022-04-11 Thread Dr. David Alan Gilbert
* Claudio Fontana (cfont...@suse.de) wrote:
> On 4/7/22 3:57 PM, Claudio Fontana wrote:
> > On 4/7/22 3:53 PM, Dr. David Alan Gilbert wrote:
> >> * Claudio Fontana (cfont...@suse.de) wrote:
> >>> On 4/5/22 10:35 AM, Dr. David Alan Gilbert wrote:
>  * Claudio Fontana (cfont...@suse.de) wrote:
> > On 3/28/22 10:31 AM, Daniel P. Berrangé wrote:
> >> On Sat, Mar 26, 2022 at 04:49:46PM +0100, Claudio Fontana wrote:
> >>> On 3/25/22 12:29 PM, Daniel P. Berrangé wrote:
>  On Fri, Mar 18, 2022 at 02:34:29PM +0100, Claudio Fontana wrote:
> > On 3/17/22 4:03 PM, Dr. David Alan Gilbert wrote:
> >> * Claudio Fontana (cfont...@suse.de) wrote:
> >>> On 3/17/22 2:41 PM, Claudio Fontana wrote:
>  On 3/17/22 11:25 AM, Daniel P. Berrangé wrote:
> > On Thu, Mar 17, 2022 at 11:12:11AM +0100, Claudio Fontana wrote:
> >> On 3/16/22 1:17 PM, Claudio Fontana wrote:
> >>> On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
>  On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana 
>  wrote:
> > On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
> >> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana 
> >> wrote:
> >>> the first user is the qemu driver,
> >>>
> >>> virsh save/resume would slow to a crawl with a default pipe size (64k).
> >>>
> >>> This improves the situation by 400%.
> >>>
> >>> Going through io_helper still seems to incur some penalty (~15%-ish)
> >>> compared with direct qemu migration to a nc socket to a file.
> >>>
> >>> Signed-off-by: Claudio Fontana 
> >>> ---
> >>>  src/qemu/qemu_driver.c|  6 +++---
> >>>  src/qemu/qemu_saveimage.c | 11 ++-
> >>>  src/util/virfile.c| 12 
> >>>  src/util/virfile.h|  1 +
> >>>  4 files changed, 22 insertions(+), 8 deletions(-)
> >>>
> >>> Hello, I initially thought this to be a qemu performance 
> >>> issue,
> >>> so you can find the discussion about this in qemu-devel:
> >>>
> >>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
> >>>
> >>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
> >
> >
> >> Current results show the experimental average maximum throughput when
> >> migrating to /dev/null for each FdWrapper pipe size (as per QEMU QMP
> >> "query-migrate"; tests repeated 5 times for each).
> >> VM size is 60G, with most of the memory effectively touched before
> >> migration by a user application allocating and touching all memory
> >> with pseudorandom data.
> >>
> >> 64K:     5200 Mbps (current situation)
> >> 128K:    5800 Mbps
> >> 256K:   20900 Mbps
> >> 512K:   21600 Mbps
> >> 1M:     22800 Mbps
> >> 2M:     22800 Mbps
> >> 4M:     22400 Mbps
> >> 8M:     22500 Mbps
> >> 16M:    22800 Mbps
> >> 32M:    22900 Mbps
> >> 64M:    22900 Mbps
> >> 128M:   22800 Mbps
> >>
> >> The above is the throughput out of patched libvirt with multiple pipe
> >> sizes for the FdWrapper.
> >
> > Ok, it's bouncing around with noise after 1 MB. So I'd suggest that
> > libvirt attempt to raise the pipe limit to 1 MB by default, but
> > not try to go higher.
> >
> >> As for the theoretical limit for the libvirt architecture,
> >> I ran a qemu migration directly issuing the appropriate QMP
> >> commands, setting the same migration parameters as per libvirt,
> >> and then migrating to a socket netcatted to /dev/null via
> >> {"execute": "migrate", "arguments": { "uri", 
> >> "unix:///tmp/netcat.sock" } } :
> >>
> >> QMP:37000 Mbps
> >
> >> So although the Pipe size improves things (in particular the
> >> large jump is for the 256K size, although 1M seems a very good 
> >> value),
> >> there is still a second bottleneck in there somewhere that
> >> accounts for a loss of ~14200 Mbps in throughput.
> >>>
> >>>
> >>> Interesting addition: I tested quickly on a system with faster 
> 

Re: [libvirt RFC] virFile: new VIR_FILE_WRAPPER_BIG_PIPE to improve performance

2022-04-11 Thread Claudio Fontana
On 4/7/22 3:57 PM, Claudio Fontana wrote:
> On 4/7/22 3:53 PM, Dr. David Alan Gilbert wrote:
>> * Claudio Fontana (cfont...@suse.de) wrote:
>>> On 4/5/22 10:35 AM, Dr. David Alan Gilbert wrote:
 * Claudio Fontana (cfont...@suse.de) wrote:
> On 3/28/22 10:31 AM, Daniel P. Berrangé wrote:
>> On Sat, Mar 26, 2022 at 04:49:46PM +0100, Claudio Fontana wrote:
>>> On 3/25/22 12:29 PM, Daniel P. Berrangé wrote:
 On Fri, Mar 18, 2022 at 02:34:29PM +0100, Claudio Fontana wrote:
> On 3/17/22 4:03 PM, Dr. David Alan Gilbert wrote:
>> * Claudio Fontana (cfont...@suse.de) wrote:
>>> On 3/17/22 2:41 PM, Claudio Fontana wrote:
 On 3/17/22 11:25 AM, Daniel P. Berrangé wrote:
> On Thu, Mar 17, 2022 at 11:12:11AM +0100, Claudio Fontana wrote:
>> On 3/16/22 1:17 PM, Claudio Fontana wrote:
>>> On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
 On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana 
 wrote:
> On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
>> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana 
>> wrote:
>>> the first user is the qemu driver,
>>>
>>> virsh save/resume would slow to a crawl with a default pipe size (64k).
>>>
>>> This improves the situation by 400%.
>>>
>>> Going through io_helper still seems to incur some penalty (~15%-ish)
>>> compared with direct qemu migration to a nc socket to a file.
>>>
>>> Signed-off-by: Claudio Fontana 
>>> ---
>>>  src/qemu/qemu_driver.c|  6 +++---
>>>  src/qemu/qemu_saveimage.c | 11 ++-
>>>  src/util/virfile.c| 12 
>>>  src/util/virfile.h|  1 +
>>>  4 files changed, 22 insertions(+), 8 deletions(-)
>>>
>>> Hello, I initially thought this to be a qemu performance 
>>> issue,
>>> so you can find the discussion about this in qemu-devel:
>>>
>>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
>>>
>>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
>
>
>> Current results show the experimental average maximum throughput when
>> migrating to /dev/null for each FdWrapper pipe size (as per QEMU QMP
>> "query-migrate"; tests repeated 5 times for each).
>> VM size is 60G, with most of the memory effectively touched before
>> migration by a user application allocating and touching all memory
>> with pseudorandom data.
>>
>> 64K:     5200 Mbps (current situation)
>> 128K:    5800 Mbps
>> 256K:   20900 Mbps
>> 512K:   21600 Mbps
>> 1M:     22800 Mbps
>> 2M:     22800 Mbps
>> 4M:     22400 Mbps
>> 8M:     22500 Mbps
>> 16M:    22800 Mbps
>> 32M:    22900 Mbps
>> 64M:    22900 Mbps
>> 128M:   22800 Mbps
>>
>> The above is the throughput out of patched libvirt with multiple pipe
>> sizes for the FdWrapper.
>
> Ok, it's bouncing around with noise after 1 MB. So I'd suggest that
> libvirt attempt to raise the pipe limit to 1 MB by default, but
> not try to go higher.
>
>> As for the theoretical limit for the libvirt architecture,
>> I ran a qemu migration directly issuing the appropriate QMP
>> commands, setting the same migration parameters as per libvirt,
>> and then migrating to a socket netcatted to /dev/null via
>> {"execute": "migrate", "arguments": { "uri", 
>> "unix:///tmp/netcat.sock" } } :
>>
>> QMP:37000 Mbps
>
>> So although the Pipe size improves things (in particular the
>> large jump is for the 256K size, although 1M seems a very good 
>> value),
>> there is still a second bottleneck in there somewhere that
>> accounts for a loss of ~14200 Mbps in throughput.
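
For reference, the kind of change being discussed is small: once the wrapper
owns its pipe, it can simply ask the kernel for a bigger buffer. A minimal
sketch, assuming Linux's F_SETPIPE_SZ fcntl and the 1 MiB default suggested
above (function and variable names are illustrative, not the actual
virfile.c code):

#define _GNU_SOURCE
#include <fcntl.h>

/* Best-effort enlargement of the pipe used by the I/O helper; on failure
 * (old kernel, /proc/sys/fs/pipe-max-size limit) the pipe simply keeps its
 * default 64k size. */
static int
examplePipeEnlarge(int pipefd)
{
    const int desired = 1024 * 1024;  /* 1 MiB, per the numbers above */

    return fcntl(pipefd, F_SETPIPE_SZ, desired);
}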
>>>
>>>
>>> Interesting addition: I tested quickly on a system with faster cpus
>>> and larger VM sizes, up to 200GB, and the difference in throughput
>>> libvirt vs qemu is basically the same, ~14500 Mbps.
>>>
>>> ~5 mbps qemu to netcat socket to /dev/null
>>> ~35500 mbps virsh save to 

Re: [RFC PATCH v1 0/5] Add virDomainGetSevAttestationReport API

2022-04-11 Thread Cole Robinson
On 3/23/22 3:36 PM, Tyler Fanelli wrote:
> This is an RFC discussing a new API, virDomainGetSevAttestationReport (along
> with a virsh command "domgetsevreport"), with initial QEMU support via the
> "query-sev-attestation-report" QAPI mechanism. "query-sev-attestation-report"
> is supplied a base64-encoded 16-byte "mnonce" string as input, which is
> embedded into the attestation report to provide protection.
> 
> My main point of concern is the design/communication of the
> virTypedParameterPtr exchanged between the client and libvirtd and how they
> interact, as I have seen no other API follow the method I used. Namely, the
> same virTypedParameterPtr is used for both input _AND_ output: the same
> virTypedParameterPtr containing the original mnonce string passed to the API
> is also used to return the attestation report from the API.
> 
> This contrasts with most of the APIs I've noticed, which use a
> virTypedParameterPtr for either input or output, but not both.
> 
> This patch is not final, as I still would like some human-readable outputting
> and storage of the attestation report.
> 
> Looking for thoughts on the design of this API, as well as suggested
> improvements.

The proposed API name contains Sev, when all the existing domain APIs
have generic names: virDomainGetLaunchSecurityInfo,
virDomainSetLaunchSecurityState.

I was thinking it would be nice to avoid that Sev specific bit. So I
looked at upcoming SNP and TDX qemu patches to see if they add any qmp
commands that take input and return a lot of output. Then maybe it would
make sense to name this something like
virDomainGetLaunchSecurityInfoWithParams  which we could use more in the
future.

qemu SNP patches: https://github.com/AMDESE/qemu/tree/snp-v3
- Only extend existing query-sev and query-sev-capabilities

qemu TDX patches: https://github.com/intel/qemu-tdx
- Adds query-tdx and query-tdx-capabilities, both with no input

So, that doesn't help.


After that, my question is, what query-sev-attestation-report adds. The
kernel commit explains what it is:
https://github.com/torvalds/linux/commit/2c07ded06427dd3339278487a1413d5e478f05f9

> The SEV FW version >= 0.23 added a new command that can be used to query
> the attestation report containing the SHA-256 digest of the guest memory
> encrypted through the KVM_SEV_LAUNCH_UPDATE_{DATA, VMSA} commands and
> sign the report with the Platform Endorsement Key (PEK).
> 
> See the SEV FW API spec section 6.8 for more details.
> 
> Note there already exist a command (KVM_SEV_LAUNCH_MEASURE) that can be
> used to get the SHA-256 digest. The main difference between the
> KVM_SEV_LAUNCH_MEASURE and KVM_SEV_ATTESTATION_REPORT is that the latter
> can be called while the guest is running and the measurement value is
> signed with PEK.

So it is LAUNCH_MEASURE, available at VM runtime, signed with an extra key.

qemu caches the LAUNCH_MEASURE value at VM startup and lets us query it
with qmp any time, so I don't think runtime access matters.

Maybe the extra key signing is a security fix or something. I haven't
figured it out.

But as is, it's not clear what this buys us over the launch measurement
we already report with virDomainGetLaunchSecurityInfo.


If we figure out what the point of this is, IMO we can more easily
reason about whether it makes sense to add a Sev specific libvirt API,
and whether we need virTypedParams for both input and output. For
example if the API really is specific to this one and only KVM ioctl/QMP
command, we could hardcode the parameters and skip the virTypedParams
question entirely.
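
For what it's worth, a minimal sketch of what the "same list in and out"
usage would look like from a client, in case it helps the discussion. The
API name comes from this RFC, its exact signature is a guess, and the
parameter names are made up; virTypedParamsAddString, virTypedParamsGetString
and virTypedParamsFree are existing helpers:

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Hypothetical client usage of the proposed API: the mnonce goes in via the
 * typed-parameter list and the report comes back in the very same list. */
static int
exampleGetReport(virDomainPtr dom, const char *mnonceB64)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    const char *report = NULL;
    int ret = -1;

    /* Input: caller-supplied base64-encoded 16-byte mnonce. */
    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                "mnonce", mnonceB64) < 0)
        goto cleanup;

    /* Proposed call (RFC): the driver appends the report to the same list. */
    if (virDomainGetSevAttestationReport(dom, &params, &nparams, 0) < 0)
        goto cleanup;

    /* Output: read back from the list that carried the input. */
    if (virTypedParamsGetString(params, nparams, "attestation-report",
                                &report) == 1)
        printf("report (base64): %s\n", report);

    ret = 0;
 cleanup:
    virTypedParamsFree(params, nparams);
    return ret;
}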

CCing danpb for his thoughts

- Cole



Re: Issue in WinRM on Linux

2022-04-11 Thread Nishit Sharma
Yes, Alvin. I am using the same box. Any contact for the Vagrant folks?

Regards
Nishit

On Mon, 11 Apr 2022 at 6:27 PM, Alvin Starr  wrote:

> It would look like you are using
> https://app.vagrantup.com/peru/boxes/windows-10-enterprise-x64-eval or a
> related image.
>
> Your best point of contact for support would likely be the Vagrant people.
>
>
>
>
>
> On 2022-04-11 08:17, Nishit Sharma wrote:
>
> Hi Michal,
>
> Reason for asking is I am using peru/windowsbox as guest os running on
> linux/kvm host. The box owner pointed winRM support in libvirt.
> The VM is running on linux/kvm host which have different domain IP
> (10.190.xxx.xxx) and windows guest gets 192.168.xxx.xxx IP.
> Atleast on windows localhost:985/wsman should be accessible.
> I am clueless.
>
> Regards
> Nishit
>
> On Mon, 11 Apr 2022 at 5:20 PM, Michal Prívozník 
> wrote:
>
>> On 4/11/22 12:49, Nishit Sharma wrote:
>> > Hi Michal,
>> >
>> > Thing is the winRM from linux is unable to open remote shell in windows
>> > guest as mentioned in error logs. I am using NAT mode but didn’t use
>> > port forwarding. Firewall for 5985 is open in windows guest and from
>> > linux I can connect with windows guest through Enter-PSSession and
>> > telnet. Even on guest os localhost:5985/wsman is showing http error 405,
>> > no page.
>> > Any suggestions?
>>
>> Where is the winRM running? is it on the host that's running the Windows
>> VM? Because if it is, then you shouldn't need any port forwarding as the
>> host has access to the bridge and VM's NIC directly. Otherwise, you'll
>> need port forwarding.
>>
>> Are you connecting from another VM? Then try checking firewall in the
>> host. At any rate, I'm still failing to see what can libvirt do to help.
>>
>> Michal
>>
>>
> --
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133
> al...@netvel.net   ||
>
>
>


[PATCH v3] conf: Move validation checks from virDomainDiskDefIotuneParse into domain_validate.c

2022-04-11 Thread Moteen Shah
Move validation from virDomainDiskDefIotuneParse into the validation callback

Signed-off-by: Moteen Shah 
---
 src/conf/domain_conf.c | 40 --
 src/conf/domain_validate.c | 37 +++
 2 files changed, 37 insertions(+), 40 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 5dd269b283..bd2884088c 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8790,46 +8790,6 @@ virDomainDiskDefIotuneParse(virDomainDiskDef *def,
 def->blkdeviotune.group_name =
 virXPathString("string(./iotune/group_name)", ctxt);
 
-if ((def->blkdeviotune.total_bytes_sec &&
- def->blkdeviotune.read_bytes_sec) ||
-(def->blkdeviotune.total_bytes_sec &&
- def->blkdeviotune.write_bytes_sec)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write bytes_sec "
- "cannot be set at the same time"));
-return -1;
-}
-
-if ((def->blkdeviotune.total_iops_sec &&
- def->blkdeviotune.read_iops_sec) ||
-(def->blkdeviotune.total_iops_sec &&
- def->blkdeviotune.write_iops_sec)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write iops_sec "
- "cannot be set at the same time"));
-return -1;
-}
-
-if ((def->blkdeviotune.total_bytes_sec_max &&
- def->blkdeviotune.read_bytes_sec_max) ||
-(def->blkdeviotune.total_bytes_sec_max &&
- def->blkdeviotune.write_bytes_sec_max)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write bytes_sec_max "
- "cannot be set at the same time"));
-return -1;
-}
-
-if ((def->blkdeviotune.total_iops_sec_max &&
- def->blkdeviotune.read_iops_sec_max) ||
-(def->blkdeviotune.total_iops_sec_max &&
- def->blkdeviotune.write_iops_sec_max)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write iops_sec_max "
- "cannot be set at the same time"));
-return -1;
-}
-
 return 0;
 }
 #undef PARSE_IOTUNE
diff --git a/src/conf/domain_validate.c b/src/conf/domain_validate.c
index d6869e8fd8..68190fc3e2 100644
--- a/src/conf/domain_validate.c
+++ b/src/conf/domain_validate.c
@@ -651,6 +651,43 @@ virDomainDiskDefValidate(const virDomainDef *def,
 }
 }
 
+/* Validate IotuneParse */
+if ((disk->blkdeviotune.total_bytes_sec &&
+ disk->blkdeviotune.read_bytes_sec) ||
+(disk->blkdeviotune.total_bytes_sec &&
+ disk->blkdeviotune.write_bytes_sec)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write bytes_sec cannot be set at the 
same time"));
+return -1;
+}
+
+if ((disk->blkdeviotune.total_iops_sec &&
+ disk->blkdeviotune.read_iops_sec) ||
+(disk->blkdeviotune.total_iops_sec &&
+ disk->blkdeviotune.write_iops_sec)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write iops_sec cannot be set at the 
same time"));
+return -1;
+}
+
+if ((disk->blkdeviotune.total_bytes_sec_max &&
+ disk->blkdeviotune.read_bytes_sec_max) ||
+(disk->blkdeviotune.total_bytes_sec_max &&
+ disk->blkdeviotune.write_bytes_sec_max)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write bytes_sec_max cannot be set at 
the same time"));
+return -1;
+}
+
+if ((disk->blkdeviotune.total_iops_sec_max &&
+ disk->blkdeviotune.read_iops_sec_max) ||
+(disk->blkdeviotune.total_iops_sec_max &&
+ disk->blkdeviotune.write_iops_sec_max)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write iops_sec_max cannot be set at 
the same time"));
+return -1;
+}
+
 /* Reject disks with a bus type that is not compatible with the
  * given address type. The function considers only buses that are
  * handled in common code. For other bus types it's not possible
-- 
2.35.1



Re: Issue in WinRM on Linux

2022-04-11 Thread Nishit Sharma
Hi Michal,

The reason for asking is that I am using peru/windowsbox as the guest OS
running on a linux/kvm host. The box owner pointed to winRM support in libvirt.
The VM is running on a linux/kvm host which has a different domain IP
(10.190.xxx.xxx), and the Windows guest gets a 192.168.xxx.xxx IP.
At least on Windows, localhost:985/wsman should be accessible.
I am clueless.

Regards
Nishit

On Mon, 11 Apr 2022 at 5:20 PM, Michal Prívozník 
wrote:

> On 4/11/22 12:49, Nishit Sharma wrote:
> > Hi Michal,
> >
> > Thing is the winRM from linux is unable to open remote shell in windows
> > guest as mentioned in error logs. I am using NAT mode but didn’t use
> > port forwarding. Firewall for 5985 is open in windows guest and from
> > linux I can connect with windows guest through Enter-PSSession and
> > telnet. Even on guest os localhost:5985/wsman is showing http error 405,
> > no page.
> > Any suggestions?
>
> Where is the winRM running? is it on the host that's running the Windows
> VM? Because if it is, then you shouldn't need any port forwarding as the
> host has access to the bridge and VM's NIC directly. Otherwise, you'll
> need port forwarding.
>
> Are you connecting from another VM? Then try checking firewall in the
> host. At any rate, I'm still failing to see what can libvirt do to help.
>
> Michal
>
>


Re: Issue in WinRM on Linux

2022-04-11 Thread Michal Prívozník
On 4/11/22 12:49, Nishit Sharma wrote:
> Hi Michal,
> 
> Thing is the winRM from linux is unable to open remote shell in windows
> guest as mentioned in error logs. I am using NAT mode but didn’t use
> port forwarding. Firewall for 5985 is open in windows guest and from
> linux I can connect with windows guest through Enter-PSSession and
> telnet. Even on guest os localhost:5985/wsman is showing http error 405,
> no page.
> Any suggestions?

Where is the winRM running? Is it on the host that's running the Windows
VM? Because if it is, then you shouldn't need any port forwarding, as the
host has access to the bridge and the VM's NIC directly. Otherwise, you'll
need port forwarding.

Are you connecting from another VM? Then try checking the firewall on the
host. At any rate, I'm still failing to see what libvirt can do to help.

Michal



Re: [PATCH v2 0/8] Introduce network backed NVRAM

2022-04-11 Thread Rohit Kumar

Thanks for taking the time to review this, Ani!

On 09/04/22 8:22 pm, Ani Sinha wrote:

On Fri, Apr 8, 2022 at 11:19 PM Rohit Kumar  wrote:

Libvirt domain XML currently allows only local filepaths
to be used to specify an NVRAM disk.
Since VMs can migrate across hypervisor hosts, it should be
possible to allocate NVRAM disks on network storage for
uninterrupted access.

This series extends the NVRAM element to support hosting over
network-backed disks, for high availability.
It achieves this by embedding virStorageSource pointer for
nvram into _virDomainLoaderDef.

It introduces a 'type' attribute for NVRAM element to
specify 'file' vs 'network' backed NVRAM.
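
For readers following along, a rough sketch (not the actual patch) of what
the core data-structure change amounts to; the surrounding field names and
the include path are illustrative, only the virStorageSource embedding
reflects the series:

#include "storage_source_conf.h"   /* virStorageSource */

struct _virDomainLoaderDef {
    char *path;                  /* loader binary (illustrative) */
    virStorageSource *nvram;     /* was: char *nvram - a storage source can
                                  * describe type='file' as well as
                                  * type='network' (nbd, rbd, iscsi, ...) */
    char *nvramTemplate;         /* template used to create a missing NVRAM
                                  * (illustrative) */
};

The new 'type' attribute on the nvram element then just selects which kind
of virStorageSource gets parsed.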

Changes v1->v2:
  - Split the patch into smaller patches
  - Added unit test
  - Updated the doc
  - Addressed Peter's comment on v1
    (https://listman.redhat.com/archives/libvir-list/2022-March/229684.html)

Ok so without going deeper into the actual change, following are some
quick comments based on some of my own experience of introducing new
conf options in libvirt:

(a) Please update NEWS.rst to document the new xml attribute support
for the next release. This should preferably be the last patch in the
series.

Yes, thanks. I will update it.

(b) Please put patches #2 and #5 together. Also please prefix the first
line with "conf:". I think patch #4 should also go together, but I will
let others comment.

On v1, Peter suggested separating the cleanups into a different patch.
Sure, putting #2 and #5 together would be a good idea.

(c) It's better that the unit tests (patches #7 and #8) go along with
the changes in the same patch. Then, when cherry-picking, the unit tests
will be picked along with the change itself.

Yes, this seems logical. I would also prefer to wait for Peter's review
before sending out the next patch.

(d) Also, as I have commented separately, your validation patch needs
additional unit tests to check that the validation actually works.

Ack, thanks.







Rohit Kumar (8):
   Make NVRAM a virStorageSource type.
   Add support to parse/format virStorageSource type NVRAM
   Validate remote store NVRAM
   Cleanup diskSourceNetwork and diskSourceFile schema
   Update XML schema to support network backed NVRAM
   Update NVRAM documentation
   Add unit test for network backed NVRAM
   Add unit test to support new 'file' type NVRAM

  docs/formatdomain.rst | 43 +++--
  src/conf/domain_conf.c| 88 ---
  src/conf/domain_conf.h|  2 +-
  src/conf/schemas/domaincommon.rng | 80 +++--
  src/qemu/qemu_cgroup.c|  3 +-
  src/qemu/qemu_command.c   |  2 +-
  src/qemu/qemu_domain.c| 14 +--
  src/qemu/qemu_driver.c|  5 +-
  src/qemu/qemu_firmware.c  | 23 +++--
  src/qemu/qemu_namespace.c |  5 +-
  src/qemu/qemu_process.c   |  5 +-
  src/qemu/qemu_validate.c  | 22 +
  src/security/security_dac.c   |  6 +-
  src/security/security_selinux.c   |  6 +-
  src/security/virt-aa-helper.c |  5 +-
  src/vbox/vbox_common.c|  2 +-
  .../bios-nvram-file.x86_64-latest.args| 37 
  tests/qemuxml2argvdata/bios-nvram-file.xml| 23 +
  .../bios-nvram-network.x86_64-latest.args | 37 
  tests/qemuxml2argvdata/bios-nvram-network.xml | 25 ++
  tests/qemuxml2argvtest.c  |  2 +
  21 files changed, 360 insertions(+), 75 deletions(-)
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-file.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-file.xml
  create mode 100644 
tests/qemuxml2argvdata/bios-nvram-network.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-network.xml

--
2.25.1





Re: [PATCH v2 3/8] Validate remote store NVRAM

2022-04-11 Thread Rohit Kumar



On 09/04/22 8:19 pm, Ani Sinha wrote:

On Fri, Apr 8, 2022 at 11:19 PM Rohit Kumar  wrote:

The remote store NVRAM feature is enabled only
if QEMU supports the 'blockdev' capability.

Signed-off-by: Prerna Saxena 
Signed-off-by: Florian Schmidt 
Signed-off-by: Rohit Kumar 

Please add negative unit tests to check the validation along with this
patch. Prefix the patch with "qemu:".

Thanks for pointing this out, Ani. Sure, I will add it in the next patch
series.



---
  src/qemu/qemu_validate.c | 22 ++
  1 file changed, 22 insertions(+)

diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
index 96f5427678..2a961b1f50 100644
--- a/src/qemu/qemu_validate.c
+++ b/src/qemu/qemu_validate.c
@@ -611,6 +611,25 @@ qemuValidateDomainDefBoot(const virDomainDef *def)
  }


+static int
+qemuValidateDomainDefNvram(const virDomainDef *def,
+   virQEMUCaps *qemuCaps)
+{
+if (def->os.loader && def->os.loader->nvram) {
+if (def->os.loader->nvram->type !=  VIR_STORAGE_TYPE_FILE &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCKDEV)) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("This Qemu does not support 'blockdev' capability 
"
+ "for remote store NVRAM. NVRAM type other than "
+ "'file' is not supported with this QEMU"));
+return -1;
+}
+}
+
+return 0;
+}
+
+
  /**
   * qemuValidateDefGetVcpuHotplugGranularity:
   * @def: domain definition
@@ -1185,6 +1204,9 @@ qemuValidateDomainDef(const virDomainDef *def,
  if (qemuValidateDomainDefBoot(def) < 0)
  return -1;

+if (qemuValidateDomainDefNvram(def, qemuCaps) < 0)
+return -1;
+
  if (qemuValidateDomainVCpuTopology(def, qemuCaps) < 0)
  return -1;

--
2.25.1





Re: Issue in WinRM on Linux

2022-04-11 Thread Nishit Sharma
Hi Michal,

The thing is that winRM from Linux is unable to open a remote shell in the
Windows guest, as mentioned in the error logs. I am using NAT mode but didn't
use port forwarding. The firewall for 5985 is open in the Windows guest, and
from Linux I can connect to the Windows guest through Enter-PSSession and
telnet. Even on the guest OS, localhost:5985/wsman shows HTTP error 405, no
page. Any suggestions?

Regards
Nishit
On Mon, 11 Apr 2022 at 3:17 PM, Michal Prívozník 
wrote:

> On 4/6/22 16:02, Nishit Sharma wrote:
> > Hi All,
> >
>
> I'm not exactly sure what you are expecting from libvirt here. Libvirt
> does not configure anything inside the guest, for libvirt guest is
> basically a black box (except for a very small set of operations that
> guest agent can do, but Windows Remote Desktop is none of that).
>
> If you are having troubles connecting from outside I suspect a firewall
> issue somewhere. If you are using a libvirt network (e.g. the default
> one) in NAT mode, you want to configure port forward. But I guess you've
> done so since there is some communication going back and forth.
>
> Michal
>
>


[CI] libvirt-gitlab-executor project is now part of the libvirt namespace

2022-04-11 Thread Erik Skultety
This is just a small announcement that the $subj project, which was created to
provision customized VMs as throwaway worker nodes for our functional test
suite utilizing GitLab's custom executor feature, has been transferred to the
libvirt GitLab namespace.
Anything that relates to both infrastructure and executing the functional test
suite in a customized environment should go to this project.

Regards,
Erik



Re: Issue in WinRM on Linux

2022-04-11 Thread Michal Prívozník
On 4/6/22 16:02, Nishit Sharma wrote:
> Hi All,
> 

I'm not exactly sure what you are expecting from libvirt here. Libvirt
does not configure anything inside the guest; for libvirt, a guest is
basically a black box (except for a very small set of operations that
the guest agent can do, but Windows Remote Desktop is none of those).

If you are having trouble connecting from outside, I suspect a firewall
issue somewhere. If you are using a libvirt network (e.g. the default
one) in NAT mode, you want to configure port forwarding. But I guess you've
done that, since there is some communication going back and forth.

Michal



Re: [PATCH v2] conf: Move validation checks from virDomainDiskDefIotuneParse into domain_validate.c

2022-04-11 Thread Michal Prívozník
On 4/9/22 11:38, Moteen Shah wrote:

I recommend using 'git send-email', which does the right thing (as in,
not sending the patch as an attachment).

Quick glance at the patch though: since you are touching the error
messages (moving them, not changing them), please do put them on one
line, like this:


+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write bytes_sec cannot be set at the 
same time"));

Error messages are exempt from the 80-character line length rule, so that
they are easy to 'git grep'.

Michal



Re: [libvirt PATCH] virsh: fix event registration for single event

2022-04-11 Thread Michal Prívozník
On 4/11/22 09:33, Ján Tomko wrote:
> Allocate a larger 'data' array than strictly needed
> for simplicity and use 'ndata' as the index when
> filling it to put the single event at the first unused
> place, instead of at its index in the virshDomainEventCallbacks
> array.
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=2073887
> 
> Fixes: c6bb2746933bbe65877a5f8a8d60e100b0bf8a59
> Signed-off-by: Ján Tomko 
> ---
>  tools/virsh-domain-event.c | 19 ---
>  1 file changed, 8 insertions(+), 11 deletions(-)

Reviewed-by: Michal Privoznik 

Michal



[libvirt PATCH] virsh: fix event registration for single event

2022-04-11 Thread Ján Tomko
Allocate a larger 'data' array than strictly needed
for simplicity and use 'ndata' as the index when
filling it to put the single event at the first unused
place, instead of at its index in the virshDomainEventCallbacks
array.

https://bugzilla.redhat.com/show_bug.cgi?id=2073887

Fixes: c6bb2746933bbe65877a5f8a8d60e100b0bf8a59
Signed-off-by: Ján Tomko 
---
 tools/virsh-domain-event.c | 19 ---
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/tools/virsh-domain-event.c b/tools/virsh-domain-event.c
index 1a2f1cb6e0..6dbb64a655 100644
--- a/tools/virsh-domain-event.c
+++ b/tools/virsh-domain-event.c
@@ -915,23 +915,20 @@ cmdEvent(vshControl *ctl, const vshCmd *cmd)
 return false;
 }
 
-if (eventName)
-data = g_new0(virshDomEventData, 1);
-else
-data = g_new0(virshDomEventData, 
G_N_ELEMENTS(virshDomainEventCallbacks));
+data = g_new0(virshDomEventData, G_N_ELEMENTS(virshDomainEventCallbacks));
 
 for (i = 0; i < G_N_ELEMENTS(virshDomainEventCallbacks); i++) {
 if (eventName &&
 STRNEQ(eventName, virshDomainEventCallbacks[i].name))
 continue;
 
-data[i].event = i;
-data[i].ctl = ctl;
-data[i].loop = loop;
-data[i].count = 
-data[i].timestamp = timestamp;
-data[i].cb = [i];
-data[i].id = -1;
+data[ndata].event = i;
+data[ndata].ctl = ctl;
+data[ndata].loop = loop;
+data[ndata].count = 
+data[ndata].timestamp = timestamp;
+data[ndata].cb = [i];
+data[ndata].id = -1;
 ndata++;
 }
 
-- 
2.35.1



[PATCH] processMonitorEOFEvent: fix deleting the new VM's domain object in processMonitorEOFEvent()

2022-04-11 Thread Yi Wang
From: 王鹏钧10288409 <10288409@zte.in...@lin-184c297bae7.zte.com.cn>

Virsh shutdown is executed first, virsh destroy is executed later, and
the VM is then recreated. In this sequence, processMonitorEOFEvent()
(handling the EOF of the old qemu process) may delete the domain object
belonging to the new VM, because virDomainObjListRemove is called to
remove it from the object list; the new VM can then no longer be found
via virsh list even though its qemu process is still running. Therefore,
add a virDomainObjListFindByName check to avoid deleting the new VM's
domain object from the object list.

The sequence of the problem is shown in the chart below:

 shutdown                    | destroy          | create
=============================+==================+=====================
 qemuMonitorIO               |                  |
 qemuProcessHandleMonitorEOF |                  |
                             | virDomainDestroy |
                             | qemuProcessStop  |
                             |                  | qemuDomainCreateXML
                             |                  | qemuProcessInit
 processMonitorEOFEvent      |                  |
 qemuDomainRemoveInactive    |                  |
                             |                  | qemuProcessLaunch

Signed-off-by: Wang PengJun 
Signed-off-by: Yi Wang 
---
 src/qemu/qemu_driver.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8c0e36e9b2..c16e9a9795 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -4024,6 +4024,7 @@ processMonitorEOFEvent(virQEMUDriver *driver,
 const char *auditReason = "shutdown";
 unsigned int stopFlags = 0;
 virObjectEvent *event = NULL;
+virDomainObj *obj;
 
 if (qemuProcessBeginStopJob(driver, vm, VIR_JOB_DESTROY, true) < 0)
 return;
@@ -4055,8 +4056,17 @@ processMonitorEOFEvent(virQEMUDriver *driver,
 virObjectEventStateQueue(driver->domainEventState, event);
 
  endjob:
-qemuDomainRemoveInactive(driver, vm);
-qemuDomainObjEndJob(vm);
+virObjectUnlock(vm);
+obj = virDomainObjListFindByName(driver->domains, vm->def->name);
+if (vm == obj) {
+qemuDomainRemoveInactive(driver, vm);
+qemuDomainObjEndJob(vm);
+}
+virDomainObjEndAPI();
+if (vm != NULL) {
+virObjectLock(vm);
+qemuDomainObjEndJob(vm);
+}
 }
 
 
-- 
2.27.0