Re: [PATCH 2/2] qemu: fix one more race on undefining and create

2022-04-13 Thread Nikolay Shirokovskiy
On Wed, 13 Apr 2022 at 17:33, Martin Kletzander wrote:
>
> On Wed, Apr 13, 2022 at 11:43:32AM +0300, Nikolay Shirokovskiy wrote:
> >[1] closes a gap in virDomainObjListRemove so that a concurrent thread
> >cannot step in and obtain the domain while the domain is temporarily
> >unlocked. But another gap still exists:
> >
> >thread B - executes create API
> >thread C - executes undefine API
> >
> >- thread A executes some job on the domain
> >- threads B and C obtain the domain from the list and wait for the job
> >  condition
> >- thread A finishes its job and C grabs the job condition, removes the
> >  domain from the list and finishes
> >- thread B grabs the job condition and starts the domain, which
> >  unfortunately is no longer in the list
> >
> >[1] commit c7d1c139ca3402e875002753952e80ce8054374e
> >Author: Martin Kletzander 
> >Date:   Thu Dec 11 11:14:08 2014 +0100
> >
> >qemu: avoid rare race when undefining domain
> >
> >Signed-off-by: Nikolay Shirokovskiy 
>
> Unfortunately not all APIs take jobs (yet), but it can't be worse than
> it is now ;)  Thanks for the patch,
>
> Reviewed-by: Martin Kletzander 

Thanks for the reviews!

Nikolay



[PATCH 1/2] qemu: drop needless acquiring job on removing domain

2022-04-13 Thread Nikolay Shirokovskiy
Acquiring a job was introduced in commit [1] to fix a race described in
that commit. It does not actually help, because in the create API we obtain
the domain before acquiring the job (as sketched below). Later [2] fixed the
race, but [1] was not reverted even though it is not required for [2] to
work properly.

[1] commit b629c64e5e0a32ef439b8eeb3a697e2cd76f3248
Author: Martin Kletzander 
Date:   Thu Oct 30 14:38:35 2014 +0100

qemu: avoid rare race when undefining domain

[2] commit c7d1c139ca3402e875002753952e80ce8054374e
Author: Martin Kletzander 
Date:   Thu Dec 11 11:14:08 2014 +0100

qemu: avoid rare race when undefining domain
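
For illustration, a rough sketch of the ordering in the create path (not
the literal libvirt code, argument lists are approximate):

    /* create API, heavily simplified: the domain object is looked up /
     * added first, before any job is taken ... */
    vm = virDomainObjListAdd(driver->domains, &def, driver->xmlopt, 0, NULL);

    /* ... and only then is a job acquired.  A VIR_JOB_MODIFY held around
     * qemuDomainRemoveInactive() in the undefine path therefore cannot
     * stop this thread from already holding @vm; the race is really
     * closed by [2] inside virDomainObjListRemove() itself. */
    if (qemuProcessBeginJob(driver, vm, VIR_DOMAIN_JOB_OPERATION_START,
                            flags) < 0)
        qemuDomainRemoveInactive(driver, vm);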

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_domain.c| 45 +--
 src/qemu/qemu_domain.h| 10 -
 src/qemu/qemu_driver.c| 12 +--
 src/qemu/qemu_migration.c | 10 -
 src/qemu/qemu_process.c   | 12 ---
 5 files changed, 20 insertions(+), 69 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index a4334de158..95134a3570 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7343,7 +7343,7 @@ qemuDomainRemoveInactive(virQEMUDriver *driver,
  * lock on driver->domains in order to call the remove obj
  * from locked list method.
  */
-static void
+void
 qemuDomainRemoveInactiveLocked(virQEMUDriver *driver,
virDomainObj *vm)
 {
@@ -7357,49 +7357,6 @@ qemuDomainRemoveInactiveLocked(virQEMUDriver *driver,
 virDomainObjListRemoveLocked(driver->domains, vm);
 }
 
-/**
- * qemuDomainRemoveInactiveJob:
- *
- * Just like qemuDomainRemoveInactive but it tries to grab a
- * VIR_JOB_MODIFY first. Even though it doesn't succeed in
- * grabbing the job the control carries with
- * qemuDomainRemoveInactive call.
- */
-void
-qemuDomainRemoveInactiveJob(virQEMUDriver *driver,
-virDomainObj *vm)
-{
-bool haveJob;
-
-haveJob = qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) >= 0;
-
-qemuDomainRemoveInactive(driver, vm);
-
-if (haveJob)
-qemuDomainObjEndJob(vm);
-}
-
-
-/**
- * qemuDomainRemoveInactiveJobLocked:
- *
- * Similar to qemuDomainRemoveInactiveJob, except that the caller must
- * also hold the lock @driver->domains
- */
-void
-qemuDomainRemoveInactiveJobLocked(virQEMUDriver *driver,
-  virDomainObj *vm)
-{
-bool haveJob;
-
-haveJob = qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) >= 0;
-
-qemuDomainRemoveInactiveLocked(driver, vm);
-
-if (haveJob)
-qemuDomainObjEndJob(vm);
-}
-
 
 void
 qemuDomainSetFakeReboot(virDomainObj *vm,
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index f4d84ca43a..0415a34908 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -672,6 +672,10 @@ int qemuDomainSnapshotDiscardAllMetadata(virQEMUDriver 
*driver,
 void qemuDomainRemoveInactive(virQEMUDriver *driver,
   virDomainObj *vm);
 
+void
+qemuDomainRemoveInactiveLocked(virQEMUDriver *driver,
+   virDomainObj *vm);
+
 void qemuDomainSetFakeReboot(virDomainObj *vm,
  bool value);
 
@@ -1036,12 +1040,6 @@ int
 qemuDomainDefNumaCPUsRectify(virDomainDef *def,
  virQEMUCaps *qemuCaps);
 
-void qemuDomainRemoveInactiveJob(virQEMUDriver *driver,
- virDomainObj *vm);
-
-void qemuDomainRemoveInactiveJobLocked(virQEMUDriver *driver,
-   virDomainObj *vm);
-
 int virQEMUFileOpenAs(uid_t fallback_uid,
   gid_t fallback_gid,
   bool dynamicOwnership,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8c0e36e9b2..ee0963c30d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1625,7 +1625,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr 
conn,
 
 if (qemuProcessBeginJob(driver, vm, VIR_DOMAIN_JOB_OPERATION_START,
 flags) < 0) {
-qemuDomainRemoveInactiveJob(driver, vm);
+qemuDomainRemoveInactive(driver, vm);
 goto cleanup;
 }
 
@@ -2751,7 +2751,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
 }
 qemuDomainObjEndAsyncJob(vm);
 if (ret == 0)
-qemuDomainRemoveInactiveJob(driver, vm);
+qemuDomainRemoveInactive(driver, vm);
 
  cleanup:
 virQEMUSaveDataFree(data);
@@ -3228,7 +3228,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
 
 qemuDomainObjEndAsyncJob(vm);
 if (ret == 0 && flags & VIR_DUMP_CRASH)
-qemuDomainRemoveInactiveJob(driver, vm);
+qemuDomainRemoveInactive(driver, vm);
 
  cleanup:
 virDomainObjEndAPI(&vm);
@@ -3536,7 +3536,7 @@ processGuestPanicEvent(virQEMUDriver *driver,
  endjob:
 qemuDomainObjEndAsyncJob(vm);
 if (removeInactive)
-qemuDomainRemoveInactiveJob(driver, vm);
+qemuDomainRemoveInactive(driver, vm);
 }
 
 
@@ -5851,7 +58

[PATCH 2/2] qemu: fix one more race on undefining and create

2022-04-13 Thread Nikolay Shirokovskiy
[1] closes a gap in virDomainObjListRemove so that a concurrent thread
cannot step in and obtain the domain while the domain is temporarily
unlocked. But another gap still exists:

thread B - executes create API
thread C - executes undefine API

- thread A executes some job on the domain
- threads B and C obtain the domain from the list and wait for the job
  condition
- thread A finishes its job and C grabs the job condition, removes the
  domain from the list and finishes
- thread B grabs the job condition and starts the domain, which
  unfortunately is no longer in the list

[1] commit c7d1c139ca3402e875002753952e80ce8054374e
Author: Martin Kletzander 
Date:   Thu Dec 11 11:14:08 2014 +0100

qemu: avoid rare race when undefining domain
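
To make the interleaving concrete, a sketch in pseudo-code (not literal
libvirt code):

    /*
     *   B: obtains @vm from the domain list            (create API)
     *   C: obtains @vm from the domain list            (undefine API)
     *   B, C: qemuDomainObjBeginJob() blocks           (A still owns the job)
     *   A: ends its job
     *   C: gets the job, virDomainObjListRemove(vm), finishes
     *   B: finally gets the job -- but @vm is no longer in the list
     *
     * With this patch qemuDomainObjBeginJobInternal() re-checks the
     * removing flag (obj->removing) once it owns the job condition and
     * fails with VIR_ERR_NO_DOMAIN instead of starting a domain that has
     * already been undefined.
     */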

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_domainjob.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 1cddc7f2f0..cb20b798f7 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -849,6 +849,16 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
 if (!nested && !qemuDomainNestedJobAllowed(&priv->job, job))
 goto retry;
 
+if (obj->removing) {
+char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+virUUIDFormat(obj->def->uuid, uuidstr);
+virReportError(VIR_ERR_NO_DOMAIN,
+   _("no domain with matching uuid '%s' (%s)"),
+   uuidstr, obj->def->name);
+goto cleanup;
+}
+
 ignore_value(virTimeMillisNow());
 
 if (job) {
-- 
2.35.1



[PATCH 0/2] qemu: fix and cleanup for a race on undefine and create

2022-04-13 Thread Nikolay Shirokovskiy
Nikolay Shirokovskiy (2):
  qemu: drop needless acquiring job on removing domain
  qemu: fix one more race on undefining and create

 src/qemu/qemu_domain.c| 45 +--
 src/qemu/qemu_domain.h| 10 -
 src/qemu/qemu_domainjob.c | 10 +
 src/qemu/qemu_driver.c| 12 +--
 src/qemu/qemu_migration.c | 10 -
 src/qemu/qemu_process.c   | 12 ---
 6 files changed, 30 insertions(+), 69 deletions(-)

-- 
2.35.1



[PATCH 2/4] qemu: fix releasing VNC websocket port domain does not own

2022-04-12 Thread Nikolay Shirokovskiy
The scenario involves two domains configured with the same VNC websocket port.

- start the first domain
- start the second one; it will fail because the port is already occupied

As a result the port gets released even though the second domain does not
own it, which breaks the port reservation logic (see the sketch below).
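
The reservation discipline this restores can be sketched as follows
(simplified pseudo-code, not the exact bodies of the libvirt functions):

    /* start: mark the port used only if *we* managed to reserve it */
    if (virPortAllocatorSetUsed(graphics->data.vnc.websocket) < 0)
        return -1;                                /* owned by someone else */
    graphics->data.vnc.websocketReserved = true;  /* remember our ownership */

    /* stop: release only what this domain actually reserved, so a failed
     * second domain no longer frees a port the first domain still uses */
    if (graphics->data.vnc.websocketReserved) {
        virPortAllocatorRelease(graphics->data.vnc.websocket);
        graphics->data.vnc.websocketReserved = false;
    }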

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.h  |  1 +
 src/qemu/qemu_process.c | 10 ++
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 694491cd63..88a411d00c 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1901,6 +1901,7 @@ struct _virDomainGraphicsDef {
 bool portReserved;
 int websocket;
 bool websocketGenerated;
+bool websocketReserved;
 bool autoport;
 char *keymap;
 virDomainGraphicsAuthDef auth;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index cae87cdeca..9c7583a10b 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4678,9 +4678,11 @@ qemuProcessGraphicsReservePorts(virDomainGraphicsDef 
*graphics,
 return -1;
 graphics->data.vnc.portReserved = true;
 }
-if (graphics->data.vnc.websocket > 0 &&
-virPortAllocatorSetUsed(graphics->data.vnc.websocket) < 0)
-return -1;
+if (graphics->data.vnc.websocket > 0) {
+if (virPortAllocatorSetUsed(graphics->data.vnc.websocket) < 0)
+return -1;
+graphics->data.vnc.websocketReserved = true;
+}
 break;
 
 case VIR_DOMAIN_GRAPHICS_TYPE_SPICE:
@@ -8270,7 +8272,7 @@ void qemuProcessStop(virQEMUDriver *driver,
 virPortAllocatorRelease(graphics->data.vnc.websocket);
 graphics->data.vnc.websocketGenerated = false;
 graphics->data.vnc.websocket = -1;
-} else if (graphics->data.vnc.websocket) {
+} else if (graphics->data.vnc.websocketReserved) {
 virPortAllocatorRelease(graphics->data.vnc.websocket);
 }
 }
-- 
2.35.1



[PATCH 4/4] qemu: cleanup code to release SPICE ports

2022-04-12 Thread Nikolay Shirokovskiy
The SPICE port cleanup looks overly complicated. We can just set the
*Reserved flags whenever a port is reserved (auto or non-auto).

Also, the *Reserved flags are not cleared on stop in the case of a reconnect
with autoport (the flags are set on reconnect by the
qemuProcessGraphicsReservePorts call). Granted, the config is freed at the
end of stopping the domain, but still.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c | 25 -
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 6db8bbc421..08a38edfb6 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4071,9 +4071,7 @@ qemuProcessSPICEAllocatePorts(virQEMUDriver *driver,
 return -1;
 
 graphics->data.spice.port = port;
-
-if (!graphics->data.spice.autoport)
-graphics->data.spice.portReserved = true;
+graphics->data.spice.portReserved = true;
 }
 
 if (needTLSPort || graphics->data.spice.tlsPort == -1) {
@@ -4088,9 +4086,7 @@ qemuProcessSPICEAllocatePorts(virQEMUDriver *driver,
 return -1;
 
 graphics->data.spice.tlsPort = tlsPort;
-
-if (!graphics->data.spice.autoport)
-graphics->data.spice.tlsPortReserved = true;
+graphics->data.spice.tlsPortReserved = true;
 }
 
 return 0;
@@ -8279,19 +8275,14 @@ void qemuProcessStop(virQEMUDriver *driver,
 }
 }
 if (graphics->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
-if (graphics->data.spice.autoport) {
+if (graphics->data.spice.portReserved) {
 virPortAllocatorRelease(graphics->data.spice.port);
-virPortAllocatorRelease(graphics->data.spice.tlsPort);
-} else {
-if (graphics->data.spice.portReserved) {
-virPortAllocatorRelease(graphics->data.spice.port);
-graphics->data.spice.portReserved = false;
-}
+graphics->data.spice.portReserved = false;
+}
 
-if (graphics->data.spice.tlsPortReserved) {
-virPortAllocatorRelease(graphics->data.spice.tlsPort);
-graphics->data.spice.tlsPortReserved = false;
-}
+if (graphics->data.spice.tlsPortReserved) {
+virPortAllocatorRelease(graphics->data.spice.tlsPort);
+graphics->data.spice.tlsPortReserved = false;
 }
 }
 }
-- 
2.35.1



[PATCH 3/4] qemu: cleanup code to release VNC websocket port

2022-04-12 Thread Nikolay Shirokovskiy
The VNC websocket port cleanup looks a bit repetitive. Let's set the
websocketReserved flag whenever we reserve the port (auto or not).

Also, the websocketReserved flag is not cleared on stop in the case of a
reconnect with autoport (the flag is set on reconnect by the
qemuProcessGraphicsReservePorts call). Granted, the config is freed at the
end of stopping the domain, but still.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9c7583a10b..6db8bbc421 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4002,6 +4002,7 @@ qemuProcessVNCAllocatePorts(virQEMUDriver *driver,
 return -1;
 graphics->data.vnc.websocket = port;
 graphics->data.vnc.websocketGenerated = true;
+graphics->data.vnc.websocketReserved = true;
 }
 
 return 0;
@@ -8268,12 +8269,13 @@ void qemuProcessStop(virQEMUDriver *driver,
 virPortAllocatorRelease(graphics->data.vnc.port);
 graphics->data.vnc.portReserved = false;
 }
-if (graphics->data.vnc.websocketGenerated) {
+if (graphics->data.vnc.websocketReserved) {
 virPortAllocatorRelease(graphics->data.vnc.websocket);
+graphics->data.vnc.websocketReserved = false;
+}
+if (graphics->data.vnc.websocketGenerated) {
 graphics->data.vnc.websocketGenerated = false;
 graphics->data.vnc.websocket = -1;
-} else if (graphics->data.vnc.websocketReserved) {
-virPortAllocatorRelease(graphics->data.vnc.websocket);
 }
 }
 if (graphics->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
-- 
2.35.1



[PATCH 1/4] qemu: cleanup code to release VNC port

2022-04-12 Thread Nikolay Shirokovskiy
The code to release the VNC port looks repetitive. The reason is that there
were originally two functions to release ports - one for the auto and one
for the non-auto case.

Also, the portReserved flag is not cleared on stop in the case of a reconnect
with autoport (the flag is set on reconnect by the
qemuProcessGraphicsReservePorts call). Granted, the config is freed at the
end of stopping the domain, but still.

Let's set this flag whenever we reserve the port (auto or not). This makes
things clearer.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2e11d24be2..cae87cdeca 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3994,6 +3994,7 @@ qemuProcessVNCAllocatePorts(virQEMUDriver *driver,
 if (virPortAllocatorAcquire(driver->remotePorts, &port) < 0)
 return -1;
 graphics->data.vnc.port = port;
+graphics->data.vnc.portReserved = true;
 }
 
 if (graphics->data.vnc.websocket == -1) {
@@ -8261,9 +8262,7 @@ void qemuProcessStop(virQEMUDriver *driver,
 for (i = 0; i < vm->def->ngraphics; ++i) {
 virDomainGraphicsDef *graphics = vm->def->graphics[i];
 if (graphics->type == VIR_DOMAIN_GRAPHICS_TYPE_VNC) {
-if (graphics->data.vnc.autoport) {
-virPortAllocatorRelease(graphics->data.vnc.port);
-} else if (graphics->data.vnc.portReserved) {
+if (graphics->data.vnc.portReserved) {
 virPortAllocatorRelease(graphics->data.vnc.port);
 graphics->data.vnc.portReserved = false;
 }
-- 
2.35.1



[PATCH 0/4] qemu: a bugfix and a few cleanups of graphics port releasing

2022-04-12 Thread Nikolay Shirokovskiy
Nikolay Shirokovskiy (4):
  qemu: cleanup code to release VNC port
  qemu: fix releasing VNC websocket port domain does not own
  qemu: cleanup code to release VNC websocket port
  qemu: cleanup code to release SPICE ports

 src/conf/domain_conf.h  |  1 +
 src/qemu/qemu_process.c | 46 ++---
 2 files changed, 21 insertions(+), 26 deletions(-)

-- 
2.35.1



[RFC PING PATCH 0/2] logging: add service to cleanup internal log files

2022-04-07 Thread Nikolay Shirokovskiy
Hi all!

Please take a look at the RFC linked below. Sorry, I switched to a new email
address and cannot reply to the original thread directly to draw attention to it.

[RFC PATCH 0/2] logging: add service to cleanup internal log files
https://listman.redhat.com/archives/libvir-list/2022-February/228817.html

Nikolay



Re: [PATCH] qemu: Rename @main variable in qemuDomainRemoveLogs()

2022-04-06 Thread Nikolay Shirokovskiy
On Wed, 6 Apr 2022 at 11:11, Michal Privoznik wrote:
>
> Older GCC fails to understand that 'char *main' is a variable and
> not the main() function. Rename the variable to appease old GCC.
>
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_domain.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
> index c35e5c09a3..a4334de158 100644
> --- a/src/qemu/qemu_domain.c
> +++ b/src/qemu/qemu_domain.c
> @@ -11828,7 +11828,7 @@ qemuDomainRemoveLogs(virQEMUDriver *driver,
>  {
>  g_autoptr(virQEMUDriverConfig) cfg = NULL;
>  g_autofree char *format = NULL;
> -g_autofree char *main = NULL;
> +g_autofree char *main_log = NULL;
>  g_autoptr(DIR) dir = NULL;
>  struct dirent *entry;
>  int rc;
> @@ -11840,13 +11840,13 @@ qemuDomainRemoveLogs(virQEMUDriver *driver,
>  if (virDirOpen(&dir, cfg->logDir) < 0)
>  return -1;
>
> -main = g_strdup_printf("%s.log", name);
> +main_log = g_strdup_printf("%s.log", name);
>  format = g_strdup_printf("%s.log.%%u", name);
>
>  while ((rc = virDirRead(dir, &entry, cfg->logDir)) > 0) {
>  unsigned int u;
>
> -if (STREQ(entry->d_name, main) ||
> +if (STREQ(entry->d_name, main_log) ||
>  sscanf(entry->d_name, format, &u) == 1) {
>  g_autofree char *path = NULL;
>
> --
> 2.35.1
>
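
For anyone curious, a minimal standalone sketch of the construct old GCC
complains about (my own reproducer; the exact diagnostic, and whether it
fires for block-scope variables, depends on the GCC version):

    /* build with something like: gcc -c -Wall -Werror main_var.c */
    #include <stddef.h>

    void probe(void)
    {
        char *main = NULL;   /* old GCC: "'main' is usually a function" */
        (void)main;
    }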

Reviewed-by: Nikolay Shirokovskiy 

Nikolay



[PATCH RESEND] qemu: disarm fake reboot flag on reset

2022-04-05 Thread Nikolay Shirokovskiy
From: Maxim Nestratov 

This is quite an old patch (created in 2016) fixing an issue seen on the
then-contemporary Fedora 23. virsh reboot returns success (yet only after
hanging for a while) and the VM is rebooted successfully too, but a later
shutdown from inside the guest then causes a reboot instead of a shutdown.

The VM has the agent installed, so virsh reboot first tries to reboot the VM
through the agent. The agent calls the 'shutdown -r' command. Typically it
returns instantly, but on this distro for some reason it takes time. I did
not investigate the cause, but the command waits in dbus client code,
probably for a reply. Libvirt waits 60s for the agent command to execute and
then errors out. Next the reboot API falls back to ACPI shutdown, which
returns successfully, so the reboot command returns success too.

Yet the shutdown command in the guest eventually succeeds and the guest is
truly rebooted. So libvirt does not receive a SHUTDOWN event, and the fake
reboot flag, which is armed on the fallback path, stays armed. Thus the next
shutdown from inside the guest leads to a reboot.

The issue reproduces 100% of the time on Fedora 23. On modern distros I
cannot reproduce it at all: the shutdown command is asynchronous and returns
immediately even if I start a service that ignores the TERM signal so that
the shutdown procedure waits for 90s (if I am not mistaken) before sending
KILL.

Still, I guess it is nice to have this patch to make the behaviour more
robust (the flow is sketched below).
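
A sketch of the flag handling this patch completes (not literal code;
qemuDomainSetFakeReboot() is the helper touched in the hunk below):

    /* reboot API: agent reboot timed out, the ACPI fallback arms the flag */
    qemuDomainSetFakeReboot(vm, true);

    /* the guest reboots on its own: libvirt only sees a RESET event, so
     * the handler now disarms the flag again (this patch) */
    qemuDomainSetFakeReboot(vm, false);    /* in qemuProcessHandleReset() */

    /* a later SHUTDOWN event is then handled as a real shutdown instead
     * of being turned into a reset by the stale flag */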

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f63fc3389d..d81ed9391a 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -435,6 +435,7 @@ qemuProcessHandleReset(qemuMonitor *mon G_GNUC_UNUSED,
 if (priv->agent)
 qemuAgentNotifyEvent(priv->agent, QEMU_AGENT_EVENT_RESET);
 
+qemuDomainSetFakeReboot(vm, false);
 qemuDomainSaveStatus(vm);
 
  unlock:
-- 
2.35.1



[PATCH] qemu: disarm fake reboot flag on reset

2022-04-05 Thread Nikolay Shirokovskiy




[PATCH 10/11] qemu: use snapshot-load for modern QEMU to revert to snapshot

2022-03-31 Thread Nikolay Shirokovskiy
Note that if a snapshot was taken using the old QEMU API then it is also
loaded using the old QEMU API, as we do not know on which disk the vmstate
resides.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c  |  8 +-
 src/qemu/qemu_snapshot.c | 54 
 2 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 6ed7eaaa83..eac6b00ff4 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -98,6 +98,7 @@
 #include "virutil.h"
 #include "storage_source.h"
 #include "backup_conf.h"
+#include "virdomainsnapshotobjlist.h"
 
 #include "logging/log_manager.h"
 #include "logging/log_protocol.h"
@@ -7389,6 +7390,7 @@ qemuProcessLaunch(virConnectPtr conn,
 size_t nnicindexes = 0;
 g_autofree int *nicindexes = NULL;
 unsigned long long maxMemLock = 0;
+bool backcompatSnapshot;
 
 VIR_DEBUG("conn=%p driver=%p vm=%p name=%s if=%d asyncJob=%d "
   "incoming.launchURI=%s incoming.deferredURI=%s "
@@ -7444,11 +7446,15 @@ qemuProcessLaunch(virConnectPtr conn,
 if (qemuExtDevicesStart(driver, vm, incoming != NULL) < 0)
 goto cleanup;
 
+if (snapshot)
+backcompatSnapshot = !virQEMUCapsGet(priv->qemuCaps, 
QEMU_CAPS_SNAPSHOT_SAVE) ||
+ !virDomainSnapshotObjGetDef(snapshot)->memorydisk;
+
 VIR_DEBUG("Building emulator command line");
 if (!(cmd = qemuBuildCommandLine(driver,
  vm,
  incoming ? incoming->launchURI : NULL,
- snapshot, vmop,
+ snapshot && backcompatSnapshot ? snapshot 
: NULL, vmop,
  false,
  qemuCheckFips(vm),
  , , 0)))
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 9f81befe85..605288f6c5 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -2395,6 +2395,54 @@ qemuSnapshotRevertWriteMetadata(virDomainObj *vm,
 }
 
 
+static int
+qemuSnapshotLoadState(virQEMUDriver *driver,
+  virDomainObj *vm,
+  virDomainMomentObj *snap)
+{
+qemuDomainObjPrivate *priv = vm->privateData;
+virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+g_autoptr(GPtrArray) devices = g_ptr_array_new();
+const char *memoryNode;
+int ret = -1;
+int rc;
+
+if (!(devices = qemuSnapshotGetDisksNodes(snapdef, vm->def, &memoryNode)))
+goto cleanup;
+
+if (qemuDomainObjEnterMonitorAsync(driver, vm,
+   VIR_ASYNC_JOB_START) < 0)
+goto cleanup;
+rc = qemuMonitorSnapshotLoad(priv->mon,
+ "snapshot-load",
+ snap->def->name,
+ memoryNode,
+ (const char **)devices->pdata,
+ devices->len);
+qemuDomainObjExitMonitor(vm);
+if (rc < 0)
+goto cleanup;
+
+if (qemuSnapshotWaitJob(driver, vm, VIR_ASYNC_JOB_START,
+"snapshot-load") < 0)
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+if (ret < 0 && virDomainObjIsActive(vm)) {
+virErrorPtr err;
+
+virErrorPreserveLast();
+qemuProcessStop(driver, vm,
+VIR_DOMAIN_SHUTOFF_FAILED,
+VIR_ASYNC_JOB_START, 0);
+virErrorRestore();
+}
+return ret;
+}
+
+
 static int
 qemuSnapshotRevertActive(virDomainObj *vm,
  virDomainSnapshotPtr snapshot,
@@ -2407,6 +2455,7 @@ qemuSnapshotRevertActive(virDomainObj *vm,
  unsigned int start_flags,
  unsigned int flags)
 {
+qemuDomainObjPrivate *priv = vm->privateData;
 virObjectEvent *event = NULL;
 virObjectEvent *event2 = NULL;
 int detail;
@@ -2458,6 +2507,11 @@ qemuSnapshotRevertActive(virDomainObj *vm,
 if (rc < 0)
 return -1;
 
+if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_SAVE) &&
+snapdef->memorydisk &&
+qemuSnapshotLoadState(driver, vm, snap) < 0)
+return -1;
+
 /* Touch up domain state.  */
 if (!(flags & VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING) &&
 (snapdef->state == VIR_DOMAIN_SNAPSHOT_PAUSED ||
-- 
2.35.1



[PATCH 07/11] qemu: check internal snapshot memory disk

2022-03-31 Thread Nikolay Shirokovskiy
- check that it is supported by QEMU
- check that the disk itself has an internal snapshot
- for modern QEMU, select the memory disk automatically if it is not
  specified explicitly

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 32 
 1 file changed, 32 insertions(+)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 579d744c60..54eafb5020 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -1852,6 +1852,38 @@ qemuSnapshotCreateAlignDisks(virDomainObj *vm,
 if (virDomainSnapshotAlignDisks(def, NULL, align_location, true) < 0)
 return -1;
 
+if (def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL) {
+bool modern = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_SAVE);
+
+if (modern) {
+size_t i;
+
+for (i = 0; i < def->ndisks; i++) {
+if (def->disks[i].snapshot != 
VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL)
+continue;
+
+if (def->memorydisk) {
+if (STREQ(def->memorydisk, def->disks[i].name))
+break;
+} else {
+def->memorydisk = g_strdup(def->disks[i].name);
+break;
+}
+}
+
+if (i == def->ndisks && def->memorydisk) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+   _("memory disk '%s' should itself have internal 
snapshot"),
+   def->memorydisk);
+return -1;
+}
+} else if (def->memorydisk) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("specifing memory disk is not supported by this 
QEMU"));
+return -1;
+}
+}
+
 return 0;
 }
 
-- 
2.35.1



[PATCH 11/11] qemu: use snapshot-delete for modern QEMU to delete snapshot

2022-03-31 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 54 ++--
 1 file changed, 46 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 605288f6c5..fb34c21495 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -267,6 +267,50 @@ qemuSnapshotWaitJob(virQEMUDriver *driver,
 }
 
 
+static int
+qemuSnapshotDiscardDataActive(virQEMUDriver *driver,
+  virDomainObj *vm,
+  virDomainMomentObj *snap)
+{
+qemuDomainObjPrivate *priv = vm->privateData;
+bool modern = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_SNAPSHOT_SAVE);
+virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+g_autoptr(GPtrArray) devices = g_ptr_array_new();
+int rc;
+
+if (!modern) {
+if (qemuDomainObjEnterMonitorAsync(driver, vm,
+   VIR_ASYNC_JOB_SNAPSHOT_DELETE) < 0)
+return -1;
+
+rc = qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
+qemuDomainObjExitMonitor(vm);
+if (rc < 0)
+return -1;
+
+return 0;
+}
+
+if (!(devices = qemuSnapshotGetDisksNodes(snapdef, vm->def, NULL)))
+return -1;
+
+if (qemuDomainObjEnterMonitorAsync(driver, vm,
+   VIR_ASYNC_JOB_SNAPSHOT_DELETE) < 0)
+return -1;
+rc = qemuMonitorSnapshotDelete(priv->mon,
+   "snapshot-delete",
+   snap->def->name,
+   (const char **)devices->pdata,
+   devices->len);
+qemuDomainObjExitMonitor(vm);
+if (rc < 0)
+return -1;
+
+return qemuSnapshotWaitJob(driver, vm, VIR_ASYNC_JOB_SNAPSHOT_DELETE,
+   "snapshot-delete");
+}
+
+
 /* Discard one snapshot (or its metadata), without reparenting any children.  
*/
 static int
 qemuSnapshotDiscard(virQEMUDriver *driver,
@@ -276,7 +320,6 @@ qemuSnapshotDiscard(virQEMUDriver *driver,
 bool metadata_only)
 {
 g_autofree char *snapFile = NULL;
-qemuDomainObjPrivate *priv;
 virDomainMomentObj *parentsnap = NULL;
 g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
 
@@ -301,13 +344,8 @@ qemuSnapshotDiscard(virQEMUDriver *driver,
 if (qemuSnapshotForEachQcow2(driver, def, snap, "-d", true) < 0)
 return -1;
 } else {
-priv = vm->privateData;
-if (qemuDomainObjEnterMonitorAsync(driver, vm,
-   VIR_ASYNC_JOB_SNAPSHOT) == 0) {
-/* we continue on even in the face of error */
-qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
-qemuDomainObjExitMonitor(vm);
-}
+/* we continue on even in the face of error */
+qemuSnapshotDiscardDataActive(driver, vm, snap);
 }
 }
 
-- 
2.35.1



[PATCH 09/11] qemu: use snapshot-save for modern qemu to create snapshot

2022-03-31 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_process.c  |  11 +++
 src/qemu/qemu_snapshot.c | 179 +--
 2 files changed, 182 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9918423701..6ed7eaaa83 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -958,6 +958,17 @@ qemuProcessHandleJobStatusChange(qemuMonitor *mon 
G_GNUC_UNUSED,
   jobname, vm, vm->def->name,
   qemuMonitorJobStatusTypeToString(status), status);
 
+if (STREQ(jobname, "snapshot-save") ||
+STREQ(jobname, "snapshot-delete") ||
+STREQ(jobname, "snapshot-load")) {
+if (status == QEMU_MONITOR_JOB_STATUS_CONCLUDED && priv->job.current) {
+priv->job.current->status = VIR_DOMAIN_JOB_STATUS_COMPLETED;
+virDomainObjBroadcast(vm);
+}
+
+goto cleanup;
+}
+
 if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BLOCKDEV)) {
 VIR_DEBUG("job '%s' handled by old blockjob handler", jobname);
 goto cleanup;
diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 54eafb5020..9f81befe85 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -142,6 +142,131 @@ qemuSnapshotForEachQcow2(virQEMUDriver *driver,
 }
 
 
+static GPtrArray *
+qemuSnapshotGetDisksNodes(virDomainSnapshotDef *snapdef,
+  virDomainDef *def,
+  const char **memoryNode)
+{
+g_autoptr(GPtrArray) devices = g_ptr_array_new();
+size_t i;
+
+if (memoryNode)
+*memoryNode = NULL;
+
+for (i = 0; i < snapdef->ndisks; i++) {
+if (snapdef->disks[i].snapshot == 
VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL)
+g_ptr_array_add(devices, def->disks[i]->src->nodeformat);
+
+if (memoryNode && STREQ(snapdef->memorydisk, snapdef->disks[i].name))
+*memoryNode = def->disks[i]->src->nodeformat;
+}
+
+if (memoryNode && !*memoryNode) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("cannot find vmstate disk '%s'"), 
snapdef->memorydisk);
+return NULL;
+}
+
+return g_steal_pointer(&devices);
+}
+
+
+static int
+qemuSnapshotDismissJob(virQEMUDriver *driver,
+   virDomainObj *vm,
+   virDomainAsyncJob asyncJob,
+   const char *jobid)
+{
+qemuDomainObjPrivate *priv = vm->privateData;
+int rc;
+
+if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+return -1;
+
+rc = qemuMonitorJobDismiss(priv->mon, jobid);
+qemuDomainObjExitMonitor(vm);
+if (rc < 0)
+return -1;
+
+return 0;
+}
+
+
+static int
+qemuSnapshotWaitJob(virQEMUDriver *driver,
+virDomainObj *vm,
+virDomainAsyncJob asyncJob,
+const char *jobid)
+{
+qemuDomainObjPrivate *priv = vm->privateData;
+qemuMonitorJobInfo **jobs = NULL;
+size_t njobs = 0;
+qemuMonitorJobInfo *job = NULL;
+int ret = -1;
+size_t i;
+int rc;
+
+while (priv->job.current->status != VIR_DOMAIN_JOB_STATUS_COMPLETED) {
+/*
+ * We can't do much if wait fails and if domain is still active as in
+ * order to cleanup we need to call job-cancel and again wait for
+ * concluded state.
+ */
+if (virDomainObjWait(vm) < 0)
+return -1;
+}
+
+if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0)
+goto cleanup;
+rc = qemuMonitorGetJobInfo(priv->mon, &jobs, &njobs);
+qemuDomainObjExitMonitor(vm);
+if (rc < 0)
+goto cleanup;
+
+for (i = 0; i < njobs; i++) {
+if (STREQ_NULLABLE(jobs[i]->id, jobid)) {
+job = jobs[i];
+break;
+}
+}
+
+if (!job) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("cannot obtain status of '%s' job"), jobid);
+goto cleanup;
+}
+if (job->status != QEMU_MONITOR_JOB_STATUS_CONCLUDED) {
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("unexpected '%s' job status '%s'"), jobid,
+   qemuMonitorJobStatusTypeToString(job->status));
+goto cleanup;
+}
+if (job->error) {
+virReportError(VIR_ERR_OPERATION_FAILED,
+   _("'%s' job failed '%s'"), jobid, job->error);
+goto cleanup;
+}
+
+ret = 0;
+
+ cleanup:
+if (virDomainObjIsActive(vm)) {
+virErrorPtr err;
+
+virErrorPreserveLast();
+if (qemuSnapshotDismissJob(driver, vm, asyncJob, jobid) < 0)
+VIR_WARN("failed to dismiss job '%s'", jobid);
+virErrorRestore();
+}
+
+for (i = 0; i < njo

[PATCH 08/11] qemu: add snapshot job types to qemuMonitorJobType

2022-03-31 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_monitor.c | 3 +++
 src/qemu/qemu_monitor.h | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 4b33407e50..77682f817c 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -119,6 +119,9 @@ VIR_ENUM_IMPL(qemuMonitorJob,
   "mirror",
   "backup",
   "create",
+  "snapshot-save",
+  "snapshot-load",
+  "snapshot-delete",
 );
 
 VIR_ENUM_IMPL(qemuMonitorJobStatus,
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 8067236693..67715ef252 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -139,6 +139,9 @@ typedef enum {
 QEMU_MONITOR_JOB_TYPE_MIRROR,
 QEMU_MONITOR_JOB_TYPE_BACKUP,
 QEMU_MONITOR_JOB_TYPE_CREATE,
+QEMU_MONITOR_JOB_TYPE_SNAPSHOT_SAVE,
+QEMU_MONITOR_JOB_TYPE_SNAPSHOT_LOAD,
+QEMU_MONITOR_JOB_TYPE_SNAPSHOT_DELETE,
 QEMU_MONITOR_JOB_TYPE_LAST
 } qemuMonitorJobType;
 
-- 
2.35.1



[PATCH 06/11] conf: add memory state disk for internal snapshots

2022-03-31 Thread Nikolay Shirokovskiy
This can be used to specify disk where to place vmstate data.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/snapshot_conf.c | 13 +
 src/conf/snapshot_conf.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/src/conf/snapshot_conf.c b/src/conf/snapshot_conf.c
index 80946beba9..8c9ee03942 100644
--- a/src/conf/snapshot_conf.c
+++ b/src/conf/snapshot_conf.c
@@ -117,6 +117,7 @@ virDomainSnapshotDefDispose(void *obj)
 size_t i;
 
 g_free(def->memorysnapshotfile);
+g_free(def->memorydisk);
 for (i = 0; i < def->ndisks; i++)
 virDomainSnapshotDiskDefClear(>disks[i]);
 g_free(def->disks);
@@ -305,6 +306,7 @@ virDomainSnapshotDefParse(xmlXPathContextPtr ctxt,
 
 if ((memoryNode = virXPathNode("./memory", ctxt))) {
 def->memorysnapshotfile = virXMLPropString(memoryNode, "file");
+def->memorydisk = virXMLPropString(memoryNode, "disk");
 
 if (virXMLPropEnumDefault(memoryNode, "snapshot",
   virDomainSnapshotLocationTypeFromString,
@@ -323,6 +325,8 @@ virDomainSnapshotDefParse(xmlXPathContextPtr ctxt,
 if (def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_DEFAULT) {
 if (def->memorysnapshotfile) {
 def->memory = VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL;
+} else if (def->memorydisk) {
+def->memory = VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL;
 } else if (flags & VIR_DOMAIN_SNAPSHOT_PARSE_REDEFINE) {
 if (offline) {
 def->memory = VIR_DOMAIN_SNAPSHOT_LOCATION_NO;
@@ -347,6 +351,14 @@ virDomainSnapshotDefParse(xmlXPathContextPtr ctxt,
 return NULL;
 }
 
+if (def->memorydisk &&
+def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL) {
+virReportError(VIR_ERR_XML_ERROR,
+   _("memory disk '%s' requires internal snapshot"),
+   def->memorydisk);
+return NULL;
+}
+
 if (offline &&
 def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_DEFAULT &&
 def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_NO) {
@@ -844,6 +856,7 @@ virDomainSnapshotDefFormatInternal(virBuffer *buf,
 virBufferAsprintf(buf, "<memory snapshot='%s'",
                   virDomainSnapshotLocationTypeToString(def->memory));
 virBufferEscapeString(buf, " file='%s'", def->memorysnapshotfile);
+virBufferEscapeString(buf, " disk='%s'", def->memorydisk);
 virBufferAddLit(buf, "/>\n");
 }
 
diff --git a/src/conf/snapshot_conf.h b/src/conf/snapshot_conf.h
index 1f787f1a94..b7b448a071 100644
--- a/src/conf/snapshot_conf.h
+++ b/src/conf/snapshot_conf.h
@@ -72,6 +72,7 @@ struct _virDomainSnapshotDef {
 
 virDomainSnapshotLocation memory;
 char *memorysnapshotfile; /* memory state file when snapshot is external */
+char *memorydisk; /* memory state disk when snapshot is internal */
 
 size_t ndisks; /* should not exceed dom->ndisks */
 virDomainSnapshotDiskDef *disks;
-- 
2.35.1



[PATCH 04/11] qemu: introduce QEMU_ASYNC_JOB_SNAPSHOT_DELETE

2022-03-31 Thread Nikolay Shirokovskiy
We are going to use this job for the snapshot delete API. Currently the
snapshot delete API uses QEMU_JOB_MODIFY.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/hypervisor/domain_job.c | 1 +
 src/hypervisor/domain_job.h | 1 +
 src/qemu/qemu_domainjob.c   | 2 ++
 src/qemu/qemu_driver.c  | 6 ++
 src/qemu/qemu_migration.c   | 2 ++
 src/qemu/qemu_process.c | 3 +++
 6 files changed, 15 insertions(+)

diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index ff4e008cb5..ad71a77150 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -37,6 +37,7 @@ VIR_ENUM_IMPL(virDomainAsyncJob,
   "save",
   "dump",
   "snapshot",
+  "snapshot delete",
   "start",
   "backup",
 );
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index db8b8b1390..cb24e3f743 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -63,6 +63,7 @@ typedef enum {
 VIR_ASYNC_JOB_SAVE,
 VIR_ASYNC_JOB_DUMP,
 VIR_ASYNC_JOB_SNAPSHOT,
+VIR_ASYNC_JOB_SNAPSHOT_DELETE,
 VIR_ASYNC_JOB_START,
 VIR_ASYNC_JOB_BACKUP,
 
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 56a3f6cb19..89f721abc9 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -85,6 +85,7 @@ virDomainAsyncJobPhaseToString(virDomainAsyncJob job,
 case VIR_ASYNC_JOB_SAVE:
 case VIR_ASYNC_JOB_DUMP:
 case VIR_ASYNC_JOB_SNAPSHOT:
+case VIR_ASYNC_JOB_SNAPSHOT_DELETE:
 case VIR_ASYNC_JOB_START:
 case VIR_ASYNC_JOB_NONE:
 case VIR_ASYNC_JOB_BACKUP:
@@ -111,6 +112,7 @@ virDomainAsyncJobPhaseFromString(virDomainAsyncJob job,
 case VIR_ASYNC_JOB_SAVE:
 case VIR_ASYNC_JOB_DUMP:
 case VIR_ASYNC_JOB_SNAPSHOT:
+case VIR_ASYNC_JOB_SNAPSHOT_DELETE:
 case VIR_ASYNC_JOB_START:
 case VIR_ASYNC_JOB_NONE:
 case VIR_ASYNC_JOB_BACKUP:
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 74bc2c7bf4..547cffa1bc 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12741,6 +12741,12 @@ static int qemuDomainAbortJob(virDomainPtr dom)
 ret = 0;
 break;
 
+case VIR_ASYNC_JOB_SNAPSHOT_DELETE:
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("snapshot deletion expected to be exclusive job"));
+goto endjob;
+break;
+
 case VIR_ASYNC_JOB_LAST:
 default:
 virReportEnumRangeError(virDomainAsyncJob, priv->job.asyncJob);
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 3e653543c6..cb4c0e544a 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1717,6 +1717,8 @@ qemuMigrationJobName(virDomainObj *vm)
 return _("snapshot job");
 case VIR_ASYNC_JOB_START:
 return _("start job");
+case VIR_ASYNC_JOB_SNAPSHOT_DELETE:
+return _("snapshot delete job");
 case VIR_ASYNC_JOB_BACKUP:
 return _("backup job");
 case VIR_ASYNC_JOB_LAST:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f110e4f3dd..9918423701 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3648,6 +3648,9 @@ qemuProcessRecoverJob(virQEMUDriver *driver,
 /* Already handled in VIR_DOMAIN_PAUSED_STARTING_UP check. */
 break;
 
+case VIR_ASYNC_JOB_SNAPSHOT_DELETE:
+break;
+
 case VIR_ASYNC_JOB_BACKUP:
 ignore_value(virTimeMillisNow());
 
-- 
2.35.1



[PATCH 02/11] qemu: add monitor commands to snapshot-{save, load, delete}

2022-03-31 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_monitor.c  |  53 ++
 src/qemu/qemu_monitor.h  |  23 
 src/qemu/qemu_monitor_json.c | 106 +++
 src/qemu/qemu_monitor_json.h |  23 
 4 files changed, 205 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 316cff5b9b..4b33407e50 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4566,3 +4566,56 @@ qemuMonitorChangeMemoryRequestedSize(qemuMonitor *mon,
 
 return qemuMonitorJSONChangeMemoryRequestedSize(mon, alias, requestedsize);
 }
+
+
+int
+qemuMonitorSnapshotSave(qemuMonitor *mon,
+const char *jobname,
+const char *snapid,
+const char *vmstateDevice,
+const char **devices,
+size_t ndevices)
+{
+VIR_DEBUG("jobname=%s, snapid=%s, vmstateDevice=%s, devices=%p, 
ndevices=%zd",
+  jobname, snapid, vmstateDevice, devices, ndevices);
+
+QEMU_CHECK_MONITOR(mon);
+
+return qemuMonitorJSONSnapshotSave(mon, jobname, snapid, vmstateDevice,
+   devices, ndevices);
+}
+
+
+int
+qemuMonitorSnapshotLoad(qemuMonitor *mon,
+const char *jobname,
+const char *snapid,
+const char *vmstateDevice,
+const char **devices,
+size_t ndevices)
+{
+VIR_DEBUG("jobname=%s, snapid=%s, vmstateDevice=%s, devices=%p, 
ndevices=%zd",
+  jobname, snapid, vmstateDevice, devices, ndevices);
+
+QEMU_CHECK_MONITOR(mon);
+
+return qemuMonitorJSONSnapshotLoad(mon, jobname, snapid, vmstateDevice,
+   devices, ndevices);
+}
+
+
+int
+qemuMonitorSnapshotDelete(qemuMonitor *mon,
+  const char *jobname,
+  const char *snapid,
+  const char **devices,
+  size_t ndevices)
+{
+VIR_DEBUG("jobname=%s, snapid=%s, devices=%p, ndevices=%zd",
+  jobname, snapid, devices, ndevices);
+
+QEMU_CHECK_MONITOR(mon);
+
+return qemuMonitorJSONSnapshotDelete(mon, jobname, snapid,
+ devices, ndevices);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 5c2a749282..8067236693 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1588,3 +1588,26 @@ int
 qemuMonitorChangeMemoryRequestedSize(qemuMonitor *mon,
  const char *alias,
  unsigned long long requestedsize);
+
+int
+qemuMonitorSnapshotSave(qemuMonitor *mon,
+const char *jobname,
+const char *snapid,
+const char *vmstateDevice,
+const char **devices,
+size_t ndevices);
+
+int
+qemuMonitorSnapshotLoad(qemuMonitor *mon,
+const char *jobname,
+const char *snapid,
+const char *vmstateDevice,
+const char **devices,
+size_t ndevices);
+
+int
+qemuMonitorSnapshotDelete(qemuMonitor *mon,
+  const char *jobname,
+  const char *snapid,
+  const char **devices,
+  size_t ndevices);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index d5622bd6d9..864be427c3 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8966,3 +8966,109 @@ qemuMonitorJSONChangeMemoryRequestedSize(qemuMonitor 
*mon,
 
 return qemuMonitorJSONSetObjectProperty(mon, path, "requested-size", &prop);
 }
+
+
+int
+qemuMonitorJSONSnapshotSave(qemuMonitor *mon,
+const char *jobname,
+const char *snapid,
+const char *vmstateDevice,
+const char **devices,
+size_t ndevices)
+{
+g_autoptr(virJSONValue) devicesJSON = virJSONValueNewArray();
+g_autoptr(virJSONValue) cmd = NULL;
+g_autoptr(virJSONValue) reply = NULL;
+size_t i;
+
+for (i = 0; i < ndevices; i++)
+if (virJSONValueArrayAppendString(devicesJSON, devices[i]) < 0)
+return -1;
+
+cmd = qemuMonitorJSONMakeCommand("snapshot-save",
+ "s:job-id", jobname,
+ "s:tag", snapid,
+ "s:vmstate", vmstateDevice,
+ "a:devices", ,
+ NULL);
+if (!cmd)
+return -1;
+
+if (qemuMonitor

[PATCH 00/11] qemu: switch to modern QEMU's snapshot-* API for

2022-03-31 Thread Nikolay Shirokovskiy
This series applies on top of [1].

In this series I still use the old API to load a snapshot if the snapshot
was taken using the old API. The thing is, snapshot-load requires the
"vmstate" parameter to be set, and for an old snapshot the disk which holds
the vmstate is not known to libvirt. This is unfortunate, as we will need to
keep using the old API for a long time even after we drop support for QEMU
older than 6.0 (where the snapshot-* API was introduced). I guess the best
option would be to allow omitting "vmstate" for snapshot-load, falling back
to a default disk for the vmstate (see the sketch below).
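
For context, the modern load path added later in the series boils down to a
call like this (sketch taken from patch 10/11; the "vmstate" argument names
the disk node holding the VM state, which is exactly the information an
old-style snapshot does not record):

    rc = qemuMonitorSnapshotLoad(priv->mon,
                                 "snapshot-load",          /* job-id */
                                 snap->def->name,          /* tag */
                                 memoryNode,               /* vmstate disk */
                                 (const char **)devices->pdata,
                                 devices->len);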

The code for recovering after a libvirtd crash/restart is missing. I'll work
on it too if the overall approach is accepted.

[PATCH v2 REBASE 0/3] qemu: don't pause VM when creating internal snapshot
https://listman.redhat.com/archives/libvir-list/2022-March/229719.html

Nikolay Shirokovskiy (11):
  qemu: add QEMU_CAPS_SNAPSHOT_SAVE capability
  qemu: add monitor commands to snapshot-{save, load, delete}
  qemu: move snapshot related funcs from domain.c to snapshot.c
  qemu: introduce QEMU_ASYNC_JOB_SNAPSHOT_DELETE
  qemu: snapshot: prepare for async snapshot deletion
  conf: add memory state disk for internal snapshots
  qemu: check internal snapshot memory disk
  qemu: add snapshot job types to qemuMonitorJobType
  qemu: use snapshot-save for modern qemu to create snapshot
  qemu: use snapshot-load for modern QEMU to revert to snapshot
  qemu: use snapshot-delete for modern QEMU to delete snapshot

 src/conf/snapshot_conf.c  |  13 +
 src/conf/snapshot_conf.h  |   1 +
 src/hypervisor/domain_job.c   |   1 +
 src/hypervisor/domain_job.h   |   1 +
 src/qemu/qemu_capabilities.c  |   2 +
 src/qemu/qemu_capabilities.h  |   1 +
 src/qemu/qemu_domain.c| 241 +---
 src/qemu/qemu_domain.h|  19 -
 src/qemu/qemu_domainjob.c |   2 +
 src/qemu/qemu_driver.c|  14 +-
 src/qemu/qemu_migration.c |   2 +
 src/qemu/qemu_monitor.c   |  56 ++
 src/qemu/qemu_monitor.h   |  26 +
 src/qemu/qemu_monitor_json.c  | 106 
 src/qemu/qemu_monitor_json.h  |  23 +
 src/qemu/qemu_process.c   |  22 +-
 src/qemu/qemu_snapshot.c  | 583 +-
 src/qemu/qemu_snapshot.h  |  10 +
 .../caps_6.0.0.aarch64.xml|   1 +
 .../qemucapabilitiesdata/caps_6.0.0.s390x.xml |   1 +
 .../caps_6.0.0.x86_64.xml |   1 +
 .../caps_6.1.0.x86_64.xml |   1 +
 .../caps_6.2.0.aarch64.xml|   1 +
 .../qemucapabilitiesdata/caps_6.2.0.ppc64.xml |   1 +
 .../caps_6.2.0.x86_64.xml |   1 +
 .../qemucapabilitiesdata/caps_7.0.0.ppc64.xml |   1 +
 .../caps_7.0.0.x86_64.xml |   1 +
 27 files changed, 844 insertions(+), 288 deletions(-)

-- 
2.35.1



[PATCH 05/11] qemu: snapshot: prepare for async snapshot deletion

2022-03-31 Thread Nikolay Shirokovskiy
Use an async job instead of a regular one, as we are going to use the async
job harness to wait for snapshot-delete job completion in QEMU (see the
sketch below).

We can use VIR_DOMAIN_JOB_OPERATION_UNKNOWN because it is only used when
reporting job progress, and getting job progress is disabled for snapshot
deletion for now. It is not clear whether we should add
VIR_DOMAIN_JOB_OPERATION_SNAPSHOT_DELETE and report job progress in the
future, as snapshot deletion should not take much time, I guess.

Yet using VIR_DOMAIN_JOB_OPERATION_UNKNOWN does feel brittle.
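
Concretely, the async job lets a later patch wait for the QEMU job like
this (simplified sketch of qemuSnapshotWaitJob from patch 09/11):

    while (priv->job.current->status != VIR_DOMAIN_JOB_STATUS_COMPLETED) {
        /* woken up via virDomainObjBroadcast() from the JOB_STATUS_CHANGE
         * event handler once QEMU reports the job as concluded */
        if (virDomainObjWait(vm) < 0)
            return -1;
    }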

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 878a0abb34..579d744c60 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -177,10 +177,12 @@ qemuSnapshotDiscard(virQEMUDriver *driver,
 return -1;
 } else {
 priv = vm->privateData;
-qemuDomainObjEnterMonitor(driver, vm);
-/* we continue on even in the face of error */
-qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
-qemuDomainObjExitMonitor(vm);
+if (qemuDomainObjEnterMonitorAsync(driver, vm,
+   VIR_ASYNC_JOB_SNAPSHOT) == 0) {
+/* we continue on even in the face of error */
+qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
+qemuDomainObjExitMonitor(vm);
+}
 }
 }
 
@@ -2551,8 +2553,14 @@ qemuSnapshotDelete(virDomainObj *vm,
   VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY |
   VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN_ONLY, -1);
 
-if (qemuDomainObjBeginJob(driver, vm, VIR_JOB_MODIFY) < 0)
+/*
+ * For snapshot deletion quering stats/aborting is not supported yet
+ * thus disable any parallels jobs.
+ */
+if (qemuDomainObjBeginAsyncJob(driver, vm, VIR_ASYNC_JOB_SNAPSHOT_DELETE,
+   VIR_DOMAIN_JOB_OPERATION_UNKNOWN, flags) < 
0)
 return -1;
+qemuDomainObjSetAsyncJobMask(vm, VIR_JOB_NONE);
 
 if (!(snap = qemuSnapObjFromSnapshot(vm, snapshot)))
 goto endjob;
@@ -2624,7 +2632,7 @@ qemuSnapshotDelete(virDomainObj *vm,
 }
 
  endjob:
-qemuDomainObjEndJob(vm);
+qemuDomainObjEndAsyncJob(vm);
 
 return ret;
 }
-- 
2.35.1



[PATCH 03/11] qemu: move snapshot related funcs from domain.c to snapshot.c

2022-03-31 Thread Nikolay Shirokovskiy
These functions are mostly used from qemu_snapshot.c and semantically they
belong to that source file as well.

At the same time, rename qemuDomainSnapshot* to qemuSnapshot*.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_domain.c   | 241 +--
 src/qemu/qemu_domain.h   |  19 ---
 src/qemu/qemu_driver.c   |   8 +-
 src/qemu/qemu_snapshot.c | 268 ---
 src/qemu/qemu_snapshot.h |  10 ++
 5 files changed, 269 insertions(+), 277 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 18d403e099..56cf0c3f68 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -38,6 +38,7 @@
 #include "qemu_checkpoint.h"
 #include "qemu_validate.h"
 #include "qemu_namespace.h"
+#include "qemu_snapshot.h"
 #include "viralloc.h"
 #include "virlog.h"
 #include "virerror.h"
@@ -64,7 +65,6 @@
 #include "virsecret.h"
 #include "logging/log_manager.h"
 #include "locking/domain_lock.h"
-#include "virdomainsnapshotobjlist.h"
 #include "virdomaincheckpointobjlist.h"
 #include "backup_conf.h"
 #include "virutil.h"
@@ -7041,243 +7041,6 @@ qemuFindQemuImgBinary(virQEMUDriver *driver)
 return driver->qemuImgBinary;
 }
 
-int
-qemuDomainSnapshotWriteMetadata(virDomainObj *vm,
-virDomainMomentObj *snapshot,
-virDomainXMLOption *xmlopt,
-const char *snapshotDir)
-{
-g_autofree char *newxml = NULL;
-g_autofree char *snapDir = NULL;
-g_autofree char *snapFile = NULL;
-char uuidstr[VIR_UUID_STRING_BUFLEN];
-unsigned int flags = VIR_DOMAIN_SNAPSHOT_FORMAT_SECURE |
-VIR_DOMAIN_SNAPSHOT_FORMAT_INTERNAL;
-virDomainSnapshotDef *def = virDomainSnapshotObjGetDef(snapshot);
-
-if (virDomainSnapshotGetCurrent(vm->snapshots) == snapshot)
-flags |= VIR_DOMAIN_SNAPSHOT_FORMAT_CURRENT;
-virUUIDFormat(vm->def->uuid, uuidstr);
-newxml = virDomainSnapshotDefFormat(uuidstr, def, xmlopt, flags);
-if (newxml == NULL)
-return -1;
-
-snapDir = g_strdup_printf("%s/%s", snapshotDir, vm->def->name);
-if (g_mkdir_with_parents(snapDir, 0777) < 0) {
-virReportSystemError(errno, _("cannot create snapshot directory '%s'"),
- snapDir);
-return -1;
-}
-
-snapFile = g_strdup_printf("%s/%s.xml", snapDir, def->parent.name);
-
-return virXMLSaveFile(snapFile, NULL, "snapshot-edit", newxml);
-}
-
-
-/* The domain is expected to be locked and inactive. Return -1 on normal
- * failure, 1 if we skipped a disk due to try_all.  */
-static int
-qemuDomainSnapshotForEachQcow2Raw(virQEMUDriver *driver,
-  virDomainDef *def,
-  virDomainMomentObj *snap,
-  const char *op,
-  bool try_all,
-  int ndisks)
-{
-virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
-const char *qemuimgbin;
-size_t i;
-bool skipped = false;
-
-qemuimgbin = qemuFindQemuImgBinary(driver);
-if (qemuimgbin == NULL) {
-/* qemuFindQemuImgBinary set the error */
-return -1;
-}
-
-for (i = 0; i < ndisks; i++) {
-g_autoptr(virCommand) cmd = virCommandNewArgList(qemuimgbin, 
"snapshot",
- op, snap->def->name, 
NULL);
-int format = virDomainDiskGetFormat(def->disks[i]);
-
-/* FIXME: we also need to handle LVM here */
-if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK ||
-snapdef->disks[i].snapshot != 
VIR_DOMAIN_SNAPSHOT_LOCATION_INTERNAL)
-continue;
-
-if (!virStorageSourceIsLocalStorage(def->disks[i]->src)) {
-virReportError(VIR_ERR_OPERATION_INVALID,
-   _("can't manipulate inactive snapshots of disk 
'%s'"),
-   def->disks[i]->dst);
-return -1;
-}
-
-if (format > 0 && format != VIR_STORAGE_FILE_QCOW2) {
-if (try_all) {
-/* Continue on even in the face of error, since other
- * disks in this VM may have the same snapshot name.
- */
-VIR_WARN("skipping snapshot action on %s",
- def->disks[i]->dst);
-skipped = true;
-continue;
-} else if (STREQ(op, "-c") && i) {
-/* We must roll back partial creation by deleting
- * all earlier snapshots.  */
-qemuDomainSnapshotForEachQcow2Raw(driv

[PATCH 01/11] qemu: add QEMU_CAPS_SNAPSHOT_SAVE capability

2022-03-31 Thread Nikolay Shirokovskiy
If it is present then QEMU supports the snapshot-{save, load, delete} set
of QMP commands.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_capabilities.c  | 2 ++
 src/qemu/qemu_capabilities.h  | 1 +
 tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml | 1 +
 tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml   | 1 +
 tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml  | 1 +
 tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml  | 1 +
 tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml | 1 +
 tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml   | 1 +
 tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml  | 1 +
 tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml   | 1 +
 tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml  | 1 +
 11 files changed, 12 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 6b4ed08499..3acc735e99 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -669,6 +669,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   /* 425 */
   "blockdev.nbd.tls-hostname", /* 
QEMU_CAPS_BLOCKDEV_NBD_TLS_HOSTNAME */
   "memory-backend-file.prealloc-threads", /* 
QEMU_CAPS_MEMORY_BACKEND_PREALLOC_THREADS */
+  "snapshot-save", /* QEMU_CAPS_SNAPSHOT_SAVE */
 );
 
 
@@ -1236,6 +1237,7 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = {
 { "query-dirty-rate", QEMU_CAPS_QUERY_DIRTY_RATE },
 { "sev-inject-launch-secret", QEMU_CAPS_SEV_INJECT_LAUNCH_SECRET },
 { "calc-dirty-rate", QEMU_CAPS_CALC_DIRTY_RATE },
+{ "snapshot-save", QEMU_CAPS_SNAPSHOT_SAVE },
 };
 
 struct virQEMUCapsStringFlags virQEMUCapsMigration[] = {
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 948029d60d..ff5301a1b4 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -644,6 +644,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 /* 425 */
 QEMU_CAPS_BLOCKDEV_NBD_TLS_HOSTNAME, /* tls hostname can be overriden for 
NBD clients */
 QEMU_CAPS_MEMORY_BACKEND_PREALLOC_THREADS, /* -object 
memory-backend-*.prealloc-threads */
+QEMU_CAPS_SNAPSHOT_SAVE, /* supports snapshot-save qmp command */
 
 QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml 
b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
index 7557e6ad71..7d1de6c1f7 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml
@@ -193,6 +193,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   600
   0
   61700242
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml 
b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
index 9be3dc..7e52c2bad5 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml
@@ -150,6 +150,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   600
   0
   39100242
diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml 
b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
index 61d561dc69..e3ea706b2c 100644
--- a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml
@@ -236,6 +236,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   600
   0
   43100242
diff --git a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml 
b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
index 0b58210335..2167bfd5ed 100644
--- a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml
@@ -240,6 +240,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   6001000
   0
   43100243
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml 
b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
index d08b2c0213..e2778e6d34 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml
@@ -204,6 +204,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   6001050
   0
   61700244
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml 
b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
index 8c52964ec0..7ffdd4d5fa 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml
@@ -202,6 +202,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   6002000
   0
   42900244
diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml 
b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
index cdf72b9ebf..ae0fb90df9 100644
--- a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml
@@ -242,6 +242,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   6002000
   0
   43100244
diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml 
b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
index 8aba3329ad..e19eef6f8e 100644
--- a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml
@@ -203,6 +203,7 @@
   
   
   
+  <flag name='snapshot-save'/>
   6002050
   0
   42900243
diff --git a/tests/qe

[PATCH v2 REBASE 2/3] qemu: remove excessive check in qemuSnapshotCreateActiveInternal

2022-03-31 Thread Nikolay Shirokovskiy
After [1] it is no longer possible for the domain to become inactive while
we are communicating with QEMU (see also [2]).

Also, even if it were possible, qemuProcessStopCPUs would return an error
and the check would not be reached.

[1] commit 81f50cb92d16643bcd749e3ab5b404b8b7cec643
Author: Jiri Denemark 
Date:   Thu Feb 11 11:20:28 2016 +0100

qemu: Avoid calling qemuProcessStop without a job

[2] commit f1ea5bd506b6e00aa8bf55940b147d3c0fe7f124
Author: Ján Tomko 
Date:   Wed Nov 24 13:41:09 2021 +0100

qemu: turn qemuDomainObjExitMonitor into void

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index fc5b6dc7ce..df9da4613f 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -305,11 +305,6 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 goto cleanup;
 
 resume = true;
-if (!virDomainObjIsActive(vm)) {
-virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-   _("guest unexpectedly quit"));
-goto cleanup;
-}
 }
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
-- 
2.35.1



[PATCH v2 REBASE 3/3] qemu: remove unnecessary gotos in qemuSnapshotCreateActiveInternal

2022-03-31 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index df9da4613f..5d622592c3 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -297,21 +297,19 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 int ret = -1;
 
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
-goto cleanup;
+return -1;
 
 if (halt) {
 if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
 VIR_ASYNC_JOB_SNAPSHOT) < 0)
-goto cleanup;
+return -1;
 
 resume = true;
 }
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
-   VIR_ASYNC_JOB_SNAPSHOT) < 0) {
-resume = false;
-goto cleanup;
-}
+   VIR_ASYNC_JOB_SNAPSHOT) < 0)
+return -1;
 
 ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
 qemuDomainObjExitMonitor(vm);
-- 
2.35.1



[PATCH v2 REBASE 0/3] qemu: don't pause VM when creating internal snapshot

2022-03-31 Thread Nikolay Shirokovskiy
Diff to v1 [1]:
- fix the mistake of dropping the hunk that starts CPUs if the snapshot was
  not successful. We still need it if halt upon snapshot is requested.

This series is mostly about the first patch. The other patches are
minor cleanups.

[1] v1 version
https://listman.redhat.com/archives/libvir-list/2022-March/229437.html

Maxim Nestratov (1):
  qemu: don't pause vm when creating internal snapshot

Nikolay Shirokovskiy (2):
  qemu: remove excessive check in qemuSnapshotCreateActiveInternal
  qemu: remove unnecessary gotos in qemuSnapshotCreateActiveInternal

 src/qemu/qemu_snapshot.c | 24 +++-
 1 file changed, 7 insertions(+), 17 deletions(-)

-- 
2.35.1



[PATCH v2 REBASE 1/3] qemu: don't pause vm when creating internal snapshot

2022-03-31 Thread Nikolay Shirokovskiy


binsBsfYDDMYI.bin
Description: Binary data


Re: libvirt: introduce VIR_DOMAIN_DESTROY_REMOVE_LOGS flag

2022-03-28 Thread Nikolay Shirokovskiy
On Mon, Feb 28, 2022 at 14:13, Nikolay Shirokovskiy
:
>
> Hi, all!
>
> What do you think?

Hi, please take a look at this series.

Nikolay

>
> On Mon, Feb 14, 2022 at 15:19, Nikolay Shirokovskiy 
> :
>>
>> The patch series is based on the discussion in RFC [1].
>>
>> I wonder if we'd better add some property like "transient logs" instead
>> of adding a flag to the destroy API.
>>
>> Yes, libguestfs uses virDomainDestroyFlags to terminate a VM and it is the
>> intended client of this new flag, but it or other clients may want to use
>> the shutdown API or use autodestroy domains with the same log behaviour.
>> Then for the current task we would only need to support this property in
>> the domain XML, I guess.
>>
>> [1] removing VMs logs
>> https://listman.redhat.com/archives/libvir-list/2022-February/msg00273.html
>>
>> Nikolay Shirokovskiy (3):
>>   libvirt: introduce VIR_DOMAIN_DESTROY_REMOVE_LOGS flag
>>   qemu: support VIR_DOMAIN_DESTROY_REMOVE_LOGS flag
>>   tools: support --remove-logs flag on destroying domain
>>
>>  docs/manpages/virsh.rst  |  7 +-
>>  include/libvirt/libvirt-domain.h |  1 +
>>  src/libvirt-domain.c |  6 +
>>  src/qemu/qemu_domain.c   | 41 
>>  src/qemu/qemu_domain.h   |  4 
>>  src/qemu/qemu_driver.c   |  8 ++-
>>  tools/virsh-domain.c |  8 ++-
>>  7 files changed, 72 insertions(+), 3 deletions(-)
>>
>> --
>> 2.31.1
>>



[PATCH v2 3/3] qemu: remove unnecessary gotos in qemuSnapshotCreateActiveInternal

2022-03-24 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 540fda4854..c23fb40849 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -297,21 +297,19 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 int ret = -1;
 
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
-goto cleanup;
+return -1;
 
 if (halt) {
 if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
 QEMU_ASYNC_JOB_SNAPSHOT) < 0)
-goto cleanup;
+return -1;
 
 resume = true;
 }
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
-   QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
-resume = false;
-goto cleanup;
-}
+   QEMU_ASYNC_JOB_SNAPSHOT) < 0)
+return -1;
 
 ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
 qemuDomainObjExitMonitor(driver, vm);
-- 
2.35.1



[PATCH v2 0/3] qemu: don't pause VM when creating internal snapshot

2022-03-24 Thread Nikolay Shirokovskiy
Diff to v1 [1]:
- fix the mistake of dropping the hunk that starts CPUs if the snapshot was
  not successful. We still need it if halt upon snapshot is requested.

This series is mostly about the first patch. The other patches are
minor cleanups.

[1] v1 version
https://listman.redhat.com/archives/libvir-list/2022-March/229437.html

Maxim Nestratov (1):
  qemu: don't pause vm when creating internal snapshot

Nikolay Shirokovskiy (2):
  qemu: remove excessive check in qemuSnapshotCreateActiveInternal
  qemu: remove unnecessary gotos in qemuSnapshotCreateActiveInternal

 src/qemu/qemu_snapshot.c | 24 +++-
 1 file changed, 7 insertions(+), 17 deletions(-)

-- 
2.35.1



[PATCH v2 1/3] qemu: don't pause vm when creating internal snapshot

2022-03-24 Thread Nikolay Shirokovskiy
From: Maxim Nestratov 

The pause was introduced in [1] in ancient times when the distro's QEMU
probably did not have the RESUME event yet. However, the event was already
present in upstream QEMU at that time, according to [2].

So the event has been supported since QEMU 0.13, which is much older than
the oldest currently supported version 2.11.0, and we can remove the manual
pause/resume, except for the halt case.

[1] commit 346236fea97602e9e6529c5d41a32ed26b126082
Author: Jiri Denemark 
Date:   Thu Feb 24 16:46:44 2011 +0100

qemu: Stop guest CPUs before creating a snapshot

commit 6ed2c484f261fd8bd217f855b9e5e5f981e63f0a
Author: Luiz Capitulino 
Date:   Tue Apr 27 20:35:59 2010 -0300

QMP: Introduce RESUME event

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index a7901779fc..cd94ddae24 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -293,16 +293,13 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 virObjectEvent *event = NULL;
 bool resume = false;
 virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+bool halt = !!(flags & VIR_DOMAIN_SNAPSHOT_CREATE_HALT);
 int ret = -1;
 
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
 goto cleanup;
 
-if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
-/* savevm monitor command pauses the domain emitting an event which
- * confuses libvirt since it's not notified when qemu resumes the
- * domain. Thus we stop and start CPUs ourselves.
- */
+if (halt) {
 if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
 QEMU_ASYNC_JOB_SNAPSHOT) < 0)
 goto cleanup;
@@ -329,7 +326,7 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 if (!(snapdef->cookie = (virObject *) qemuDomainSaveCookieNew(vm)))
 goto cleanup;
 
-if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_HALT) {
+if (halt) {
 event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
  
VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
-- 
2.35.1



[PATCH v2 2/3] qemu: remove excessive check in qemuSnapshotCreateActiveInternal

2022-03-24 Thread Nikolay Shirokovskiy
After [1] it is not possible for the domain to become inactive while we are
communicating with QEMU (see also [2]).

Also, even if it were possible, qemuProcessStopCPUs would return an error
and the check would not be reached.

[1] commit 81f50cb92d16643bcd749e3ab5b404b8b7cec643
Author: Jiri Denemark 
Date:   Thu Feb 11 11:20:28 2016 +0100

qemu: Avoid calling qemuProcessStop without a job

[2] commit f1ea5bd506b6e00aa8bf55940b147d3c0fe7f124
Author: Ján Tomko 
Date:   Wed Nov 24 13:41:09 2021 +0100

qemu: turn qemuDomainObjExitMonitor into void

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index cd94ddae24..540fda4854 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -305,11 +305,6 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 goto cleanup;
 
 resume = true;
-if (!virDomainObjIsActive(vm)) {
-virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-   _("guest unexpectedly quit"));
-goto cleanup;
-}
 }
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
-- 
2.35.1



Re: [PATCH 0/3] qemu: don't pause vm when creating internal snapshot

2022-03-24 Thread Nikolay Shirokovskiy
Please, disregard this series as it has a mistake. I'll send a new one.

Nikolay

On Mon, Mar 21, 2022 at 18:16, Nikolay Shirokovskiy
:
>
> This series is mostly about the first patch. The other patches are
> minor cleanups.
>
> Maxim Nestratov (1):
>   qemu: don't pause vm when creating internal snapshot
>
> Nikolay Shirokovskiy (2):
>   qemu: remove excessive check in qemuSnapshotCreateActiveInternal
>   qemu: remove unnecessary goto in qemuSnapshotCreateActiveInternal
>
>  src/qemu/qemu_snapshot.c | 65 +++-
>  1 file changed, 18 insertions(+), 47 deletions(-)
>
> --
> 2.35.1
>



[PATCH 2/3] qemu: remove excessive check in qemuSnapshotCreateActiveInternal

2022-03-21 Thread Nikolay Shirokovskiy
After [1] it is not possible for the domain to become inactive while we are
communicating with QEMU (see also [2]).

Also, even if it were possible, qemuProcessStopCPUs would return an error
and the check would not be reached.

[1] commit 81f50cb92d16643bcd749e3ab5b404b8b7cec643
Author: Jiri Denemark 
Date:   Thu Feb 11 11:20:28 2016 +0100

qemu: Avoid calling qemuProcessStop without a job

[2] commit f1ea5bd506b6e00aa8bf55940b147d3c0fe7f124
Author: Ján Tomko 
Date:   Wed Nov 24 13:41:09 2021 +0100

qemu: turn qemuDomainObjExitMonitor into void

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 15 ---
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index a41c782c7f..8c050cbd75 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -298,17 +298,10 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
 goto cleanup;
 
-if (halt) {
-if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
-QEMU_ASYNC_JOB_SNAPSHOT) < 0)
-goto cleanup;
-
-if (!virDomainObjIsActive(vm)) {
-virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-   _("guest unexpectedly quit"));
-goto cleanup;
-}
-}
+if (halt &&
+qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
+QEMU_ASYNC_JOB_SNAPSHOT) < 0)
+goto cleanup;
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
QEMU_ASYNC_JOB_SNAPSHOT) < 0)
-- 
2.35.1



[PATCH 0/3] qemu: don't pause vm when creating internal snapshot

2022-03-21 Thread Nikolay Shirokovskiy
This series is mostly about the first patch. The other patches are
minor cleanups.

Maxim Nestratov (1):
  qemu: don't pause vm when creating internal snapshot

Nikolay Shirokovskiy (2):
  qemu: remove excessive check in qemuSnapshotCreateActiveInternal
  qemu: remove unnecessary goto in qemuSnapshotCreateActiveInternal

 src/qemu/qemu_snapshot.c | 65 +++-
 1 file changed, 18 insertions(+), 47 deletions(-)

-- 
2.35.1



[PATCH 1/3] qemu: don't pause vm when creating internal snapshot

2022-03-21 Thread Nikolay Shirokovskiy
From: Maxim Nestratov 

The pause was introduced in [1] in ancient times when the distro's QEMU
probably did not have the RESUME event yet. However, the event was already
present in upstream QEMU at that time, according to [2].

So the event has been supported since QEMU 0.13, which is much older than
the oldest currently supported version 2.11.0, and we can remove the manual
pause/resume, except for the halt case.

[1] commit 346236fea97602e9e6529c5d41a32ed26b126082
Author: Jiri Denemark 
Date:   Thu Feb 24 16:46:44 2011 +0100

qemu: Stop guest CPUs before creating a snapshot

commit 6ed2c484f261fd8bd217f855b9e5e5f981e63f0a
Author: Luiz Capitulino 
Date:   Tue Apr 27 20:35:59 2010 -0300

QMP: Introduce RESUME event

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 29 -
 1 file changed, 4 insertions(+), 25 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index a7901779fc..a41c782c7f 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -291,23 +291,18 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 {
 qemuDomainObjPrivate *priv = vm->privateData;
 virObjectEvent *event = NULL;
-bool resume = false;
 virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
+bool halt = !!(flags & VIR_DOMAIN_SNAPSHOT_CREATE_HALT);
 int ret = -1;
 
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
 goto cleanup;
 
-if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
-/* savevm monitor command pauses the domain emitting an event which
- * confuses libvirt since it's not notified when qemu resumes the
- * domain. Thus we stop and start CPUs ourselves.
- */
+if (halt) {
 if (qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
 QEMU_ASYNC_JOB_SNAPSHOT) < 0)
 goto cleanup;
 
-resume = true;
 if (!virDomainObjIsActive(vm)) {
 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("guest unexpectedly quit"));
@@ -316,10 +311,8 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 }
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
-   QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
-resume = false;
+   QEMU_ASYNC_JOB_SNAPSHOT) < 0)
 goto cleanup;
-}
 
 ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
 qemuDomainObjExitMonitor(driver, vm);
@@ -329,29 +322,15 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
 if (!(snapdef->cookie = (virObject *) qemuDomainSaveCookieNew(vm)))
 goto cleanup;
 
-if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_HALT) {
+if (halt) {
 event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
  
VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
 QEMU_ASYNC_JOB_SNAPSHOT, 0);
 virDomainAuditStop(vm, "from-snapshot");
-resume = false;
 }
 
  cleanup:
-if (resume && virDomainObjIsActive(vm) &&
-qemuProcessStartCPUs(driver, vm,
- VIR_DOMAIN_RUNNING_UNPAUSED,
- QEMU_ASYNC_JOB_SNAPSHOT) < 0) {
-event = virDomainEventLifecycleNewFromObj(vm,
- VIR_DOMAIN_EVENT_SUSPENDED,
- VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
-if (virGetLastErrorCode() == VIR_ERR_OK) {
-virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-   _("resuming after snapshot failed"));
-}
-}
-
 virObjectEventStateQueue(driver->domainEventState, event);
 
 return ret;
-- 
2.35.1



[PATCH 3/3] qemu: remove unnecessary goto in qemuSnapshotCreateActiveInternal

2022-03-21 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_snapshot.c | 25 -
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c
index 8c050cbd75..18b802e6d5 100644
--- a/src/qemu/qemu_snapshot.c
+++ b/src/qemu/qemu_snapshot.c
@@ -290,43 +290,42 @@ qemuSnapshotCreateActiveInternal(virQEMUDriver *driver,
  unsigned int flags)
 {
 qemuDomainObjPrivate *priv = vm->privateData;
-virObjectEvent *event = NULL;
 virDomainSnapshotDef *snapdef = virDomainSnapshotObjGetDef(snap);
 bool halt = !!(flags & VIR_DOMAIN_SNAPSHOT_CREATE_HALT);
-int ret = -1;
+int rc;
 
 if (!qemuMigrationSrcIsAllowed(driver, vm, false, 0))
-goto cleanup;
+return -1;
 
 if (halt &&
 qemuProcessStopCPUs(driver, vm, VIR_DOMAIN_PAUSED_SAVE,
 QEMU_ASYNC_JOB_SNAPSHOT) < 0)
-goto cleanup;
+return -1;
 
 if (qemuDomainObjEnterMonitorAsync(driver, vm,
QEMU_ASYNC_JOB_SNAPSHOT) < 0)
-goto cleanup;
+return -1;
 
-ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
+rc = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
 qemuDomainObjExitMonitor(driver, vm);
-if (ret < 0)
-goto cleanup;
+if (rc < 0)
+return -1;
 
 if (!(snapdef->cookie = (virObject *) qemuDomainSaveCookieNew(vm)))
-goto cleanup;
+return -1;
 
 if (halt) {
+virObjectEvent *event;
+
 event = virDomainEventLifecycleNewFromObj(vm, VIR_DOMAIN_EVENT_STOPPED,
  
VIR_DOMAIN_EVENT_STOPPED_FROM_SNAPSHOT);
+virObjectEventStateQueue(driver->domainEventState, event);
 qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FROM_SNAPSHOT,
 QEMU_ASYNC_JOB_SNAPSHOT, 0);
 virDomainAuditStop(vm, "from-snapshot");
 }
 
- cleanup:
-virObjectEventStateQueue(driver->domainEventState, event);
-
-return ret;
+return 0;
 }
 
 
-- 
2.35.1



[RFC PATCH 1/2] logging: touch opened files periodically

2022-02-24 Thread Nikolay Shirokovskiy
This will protect log files from being deleted by virtlogcleaner
even if the log is not being actively written to at the moment.
---
 src/logging/log_handler.c | 113 +-
 1 file changed, 98 insertions(+), 15 deletions(-)

diff --git a/src/logging/log_handler.c b/src/logging/log_handler.c
index 5c3df37415..fee4567911 100644
--- a/src/logging/log_handler.c
+++ b/src/logging/log_handler.c
@@ -34,6 +34,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include "configmake.h"
 
@@ -43,6 +45,8 @@ VIR_LOG_INIT("logging.log_handler");
 
 #define DEFAULT_MODE 0600
 
+#define LOG_HANDLER_TOUCH_TIMEOUT (24 * 3600 * 1000)
+
 typedef struct _virLogHandlerLogFile virLogHandlerLogFile;
 struct _virLogHandlerLogFile {
 virRotatingFileWriter *file;
@@ -65,6 +69,8 @@ struct _virLogHandler {
 virLogHandlerLogFile **files;
 size_t nfiles;
 
+int timer;
+
 virLogHandlerShutdownInhibitor inhibitor;
 void *opaque;
 };
@@ -102,6 +108,17 @@ virLogHandlerLogFileFree(virLogHandlerLogFile *file)
 }
 
 
+static void
+virLogHandlerCleanupTimer(virLogHandler *handler)
+{
+if (handler->nfiles > 0 || handler->timer == 0)
+return;
+
+virEventRemoveTimeout(handler->timer);
+handler->timer = 0;
+}
+
+
 static void
 virLogHandlerLogFileClose(virLogHandler *handler,
   virLogHandlerLogFile *file)
@@ -115,6 +132,8 @@ virLogHandlerLogFileClose(virLogHandler *handler,
 break;
 }
 }
+
+virLogHandlerCleanupTimer(handler);
 }
 
 
@@ -209,6 +228,30 @@ virLogHandlerNew(bool privileged,
 }
 
 
+/*
+ * This helper aims to handle races with file deleting by log file cleaner.
+ * Cleaner can unlink file right after we open it for write. In this case
+ * let's just recreate it.
+ *
+ */
+static virRotatingFileWriter *
+virLogHandlerNewWriter(const char *path,
+   off_t maxlen,
+   size_t maxbackup,
+   bool trunc,
+   mode_t mode)
+{
+virRotatingFileWriter *writer;
+
+writer = virRotatingFileWriterNew(path, maxlen, maxbackup, trunc, mode);
+if (virFileExists(path))
+return writer;
+
+virRotatingFileWriterFree(writer);
+return virRotatingFileWriterNew(path, maxlen, maxbackup, trunc, mode);
+}
+
+
 static virLogHandlerLogFile *
 virLogHandlerLogFilePostExecRestart(virLogHandler *handler,
 virJSONValue *object)
@@ -253,11 +296,11 @@ virLogHandlerLogFilePostExecRestart(virLogHandler 
*handler,
 goto error;
 }
 
-if ((file->file = virRotatingFileWriterNew(path,
-   handler->max_size,
-   handler->max_backups,
-   false,
-   DEFAULT_MODE)) == NULL)
+if ((file->file = virLogHandlerNewWriter(path,
+ handler->max_size,
+ handler->max_backups,
+ false,
+ DEFAULT_MODE)) == NULL)
 goto error;
 
 if (virJSONValueObjectGetNumberInt(object, "pipefd", >pipefd) < 0) {
@@ -280,6 +323,26 @@ virLogHandlerLogFilePostExecRestart(virLogHandler *handler,
 }
 
 
+static void
+virLogHandlerTimeout(int timer G_GNUC_UNUSED,
+ void *opaque)
+{
+virLogHandler *handler = opaque;
+size_t i;
+
+virObjectLock(handler);
+
+for (i = 0; i < handler->nfiles; i++) {
+const char *path = 
virRotatingFileWriterGetPath(handler->files[i]->file);
+
+if (utime(path, NULL) < 0)
+VIR_WARN("utime(%s) error: %s", path, g_strerror(errno));
+}
+
+virObjectUnlock(handler);
+}
+
+
 virLogHandler *
 virLogHandlerNewPostExecRestart(virJSONValue *object,
 bool privileged,
@@ -330,6 +393,11 @@ virLogHandlerNewPostExecRestart(virJSONValue *object,
 }
 }
 
+if (handler->nfiles > 0 &&
+(handler->timer = virEventAddTimeout(LOG_HANDLER_TOUCH_TIMEOUT,
+ virLogHandlerTimeout,
+ handler, NULL) <= 0))
+goto error;
 
 return handler;
 
@@ -349,7 +417,10 @@ virLogHandlerDispose(void *obj)
 handler->inhibitor(false, handler->opaque);
 virLogHandlerLogFileFree(handler->files[i]);
 }
+
 g_free(handler->files);
+handler->nfiles = 0;
+virLogHandlerCleanupTimer(handler);
 }
 
 
@@ -393,11 +464,21 @@ virLogHandlerDomainOpenLogFile(virLogHandler *handler,
 file->driver = g_strdup(driver);
 file->domname = g_strdup(domname);
 
-if ((file->file = virRotatingFileWriterNew(path,
-   handler->max_size,
-   handler->max_backups,
-

[RFC PATCH 0/2] logging: add service to cleanup internal log files

2022-02-24 Thread Nikolay Shirokovskiy
Hi, all.

This is an RFC for the cleanup service suggested in [1]. I argued there
that logrotate in its current form is not suitable to cooperate with
virtlogd and only perform the cleanup function.

There I thought we needed file locking for the cleanup service to
cooperate nicely with virtlogd. In this patch series I used timestamps
for that purpose instead, which seems to be a simpler and more reliable
technique. The idea is to keep the logs of active VMs fresh by touching
them periodically for the case when a log is not actually written to for
a long time.

This is an RFC and so it misses a lot of pieces:
- we only need to touch internal log files (and not log files of serial
  devices for example)
- openrc support is missing
- spec bits are missing

I guess the service itself needs to be written in C, for a couple of
reasons:

- not sure if we have a policy not to use shell scripts in deployments
  (we still have libvirt-guests.sh)
- the cleaner's time complexity in this version is O(N^2). It is not a big
  deal to write an O(N) version but it looks like it would be ugly
  in a shell script
- I'd like to make at least the max age a configurable parameter
  in virtlogd.conf, and we have a conf parser in C already. As a side
  note I'm not sure whether we should make the cleanup invocation period
  and the touching period configurable as well (both are 1 day currently)

There is also a need for drivers to add their cleanup paths to the cleanup
service configuration. I guess we can:

- add "cleanup_paths = []" to virtlogd.conf
- let drivers add conf files to virtlogd.d/. For example the qemu driver
  would have in virtlogd.d/qemu.conf:

  cleanup_paths += "/var/log/libvirt/qemu/"

To go this way we need to support configuration directories and the "+="
syntax for lists.

[1] Re: removing VMs logs
https://listman.redhat.com/archives/libvir-list/2022-February/msg00425.html

Nikolay Shirokovskiy (2):
  logging: touch opened files periodically
  logging: add virtlogcleaner service

 src/logging/log_handler.c | 113 ++
 src/logging/meson.build   |  15 
 src/logging/virtlogcleaner.service.in |   7 ++
 src/logging/virtlogcleaner.sh |   9 ++
 src/logging/virtlogcleaner.timer  |   8 ++
 src/logging/virtlogd.service.in   |   1 +
 6 files changed, 138 insertions(+), 15 deletions(-)
 create mode 100644 src/logging/virtlogcleaner.service.in
 create mode 100755 src/logging/virtlogcleaner.sh
 create mode 100644 src/logging/virtlogcleaner.timer

-- 
2.31.1



Re: [RFC PATCH 00/10] VirtioNet RSS support

2021-10-21 Thread Nikolay Shirokovskiy
On Thu, Oct 21, 2021 at 01:28, Andrew Melnichenko :

> Hi,
> Yes, the work is in progress. Now I'm working on a proper solution for
> the eBPF RSS helper.
>

Ok. Thank you!



>
> On Wed, Oct 20, 2021 at 3:23 PM Nikolay Shirokovskiy <
> nshirokovs...@virtuozzo.com> wrote:
>
>> Hi, Andrew.
>>
>> We in Virtuozzo are interested in this functionality too. Do you plan to
>> continue your work on it?
>>
>> Nikolay
>>
>> On Mon, Aug 16, 2021 at 15:00, Andrew Melnichenko :
>>
>>> Ping
>>>
>>> On Wed, Jul 28, 2021 at 11:17 AM Andrew Melnychenko 
>>> wrote:
>>>
>>>> This series of patches add RSS property support for virtio-net-pci.
>>>>
>>>> Virtio RSS effectively works with TAP devices, it requires additional
>>>> vectors for VirtioNet, queues for TAP device, and vCPU cores.
>>>> Example of device configuration:
>>>> ```
>>>> 
>>>>   
>>>>   
>>>>   
>>>>   
>>>>   >>> function="0x0"/>
>>>> 
>>>> ```
>>>>
>>>> Capability "rss" enables RSS, "rss_hash_report" - enables hashes in
>>>> vheader.
>>>> Qemu uses eBPF program as RSS driver.
>>>> For loading RSS eBPF program, the helper is used.
>>>> Path to the helper is provided by Qemu through "query-helper-paths" qmp
>>>> command.
>>>> The helper "qemu-ebpf-rss-helper" is built with Qemu and may differ
>>>> from build to build.
>>>> So it's required that the Qemu should provide a proper helper path.
>>>> Libvirt would call the helper and receive the program and map fd
>>>> through unix socket.
>>>> Fds would be passed to Qemu in "ebpf_rss_fds" property by passing to
>>>> child process or unix socket.
>>>> If libvirt would fail at helper call or Qemu didn't provide the path,
>>>> the Qemu would be launched without "ebpf_rss_fds" property.
>>>> Without "ebpf_rss_fds" property, Qemu would try to load eBPF program by
>>>> itself - usually, it would require additional system permissions.
>>>> Qemu may use "in-qemu" RSS as a fallback option, which will not require
>>>> system
>>>> permissions, but doesn't work with vhost TAP.
>>>>
>>>> Qemu patches:
>>>> https://lists.nongnu.org/archive/html/qemu-devel/2021-07/msg03535.html
>>>>
>>>> Andrew Melnychenko (10):
>>>>   domain_conf: Added configs for RSS and Hash report.
>>>>   qemu_capabilities: Added capabilites for qemu's "rss" and "hash".
>>>>   qemu_command: Added "rss" and "hash" properties.
>>>>   virsocket: Added receive for multiple fds.
>>>>   qemu_capabilities: Added capability for qemu's "ebpf_rss_fds".
>>>>   qemu_capabilities: Added capability for ebpf helper path.
>>>>   qemu_interface: Added ebpf helper call.
>>>>   qemu_command: Added ebpf RSS helper call for NIC creation.
>>>>   qemu_hotplug: Added helper call for hotplug NIC.
>>>>   docs: Added descriptions for "rss" and "rss_hash_report"
>>>> configurations.
>>>>
>>>>  docs/formatdomain.rst| 16 +++
>>>>  src/conf/domain_conf.c   | 31 +-
>>>>  src/conf/domain_conf.h   |  2 +
>>>>  src/libvirt_private.syms |  1 +
>>>>  src/qemu/qemu_capabilities.c | 48 +
>>>>  src/qemu/qemu_capabilities.h |  5 +++
>>>>  src/qemu/qemu_command.c  | 46 +++-
>>>>  src/qemu/qemu_command.h  |  2 +
>>>>  src/qemu/qemu_hotplug.c  | 30 -
>>>>  src/qemu/qemu_interface.c| 54 +++
>>>>  src/qemu/qemu_interface.h|  2 +
>>>>  src/qemu/qemu_monitor.c  |  9 
>>>>  src/qemu/qemu_monitor.h  |  3 ++
>>>>  src/qemu/qemu_monitor_json.c | 50 ++
>>>>  src/qemu/qemu_monitor_json.h |  3 ++
>>>>  src/qemu/qemu_validate.c | 16 +++
>>>>  src/util/virsocket.c | 83 
>>>>  src/util/virsocket.h |  2 +
>>>>  18 files changed, 399 insertions(+), 4 deletions(-)
>>>>
>>>> --
>>>> 2.31.1
>>>>
>>>>


Re: [RFC PATCH 00/10] VirtioNet RSS support

2021-10-20 Thread Nikolay Shirokovskiy
Hi, Andrew.

We in Virtuozzo are interested in this functionality too. Do you plan to
continue your work on it?

Nikolay

On Mon, Aug 16, 2021 at 15:00, Andrew Melnichenko :

> Ping
>
> On Wed, Jul 28, 2021 at 11:17 AM Andrew Melnychenko 
> wrote:
>
>> This series of patches add RSS property support for virtio-net-pci.
>>
>> Virtio RSS effectively works with TAP devices, it requires additional
>> vectors for VirtioNet, queues for TAP device, and vCPU cores.
>> Example of device configuration:
>> ```
>> 
>>   
>>   
>>   
>>   
>>   > function="0x0"/>
>> 
>> ```
>>
>> Capability "rss" enables RSS, "rss_hash_report" - enables hashes in
>> vheader.
>> Qemu uses eBPF program as RSS driver.
>> For loading RSS eBPF program, the helper is used.
>> Path to the helper is provided by Qemu through "query-helper-paths" qmp
>> command.
>> The helper "qemu-ebpf-rss-helper" is built with Qemu and may differ from
>> build to build.
>> So it's required that the Qemu should provide a proper helper path.
>> Libvirt would call the helper and receive the program and map fd through
>> unix socket.
>> Fds would be passed to Qemu in "ebpf_rss_fds" property by passing to
>> child process or unix socket.
>> If libvirt would fail at helper call or Qemu didn't provide the path,
>> the Qemu would be launched without "ebpf_rss_fds" property.
>> Without "ebpf_rss_fds" property, Qemu would try to load eBPF program by
>> itself - usually, it would require additional system permissions.
>> Qemu may use "in-qemu" RSS as a fallback option, which will not require
>> system
>> permissions, but doesn't work with vhost TAP.
>>
>> Qemu patches:
>> https://lists.nongnu.org/archive/html/qemu-devel/2021-07/msg03535.html
>>
>> Andrew Melnychenko (10):
>>   domain_conf: Added configs for RSS and Hash report.
>>   qemu_capabilities: Added capabilites for qemu's "rss" and "hash".
>>   qemu_command: Added "rss" and "hash" properties.
>>   virsocket: Added receive for multiple fds.
>>   qemu_capabilities: Added capability for qemu's "ebpf_rss_fds".
>>   qemu_capabilities: Added capability for ebpf helper path.
>>   qemu_interface: Added ebpf helper call.
>>   qemu_command: Added ebpf RSS helper call for NIC creation.
>>   qemu_hotplug: Added helper call for hotplug NIC.
>>   docs: Added descriptions for "rss" and "rss_hash_report"
>> configurations.
>>
>>  docs/formatdomain.rst| 16 +++
>>  src/conf/domain_conf.c   | 31 +-
>>  src/conf/domain_conf.h   |  2 +
>>  src/libvirt_private.syms |  1 +
>>  src/qemu/qemu_capabilities.c | 48 +
>>  src/qemu/qemu_capabilities.h |  5 +++
>>  src/qemu/qemu_command.c  | 46 +++-
>>  src/qemu/qemu_command.h  |  2 +
>>  src/qemu/qemu_hotplug.c  | 30 -
>>  src/qemu/qemu_interface.c| 54 +++
>>  src/qemu/qemu_interface.h|  2 +
>>  src/qemu/qemu_monitor.c  |  9 
>>  src/qemu/qemu_monitor.h  |  3 ++
>>  src/qemu/qemu_monitor_json.c | 50 ++
>>  src/qemu/qemu_monitor_json.h |  3 ++
>>  src/qemu/qemu_validate.c | 16 +++
>>  src/util/virsocket.c | 83 
>>  src/util/virsocket.h |  2 +
>>  18 files changed, 399 insertions(+), 4 deletions(-)
>>
>> --
>> 2.31.1
>>
>>


Re: RFC: revert to external snapshot API

2021-09-16 Thread Nikolay Shirokovskiy
[...]


>
> > > One thing you've missed though is that deletion of snapshots now
> becomes
> > > quite a challenge.
> > >
> >
> > Yeah I did not consider implementing deleting at that moment. However to
> > make
> > external snapshots usable it should be implemented as well.
>
> Yeah, both need to be implemented at the same time.
>
> > Anyway is anybody in Red Hat working on this (considering you and Pavel
> > discussed
> > the topic recently)? If not I would like to proceed with implementation.
>
> We definitely plan to work on it, but I can't give you any time
> estimates yet.
>
> More importantly, since you are interested into this as well, it would
> be great if you could elaborate on how you want to use this if it's
> ready especially any special scenarios.
>
> For us the basic goal is to achieve feature parity between internal and
> external snapshots so that they can be used interchangably and
> eventually prefering external snapshots as the way forward.
>
>
Basically this is what we want to do too. On top of that we want to support
asynchronous snapshot/revert-to-snapshot based on the work in progress in
QEMU being done by Virtuozzo [1]. We have no patches for libvirt yet.

Do you mind if we proceed with the implementation? I guess I can start more
or less right now.

Nikolay


[1] Background Snapshots in QEMU: Towards Asynchronous Revert
https://kvmforum2021.sched.com/event/ke2f?iframe=no


Re: RFC: revert to external snapshot API

2021-09-13 Thread Nikolay Shirokovskiy
On Wed, Sep 8, 2021 at 12:27, Peter Krempa :

> On Tue, Aug 31, 2021 at 16:46:25 +0300, Nikolay Shirokovskiy wrote:
> > Hi, all.
>
> Hi, sorry for the late reply I was on PTO.
>
>
> > I want to implement reverting to external snapshot functionality.
> > I've checked the mailing list history and found some previous attempts
> > (not sure if this is a complete list of them).
> >
> > [1] was done in 2012 by the Redhat team itself. [2] and [3] was done in
> > 2018.
> > Looks like it was not clear how the API should look like.
>
> One additional thing is that me and phrdina started discussing this (in
> person so I can't point you to a discussion) 2 weeks ago.
>
> I'll summarize the points we agreed upon.
>
> > For example we have disk.qcow2 <- disk.snap1 chain <- disk.snap2 chain
> > (after
> > having snap1 and snap2 snapshots). Now we want to revert to snap1
> snapshot.
>
> There's one implementation snag we currently have which complicates
> stuff. Let's expand your above scenario with the snapshot states:
>
>
>                      snap1             snap2
>                        │                 │
>   origin     base.qcow2 ──► snap1.qcow2 ──► snap2.qcow2     present
>
> A rather big set of problems which we will necessarily encounter when
> implementing this comes from the following:
>
> 1) users can modify the VM betwen snapshots or between the last snapshot
> and the abandoned state and this modification (e.g. changing a disk
> image) must be considered to prevent data loss when manipulating the
> images.
>

Hi, Peter!

Can you please explain in more detail what the issue is? For example, if
after snap1 I changed the disk image to some base2.qcow2, then snap1.qcow2 is
expected to be managed by the mgmt (expected to be deleted, I guess). Then I
made snap2. Now I have base.qcow2 (keeping the snap1 state) and a
base2.qcow2 (keeping the snap2 state) <- snap2.qcow2 chain. Thanks to the
metadata I can revert to both snap1 and snap2, as I know the domain XML in
those states and these XMLs reference the right images (snap1 references
base.qcow2 and snap2 references base2.qcow2).


> 2) libvirt doesn't record the XML used to create the snapshot (e.g.
>

It is not quite so. According to
https://libvirt.org/formatsnapshot.html#example
the XML given to the create API is saved in the snapshot metadata. Or am I
missing something?



> snap1.xm) thus we don't actually know what the (historical) snapshot was
> actually recording. Coupled with the fact that the user could have
> changed the VM definition between 'snap1' and 'snap2' we can't even
> infer what the state was.
>

Here too I cannot follow what the issue is. Can you please give an example?


>
>
> > The
> > snapshot state is held by disk.qcow2 image. We can run reverted domain on
> > disk.qcow2 itself but then we lose snap1 (held by disk.qcow2) and snap2
> > (held by disk.snap1). So we definitely should run on some overlay over
> > disk.qcow2. But then what name should be given to overlay? We should have
> > an option for mgmt to specify this name like in case of snapshots itself.
>
> Exactly. Reversion of external snapshots will necessarily require a new
> API, which will take a new "snapshot" XML describing the new overlays as
> you describe below.
>
> In the simple case such as with local files we can use the same
> algorithm for creating overlay filenames as we do when creating
> snapshots but generally we need to give the MGMT the ability to specify
> the overlay name.
>
>
> > The [1] has some discussion on adding extra options to reverting
> > functionality.
> > Like for example if we revert back to snap2 then having the ability to
> run
> > from
> > state in disk.snap2 rather than disk.snap1. My opinion is there is no
> need
> > to
> > as if one wants to revert to the state of disk2.snap2 it can take a
> > snapshot (some
> > snap3).
>
> It's possible to avoid doing a combined "take snapshot" operation
> as long as we have the possibility to take a snapshot and destroy the
> VM as

Re: RFC: revert to external snapshot API

2021-09-08 Thread Nikolay Shirokovskiy
Hi! What do you think of this approach to implementing reverting?

On Tue, Aug 31, 2021 at 16:46, Nikolay Shirokovskiy <
nshirokovs...@virtuozzo.com>:

> Hi, all.
>
> I want to implement reverting to external snapshot functionality.
> I've checked the mailing list history and found some previous attempts
> (not sure if this is a complete list of them).
>
> [1] was done in 2012 by the Redhat team itself. [2] and [3] was done in
> 2018.
> Looks like it was not clear how the API should look like.
>
> For example we have disk.qcow2 <- disk.snap1 chain <- disk.snap2 chain
> (after
> having snap1 and snap2 snapshots). Now we want to revert to snap1
> snapshot. The
> snapshot state is held by disk.qcow2 image. We can run reverted domain on
> disk.qcow2 itself but then we lose snap1 (held by disk.qcow2) and snap2
> (held by disk.snap1). So we definitely should run on some overlay over
> disk.qcow2. But then what name should be given to overlay? We should have
> an option for mgmt to specify this name like in case of snapshots itself.
>
> The [1] has some discussion on adding extra options to reverting
> functionality.
> Like for example if we revert back to snap2 then having the ability to run
> from
> state in disk.snap2 rather than disk.snap1. My opinion is there is no need
> to
> as if one wants to revert to the state of disk2.snap2 it can take a
> snapshot (some
> snap3). At the same time one needs to be aware that revert operation loses
> current state and later one can revert only to the states of snapshots.
> This is the way internal snapshots work and the way one expects external
> snapshots to work too.
>
> The [2] takes an approach of reusing current top image as overlay on
> revert so
> that in the above example disk.snap2 will be overlay over disk.qcow2 on
> reverting to snap1 snapshot. IMHO this is a confusing naming scheme.
>
> The [3] suggests a different scheme for naming images. For example after
> taking
> snapshot snap1 the chain should be disk.snap1 <- disk.qcow2 which looks
> very
> appealing to me. With this naming using the [2] approach is quite natural.
> Implementing this does not look hard even for a running domain but this is
> a big change to API and all mgmt need to be aware of (well it could be done
> safely using a new flag).
>
> Anyway we can go on with current image names. In order to specify overlay
> image name let's introduce new API:
>
> int virDomainRevertToSnapshotXML(virDomainSnapshotPtr snapshot,
>  char *xmlDesc,
>  unsigned int flags)
>
> with XML like:
>
> 
>   
> 
>   
> 
>   
> 
>
> Having an XML looks like a bit overkill right now but I could not
> find a better solution.
>
> If overlay name is omitted the generated name will be like disk.snap1-1 in
> the
> above example to be in alignment with creating a snapshot case. So that
> disk.snap1*
> images hold states derived from snap1 state. We can also support reverting
> to
> external snapshot thru existing virDomainRevertToSnapshot for those who
> rely on
> generated names using this naming scheme.
>
> [1] Re: [PATCHv4 4/4] snapshot: qemu: Implement reverting of external
> snapshots
> https://listman.redhat.com/archives/libvir-list/2012-November/msg00932.html
>
> [2] Re: [PATCH 1/5] snapshot: Implement reverting for external disk
> snapshots
> https://listman.redhat.com/archives/libvir-list/2018-November/msg00338.html
>
> [3] External disk snapshot paths and the revert operation
> https://listman.redhat.com/archives/libvir-list/2018-October/msg01110.html
>


RFC: revert to external snapshot API

2021-08-31 Thread Nikolay Shirokovskiy
Hi, all.

I want to implement reverting to external snapshot functionality.
I've checked the mailing list history and found some previous attempts
(not sure if this is a complete list of them).

[1] was done in 2012 by the Red Hat team itself. [2] and [3] were done in
2018. It looks like it was not clear how the API should look.

For example, we have a disk.qcow2 <- disk.snap1 <- disk.snap2 chain (after
taking the snap1 and snap2 snapshots). Now we want to revert to the snap1
snapshot. The snapshot state is held by the disk.qcow2 image. We can run the
reverted domain on disk.qcow2 itself but then we lose snap1 (held by
disk.qcow2) and snap2 (held by disk.snap1). So we definitely should run on
some overlay over disk.qcow2. But then what name should be given to the
overlay? We should have an option for mgmt to specify this name, like in the
case of the snapshots themselves.

[1] has some discussion on adding extra options to the reverting
functionality, like, for example, having the ability when reverting back to
snap2 to run from the state in disk.snap2 rather than disk.snap1. My opinion
is that there is no need to, as someone who wants to revert to the state of
disk.snap2 can simply take a snapshot of it (some snap3). At the same time
one needs to be aware that the revert operation loses the current state and
later one can revert only to the states of snapshots. This is the way
internal snapshots work and the way one expects external snapshots to work
too.

[2] takes the approach of reusing the current top image as the overlay on
revert, so that in the above example disk.snap2 would be the overlay over
disk.qcow2 on reverting to the snap1 snapshot. IMHO this is a confusing
naming scheme.

[3] suggests a different scheme for naming images. For example, after taking
snapshot snap1 the chain should be disk.snap1 <- disk.qcow2, which looks very
appealing to me. With this naming, using the [2] approach is quite natural.
Implementing this does not look hard even for a running domain, but it is a
big change to the API that all mgmt needs to be aware of (well, it could be
done safely using a new flag).

Anyway, we can go on with the current image names. In order to specify the
overlay image name let's introduce a new API:

int virDomainRevertToSnapshotXML(virDomainSnapshotPtr snapshot,
 char *xmlDesc,
 unsigned int flags)

with XML like:
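
The example below is only a sketch: the element names mirror the existing
snapshot XML, and the disk target and overlay file name are placeholders.

<domainsnapshot>
  <disks>
    <disk name='vda'>
      <source file='/var/lib/libvirt/images/disk.snap1.revert.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>

The per-disk <source> gives the name of the new overlay to create on top of
the image that holds the snap1 state.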

Having an XML here looks like a bit of an overkill right now but I could not
find a better solution.

If the overlay name is omitted, the generated name will be something like
disk.snap1-1 in the above example, to be in alignment with the snapshot
creation case, so that the disk.snap1* images hold states derived from the
snap1 state. We can also support reverting to an external snapshot through
the existing virDomainRevertToSnapshot for those who rely on generated names,
using this naming scheme.

[1] Re: [PATCHv4 4/4] snapshot: qemu: Implement reverting of external
snapshots
https://listman.redhat.com/archives/libvir-list/2012-November/msg00932.html

[2] Re: [PATCH 1/5] snapshot: Implement reverting for external disk
snapshots
https://listman.redhat.com/archives/libvir-list/2018-November/msg00338.html

[3] External disk snapshot paths and the revert operation
https://listman.redhat.com/archives/libvir-list/2018-October/msg01110.html


RFC: nwfilter rule direction clarification

2021-04-08 Thread Nikolay Shirokovskiy
Hi, all!

For the following nwfilter:


  

  


I got the following result:

-A FI-vmec437726363e0 -p tcp -m tcp --sport 22 -j RETURN
-A FO-vmec437726363e0 -p tcp -m tcp --dport 22 -j RETURN
-A HI-vmec437726363e0 -p tcp -m tcp --sport 22 -j RETURN
-A libvirt-host-in -m physdev --physdev-in vmec437726363e0 -g
HI-vmec437726363e0
-A libvirt-in -m physdev --physdev-in vmec437726363e0 -g FI-vmec437726363e0
-A libvirt-in-post -m physdev --physdev-in vmec437726363e0 -j ACCEPT
-A libvirt-out -m physdev --physdev-out vmec437726363e0
--physdev-is-bridged -g FO-vmec437726363e0

It is not clear to me why the rule is added to the FI-* chains. I guess this
filter is supposed to filter only outgoing traffic.

I tested with libvirt-5.6.0 but AFAIU the behaviour upstream is the same.
Also it looks like this behaviour has existed for a long time, so I doubt
it is a bug.

Nikolay


Re: [PATCH 02/23] qemu: factor out qemuValidateDomainBlkdeviotune

2021-03-11 Thread Nikolay Shirokovskiy
On Wed, Mar 3, 2021 at 15:54, Peter Krempa :

> On Mon, Jan 11, 2021 at 12:49:55 +0300, Nikolay Shirokovskiy wrote:
> > It can also be used for validation of input in qemuDomainSetBlockIoTune.
> >
> > Signed-off-by: Nikolay Shirokovskiy 
> > ---
> >  src/qemu/qemu_validate.c | 100
> ++-
> >  src/qemu/qemu_validate.h |   4 ++
> >  2 files changed, 60 insertions(+), 44 deletions(-)
> >
> > diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
> > index eadf3af..6a27565 100644
> > --- a/src/qemu/qemu_validate.c
> > +++ b/src/qemu/qemu_validate.c
> > @@ -2696,6 +2696,61 @@ qemuValidateDomainDeviceDefDiskFrontend(const
> virDomainDiskDef *disk,
> >  }
> >
> >
> > +int
> > +qemuValidateDomainBlkdeviotune(const virDomainBlockIoTuneInfo *iotune,
> > +   virQEMUCapsPtr qemuCaps)
> > +{
>
> The check that group_name must be set along with other fields :
>
> /* group_name by itself is ignored by qemu */
> if (disk->blkdeviotune.group_name &&
> !virDomainBlockIoTuneInfoHasAny(>blkdeviotune)) {
> virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
>_("group_name can be configured only together with "
>  "settings"));
> return -1;
> }
>
>
> also belongs here.
>

I cannot put this check into qemuValidateDomainBlkdeviotune. The thing is,
I'm going to reuse this function in qemuDomainSetBlockIoTune, and in
qemuDomainSetBlockIoTune it is fine to have group_name alone in the input.
This means "move this disk to that io tune group" or "create an io tune
group and put this disk into it", depending on whether an io tune group with
that name already exists (this is what qemuDomainSetBlockIoTuneDefaults
does).
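
For example, in virsh terms (the domain and device names are just
placeholders for illustration):

  # attach the disk to a throttling group without touching any limits
  virsh blkdeviotune demo vda --group-name fastdisks

  # create or update the group together with actual limits
  virsh blkdeviotune demo vda --group-name fastdisks --total-bytes-sec 10485760

The first form is only meaningful at the API level, which is why the
group_name-only check cannot live in the shared validation helper.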


Nikolay



>
>
> > +if (iotune->total_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->read_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->write_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->total_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->read_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->write_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->total_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->read_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->write_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->total_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->read_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->write_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
> > +iotune->size_iops_sec > QEMU_BLOCK_IOTUNE_MAX) {
> > +virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
> > +   _("block I/O throttle limit must "
> > + "be no more than %llu using QEMU"),
> > +   QEMU_BLOCK_IOTUNE_MAX);
> > +return -1;
> > +}
>
> We also nowadays prefer if the error detail strings are not broken up,
> but that's not a required change.
>
> With the group name check moved too:
>
> Reviewed-by: Peter Krempa 
>
>


Re: [PATCH 16/23] conf: get rid of macros in virDomainDiskDefIotuneParse

2021-03-09 Thread Nikolay Shirokovskiy
On Wed, Mar 3, 2021 at 17:06, Peter Krempa :

> On Mon, Jan 11, 2021 at 12:50:09 +0300, Nikolay Shirokovskiy wrote:
> > At first glance we don't get much win because of introduction of
> > virDomainBlockIoTuneFieldNames and virDomainBlockIoTuneFields. But we
> are going
> > to use these two in other places to remove usage of macros too.
> >
> > Signed-off-by: Nikolay Shirokovskiy 
> > ---
> >  src/conf/domain_conf.c | 99
> +++---
> >  1 file changed, 69 insertions(+), 30 deletions(-)
> >
> > diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> > index 800bca5..024d0e3 100644
> > --- a/src/conf/domain_conf.c
> > +++ b/src/conf/domain_conf.c
> > @@ -8695,40 +8695,80 @@
> virDomainBlockIoTuneValidate(virDomainBlockIoTuneInfoPtr iotune)
> >  return 0;
> >  }
> >
> > -#define PARSE_IOTUNE(val) \
> > -if (virXPathULongLong("string(./iotune/" #val ")", \
> > -  ctxt, >blkdeviotune.val) == -2) { \
> > -virReportError(VIR_ERR_XML_ERROR, \
> > -   _("disk iotune field '%s' must be an integer"),
> #val); \
> > -return -1; \
> > -}
> > +
> > +static const char* virDomainBlockIoTuneFieldNames[] = {
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC_MAX,
> > +VIR_DOMAIN_BLOCK_IOTUNE_SIZE_IOPS_SEC,
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX_LENGTH,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC_MAX_LENGTH,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC_MAX_LENGTH,
> > +VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC_MAX_LENGTH,
> > +VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC_MAX_LENGTH,
> > +VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC_MAX_LENGTH,
> > +};
> > +
> > +
> > +static unsigned long long**
> > +virDomainBlockIoTuneFields(virDomainBlockIoTuneInfoPtr iotune)
> > +{
> > +unsigned long long **ret = g_new0(unsigned long long*,
> > +
> G_N_ELEMENTS(virDomainBlockIoTuneFieldNames));
> > +size_t i = 0;
> > +
> > +ret[i++] = >total_bytes_sec;
> > +ret[i++] = >read_bytes_sec;
> > +ret[i++] = >write_bytes_sec;
> > +ret[i++] = >total_iops_sec;
> > +ret[i++] = >read_iops_sec;
> > +ret[i++] = >write_iops_sec;
> > +ret[i++] = >total_bytes_sec_max;
> > +ret[i++] = >read_bytes_sec_max;
> > +ret[i++] = >write_bytes_sec_max;
> > +ret[i++] = >total_iops_sec_max;
> > +ret[i++] = >read_iops_sec_max;
> > +ret[i++] = >write_iops_sec_max;
> > +ret[i++] = >size_iops_sec;
> > +ret[i++] = >total_bytes_sec_max_length;
> > +ret[i++] = >read_bytes_sec_max_length;
> > +ret[i++] = >write_bytes_sec_max_length;
> > +ret[i++] = >total_iops_sec_max_length;
> > +ret[i++] = >read_iops_sec_max_length;
> > +ret[i++] = >write_iops_sec_max_length;
> > +
> > +return ret;
> > +}
> > +
> >
> >  static int
> >  virDomainDiskDefIotuneParse(virDomainDiskDefPtr def,
> >  xmlXPathContextPtr ctxt)
> >  {
> > -PARSE_IOTUNE(total_bytes_sec);
> > -PARSE_IOTUNE(read_bytes_sec);
> > -PARSE_IOTUNE(write_bytes_sec);
> > -PARSE_IOTUNE(total_iops_sec);
> > -PARSE_IOTUNE(read_iops_sec);
> > -PARSE_IOTUNE(write_iops_sec);
> > -
> > -PARSE_IOTUNE(total_bytes_sec_max);
> > -PARSE_IOTUNE(read_bytes_sec_max);
> > -PARSE_IOTUNE(write_bytes_sec_max);
> > -PARSE_IOTUNE(total_iops_sec_max);
> > -PARSE_IOTUNE(read_iops_sec_max);
> > -PARSE_IOTUNE(write_iops_sec_max);
> > -
> > -PARSE_IOTUNE(size_iops_sec);
> > -
> > -PARSE_IOTUNE(total_bytes_sec_max_length);
> > -PARSE_IOTUNE(read_bytes_sec_max_length);
> > -PARSE_IOTUNE(write_bytes_sec_max_length);
> > -PARSE_IOTUNE(total_iops_sec_max_length)

Re: [PATCH 00/23] RFC: get rid of macros when dealing with block io tunes

2021-02-25 Thread Nikolay Shirokovskiy
A ping to save this series from falling through the cracks.

On Wed, Jan 27, 2021 at 11:19, Nikolay Shirokovskiy <
nshirokovs...@virtuozzo.com>:

> Polite ping.
>
> On Mon, Jan 11, 2021 at 12:49 PM Nikolay Shirokovskiy <
> nshirokovs...@virtuozzo.com> wrote:
>
>> Hi, all.
>>
>> I started this work as adding missing parts/fixing issues/etc in block
>> iotune
>> code but then turned to refactoring code. We use a lot of macros in this
>> place
>> and if we get rid of them I believe we will have much more
>> readable/reusable/
>> extendable code.
>>
>> Most of macros usage is for iterating over unsigned long long values. I'm
>> talking about parsing/formating xml, converting from/to
>> virTypedParameterPtr
>> list etc. These places do not care about tune semantics and thus we can
>> add tools for the said iteration. See patch [1] for the first such
>> conversion.
>>
>> Patches before it partially prepare for this conversion, partially just
>> improve code reuse and add missing parts.
>>
>> The work on removing macros in code handling iotunes is not finished as
>> I wanted to get approval that I have taken the right direction. At the
>> same time series shows the potential of the approach (take a look on how
>> qemuDomainSetBlockIoTune and testDomainSetBlockIoTune are looking now!).
>>
>> Some places like qemuDomainSetBlockIoTuneDefaults deal with fields
>> semantics.
>> So we need another approach to remove macros there but it is a matter of
>> a different RFC.
>>
>> Nikolay Shirokovskiy (23):
>>   qemu: pass supportGroupNameOption as expected
>>   qemu: factor out qemuValidateDomainBlkdeviotune
>>   qemu: reuse validation in qemuDomainSetBlockIoTune
>>   qemu: remove extra check for QEMU_BLOCK_IOTUNE_MAX
>>   conf: factor out virDomainBlockIoTuneValidate
>>   qemu: reuse virDomainBlockIoTuneValidate
>>   test driver: reuse virDomainBlockIoTuneValidate
>>   qemu: reset max iotune setting when needed
>>   qemu: add max iotune settings check to virDomainBlockIoTuneValidate
>>   qemu: remove iotune max checks
>>   test driver: remove iotune max checks
>>   qemu: prepare for removing qemuBlockIoTuneSetFlags
>>   qemu: use group_name instead of QEMU_BLOCK_IOTUNE_SET_GROUP_NAME
>>   qemu: remove qemuBlockIoTuneSetFlags usage in qemuDomainSetBlockIoTune
>>   qemu: remove qemuBlockIoTuneSetFlags enum completly
>>   conf: get rid of macros in virDomainDiskDefIotuneParse
>>   [1]
>>   conf: get rid of macros in virDomainDiskDefFormatIotune
>>   conf: add virDomainBlockIoTuneFromParams
>>   conf: add virDomainBlockIoTuneToEventParams
>>   qemu: qemuDomainSetBlockIoTune use functions to convert to/from params
>>   test driver: remove TEST_BLOCK_IOTUNE_MAX checks
>>   test driver: prepare to delete macros in testDomainSetBlockIoTune
>>   test driver: testDomainSetBlockIoTune: use functions to convert
>> to/from params
>>
>>  src/conf/domain_conf.c   | 303
>> +--
>>  src/conf/domain_conf.h   |  16 +++
>>  src/libvirt_private.syms |   3 +
>>  src/qemu/qemu_driver.c   | 281
>> ---
>>  src/qemu/qemu_validate.c | 100 +---
>>  src/qemu/qemu_validate.h |   4 +
>>  src/test/test_driver.c   | 156 ++--
>>  7 files changed, 380 insertions(+), 483 deletions(-)
>>
>> --
>> 1.8.3.1
>>
>>


Re: [PATCH 00/23] RFC: get rid of macros when dealing with block io tunes

2021-01-27 Thread Nikolay Shirokovskiy
Polite ping.

On Mon, Jan 11, 2021 at 12:49 PM Nikolay Shirokovskiy <
nshirokovs...@virtuozzo.com> wrote:

> Hi, all.
>
> I started this work as adding missing parts/fixing issues/etc in block
> iotune
> code but then turned to refactoring code. We use a lot of macros in this
> place
> and if we get rid of them I believe we will have much more
> readable/reusable/
> extendable code.
>
> Most of macros usage is for iterating over unsigned long long values. I'm
> talking about parsing/formating xml, converting from/to
> virTypedParameterPtr
> list etc. These places do not care about tune semantics and thus we can
> add tools for the said iteration. See patch [1] for the first such
> conversion.
>
> Patches before it partially prepare for this conversion, partially just
> improve code reuse and add missing parts.
>
> The work on removing macros in code handling iotunes is not finished as
> I wanted to get approval that I have taken the right direction. At the
> same time series shows the potential of the approach (take a look on how
> qemuDomainSetBlockIoTune and testDomainSetBlockIoTune are looking now!).
>
> Some places like qemuDomainSetBlockIoTuneDefaults deal with fields
> semantics.
> So we need another approach to remove macros there but it is a matter of
> a different RFC.
>
> Nikolay Shirokovskiy (23):
>   qemu: pass supportGroupNameOption as expected
>   qemu: factor out qemuValidateDomainBlkdeviotune
>   qemu: reuse validation in qemuDomainSetBlockIoTune
>   qemu: remove extra check for QEMU_BLOCK_IOTUNE_MAX
>   conf: factor out virDomainBlockIoTuneValidate
>   qemu: reuse virDomainBlockIoTuneValidate
>   test driver: reuse virDomainBlockIoTuneValidate
>   qemu: reset max iotune setting when needed
>   qemu: add max iotune settings check to virDomainBlockIoTuneValidate
>   qemu: remove iotune max checks
>   test driver: remove iotune max checks
>   qemu: prepare for removing qemuBlockIoTuneSetFlags
>   qemu: use group_name instead of QEMU_BLOCK_IOTUNE_SET_GROUP_NAME
>   qemu: remove qemuBlockIoTuneSetFlags usage in qemuDomainSetBlockIoTune
>   qemu: remove qemuBlockIoTuneSetFlags enum completly
>   conf: get rid of macros in virDomainDiskDefIotuneParse
>   [1]
>   conf: get rid of macros in virDomainDiskDefFormatIotune
>   conf: add virDomainBlockIoTuneFromParams
>   conf: add virDomainBlockIoTuneToEventParams
>   qemu: qemuDomainSetBlockIoTune use functions to convert to/from params
>   test driver: remove TEST_BLOCK_IOTUNE_MAX checks
>   test driver: prepare to delete macros in testDomainSetBlockIoTune
>   test driver: testDomainSetBlockIoTune: use functions to convert
> to/from params
>
>  src/conf/domain_conf.c   | 303
> +--
>  src/conf/domain_conf.h   |  16 +++
>  src/libvirt_private.syms |   3 +
>  src/qemu/qemu_driver.c   | 281 ---
>  src/qemu/qemu_validate.c | 100 +---
>  src/qemu/qemu_validate.h |   4 +
>  src/test/test_driver.c   | 156 ++--
>  7 files changed, 380 insertions(+), 483 deletions(-)
>
> --
> 1.8.3.1
>
>


Re: [PATCH] qemu: Avoid crash in qemuStateShutdownPrepare() and qemuStateShutdownWait()

2021-01-22 Thread Nikolay Shirokovskiy
On Fri, Jan 22, 2021 at 2:24 PM Michal Privoznik 
wrote:

> On 1/22/21 12:09 PM, Nikolay Shirokovskiy wrote:
> > On Fri, Jan 22, 2021 at 12:45 PM Michal Privoznik 
> > wrote:
> >
> >> If libvirtd is sent SIGTERM while it is still initializing, it
> >> may crash. The following scenario was observed (using 'stress' to
> >> slow down CPU so much that the window where the problem exists is
> >> bigger):
> >>
> >> 1) The main thread is already executing virNetDaemonRun() and is
> >> in virEventRunDefaultImpl().
> >> 2) The thread that's supposed to run daemonRunStateInit() is
> >> spawned already, but daemonRunStateInit() is in its very early
> >> stage (in the stack trace I see it's executing
> >> virIdentityGetSystem()).
> >>
> >> If SIGTERM (or any other signal that we don't override handler
> >> for) arrives at this point, the main thread jumps out from
> >> virEventRunDefaultImpl() and enters virStateShutdownPrepare()
> >> (via shutdownPrepareCb which was set earlier). This iterates
> >> through stateShutdownPrepare() callbacks of state drivers and
> >> reaching qemuStateShutdownPrepare() eventually only to
> >> dereference qemu_driver. But since thread 2) has not been
> >> scheduled/not proceeded yet, qemu_driver was not allocated yet.
> >>
> >> Solution is simple - just check if qemu_driver is not NULL. But
> >> doing so only in qemuStateShutdownPrepare() would push the
> >> problem down to virStateShutdownWait(), well
> >> qemuStateShutdownWait(). Therefore, duplicate the trick there
> >> too.
> >>
> >
> > I guess this is a partial solution. Initialization may be in a state when
> > qemu_driver is initialized but qemu_driver->workerPool is still NULL
> > for example.
>
> Yes.
>
> >
> > Maybe we'd better delay shutdown until initialization is finished?
>
> I'm not exactly sure how to achieve that. Do you have a hint? Also, part
> of qemu driver state init is autostarting domains (which may take ages).
>

I'm thinking of adding a new variable 'initialized' to virnetdaemon. It can
be set by a call from daemonRunStateInit after initialization is finished.
And we should call shutdownPrepare/shutdownWait only when both 'quit' and
'initialized' are true. It will probably require another pipe pair to wake
up the event loop thread when 'initialized' is set to true.

As initialization can take a lot of time (autostart, as you mentioned) we
can arm finishTimer right when 'quit' is set, without waiting for
initialization to finish. This way we exit ungracefully just as in other
cases when finishing the shutdown takes too much time. Libvirtd is supposed
to handle a crash at any time, so this exit should be fine.
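
Roughly, I mean something like this standalone sketch (names and structure
are purely illustrative, not the actual virnetdaemon code, and the real
implementation would need proper locking):

/* Illustration only: the init thread sets an 'initialized' flag and writes
 * to a self-pipe so the event loop re-checks whether shutdown may start;
 * SIGTERM sets 'quit' the same way. */
#include <poll.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static int wakeup[2];                   /* self-pipe to wake the event loop */
static volatile sig_atomic_t quit;
static volatile sig_atomic_t initialized;

static void on_term(int sig)
{
    (void)sig;
    quit = 1;
    (void)write(wakeup[1], "q", 1);
}

static void *state_init(void *arg)
{
    sleep(2);                           /* stands in for driver init/autostart */
    initialized = 1;
    (void)write(wakeup[1], "i", 1);     /* wake the event loop */
    return arg;
}

int main(void)
{
    pthread_t th;
    struct pollfd fd;

    if (pipe(wakeup) < 0)
        return 1;
    signal(SIGTERM, on_term);
    pthread_create(&th, NULL, state_init, NULL);

    fd.fd = wakeup[0];
    fd.events = POLLIN;
    while (!(quit && initialized)) {    /* shutdown only after init finished */
        char c;
        if (poll(&fd, 1, -1) > 0)
            (void)read(wakeup[0], &c, 1);
    }

    pthread_join(th, NULL);
    printf("run shutdownPrepare/shutdownWait here\n");
    return 0;
}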

Nikolay


Re: [PATCH] qemu: Avoid crash in qemuStateShutdownPrepare() and qemuStateShutdownWait()

2021-01-22 Thread Nikolay Shirokovskiy
On Fri, Jan 22, 2021 at 12:45 PM Michal Privoznik 
wrote:

> If libvirtd is sent SIGTERM while it is still initializing, it
> may crash. The following scenario was observed (using 'stress' to
> slow down CPU so much that the window where the problem exists is
> bigger):
>
> 1) The main thread is already executing virNetDaemonRun() and is
>in virEventRunDefaultImpl().
> 2) The thread that's supposed to run daemonRunStateInit() is
>spawned already, but daemonRunStateInit() is in its very early
>stage (in the stack trace I see it's executing
>virIdentityGetSystem()).
>
> If SIGTERM (or any other signal that we don't override handler
> for) arrives at this point, the main thread jumps out from
> virEventRunDefaultImpl() and enters virStateShutdownPrepare()
> (via shutdownPrepareCb which was set earlier). This iterates
> through stateShutdownPrepare() callbacks of state drivers and
> reaching qemuStateShutdownPrepare() eventually only to
> dereference qemu_driver. But since thread 2) has not been
> scheduled/not proceeded yet, qemu_driver was not allocated yet.
>
> Solution is simple - just check if qemu_driver is not NULL. But
> doing so only in qemuStateShutdownPrepare() would push the
> problem down to virStateShutdownWait(), well
> qemuStateShutdownWait(). Therefore, duplicate the trick there
> too.
>

I guess this is a partial solution. Initialization may be at a point where
qemu_driver is already allocated but qemu_driver->workerPool is still NULL,
for example.

Maybe we'd better delay shutdown until initialization is finished?

Nikolay



>
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1895359#c14
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_driver.c | 6 ++
>  1 file changed, 6 insertions(+)
>
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 027617deef..ca4f366323 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -1072,6 +1072,9 @@ qemuStateStop(void)
>  static int
>  qemuStateShutdownPrepare(void)
>  {
> +if (!qemu_driver)
> +return 0;
> +
>  virThreadPoolStop(qemu_driver->workerPool);
>  return 0;
>  }
> @@ -1091,6 +1094,9 @@ qemuDomainObjStopWorkerIter(virDomainObjPtr vm,
>  static int
>  qemuStateShutdownWait(void)
>  {
> +if (!qemu_driver)
> +return 0;
> +
>  virDomainObjListForEach(qemu_driver->domains, false,
>  qemuDomainObjStopWorkerIter, NULL);
>  virThreadPoolDrain(qemu_driver->workerPool);
> --
> 2.26.2
>
>


[PATCH v2] meson: build vstorage only on linux

2021-01-19 Thread Nikolay Shirokovskiy
This should fix CI error:

../dist-unpack/libvirt-7.1.0/src/storage/storage_backend_vstorage.c:10:10: 
fatal error: 'mntent.h' file not found
#include <mntent.h>
^~

on FreeBSD and macOS.

Signed-off-by: Nikolay Shirokovskiy 
---
 meson.build | 15 +--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/meson.build b/meson.build
index e3e7ff7..d8a63ba 100644
--- a/meson.build
+++ b/meson.build
@@ -1957,8 +1957,19 @@ if conf.has('WITH_LIBVIRTD')
   endif
 
   if not get_option('storage_vstorage').disabled()
-use_storage = true
-conf.set('WITH_STORAGE_VSTORAGE', 1)
+vstorage_enable = true
+if host_machine.system() != 'linux'
+  if get_option('storage_fs').enabled()
+error('Vstorage is supported only on Linux')
+  else
+vstorage_enable = false
+  endif
+endif
+
+if vstorage_enable
+  use_storage = true
+  conf.set('WITH_STORAGE_VSTORAGE', 1)
+endif
   endif
 
   if not get_option('storage_zfs').disabled()
-- 
1.8.3.1



Re: [PATCH] meson: don't build vstorage where mntent.h is not present

2021-01-19 Thread Nikolay Shirokovskiy
On Tue, Jan 19, 2021 at 5:59 PM Pavel Hrdina  wrote:

> On Tue, Jan 19, 2021 at 04:41:35PM +0300, Nikolay Shirokovskiy wrote:
> > This should fix CI error:
> >
> >
>  ../dist-unpack/libvirt-7.1.0/src/storage/storage_backend_vstorage.c:10:10:
> fatal error: 'mntent.h' file not found
> > #include <mntent.h>
> > ^~
> >
> > on freebsd and mac.
> >
> > Signed-off-by: Nikolay Shirokovskiy 
> > ---
> >  meson.build | 16 ++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/meson.build b/meson.build
> > index e3e7ff7..a6b6169 100644
> > --- a/meson.build
> > +++ b/meson.build
> > @@ -1957,8 +1957,20 @@ if conf.has('WITH_LIBVIRTD')
> >endif
> >
> >if not get_option('storage_vstorage').disabled()
> > -use_storage = true
> > -conf.set('WITH_STORAGE_VSTORAGE', 1)
> > +vstorage_enable = true
> > +
> > +if not cc.has_header('mntent.h')
>
> This makes me question if it makes sense to build vstorage for anything
> else then linux? It looks like that on FreeBSD or macOS it will be never
> enabled and we already disable libvirtd and all storage drivers for
> windows so we might as well make this condition
>
> if host_machine.system() != 'linux'
>
> and claim that vstorage is supported only on linux.
>
> I see that the check is inspired by FS storage driver but if mntent.h is
> not available or difficult to get on FreeBSD or macOS we could make it
> easier for users instead of having them trying to get mntent.h.
>

Ok.


>
> > +  if get_option('storage_fs').enabled()
> > +error('<mntent.h> is required for the FS storage driver')
>
> This should probably say "Virtuozzo storage driver".
>
>
Yep, fixing CI in a bit of a hurry :)

Nikolay



>
> > +  else
> > +vstorage_enable = false
> > +  endif
> > +endif
> > +
> > +if vstorage_enable
> > +  use_storage = true
> > +  conf.set('WITH_STORAGE_VSTORAGE', 1)
> > +endif
> >endif
> >
> >if not get_option('storage_zfs').disabled()
> > --
> > 1.8.3.1
> >
>


[PATCH] meson: don't build vstorage where mntent.h is not present

2021-01-19 Thread Nikolay Shirokovskiy
This should fix CI error:

../dist-unpack/libvirt-7.1.0/src/storage/storage_backend_vstorage.c:10:10: 
fatal error: 'mntent.h' file not found
#include <mntent.h>
^~

on FreeBSD and macOS.

Signed-off-by: Nikolay Shirokovskiy 
---
 meson.build | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/meson.build b/meson.build
index e3e7ff7..a6b6169 100644
--- a/meson.build
+++ b/meson.build
@@ -1957,8 +1957,20 @@ if conf.has('WITH_LIBVIRTD')
   endif
 
   if not get_option('storage_vstorage').disabled()
-use_storage = true
-conf.set('WITH_STORAGE_VSTORAGE', 1)
+vstorage_enable = true
+
+if not cc.has_header('mntent.h')
+  if get_option('storage_fs').enabled()
+error('<mntent.h> is required for the FS storage driver')
+  else
+vstorage_enable = false
+  endif
+endif
+
+if vstorage_enable
+  use_storage = true
+  conf.set('WITH_STORAGE_VSTORAGE', 1)
+endif
   endif
 
   if not get_option('storage_zfs').disabled()
-- 
1.8.3.1



Re: [PATCH] meson: fix vstorage driver build

2021-01-19 Thread Nikolay Shirokovskiy
This patch is outdated by
https://www.redhat.com/archives/libvir-list/2021-January/msg00778.html

On Mon, Jan 18, 2021 at 3:02 PM Nikolay Shirokovskiy <
nshirokovs...@virtuozzo.com> wrote:

> It breaks on using '-' in the VSTORAGE-MOUNT definition with:
>
>   In file included from ../config.h:19:0,
>   from ../src/util/viraudit.c:22:
>   ./meson-config.h:180:17: error: ISO C99 requires whitespace after the
> macro name [-Werror]
>   #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
>   ^
>   ./meson-config.h:180:0: error: "VSTORAGE" redefined [-Werror]
>   #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
>   ^
>   ./meson-config.h:178:0: note: this is the location of the previous
> definition
>   #define VSTORAGE "/usr/bin/vstorage"
>   ^
>   #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
> ---
>  meson.build | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meson.build b/meson.build
> index b5277b4..aff2565 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -1974,7 +1974,7 @@ if conf.has('WITH_LIBVIRTD')
>conf.set('WITH_STORAGE_VSTORAGE', 1)
>foreach name : ['vstorage', 'vstorage-mount', 'umount']
> -path = get_variable('@0@_prog'.format(name.underscorify())).path()
> -conf.set_quoted(name.to_upper(), path)
> +conf.set_quoted(name.underscorify().to_upper(), path)
>endforeach
>  endif
>endif
> --
> 1.8.3.1
>
>


[PATCH] vstorage: remove build time checks for runtime binaries

2021-01-19 Thread Nikolay Shirokovskiy
According to the current agreement mentioned on the list recently [1], the
vstorage driver will now be built in the default development environment and
can also be included in CI. This also closes a quite old, abandoned thread on
alternative checks for binaries in the case of this same driver [2].

[1] https://www.redhat.com/archives/libvir-list/2021-January/msg00750.html
[2] https://www.redhat.com/archives/libvir-list/2020-July/msg00697.html

Signed-off-by: Nikolay Shirokovskiy 
---
 meson.build| 22 ++
 src/storage/storage_backend_vstorage.c |  4 ++--
 2 files changed, 4 insertions(+), 22 deletions(-)

diff --git a/meson.build b/meson.build
index b5277b4..e3e7ff7 100644
--- a/meson.build
+++ b/meson.build
@@ -1957,26 +1957,8 @@ if conf.has('WITH_LIBVIRTD')
   endif
 
   if not get_option('storage_vstorage').disabled()
-vstorage_enable = true
-
-foreach name : ['vstorage', 'vstorage-mount', 'umount']
-  set_variable(
-'@0@_prog'.format(name.underscorify()),
-find_program(name, required: get_option('storage_vstorage'), dirs: libvirt_sbin_path)
-  )
-  if not get_variable('@0@_prog'.format(name.underscorify())).found()
-vstorage_enable = false
-  endif
-endforeach
-
-if vstorage_enable
-  use_storage = true
-  conf.set('WITH_STORAGE_VSTORAGE', 1)
-  foreach name : ['vstorage', 'vstorage-mount', 'umount']
-path = get_variable('@0@_prog'.format(name.underscorify())).path()
-conf.set_quoted(name.to_upper(), path)
-  endforeach
-endif
+use_storage = true
+conf.set('WITH_STORAGE_VSTORAGE', 1)
   endif
 
   if not get_option('storage_zfs').disabled()
diff --git a/src/storage/storage_backend_vstorage.c b/src/storage/storage_backend_vstorage.c
index 6cff9f1..7c67407 100644
--- a/src/storage/storage_backend_vstorage.c
+++ b/src/storage/storage_backend_vstorage.c
@@ -65,7 +65,7 @@ virStorageBackendVzPoolStart(virStoragePoolObjPtr pool)
 
 mode = g_strdup_printf("%o", def->target.perms.mode);
 
-cmd = virCommandNewArgList(VSTORAGE_MOUNT,
+cmd = virCommandNewArgList("vstorage-mount",
"-c", def->source.name,
def->target.path,
"-m", mode,
@@ -129,7 +129,7 @@ virStorageBackendVzPoolStop(virStoragePoolObjPtr pool)
 if ((rc = virStorageBackendVzIsMounted(pool)) != 1)
 return rc;
 
-cmd = virCommandNewArgList(UMOUNT, def->target.path, NULL);
+cmd = virCommandNewArgList("umount", def->target.path, NULL);
 return virCommandRun(cmd, NULL);
 }
 
-- 
1.8.3.1



[PATCH] meson: fix vstorage driver build

2021-01-18 Thread Nikolay Shirokovskiy
It breaks on using '-' in the VSTORAGE-MOUNT definition with:

  In file included from ../config.h:19:0,
  from ../src/util/viraudit.c:22:
  ./meson-config.h:180:17: error: ISO C99 requires whitespace after the macro 
name [-Werror]
  #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
  ^
  ./meson-config.h:180:0: error: "VSTORAGE" redefined [-Werror]
  #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
  ^
  ./meson-config.h:178:0: note: this is the location of the previous definition
  #define VSTORAGE "/usr/bin/vstorage"
  ^
  #define VSTORAGE-MOUNT "/usr/bin/vstorage-mount"
---
 meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meson.build b/meson.build
index b5277b4..aff2565 100644
--- a/meson.build
+++ b/meson.build
@@ -1974,7 +1974,7 @@ if conf.has('WITH_LIBVIRTD')
   conf.set('WITH_STORAGE_VSTORAGE', 1)
   foreach name : ['vstorage', 'vstorage-mount', 'umount']
 path = get_variable('@0@_prog'.format(name.underscorify())).path()
-conf.set_quoted(name.to_upper(), path)
+conf.set_quoted(name.underscorify().to_upper(), path)
   endforeach
 endif
   endif
-- 
1.8.3.1



[PATCH 23/23] test driver: testDomainSetBlockIoTune: use functions to convert to/from params

2021-01-11 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/test/test_driver.c | 97 +-
 1 file changed, 9 insertions(+), 88 deletions(-)

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index fba94b9..d0a7040 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -3609,11 +3609,11 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 virDomainObjPtr vm = NULL;
 virDomainDefPtr def = NULL;
 virDomainBlockIoTuneInfo info = {0};
+virDomainBlockIoTuneInfo set_fields = {0};
 virDomainDiskDefPtr conf_disk = NULL;
 virTypedParameterPtr eventParams = NULL;
 int eventNparams = 0;
 int eventMaxparams = 0;
-size_t i;
 int ret = -1;
 
 virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
@@ -3678,96 +3678,17 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 
 info = conf_disk->blkdeviotune;
 
-if (virTypedParamsAddString(&eventParams, &eventNparams, &eventMaxparams,
-VIR_DOMAIN_TUNABLE_BLKDEV_DISK, path) < 0)
+if (virDomainBlockIoTuneFromParams(params, nparams, &info, &set_fields) < 0)
 goto cleanup;
 
-#define SET_IOTUNE_FIELD(FIELD, STR, TUNABLE_STR) \
-if (STREQ(param->field, STR)) { \
-info.FIELD = param->value.ul; \
-if (virTypedParamsAddULLong(&eventParams, &eventNparams, \
-&eventMaxparams, \
-TUNABLE_STR, \
-param->value.ul) < 0) \
-goto cleanup; \
-continue; \
-}
-
-for (i = 0; i < nparams; i++) {
-virTypedParameterPtr param = &params[i];
-
-SET_IOTUNE_FIELD(total_bytes_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC);
-SET_IOTUNE_FIELD(read_bytes_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_READ_BYTES_SEC);
-SET_IOTUNE_FIELD(write_bytes_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_BYTES_SEC);
-SET_IOTUNE_FIELD(total_iops_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_IOPS_SEC);
-SET_IOTUNE_FIELD(read_iops_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_READ_IOPS_SEC);
-SET_IOTUNE_FIELD(write_iops_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_IOPS_SEC);
-
-SET_IOTUNE_FIELD(total_bytes_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(read_bytes_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_READ_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(write_bytes_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(total_iops_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(read_iops_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_READ_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(write_iops_sec_max,
- VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC_MAX,
- VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(size_iops_sec,
- VIR_DOMAIN_BLOCK_IOTUNE_SIZE_IOPS_SEC,
- VIR_DOMAIN_TUNABLE_BLKDEV_SIZE_IOPS_SEC);
-
-if (STREQ(param->field, VIR_DOMAIN_BLOCK_IOTUNE_GROUP_NAME)) {
-info.group_name = g_strdup(param->value.s);
-if (virTypedParamsAddString(&eventParams,
-&eventNparams,
-&eventMaxparams,
-VIR_DOMAIN_TUNABLE_BLKDEV_GROUP_NAME,
-param->value.s) < 0)
-goto cleanup;
-continue;
-}
+if (virDomainBlockIoTuneToEventParams(&info, &set_fields,
+  &eventParams,
+  &eventNparams, &eventMaxparams) < 0)
+goto cleanup;
 
-SET_IOTUNE_FIELD(total_bytes_sec_max_length,
- VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX_LENGTH,
- VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(read_bytes_sec_max_length,
- VIR_DOMAIN

[PATCH 22/23] test driver: prepare to delete macros in testDomainSetBlockIoTune

2021-01-11 Thread Nikolay Shirokovskiy
We are going to delete the macros converting from params and adding event
params. Thus let's make this code conform to the existing
virDomainBlockIoTuneFromParams behaviour.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/test/test_driver.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 58d3d79..fba94b9 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -3677,7 +3677,6 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 }
 
 info = conf_disk->blkdeviotune;
-info.group_name = g_strdup(conf_disk->blkdeviotune.group_name);
 
 if (virTypedParamsAddString(&eventParams, &eventNparams, &eventMaxparams,
 VIR_DOMAIN_TUNABLE_BLKDEV_DISK, path) < 0)
@@ -3739,7 +3738,6 @@ testDomainSetBlockIoTune(virDomainPtr dom,
  VIR_DOMAIN_TUNABLE_BLKDEV_SIZE_IOPS_SEC);
 
 if (STREQ(param->field, VIR_DOMAIN_BLOCK_IOTUNE_GROUP_NAME)) {
-VIR_FREE(info.group_name);
 info.group_name = g_strdup(param->value.s);
 if (virTypedParamsAddString(&eventParams,
 &eventNparams,
@@ -3771,6 +3769,9 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 }
 #undef SET_IOTUNE_FIELD
 
+if (!info.group_name)
+info.group_name = g_strdup(conf_disk->blkdeviotune.group_name);
+
 if (virDomainBlockIoTuneValidate(&info) < 0)
 goto cleanup;
 
-- 
1.8.3.1



[PATCH 21/23] test driver: remove TEST_BLOCK_IOTUNE_MAX checks

2021-01-11 Thread Nikolay Shirokovskiy
The check is copied from the qemu driver, I guess, and does not make much
sense for the test driver. This patch is a preparation step to get rid of
macros in this place, and I guess it makes sense just to drop this check
instead of moving it to some function.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/test/test_driver.c | 9 -
 1 file changed, 9 deletions(-)

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 299be2a..58d3d79 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -3599,8 +3599,6 @@ testDomainGetInterfaceParameters(virDomainPtr dom,
 }
 
 
-#define TEST_BLOCK_IOTUNE_MAX 1000LL
-
 static int
 testDomainSetBlockIoTune(virDomainPtr dom,
  const char *path,
@@ -3699,13 +3697,6 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 for (i = 0; i < nparams; i++) {
 virTypedParameterPtr param = &params[i];
 
-if (param->value.ul > TEST_BLOCK_IOTUNE_MAX) {
-virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
-   _("block I/O throttle limit value must"
- " be no more than %llu"), TEST_BLOCK_IOTUNE_MAX);
-goto cleanup;
-}
-
 SET_IOTUNE_FIELD(total_bytes_sec,
  VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC,
  VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC);
-- 
1.8.3.1



[PATCH 20/23] qemu: qemuDomainSetBlockIoTune use functions to convert to/from params

2021-01-11 Thread Nikolay Shirokovskiy
Best viewed with --patience.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 72 ++
 1 file changed, 8 insertions(+), 64 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 29c93de..8bcb876 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16039,7 +16039,6 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 g_autofree char *drivealias = NULL;
 const char *qdevid = NULL;
 int ret = -1;
-size_t i;
 virDomainDiskDefPtr conf_disk = NULL;
 virDomainDiskDefPtr disk;
 virDomainBlockIoTuneInfo set_fields;
@@ -16121,72 +16120,17 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 if (virDomainObjGetDefs(vm, flags, &def, &persistentDef) < 0)
 goto endjob;
 
-if (virTypedParamsAddString(&eventParams, &eventNparams, &eventMaxparams,
-VIR_DOMAIN_TUNABLE_BLKDEV_DISK, path) < 0)
+if (virDomainBlockIoTuneFromParams(params, nparams, &info, &set_fields) < 0)
 goto endjob;
 
-#define SET_IOTUNE_FIELD(FIELD, CONST) \
-if (STREQ(param->field, VIR_DOMAIN_BLOCK_IOTUNE_##CONST)) { \
-info.FIELD = param->value.ul; \
-set_fields.FIELD = 1; \
-if (virTypedParamsAddULLong(&eventParams, &eventNparams, \
-&eventMaxparams, \
-VIR_DOMAIN_TUNABLE_BLKDEV_##CONST, \
-param->value.ul) < 0) \
-goto endjob; \
-continue; \
-}
-
-for (i = 0; i < nparams; i++) {
-virTypedParameterPtr param = &params[i];
-
-SET_IOTUNE_FIELD(total_bytes_sec, TOTAL_BYTES_SEC);
-SET_IOTUNE_FIELD(read_bytes_sec, READ_BYTES_SEC);
-SET_IOTUNE_FIELD(write_bytes_sec, WRITE_BYTES_SEC);
-SET_IOTUNE_FIELD(total_iops_sec, TOTAL_IOPS_SEC);
-SET_IOTUNE_FIELD(read_iops_sec, READ_IOPS_SEC);
-SET_IOTUNE_FIELD(write_iops_sec, WRITE_IOPS_SEC);
-
-SET_IOTUNE_FIELD(total_bytes_sec_max,
- TOTAL_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(read_bytes_sec_max,
- READ_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(write_bytes_sec_max,
- WRITE_BYTES_SEC_MAX);
-SET_IOTUNE_FIELD(total_iops_sec_max,
- TOTAL_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(read_iops_sec_max,
- READ_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(write_iops_sec_max,
- WRITE_IOPS_SEC_MAX);
-SET_IOTUNE_FIELD(size_iops_sec, SIZE_IOPS_SEC);
-
-/* NB: Cannot use macro since this is a value.s not a value.ul */
-if (STREQ(param->field, VIR_DOMAIN_BLOCK_IOTUNE_GROUP_NAME)) {
-info.group_name = g_strdup(param->value.s);
-if (virTypedParamsAddString(&eventParams, &eventNparams,
-&eventMaxparams,
-VIR_DOMAIN_TUNABLE_BLKDEV_GROUP_NAME,
-param->value.s) < 0)
-goto endjob;
-continue;
-}
-
-SET_IOTUNE_FIELD(total_bytes_sec_max_length,
- TOTAL_BYTES_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(read_bytes_sec_max_length,
- READ_BYTES_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(write_bytes_sec_max_length,
- WRITE_BYTES_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(total_iops_sec_max_length,
- TOTAL_IOPS_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(read_iops_sec_max_length,
- READ_IOPS_SEC_MAX_LENGTH);
-SET_IOTUNE_FIELD(write_iops_sec_max_length,
- WRITE_IOPS_SEC_MAX_LENGTH);
-}
+if (virDomainBlockIoTuneToEventParams(&info, &set_fields,
+  &eventParams,
+  &eventNparams, &eventMaxparams) < 0)
+goto endjob;
 
-#undef SET_IOTUNE_FIELD
+if (virTypedParamsAddString(&eventParams, &eventNparams, &eventMaxparams,
+VIR_DOMAIN_TUNABLE_BLKDEV_DISK, path) < 0)
+goto endjob;
 
 if (virDomainBlockIoTuneValidate(&info) < 0)
 goto endjob;
-- 
1.8.3.1



[PATCH 19/23] conf: add virDomainBlockIoTuneToEventParams

2021-01-11 Thread Nikolay Shirokovskiy
It will be used as a replacement for the macro code adding event params.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c   | 53 
 src/conf/domain_conf.h   |  7 +++
 src/libvirt_private.syms |  1 +
 3 files changed, 61 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index aa6a2a8..7e91654 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -31773,6 +31773,59 @@ virDomainBlockIoTuneFromParams(virTypedParameterPtr params,
 }
 
 
+static const char* virDomainBlockIoTuneFieldEventNames[] = {
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_BYTES_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_BYTES_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_IOPS_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_IOPS_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_IOPS_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_BYTES_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_BYTES_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_IOPS_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_IOPS_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_IOPS_SEC_MAX,
+VIR_DOMAIN_TUNABLE_BLKDEV_SIZE_IOPS_SEC,
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_TUNABLE_BLKDEV_TOTAL_IOPS_SEC_MAX_LENGTH,
+VIR_DOMAIN_TUNABLE_BLKDEV_READ_IOPS_SEC_MAX_LENGTH,
+VIR_DOMAIN_TUNABLE_BLKDEV_WRITE_IOPS_SEC_MAX_LENGTH,
+};
+
+
+int
+virDomainBlockIoTuneToEventParams(virDomainBlockIoTuneInfoPtr iotune,
+  virDomainBlockIoTuneInfoPtr set,
+  virTypedParameterPtr *params,
+  int *nparams,
+  int *maxparams)
+{
+g_autofree unsigned long long **fields = virDomainBlockIoTuneFields(iotune);
+g_autofree unsigned long long **set_fields = virDomainBlockIoTuneFields(set);
+size_t i;
+
+for (i = 0; i < G_N_ELEMENTS(virDomainBlockIoTuneFieldEventNames); i++) {
+const char *name = virDomainBlockIoTuneFieldEventNames[i];
+
+if (*set_fields[i] &&
+virTypedParamsAddULLong(params, nparams, maxparams,
+name, *fields[i]) < 0)
+return -1;
+}
+
+if (iotune->group_name &&
+virTypedParamsAddString(params, nparams, maxparams,
+VIR_DOMAIN_TUNABLE_BLKDEV_GROUP_NAME,
+iotune->group_name) < 0)
+return -1;
+
+return 0;
+}
+
+
 /**
  * virHostdevIsSCSIDevice:
  * @hostdev: host device to check
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index e511939..ea18720 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3955,6 +3955,13 @@ virDomainBlockIoTuneFromParams(virTypedParameterPtr params,
virDomainBlockIoTuneInfoPtr iotune,
virDomainBlockIoTuneInfoPtr set);
 
+int
+virDomainBlockIoTuneToEventParams(virDomainBlockIoTuneInfoPtr iotune,
+  virDomainBlockIoTuneInfoPtr set,
+  virTypedParameterPtr *params,
+  int *nparams,
+  int *maxparams);
+
 bool
 virDomainDriveAddressIsUsedByDisk(const virDomainDef *def,
   virDomainDiskBus bus_type,
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index bc42df0..e71b580 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -237,6 +237,7 @@ virDomainBlockIoTuneInfoHasAny;
 virDomainBlockIoTuneInfoHasBasic;
 virDomainBlockIoTuneInfoHasMax;
 virDomainBlockIoTuneInfoHasMaxLength;
+virDomainBlockIoTuneToEventParams;
 virDomainBlockIoTuneValidate;
 virDomainBootTypeFromString;
 virDomainBootTypeToString;
-- 
1.8.3.1



[PATCH 18/23] conf: add virDomainBlockIoTuneFromParams

2021-01-11 Thread Nikolay Shirokovskiy
It will be used as a replacement for the macro code converting params to the
iotune structure.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c   | 32 
 src/conf/domain_conf.h   |  6 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 39 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index bbe6ae7..aa6a2a8 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -31741,6 +31741,38 @@ virDomainBlockIoTuneInfoEqual(const virDomainBlockIoTuneInfo *a,
 }
 
 
+int
+virDomainBlockIoTuneFromParams(virTypedParameterPtr params,
+   int nparams,
+   virDomainBlockIoTuneInfoPtr iotune,
+   virDomainBlockIoTuneInfoPtr set)
+{
+g_autofree unsigned long long **fields = virDomainBlockIoTuneFields(iotune);
+g_autofree unsigned long long **set_fields = virDomainBlockIoTuneFields(set);
+const char *group_name = NULL;
+size_t i;
+
+for (i = 0; i < G_N_ELEMENTS(virDomainBlockIoTuneFieldNames); i++) {
+const char *name = virDomainBlockIoTuneFieldNames[i];
+int rc;
+
+rc = virTypedParamsGetULLong(params, nparams, name, fields[i]);
+if (rc < 0)
+return -1;
+if (rc == 1)
+*set_fields[i] = 1;
+}
+
+if (virTypedParamsGetString(params, nparams,
+VIR_DOMAIN_BLOCK_IOTUNE_GROUP_NAME,
+&group_name) < 0)
+return -1;
+iotune->group_name = g_strdup(group_name);
+
+return 0;
+}
+
+
 /**
  * virHostdevIsSCSIDevice:
  * @hostdev: host device to check
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 3c42313..e511939 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3949,6 +3949,12 @@ bool
 virDomainBlockIoTuneInfoEqual(const virDomainBlockIoTuneInfo *a,
   const virDomainBlockIoTuneInfo *b);
 
+int
+virDomainBlockIoTuneFromParams(virTypedParameterPtr params,
+   int nparams,
+   virDomainBlockIoTuneInfoPtr iotune,
+   virDomainBlockIoTuneInfoPtr set);
+
 bool
 virDomainDriveAddressIsUsedByDisk(const virDomainDef *def,
   virDomainDiskBus bus_type,
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index bfe3ee7..bc42df0 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -230,6 +230,7 @@ virDomainAudioTypeTypeFromString;
 virDomainAudioTypeTypeToString;
 virDomainBlockedReasonTypeFromString;
 virDomainBlockedReasonTypeToString;
+virDomainBlockIoTuneFromParams;
 virDomainBlockIoTuneInfoCopy;
 virDomainBlockIoTuneInfoEqual;
 virDomainBlockIoTuneInfoHasAny;
-- 
1.8.3.1



[PATCH 17/23] conf: get rid of macros in virDomainDiskDefFormatIotune

2021-01-11 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c | 42 +-
 1 file changed, 9 insertions(+), 33 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 024d0e3..bbe6ae7 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -24225,54 +24225,30 @@ virDomainDiskBackingStoreFormat(virBufferPtr buf,
 }
 
 
-#define FORMAT_IOTUNE(val) \
-if (disk->blkdeviotune.val) { \
-virBufferAsprintf(, "<" #val ">%llu\n", \
-  disk->blkdeviotune.val); \
-}
-
 static void
 virDomainDiskDefFormatIotune(virBufferPtr buf,
  virDomainDiskDefPtr disk)
 {
 g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
+g_autofree unsigned long long **fields =
+virDomainBlockIoTuneFields(&disk->blkdeviotune);
+size_t i;
 
-FORMAT_IOTUNE(total_bytes_sec);
-FORMAT_IOTUNE(read_bytes_sec);
-FORMAT_IOTUNE(write_bytes_sec);
-FORMAT_IOTUNE(total_iops_sec);
-FORMAT_IOTUNE(read_iops_sec);
-FORMAT_IOTUNE(write_iops_sec);
-
-FORMAT_IOTUNE(total_bytes_sec_max);
-FORMAT_IOTUNE(read_bytes_sec_max);
-FORMAT_IOTUNE(write_bytes_sec_max);
-FORMAT_IOTUNE(total_iops_sec_max);
-FORMAT_IOTUNE(read_iops_sec_max);
-FORMAT_IOTUNE(write_iops_sec_max);
+for (i = 0; i < G_N_ELEMENTS(virDomainBlockIoTuneFieldNames); i++) {
+const char *name = virDomainBlockIoTuneFieldNames[i];
 
-if (disk->blkdeviotune.size_iops_sec) {
-virBufferAsprintf(, "%llu\n",
-  disk->blkdeviotune.size_iops_sec);
+if (*fields[i])
+virBufferAsprintf(, "<%s>%llu\n",
+  name, *fields[i], name);
 }
 
-if (disk->blkdeviotune.group_name) {
+if (disk->blkdeviotune.group_name)
 virBufferEscapeString(, "%s\n",
   disk->blkdeviotune.group_name);
-}
-
-FORMAT_IOTUNE(total_bytes_sec_max_length);
-FORMAT_IOTUNE(read_bytes_sec_max_length);
-FORMAT_IOTUNE(write_bytes_sec_max_length);
-FORMAT_IOTUNE(total_iops_sec_max_length);
-FORMAT_IOTUNE(read_iops_sec_max_length);
-FORMAT_IOTUNE(write_iops_sec_max_length);
 
 virXMLFormatElement(buf, "iotune", NULL, &childBuf);
 }
 
-#undef FORMAT_IOTUNE
-
 
 static void
 virDomainDiskDefFormatDriver(virBufferPtr buf,
-- 
1.8.3.1



[PATCH 16/23] conf: get rid of macros in virDomainDiskDefIotuneParse

2021-01-11 Thread Nikolay Shirokovskiy
At first glance we don't win much from the introduction of
virDomainBlockIoTuneFieldNames and virDomainBlockIoTuneFields. But we are
going to use these two in other places to remove macro usage there too.
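
As a side note, for anyone not familiar with the pattern, the idea boils down
to a names array kept parallel to an array of pointers into the struct; a tiny
standalone illustration (not the libvirt code itself, just the shape of it):

#include <stdio.h>

struct iotune {
    unsigned long long total_bytes_sec;
    unsigned long long read_bytes_sec;
    unsigned long long write_bytes_sec;
};

static const char *field_names[] = {
    "total_bytes_sec",
    "read_bytes_sec",
    "write_bytes_sec",
};

/* Parallel to field_names: pointers to the corresponding struct fields,
 * so parsing, formatting and param conversion can share one loop. */
static void field_ptrs(struct iotune *t, unsigned long long *ptrs[3])
{
    ptrs[0] = &t->total_bytes_sec;
    ptrs[1] = &t->read_bytes_sec;
    ptrs[2] = &t->write_bytes_sec;
}

int main(void)
{
    struct iotune t = { 100, 0, 25 };
    unsigned long long *ptrs[3];
    size_t i;

    field_ptrs(&t, ptrs);
    for (i = 0; i < sizeof(field_names) / sizeof(field_names[0]); i++) {
        if (*ptrs[i])   /* format only the fields that are set */
            printf("<%s>%llu</%s>\n",
                   field_names[i], *ptrs[i], field_names[i]);
    }
    return 0;
}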

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c | 99 +++---
 1 file changed, 69 insertions(+), 30 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 800bca5..024d0e3 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8695,40 +8695,80 @@ virDomainBlockIoTuneValidate(virDomainBlockIoTuneInfoPtr iotune)
 return 0;
 }
 
-#define PARSE_IOTUNE(val) \
-if (virXPathULongLong("string(./iotune/" #val ")", \
-  ctxt, &def->blkdeviotune.val) == -2) { \
-virReportError(VIR_ERR_XML_ERROR, \
-   _("disk iotune field '%s' must be an integer"), #val); \
-return -1; \
-}
+
+static const char* virDomainBlockIoTuneFieldNames[] = {
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC_MAX,
+VIR_DOMAIN_BLOCK_IOTUNE_SIZE_IOPS_SEC,
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_BYTES_SEC_MAX_LENGTH,
+VIR_DOMAIN_BLOCK_IOTUNE_TOTAL_IOPS_SEC_MAX_LENGTH,
+VIR_DOMAIN_BLOCK_IOTUNE_READ_IOPS_SEC_MAX_LENGTH,
+VIR_DOMAIN_BLOCK_IOTUNE_WRITE_IOPS_SEC_MAX_LENGTH,
+};
+
+
+static unsigned long long**
+virDomainBlockIoTuneFields(virDomainBlockIoTuneInfoPtr iotune)
+{
+unsigned long long **ret = g_new0(unsigned long long*,
+  G_N_ELEMENTS(virDomainBlockIoTuneFieldNames));
+size_t i = 0;
+
+ret[i++] = &iotune->total_bytes_sec;
+ret[i++] = &iotune->read_bytes_sec;
+ret[i++] = &iotune->write_bytes_sec;
+ret[i++] = &iotune->total_iops_sec;
+ret[i++] = &iotune->read_iops_sec;
+ret[i++] = &iotune->write_iops_sec;
+ret[i++] = &iotune->total_bytes_sec_max;
+ret[i++] = &iotune->read_bytes_sec_max;
+ret[i++] = &iotune->write_bytes_sec_max;
+ret[i++] = &iotune->total_iops_sec_max;
+ret[i++] = &iotune->read_iops_sec_max;
+ret[i++] = &iotune->write_iops_sec_max;
+ret[i++] = &iotune->size_iops_sec;
+ret[i++] = &iotune->total_bytes_sec_max_length;
+ret[i++] = &iotune->read_bytes_sec_max_length;
+ret[i++] = &iotune->write_bytes_sec_max_length;
+ret[i++] = &iotune->total_iops_sec_max_length;
+ret[i++] = &iotune->read_iops_sec_max_length;
+ret[i++] = &iotune->write_iops_sec_max_length;
+
+return ret;
+}
+
 
 static int
 virDomainDiskDefIotuneParse(virDomainDiskDefPtr def,
 xmlXPathContextPtr ctxt)
 {
-PARSE_IOTUNE(total_bytes_sec);
-PARSE_IOTUNE(read_bytes_sec);
-PARSE_IOTUNE(write_bytes_sec);
-PARSE_IOTUNE(total_iops_sec);
-PARSE_IOTUNE(read_iops_sec);
-PARSE_IOTUNE(write_iops_sec);
-
-PARSE_IOTUNE(total_bytes_sec_max);
-PARSE_IOTUNE(read_bytes_sec_max);
-PARSE_IOTUNE(write_bytes_sec_max);
-PARSE_IOTUNE(total_iops_sec_max);
-PARSE_IOTUNE(read_iops_sec_max);
-PARSE_IOTUNE(write_iops_sec_max);
-
-PARSE_IOTUNE(size_iops_sec);
-
-PARSE_IOTUNE(total_bytes_sec_max_length);
-PARSE_IOTUNE(read_bytes_sec_max_length);
-PARSE_IOTUNE(write_bytes_sec_max_length);
-PARSE_IOTUNE(total_iops_sec_max_length);
-PARSE_IOTUNE(read_iops_sec_max_length);
-PARSE_IOTUNE(write_iops_sec_max_length);
+g_autofree unsigned long long **fields =
+virDomainBlockIoTuneFields(&def->blkdeviotune);
+size_t i;
+
+for (i = 0; i < G_N_ELEMENTS(virDomainBlockIoTuneFieldNames); i++) {
+const char *name = virDomainBlockIoTuneFieldNames[i];
+g_autofree char *sel = g_strdup_printf("string(./iotune/%s)", name);
+
+if (virXPathULongLong(sel, ctxt, fields[i]) == -2) {
+virReportError(VIR_ERR_XML_ERROR,
+   _("disk iotune field '%s' must be an integer"),
+   name);
+return -1;
+}
+}
 
 def->blkdeviotune.group_name =
 virXPathString("string(./iotune/group_name)", ctxt);
@@ -8738,7 +8778,6 @@ virDomainDiskDefIotuneParse(virDomainDiskDefPtr def,
 
 return 0;
 }
-#undef PARSE_IOTUNE
 
 
 static int
-- 
1.8.3.1



[PATCH 15/23] qemu: remove qemuBlockIoTuneSetFlags enum completely

2021-01-11 Thread Nikolay Shirokovskiy
It is not used anymore.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 11 ---
 1 file changed, 11 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 57d63b6..29c93de 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15869,17 +15869,6 @@ qemuDomainOpenGraphicsFD(virDomainPtr dom,
 return ret;
 }
 
-typedef enum {
-QEMU_BLOCK_IOTUNE_SET_BYTES= 1 << 0,
-QEMU_BLOCK_IOTUNE_SET_IOPS = 1 << 1,
-QEMU_BLOCK_IOTUNE_SET_BYTES_MAX= 1 << 2,
-QEMU_BLOCK_IOTUNE_SET_IOPS_MAX = 1 << 3,
-QEMU_BLOCK_IOTUNE_SET_SIZE_IOPS= 1 << 4,
-QEMU_BLOCK_IOTUNE_SET_GROUP_NAME   = 1 << 5,
-QEMU_BLOCK_IOTUNE_SET_BYTES_MAX_LENGTH = 1 << 6,
-QEMU_BLOCK_IOTUNE_SET_IOPS_MAX_LENGTH  = 1 << 7,
-} qemuBlockIoTuneSetFlags;
-
 
 /* If the user didn't specify bytes limits, inherit previous values;
  * likewise if the user didn't specify iops limits.  */
-- 
1.8.3.1



[PATCH 14/23] qemu: remove qemuBlockIoTuneSetFlags usage in qemuDomainSetBlockIoTune

2021-01-11 Thread Nikolay Shirokovskiy
I'm going to get rid of the macro code in qemuDomainSetBlockIoTune that
converts virTypedParameter parameters into a struct, as part of the overall
effort to reduce/get rid of macros when dealing with iotunes. It will be much
easier to use a per-field structure holding whether each field was set or not
when removing the macros in this place. So let's use just another
virDomainBlockIoTuneInfo for this purpose, where the fields hold boolean
set/unset values.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 100 +++--
 1 file changed, 55 insertions(+), 45 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 61be425..57d63b6 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15886,26 +15886,32 @@ typedef enum {
 static int
 qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
  virDomainBlockIoTuneInfoPtr oldinfo,
- qemuBlockIoTuneSetFlags set_fields)
+ virDomainBlockIoTuneInfoPtr set_fields)
 {
-#define SET_IOTUNE_DEFAULTS(BOOL, FIELD) \
+#define SET_IOTUNE_DEFAULTS(FIELD) \
 do { \
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) { \
+bool set = set_fields->total_##FIELD || \
+   set_fields->read_##FIELD || \
+   set_fields->write_##FIELD; \
+if (!set) { \
 newinfo->total_##FIELD = oldinfo->total_##FIELD; \
 newinfo->read_##FIELD = oldinfo->read_##FIELD; \
 newinfo->write_##FIELD = oldinfo->write_##FIELD; \
 } \
 } while (0)
 
-SET_IOTUNE_DEFAULTS(BYTES, bytes_sec);
-SET_IOTUNE_DEFAULTS(BYTES_MAX, bytes_sec_max);
-SET_IOTUNE_DEFAULTS(IOPS, iops_sec);
-SET_IOTUNE_DEFAULTS(IOPS_MAX, iops_sec_max);
+SET_IOTUNE_DEFAULTS(bytes_sec);
+SET_IOTUNE_DEFAULTS(bytes_sec_max);
+SET_IOTUNE_DEFAULTS(iops_sec);
+SET_IOTUNE_DEFAULTS(iops_sec_max);
 #undef SET_IOTUNE_DEFAULTS
 
-#define RESET_IOTUNE_MAX(BOOL, FIELD) \
+#define RESET_IOTUNE_MAX(FIELD) \
 do { \
-if (set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL) { \
+bool set = set_fields->total_##FIELD || \
+   set_fields->read_##FIELD || \
+   set_fields->write_##FIELD; \
+if (set) { \
 if (!newinfo->total_##FIELD) \
 newinfo->total_##FIELD##_max = 0; \
 if (!newinfo->read_##FIELD) \
@@ -15915,11 +15921,11 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
 } \
 } while (0)
 
-RESET_IOTUNE_MAX(BYTES, bytes_sec);
-RESET_IOTUNE_MAX(IOPS, iops_sec);
+RESET_IOTUNE_MAX(bytes_sec);
+RESET_IOTUNE_MAX(iops_sec);
 #undef RESET_IOTUNE_MAX
 
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_SIZE_IOPS))
+if (!set_fields->size_iops_sec)
 newinfo->size_iops_sec = oldinfo->size_iops_sec;
 if (!newinfo->group_name)
 newinfo->group_name = g_strdup(oldinfo->group_name);
@@ -15936,23 +15942,26 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
  * will cause an error. So, to mimic that, if our oldinfo was set and
  * our newinfo is clearing, then set max_length based on whether we
  * have a value in the family set/defined. */
-#define SET_MAX_LENGTH(BOOL, FIELD) \
+#define SET_MAX_LENGTH(MAX_LENGTH_FIELD, FIELD) \
 do { \
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) \
+bool set = set_fields->total_##MAX_LENGTH_FIELD || \
+   set_fields->read_##MAX_LENGTH_FIELD || \
+   set_fields->write_##MAX_LENGTH_FIELD; \
+if (!set) \
 newinfo->FIELD##_max_length = oldinfo->FIELD##_max_length; \
-else if ((set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL) && \
+else if (set && \
  oldinfo->FIELD##_max_length && \
  !newinfo->FIELD##_max_length) \
 newinfo->FIELD##_max_length = (newinfo->FIELD || \
newinfo->FIELD##_max) ? 1 : 0; \
 } while (0)
 
-SET_MAX_LENGTH(BYTES_MAX_LENGTH, total_bytes_sec);
-SET_MAX_LENGTH(BYTES_MAX_LENGTH, read_bytes_sec);
-SET_MAX_LENGTH(BYTES_MAX_LENGTH, write_bytes_sec);
-SET_MAX_LENGTH(IOPS_MAX_LENGTH, total_iops_sec);
-SET_MAX_LENGTH(IOPS_MAX_LENGTH, read_iops_sec);
-SET_MAX_LENGTH(IOPS_MAX_LENGTH, write_iops_sec);
+SET_MAX_LENGTH(bytes_sec_max_length, total_bytes_sec);
+SET_MAX_LENGTH(bytes_sec_max_length, read_bytes_sec);
+SET_MAX_LENGTH(bytes_sec_max_length, write_bytes_sec);
+SET_MAX_LENGTH(iops_sec_max_length, total_iops_sec);
+SET_MAX_LENGTH(iops_sec_max_length, read_iops_sec);
+SET_MAX_LENGTH(iops_sec_max_length, write_iops_sec);
 
 #undef SET

[PATCH 13/23] qemu: use group_name instead of QEMU_BLOCK_IOTUNE_SET_GROUP_NAME

2021-01-11 Thread Nikolay Shirokovskiy
We need qemuBlockIoTuneSetFlags to distinguish the case when some tune is not
set from the case when it is set explicitly to 0. We don't have the latter
case for the group name (a NULL pointer already means "not given"), so we
don't need QEMU_BLOCK_IOTUNE_SET_GROUP_NAME.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 15ccf90..61be425 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15921,7 +15921,7 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
 
 if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_SIZE_IOPS))
 newinfo->size_iops_sec = oldinfo->size_iops_sec;
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_GROUP_NAME))
+if (!newinfo->group_name)
 newinfo->group_name = g_strdup(oldinfo->group_name);
 
 /* The length field is handled a bit differently. If not defined/set,
@@ -16165,7 +16165,6 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 /* NB: Cannot use macro since this is a value.s not a value.ul */
 if (STREQ(param->field, VIR_DOMAIN_BLOCK_IOTUNE_GROUP_NAME)) {
 info.group_name = g_strdup(param->value.s);
-set_fields |= QEMU_BLOCK_IOTUNE_SET_GROUP_NAME;
 if (virTypedParamsAddString(&eventParams, &eventNparams,
 &eventMaxparams,
 VIR_DOMAIN_TUNABLE_BLKDEV_GROUP_NAME,
-- 
1.8.3.1



[PATCH 12/23] qemu: prepare for removing qemuBlockIoTuneSetFlags

2021-01-11 Thread Nikolay Shirokovskiy
We need an extra variable in the macros, thus let's add do { } while (0) wrapping.
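
For context, the classic reason for the do { } while (0) wrapping: once the
macro body needs its own local variable it becomes a brace block, and a bare
block followed by the caller's ';' breaks if/else chaining. A minimal
standalone illustration (not the libvirt macros themselves):

#include <stdio.h>

static int total;

/* A bare { int tmp = (x); total += tmp; } block would expand to "{...};"
 * and the stray ';' detaches a following else branch, which is a compile
 * error. Wrapping in do { } while (0) keeps the expansion a single
 * statement, so the trailing ';' and if/else nesting work as expected. */
#define BUMP(x) \
    do { \
        int tmp = (x); \
        total += tmp; \
    } while (0)

int main(void)
{
    if (total == 0)
        BUMP(2);
    else
        BUMP(3);    /* compiles fine thanks to do { } while (0) */

    printf("%d\n", total);
    return 0;
}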

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 28 
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 02b28f0..15ccf90 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15889,11 +15889,13 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
  qemuBlockIoTuneSetFlags set_fields)
 {
 #define SET_IOTUNE_DEFAULTS(BOOL, FIELD) \
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) { \
-newinfo->total_##FIELD = oldinfo->total_##FIELD; \
-newinfo->read_##FIELD = oldinfo->read_##FIELD; \
-newinfo->write_##FIELD = oldinfo->write_##FIELD; \
-}
+do { \
+if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) { \
+newinfo->total_##FIELD = oldinfo->total_##FIELD; \
+newinfo->read_##FIELD = oldinfo->read_##FIELD; \
+newinfo->write_##FIELD = oldinfo->write_##FIELD; \
+} \
+} while (0)
 
 SET_IOTUNE_DEFAULTS(BYTES, bytes_sec);
 SET_IOTUNE_DEFAULTS(BYTES_MAX, bytes_sec_max);
@@ -15935,13 +15937,15 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
  * our newinfo is clearing, then set max_length based on whether we
  * have a value in the family set/defined. */
 #define SET_MAX_LENGTH(BOOL, FIELD) \
-if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) \
-newinfo->FIELD##_max_length = oldinfo->FIELD##_max_length; \
-else if ((set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL) && \
- oldinfo->FIELD##_max_length && \
- !newinfo->FIELD##_max_length) \
-newinfo->FIELD##_max_length = (newinfo->FIELD || \
-   newinfo->FIELD##_max) ? 1 : 0;
+do { \
+if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL)) \
+newinfo->FIELD##_max_length = oldinfo->FIELD##_max_length; \
+else if ((set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL) && \
+ oldinfo->FIELD##_max_length && \
+ !newinfo->FIELD##_max_length) \
+newinfo->FIELD##_max_length = (newinfo->FIELD || \
+   newinfo->FIELD##_max) ? 1 : 0; \
+} while (0)
 
 SET_MAX_LENGTH(BYTES_MAX_LENGTH, total_bytes_sec);
 SET_MAX_LENGTH(BYTES_MAX_LENGTH, read_bytes_sec);
-- 
1.8.3.1



[PATCH 10/23] qemu: remove iotune max checks

2021-01-11 Thread Nikolay Shirokovskiy
These checks are now in the above call to virDomainBlockIoTuneValidate.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 29 -
 1 file changed, 29 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 9093baf..02b28f0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16222,35 +16222,6 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 if (qemuDomainCheckBlockIoTuneReset(disk, &info) < 0)
 goto endjob;
 
-#define CHECK_MAX(val, _bool) \
-do { \
-if (info.val##_max) { \
-if (!info.val) { \
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
-   _("value '%s' cannot be set if " \
- "'%s' is not set"), \
-   #val "_max", #val); \
-goto endjob; \
-} \
-if (info.val##_max < info.val) { \
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
-   _("value '%s' cannot be " \
- "smaller than '%s'"), \
-   #val "_max", #val); \
-goto endjob; \
-} \
-} \
-} while (false)
-
-CHECK_MAX(total_bytes_sec, BYTES);
-CHECK_MAX(read_bytes_sec, BYTES);
-CHECK_MAX(write_bytes_sec, BYTES);
-CHECK_MAX(total_iops_sec, IOPS);
-CHECK_MAX(read_iops_sec, IOPS);
-CHECK_MAX(write_iops_sec, IOPS);
-
-#undef CHECK_MAX
-
 /* blockdev-based qemu doesn't want to set the throttling when a cdrom
 * is empty. Skip the monitor call here since we will set the throttling
  * once new media is inserted */
-- 
1.8.3.1



[PATCH 11/23] test driver: remove iotune max checks

2021-01-11 Thread Nikolay Shirokovskiy
These checks are now in the above call to virDomainBlockIoTuneValidate.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/test/test_driver.c | 19 ---
 1 file changed, 19 deletions(-)

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 4eb04d6..299be2a 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -3783,25 +3783,6 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 if (virDomainBlockIoTuneValidate(&info) < 0)
 goto cleanup;
 
-#define TEST_BLOCK_IOTUNE_MAX_CHECK(FIELD, FIELD_MAX) \
-do { \
-if (info.FIELD > info.FIELD_MAX) { \
-virReportError(VIR_ERR_INVALID_ARG, \
-   _("%s cannot be set higher than %s "), \
- #FIELD, #FIELD_MAX); \
-goto cleanup; \
-} \
-} while (0);
-
-TEST_BLOCK_IOTUNE_MAX_CHECK(total_bytes_sec, total_bytes_sec_max);
-TEST_BLOCK_IOTUNE_MAX_CHECK(read_bytes_sec, read_bytes_sec_max);
-TEST_BLOCK_IOTUNE_MAX_CHECK(write_bytes_sec, write_bytes_sec_max);
-TEST_BLOCK_IOTUNE_MAX_CHECK(total_iops_sec, total_iops_sec_max);
-TEST_BLOCK_IOTUNE_MAX_CHECK(read_iops_sec, read_iops_sec_max);
-TEST_BLOCK_IOTUNE_MAX_CHECK(write_iops_sec, write_iops_sec_max);
-
-#undef TEST_BLOCK_IOTUNE_MAX_CHECK
-
 virDomainDiskSetBlockIOTune(conf_disk, );
 info.group_name = NULL;
 
-- 
1.8.3.1



[PATCH 07/23] test driver: reuse virDomainBlockIoTuneValidate

2021-01-11 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/test/test_driver.c | 32 +---
 1 file changed, 1 insertion(+), 31 deletions(-)

diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 29c4c86..4eb04d6 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -3780,38 +3780,8 @@ testDomainSetBlockIoTune(virDomainPtr dom,
 }
 #undef SET_IOTUNE_FIELD
 
-if ((info.total_bytes_sec && info.read_bytes_sec) ||
-(info.total_bytes_sec && info.write_bytes_sec)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of bytes_sec "
- "cannot be set at the same time"));
-goto cleanup;
-}
-
-if ((info.total_iops_sec && info.read_iops_sec) ||
-(info.total_iops_sec && info.write_iops_sec)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of iops_sec "
- "cannot be set at the same time"));
+if (virDomainBlockIoTuneValidate(&info) < 0)
 goto cleanup;
-}
-
-if ((info.total_bytes_sec_max && info.read_bytes_sec_max) ||
-(info.total_bytes_sec_max && info.write_bytes_sec_max)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of bytes_sec_max "
- "cannot be set at the same time"));
-goto cleanup;
-}
-
-if ((info.total_iops_sec_max && info.read_iops_sec_max) ||
-(info.total_iops_sec_max && info.write_iops_sec_max)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of iops_sec_max "
- "cannot be set at the same time"));
-goto cleanup;
-}
-
 
 #define TEST_BLOCK_IOTUNE_MAX_CHECK(FIELD, FIELD_MAX) \
 do { \
-- 
1.8.3.1



[PATCH 09/23] qemu: add max iotune settings check to virDomainBlockIoTuneValidate

2021-01-11 Thread Nikolay Shirokovskiy
Now only the qemu and test drivers support iotunes and for both of them this
check makes sense. I guess there is little chance that this patch will break
loading of some existing domains with an incorrect config. If that turns out
to be an issue we can put this common check in a different place.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 173424a..800bca5 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8663,6 +8663,35 @@ virDomainBlockIoTuneValidate(virDomainBlockIoTuneInfoPtr 
iotune)
 return -1;
 }
 
+#define CHECK_MAX(val) \
+do { \
+if (iotune->val##_max) { \
+if (!iotune->val) { \
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
+   _("value '%s' cannot be set if " \
+ "'%s' is not set"), \
+   #val "_max", #val); \
+return -1; \
+} \
+if (iotune->val##_max < iotune->val) { \
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
+   _("value '%s' cannot be " \
+ "smaller than '%s'"), \
+   #val "_max", #val); \
+return -1; \
+} \
+} \
+} while (false)
+
+CHECK_MAX(total_bytes_sec);
+CHECK_MAX(read_bytes_sec);
+CHECK_MAX(write_bytes_sec);
+CHECK_MAX(total_iops_sec);
+CHECK_MAX(read_iops_sec);
+CHECK_MAX(write_iops_sec);
+
+#undef CHECK_MAX
+
 return 0;
 }
 
-- 
1.8.3.1



[PATCH 08/23] qemu: reset max iotune setting when needed

2021-01-11 Thread Nikolay Shirokovskiy
Currently the API is not very convenient when switching from read/write to
total tunes and back. read/write and total tunes cannot be set simultaneously,
so one has to choose one of them. Now if, for example, total_bytes_sec and
total_bytes_sec_max are set and we set only read_bytes_sec, the API fails. The
issue is that the new max settings are copied from the old ones in this case,
so after the defaults are applied we end up with read_bytes_sec and
total_bytes_sec_max set at the same time, which is an invalid combination.

In order to handle such a situation nicely let's reset the max settings
whenever the corresponding average setting is reset, explicitly or implicitly.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 31 ---
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 117c7b7..9093baf 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15901,6 +15901,22 @@ qemuDomainSetBlockIoTuneDefaults(virDomainBlockIoTuneInfoPtr newinfo,
 SET_IOTUNE_DEFAULTS(IOPS_MAX, iops_sec_max);
 #undef SET_IOTUNE_DEFAULTS
 
+#define RESET_IOTUNE_MAX(BOOL, FIELD) \
+do { \
+if (set_fields & QEMU_BLOCK_IOTUNE_SET_##BOOL) { \
+if (!newinfo->total_##FIELD) \
+newinfo->total_##FIELD##_max = 0; \
+if (!newinfo->read_##FIELD) \
+newinfo->read_##FIELD##_max = 0; \
+if (!newinfo->write_##FIELD) \
+newinfo->write_##FIELD##_max = 0; \
+} \
+} while (0)
+
+RESET_IOTUNE_MAX(BYTES, bytes_sec);
+RESET_IOTUNE_MAX(IOPS, iops_sec);
+#undef RESET_IOTUNE_MAX
+
 if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_SIZE_IOPS))
 newinfo->size_iops_sec = oldinfo->size_iops_sec;
 if (!(set_fields & QEMU_BLOCK_IOTUNE_SET_GROUP_NAME))
@@ -16210,17 +16226,10 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 do { \
 if (info.val##_max) { \
 if (!info.val) { \
-if (QEMU_BLOCK_IOTUNE_SET_##_bool) { \
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
-   _("cannot reset '%s' when " \
- "'%s' is set"), \
-   #val, #val "_max"); \
-} else { \
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
-   _("value '%s' cannot be set if " \
- "'%s' is not set"), \
-   #val "_max", #val); \
-} \
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, \
+   _("value '%s' cannot be set if " \
+ "'%s' is not set"), \
+   #val "_max", #val); \
 goto endjob; \
 } \
 if (info.val##_max < info.val) { \
-- 
1.8.3.1



[PATCH 06/23] qemu: reuse virDomainBlockIoTuneValidate

2021-01-11 Thread Nikolay Shirokovskiy
Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 31 +--
 1 file changed, 1 insertion(+), 30 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 984b45d..117c7b7 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16170,37 +16170,8 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 
 #undef SET_IOTUNE_FIELD
 
-if ((info.total_bytes_sec && info.read_bytes_sec) ||
-(info.total_bytes_sec && info.write_bytes_sec)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of bytes_sec "
- "cannot be set at the same time"));
-goto endjob;
-}
-
-if ((info.total_iops_sec && info.read_iops_sec) ||
-(info.total_iops_sec && info.write_iops_sec)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of iops_sec "
- "cannot be set at the same time"));
-goto endjob;
-}
-
-if ((info.total_bytes_sec_max && info.read_bytes_sec_max) ||
-(info.total_bytes_sec_max && info.write_bytes_sec_max)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of bytes_sec_max "
- "cannot be set at the same time"));
+if (virDomainBlockIoTuneValidate(&info) < 0)
 goto endjob;
-}
-
-if ((info.total_iops_sec_max && info.read_iops_sec_max) ||
-(info.total_iops_sec_max && info.write_iops_sec_max)) {
-virReportError(VIR_ERR_INVALID_ARG, "%s",
-   _("total and read/write of iops_sec_max "
- "cannot be set at the same time"));
-goto endjob;
-}
 
 virDomainBlockIoTuneInfoCopy(&info, &conf_info);
 
-- 
1.8.3.1



[PATCH 05/23] conf: factor out virDomainBlockIoTuneValidate

2021-01-11 Thread Nikolay Shirokovskiy
virDomainBlockIoTuneValidate can be reused in virDomainSetBlockIoTune
implementations.

Also simplify the if conditions.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/conf/domain_conf.c   | 78 +---
 src/conf/domain_conf.h   |  3 ++
 src/libvirt_private.syms |  1 +
 3 files changed, 44 insertions(+), 38 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 349fc28..173424a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8627,6 +8627,45 @@ virDomainDiskBackingStoreParse(xmlXPathContextPtr ctxt,
 return 0;
 }
 
+
+int
+virDomainBlockIoTuneValidate(virDomainBlockIoTuneInfoPtr iotune)
+{
+if (iotune->total_bytes_sec &&
+(iotune->read_bytes_sec || iotune->write_bytes_sec)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write bytes_sec "
+ "cannot be set at the same time"));
+return -1;
+}
+
+if (iotune->total_iops_sec &&
+(iotune->read_iops_sec || iotune->write_iops_sec)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write iops_sec "
+ "cannot be set at the same time"));
+return -1;
+}
+
+if (iotune->total_bytes_sec_max &&
+(iotune->read_bytes_sec_max || iotune->write_bytes_sec_max)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write bytes_sec_max "
+ "cannot be set at the same time"));
+return -1;
+}
+
+if (iotune->total_iops_sec_max &&
+(iotune->read_iops_sec_max || iotune->write_iops_sec_max)) {
+virReportError(VIR_ERR_XML_ERROR, "%s",
+   _("total and read/write iops_sec_max "
+ "cannot be set at the same time"));
+return -1;
+}
+
+return 0;
+}
+
 #define PARSE_IOTUNE(val) \
 if (virXPathULongLong("string(./iotune/" #val ")", \
   ctxt, &def->blkdeviotune.val) == -2) { \
@@ -8665,45 +8704,8 @@ virDomainDiskDefIotuneParse(virDomainDiskDefPtr def,
 def->blkdeviotune.group_name =
 virXPathString("string(./iotune/group_name)", ctxt);
 
-if ((def->blkdeviotune.total_bytes_sec &&
- def->blkdeviotune.read_bytes_sec) ||
-(def->blkdeviotune.total_bytes_sec &&
- def->blkdeviotune.write_bytes_sec)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write bytes_sec "
- "cannot be set at the same time"));
-return -1;
-}
-
-if ((def->blkdeviotune.total_iops_sec &&
- def->blkdeviotune.read_iops_sec) ||
-(def->blkdeviotune.total_iops_sec &&
- def->blkdeviotune.write_iops_sec)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write iops_sec "
- "cannot be set at the same time"));
-return -1;
-}
-
-if ((def->blkdeviotune.total_bytes_sec_max &&
- def->blkdeviotune.read_bytes_sec_max) ||
-(def->blkdeviotune.total_bytes_sec_max &&
- def->blkdeviotune.write_bytes_sec_max)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write bytes_sec_max "
- "cannot be set at the same time"));
+if (virDomainBlockIoTuneValidate(&def->blkdeviotune) < 0)
 return -1;
-}
-
-if ((def->blkdeviotune.total_iops_sec_max &&
- def->blkdeviotune.read_iops_sec_max) ||
-(def->blkdeviotune.total_iops_sec_max &&
- def->blkdeviotune.write_iops_sec_max)) {
-virReportError(VIR_ERR_XML_ERROR, "%s",
-   _("total and read/write iops_sec_max "
- "cannot be set at the same time"));
-return -1;
-}
 
 return 0;
 }
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index ec43bbe..3c42313 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -3963,3 +3963,6 @@ virHostdevIsMdevDevice(const virDomainHostdevDef *hostdev)
 bool
 virHostdevIsVFIODevice(const virDomainHostdevDef *hostdev)
 ATTRIBUTE_NONNULL(1);
+
+int
+virDomainBlockIoTuneValidate(virDomainBlockIoTuneInfoPtr iotune);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index c325040..bfe3ee7 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -236,6 +236,7 @@ virDomainBlockIoTuneInfoHasAny;
 virDomainBlockIoTuneInfoHasBasic;
 virDomainBlockIoTuneInfoHasMax;
 virDomainBlockIoTuneInfoHasMaxLength;
+virDomainBlockIoTuneValidate;
 virDomainBootTypeFromString;
 virDomainBootTypeToString;
 virDomainCapabilitiesPolicyTypeToString;
-- 
1.8.3.1



[PATCH 04/23] qemu: remove extra check for QEMU_BLOCK_IOTUNE_MAX

2021-01-11 Thread Nikolay Shirokovskiy
This is now checked by the call to qemuValidateDomainBlkdeviotune below. Note
that qemuValidateDomainBlkdeviotune does not check the *_max_length values as
we do here. But I guess this is for the better. I tried setting high
*_max_length values and it looks like their limits depend on the *_max values
and are much lower than 1000LL.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7337464..984b45d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16121,13 +16121,6 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 for (i = 0; i < nparams; i++) {
 virTypedParameterPtr param = &params[i];
 
-if (param->value.ul > QEMU_BLOCK_IOTUNE_MAX) {
-virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
-   _("block I/O throttle limit value must"
- " be no more than %llu"), QEMU_BLOCK_IOTUNE_MAX);
-goto endjob;
-}
-
 SET_IOTUNE_FIELD(total_bytes_sec, BYTES, TOTAL_BYTES_SEC);
 SET_IOTUNE_FIELD(read_bytes_sec, BYTES, READ_BYTES_SEC);
 SET_IOTUNE_FIELD(write_bytes_sec, BYTES, WRITE_BYTES_SEC);
-- 
1.8.3.1



[PATCH 03/23] qemu: reuse validation in qemuDomainSetBlockIoTune

2021-01-11 Thread Nikolay Shirokovskiy
There is a small difference between the removed and the reused code in
qemuDomainSetBlockIoTune.

First, the removed code checked 'set_fields' instead of the tune itself.
set_fields is true whenever the corresponding virDomainBlockIoTuneInfoHas* is
true, but additionally it is true when 0 values are passed explicitly. So the
removed code also failed if a reset value was passed and qemu does not support
the reset field. I guess this is not a very useful check and it can be dropped
as a result of this patch. If a field is not supported then it is reported as
0 and resetting it to 0 is just a NOP.

Second, a check for QEMU_BLOCK_IOTUNE_MAX is added, but it is already checked
above and I'm going to remove that check.

Third, the validation is now done even if only changing the persistent config
is requested. This is good because otherwise we would create an invalid config
which cannot be created through the define API due to the existing qemu
validation code.
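
To make the first point concrete, here is a tiny self-contained sketch of the
difference (simplified names, not the actual libvirt macros): a value that is
explicitly reset to 0 still flips the set_fields bit, while a Has*-style check
stays false, so only the former used to trip the capability check.

#include <stdbool.h>
#include <stdio.h>

#define DEMO_SET_BYTES_MAX (1 << 0)

int
main(void)
{
    unsigned long long total_bytes_sec_max = 0;  /* caller explicitly resets to 0 */
    unsigned int set_fields = 0;

    /* SET_IOTUNE_FIELD-style code marks the field as "passed"
     * even though the supplied value is 0 ... */
    set_fields |= DEMO_SET_BYTES_MAX;

    /* ... while a virDomainBlockIoTuneInfoHas*-style check only
     * reacts to non-zero values. */
    bool has_max = total_bytes_sec_max != 0;

    printf("set_fields says a max value was passed: %d\n",
           !!(set_fields & DEMO_SET_BYTES_MAX));
    printf("Has*-style check says a max value is in use: %d\n", has_max);
    return 0;
}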

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 31 ---
 1 file changed, 4 insertions(+), 27 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index b7f22c2..7337464 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -53,6 +53,7 @@
 #include "qemu_namespace.h"
 #include "qemu_saveimage.h"
 #include "qemu_snapshot.h"
+#include "qemu_validate.h"
 
 #include "virerror.h"
 #include "virlog.h"
@@ -16210,6 +16211,9 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 
 virDomainBlockIoTuneInfoCopy(&info, &conf_info);
 
+if (qemuValidateDomainBlkdeviotune(&info, priv->qemuCaps) < 0)
+goto endjob;
+
 if (def) {
 supportMaxOptions = virQEMUCapsGet(priv->qemuCaps,
QEMU_CAPS_DRIVE_IOTUNE_MAX);
@@ -16218,33 +16222,6 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 supportMaxLengthOptions =
 virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX_LENGTH);
 
-if (!supportMaxOptions &&
-(set_fields & (QEMU_BLOCK_IOTUNE_SET_BYTES_MAX |
-   QEMU_BLOCK_IOTUNE_SET_IOPS_MAX |
-   QEMU_BLOCK_IOTUNE_SET_SIZE_IOPS))) {
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-   _("a block I/O throttling parameter is not "
- "supported with this QEMU binary"));
- goto endjob;
-}
-
-if (!supportGroupNameOption &&
-(set_fields & QEMU_BLOCK_IOTUNE_SET_GROUP_NAME)) {
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-   _("the block I/O throttling group parameter is not "
- "supported with this QEMU binary"));
- goto endjob;
-}
-
-if (!supportMaxLengthOptions &&
-(set_fields & (QEMU_BLOCK_IOTUNE_SET_BYTES_MAX_LENGTH |
-   QEMU_BLOCK_IOTUNE_SET_IOPS_MAX_LENGTH))) {
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-   _("a block I/O throttling length parameter is not "
- "supported with this QEMU binary"));
- goto endjob;
-}
-
 if (!(disk = qemuDomainDiskByName(def, path)))
 goto endjob;
 
-- 
1.8.3.1



[PATCH 02/23] qemu: factor out qemuValidateDomainBlkdeviotune

2021-01-11 Thread Nikolay Shirokovskiy
It can also be used for validation of input in qemuDomainSetBlockIoTune.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_validate.c | 100 ++-
 src/qemu/qemu_validate.h |   4 ++
 2 files changed, 60 insertions(+), 44 deletions(-)

diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
index eadf3af..6a27565 100644
--- a/src/qemu/qemu_validate.c
+++ b/src/qemu/qemu_validate.c
@@ -2696,6 +2696,61 @@ qemuValidateDomainDeviceDefDiskFrontend(const virDomainDiskDef *disk,
 }
 
 
+int
+qemuValidateDomainBlkdeviotune(const virDomainBlockIoTuneInfo *iotune,
+   virQEMUCapsPtr qemuCaps)
+{
+if (iotune->total_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->read_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->write_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->total_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->read_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->write_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->total_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->read_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->write_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->total_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->read_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->write_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
+iotune->size_iops_sec > QEMU_BLOCK_IOTUNE_MAX) {
+virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+   _("block I/O throttle limit must "
+ "be no more than %llu using QEMU"),
+   QEMU_BLOCK_IOTUNE_MAX);
+return -1;
+}
+
+/* block I/O throttling 1.7 */
+if (virDomainBlockIoTuneInfoHasMax(iotune) &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX)) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("there are some block I/O throttling parameters "
+ "that are not supported with this QEMU binary"));
+return -1;
+}
+
+/* block I/O group 2.4 */
+if (iotune->group_name &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_GROUP)) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("the block I/O throttling group parameter is "
+ "not supported with this QEMU binary"));
+return -1;
+}
+
+/* block I/O throttling length 2.6 */
+if (virDomainBlockIoTuneInfoHasMaxLength(iotune) &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX_LENGTH)) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("there are some block I/O throttling length 
parameters "
+ "that are not supported with this QEMU binary"));
+return -1;
+}
+
+return 0;
+}
+
+
 /**
  * qemuValidateDomainDeviceDefDiskBlkdeviotune:
  * @disk: disk configuration
@@ -2740,51 +2795,8 @@ qemuValidateDomainDeviceDefDiskBlkdeviotune(const virDomainDiskDef *disk,
 }
 }
 
-if (disk->blkdeviotune.total_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.read_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.write_bytes_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.total_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.read_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.write_iops_sec > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.total_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.read_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.write_bytes_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.total_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.read_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.write_iops_sec_max > QEMU_BLOCK_IOTUNE_MAX ||
-disk->blkdeviotune.size_iops_sec > QEMU_BLOCK_IOTUNE_MAX) {
-virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
-  _("block I/O throttle limit must "
-"be no more than %llu using QEMU"), 
QEMU_BLOCK_IOTUNE_MAX);
-return -1;
-}
-
-/* block I/O throttling 1.7 */
-if (virDomainBlockIoTuneInfoHasMax(&disk->blkdeviotune) &&
-!virQEMUCapsGet(qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX)) {
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-   _("there are some block I/O throttling parameters "
- "that are not supported with this QEMU binary"

[PATCH 01/23] qemu: pass supportGroupNameOption as expected

2021-01-11 Thread Nikolay Shirokovskiy
supportGroupNameOption was originally passed here before the patch below. That
change was not necessary to solve the issue of that patch, and passing
supportGroupNameOption makes sense given the qemuMonitorSetBlockIoThrottle
signature.

With this patch group_name is passed to qemu if the disk is in an iotune group
even when the group is not passed explicitly in the parameters. For qemu it
does not make a difference.

I'm preparing here for the next patch where another usage of
supportGroupNameOption will be gone; instead of deleting the
supportGroupNameOption variable altogether I think it is better to restore its
proper usage.

commit e9d75343d4cf552575a3b305fa00a36ee71e9309
Author: Martin Kletzander 
Date:   Tue Jan 24 15:50:00 2017 +0100

qemu: Only set group_name when actually requested

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_driver.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 027617d..b7f22c2 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -16310,7 +16310,7 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
 qemuDomainObjEnterMonitor(driver, vm);
 ret = qemuMonitorSetBlockIoThrottle(priv->mon, drivealias, qdevid,
 &info, supportMaxOptions,
-set_fields & QEMU_BLOCK_IOTUNE_SET_GROUP_NAME,
+supportGroupNameOption,
 supportMaxLengthOptions);
 if (qemuDomainObjExitMonitor(driver, vm) < 0)
 ret = -1;
-- 
1.8.3.1



[PATCH 00/23] RFC: get rid of macros when dealing with block io tunes

2021-01-11 Thread Nikolay Shirokovskiy
Hi, all.

I started this work as adding missing parts/fixing issues/etc in the block
iotune code but it then turned into refactoring. We use a lot of macros in
this area and if we get rid of them I believe we will have much more
readable/reusable/extendable code.

Most of the macro usage is for iterating over unsigned long long values. I'm
talking about parsing/formatting xml, converting from/to a virTypedParameterPtr
list etc. These places do not care about the tune semantics and thus we can
add tools for the said iteration. See patch [1] for the first such conversion.

The patches before it partially prepare for this conversion, partially just
improve code reuse and add missing parts.

The work on removing macros in the code handling iotunes is not finished yet,
as I wanted to get confirmation that I have taken the right direction. At the
same time the series shows the potential of the approach (take a look at how
qemuDomainSetBlockIoTune and testDomainSetBlockIoTune look now!).

Some places like qemuDomainSetBlockIoTuneDefaults deal with field semantics,
so we need another approach to remove macros there, but that is a matter of
a different RFC.
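
To illustrate the direction, below is a small self-contained sketch of the
table-driven iteration idea. The struct and names are simplified stand-ins,
not the actual helpers added later in the series:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    unsigned long long total_bytes_sec;
    unsigned long long read_bytes_sec;
    unsigned long long write_bytes_sec;
} DemoIoTune;

typedef struct {
    const char *name;   /* parameter / XML element name */
    size_t off;         /* offset of the field inside DemoIoTune */
} DemoIoTuneField;

static const DemoIoTuneField demoFields[] = {
    { "total_bytes_sec", offsetof(DemoIoTune, total_bytes_sec) },
    { "read_bytes_sec",  offsetof(DemoIoTune, read_bytes_sec)  },
    { "write_bytes_sec", offsetof(DemoIoTune, write_bytes_sec) },
};

/* One generic loop replaces N nearly identical macro expansions. */
static void
demoFormat(const DemoIoTune *tune)
{
    size_t i;

    for (i = 0; i < sizeof(demoFields) / sizeof(demoFields[0]); i++) {
        const unsigned long long *val =
            (const unsigned long long *)((const char *)tune + demoFields[i].off);

        if (*val)
            printf("<%s>%llu</%s>\n", demoFields[i].name, *val, demoFields[i].name);
    }
}

int
main(void)
{
    DemoIoTune tune = { .total_bytes_sec = 1000, .write_bytes_sec = 200 };

    demoFormat(&tune);
    return 0;
}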

Nikolay Shirokovskiy (23):
  qemu: pass supportGroupNameOption as expected
  qemu: factor out qemuValidateDomainBlkdeviotune
  qemu: reuse validation in qemuDomainSetBlockIoTune
  qemu: remove extra check for QEMU_BLOCK_IOTUNE_MAX
  conf: factor out virDomainBlockIoTuneValidate
  qemu: reuse virDomainBlockIoTuneValidate
  test driver: reuse virDomainBlockIoTuneValidate
  qemu: reset max iotune setting when needed
  qemu: add max iotune settings check to virDomainBlockIoTuneValidate
  qemu: remove iotune max checks
  test driver: remove iotune max checks
  qemu: prepare for removing qemuBlockIoTuneSetFlags
  qemu: use group_name instead of QEMU_BLOCK_IOTUNE_SET_GROUP_NAME
  qemu: remove qemuBlockIoTuneSetFlags usage in qemuDomainSetBlockIoTune
  qemu: remove qemuBlockIoTuneSetFlags enum completly
  conf: get rid of macros in virDomainDiskDefIotuneParse[1]
  conf: get rid of macros in virDomainDiskDefFormatIotune
  conf: add virDomainBlockIoTuneFromParams
  conf: add virDomainBlockIoTuneToEventParams
  qemu: qemuDomainSetBlockIoTune use functions to convert to/from params
  test driver: remove TEST_BLOCK_IOTUNE_MAX checks
  test driver: prepare to delete macros in testDomainSetBlockIoTune
  test driver: testDomainSetBlockIoTune: use functions to convert
to/from params

 src/conf/domain_conf.c   | 303 +--
 src/conf/domain_conf.h   |  16 +++
 src/libvirt_private.syms |   3 +
 src/qemu/qemu_driver.c   | 281 ---
 src/qemu/qemu_validate.c | 100 +---
 src/qemu/qemu_validate.h |   4 +
 src/test/test_driver.c   | 156 ++--
 7 files changed, 380 insertions(+), 483 deletions(-)

-- 
1.8.3.1



Re: [PATCH v3 1/4] src: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

2021-01-07 Thread Nikolay Shirokovskiy
Sigh.. One more issue with the patch series. I sent a tiny patch for formal
approval.

Nikolay

On Thu, Jan 7, 2021 at 3:00 PM Michal Privoznik  wrote:

> On 1/7/21 12:53 PM, John Ferlan wrote:
> >
> >
> > On 12/18/20 1:56 AM, Nikolay Shirokovskiy wrote:
> >> Otherwise in some places we can mistakenly report 'unsupported' error
> instead
> >> of root cause. So let's handle root cause explicitly from the macro.
> >>
> >> Signed-off-by: Nikolay Shirokovskiy 
> >> ---
> >>   src/libvirt-domain.c | 511
> ++-
> >>   src/libvirt-host.c   |  18 +-
> >>   src/libvirt.c|   7 +-
> >>   3 files changed, 365 insertions(+), 171 deletions(-)
> >>
> >
> > [...]
> >
> >> @@ -3005,8 +3019,11 @@ virDomainMigrateVersion3Full(virDomainPtr domain,
> >>   return NULL;
> >>   params = tmp;
> >>
> >> -if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
> >> -
>  VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION))
> >> +ret = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
> >> +
>  VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION);
> >> +if (ret < 0)
> >> +return NULL;
> >> +if (ret)
> >
> > Coverity complains this is a RESOURCE_LEAK for @tmp (or essentially
> @params)
> >
> > Perhaps the hunk for VIR_DRV_SUPPORTS_FEATURE should go before
> > virTypedParamsCopy or use goto done (similar if !dom_xml)?
> >
>
> Yes, reorder looks good.
>
> Michal
>
>


[PATCH] src: fix resource leak introduced in d4439a6b8

2021-01-07 Thread Nikolay Shirokovskiy
@tmp that was copied just above is leaked on a plain return.
The issue was found by Coverity.

Patch that introduced the leak:
d4439a6b8 : src: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

Signed-off-by: Nikolay Shirokovskiy 
---
 src/libvirt-domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 2f9081a..c9f8ffd 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -3022,7 +3022,7 @@ virDomainMigrateVersion3Full(virDomainPtr domain,
 ret = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION);
 if (ret < 0)
-return NULL;
+goto done;
 if (ret)
 protection = VIR_MIGRATE_CHANGE_PROTECTION;
 
-- 
1.8.3.1



Re: [PATCH v3 3/4] qemu: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

2021-01-06 Thread Nikolay Shirokovskiy
Sorry, it looks like I'm compiling with debug options and missed this error.
It was a deliberate change but the compiler doesn't like it.

The build fix is pushed.

Nikolay

On Wed, Jan 6, 2021 at 5:44 PM Daniel P. Berrangé 
wrote:

> On Fri, Dec 18, 2020 at 09:56:47AM +0300, Nikolay Shirokovskiy wrote:
> > Otherwise in some places we can mistakenly report 'unsupported' error
> instead
> > of root cause. So let's handle root cause explicitly from the macro.
> >
> > Signed-off-by: Nikolay Shirokovskiy 
> > ---
> >  src/qemu/qemu_migration.c | 33 ++---
> >  1 file changed, 22 insertions(+), 11 deletions(-)
> >
> > diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
> > index 90b0ec9..fca21be 100644
> > --- a/src/qemu/qemu_migration.c
> > +++ b/src/qemu/qemu_migration.c
> > @@ -4706,12 +4706,13 @@
> qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
> >  {
> >  int ret = -1;
> >  g_autoptr(virConnect) dconn = NULL;
> > -bool p2p;
> > +int p2p;
> >  virErrorPtr orig_err = NULL;
> >  bool offline = !!(flags & VIR_MIGRATE_OFFLINE);
> > -bool dstOffline = false;
> > +int dstOffline;
>
> Loosing the initializer made the compiler unhappy it seems
>
> ../dist-unpack/libvirt-7.0.0/src/qemu/qemu_migration.c: In function
> ‘qemuMigrationSrcPerformJob’:
> ../dist-unpack/libvirt-7.0.0/src/qemu/qemu_migration.c:4814:20: error:
> ‘dstOffline’ may be used uninitialized in this function
> [-Werror=maybe-uninitialized]
>  4814 | if (offline && !dstOffline) {
>   |^~~
> ../dist-unpack/libvirt-7.0.0/src/qemu/qemu_migration.c:4712:9: note:
> ‘dstOffline’ was declared here
>  4712 | int dstOffline;
>   | ^~
>
>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-
> https://www.instagram.com/dberrange :|
>
>


[PATCH v3 3/4] qemu: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

2020-12-18 Thread Nikolay Shirokovskiy
Otherwise in some places we can mistakenly report an 'unsupported' error
instead of the root cause. So let's explicitly handle the root cause returned
from the macro.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_migration.c | 33 ++---
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 90b0ec9..fca21be 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -4706,12 +4706,13 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
 {
 int ret = -1;
 g_autoptr(virConnect) dconn = NULL;
-bool p2p;
+int p2p;
 virErrorPtr orig_err = NULL;
 bool offline = !!(flags & VIR_MIGRATE_OFFLINE);
-bool dstOffline = false;
+int dstOffline;
 g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-bool useParams;
+int useParams;
+int rc;
 
 VIR_DEBUG("driver=%p, sconn=%p, vm=%p, xmlin=%s, dconnuri=%s, uri=%s, "
   "graphicsuri=%s, listenAddress=%s, nmigrate_disks=%zu, "
@@ -4771,17 +4772,27 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
 qemuDomainObjEnterRemote(vm);
 p2p = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
VIR_DRV_FEATURE_MIGRATION_P2P);
-/* v3proto reflects whether the caller used Perform3, but with
- * p2p migrate, regardless of whether Perform2 or Perform3
- * were used, we decide protocol based on what target supports
- */
-*v3proto = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
-VIR_DRV_FEATURE_MIGRATION_V3);
+if (p2p < 0)
+goto cleanup;
+/* v3proto reflects whether the caller used Perform3, but with
+ * p2p migrate, regardless of whether Perform2 or Perform3
+ * were used, we decide protocol based on what target supports
+ */
+rc = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
+  VIR_DRV_FEATURE_MIGRATION_V3);
+if (rc < 0)
+goto cleanup;
+*v3proto = !!rc;
 useParams = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
  VIR_DRV_FEATURE_MIGRATION_PARAMS);
-if (offline)
+if (useParams < 0)
+goto cleanup;
+if (offline) {
 dstOffline = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
                                       VIR_DRV_FEATURE_MIGRATION_OFFLINE);
+if (dstOffline < 0)
+goto cleanup;
+}
 if (qemuDomainObjExitRemote(vm, !offline) < 0)
 goto cleanup;
 
@@ -4819,7 +4830,7 @@ qemuMigrationSrcPerformPeer2Peer(virQEMUDriverPtr driver,
persist_xml, dname, uri, graphicsuri,
listenAddress, nmigrate_disks, migrate_disks,
nbdPort, nbdURI, migParams, resource,
-useParams, flags);
+!!useParams, flags);
 } else {
 ret = qemuMigrationSrcPerformPeer2Peer2(driver, sconn, dconn, vm,
dconnuri, flags, dname, resource,
-- 
1.8.3.1



[PATCH v3 1/4] src: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

2020-12-18 Thread Nikolay Shirokovskiy
Otherwise in some places we can mistakenly report an 'unsupported' error
instead of the root cause. So let's explicitly handle the root cause returned
from the macro.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/libvirt-domain.c | 511 ++-
 src/libvirt-host.c   |  18 +-
 src/libvirt.c|   7 +-
 3 files changed, 365 insertions(+), 171 deletions(-)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 5edc73e..2f9081a 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -2130,6 +2130,7 @@ virDomainGetMemoryParameters(virDomainPtr domain,
  int *nparams, unsigned int flags)
 {
 virConnectPtr conn;
+int rc;
 
 VIR_DOMAIN_DEBUG(domain, "params=%p, nparams=%d, flags=0x%x",
  params, (nparams) ? *nparams : -1, flags);
@@ -2142,8 +2143,11 @@ virDomainGetMemoryParameters(virDomainPtr domain,
 if (*nparams != 0)
 virCheckNonNullArgGoto(params, error);
 
-if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
- VIR_DRV_FEATURE_TYPED_PARAM_STRING))
+rc = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
+  VIR_DRV_FEATURE_TYPED_PARAM_STRING);
+if (rc < 0)
+goto error;
+if (rc)
 flags |= VIR_TYPED_PARAM_STRING_OKAY;
 
 VIR_EXCLUSIVE_FLAGS_GOTO(VIR_DOMAIN_AFFECT_LIVE,
@@ -2251,6 +2255,7 @@ virDomainGetNumaParameters(virDomainPtr domain,
int *nparams, unsigned int flags)
 {
 virConnectPtr conn;
+int rc;
 
 VIR_DOMAIN_DEBUG(domain, "params=%p, nparams=%d, flags=0x%x",
  params, (nparams) ? *nparams : -1, flags);
@@ -2263,8 +2268,11 @@ virDomainGetNumaParameters(virDomainPtr domain,
 if (*nparams != 0)
 virCheckNonNullArgGoto(params, error);
 
-if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
- VIR_DRV_FEATURE_TYPED_PARAM_STRING))
+rc = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
+  VIR_DRV_FEATURE_TYPED_PARAM_STRING);
+if (rc < 0)
+goto error;
+if (rc)
 flags |= VIR_TYPED_PARAM_STRING_OKAY;
 
 conn = domain->conn;
@@ -2369,6 +2377,7 @@ virDomainGetBlkioParameters(virDomainPtr domain,
 int *nparams, unsigned int flags)
 {
 virConnectPtr conn;
+int rc;
 
 VIR_DOMAIN_DEBUG(domain, "params=%p, nparams=%d, flags=0x%x",
  params, (nparams) ? *nparams : -1, flags);
@@ -2381,8 +2390,11 @@ virDomainGetBlkioParameters(virDomainPtr domain,
 if (*nparams != 0)
 virCheckNonNullArgGoto(params, error);
 
-if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
- VIR_DRV_FEATURE_TYPED_PARAM_STRING))
+rc = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
+  VIR_DRV_FEATURE_TYPED_PARAM_STRING);
+if (rc < 0)
+goto error;
+if (rc)
 flags |= VIR_TYPED_PARAM_STRING_OKAY;
 
 VIR_EXCLUSIVE_FLAGS_GOTO(VIR_DOMAIN_AFFECT_LIVE,
@@ -2851,12 +2863,14 @@ virDomainMigrateVersion2(virDomainPtr domain,
 return NULL;
 }
 
-if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
- VIR_DRV_FEATURE_XML_MIGRATABLE)) {
+ret = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
+   VIR_DRV_FEATURE_XML_MIGRATABLE);
+if (ret < 0)
+return NULL;
+if (ret)
 getxml_flags |= VIR_DOMAIN_XML_MIGRATABLE;
-} else {
+else
 getxml_flags |= VIR_DOMAIN_XML_SECURE | VIR_DOMAIN_XML_UPDATE_CPU;
-}
 
 dom_xml = domain->conn->driver->domainGetXMLDesc(domain, getxml_flags);
 if (!dom_xml)
@@ -3005,8 +3019,11 @@ virDomainMigrateVersion3Full(virDomainPtr domain,
 return NULL;
 params = tmp;
 
-if (VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
- VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION))
+ret = VIR_DRV_SUPPORTS_FEATURE(domain->conn->driver, domain->conn,
+   VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION);
+if (ret < 0)
+return NULL;
+if (ret)
 protection = VIR_MIGRATE_CHANGE_PROTECTION;
 
 VIR_DEBUG("Begin3 %p", domain->conn);
@@ -3403,6 +3420,8 @@ virDomainMigrateUnmanagedParams(virDomainPtr domain,
 int nparams,
 unsigned int flags)
 {
+int rc;
+
 VIR_DOMAIN_DEBUG(domain, "dconnuri=%s, params=%p, nparams=%d, flags=0x%x",
  NULLSTR(dconnuri), params, nparams, flags);
 VIR_TYPED_PARAMS_DEBUG(params, nparams)

[PATCH v3 2/4] libxl: adopt to VIR_DRV_SUPPORTS_FEATURE return -1

2020-12-18 Thread Nikolay Shirokovskiy
Otherwise in some places we can mistakenly report an 'unsupported' error
instead of the root cause. So let's explicitly handle the root cause returned
from the macro.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/libxl/libxl_migration.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/src/libxl/libxl_migration.c b/src/libxl/libxl_migration.c
index 6dc6812..2925aaa 100644
--- a/src/libxl/libxl_migration.c
+++ b/src/libxl/libxl_migration.c
@@ -1146,7 +1146,7 @@ libxlDomainMigrationSrcPerformP2P(libxlDriverPrivatePtr driver,
   unsigned int flags)
 {
 int ret = -1;
-bool useParams;
+int useParams;
 virConnectPtr dconn = NULL;
 virErrorPtr orig_err = NULL;
 libxlDriverConfigPtr cfg = libxlDriverConfigGet(driver);
@@ -1171,9 +1171,11 @@ libxlDomainMigrationSrcPerformP2P(libxlDriverPrivatePtr driver,
  VIR_DRV_FEATURE_MIGRATION_PARAMS);
 virObjectLock(vm);
 
-if (!useParams) {
-virReportError(VIR_ERR_OPERATION_FAILED, "%s",
-   _("Destination libvirt does not support migration with 
extensible parameters"));
+if (useParams <= 0) {
+if (useParams == 0)
+virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+   _("Destination libvirt does not support migration"
+ " with extensible parameters"));
 goto cleanup;
 }
 
-- 
1.8.3.1



[PATCH v3 4/4] src: don't hide error in VIR_DRV_SUPPORTS_FEATURE

2020-12-18 Thread Nikolay Shirokovskiy
Otherwise we can get misleading error messages. One example: when the
connection is broken we get "this function is not supported by the connection
driver: virDomainMigrate3" from virDomainMigrate3.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/driver.h | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/src/driver.h b/src/driver.h
index 6278aa0..2531ac3 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -47,17 +47,14 @@ typedef enum {
  * directly if you don't have to, because it may be NULL, use this macro
  * instead.
  *
- * Note that this treats a possible error returned by drv->supports_feature
- * the same as not supported. If you care about the error, call
- * drv->supports_feature directly.
- *
  * Returns:
- *   != 0  Feature is supported.
+ *   -1Error
+ *   >0Feature is supported.
  *   0 Feature is not supported.
  */
 #define VIR_DRV_SUPPORTS_FEATURE(drv, conn, feature) \
 ((drv)->connectSupportsFeature ? \
-(drv)->connectSupportsFeature((conn), (feature)) > 0 : 0)
+(drv)->connectSupportsFeature((conn), (feature)) : 0)
 
 
 #define __VIR_DRIVER_H_INCLUDES___
-- 
1.8.3.1



[PATCH v3 0/4] handle error in feature testing

2020-12-18 Thread Nikolay Shirokovskiy
Justification is in the last patch.

Diff to v2 [1]:
- unlike the RFC, where the fix was applied only to virMigrate3, now all
  the places where VIR_DRV_SUPPORTS_FEATURE is used are patched to handle
  errors


[1] [RFC PATCH v2] fix error message in virMigrate3 if connection is broken
https://www.redhat.com/archives/libvir-list/2020-September/msg01348.html

Nikolay Shirokovskiy (4):
  src: adopt to VIR_DRV_SUPPORTS_FEATURE return -1
  libxl: adopt to VIR_DRV_SUPPORTS_FEATURE return -1
  qemu: adopt to VIR_DRV_SUPPORTS_FEATURE return -1
  src: don't hide error in VIR_DRV_SUPPORTS_FEATURE

 src/driver.h|   9 +-
 src/libvirt-domain.c| 511 ++--
 src/libvirt-host.c  |  18 +-
 src/libvirt.c   |   7 +-
 src/libxl/libxl_migration.c |  10 +-
 src/qemu/qemu_migration.c   |  33 ++-
 6 files changed, 396 insertions(+), 192 deletions(-)

-- 
1.8.3.1



[PATCH v3] spec: don't touch existing nwfilters on update

2020-12-10 Thread Nikolay Shirokovskiy
Nwfilters can be edited by the user and we don't want to overwrite those
edits. Also the filters in %{datadir} do not have UUIDs and these are
generated on libvirtd start. Thus this patch also fixes the regeneration of
UUIDs on libvirtd update.

Signed-off-by: Nikolay Shirokovskiy 
Reviewed-by: Michal Privoznik 
---

It is a successor to PATCH v2[1].

Diff to v2:
- Misc changes according to Andrea's review

[1] PATCH v2
https://www.redhat.com/archives/libvir-list/2020-December/msg00399.html
 

 libvirt.spec.in | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/libvirt.spec.in b/libvirt.spec.in
index 880051b..f20a1c7 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -1439,9 +1439,13 @@ fi
 rm -rf %{_localstatedir}/lib/rpm-state/libvirt || :
 
 %post daemon-config-nwfilter
-cp %{_datadir}/libvirt/nwfilter/*.xml %{_sysconfdir}/libvirt/nwfilter/
-# libvirt saves these files with mode 600
-chmod 600 %{_sysconfdir}/libvirt/nwfilter/*.xml
+for datadir_file in %{_datadir}/libvirt/nwfilter/*.xml; do
+  sysconfdir_file=%{_sysconfdir}/libvirt/nwfilter/$(basename "$datadir_file")
+  if [ ! -f "$sysconfdir_file" ]; then
+# libvirt saves these files with mode 600
+install -m 0600 "$datadir_file" "$sysconfdir_file"
+  fi
+done
 # Make sure libvirt picks up the new nwfilter defininitons
 mkdir -p %{_localstatedir}/lib/rpm-state/libvirt || :
 touch %{_localstatedir}/lib/rpm-state/libvirt/restart || :
-- 
1.8.3.1



[PATCH v2] spec: don't touch existing nwfilters on update

2020-12-07 Thread Nikolay Shirokovskiy
Nwfilters can be edited by the user and we don't want to overwrite those
edits. Also the filters in %{datadir} do not have UUIDs and these are
generated on libvirtd start. Thus this patch also fixes the regeneration of
UUIDs on libvirtd update.

Signed-off-by: Nikolay Shirokovskiy 
---

It is a successor to the original version of the patch [1]. The discussion can
be found in [2].

Diff to v1:
- just keep existing nwfilters untouched instead of installing the new
  version of a nwfilter while keeping its UUID

[1] https://www.redhat.com/archives/libvir-list/2020-October/msg01357.html
[2] https://www.redhat.com/archives/libvir-list/2020-December/msg00260.html

 libvirt.spec.in | 19 ++-
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/libvirt.spec.in b/libvirt.spec.in
index 880051b..98914ce 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -1439,12 +1439,21 @@ fi
 rm -rf %{_localstatedir}/lib/rpm-state/libvirt || :
 
 %post daemon-config-nwfilter
-cp %{_datadir}/libvirt/nwfilter/*.xml %{_sysconfdir}/libvirt/nwfilter/
-# libvirt saves these files with mode 600
-chmod 600 %{_sysconfdir}/libvirt/nwfilter/*.xml
+restart_daemon=0
+for dfile in %{_datadir}/libvirt/nwfilter/*.xml; do
+  sfile=%{_sysconfdir}/libvirt/nwfilter/`basename $dfile`
+  if [ ! -f "$sfile" ]; then
+cp "$dfile" "$sfile"
+# libvirt saves these files with mode 600
+chmod 600 "$sfile"
+restart_daemon=1
+  fi
+done
 # Make sure libvirt picks up the new nwfilter defininitons
-mkdir -p %{_localstatedir}/lib/rpm-state/libvirt || :
-touch %{_localstatedir}/lib/rpm-state/libvirt/restart || :
+if [ $restart_daemon -eq 1 ]; then
+  mkdir -p %{_localstatedir}/lib/rpm-state/libvirt || :
+  touch %{_localstatedir}/lib/rpm-state/libvirt/restart || :
+fi
 
 %posttrans daemon-config-nwfilter
 if [ -f %{_localstatedir}/lib/rpm-state/libvirt/restart ]; then
-- 
1.8.3.1



Re: [PATCH v2 00/10] qemu: add option to process offloaded legacy blockjob event ealier

2020-12-04 Thread Nikolay Shirokovskiy



On 04.12.2020 19:46, Peter Krempa wrote:
> On Fri, Dec 04, 2020 at 19:28:55 +0300, Nikolay Shirokovskiy wrote:
>>
>>
>> On 04.12.2020 19:24, Peter Krempa wrote:
>>> On Fri, Dec 04, 2020 at 19:00:57 +0300, Nikolay Shirokovskiy wrote:
>>>>
>>>>
>>>> On 04.12.2020 18:52, Peter Krempa wrote:
>>>>> On Fri, Dec 04, 2020 at 18:44:00 +0300, Nikolay Shirokovskiy wrote:
>>>>>> On 04.12.2020 18:11, Peter Krempa wrote:
>>>>>>> On Fri, Nov 13, 2020 at 09:53:28 +0300, Nikolay Shirokovskiy wrote:
>>>>>
>>>>> [...]
>>>>>
>>>>>> Ok. By the way what about the issue described in 
>>>>>>
>>>>>> [PATCH v2 09/10] qemu: fix race on legacy block completion and quering 
>>>>>> stats
>>>>>>
>>>>>> Are modern blockjobs have similar?
>>>>>
>>>>> Modern blockjobs are started with '"auto-dismiss": false' so if you get
>>>>> to the monitor before the thread completing the blockjob finishes it,
>>>>> qemu still reports the status for the job.
>>>>
>>>> But from qemu docs I read that graph changes occur on finalize step and 
>>>> modern
>>>> blockjobs use auto-finalize = true. So at the moment of querying stats
>>>> graph in libvirt and qemu can be different I thought.
>>>
>>> That doesn't matter for the purposte of data returned from
>>> 'qemuDomainGetBlockJobInfo'. QEMU still reports the job, so you will
>>> still see it as running and the XML will look the same as while it was
>>> running.
>>>
>>
>> Yeah, but the 9th patch is not fixing qemuDomainGetBlockJobInfo (as 10th 
>> does).
>> It fixes getting stats. And different graphs in qemu and libvirt affects
>> stats collection.
> 
> Stats are queried via 'node-name' when -blockdev is used and as said,
> removal of the nodes happens only explicitly after we finalize the job.
> 
> Thus you will get stats per the state the backing chain looked before
> the blockjob modified the graph as we report the graph from our internal
> data. Some stats may be stale perhaps, but that's the worst case.

I see, then in the modern approach we won't get a graph mismatch because
we have node-names. Thanks for making it clear!

> 
> In case of stats for pre-blockdev, you will potentially get stats from
> the reconfigured chain which we attempt to map into the old chain. This
> indeed may fail.
> 
> In such case we can plainly not report them, the stats API doesn't
> always guarantee results.
> 
> Even for the case of the stats, the proposed fix is just too invasive
> for something which won't be used for long.
> 

Ok.

Nikolay



Re: [PATCH v2 00/10] qemu: add option to process offloaded legacy blockjob event ealier

2020-12-04 Thread Nikolay Shirokovskiy



On 04.12.2020 19:24, Peter Krempa wrote:
> On Fri, Dec 04, 2020 at 19:00:57 +0300, Nikolay Shirokovskiy wrote:
>>
>>
>> On 04.12.2020 18:52, Peter Krempa wrote:
>>> On Fri, Dec 04, 2020 at 18:44:00 +0300, Nikolay Shirokovskiy wrote:
>>>> On 04.12.2020 18:11, Peter Krempa wrote:
>>>>> On Fri, Nov 13, 2020 at 09:53:28 +0300, Nikolay Shirokovskiy wrote:
>>>
>>> [...]
>>>
>>>> Ok. By the way what about the issue described in 
>>>>
>>>> [PATCH v2 09/10] qemu: fix race on legacy block completion and quering 
>>>> stats
>>>>
>>>> Are modern blockjobs have similar?
>>>
>>> Modern blockjobs are started with '"auto-dismiss": false' so if you get
>>> to the monitor before the thread completing the blockjob finishes it,
>>> qemu still reports the status for the job.
>>
>> But from qemu docs I read that graph changes occur on finalize step and 
>> modern
>> blockjobs use auto-finalize = true. So at the moment of querying stats
>> graph in libvirt and qemu can be different I thought.
> 
> That doesn't matter for the purposte of data returned from
> 'qemuDomainGetBlockJobInfo'. QEMU still reports the job, so you will
> still see it as running and the XML will look the same as while it was
> running.
> 

Yeah, but the 9th patch is not fixing qemuDomainGetBlockJobInfo (as the 10th
does). It fixes getting stats. And different graphs in qemu and libvirt affect
stats collection.

Nikolay

> In fact the images are still in use by qemu at that point as they were
> not blockdev-del-ed. The graph will change though.
> 




Re: [PATCH v2 00/10] qemu: add option to process offloaded legacy blockjob event ealier

2020-12-04 Thread Nikolay Shirokovskiy



On 04.12.2020 18:52, Peter Krempa wrote:
> On Fri, Dec 04, 2020 at 18:44:00 +0300, Nikolay Shirokovskiy wrote:
>> On 04.12.2020 18:11, Peter Krempa wrote:
>>> On Fri, Nov 13, 2020 at 09:53:28 +0300, Nikolay Shirokovskiy wrote:
> 
> [...]
> 
>> Ok. By the way what about the issue described in 
>>
>> [PATCH v2 09/10] qemu: fix race on legacy block completion and quering stats
>>
>> Are modern blockjobs have similar?
> 
> Modern blockjobs are started with '"auto-dismiss": false' so if you get
> to the monitor before the thread completing the blockjob finishes it,
> qemu still reports the status for the job.

But from the qemu docs I read that graph changes occur at the finalize step
and modern blockjobs use auto-finalize = true. So I thought that at the moment
of querying stats the graph in libvirt and qemu can be different.


Nikolay

> 
> The completion is done inside a 'MODIFY' job lock, so other thread won't
> be able to get to the monitor in the time between the job is dismissed
> in qemu and the XML/definition is updated.
> 




Re: [PATCH v2 00/10] qemu: add option to process offloaded legacy blockjob event ealier

2020-12-04 Thread Nikolay Shirokovskiy



On 04.12.2020 18:11, Peter Krempa wrote:
> On Fri, Nov 13, 2020 at 09:53:28 +0300, Nikolay Shirokovskiy wrote:
>> This is successor to [1] but I changed the subject as in the review the patch
>> 'qemu: sync backing chain update in virDomainGetBlockJobInfo' was not
>> considered good one from design POV. However I think the basic patch is 
>> helpful
>> to address similar issues. Look at [*] for example, there it allows to sync
>> backing chains during query block stats.
>>
>> In discussion of [1] I stated that first patch will also allow to get rid of
>> stale block job events on reconnection to qemu. But this will require
>> additional work actually so with this patch series stale events are still
>> present. And in the discussion it was also mentioned by Peter that stale 
>> events
>> are not harmful for legacy blockjobs code. I also removed previous attempts 
>> to
>> eliminate some of stale events.
>>
>> I still keep patch for virDomainGetBlockJobInfo. May be in comparsion to [*]
>> the approach the patch takes will look not so bad :)
>>
>> [1] First version of the patch series
>> https://www.redhat.com/archives/libvir-list/2020-October/msg01133.html
> 
> I had a look at this series and it's just doing too much just to "fix"
> apps which insist on polling especially with pre-blockdev qemu. I simply
> it's not worth adding the complexity you are proposing.
> 
> NACK series.

Ok. By the way what about the issue described in 

[PATCH v2 09/10] qemu: fix race on legacy block completion and quering stats

Do modern blockjobs have a similar issue?

Nikolay

> 
> Instead I propose we add a workaround which will report fake unfinished
> job:
> 
> https://www.redhat.com/archives/libvir-list/2020-December/msg00352.html
> 
> The scope of this is way more limited and it doesn't actually try to
> modify the job code handovers which is very dangerous.
> 
> 
> 
> I had a look at this series and it's just doing too much just to "fix"
> apps which insist on polling especially with pre-blockdev qemu.
> 
> Additionally it adds documentation stating that apps should not poll
> virDomainGetBlockJobInfo.
> 



[PATCH v2 1/2] qemu: support append param on live attaching file chardev

2020-12-02 Thread Nikolay Shirokovskiy
Currently it is simply ignored.

Signed-off-by: Nikolay Shirokovskiy 
---
 src/qemu/qemu_monitor_json.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 47ee1ff..ff03a5a 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -7497,6 +7497,10 @@ qemuMonitorJSONAttachCharDevCommand(const char *chrID,
 backend_type = "file";
+if (virJSONValueObjectAppendString(data, "out", chr->data.file.path) < 0)
 goto cleanup;
+if (virJSONValueObjectAdd(data,
+  "T:append", chr->data.file.append,
+  NULL) < 0)
+goto cleanup;
 break;
 
 case VIR_DOMAIN_CHR_TYPE_DEV:
-- 
1.8.3.1



[PATCH v2 0/2] qemu: support some missing params on live attaching chardev

2020-12-02 Thread Nikolay Shirokovskiy
Specifying a chardev logfile and the file backend append param is currently ignored.

Diff from v1: 

- use existing virJSONValueObjectAdd instead of introducing 
  virJSONValueObjectAppendBooleanTristate.

Nikolay Shirokovskiy (2):
  qemu: support append param on live attaching file chardev
  qemu: support logfile on live attaching chardev

 src/qemu/qemu_monitor_json.c | 11 +++
 1 file changed, 11 insertions(+)

-- 
1.8.3.1


