Re: [libvirt PATCH v2 0/4] Enable copy/paste for vnc displays

2022-05-09 Thread Jonathon Jongsma

On 5/9/22 11:06 AM, Marc-André Lureau wrote:

Hi Jonathon

On Thu, Mar 24, 2022 at 11:26 PM Jonathon Jongsma wrote:


This patch series adds support for the qemu-vdagent character device,
which enables copy/paste between guest and client when using vnc
graphics.

The guest must be configured with something like the following:

    <channel type='qemu-vdagent'>
      <source>
        <clipboard copypaste='yes'/>
        <mouse mode='client'/>
      </source>
      <target type='virtio' name='com.redhat.spice.0'/>
    </channel>

Copy/paste sync requires a vnc client that supports the copy/paste
commands. Currently virt-viewer does not work, but the version of
tigervnc provided by Fedora (executable name 'vncviewer') does.

More details about this device on Gerd's blog:
https://www.kraxel.org/blog/2021/05/qemu-cut-paste/


For now I have left the target to be configurable to match the spicevmc
channel, although Marc-Andre has suggested simply hard-coding it to the
virtio name com.redhat.spice.0
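
For reference, the QEMU command line for such a channel looks roughly
like the example in Gerd's post above (illustrative sketch; the ids are
placeholders):

    qemu-system-x86_64 ... \
        -chardev qemu-vdagent,id=vdagent0,clipboard=on,mouse=off \
        -device virtio-serial-pci \
        -device virtserialport,chardev=vdagent0,name=com.redhat.spice.0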

Changes in v2:
  - change xml syntax to use <clipboard> and <mouse> sub-elements of
    <source>, defined in the same way as they are for the spice display.
  - fix a build failure when apparmor was enabled
  - Add another test for when features are turned off


Is there anything missing to merge this?
thanks


Thanks for the ping on this. I suppose I was waiting to see if anybody 
else had any opinions on whether the virtio name should be configurable 
or not (which you had recommended against), and then it fell off my 
radar for a little bit. I will push it tomorrow if nobody speaks up 
before then.


Jonathon



Re: [libvirt PATCH v2 0/4] Enable copy/paste for vnc displays

2022-05-09 Thread Marc-André Lureau
Hi Jonathon

On Thu, Mar 24, 2022 at 11:26 PM Jonathon Jongsma wrote:

> This patch series adds support for the qemu-vdagent character device,
> which enables copy/paste between guest and client when using vnc
> graphics.
>
> The guest must be configured with something like the following:
>
> <channel type='qemu-vdagent'>
>   <source>
>     <clipboard copypaste='yes'/>
>     <mouse mode='client'/>
>   </source>
>   <target type='virtio' name='com.redhat.spice.0'/>
> </channel>
>
> Copy/paste sync requires a vnc client that supports the copy/paste
> commands. Currently virt-viewer does not work, but the version of
> tigervnc provided by Fedora (executable name 'vncviewer') does.
>
> More details about this device on Gerd's blog:
> https://www.kraxel.org/blog/2021/05/qemu-cut-paste/
>
> For now I have left the target to be configurable to match the spicevmc
> channel, although Marc-Andre has suggested simply hard-coding it to the
> virtio name com.redhat.spice.0
>
> Changes in v2:
>  - change xml syntax to use <clipboard> and <mouse> sub-elements of
>    <source>, defined in the same way as they are for the spice display.
>  - fix a build failure when apparmor was enabled
>  - Add another test for when features are turned off
>
>
Is there anything missing to merge this?
thanks


> Jonathon Jongsma (4):
>   qemu: add capability for qemu-vdagent chardev
>   Rename virDomainGraphicsSpiceMouseMode to virDomainMouseMode
>   conf: add qemu-vdagent channel
>   qemu: add support for qemu-vdagent channel
>
>  docs/formatdomain.rst | 23 ++
>  src/conf/domain_conf.c| 70 +--
>  src/conf/domain_conf.h| 24 ---
>  src/conf/domain_validate.c|  1 +
>  src/conf/schemas/domaincommon.rng | 51 +-
>  src/libvirt_private.syms  |  4 +-
>  src/libxl/libxl_conf.c|  8 +--
>  src/libxl/xen_xl.c| 16 ++---
>  src/qemu/qemu_capabilities.c  |  2 +
>  src/qemu/qemu_capabilities.h  |  1 +
>  src/qemu/qemu_command.c   | 32 +++--
>  src/qemu/qemu_monitor_json.c  | 27 +++
>  src/qemu/qemu_process.c   |  1 +
>  src/qemu/qemu_validate.c  |  9 +++
>  src/security/security_apparmor.c  |  2 +
>  src/security/security_dac.c   |  2 +
>  .../caps_6.1.0.x86_64.xml |  1 +
>  .../caps_6.2.0.aarch64.xml|  1 +
>  .../caps_6.2.0.x86_64.xml |  1 +
>  .../caps_7.0.0.x86_64.xml |  1 +
>  ...l-qemu-vdagent-features.x86_64-latest.args | 41 +++
>  .../channel-qemu-vdagent-features.xml | 37 ++
>  .../channel-qemu-vdagent.x86_64-latest.args   | 41 +++
>  .../qemuxml2argvdata/channel-qemu-vdagent.xml | 37 ++
>  tests/qemuxml2argvtest.c  |  2 +
>  ...el-qemu-vdagent-features.x86_64-latest.xml | 58 +++
>  .../channel-qemu-vdagent.x86_64-latest.xml| 58 +++
>  tests/qemuxml2xmltest.c   |  2 +
>  tests/testutilsqemu.c |  1 +
>  29 files changed, 500 insertions(+), 54 deletions(-)
>  create mode 100644
> tests/qemuxml2argvdata/channel-qemu-vdagent-features.x86_64-latest.args
>  create mode 100644
> tests/qemuxml2argvdata/channel-qemu-vdagent-features.xml
>  create mode 100644
> tests/qemuxml2argvdata/channel-qemu-vdagent.x86_64-latest.args
>  create mode 100644 tests/qemuxml2argvdata/channel-qemu-vdagent.xml
>  create mode 100644
> tests/qemuxml2xmloutdata/channel-qemu-vdagent-features.x86_64-latest.xml
>  create mode 100644
> tests/qemuxml2xmloutdata/channel-qemu-vdagent.x86_64-latest.xml
>
> --
> 2.35.1
>
>
>

-- 
Marc-André Lureau


Re: [libvirt PATCH 0/5] ci: Add an integration test job utilizing upstream QEMU

2022-05-09 Thread Michal Prívozník
On 5/6/22 17:35, Erik Skultety wrote:
> Since QEMU doesn't maintain a spec file upstream, we cannot build RPM
> artifacts as part of the CI as we do for libvirt. Instead of hard-coding the
> build steps for QEMU, though, patch 3/5 pulls in QEMU's CI job template, which
> means we'll remain in sync if QEMU makes changes to its build process.
> 
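
For illustration, pulling another project's job template in GitLab CI
looks like this (a sketch only; the exact QEMU template file referenced
by the series may differ):

    include:
      - project: 'qemu-project/qemu'
        ref: 'master'
        file: '.gitlab-ci.d/buildtest-template.yml'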
> Erik Skultety (5):
>   ci: Separate the integration job template to a separate file
>   ci: Break off the integration_tests template into more templates
>   ci: Introduce a template for upstream QEMU build
>   ci: Add a new integration job template for the upstream QEMU scenario
>   ci: Add a Fedora integration test job utilizing upstream QEMU
> 
>  ci/integration-template.yml | 98 +
>  ci/integration.yml  | 70 +++---
>  2 files changed, 116 insertions(+), 52 deletions(-)
>  create mode 100644 ci/integration-template.yml
> 

Reviewed-by: Michal Privoznik 

Michal



[PATCH RFC 00/10] qemu: Enable SCHED_CORE for domains and helper processes

2022-05-09 Thread Michal Privoznik
The Linux kernel offers a way to mitigate side channel attacks on Hyper
Threads (e.g. MDS and L1TF). Long story short, userspace can define
groups of processes (aka trusted groups) and only processes within one
group can run on sibling Hyper Threads. The group membership is
automatically preserved on fork() and exec().

Now, there is one scenario which I don't cover in my series and I'd like
to hear proposals on: if there are two guests with an odd number of vCPUs
they can no longer run on sibling Hyper Threads because my patches create
a separate group for each QEMU. This is a performance penalty. Ideally, we
would have a knob inside the domain XML that would place two or more
domains into the same trusted group. But since there's no pre-existing
example (of sharing a piece of information between two domains) I've
failed to come up with something usable.

Also, it's worth noting that at the kernel level, group membership is
expressed by a so-called 'cookie' which is effectively a unique UL
number, but there's no API that would "set this number on a given
process", so we may have to go with some abstraction layer.

Michal Prívozník (10):
  qemu_tpm: Make APIs work over a single virDomainTPMDef
  qemu_dbus: Separate PID read code into qemuDBusGetPID
  qemu_vhost_user_gpu: Export qemuVhostUserGPUGetPid()
  qemu_tpm: Expose qemuTPMEmulatorGetPid()
  qemu_virtiofs: Separate PID read code into qemuVirtioFSGetPid
  virprocess: Core Scheduling support
  virCommand: Introduce APIs for core scheduling
  qemu_conf: Introduce a knob to turn off SCHED_CORE
  qemu: Enable SCHED_CORE for domains and helper processes
  qemu: Place helper processes into the same trusted group

 src/libvirt_private.syms   |   6 +
 src/qemu/libvirtd_qemu.aug |   1 +
 src/qemu/qemu.conf.in  |   5 +
 src/qemu/qemu_conf.c   |  24 
 src/qemu/qemu_conf.h   |   2 +
 src/qemu/qemu_dbus.c   |  42 ---
 src/qemu/qemu_dbus.h   |   4 +
 src/qemu/qemu_extdevice.c  | 171 ++---
 src/qemu/qemu_extdevice.h  |   3 +
 src/qemu/qemu_process.c|   9 ++
 src/qemu/qemu_security.c   |   4 +
 src/qemu/qemu_tpm.c|  91 +--
 src/qemu/qemu_tpm.h|  18 ++-
 src/qemu/qemu_vhost_user_gpu.c |   2 +-
 src/qemu/qemu_vhost_user_gpu.h |   8 ++
 src/qemu/qemu_virtiofs.c   |  41 ---
 src/qemu/qemu_virtiofs.h   |   5 +
 src/qemu/test_libvirtd_qemu.aug.in |   1 +
 src/util/vircommand.c  |  74 +
 src/util/vircommand.h  |   5 +
 src/util/virprocess.c  | 124 +
 src/util/virprocess.h  |   8 ++
 22 files changed, 538 insertions(+), 110 deletions(-)

-- 
2.35.1



[PATCH RFC 10/10] qemu: Place helper processes into the same trusted group

2022-05-09 Thread Michal Privoznik
Since the level of trust that QEMU has is the same level of trust
that helper processes have, there's no harm in placing all of them
into the same group.

Unfortunately, since these processes are started before QEMU we
can't use the brand new virCommand*() APIs (those are used on
hotplug though) and have to use the low level virProcess*() APIs.

Moreover, because there is no (kernel) API that would copy a cookie
from one process to another WITHOUT modifying the cookie of the
process that's doing the copy, we have to fork() and use the
available copy APIs.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_extdevice.c | 120 ++
 src/qemu/qemu_extdevice.h |   3 +
 src/qemu/qemu_process.c   |   4 ++
 3 files changed, 127 insertions(+)

diff --git a/src/qemu/qemu_extdevice.c b/src/qemu/qemu_extdevice.c
index 234815c075..611ea8d640 100644
--- a/src/qemu/qemu_extdevice.c
+++ b/src/qemu/qemu_extdevice.c
@@ -337,3 +337,123 @@ qemuExtDevicesSetupCgroup(virQEMUDriver *driver,
 
 return 0;
 }
+
+
+static int
+qemuExtDevicesSetupSchedHelper(pid_t ppid G_GNUC_UNUSED,
+   void *opaque)
+{
+GSList *pids = opaque;
+GSList *next;
+pid_t vmPid;
+
+/* The first item on the list is special: it's the PID of the
+ * QEMU that has the cookie we want to copy to the rest. */
+vmPid = GPOINTER_TO_INT(pids->data);
+if (virProcessSchedCoreShareFrom(vmPid) < 0) {
+virReportSystemError(errno,
+ _("Unable to get core group of: %lld"),
+ (long long) vmPid);
+return -1;
+}
+
+VIR_DEBUG("SCHED_CORE: vmPid = %lld", (long long) vmPid);
+
+for (next = pids->next; next; next = next->next) {
+pid_t pid = GPOINTER_TO_INT(next->data);
+
+VIR_DEBUG("SCHED_CORE: share to %lld", (long long) pid);
+if (virProcessSchedCoreShareTo(pid) < 0) {
+virReportSystemError(errno,
+ _("Unable to share core group to: %lld"),
+ (long long) pid);
+return -1;
+}
+}
+
+return 0;
+}
+
+
+int
+qemuExtDevicesSetupSched(virQEMUDriver *driver,
+ virDomainObj *vm)
+{
+g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+virDomainDef *def = vm->def;
+g_autofree char *shortname = NULL;
+g_autoptr(GSList) pids = NULL;
+size_t i;
+pid_t cpid = -1;
+
+if (cfg->schedCore == false)
+return 0;
+
+shortname = virDomainDefGetShortName(def);
+if (!shortname)
+return -1;
+
+if (qemuDBusGetPID(driver, vm, &cpid) < 0)
+return -1;
+
+if (cpid != -1)
+pids = g_slist_prepend(pids, GINT_TO_POINTER(cpid));
+
+for (i = 0; i < def->nvideos; i++) {
+virDomainVideoDef *video = def->videos[i];
+
+if (video->backend != VIR_DOMAIN_VIDEO_BACKEND_TYPE_VHOSTUSER)
+continue;
+
+if (qemuVhostUserGPUGetPid(cfg->stateDir, shortname,
+   video->info.alias, &cpid) < 0)
+return -1;
+
+if (cpid != -1)
+pids = g_slist_prepend(pids, GINT_TO_POINTER(cpid));
+}
+
+for (i = 0; i < def->nnets; i++) {
+virDomainNetDef *net = def->nets[i];
+qemuSlirp *slirp = QEMU_DOMAIN_NETWORK_PRIVATE(net)->slirp;
+
+if (slirp && slirp->pid != -1)
+pids = g_slist_prepend(pids, GINT_TO_POINTER(slirp->pid));
+}
+
+for (i = 0; i < def->ntpms; i++) {
+virDomainTPMDef *tpm = def->tpms[i];
+
+if (tpm->type != VIR_DOMAIN_TPM_TYPE_EMULATOR)
+continue;
+
+if (qemuTPMEmulatorGetPid(cfg->swtpmStateDir, shortname, &cpid) < 0)
+return -1;
+
+if (cpid != -1)
+pids = g_slist_prepend(pids, GINT_TO_POINTER(cpid));
+}
+
+for (i = 0; i < def->nfss; i++) {
+virDomainFSDef *fs = def->fss[i];
+
+if (fs->sock ||
+fs->fsdriver != VIR_DOMAIN_FS_DRIVER_TYPE_VIRTIOFS)
+continue;
+
+if (qemuVirtioFSGetPid(vm, fs, &cpid) < 0)
+return -1;
+
+if (cpid != -1)
+pids = g_slist_prepend(pids, GINT_TO_POINTER(cpid));
+}
+
+/* Exit early if there's nothing to do, to avoid needless fork. */
+if (!pids)
+return 0;
+
+pids = g_slist_prepend(pids, GINT_TO_POINTER(vm->pid));
+
+/* Unfortunately, there's no better way of copying scheduling
+ * cookies than fork(). */
+return virProcessRunInFork(qemuExtDevicesSetupSchedHelper, pids);
+}
diff --git a/src/qemu/qemu_extdevice.h b/src/qemu/qemu_extdevice.h
index 43d2a4dfff..02397adc6c 100644
--- a/src/qemu/qemu_extdevice.h
+++ b/src/qemu/qemu_extdevice.h
@@ -59,3 +59,6 @@ bool qemuExtDevicesHasDevice(virDomainDef *def);
 int qemuExtDevicesSetupCgroup(virQEMUDriver *driver,
   virDomainObj *vm,
   virCgroup *cgroup);
+
+int qemuExtDevicesSetupSched(virQEMUDriver *driver,
+  virDomainObj *vm);

[PATCH RFC 03/10] qemu_vhost_user_gpu: Export qemuVhostUserGPUGetPid()

2022-05-09 Thread Michal Privoznik
In the near future it will be necessary to know the PID of the
vhost-user-gpu process for QEMU. Export the function that does
just that (qemuVhostUserGPUGetPid()).

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_vhost_user_gpu.c | 2 +-
 src/qemu/qemu_vhost_user_gpu.h | 8 
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_vhost_user_gpu.c b/src/qemu/qemu_vhost_user_gpu.c
index 6f601cebde..d108566976 100644
--- a/src/qemu/qemu_vhost_user_gpu.c
+++ b/src/qemu/qemu_vhost_user_gpu.c
@@ -63,7 +63,7 @@ qemuVhostUserGPUCreatePidFilename(const char *stateDir,
  * If the PID was not still alive, zero will be returned, and @pid will be
  * set to -1;
  */
-static int
+int
 qemuVhostUserGPUGetPid(const char *stateDir,
const char *shortName,
const char *alias,
diff --git a/src/qemu/qemu_vhost_user_gpu.h b/src/qemu/qemu_vhost_user_gpu.h
index 0d50dd2464..bde7104af6 100644
--- a/src/qemu/qemu_vhost_user_gpu.h
+++ b/src/qemu/qemu_vhost_user_gpu.h
@@ -40,6 +40,14 @@ void qemuExtVhostUserGPUStop(virQEMUDriver *driver,
  virDomainVideoDef *video)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
 
+int
+qemuVhostUserGPUGetPid(const char *stateDir,
+   const char *shortName,
+   const char *alias,
+   pid_t *pid)
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3)
+G_GNUC_WARN_UNUSED_RESULT;
+
 int
 qemuExtVhostUserGPUSetupCgroup(virQEMUDriver *driver,
virDomainDef *def,
-- 
2.35.1



[PATCH RFC 08/10] qemu_conf: Introduce a knob to turn off SCHED_CORE

2022-05-09 Thread Michal Privoznik
Ideally, we would just pick the best default and users wouldn't
have to intervene at all. But in some cases it may be handy to
not bother with SCHED_CORE at all and thus let users turn the
feature off in qemu.conf.

Signed-off-by: Michal Privoznik 
---
 src/qemu/libvirtd_qemu.aug |  1 +
 src/qemu/qemu.conf.in  |  5 +
 src/qemu/qemu_conf.c   | 24 
 src/qemu/qemu_conf.h   |  2 ++
 src/qemu/test_libvirtd_qemu.aug.in |  1 +
 5 files changed, 33 insertions(+)

diff --git a/src/qemu/libvirtd_qemu.aug b/src/qemu/libvirtd_qemu.aug
index 0f18775121..28a8db2b43 100644
--- a/src/qemu/libvirtd_qemu.aug
+++ b/src/qemu/libvirtd_qemu.aug
@@ -110,6 +110,7 @@ module Libvirtd_qemu =
  | bool_entry "dump_guest_core"
  | str_entry "stdio_handler"
  | int_entry "max_threads_per_process"
+ | bool_entry "sched_core"
 
let device_entry = bool_entry "mac_filter"
  | bool_entry "relaxed_acs_check"
diff --git a/src/qemu/qemu.conf.in b/src/qemu/qemu.conf.in
index 04b7740136..ece822edc3 100644
--- a/src/qemu/qemu.conf.in
+++ b/src/qemu/qemu.conf.in
@@ -952,3 +952,8 @@
 # DO NOT use in production.
 #
 #deprecation_behavior = "none"
+
+# If this is set then QEMU and its threads will run with SCHED_CORE set,
+# meaning no other foreign process will share Hyper Threads of a single core
+# with QEMU nor with any of its helper processes.
+#sched_core = 1
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index c22cf79cbe..03d8da0157 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -286,6 +286,8 @@ virQEMUDriverConfig *virQEMUDriverConfigNew(bool privileged,
 
 cfg->deprecationBehavior = g_strdup("none");
 
+cfg->schedCore = virProcessSchedCoreAvailable() == 1;
+
return g_steal_pointer(&cfg);
 }
 
@@ -634,6 +636,8 @@ virQEMUDriverConfigLoadProcessEntry(virQEMUDriverConfig 
*cfg,
 g_auto(GStrv) hugetlbfs = NULL;
 g_autofree char *stdioHandler = NULL;
 g_autofree char *corestr = NULL;
+bool schedCore;
+int rc;
 size_t i;
 
 if (virConfGetValueStringList(conf, "hugetlbfs_mount", true,
@@ -711,6 +715,26 @@ virQEMUDriverConfigLoadProcessEntry(virQEMUDriverConfig 
*cfg,
 }
 }
 
+if ((rc = virConfGetValueBool(conf, "sched_core", &schedCore)) < 0) {
+return -1;
+} else if (rc > 0) {
+if (schedCore) {
+int rv = virProcessSchedCoreAvailable();
+
+if (rv < 0) {
+virReportSystemError(errno, "%s",
+ _("Unable to detect SCHED_CORE"));
+return -1;
+} else if (rv == 0) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("SCHED_CORE not supported by kernel"));
+return -1;
+}
+}
+
+cfg->schedCore = schedCore;
+}
+
 return 0;
 }
 
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index c71a666aea..32899859c0 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -223,6 +223,8 @@ struct _virQEMUDriverConfig {
 char **capabilityfilters;
 
 char *deprecationBehavior;
+
+bool schedCore;
 };
 
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(virQEMUDriverConfig, virObjectUnref);
diff --git a/src/qemu/test_libvirtd_qemu.aug.in 
b/src/qemu/test_libvirtd_qemu.aug.in
index 757d21c33f..9f3f98d524 100644
--- a/src/qemu/test_libvirtd_qemu.aug.in
+++ b/src/qemu/test_libvirtd_qemu.aug.in
@@ -116,3 +116,4 @@ module Test_libvirtd_qemu =
 { "1" = "capname" }
 }
 { "deprecation_behavior" = "none" }
+{ "sched_core" = "1" }
-- 
2.35.1



[PATCH RFC 06/10] virprocess: Core Scheduling support

2022-05-09 Thread Michal Privoznik
Since its 5.14 release the Linux kernel allows userspace to
define trusted groups of processes/threads that can run on
sibling Hyper Threads (HT) at the same time. This is to mitigate
side channel attacks like L1TF or MDS. If there are no tasks to
fully utilize all HTs, then a HT will idle instead of running a
task from another (un-)trusted group.

At a low level, this is implemented by cookies (effectively a UL
value): processes in the same trusted group share the same cookie
and the cookie is unique to the group. There are four basic
operations:

1) PR_SCHED_CORE_GET -- get cookie of given PID,
2) PR_SCHED_CORE_CREATE -- create a new unique cookie for PID,
3) PR_SCHED_CORE_SHARE_TO -- push cookie of the caller onto
   another PID,
4) PR_SCHED_CORE_SHARE_FROM -- pull cookie of another PID into
   the caller.

Since the system where the code is built can be different from the
one where the code is run, let's provide declarations of some
values. It's not unusual for distros to ship older linux-headers
than the actual kernel.

Signed-off-by: Michal Privoznik 
---
 src/libvirt_private.syms |   4 ++
 src/util/virprocess.c| 124 +++
 src/util/virprocess.h|   8 +++
 3 files changed, 136 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 97bfca906b..252d7e029f 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -3129,6 +3129,10 @@ virProcessKillPainfullyDelay;
 virProcessNamespaceAvailable;
 virProcessRunInFork;
 virProcessRunInMountNamespace;
+virProcessSchedCoreAvailable;
+virProcessSchedCoreCreate;
+virProcessSchedCoreShareFrom;
+virProcessSchedCoreShareTo;
 virProcessSchedPolicyTypeFromString;
 virProcessSchedPolicyTypeToString;
 virProcessSetAffinity;
diff --git a/src/util/virprocess.c b/src/util/virprocess.c
index 36d7df050a..cd4f3fc7e7 100644
--- a/src/util/virprocess.c
+++ b/src/util/virprocess.c
@@ -57,6 +57,10 @@
 # include 
 #endif
 
+#if WITH_CAPNG
+# include 
+#endif
+
 #include "virprocess.h"
 #include "virerror.h"
 #include "viralloc.h"
@@ -1906,3 +1910,123 @@ virProcessGetSchedInfo(unsigned long long *cpuWait,
 return 0;
 }
 #endif /* __linux__ */
+
+#ifdef __linux__
+# ifndef PR_SCHED_CORE
+/* Copied from linux/prctl.h */
+#  define PR_SCHED_CORE 62
+#  define PR_SCHED_CORE_GET 0
+#  define PR_SCHED_CORE_CREATE  1 /* create unique core_sched cookie */
+#  define PR_SCHED_CORE_SHARE_TO2 /* push core_sched cookie to pid */
+#  define PR_SCHED_CORE_SHARE_FROM  3 /* pull core_sched cookie to pid */
+# endif
+
+/* Unfortunately, kernel-headers forgot to export these. */
+# ifndef PR_SCHED_CORE_SCOPE_THREAD
+#  define PR_SCHED_CORE_SCOPE_THREAD 0
+#  define PR_SCHED_CORE_SCOPE_THREAD_GROUP 1
+#  define PR_SCHED_CORE_SCOPE_PROCESS_GROUP 2
+# endif
+
+/**
+ * virProcessSchedCoreAvailable:
+ *
+ * Check whether the kernel supports Core Scheduling (CONFIG_SCHED_CORE), i.e. only
+ * a defined set of PIDs/TIDs can run on sibling Hyper Threads at the same
+ * time.
+ *
+ * Returns: 1 if Core Scheduling is available,
+ *  0 if Core Scheduling is NOT available,
+ * -1 otherwise.
+ */
+int
+virProcessSchedCoreAvailable(void)
+{
+unsigned long cookie = 0;
+int rc;
+
+/* Let's just see if we can get our own sched cookie, and if yes we can
+ * safely assume CONFIG_SCHED_CORE kernel is available. */
+rc = prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, 0,
+   PR_SCHED_CORE_SCOPE_THREAD, &cookie);
+
+return rc == 0 ? 1 : errno == EINVAL ? 0 : -1;
+}
+
+/**
+ * virProcessSchedCoreCreate:
+ *
+ * Creates a new trusted group for the caller process.
+ *
+ * Returns: 0 on success,
+ * -1 otherwise, with errno set.
+ */
+int
+virProcessSchedCoreCreate(void)
+{
+/* pid = 0 (3rd argument) means the calling process. */
+return prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
+ PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0);
+}
+
+/**
+ * virProcessSchedCoreShareFrom:
+ * @pid: PID to share group with
+ *
+ * Places the current caller process into the trusted group of @pid.
+ *
+ * Returns: 0 on success,
+ * -1 otherwise, with errno set.
+ */
+int
+virProcessSchedCoreShareFrom(pid_t pid)
+{
+return prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, pid,
+ PR_SCHED_CORE_SCOPE_THREAD, 0);
+}
+
+/**
+ * virProcessSchedCoreShareTo:
+ * @pid: PID to share group with
+ *
+ * Places foreign @pid into the trusted group of the current caller process.
+ *
+ * Returns: 0 on success,
+ * -1 otherwise, with errno set.
+ */
+int
+virProcessSchedCoreShareTo(pid_t pid)
+{
+return prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, pid,
+ PR_SCHED_CORE_SCOPE_THREAD, 0);
+}
+
+#else /* !__linux__ */
+
+int
+virProcessSchedCoreAvailable(void)
+{
+return 0;
+}
+
+int
+virProcessSchedCoreCreate(void)
+{
+errno = ENOSYS;
+return -1;
+}
+
+int
+virProcessSchedCoreShareFrom(pid_t pid G_GNUC_UNUSED)
+{
+errno = ENOSYS;
+   

[PATCH RFC 05/10] qemu_virtiofs: Separate PID read code into qemuVirtioFSGetPid

2022-05-09 Thread Michal Privoznik
In the near future it will be necessary to know the PID of virtiofsd
started for QEMU. Move the code into a separate function
(qemuVirtioFSGetPid()) and export it in the header file.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_virtiofs.c | 38 +-
 src/qemu/qemu_virtiofs.h |  5 +
 2 files changed, 30 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_virtiofs.c b/src/qemu/qemu_virtiofs.c
index 7e3324b017..b3a2d2990a 100644
--- a/src/qemu/qemu_virtiofs.c
+++ b/src/qemu/qemu_virtiofs.c
@@ -319,26 +319,38 @@ qemuVirtioFSStop(virQEMUDriver *driver G_GNUC_UNUSED,
 }
 
 
+
+int
+qemuVirtioFSGetPid(virDomainObj *vm,
+   virDomainFSDef *fs,
+   pid_t *pid)
+{
+g_autofree char *pidfile = NULL;
+int rc;
+
+if (!(pidfile = qemuVirtioFSCreatePidFilename(vm, fs->info.alias)))
+return -1;
+
+rc = virPidFileReadPathIfAlive(pidfile, pid, NULL);
+if (rc < 0 || *pid == (pid_t) -1) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("virtiofsd died unexpectedly"));
+return -1;
+}
+
+return 0;
+}
+
+
 int
 qemuVirtioFSSetupCgroup(virDomainObj *vm,
 virDomainFSDef *fs,
 virCgroup *cgroup)
 {
-g_autofree char *pidfile = NULL;
 pid_t pid = -1;
-int rc;
 
-if (!(pidfile = qemuVirtioFSCreatePidFilename(vm, fs->info.alias)))
-return -1;
-
-rc = virPidFileReadPathIfAlive(pidfile, &pid, NULL);
-if (rc < 0 || pid == (pid_t) -1) {
-virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
-   _("virtiofsd died unexpectedly"));
-return -1;
-}
-
-if (virCgroupAddProcess(cgroup, pid) < 0)
+if (qemuVirtioFSGetPid(vm, fs, &pid) < 0 ||
+virCgroupAddProcess(cgroup, pid) < 0)
 return -1;
 
 return 0;
diff --git a/src/qemu/qemu_virtiofs.h b/src/qemu/qemu_virtiofs.h
index 5463acef98..dd3fbfa555 100644
--- a/src/qemu/qemu_virtiofs.h
+++ b/src/qemu/qemu_virtiofs.h
@@ -35,6 +35,11 @@ qemuVirtioFSStop(virQEMUDriver *driver,
  virDomainObj *vm,
  virDomainFSDef *fs);
 
+int
+qemuVirtioFSGetPid(virDomainObj *vm,
+   virDomainFSDef *fs,
+   pid_t *pid);
+
 int
 qemuVirtioFSSetupCgroup(virDomainObj *vm,
 virDomainFSDef *fs,
-- 
2.35.1



[PATCH RFC 09/10] qemu: Enable SCHED_CORE for domains and helper processes

2022-05-09 Thread Michal Privoznik
Despite all mitigations, side channel attacks when two processes
run on two Hyper Threads of the same core are still possible.
Fortunately, the Linux kernel came up with a solution: userspace
can create so-called trusted groups, which are sets of processes,
and only processes of the same group can run on sibling Hyper
Threads. Of course, two processes of different groups can run on
different cores, because there's no known side channel attack
there. It's only Hyper Threads that are affected.

Having said that, it's a clear security win for users when
enabled for QEMU.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_process.c  | 5 +
 src/qemu/qemu_security.c | 4 
 src/qemu/qemu_virtiofs.c | 3 +++
 3 files changed, 12 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index b0b00eb0a2..0a49008124 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2923,6 +2923,9 @@ qemuProcessStartManagedPRDaemon(virDomainObj *vm)
  * qemu (so that it shares the same view of the system). */
 virCommandSetPreExecHook(cmd, qemuProcessStartPRDaemonHook, vm);
 
+if (cfg->schedCore && vm->pid != -1)
+virCommandSetRunAmong(cmd, vm->pid);
+
 if (virCommandRun(cmd, NULL) < 0)
 goto cleanup;
 
@@ -7472,6 +7475,8 @@ qemuProcessLaunch(virConnectPtr conn,
 virCommandSetMaxProcesses(cmd, cfg->maxProcesses);
 if (cfg->maxFiles > 0)
 virCommandSetMaxFiles(cmd, cfg->maxFiles);
+if (cfg->schedCore)
+virCommandSetRunAlone(cmd);
 
 /* In this case, however, zero means that core dumps should be
  * disabled, and so we always need to set the limit explicitly */
diff --git a/src/qemu/qemu_security.c b/src/qemu/qemu_security.c
index 3be1766764..0fe1555406 100644
--- a/src/qemu/qemu_security.c
+++ b/src/qemu/qemu_security.c
@@ -683,6 +683,8 @@ qemuSecurityCommandRun(virQEMUDriver *driver,
int *exitstatus,
int *cmdret)
 {
+g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+
 if (virSecurityManagerSetChildProcessLabel(driver->securityManager,
vm->def, cmd) < 0)
 return -1;
@@ -691,6 +693,8 @@ qemuSecurityCommandRun(virQEMUDriver *driver,
 virCommandSetUID(cmd, uid);
 if (gid != (gid_t) -1)
 virCommandSetGID(cmd, gid);
+if (cfg->schedCore && vm->pid != -1)
+virCommandSetRunAmong(cmd, vm->pid);
 
 if (virSecurityManagerPreFork(driver->securityManager) < 0)
 return -1;
diff --git a/src/qemu/qemu_virtiofs.c b/src/qemu/qemu_virtiofs.c
index b3a2d2990a..0a3548065f 100644
--- a/src/qemu/qemu_virtiofs.c
+++ b/src/qemu/qemu_virtiofs.c
@@ -248,6 +248,9 @@ qemuVirtioFSStart(virQEMUDriver *driver,
 virCommandNonblockingFDs(cmd);
 virCommandDaemonize(cmd);
 
+if (cfg->schedCore && vm->pid != -1)
+virCommandSetRunAmong(cmd, vm->pid);
+
 if (qemuExtDeviceLogCommand(driver, vm, cmd, "virtiofsd") < 0)
 goto error;
 
-- 
2.35.1



[PATCH RFC 07/10] virCommand: Introduce APIs for core scheduling

2022-05-09 Thread Michal Privoznik
There are two modes of core scheduling that are handy wrt
virCommand:

1) create a new trusted group when executing a virCommand

2) place a freshly executed virCommand into the trusted group of
   another process.

Therefore, implement these two new operations as new APIs:
virCommandSetRunAlone() and virCommandSetRunAmong(),
respectively.
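
A minimal sketch of the intended call pattern (hypothetical snippet;
patch 09/10 contains the real callers):

    #include "vircommand.h"

    static void
    exampleSchedCoreSetup(pid_t qemuPid)
    {
        /* QEMU itself gets a brand new trusted group... */
        g_autoptr(virCommand) qemuCmd = virCommandNew("/usr/bin/qemu-system-x86_64");
        virCommandSetRunAlone(qemuCmd);

        /* ...while a helper process joins the trusted group of the
         * already running QEMU identified by qemuPid. */
        g_autoptr(virCommand) helperCmd = virCommandNew("/usr/bin/swtpm");
        virCommandSetRunAmong(helperCmd, qemuPid);
    }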

Signed-off-by: Michal Privoznik 
---
 src/libvirt_private.syms |  2 ++
 src/util/vircommand.c| 74 
 src/util/vircommand.h|  5 +++
 3 files changed, 81 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 252d7e029f..8f2b789cee 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -2079,6 +2079,8 @@ virCommandSetOutputBuffer;
 virCommandSetOutputFD;
 virCommandSetPidFile;
 virCommandSetPreExecHook;
+virCommandSetRunAlone;
+virCommandSetRunAmong;
 virCommandSetSELinuxLabel;
 virCommandSetSendBuffer;
 virCommandSetUID;
diff --git a/src/util/vircommand.c b/src/util/vircommand.c
index 41cf552d7b..db20620f7c 100644
--- a/src/util/vircommand.c
+++ b/src/util/vircommand.c
@@ -148,6 +148,9 @@ struct _virCommand {
 #endif
 int mask;
 
+bool schedCore;
+pid_t schedCorePID;
+
 virCommandSendBuffer *sendBuffers;
 size_t numSendBuffers;
 };
@@ -434,6 +437,22 @@ virCommandHandshakeChild(virCommand *cmd)
 static int
 virExecCommon(virCommand *cmd, gid_t *groups, int ngroups)
 {
+/* Do this before dropping capabilities. */
+if (cmd->schedCore &&
+virProcessSchedCoreCreate() < 0) {
+virReportSystemError(errno, "%s",
+ _("Unable to set SCHED_CORE"));
+return -1;
+}
+
+if (cmd->schedCorePID >= 0 &&
+virProcessSchedCoreShareFrom(cmd->schedCorePID) < 0) {
+virReportSystemError(errno,
+ _("Unable to run among %llu"),
+ (unsigned long long) cmd->schedCorePID);
+return -1;
+}
+
 if (cmd->uid != (uid_t)-1 || cmd->gid != (gid_t)-1 ||
 cmd->capabilities || (cmd->flags & VIR_EXEC_CLEAR_CAPS)) {
 VIR_DEBUG("Setting child uid:gid to %d:%d with caps %llx",
@@ -964,6 +983,7 @@ virCommandNewArgs(const char *const*args)
 cmd->pid = -1;
 cmd->uid = -1;
 cmd->gid = -1;
+cmd->schedCorePID = -1;
 
 virCommandAddArgSet(cmd, args);
 
@@ -3437,3 +3457,57 @@ virCommandRunNul(virCommand *cmd G_GNUC_UNUSED,
 return -1;
 }
 #endif /* WIN32 */
+
+/**
+ * virCommandSetRunAlone:
+ *
+ * Create a new trusted group when running the command. In other words, the
+ * process won't be scheduled to run on a core together with processes from
+ * another, untrusted group.
+ */
+void
+virCommandSetRunAlone(virCommand *cmd)
+{
+if (virCommandHasError(cmd))
+return;
+
+if (cmd->schedCorePID >= 0) {
+/* Can't mix these two. */
+cmd->has_error = -1;
+VIR_DEBUG("cannot mix with virCommandSetRunAmong()");
+return;
+}
+
+cmd->schedCore = true;
+}
+
+/**
+ * virCommandSetRunAmong:
+ * @pid: pid from a trusted group
+ *
+ * When spawning the command place it into the trusted group of @pid so that
+ * these two processes can run on Hyper Threads of a single core at the same
+ * time.
+ */
+void
+virCommandSetRunAmong(virCommand *cmd,
+  pid_t pid)
+{
+if (virCommandHasError(cmd))
+return;
+
+if (cmd->schedCore) {
+/* Can't mix these two. */
+VIR_DEBUG("cannot mix with virCommandSetRunAlone()");
+cmd->has_error = -1;
+return;
+}
+
+if (pid < 0) {
+VIR_DEBUG("invalid pid value: %lld", (long long) pid);
+cmd->has_error = -1;
+return;
+}
+
+cmd->schedCorePID = pid;
+}
diff --git a/src/util/vircommand.h b/src/util/vircommand.h
index 600806a987..0b03ea005c 100644
--- a/src/util/vircommand.h
+++ b/src/util/vircommand.h
@@ -225,4 +225,9 @@ int virCommandRunNul(virCommand *cmd,
  virCommandRunNulFunc func,
  void *data);
 
+void virCommandSetRunAlone(virCommand *cmd);
+
+void virCommandSetRunAmong(virCommand *cmd,
+   pid_t pid);
+
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(virCommand, virCommandFree);
-- 
2.35.1



[PATCH RFC 04/10] qemu_tpm: Expose qemuTPMEmulatorGetPid()

2022-05-09 Thread Michal Privoznik
In the near future it will be necessary to know the PID of the swtpm
process for QEMU. Export the function that does just that
(qemuTPMEmulatorGetPid()).

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_tpm.c | 2 +-
 src/qemu/qemu_tpm.h | 7 +++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index 086780edcd..bf86f2fe39 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -143,7 +143,7 @@ qemuTPMEmulatorPidFileBuildPath(const char *swtpmStateDir,
  * If the PID was not still alive, zero will be returned, and @pid will be
  * set to -1;
  */
-static int
+int
 qemuTPMEmulatorGetPid(const char *swtpmStateDir,
   const char *shortName,
   pid_t *pid)
diff --git a/src/qemu/qemu_tpm.h b/src/qemu/qemu_tpm.h
index 9951f025a6..9f4d01f60b 100644
--- a/src/qemu/qemu_tpm.h
+++ b/src/qemu/qemu_tpm.h
@@ -50,6 +50,13 @@ void qemuExtTPMStop(virQEMUDriver *driver,
 virDomainObj *vm)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
 
+int qemuTPMEmulatorGetPid(const char *swtpmStateDir,
+  const char *shortName,
+  pid_t *pid)
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2)
+ATTRIBUTE_NONNULL(3)
+G_GNUC_WARN_UNUSED_RESULT;
+
 int qemuExtTPMSetupCgroup(virQEMUDriver *driver,
   virDomainDef *def,
   virCgroup *cgroup)
-- 
2.35.1



[PATCH RFC 02/10] qemu_dbus: Separate PID read code into qemuDBusGetPID

2022-05-09 Thread Michal Privoznik
In the near future it will be necessary to know the PID of the DBus
daemon started for QEMU. Move the code into a separate function
(qemuDBusGetPID()) and export it in the header file.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_dbus.c | 42 +-
 src/qemu/qemu_dbus.h |  4 
 2 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_dbus.c b/src/qemu/qemu_dbus.c
index 2ed8f8640d..0eae1aa2fe 100644
--- a/src/qemu/qemu_dbus.c
+++ b/src/qemu/qemu_dbus.c
@@ -146,28 +146,44 @@ qemuDBusStop(virQEMUDriver *driver,
 }
 
 
+int
+qemuDBusGetPID(virQEMUDriver *driver,
+   virDomainObj *vm,
+   pid_t *pid)
+{
+g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
+qemuDomainObjPrivate *priv = vm->privateData;
+g_autofree char *shortName = NULL;
+g_autofree char *pidfile = NULL;
+
+if (!priv->dbusDaemonRunning)
+return 0;
+
+if (!(shortName = virDomainDefGetShortName(vm->def)))
+return -1;
+pidfile = qemuDBusCreatePidFilename(cfg, shortName);
+if (virPidFileReadPath(pidfile, pid) < 0) {
+VIR_WARN("Unable to get DBus PID");
+return -1;
+}
+
+return 0;
+}
+
+
 int
 qemuDBusSetupCgroup(virQEMUDriver *driver,
 virDomainObj *vm,
 virCgroup *cgroup)
 {
-g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-qemuDomainObjPrivate *priv = vm->privateData;
-g_autofree char *shortName = NULL;
-g_autofree char *pidfile = NULL;
 pid_t cpid = -1;
 
-if (!priv->dbusDaemonRunning)
+if (qemuDBusGetPID(driver, vm, &cpid) < 0)
+return -1;
+
+if (cpid == -1)
 return 0;
 
-if (!(shortName = virDomainDefGetShortName(vm->def)))
-return -1;
-pidfile = qemuDBusCreatePidFilename(cfg, shortName);
-if (virPidFileReadPath(pidfile, &cpid) < 0) {
-VIR_WARN("Unable to get DBus PID");
-return -1;
-}
-
 return virCgroupAddProcess(cgroup, cpid);
 }
 
diff --git a/src/qemu/qemu_dbus.h b/src/qemu/qemu_dbus.h
index b27f38a591..a079976aa4 100644
--- a/src/qemu/qemu_dbus.h
+++ b/src/qemu/qemu_dbus.h
@@ -34,6 +34,10 @@ void qemuDBusVMStateAdd(virDomainObj *vm, const char *id);
 
 void qemuDBusVMStateRemove(virDomainObj *vm, const char *id);
 
+int qemuDBusGetPID(virQEMUDriver *driver,
+   virDomainObj *vm,
+   pid_t *pid);
+
 int qemuDBusSetupCgroup(virQEMUDriver *driver,
 virDomainObj *vm,
 virCgroup *cgroup);
-- 
2.35.1



[PATCH RFC 01/10] qemu_tpm: Make APIs work over a single virDomainTPMDef

2022-05-09 Thread Michal Privoznik
In qemu_extdevice.c lives code that handles helper daemons that
are required for some types of devices (e.g. virtiofsd,
vhost-user-gpu, swtpm, etc.). These devices have their own
handling code in separate files, with only a few very basic
functions exposed (e.g. for starting/stopping the helper process,
placing it into a given CGroup, etc.). And these functions all
work over a single device instance (virDomainVideoDef *,
virDomainFSDef *, etc.), except for the TPM handling code, which
takes virDomainDef * and iterates over the domain inside its
module.

Remove this oddness and make the qemuExtTPM*() functions look
closer to the rest of the code.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_extdevice.c | 51 --
 src/qemu/qemu_tpm.c   | 89 +++
 src/qemu/qemu_tpm.h   | 11 +++--
 3 files changed, 69 insertions(+), 82 deletions(-)

diff --git a/src/qemu/qemu_extdevice.c b/src/qemu/qemu_extdevice.c
index 537b130394..234815c075 100644
--- a/src/qemu/qemu_extdevice.c
+++ b/src/qemu/qemu_extdevice.c
@@ -73,8 +73,15 @@ static int
 qemuExtDevicesInitPaths(virQEMUDriver *driver,
 virDomainDef *def)
 {
-if (def->ntpms > 0)
-return qemuExtTPMInitPaths(driver, def);
+size_t i;
+
+for (i = 0; i < def->ntpms; i++) {
+virDomainTPMDef *tpm = def->tpms[i];
+
+if (tpm->type == VIR_DOMAIN_TPM_TYPE_EMULATOR &&
+qemuExtTPMInitPaths(driver, def, tpm) < 0)
+return -1;
+}
 
 return 0;
 }
@@ -135,9 +142,13 @@ qemuExtDevicesPrepareHost(virQEMUDriver *driver,
 if (qemuExtDevicesInitPaths(driver, def) < 0)
 return -1;
 
-if (def->ntpms > 0 &&
-qemuExtTPMPrepareHost(driver, def) < 0)
-return -1;
+for (i = 0; i < def->ntpms; i++) {
+virDomainTPMDef *tpm = def->tpms[i];
+
+if (tpm->type == VIR_DOMAIN_TPM_TYPE_EMULATOR &&
+qemuExtTPMPrepareHost(driver, def, tpm) < 0)
+return -1;
+}
 
 for (i = 0; i < def->nnets; i++) {
 virDomainNetDef *net = def->nets[i];
@@ -155,11 +166,14 @@ void
 qemuExtDevicesCleanupHost(virQEMUDriver *driver,
   virDomainDef *def)
 {
+size_t i;
+
 if (qemuExtDevicesInitPaths(driver, def) < 0)
 return;
 
-if (def->ntpms > 0)
-qemuExtTPMCleanupHost(def);
+for (i = 0; i < def->ntpms; i++) {
+qemuExtTPMCleanupHost(def->tpms[i]);
+}
 }
 
 
@@ -180,8 +194,13 @@ qemuExtDevicesStart(virQEMUDriver *driver,
 }
 }
 
-if (def->ntpms > 0 && qemuExtTPMStart(driver, vm, incomingMigration) < 0)
-return -1;
+for (i = 0; i < def->ntpms; i++) {
+virDomainTPMDef *tpm = def->tpms[i];
+
+if (tpm->type == VIR_DOMAIN_TPM_TYPE_EMULATOR &&
+qemuExtTPMStart(driver, vm, tpm, incomingMigration) < 0)
+return -1;
+}
 
 for (i = 0; i < def->nnets; i++) {
 virDomainNetDef *net = def->nets[i];
@@ -222,8 +241,10 @@ qemuExtDevicesStop(virQEMUDriver *driver,
 qemuExtVhostUserGPUStop(driver, vm, video);
 }
 
-if (def->ntpms > 0)
-qemuExtTPMStop(driver, vm);
+for (i = 0; i < def->ntpms; i++) {
+if (def->tpms[i]->type == VIR_DOMAIN_TPM_TYPE_EMULATOR)
+qemuExtTPMStop(driver, vm);
+}
 
 for (i = 0; i < def->nnets; i++) {
 virDomainNetDef *net = def->nets[i];
@@ -299,9 +320,11 @@ qemuExtDevicesSetupCgroup(virQEMUDriver *driver,
 return -1;
 }
 
-if (def->ntpms > 0 &&
-qemuExtTPMSetupCgroup(driver, def, cgroup) < 0)
-return -1;
+for (i = 0; i < def->ntpms; i++) {
+if (def->tpms[i]->type == VIR_DOMAIN_TPM_TYPE_EMULATOR &&
+qemuExtTPMSetupCgroup(driver, def, cgroup) < 0)
+return -1;
+}
 
 for (i = 0; i < def->nfss; i++) {
 virDomainFSDef *fs = def->fss[i];
diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index 56bccee128..086780edcd 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -971,86 +971,59 @@ qemuTPMEmulatorStart(virQEMUDriver *driver,
 
 int
 qemuExtTPMInitPaths(virQEMUDriver *driver,
-virDomainDef *def)
+virDomainDef *def,
+virDomainTPMDef *tpm)
 {
 g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
-size_t i;
 
-for (i = 0; i < def->ntpms; i++) {
-if (def->tpms[i]->type != VIR_DOMAIN_TPM_TYPE_EMULATOR)
-continue;
-
-return qemuTPMEmulatorInitPaths(def->tpms[i],
-cfg->swtpmStorageDir,
-cfg->swtpmLogDir,
-def->name,
-def->uuid);
-}
-
-return 0;
+return qemuTPMEmulatorInitPaths(tpm,
+cfg->swtpmStorageDir,
+cfg->swtpmLogDir,
+ 

Re: [PATCH] qemu_security: Drop qemuSecurityStartVhostUserGPU()

2022-05-09 Thread Ján Tomko

On a Monday in 2022, Michal Privoznik wrote:

There's no real difference between
qemuSecurityStartVhostUserGPU() and qemuSecurityCommandRun(). The
latter is used more frequently while the former has just one
user. Therefore, drop the less frequently used one.

Signed-off-by: Michal Privoznik 
---
src/qemu/qemu_security.c   | 40 --
src/qemu/qemu_security.h   |  6 -
src/qemu/qemu_vhost_user_gpu.c |  3 +--
3 files changed, 1 insertion(+), 48 deletions(-)



Reviewed-by: Ján Tomko 

Jano




[PATCH] qemu_security: Drop qemuSecurityStartVhostUserGPU()

2022-05-09 Thread Michal Privoznik
There's no real difference between
qemuSecurityStartVhostUserGPU() and qemuSecurityCommandRun(). The
latter is used more frequently while the former has just one
user. Therefore, drop the less frequently used one.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_security.c   | 40 --
 src/qemu/qemu_security.h   |  6 -
 src/qemu/qemu_vhost_user_gpu.c |  3 +--
 3 files changed, 1 insertion(+), 48 deletions(-)

diff --git a/src/qemu/qemu_security.c b/src/qemu/qemu_security.c
index 19d957dd4b..3be1766764 100644
--- a/src/qemu/qemu_security.c
+++ b/src/qemu/qemu_security.c
@@ -499,46 +499,6 @@ qemuSecurityRestoreNetdevLabel(virQEMUDriver *driver,
 }
 
 
-/*
- * qemuSecurityStartVhostUserGPU:
- *
- * @driver: the QEMU driver
- * @vm: the domain object
- * @cmd: the command to run
- * @existstatus: pointer to int returning exit status of process
- * @cmdret: pointer to int returning result of virCommandRun
- *
- * Start the vhost-user-gpu process with appropriate labels.
- * This function returns -1 on security setup error, 0 if all the
- * setup was done properly. In case the virCommand failed to run
- * 0 is returned but cmdret is set appropriately with the process
- * exitstatus also set.
- */
-int
-qemuSecurityStartVhostUserGPU(virQEMUDriver *driver,
-  virDomainObj *vm,
-  virCommand *cmd,
-  int *exitstatus,
-  int *cmdret)
-{
-if (virSecurityManagerSetChildProcessLabel(driver->securityManager,
-   vm->def, cmd) < 0)
-return -1;
-
-if (virSecurityManagerPreFork(driver->securityManager) < 0)
-return -1;
-
-*cmdret = virCommandRun(cmd, exitstatus);
-
-virSecurityManagerPostFork(driver->securityManager);
-
-if (*cmdret < 0)
-return -1;
-
-return 0;
-}
-
-
 /*
  * qemuSecurityStartTPMEmulator:
  *
diff --git a/src/qemu/qemu_security.h b/src/qemu/qemu_security.h
index 8b26ea3f99..eaf646f225 100644
--- a/src/qemu/qemu_security.h
+++ b/src/qemu/qemu_security.h
@@ -87,12 +87,6 @@ int qemuSecurityRestoreNetdevLabel(virQEMUDriver *driver,
virDomainObj *vm,
virDomainNetDef *net);
 
-int qemuSecurityStartVhostUserGPU(virQEMUDriver *driver,
-  virDomainObj *vm,
-  virCommand *cmd,
-  int *exitstatus,
-  int *cmdret);
-
 int qemuSecurityStartTPMEmulator(virQEMUDriver *driver,
  virDomainObj *vm,
  virCommand *cmd,
diff --git a/src/qemu/qemu_vhost_user_gpu.c b/src/qemu/qemu_vhost_user_gpu.c
index f7d444e851..6f601cebde 100644
--- a/src/qemu/qemu_vhost_user_gpu.c
+++ b/src/qemu/qemu_vhost_user_gpu.c
@@ -158,8 +158,7 @@ int qemuExtVhostUserGPUStart(virQEMUDriver *driver,
virCommandAddArgFormat(cmd, "--render-node=%s", video->accel->rendernode);
 }
 
-if (qemuSecurityStartVhostUserGPU(driver, vm, cmd,
-  &exitstatus, &cmdret) < 0)
+if (qemuSecurityCommandRun(driver, vm, cmd, -1, -1, &exitstatus, &cmdret) < 0)
 goto error;
 
 if (cmdret < 0 || exitstatus != 0) {
-- 
2.35.1



Re: [PATCH v2 1/1] tests: qemucapabilities: update ppc64 qemu caps for 7.0.0 release

2022-05-09 Thread Daniel Henrique Barboza




On 5/9/22 10:30, Andrea Bolognani wrote:

On Mon, May 09, 2022 at 07:27:57AM -0300, Daniel Henrique Barboza wrote:

On 5/9/22 07:00, Andrea Bolognani wrote:

Would you be okay with something like

There are no major changes since 7.0.0-rc2, but a few additional
features are enabled in this build.

? If so, I can amend the commit message and push the patch


Yes please, go ahead. Thanks!


Done.


I've installed the following packages in a Power9 running Fedora 35:

dnf install libusb-devel  libcap-ng-devel libssh-devel libpmem-devel \
libiscsi-devel libnfs-devel libseccomp-devel libseccomp-static \
liburing-devel libbpf-devel librbd-devel \
libcurl-devel libaio-devel \
egl-utils egl-wayland-devel \
virglrenderer-devel \
gtk+-devel spice-gtk3-devel \
fuse3-devel gtkglext-devel \
lzo-devel brlapi-devel snappy-devel


FYI you don't have to play a guessing game here, and you can just
look at the contents of

   tests/docker/dockerfiles/fedora.docker

to figure out what packages you need to install.


Noted.




Aside from that, in the end it's hard to distinguish between "this feature isn't
present in ppc64" versus "the host that generated the capabilities didn't have
the support installed" because it's the same thing from the qemucaps standpoint.


I was thinking more about the fact that the diff for
caps_7.0.0.ppc64.xml is massive, but really it should look like

   @@ -133,6 +133,7 @@
  
  
  
   +  
  
  
  
   @@ -193,6 +194,8 @@
  
  
  
   +  
   +  
  
  
  
   @@ -210,10 +213,10 @@
  
  
  
   -  6002092
   +  700
  0
  42900243
   -  v7.0.0-rc2
   +  v7.0.0
  ppc64
  
  

Those are the only meaningful changes to the file: everything else is
just CPU models and machine types being shuffled around.

The same is true for the replies file: there are some actual
differences, but the patch is made unnecessarily big by the fact that
commands like query-machines and qom-list-types are returning lists
that have very similar contents but are ordered differently.

There's an argument to be made for storing QEMU's output verbatim,
but I don't see why we wouldn't guarantee that at least the data we
produce is not affected by this? Specifically, we could list CPU
models and machine types in alphabetical order.


I agree that trying to check the differences between different capabilities
file isn't trivial. Alphabetical order is a good start.


Thanks,


Daniel








Re: [PATCH v2 1/1] tests: qemucapabilities: update ppc64 qemu caps for 7.0.0 release

2022-05-09 Thread Andrea Bolognani
On Mon, May 09, 2022 at 07:27:57AM -0300, Daniel Henrique Barboza wrote:
> On 5/9/22 07:00, Andrea Bolognani wrote:
> > Would you be okay with something like
> >
> >There are no major changes since 7.0.0-rc2, but a few additional
> >features are enabled in this build.
> >
> > ? If so, I can amend the commit message and push the patch
>
> Yes please, go ahead. Thanks!

Done.

> I've
> installed the following packages in a Power9 running Fedora35:
>
> dnf install libusb-devel  libcap-ng-devel libssh-devel libpmem-devel \
> libiscsi-devel libnfs-devel libseccomp-devel libseccomp-static \
> liburing-devel libbpf-devel librbd-devel \
> libcurl-devel libaio-devel \
> egl-utils egl-wayland-devel \
> virglrenderer-devel \
> gtk+-devel spice-gtk3-devel \
> fuse3-devel gtkglext-devel \
> lzo-devel brlapi-devel snappy-devel

FYI you don't have to play a guessing game here, and you can just
look at the contents of

  tests/docker/dockerfiles/fedora.docker

to figure out what packages you need to install.

> Aside from that, in the end it's hard to distinguish between "this feature 
> isn't
> present in ppc64" versus "the host that generated the capabilities didn't 
> have the
> support installed" because it's the same thing from the qemucaps standpoint.

I was thinking more about the fact that the diff for
caps_7.0.0.ppc64.xml is massive, but really it should look like

  @@ -133,6 +133,7 @@
 
 
 
  +  
 
 
 
  @@ -193,6 +194,8 @@
 
 
 
  +  
  +  
 
 
 
  @@ -210,10 +213,10 @@
 
 
 
  -  6002092
  +  700
 0
 42900243
  -  v7.0.0-rc2
  +  v7.0.0
 ppc64
 
 

Those are the only meaningful changes to the file: everything else is
just CPU models and machine types being shuffled around.

The same is true for the replies file: there are some actual
differences, but the patch is made unnecessarily big by the fact that
commands like query-machines and qom-list-types are returning lists
that have very similar contents but are ordered differently.

There's an argument to be made for storing QEMU's output verbatim,
but I don't see why we wouldn't guarantee that at least the data we
produce is not affected by this? Specifically, we could list CPU
models and machine types in alphabetical order.
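
For illustration, the kind of normalization suggested above (a
standalone sketch, not libvirt code — the real change would live in the
capability serialization paths):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmpstr(const void *a, const void *b)
    {
        return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    int main(void)
    {
        /* e.g. machine types as returned by query-machines, in whatever
         * order QEMU happens to emit them */
        const char *machines[] = { "pseries-7.0", "powernv9", "pseries-2.12" };
        size_t n = sizeof(machines) / sizeof(machines[0]);
        size_t i;

        /* sorting before serializing keeps regeneration diffs minimal */
        qsort(machines, n, sizeof(machines[0]), cmpstr);

        for (i = 0; i < n; i++)
            printf("%s\n", machines[i]);
        return 0;
    }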

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH RESEND] apibuild: Fix self.warning method call

2022-05-09 Thread Martin Kletzander

On Sat, May 07, 2022 at 09:17:31AM +0800, luzhipeng wrote:

The parameters of self.warning are inconsistent with its definition,
so fix it.

Signed-off-by: luzhipeng 


Reviewed-by: Martin Kletzander 

and pushed.


---
scripts/apibuild.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/apibuild.py b/scripts/apibuild.py
index 2a343208c5..23a66734ac 100755
--- a/scripts/apibuild.py
+++ b/scripts/apibuild.py
@@ -328,7 +328,7 @@ class index:
if type in type_map:
type_map[type][name] = d
else:
-self.warning("Unable to register type ", type)
+self.warning("Unable to register type %s" % type)

if name == debugsym and not quiet:
print("New symbol: %s" % (d))
--
2.34.0.windows.1







Re: [PATCH v2 1/1] tests: qemucapabilities: update ppc64 qemu caps for 7.0.0 release

2022-05-09 Thread Daniel Henrique Barboza




On 5/9/22 07:00, Andrea Bolognani wrote:

On Fri, May 06, 2022 at 04:54:22PM -0300, Daniel Henrique Barboza wrote:

No relevant changes since the last update from 7.0.0-rc2. Sending it so
we're sure that we don't need to worry about ppc64 caps for the 7.0.0
release anymore.


There are actually a few additional features available in this build:

   
   
   

Not sure how much of that is because of build configuration rather
than changes in QEMU. Probably all of it :)



I'd say that these are all build configuration related. I would be surprised
if people added 7.0.0 capabilities after 7.0.0-rc2.





Regardless, considering the above I would say that the commit message
is not accurate. Would you be okay with something like

   There are no major changes since 7.0.0-rc2, but a few additional
   features are enabled in this build.

? If so, I can amend the commit message and push the patch as

   Reviewed-by: Andrea Bolognani 



Yes please, go ahead. Thanks!




As a side note, it's unfortunate that this sort of small change gets
lost in the noise when updating capabilities. I wonder if anything
can be done about it.



We can add documentation about the host build configuration (i.e. packages
installed and QEMU build options) that was used in the most recent
qemucapabilities update. That way one can try to replicate a similar build
configuration and avoid these build differences.

E.g.: compared to the previous version, which was missing a lot of stuff,
for this one I've installed the following packages in a Power9 running
Fedora 35:


dnf install libusb-devel  libcap-ng-devel libssh-devel libpmem-devel \
libiscsi-devel libnfs-devel libseccomp-devel libseccomp-static \
liburing-devel libbpf-devel librbd-devel \
libcurl-devel libaio-devel \
egl-utils egl-wayland-devel \
virglrenderer-devel \
gtk+-devel spice-gtk3-devel \
fuse3-devel gtkglext-devel \
lzo-devel brlapi-devel snappy-devel


Aside from that, in the end it's hard to distinguish between "this feature
isn't present in ppc64" versus "the host that generated the capabilities
didn't have the support installed" because it's the same thing from the
qemucaps standpoint.








Re: [PATCH v2 1/1] tests: qemucapabilities: update ppc64 qemu caps for 7.0.0 release

2022-05-09 Thread Andrea Bolognani
On Fri, May 06, 2022 at 04:54:22PM -0300, Daniel Henrique Barboza wrote:
> No relevant changes since the last update from 7.0.0-rc2. Sending it so
> we're sure that we don't need to worry about ppc64 caps for the 7.0.0
> release anymore.

There are actually a few additional features available in this build:

  
  
  

Not sure how much of that is because of build configuration rather
than changes in QEMU. Probably all of it :)


Regardless, considering the above I would say that the commit message
is not accurate. Would you be okay with something like

  There are no major changes since 7.0.0-rc2, but a few additional
  features are enabled in this build.

? If so, I can amend the commit message and push the patch as

  Reviewed-by: Andrea Bolognani 


As a side note, it's unfortunate that this sort of small change gets
lost in the noise when updating capabilities. I wonder if anything
can be done about it.

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH 0/3] Add a retry procedure after failing to do post parsing

2022-05-09 Thread Daniel P . Berrangé
On Mon, May 09, 2022 at 10:22:56AM +0200, Peter Krempa wrote:
> On Mon, May 09, 2022 at 09:12:51 +0100, Daniel P. Berrangé wrote:
> > On Sat, May 07, 2022 at 05:40:13PM +0800, zhangjl02 wrote:
> > > Get the default emulator based on the guest's arch, and replace it in
> > > the domain's definition after domainPostParseDataAlloc's failure, then
> > > alloc again. This will solve the migration problem caused by a wrong
> > > qemu emulator location, especially when migrating from a host with
> > > qemu-kvm to a host without it.
> 
> [please primarily put justification of why you are doing something into
> the patches themselves. Apart from making it more obvious to reviewers
> it also records the justification in git once patches are committed]
> 
> > When you're migrating between hosts it is possible to provide libvirt an
> > updated XML doc at the time you initiate the migration. This allows you
> > to change any aspect that doesn't impact guest ABI, so you can provide
> > an updated emulator binary path at time of migration.
> 
> Actually there might be a problem with this. I've discussed this
> recently with Jirka.
> 
> Specifically the ABI stability check is done on the source of the
> migration. This means that the source has to actually parse and
> interpret the destination XML too if it's provided by the user.
> 
> Now if your source host doesn't have the qemu binary or doesn't have it
> in the path you have it on the destination this will fail.
> 
> I mentioned to Jirka that I think this is sub-optimal:
> 
> 1) see problem above
> 2) the post-parse callbacks might fill in different defaults e.g. if the
> destination qemu has different capabilities
> 3) if it were done on destination, the source portion of the XML can be
> parsed without post-parse callbacks as it comes actually from a live
> libvirt instance

Right, so we need to pass the XML to the dest, have its details
expanded, then send it back to the src for ABI checking. Or alternatively
just send the current live XML from src to dest, and let the dest do
the ABI checking in the Prepare step.

> So with the above, if they have a problem of qemu not being where they
> expect it, using the destination XML will not help.

Hmm, yes, annoying.

> 
> They might be able to use the hook script to filter it on the
> destination though:
> 
> https://www.libvirt.org/hooks.html#qemu-guest-migration
> 

With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [PATCH 0/3] Add a retry procedure after failing to do post parsing

2022-05-09 Thread Peter Krempa
On Mon, May 09, 2022 at 09:12:51 +0100, Daniel P. Berrangé wrote:
> On Sat, May 07, 2022 at 05:40:13PM +0800, zhangjl02 wrote:
> > Get the default emulator based on the guest's arch, and replace it in
> > the domain's definition after domainPostParseDataAlloc's failure, then
> > alloc again. This will solve the migration problem caused by a wrong
> > qemu emulator location, especially when migrating from a host with
> > qemu-kvm to a host without it.

[please primarily put justification of why you are doing something into
the patches themselves. Apart from making it more obvious to reviewers
it also records the justification in git once patches are committed]

> When you're migrating between hosts it is possible to provide libvirt an
> updated XML doc at the time you initiate the migration. This allows you
> to change any aspect that doesn't impact guest ABI, so you can provide
> an updated emulator binary path at time of migration.

Actually there might be a problem with this. I've discussed this
recently with Jirka.

Specifically the ABI stability check is done on the source of the
migration. This means that the source has to actually parse and
interpret the destination XML too if it's provided by the user.

Now if your source host doesn't have the qemu binary or doesn't have it
in the path you have it on the destination this will fail.

I mentioned to Jirka that I think this is sub-optimal:

1) see problem above
2) the post-parse callbacks might fill in different defaults e.g. if the
destination qemu has different capabilities
3) if it were done on destination, the source portion of the XML can be
parsed without post-parse callbacks as it comes actually from a live
libvirt instance

So with the above, if they have a problem of qemu not being where they
expect it, using the destination XML will not help.

They might be able to use the hook script to filter it on the
destination though:

https://www.libvirt.org/hooks.html#qemu-guest-migration



Re: [PATCH 0/3] Add a retry procedure after failing to do post parsing

2022-05-09 Thread Daniel P . Berrangé
On Sat, May 07, 2022 at 05:40:13PM +0800, zhangjl02 wrote:
> Get the default emulator based on the guest's arch, and replace it in the
> domain's definition after domainPostParseDataAlloc's failure, then alloc
> again. This will solve the migration failure caused by a wrong qemu
> emulator location, especially when migrating from a host with qemu-kvm
> to a host without it.

When you're migrating between hosts, it is possible to provide libvirt an
updated XML doc at the time you initiate the migration. This allows you
to change any aspect that doesn't impact guest ABI, so you can provide
an updated emulator binary path at the time of migration.
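
For example, a sketch (the host names and emulator path here are made
up) of how that looks with virsh:

  # Grab the live XML from the source host
  virsh dumpxml guest1 > guest1-dest.xml

  # Edit <emulator> in guest1-dest.xml to the path that is valid on the
  # destination (e.g. /usr/libexec/qemu-kvm), then pass the updated XML
  # along with the migration:
  virsh migrate --live guest1 qemu+ssh://dst.example.com/system \
      --xml guest1-dest.xml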

With regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



[PATCH] docs: apps: Add the app cockpit

2022-05-09 Thread Han Han
Signed-off-by: Han Han 
---
 docs/apps.rst | 4 
 1 file changed, 4 insertions(+)

diff --git a/docs/apps.rst b/docs/apps.rst
index a21e2249ea..d01ad33f37 100644
--- a/docs/apps.rst
+++ b/docs/apps.rst
@@ -331,6 +331,10 @@ Web applications
   Secrets
-  Create and launch VMs
-  Configure VMs with easy panels or go pro and edit the VM's XML
+`Cockpit <https://cockpit-project.org/>`__
+   Cockpit is a web-based graphical interface for servers. With
+   `cockpit-machines <https://github.com/cockpit-project/cockpit-machines>`__
+   it can create and manage virtual machines via libvirt.
 
 Other
 -----
-- 
2.36.0



Re: [PATCH 3/3] domain_conf: set default emulator into def if it fails to alloc

2022-05-09 Thread Peter Krempa
On Sat, May 07, 2022 at 17:40:16 +0800, zhangjl02 wrote:
> From: zhangjl02 
> 
> When the emulator is not found on the host, domainPostParseDataAlloc will
> return 1 and the domain will fail to start. Call
> domainPostParseDataDefEmulator to replace the emulator with the default
> one for the guest's arch, and try to alloc again after
> domainPostParseDataAlloc's failure. This will increase error tolerance
> if the emulator defined in the XML is not found on the host.

What you describe here is definitely not desired. When the user
specifies an emulator path in the XML we must not change it even if the
emulator is not present and we could theoretically use a different one.

This could mask problems and make users use a different binary without
even knowing about it.

If the user wishes to use the default, they can certainly omit the
emulator element.
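
E.g. a quick sketch (the guest name is a placeholder) of redefining a
guest without an explicit emulator, letting the post-parse code pick the
arch default:

  # Drop the <emulator> line so libvirt fills in the default for the arch
  virsh dumpxml guest1 | sed '/<emulator>/d' > guest1.xml
  virsh define guest1.xml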



Re: [PATCH v3 0/5] Introduce network backed NVRAM

2022-05-09 Thread Rohit Kumar

Ping.
Hi, requesting review on this patchset.
Thanks!

On 04/05/22 10:21 pm, Rohit Kumar wrote:

Libvirt domain XML currently allows only local file paths
to be used to specify an NVRAM disk. It should be
possible to support NVRAM disks on network storage, as
that would give the flexibility to start the VM on any host
without having to worry about where to get the latest
nvram image.

This series extends the NVRAM element to support hosting over
network-backed disks.
It achieves this by embedding a virStorageSource pointer for
nvram into _virDomainLoaderDef.

It introduces a 'type' attribute for the NVRAM element to
specify 'file' vs 'network' backed NVRAM.

XML with the new annotation:

<os>
  <nvram type='network'>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/0'>
      <host name='example.com' port='6322'/>
      <auth username='myname'>
        <secret type='iscsi' usage='mycluster_myname'/>
      </auth>
    </source>
  </nvram>
</os>

or

<os>
  <nvram type='network'>
    <source protocol='nbd' name='bar'>
      <host name='example.com' port='6000'/>
    </source>
  </nvram>
</os>

or

<os>
  <nvram type='file'>
    <source file='/var/lib/libvirt/qemu/nvram/guest_VARS.fd'/>
  </nvram>
</os>

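(As a quick sanity check of such configurations — the file path below is
illustrative — the XML can be run against the updated schema:)

  virt-xml-validate /path/to/guest.xml domain
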
Changes v1->v2:
  - Split the patch into smaller patches
  - Added unit test
  - Updated the doc
  - Addressed Peter's comment on v1 
(https://listman.redhat.com/archives/libvir-list/2022-March/229684.html)

Changes v2->v3:
  - Added authentication with 'iscsi' protocol unit test
  - Updated the validation logic
  - Addressed Peter's other comments on v2 patch
(https://listman.redhat.com/archives/libvir-list/2022-April/229971.html)
  


Rohit Kumar (5):
   Make NVRAM a virStorageSource type.
   Add support to parse/format/validate virStorageSource type NVRAM
   Update schema, docs, and validation logic to support network backed
 NVRAM
   Add unit tests for network backed NVRAM
   Add unit test to support new 'file' type NVRAM

  NEWS.rst  |   5 +
  docs/formatdomain.rst |  34 +-
  src/conf/domain_conf.c| 115 +++---
  src/conf/domain_conf.h|   3 +-
  src/conf/schemas/domaincommon.rng |  21 +++-
  src/qemu/qemu_cgroup.c|   3 +-
  src/qemu/qemu_command.c   |   2 +-
  src/qemu/qemu_domain.c|  23 +++-
  src/qemu/qemu_driver.c|   5 +-
  src/qemu/qemu_firmware.c  |  23 +++-
  src/qemu/qemu_namespace.c |   5 +-
  src/qemu/qemu_process.c   |   5 +-
  src/qemu/qemu_validate.c  |  71 +++
  src/security/security_dac.c   |   6 +-
  src/security/security_selinux.c   |   6 +-
  src/security/virt-aa-helper.c |   5 +-
  src/vbox/vbox_common.c|   3 +-
  .../bios-nvram-file.x86_64-latest.args|  37 ++
  tests/qemuxml2argvdata/bios-nvram-file.xml|  23 
  .../bios-nvram-network-iscsi.x86_64-4.1.0.err |   1 +
  ...ios-nvram-network-iscsi.x86_64-latest.args |  38 ++
  .../bios-nvram-network-iscsi.xml  |  31 +
  .../bios-nvram-network-nbd.x86_64-latest.args |  37 ++
  .../bios-nvram-network-nbd.xml|  28 +
  tests/qemuxml2argvtest.c  |   4 +
  .../bios-nvram-file.x86_64-latest.xml |  39 ++
  ...bios-nvram-network-iscsi.x86_64-latest.xml |  44 +++
  .../bios-nvram-network-nbd.x86_64-latest.xml  |  41 +++
  tests/qemuxml2xmltest.c   |   3 +
  29 files changed, 618 insertions(+), 43 deletions(-)
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-file.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-file.xml
  create mode 100644 
tests/qemuxml2argvdata/bios-nvram-network-iscsi.x86_64-4.1.0.err
  create mode 100644 
tests/qemuxml2argvdata/bios-nvram-network-iscsi.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-network-iscsi.xml
  create mode 100644 
tests/qemuxml2argvdata/bios-nvram-network-nbd.x86_64-latest.args
  create mode 100644 tests/qemuxml2argvdata/bios-nvram-network-nbd.xml
  create mode 100644 tests/qemuxml2xmloutdata/bios-nvram-file.x86_64-latest.xml
  create mode 100644 
tests/qemuxml2xmloutdata/bios-nvram-network-iscsi.x86_64-latest.xml
  create mode 100644 
tests/qemuxml2xmloutdata/bios-nvram-network-nbd.x86_64-latest.xml





Re: [libvirt RFCv8 12/27] qemu: capabilities: add multifd to the probed migration capabilities

2022-05-09 Thread Ani Sinha
QEMU folks,
It seems we do officially support multifd from version 4.0:

commit cbfd6c957a4437d4759ca660e621daa381bf2898
Author: Juan Quintela 
Date:   Wed Feb 6 13:54:06 2019 +0100

multifd: Drop x-

We make it supported from now on.

Reviewed-by: Dr. David Alan Gilbert 
Reviewed-by: Markus Armbruster 
Signed-off-by: Juan Quintela 

$ git tag --contains cbfd6c957a4437d4759ca660e621daa381bf2898 | sort
-V | grep -v list | head -1
v4.0.0

Yet it seems we continue to prefix the migration property with "x-"
(x-multifd). This property was added here and we have continued to use
it as-is:

commit 30126bbf1f7fcad0bf4c65b01a21ff22a36a9759
Author: Juan Quintela 
Date:   Thu Jan 14 12:23:00 2016 +0100

migration: Add multifd capability

Signed-off-by: Juan Quintela 
Reviewed-by: Dr. David Alan Gilbert 
Reviewed-by: Peter Xu 
Reviewed-by: Daniel P. Berrange 

Can anyone explain why?
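
(For reference, one quick way to see what a given QEMU binary actually
advertises — the guest name below is a placeholder:)

  # Query the migration capabilities over QMP; on QEMU >= 4.0 the list
  # should contain "multifd" without the "x-" prefix.
  virsh qemu-monitor-command guest1 --pretty \
      '{"execute": "query-migrate-capabilities"}'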

On Sat, May 7, 2022 at 7:13 PM Claudio Fontana  wrote:
>
> Signed-off-by: Claudio Fontana 

other than the question above,

Reviewed-by: Ani Sinha 

> ---
>  src/qemu/qemu_capabilities.c  | 4 
>  src/qemu/qemu_capabilities.h  | 3 +++
>  tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml | 1 +
>  tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml  | 1 +
>  tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml | 1 +
>  tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml   | 1 +
>  tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml  | 1 +
>  34 files changed, 39 insertions(+)
>
> diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> index 1ed4cda7f0..581b6a40df 100644
> --- a/src/qemu/qemu_capabilities.c
> +++ b/src/qemu/qemu_capabilities.c
> @@ -672,6 +672,9 @@ VIR_ENUM_IMPL(virQEMUCaps,
>"virtio-iommu-pci", /* QEMU_CAPS_DEVICE_VIRTIO_IOMMU_PCI */
>"virtio-iommu.boot-bypass", /* 
> QEMU_CAPS_VIRTIO_IOMMU_BOOT_BYPASS */
>"virtio-net.rss", /* QEMU_CAPS_VIRTIO_NET_RSS */
> +
> +  /* 430 */
> +  "migrate-multifd", /* QEMU_CAPS_MIGRATE_MULTIFD */
>  );
>
>
> @@ -1230,6 +1233,7 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = {
>
>  struct virQEMUCapsStringFlags virQEMUCapsMigration[] = {
>  { "rdma-pin-all", QEMU_CAPS_MIGRATE_RDMA },
> +{ "multifd", QEMU_CAPS_MIGRATE_MULTIFD },
>  };
>
>  /* Use virQEMUCapsQMPSchemaQueries for querying parameters of events */
> diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
> index 9b240e47fb..b089f83da1 100644
> --- a/src/qemu/qemu_capabilities.h
> +++ b/src/qemu/qemu_capabilities.h
> @@ -648,6 +648,9 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
> syntax-check */
>  QEMU_CAPS_VIRTIO_IOMMU_BOOT_BYPASS, /* virtio-iommu.boot-bypass */
>  QEMU_CAPS_VIRTIO_NET_RSS, /* virtio-net rss feature */
>
> +/* 430 */
> +QEMU_CAPS_MIGRATE_MULTIFD, /* migrate can set multifd parameter */
> +
>  QEMU_CAPS_LAST /* this must always be the last item */
>  } virQEMUCapsFlags;
>
> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml 
> b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
> index 5adf904fc4..4ca2cfa81c 100644
> --- a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
> +++ b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml
> @@ -148,6 +148,7 @@
>
>
>
> +  <flag name='migrate-multifd'/>
>400
>0
>61700240
> diff