[PATCH v3 5/6] qemu: tpm: Avoid security labels on incoming migration with shared storage
When using shared storage there is no need to apply security labels on
the storage since the files must already have been labeled on the source
side, and we must assume that the source and destination sides have been
set up to use the same uid and gid for running swtpm as well as to share
the same security labels. Whether security labels can be used at all
depends on the shared storage and on whether and how it supports them.

Signed-off-by: Stefan Berger

---
 src/qemu/qemu_tpm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index 7b0afe94ec..69410e36ff 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -933,10 +933,18 @@ qemuTPMEmulatorStart(virQEMUDriver *driver,
     virCommandSetPidFile(cmd, pidfile);
     virCommandSetErrorFD(cmd, &errfd);

-    if (qemuSecurityStartTPMEmulator(driver, vm, cmd,
-                                     cfg->swtpm_user, cfg->swtpm_group,
-                                     NULL, &cmdret) < 0)
-        return -1;
+    if (incomingMigration && virFileIsSharedFS(tpm->data.emulator.storagepath)) {
+        /* security labels must have been set up on source already */
+        if (qemuSecurityCommandRun(driver, vm, cmd,
+                                   cfg->swtpm_user, cfg->swtpm_group,
+                                   NULL, &cmdret) < 0) {
+            goto error;
+        }
+    } else if (qemuSecurityStartTPMEmulator(driver, vm, cmd,
+                                            cfg->swtpm_user, cfg->swtpm_group,
+                                            NULL, &cmdret) < 0) {
+        goto error;
+    }

     if (cmdret < 0) {
         /* virCommandRun() hidden in qemuSecurityStartTPMEmulator()
-- 
2.37.3
[PATCH v3 4/6] qemu: tpm: Pass --migration option to swtpm if supported and needed
Pass the --migration option to swtpm if swtpm supports it (starting with
v0.8) and if the TPM's state is written to shared storage. If this is
the case, apply the 'release-lock-outgoing' parameter with this option,
and add the 'incoming' parameter for incoming migration, so that swtpm
releases the file lock on the source side when the state is migrated and
locks the file on the destination side when the state is received.

If a started swtpm instance is running with the options necessary for
migrating with shared storage, remember this with a flag in the
virDomainTPMPrivateDef.

Report an error if swtpm does not support the --migration option and an
incoming migration across shared storage is requested.

Signed-off-by: Stefan Berger

---
 src/qemu/qemu_migration.c |  8 +
 src/qemu/qemu_tpm.c       | 66 ++-
 src/qemu/qemu_tpm.h       |  6
 3 files changed, 79 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 33105cf07b..5b4f4615ee 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -38,6 +38,7 @@
 #include "qemu_security.h"
 #include "qemu_slirp.h"
 #include "qemu_block.h"
+#include "qemu_tpm.h"
 #include "domain_audit.h"
 #include "virlog.h"
@@ -2789,6 +2790,13 @@ qemuMigrationSrcBegin(virConnectPtr conn,
         goto cleanup;
     }

+    if (qemuTPMHasSharedStorage(vm->def) &&
+        !qemuTPMCanMigrateSharedStorage(vm->def)) {
+        virReportError(VIR_ERR_NO_SUPPORT, "%s",
+                       _("the running swtpm does not support migration with shared storage"));
+        goto cleanup;
+    }
+
     if (flags & VIR_MIGRATE_POSTCOPY_RESUME) {
         ret = qemuMigrationSrcBeginResumePhase(conn, driver, vm, xmlin,
                                                cookieout, cookieoutlen, flags);
diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index a45ad599aa..7b0afe94ec 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -557,6 +557,7 @@ qemuTPMEmulatorBuildCommand(virDomainTPMDef *tpm,
     int migpwdfile_fd = -1;
     const unsigned char *secretuuid = NULL;
     bool create_storage = true;
+    bool on_shared_storage;

     if (!swtpm)
         return NULL;
@@ -564,7 +565,8 @@ qemuTPMEmulatorBuildCommand(virDomainTPMDef *tpm,
     /* Do not create storage and run swtpm_setup on incoming migration over
      * shared storage
      */
-    if (incomingMigration && virFileIsSharedFS(tpm->data.emulator.storagepath))
+    on_shared_storage = virFileIsSharedFS(tpm->data.emulator.storagepath);
+    if (incomingMigration && on_shared_storage)
         create_storage = false;

     if (create_storage &&
@@ -642,6 +644,31 @@ qemuTPMEmulatorBuildCommand(virDomainTPMDef *tpm,
         virCommandAddArgFormat(cmd, "pwdfd=%d,mode=aes-256-cbc", migpwdfile_fd);
     }

+    /* If swtpm supports it and the TPM state is stored on shared storage,
+     * start swtpm with --migration release-lock-outgoing so it can migrate
+     * across shared storage if needed.
+     */
+    QEMU_DOMAIN_TPM_PRIVATE(tpm)->swtpm.can_migrate_shared_storage = false;
+    if (on_shared_storage &&
+        virTPMSwtpmCapsGet(VIR_TPM_SWTPM_FEATURE_CMDARG_MIGRATION)) {
+
+        virCommandAddArg(cmd, "--migration");
+        virCommandAddArgFormat(cmd, "release-lock-outgoing%s",
+                               incomingMigration ? ",incoming": "");
+        QEMU_DOMAIN_TPM_PRIVATE(tpm)->swtpm.can_migrate_shared_storage = true;
+    } else {
+        /* Report an error if there's an incoming migration across shared
+         * storage and swtpm does not support the --migration option.
+         */
+        if (incomingMigration && on_shared_storage) {
+            virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+                           _("%s (on destination side) does not support the --migration option "
+                             "needed for migration with shared storage"),
+                           swtpm);
+            goto error;
+        }
+    }
+
     return g_steal_pointer(&cmd);

 error:
@@ -962,6 +989,43 @@ qemuTPMEmulatorStart(virQEMUDriver *driver,
 }


+bool
+qemuTPMHasSharedStorage(virDomainDef *def)
+{
+    size_t i;
+
+    for (i = 0; i < def->ntpms; i++) {
+        virDomainTPMDef *tpm = def->tpms[i];
+
+        switch (tpm->type) {
+        case VIR_DOMAIN_TPM_TYPE_EMULATOR:
+            return virFileIsSharedFS(tpm->data.emulator.storagepath);
+        case VIR_DOMAIN_TPM_TYPE_PASSTHROUGH:
+        case VIR_DOMAIN_TPM_TYPE_LAST:
+            break;
+        }
+    }
+
+    return false;
+}
+
+
+bool
+qemuTPMCanMigrateSharedStorage(virDomainDef *def)
+{
+    size_t i;
+
+    for (i = 0; i < def->ntpms; i++) {
+        virDomainTPMDef *tpm = def->tpms[i];
+
+        switch (tpm->type) {
+        case VIR_DOMAIN_TPM_TYPE_EMULATOR:
+            return QEMU_DOMAIN_TPM_PRIVATE(tpm)->swtpm.can_migrate_shared_storage;
+        case VIR_DOMAIN_TPM_TYPE_PASSTHROUGH:
+        case VIR_DOMAIN_TPM_TYPE_LAST:
+            break;
+        }
+    }
+
[PATCH v3 3/6] qemu: tpm: Add support for storing private TPM-related data
Add support for storing private TPM-related data. The first private
data will be related to the capability of the started swtpm, indicating
whether it is capable of migration with a shared storage setup, since
that requires support for certain command line flags that only became
available in v0.8.

Signed-off-by: Stefan Berger

---
 src/conf/domain_conf.c | 63 +---
 src/conf/domain_conf.h |  9 ++
 src/qemu/qemu_domain.c | 73 ++
 src/qemu/qemu_domain.h | 14
 4 files changed, 154 insertions(+), 5 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 7dba65cfeb..4178583950 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -3276,6 +3276,22 @@ void virDomainHostdevDefClear(virDomainHostdevDef *def)
     }
 }

+static virDomainTPMDef *
+virDomainTPMDefNew(virDomainXMLOption *xmlopt)
+{
+    virDomainTPMDef *def;
+
+    def = g_new0(virDomainTPMDef, 1);
+
+    if (xmlopt && xmlopt->privateData.tpmNew &&
+        !(def->privateData = xmlopt->privateData.tpmNew())) {
+        VIR_FREE(def);
+        return NULL;
+    }
+
+    return def;
+}
+
 void virDomainTPMDefFree(virDomainTPMDef *def)
 {
     if (!def)
@@ -3296,6 +3312,7 @@ void virDomainTPMDefFree(virDomainTPMDef *def)
     }

     virDomainDeviceInfoClear(&def->info);
+    virObjectUnref(def->privateData);
     g_free(def);
 }

@@ -10238,7 +10255,8 @@ virDomainTPMDefParseXML(virDomainXMLOption *xmlopt,
     g_autofree xmlNodePtr *nodes = NULL;
     int bank;

-    def = g_new0(virDomainTPMDef, 1);
+    if (!(def = virDomainTPMDefNew(xmlopt)))
+        return NULL;

     if (virXMLPropEnum(node, "model",
                        virDomainTPMModelTypeFromString,
@@ -10329,6 +10347,14 @@ virDomainTPMDefParseXML(virDomainXMLOption *xmlopt,
     if (virDomainDeviceInfoParseXML(xmlopt, node, ctxt, &def->info, flags) < 0)
         goto error;

+    if (flags & VIR_DOMAIN_DEF_PARSE_STATUS &&
+        xmlopt && xmlopt->privateData.tpmParse) {
+        if ((ctxt->node = virXPathNode("./privateData", ctxt))) {
+            if (xmlopt->privateData.tpmParse(ctxt, def) < 0)
+                goto error;
+        }
+    }
+
     return def;

 error:
@@ -24049,10 +24075,32 @@ virDomainSoundCodecDefFormat(virBuffer *buf,
     return 0;
 }

-static void
+static int
+virDomainTPMDefFormatPrivateData(virBuffer *buf,
+                                 const virDomainTPMDef *tpm,
+                                 unsigned int flags,
+                                 virDomainXMLOption *xmlopt)
+{
+    g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
+
+    if (!(flags & VIR_DOMAIN_DEF_FORMAT_STATUS) ||
+        !xmlopt ||
+        !xmlopt->privateData.tpmFormat)
+        return 0;
+
+    if (xmlopt->privateData.tpmFormat(tpm, &childBuf) < 0)
+        return -1;
+
+    virXMLFormatElement(buf, "privateData", NULL, &childBuf);
+    return 0;
+}
+
+
+static int
 virDomainTPMDefFormat(virBuffer *buf,
                       const virDomainTPMDef *def,
-                      unsigned int flags)
+                      unsigned int flags,
+                      virDomainXMLOption *xmlopt)
 {
     g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER;
     g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
@@ -24101,8 +24149,12 @@ virDomainTPMDefFormat(virBuffer *buf,
     virXMLFormatElement(&childBuf, "backend", &backendAttrBuf, &backendChildBuf);

     virDomainDeviceInfoFormat(&childBuf, &def->info, flags);
+    if (virDomainTPMDefFormatPrivateData(&childBuf, def, flags, xmlopt) < 0)
+        return -1;

     virXMLFormatElement(buf, "tpm", &attrBuf, &childBuf);
+
+    return 0;
 }

@@ -27188,7 +27240,8 @@ virDomainDefFormatInternalSetRootName(virDomainDef *def,
     }

     for (n = 0; n < def->ntpms; n++) {
-        virDomainTPMDefFormat(buf, def->tpms[n], flags);
+        if (virDomainTPMDefFormat(buf, def->tpms[n], flags, xmlopt) < 0)
+            return -1;
     }

     for (n = 0; n < def->ngraphics; n++) {
@@ -28454,7 +28507,7 @@ virDomainDeviceDefCopy(virDomainDeviceDef *src,
         rc = virDomainChrDefFormat(&buf, src->data.chr, flags);
         break;
     case VIR_DOMAIN_DEVICE_TPM:
-        virDomainTPMDefFormat(&buf, src->data.tpm, flags);
+        virDomainTPMDefFormat(&buf, src->data.tpm, flags, xmlopt);
         rc = 0;
         break;
     case VIR_DOMAIN_DEVICE_PANIC:
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 8f8a54bc41..82f71f8853 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1445,6 +1445,8 @@ typedef enum {
 #define VIR_DOMAIN_TPM_DEFAULT_DEVICE "/dev/tpm0"

 struct _virDomainTPMDef {
+    virObject *privateData;
+
     virDomainTPMModel model;
     virDomainTPMBackendType type;
     virDomainDeviceInfo info;
@@ -3248,6 +3250,10 @@ typedef int (*virDomainXMLPrivateDataStorageSourceParseFunc)(xmlXPathContextPtr
 typedef int (*virDomainXMLPrivateDataStorageSourceFormatFunc)(virStorageSource *src,
                                                               virBuffer *buf);
[PATCH v3 6/6] qemu: tpm: Never remove state on outgoing migration and shared storage
Never remove the TPM state on outgoing migration if the storage setup
has shared storage for the TPM state files. Also, do not do the security
cleanup on outgoing migration if shared storage is detected.

Signed-off-by: Stefan Berger

---
 src/qemu/qemu_domain.c    | 12 +++-
 src/qemu/qemu_domain.h    |  3 ++-
 src/qemu/qemu_driver.c    | 20 ++--
 src/qemu/qemu_extdevice.c | 10 ++
 src/qemu/qemu_extdevice.h |  6 --
 src/qemu/qemu_migration.c | 12 ++--
 src/qemu/qemu_process.c   |  9 ++---
 src/qemu/qemu_snapshot.c  |  4 ++--
 src/qemu/qemu_tpm.c       | 21 -
 src/qemu/qemu_tpm.h       |  6 --
 10 files changed, 63 insertions(+), 40 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 97c62e2c9e..20cc2409fc 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7245,7 +7245,8 @@ qemuDomainSnapshotDiscardAllMetadata(virQEMUDriver *driver,
 static void
 qemuDomainRemoveInactiveCommon(virQEMUDriver *driver,
                                virDomainObj *vm,
-                               virDomainUndefineFlagsValues flags)
+                               virDomainUndefineFlagsValues flags,
+                               bool outgoingMigration)
 {
     g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
     g_autofree char *snapDir = NULL;
@@ -7271,7 +7272,7 @@ qemuDomainRemoveInactiveCommon(virQEMUDriver *driver,
         if (rmdir(chkDir) < 0 && errno != ENOENT)
             VIR_WARN("unable to remove checkpoint directory %s", chkDir);
     }
-    qemuExtDevicesCleanupHost(driver, vm->def, flags);
+    qemuExtDevicesCleanupHost(driver, vm->def, flags, outgoingMigration);
 }

@@ -7283,14 +7284,15 @@
 void
 qemuDomainRemoveInactive(virQEMUDriver *driver,
                          virDomainObj *vm,
-                         virDomainUndefineFlagsValues flags)
+                         virDomainUndefineFlagsValues flags,
+                         bool outgoingMigration)
 {
     if (vm->persistent) {
         /* Short-circuit, we don't want to remove a persistent domain */
         return;
     }

-    qemuDomainRemoveInactiveCommon(driver, vm, flags);
+    qemuDomainRemoveInactiveCommon(driver, vm, flags, outgoingMigration);

     virDomainObjListRemove(driver->domains, vm);
 }
@@ -7312,7 +7314,7 @@ qemuDomainRemoveInactiveLocked(virQEMUDriver *driver,
         return;
     }

-    qemuDomainRemoveInactiveCommon(driver, vm, 0);
+    qemuDomainRemoveInactiveCommon(driver, vm, 0, false);

     virDomainObjListRemoveLocked(driver->domains, vm);
 }
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index e7d3e1be40..11ea52c32d 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -696,7 +696,8 @@ int qemuDomainSnapshotDiscardAllMetadata(virQEMUDriver *driver,
 void qemuDomainRemoveInactive(virQEMUDriver *driver,
                               virDomainObj *vm,
-                              virDomainUndefineFlagsValues flags);
+                              virDomainUndefineFlagsValues flags,
+                              bool outgoingMigration);

 void
 qemuDomainRemoveInactiveLocked(virQEMUDriver *driver,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5c75000742..017cda2a9c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1611,7 +1611,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
         goto cleanup;

     if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_START, flags) < 0) {
-        qemuDomainRemoveInactive(driver, vm, 0);
+        qemuDomainRemoveInactive(driver, vm, 0, false);
         goto cleanup;
     }

@@ -1620,7 +1620,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
                              VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
                              start_flags) < 0) {
         virDomainAuditStart(vm, "booted", false);
-        qemuDomainRemoveInactive(driver, vm, 0);
+        qemuDomainRemoveInactive(driver, vm, 0, false);
         qemuProcessEndJob(vm);
         goto cleanup;
     }
@@ -2103,7 +2103,7 @@ qemuDomainDestroyFlags(virDomainPtr dom,
     ret = 0;
 endjob:
     if (ret == 0)
-        qemuDomainRemoveInactive(driver, vm, 0);
+        qemuDomainRemoveInactive(driver, vm, 0, false);
     virDomainObjEndJob(vm);

 cleanup:
@@ -2723,7 +2723,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver,
     }
     virDomainObjEndAsyncJob(vm);
     if (ret == 0)
-        qemuDomainRemoveInactive(driver, vm, 0);
+        qemuDomainRemoveInactive(driver, vm, 0, false);

 cleanup:
     virQEMUSaveDataFree(data);
@@ -3263,7 +3263,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
     virDomainObjEndAsyncJob(vm);
     if (ret == 0 && flags & VIR_DUMP_CRASH)
-        qemuDomainRemoveInactive(driver, vm, 0);
+        qemuDomainRemoveInactive(driver, vm, 0, false);

 cleanup:
[PATCH v3 2/6] qemu: tpm: Conditionally create storage on incoming migration
Do not create storage if the TPM state files are on shared storage and
there's an incoming migration, since in this case the storage directory
must already exist. Also do not run swtpm_setup in this case.

Signed-off-by: Stefan Berger

---
 src/qemu/qemu_tpm.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_tpm.c b/src/qemu/qemu_tpm.c
index dc09c94a4d..a45ad599aa 100644
--- a/src/qemu/qemu_tpm.c
+++ b/src/qemu/qemu_tpm.c
@@ -556,11 +556,19 @@ qemuTPMEmulatorBuildCommand(virDomainTPMDef *tpm,
     int pwdfile_fd = -1;
     int migpwdfile_fd = -1;
     const unsigned char *secretuuid = NULL;
+    bool create_storage = true;

     if (!swtpm)
         return NULL;

-    if (qemuTPMEmulatorCreateStorage(tpm, &created, swtpm_user, swtpm_group) < 0)
+    /* Do not create storage and run swtpm_setup on incoming migration over
+     * shared storage
+     */
+    if (incomingMigration && virFileIsSharedFS(tpm->data.emulator.storagepath))
+        create_storage = false;
+
+    if (create_storage &&
+        qemuTPMEmulatorCreateStorage(tpm, &created, swtpm_user, swtpm_group) < 0)
         return NULL;

     if (tpm->data.emulator.hassecretuuid)
-- 
2.37.3
[PATCH v3 1/6] util: Add parsing support for swtpm's cmdarg-migration capability
Add support for parsing swtpm's 'cmdarg-migration' capability (since
v0.8).

Signed-off-by: Stefan Berger

---
 src/util/virtpm.c | 1 +
 src/util/virtpm.h | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/util/virtpm.c b/src/util/virtpm.c
index 91db0f31eb..19850de1c8 100644
--- a/src/util/virtpm.c
+++ b/src/util/virtpm.c
@@ -39,6 +39,7 @@ VIR_LOG_INIT("util.tpm");
 VIR_ENUM_IMPL(virTPMSwtpmFeature,
               VIR_TPM_SWTPM_FEATURE_LAST,
               "cmdarg-pwd-fd",
+              "cmdarg-migration",
 );

 VIR_ENUM_IMPL(virTPMSwtpmSetupFeature,
diff --git a/src/util/virtpm.h b/src/util/virtpm.h
index a873881b23..fb330effa8 100644
--- a/src/util/virtpm.h
+++ b/src/util/virtpm.h
@@ -30,6 +30,7 @@ bool virTPMHasSwtpm(void);

 typedef enum {
     VIR_TPM_SWTPM_FEATURE_CMDARG_PWD_FD,
+    VIR_TPM_SWTPM_FEATURE_CMDARG_MIGRATION,

     VIR_TPM_SWTPM_FEATURE_LAST
 } virTPMSwtpmFeature;
-- 
2.37.3
[PATCH v3 0/6] qemu: tpm: Add support for migration across shared storage
This series of patches adds support for migrating vTPMs across hosts
whose storage has been set up to share the directory structure holding
the state of the TPM (swtpm). The existence of shared storage influences
the management of the directory structure holding the TPM state, which
for example is only removed when a domain is undefined and not when a VM
is removed on the migration source host. Further, when shared storage is
used, security labeling on the destination side is skipped, assuming
that the labeling was already done on the source side.

I have tested this with an NFS setup where I had to turn SELinux off on
the hosts since the SELinux MLS range labeling is not supported by NFS.

For shared storage support to work properly, both sides of the migration
need to be clients of the shared storage setup, meaning that they both
have to have /var/lib/libvirt/swtpm mounted as shared storage, because
otherwise virFileIsSharedFS() may not detect shared storage and in the
worst case may cause the TPM emulator (swtpm) to malfunction if for
example the source side removed the TPM state directory structure.

Shared storage migration requires (upcoming) swtpm v0.8.

   Stefan

v3:
 - Relying entirely on virFileIsSharedFS() on migration source and
   destination sides to detect whether shared storage is set up between
   hosts; no more hint about shared storage from user via flag
 - Added support for virDomainTPMPrivate structure to store and persist
   TPM-related private data

Stefan Berger (6):
  util: Add parsing support for swtpm's cmdarg-migration capability
  qemu: tpm: Conditionally create storage on incoming migration
  qemu: tpm: Add support for storing private TPM-related data
  qemu: tpm: Pass --migration option to swtpm if supported and needed
  qemu: tpm: Avoid security labels on incoming migration with shared
    storage
  qemu: tpm: Never remove state on outgoing migration and shared storage

 src/conf/domain_conf.c    |  63 --
 src/conf/domain_conf.h    |   9
 src/qemu/qemu_domain.c    |  85 +++--
 src/qemu/qemu_domain.h    |  17 +-
 src/qemu/qemu_driver.c    |  20 +++
 src/qemu/qemu_extdevice.c |  10 ++--
 src/qemu/qemu_extdevice.h |   6 ++-
 src/qemu/qemu_migration.c |  20 ---
 src/qemu/qemu_process.c   |   9 ++--
 src/qemu/qemu_snapshot.c  |   4 +-
 src/qemu/qemu_tpm.c       | 111 ++
 src/qemu/qemu_tpm.h       |  12 -
 src/util/virtpm.c         |   1 +
 src/util/virtpm.h         |   1 +
 14 files changed, 318 insertions(+), 50 deletions(-)

-- 
2.37.3
Re: [PATCH 3/3] docs: Do not support non-socket activated modular daemons with systemd
On Tue, Oct 18, 2022 at 04:37:32PM +0200, Martin Kletzander wrote:
> Due to the setup of the modular daemon service files, reverting to
> non-socket-activated daemons could never have worked. The reason is
> that masking the socket files prevents starting the daemons since they
> require (as in Requires= rather than Wants= in the service file) the
> sockets. On top of that it creates issues with some libvirt-guests
> setups and needlessly increases our support matrix.
>
> Nothing prevents users from modifying their setup in a way that will
> still work without socket activation, but supporting such a setup only
> creates a burden on our part.
>
> This technically reverts most of commit 59d30adacd1d except the change
> made to the libvirtd manpage since the monolithic daemon still supports
> the traditional mode of starting even on systemd.
>
> Signed-off-by: Martin Kletzander
> ---
>  docs/manpages/virtinterfaced.rst | 23 +--
>  docs/manpages/virtlxcd.rst       | 37 +--
>  docs/manpages/virtnetworkd.rst   | 23 +--
>  docs/manpages/virtnodedevd.rst   | 23 +--
>  docs/manpages/virtnwfilterd.rst  | 23 +--
>  docs/manpages/virtqemud.rst      | 37 +--
>  docs/manpages/virtsecretd.rst    | 23 +--
>  docs/manpages/virtstoraged.rst   | 23 +--
>  docs/manpages/virtvboxd.rst      | 38 ++--
>  docs/manpages/virtvzd.rst        | 37 +--
>  docs/manpages/virtxend.rst       | 37 +--
>  11 files changed, 122 insertions(+), 202 deletions(-)

Also virtproxyd.rst based on my comment to #2

>
> diff --git a/docs/manpages/virtinterfaced.rst b/docs/manpages/virtinterfaced.rst
> index 5777dba638b0..0d5ac8b489ad 100644
> --- a/docs/manpages/virtinterfaced.rst
> +++ b/docs/manpages/virtinterfaced.rst
> @@ -39,26 +39,25 @@ during startup. None the less it is recommended to avoid restarting with
>  running guests whenever practical.
>
>
> -SYSTEM SOCKET ACTIVATION
> -========================
> +DAEMON STARTUP MODES
> +====================
>
>  The ``virtinterfaced`` daemon is capable of starting in two modes.
>
> -In the traditional mode, it will create and listen on UNIX sockets itself.
>
> -In socket activation mode, it will rely on systemd to create and listen
> -on the UNIX sockets and pass them as pre-opened file descriptors. In this
> -mode most of the socket related config options in
> +Socket activation mode
> +------------------------

Nitpick - two too many '-' here (repeated throughout).

Reviewed-by: Daniel P. Berrangé

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 2/3] virtproxyd: use Wants instead of Requires for sockets
On Tue, Oct 18, 2022 at 04:37:31PM +0200, Martin Kletzander wrote:
> Similarly to commit ec7e31ed3206, allow traditional daemon activation
> for virtproxyd.

I'm not convinced we want to do this. virtproxyd has supported socket
activation since day 1, so I think we are right to enforce this, as we
have no back compat/upgrade story to consider for virtproxyd, only
libvirtd.

>
> Signed-off-by: Martin Kletzander
> ---
>  src/remote/virtproxyd.service.in | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/src/remote/virtproxyd.service.in b/src/remote/virtproxyd.service.in
> index f9bb6b84a97a..2557539fca8f 100644
> --- a/src/remote/virtproxyd.service.in
> +++ b/src/remote/virtproxyd.service.in
> @@ -1,9 +1,9 @@
>  [Unit]
>  Description=Virtualization daemon
>  Conflicts=libvirtd.service
> -Requires=virtproxyd.socket
> -Requires=virtproxyd-ro.socket
> -Requires=virtproxyd-admin.socket
> +Wants=virtproxyd.socket
> +Wants=virtproxyd-ro.socket
> +Wants=virtproxyd-admin.socket
>  After=network.target
>  After=dbus.service
>  After=apparmor.service
> --
> 2.38.0
>

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH v2 2/9] qemu: Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for TPM migration
On 10/18/22 09:47, Michal Prívozník wrote:
> On 10/18/22 14:23, Stefan Berger wrote:
>>
>> On 10/18/22 04:15, Daniel P. Berrangé wrote:
>>> On Mon, Oct 17, 2022 at 11:17:56AM -0400, Stefan Berger wrote:
>>>>
>>>> On 10/17/22 09:48, Daniel P. Berrangé wrote:
>>>>> On Mon, Oct 17, 2022 at 09:39:52AM -0400, Stefan Berger wrote:
>>>>>
>>>>> The key is in qemuMigrationSrcIsSafe(), and how it determines if a
>>>>> migration is safe.
>>>>>
>>>>> * Disk on local storage, no flags => unsafe, migration error
>>>>> * Disk on local storage, VIR_MIGRATE_NON_SHARED_DISK => ok,
>>>>>   copies disk storage
>>>>> * Disk on shared storage, no flags => safe
>>>>> * Disk on shared storage, VIR_MIGRATE_NON_SHARED_DISK => ok, but
>>>>>   needlessly copies disk storage
>>>>>
>>>>> The key helper methods are virFileIsSharedFS and virFileIsClusterFS
>>>>> which check the filesystem type for the path against a known list
>>>>> of shared/cluster FS.
>>>>
>>>> So we won't do it this way for TPM state migration. Instead we can
>>>> try to write on the source migration side
>>>>
>>>> a) a simple file and detect whether the file is at the destination
>>>> b) a file with either a name or content that only the source and
>>>>    destination libvirts would know at this point
>>>>
>>>> b) is to prevent stale files from becoming indicators for shared
>>>> storage.
>>>
>>> No, please use the virFileIsSharedFS/ClusterFS helpers.
>>>
>>>> I tried to use virFileIsSharedFS on the source and destination side
>>>> of my NFS setup to see how they work. The issue here is that the NFS
>>>> server side, that exports /var/lib/libvirt/swtpm and is also the
>>>> migration source at some point, says this:
>>>
>>> We expect both sides to have the same storage configuration, ie both
>>> must be NFS. I'm pretty sure our code for handling disks does not
>>> work when you have a local FS on one side, which is exported to the
>>> other side as NFS. Conceptually that's not something someone would
>>> do in production, since then you make the target host dependent
>>> on the src compute host, which is poor for resilience.
>>
>> Ok, so then we can replace (accesses to) VIR_MIGRATE_TPM_SHARED_STORAGE
>> with calls to virFileIsSharedFS() and simply assume that this function
>> will return the same results on both sides of the migration (without
>> testing it), and if it doesn't and failures occur then it's an
>> unsupported shared storage setup (that is documented somewhere). I hope
>> to not receive reports from people that don't describe their setup
>> appropriately but see some odd failures because of this.
>
> Just FYI, I'm testing the following patches:
>
> https://gitlab.com/MichalPrivoznik/libvirt/-/commits/tpm_migration_mine_v1
>
> There're still some parts missing, but I'm continuing to work on them.

Oh well, I just morphed my series to get rid of the flag:

https://github.com/stefanberger/libvirt-tpm/tree/swtpm_shared_storage.v3

   Stefan

> Michal
[PATCH 0/3] Support only socket activation for libvirtd and virtproxyd on systemd
After dealing with some issues and talking to Daniel it seems we don't
really need to properly support non-socket activation for all the
daemons on systemd. I'm not sure how to phrase that in the
documentation, but I gave it a shot.

Martin Kletzander (3):
  docs: Specify reverting to traditional service mode more clearly
  virtproxyd: use Wants instead of Requires for sockets
  docs: Do not support non-socket activated modular daemons with systemd

 docs/manpages/libvirtd.rst       | 10 +++--
 docs/manpages/virtinterfaced.rst | 23 +--
 docs/manpages/virtlxcd.rst       | 37 +--
 docs/manpages/virtnetworkd.rst   | 23 +--
 docs/manpages/virtnodedevd.rst   | 23 +--
 docs/manpages/virtnwfilterd.rst  | 23 +--
 docs/manpages/virtproxyd.rst     | 10 +++--
 docs/manpages/virtqemud.rst      | 37 +--
 docs/manpages/virtsecretd.rst    | 23 +--
 docs/manpages/virtstoraged.rst   | 23 +--
 docs/manpages/virtvboxd.rst      | 38 ++--
 docs/manpages/virtvzd.rst        | 37 +--
 docs/manpages/virtxend.rst       | 37 +--
 src/remote/virtproxyd.service.in |  6 ++---
 14 files changed, 141 insertions(+), 209 deletions(-)

-- 
2.38.0
[PATCH 2/3] virtproxyd: use Wants instead of Requires for sockets
Similarly to commit ec7e31ed3206, allow traditional daemon activation
for virtproxyd.

Signed-off-by: Martin Kletzander

---
 src/remote/virtproxyd.service.in | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/remote/virtproxyd.service.in b/src/remote/virtproxyd.service.in
index f9bb6b84a97a..2557539fca8f 100644
--- a/src/remote/virtproxyd.service.in
+++ b/src/remote/virtproxyd.service.in
@@ -1,9 +1,9 @@
 [Unit]
 Description=Virtualization daemon
 Conflicts=libvirtd.service
-Requires=virtproxyd.socket
-Requires=virtproxyd-ro.socket
-Requires=virtproxyd-admin.socket
+Wants=virtproxyd.socket
+Wants=virtproxyd-ro.socket
+Wants=virtproxyd-admin.socket
 After=network.target
 After=dbus.service
 After=apparmor.service
-- 
2.38.0
[PATCH 1/3] docs: Specify reverting to traditional service mode more clearly
Commit 59d30adacd1d added information about how to properly adjust the
libvirt-guests service file when disabling socket activation, but it was
still not clear that this is meant only in the aforementioned case.

Signed-off-by: Martin Kletzander

---
 docs/manpages/libvirtd.rst   | 10 ++++++++--
 docs/manpages/virtproxyd.rst | 10 ++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/docs/manpages/libvirtd.rst b/docs/manpages/libvirtd.rst
index 1347b9b21042..d10eb4e73ef7 100644
--- a/docs/manpages/libvirtd.rst
+++ b/docs/manpages/libvirtd.rst
@@ -71,8 +71,14 @@ Or
    $ systemctl start libvirtd-tcp.socket

 Socket activation mode is generally the default when running on a host
-OS that uses systemd. To revert to the traditional mode, all the socket
-unit files must be masked:
+OS that uses systemd.
+
+
+Reverting to the traditional mode
+---------------------------------
+
+To revert to the traditional mode on a systemd host, all the socket unit files
+must be masked:

 ::

diff --git a/docs/manpages/virtproxyd.rst b/docs/manpages/virtproxyd.rst
index 0366935b9a73..029c0c8d0421 100644
--- a/docs/manpages/virtproxyd.rst
+++ b/docs/manpages/virtproxyd.rst
@@ -74,8 +74,14 @@ Or
    $ systemctl start virtproxyd-tcp.socket

 Socket activation mode is generally the default when running on a host
-OS that uses systemd. To revert to the traditional mode, all the socket
-unit files must be masked:
+OS that uses systemd.
+
+
+Reverting to the traditional mode
+---------------------------------
+
+To revert to the traditional mode on a systemd host, all the socket unit files
+must be masked:

 ::

-- 
2.38.0
[PATCH 3/3] docs: Do not support non-socket activated modular daemons with systemd
Due to the setup of the modular daemon service files, reverting to
non-socket-activated daemons could never have worked. The reason is
that masking the socket files prevents starting the daemons since they
require (as in Requires= rather than Wants= in the service file) the
sockets. On top of that it creates issues with some libvirt-guests
setups and needlessly increases our support matrix.

Nothing prevents users from modifying their setup in a way that will
still work without socket activation, but supporting such a setup only
creates a burden on our part.

This technically reverts most of commit 59d30adacd1d except the change
made to the libvirtd manpage since the monolithic daemon still supports
the traditional mode of starting even on systemd.

Signed-off-by: Martin Kletzander

---
 docs/manpages/virtinterfaced.rst | 23 +--
 docs/manpages/virtlxcd.rst       | 37 +--
 docs/manpages/virtnetworkd.rst   | 23 +--
 docs/manpages/virtnodedevd.rst   | 23 +--
 docs/manpages/virtnwfilterd.rst  | 23 +--
 docs/manpages/virtqemud.rst      | 37 +--
 docs/manpages/virtsecretd.rst    | 23 +--
 docs/manpages/virtstoraged.rst   | 23 +--
 docs/manpages/virtvboxd.rst      | 38 ++--
 docs/manpages/virtvzd.rst        | 37 +--
 docs/manpages/virtxend.rst       | 37 +--
 11 files changed, 122 insertions(+), 202 deletions(-)

diff --git a/docs/manpages/virtinterfaced.rst b/docs/manpages/virtinterfaced.rst
index 5777dba638b0..0d5ac8b489ad 100644
--- a/docs/manpages/virtinterfaced.rst
+++ b/docs/manpages/virtinterfaced.rst
@@ -39,26 +39,25 @@ during startup. None the less it is recommended to avoid restarting with
 running guests whenever practical.


-SYSTEM SOCKET ACTIVATION
-========================
+DAEMON STARTUP MODES
+====================

 The ``virtinterfaced`` daemon is capable of starting in two modes.

-In the traditional mode, it will create and listen on UNIX sockets itself.

-In socket activation mode, it will rely on systemd to create and listen
-on the UNIX sockets and pass them as pre-opened file descriptors. In this
-mode most of the socket related config options in
+Socket activation mode
+------------------------
+
+On hosts with systemd it is started in socket activation mode and it will rely
+on systemd to create and listen on the UNIX sockets and pass them as pre-opened
+file descriptors. In this mode most of the socket related config options in
 ``/etc/libvirt/virtinterfaced.conf`` will no longer have any effect.

-Socket activation mode is generally the default when running on a host
-OS that uses systemd. To revert to the traditional mode, all the socket
-unit files must be masked:

-::
+Traditional service mode
+--------------------------

-   $ systemctl mask virtinterfaced.socket virtinterfaced-ro.socket \
-       virtinterfaced-admin.socket
+On hosts without systemd, it will create and listen on UNIX sockets itself.


 OPTIONS
diff --git a/docs/manpages/virtlxcd.rst b/docs/manpages/virtlxcd.rst
index aebc8adb5822..704e30dfea3e 100644
--- a/docs/manpages/virtlxcd.rst
+++ b/docs/manpages/virtlxcd.rst
@@ -39,40 +39,25 @@ during startup. None the less it is recommended to avoid restarting with
 running guests whenever practical.


-SYSTEM SOCKET ACTIVATION
-========================
+DAEMON STARTUP MODES
+====================

 The ``virtlxcd`` daemon is capable of starting in two modes.

-In the traditional mode, it will create and listen on UNIX sockets itself.

-In socket activation mode, it will rely on systemd to create and listen
-on the UNIX sockets and pass them as pre-opened file descriptors. In this
-mode most of the socket related config options in
-``/etc/libvirt/virtlxcd.conf`` will no longer have any effect.
-
-Socket activation mode is generally the default when running on a host
-OS that uses systemd. To revert to the traditional mode, all the socket
-unit files must be masked:
-
-::
+Socket activation mode
+------------------------

-   $ systemctl mask virtlxcd.socket virtlxcd-ro.socket \
-       virtlxcd-admin.socket
+On hosts with systemd it is started in socket activation mode and it will rely
+on systemd to create and listen on the UNIX sockets and pass them as pre-opened
+file descriptors. In this mode most of the socket related config options in
+``/etc/libvirt/virtlxcd.conf`` will no longer have any effect.

-If using libvirt-guests service then the ordering for that service needs to be
-adapted so that it is ordered after the service unit instead of the socket unit.
-Since dependencies and ordering cannot be changed with drop-in overrides, the
-whole libvirt-guests unit file needs to be changed. In order to preserve such
-change copy the installed ``/usr/lib/systemd/system/libvirt-guests.service`` to
Re: [PATCH v2 2/9] qemu: Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for TPM migration
On 10/18/22 14:23, Stefan Berger wrote: > > > On 10/18/22 04:15, Daniel P. Berrangé wrote: >> On Mon, Oct 17, 2022 at 11:17:56AM -0400, Stefan Berger wrote: >>> >>> >>> On 10/17/22 09:48, Daniel P. Berrangé wrote: On Mon, Oct 17, 2022 at 09:39:52AM -0400, Stefan Berger wrote: > > >>> The key is in qemuMigrationSrcIsSafe(), and how it determines if a migration is safe. * Disk on local storage, no flags => unsafe, migration error * Disk on local storage, VIR_MIGRATE_NON_SHARED_DISK => ok, copies disk storage * Disk on shared storage, no flags => safe * Disk on shared storage, VIR_MIGRATE_NON_SHARED_DISK => ok, but needlessly copies disk storage The key helper methods are virFileIsSharedFS and virFileIsClusterFS which check the filesystem type for the path against a known list of shared/cluster FS. > So we won't do it this way for TPM state migration. Instead we can > try to write on the source migration side > > a) a simple file and detect whether the file is at the destination > b) a file with either a name or content that only the source and > destination libvirts would know at this point > > b) is to prevent stale files from becoming indicators for shared > storage. No, please use the virFileIsSharedFS/ClusterFS helpers. >>> >>> I tried to use virFileIsSharedFS on the source and destination side of >>> my NFS setup to see how they work. The issue here is that the NFS server >>> side, that exports /var/lib/libvirt/swtpm and is also the migration >>> source at some point, says this: >> >> We expect both sides to have the same storage configurtion. ie both >> must be NFS. I'm pretty sure our code for handling disks does not >> work when you have a local FS on one side, which is exported to the >> other side as NFS. Conceptally that's not something someone would >> do in production, since they you make the target host dependant >> on the src compute host, which is poor for resilience. 
>> > > Ok, so then we can replace (accesses to) VIR_MIGRATE_TPM_SHARED_STORAGE > with calls to virFileIsSharedFS() and simply assume that this function > will return the same results on both sides of the migration (without > testing it) and if it doesn't and failures occur then it's an > unsupported shared storage setup (that is documented somewhere). I hope > to not receive reports from people that don't describe their setup > appropriately but see some odd failures because of this. Just FYI, I'm testing the following patches: https://gitlab.com/MichalPrivoznik/libvirt/-/commits/tpm_migration_mine_v1 There are still some parts missing, but I'm continuing to work on them. Michal
Re: [PATCH v2 2/9] qemu: Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for TPM migration
On 10/18/22 04:15, Daniel P. Berrangé wrote: On Mon, Oct 17, 2022 at 11:17:56AM -0400, Stefan Berger wrote: On 10/17/22 09:48, Daniel P. Berrangé wrote: On Mon, Oct 17, 2022 at 09:39:52AM -0400, Stefan Berger wrote: The key is in qemuMigrationSrcIsSafe(), and how it determines if a migration is safe. * Disk on local storage, no flags => unsafe, migration error * Disk on local storage, VIR_MIGRATE_NON_SHARED_DISK => ok, copies disk storage * Disk on shared storage, no flags => safe * Disk on shared storage, VIR_MIGRATE_NON_SHARED_DISK => ok, but needlessly copies disk storage The key helper methods are virFileIsSharedFS and virFileIsClusterFS which check the filesystem type for the path against a known list of shared/cluster FS. So we won't do it this way for TPM state migration. Instead we can try to write on the source migration side a) a simple file and detect whether the file is at the destination b) a file with either a name or content that only the source and destination libvirts would know at this point b) is to prevent stale files from becoming indicators for shared storage. No, please use the virFileIsSharedFS/ClusterFS helpers. I tried to use virFileIsSharedFS on the source and destination side of my NFS setup to see how they work. The issue here is that the NFS server side, that exports /var/lib/libvirt/swtpm and is also the migration source at some point, says this: We expect both sides to have the same storage configuration, i.e. both must be NFS. I'm pretty sure our code for handling disks does not work when you have a local FS on one side, which is exported to the other side as NFS. Conceptually that's not something someone would do in production, since then you make the target host dependent on the src compute host, which is poor for resilience. 
Ok, so then we can replace (accesses to) VIR_MIGRATE_TPM_SHARED_STORAGE with calls to virFileIsSharedFS() and simply assume that this function will return the same results on both sides of the migration (without testing it) and if it doesn't and failures occur then it's an unsupported shared storage setup (that is documented somewhere). I hope to not receive reports from people that don't describe their setup appropriately but see some odd failures because of this. Stefan With regards, Daniel
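The check the thread settles on boils down to mapping a path to its backing mount and comparing the filesystem type against a fixed list of shared/cluster filesystems. A rough Python sketch of that idea follows; the real virFileIsSharedFS() is C code in libvirt's virfile utilities and inspects statfs() magic numbers, so the mount-table representation and the exact filesystem list here are illustrative only:

```python
# Illustrative sketch of the virFileIsSharedFS() idea: classify a path as
# shared storage by the filesystem type of its longest matching mount.
# The real libvirt code is C and checks statfs() f_type magic numbers.

SHARED_FS_TYPES = {"nfs", "nfs4", "gfs2", "ocfs2", "ceph", "cifs", "glusterfs"}

def is_shared_fs(path, mounts):
    """mounts: list of (mountpoint, fstype) tuples, e.g. parsed /proc/mounts."""
    best = ("", "")
    for mountpoint, fstype in mounts:
        # longest-prefix match on whole path components
        if (path == mountpoint or
                path.startswith(mountpoint.rstrip("/") + "/")) \
                and len(mountpoint) > len(best[0]):
            best = (mountpoint, fstype)
    return best[1] in SHARED_FS_TYPES

mounts = [("/", "xfs"), ("/var/lib/libvirt/swtpm", "nfs4")]
print(is_shared_fs("/var/lib/libvirt/swtpm/uuid/tpm2", mounts))  # True
```

As the thread notes, this only works if both ends really are on the shared filesystem; an NFS server that is also the migration source sees a local filesystem and would be classified differently.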
Re: [PATCH 3/3] test: Fix parsing nested XML
On Mon, Oct 17, 2022 at 12:16:41 -0400, Cole Robinson wrote: > Reproducer: > > ./build/tools/virsh \ > --connect test:///`pwd`/examples/xml/test/testnodeinline.xml \ > vol-list default-pool > > Fixes: b3e33a0ef7e62169175280c647aa9ac361bd554d > > Signed-off-by: Cole Robinson > --- Oops. Reviewed-by: Peter Krempa
Re: [PATCH 2/3] examples: testdriver: Add a nested inline example
On Mon, Oct 17, 2022 at 12:16:40 -0400, Cole Robinson wrote: > Signed-off-by: Cole Robinson > --- > examples/xml/test/testnodeinline.xml | 22 ++ > 1 file changed, 22 insertions(+) Reviewed-by: Peter Krempa
Re: [PATCH 1/3] examples: testdriver: Add xmlns runstate example
On Mon, Oct 17, 2022 at 12:16:39 -0400, Cole Robinson wrote: > The testdriver has xmlns support for overriding object default > state. demo it by pausing a VM > > Signed-off-by: Cole Robinson > --- > examples/xml/test/testnodeinline.xml | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) Reviewed-by: Peter Krempa
Re: [libvirt PATCH 00/12] tools: provide virt-qemu-sev-validate for SEV(-ES) launch attestation
On Sun, Oct 16, 2022 at 03:06:17PM -0400, Cole Robinson wrote: > On 10/7/22 7:42 AM, Daniel P. Berrangé wrote: > > The libvirt QEMU driver provides all the functionality required for > > launching a guest on AMD SEV(-ES) platforms, with a configuration > > that enables attestation of the launch measurement. The documentation > > for how to actually perform an attestation is severely lacking and > > not suitable for mere mortals to understand. IOW, someone trying to > > implement attestation is in for a world of pain and suffering. > > > > This series doesn't fix the documentation problem, but it does > > provide a reference implementation of a tool for performing > > attestation of SEV(-ES) guests in the context of libvirt / KVM. > > > > There will be other tools and libraries that implement attestation > > logic too, but this tool is likely somewhat unique in its usage of > > libvirt. Now for an attestation to be trustworthy you don't want to > > perform it on the hypervisor host, since the goal is to prove that > > the hypervisor has not acted maliciously. None the less it is still > > beneficial to have libvirt integration to some extent. > > > > When running this tool on a remote (trusted) host, it can connect > > to the libvirt hypervisor and fetch the data provided by the > > virDomainLaunchSecurityInfo API, which is safe to trust as the > > key pieces are cryptographically measured. > > > > Attestation is a complex problem though and it is very easy to > > screw up and feed the wrong information and then waste hours trying > > to figure out what piece was wrong, to cause the hash digest to > > change. For debugging such problems, you can thus tell the tool > > to operate insecurely, by querying libvirt for almost all of the > > configuration information required to determine the expected > > measurement. By comparing these results to the results obtained > > in offline mode it helps narrow down where the mistake lies. 
> > > > So I view this tool as being useful in a number of ways: > > > > * Quality assurance engineers needing to test libvirt/QEMU/KVM > >get a simple and reliable tool for automating tests with. > > > > * Users running simple libvirt deployments without any large > >management stack, get a standalone tool for attestation > >they can rely on. > > > > * Developers writing/integrating attestation support into > >management stacks above libvirt, get a reference against > >which they can debug their own tools. > > > > * Users wanting to demonstrate the core SEV/SEV-ES functionality > >get a simple and reliable tool to illustrate the core concepts > >involved. > > > > Since I didn't fancy writing such complex logic in C, this tool is > > a python3 program. As such, we don't want to include it in the > > main libvirt-client RPM, nor any other existing RPM. Thus, this > > series puts it in a new libvirt-client-qemu RPM which, through no > > coincidence at all, is the same RPM I invented a few days ago to > > hold the virt-qemu-qmp-proxy command. > > > > Note, people will have already seen an earlier version of this > > tool I hacked up some months ago. This code is very significantly > > changed since that earlier version, to make it more maintainable, > > and simpler to use (especially for SEV-ES) but the general theme > > is still the same. > > > > Daniel P. 
Berrangé (12): > > build-aux: only forbid gethostname in C files > > tools: support validating SEV firmware boot measurements > > tools: load guest config from libvirt > > tools: support validating SEV direct kernel boot measurements > > tools: load direct kernel config from libvirt > > tools: support validating SEV-ES initial vCPU state measurements > > tools: support automatically constructing SEV-ES vCPU state > > tools: load CPU count and CPU SKU from libvirt > > tools: support generating SEV secret injection tables > > docs/kbase: describe attestation for SEV guests > > scripts: add systemtap script for capturing SEV-ES VMSA > > docs/manpages: add checklist of problems for SEV attestation > > > After the fix I mentioned on patch 7, the script worked for my sev, > sev-es, sev-es + kernel setup, with vmsa passed in and vmsa generated. > > Attached is some pylint output, nothing really serious: We've not run pylint on libvirt tree historically, but I wonder if we should introduce it to check all our build scripts too. With regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
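For readers unfamiliar with what such a tool actually verifies: the core of SEV attestation is recomputing the launch measurement locally and comparing it with the value the platform security processor reported. A minimal Python sketch follows, assuming the HMAC layout described in the AMD SEV API specification (message = 0x04 || API_MAJOR || API_MINOR || BUILD_ID || POLICY || launch digest || MNONCE, keyed with the TIK); all concrete input values below are made up for illustration:

```python
# Sketch of the core check such a validation tool performs: recompute the
# SEV launch measurement and compare it to what the PSP reported.
# launch_digest is SHA-256 over the measured firmware (for SEV-ES it also
# covers the VMSA blocks, omitted here).  All inputs are dummy values.
import hashlib
import hmac

def compute_measurement(tik, api_major, api_minor, build_id, policy,
                        firmware, mnonce):
    launch_digest = hashlib.sha256(firmware).digest()
    msg = (bytes([0x04, api_major, api_minor, build_id])
           + policy.to_bytes(4, "little")   # guest policy, 4 bytes LE
           + launch_digest
           + mnonce)                        # nonce returned with the measurement
    return hmac.new(tik, msg, hashlib.sha256).digest()

expected = compute_measurement(tik=b"\x00" * 16, api_major=1, api_minor=30,
                               build_id=13, policy=0x07,
                               firmware=b"OVMF", mnonce=b"\x11" * 16)
print(len(expected))  # HMAC-SHA256 digest is 32 bytes
```

Any mistake in any single input (policy, firmware image, build id, ...) changes the HMAC completely, which is exactly why the cover letter warns that debugging a mismatch can take hours.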
Re: [libvirt PATCH 12/12] docs/manpages: add checklist of problems for SEV attestation
On Sun, Oct 16, 2022 at 03:27:39PM -0400, Cole Robinson wrote: > On 10/7/22 7:43 AM, Daniel P. Berrangé wrote: > > Despite efforts to make the virt-qemu-sev-validate tool friendly, it is > > a certainty that almost everyone who tries it will hit false negative > > results, getting a failure despite the VM being trustworthy. > > > > Diagnosing these problems is no easy matter, especially for those not > > familiar with SEV/SEV-ES in general. This extra docs text attempts to > > set out a checklist of items to look at to identify what went wrong. > > > > Signed-off-by: Daniel P. Berrangé > > --- > > docs/manpages/virt-qemu-sev-validate.rst | 112 +++ > > 1 file changed, 112 insertions(+) > > > > diff --git a/docs/manpages/virt-qemu-sev-validate.rst > > b/docs/manpages/virt-qemu-sev-validate.rst > > index 7542bea9aa..e0c18f2d20 100644 > > --- a/docs/manpages/virt-qemu-sev-validate.rst > > +++ b/docs/manpages/virt-qemu-sev-validate.rst > > @@ -437,6 +437,118 @@ inject a disk password on success: > > --domain fedora34x86_64 \ > > --disk-password passwd.txt > > > > +COMMON MISTAKES CHECKLIST > > += > > + > > +The complexity of configuring a guest and validating its boot measurement > > +means it is very likely to see the failure:: > > + > > + ERROR: Measurement does not match, VM is not trustworthy > > + > > +This error message assumes the worst, but in most cases the failure will > > be > > +a result of either mis-configuring the guest, or passing the wrong > > information > > +when trying to validate it. The following information is a guide for what > > +items to check in order to stand the best chance of diagnosing the problem > > + > > +* Check the VM configuration for the DH certificate and session > > + blob in the libvirt guest XML. > > + > > + The content for these fields should be in base64 format, which is > > + what ``sevctl session`` generates. Other tools may generate the files > > + in binary format, so ensure it has been correctly converted to base64. 
> > + > > +* Check the VM configuration policy value matches the session blob > > + > > + The value in libvirt guest XML has to match the value > > + passed to the ``sevctl session`` command. > > + > > FWIW In this case, qemu will explicitly error. From 7.0.0-6.fc36: > > -accel kvm: sev_launch_start: LAUNCH_START ret=1 fw_error=11 'Bad > measurement' Oh, I had forgotten that > > I think it's worth putting some subset of that qemu error string at the > top of this section too. If users hit it, going through the checklist > here may solve their issue. > > For example, If you're flailing around with sevctl like I have, some of > the sub commands will invalidate all your previous generated > session/dhCert blobs, and subsequent VM boots will fail as above. > `sevctl reset` and/or `sevctl rotate`. That's probably obscure enough to > not need documenting, but if the first item here leads to re-running > sevctl session then you'll fix your problem :) Hmm, yes, I'd stayed away from reset/rotate to avoid trouble :-) With regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
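The first checklist item above is easy to automate: before pasting a session or DH-cert blob into the guest XML, confirm it really is base64 text rather than the raw binary some tools emit. A small sketch (the helper names and the sample blob are hypothetical):

```python
# Sanity check for the first checklist item: the <dhCert> and <session>
# values in the guest XML must be base64 text, not raw binary.
import base64

def to_base64_blob(raw: bytes) -> str:
    """Convert a binary session/DH-cert blob to the base64 text libvirt expects."""
    return base64.b64encode(raw).decode("ascii")

def looks_like_base64(text: str) -> bool:
    try:
        base64.b64decode(text, validate=True)
        return True
    except Exception:
        return False

blob = to_base64_blob(b"\x00\x01\x02session-blob")
print(looks_like_base64(blob))        # True
print(looks_like_base64("\x00\x01"))  # raw binary pasted into XML -> False
```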
Re: [libvirt PATCH 08/12] tools: load CPU count and CPU SKU from libvirt
On Sun, Oct 16, 2022 at 03:09:43PM -0400, Cole Robinson wrote: > On 10/7/22 7:43 AM, Daniel P. Berrangé wrote: > > When validating a SEV-ES guest, we need to know the CPU count and VMSA > > state. We can get the CPU count directly from libvirt's guest info. The > > VMSA state can be constructed automatically if we query the CPU SKU from > > host capabilities XML. Neither of these is secure, however, so this > > behaviour is restricted. > > > > Signed-off-by: Daniel P. Berrangé > > --- > > docs/manpages/virt-qemu-sev-validate.rst | 4 > > tools/virt-qemu-sev-validate.py | 23 +++ > > 2 files changed, 23 insertions(+), 4 deletions(-) > > > > diff --git a/docs/manpages/virt-qemu-sev-validate.rst > > b/docs/manpages/virt-qemu-sev-validate.rst > > index 7ba7323e13..fcc13d68c8 100644 > > --- a/docs/manpages/virt-qemu-sev-validate.rst > > +++ b/docs/manpages/virt-qemu-sev-validate.rst > > @@ -356,7 +356,6 @@ Validate the measurement of a SEV-ES SMP guest booting > > from disk: > > > > # virt-dom-sev-validate \ > > --insecure \ > > - --num-cpus 2 \ > > --vmsa-cpu0 vmsa0.bin \ > > --vmsa-cpu1 vmsa1.bin \ > > --tk this-guest-tk.bin \ > > @@ -369,9 +368,6 @@ automatically constructed VMSA: > > > > # virt-dom-sev-validate \ > > --insecure \ > > - --cpu-family 23 \ > > - --cpu-model 49 \ > > - --cpu-stepping 0 \ > > --tk this-guest-tk.bin \ > > --domain fedora34x86_64 > > > > diff --git a/tools/virt-qemu-sev-validate.py > > b/tools/virt-qemu-sev-validate.py > > index 2505aff07f..5da1353e60 100755 > > --- a/tools/virt-qemu-sev-validate.py > > +++ b/tools/virt-qemu-sev-validate.py > > @@ -869,6 +869,14 @@ class LibvirtConfidentialVM(ConfidentialVM): > > if self.policy is None: > > self.policy = sevinfo["sev-policy"] > > > > +if self.is_sev_es() and self.num_cpus is None: > > +if secure: > > +raise InsecureUsageException( > > +"Using CPU count from guest is not secure") > > + > > +info = self.dom.info() > > +self.num_cpus = info[3] > > + > > if self.firmware is None: > > if remote: 
> > raise UnsupportedUsageException( > > @@ -914,6 +922,21 @@ class LibvirtConfidentialVM(ConfidentialVM): > > "Using cmdline string from XML is not secure") > > self.kernel_table.load_cmdline(cmdlinenodes[0].text) > > > > +capsxml = self.conn.getCapabilities() > > +capsdoc = etree.fromstring(capsxml) > > + > > +if self.is_sev_es() and self.vmsa_cpu0 is None: > > +if secure: > > +raise InsecureUsageException( > > +"Using CPU SKU from capabilities is not secure") > > + > > +sig = capsdoc.xpath("/capabilities/host/cpu/signature") > > +if len(sig) == 1: > > If this is missing, I'd make it fatal, libvirtd isn't new enough. Could > happen if talking to a remote machine (or testing the script while f36 > fedora libvirtd is running, which I did :) ) . It's going to fail later > anyways. Makes sense. With regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [libvirt PATCH 07/12] tools: support automatically constructing SEV-ES vCPU state
On Sun, Oct 16, 2022 at 03:00:25PM -0400, Cole Robinson wrote: > On 10/7/22 7:43 AM, Daniel P. Berrangé wrote: > > The VMSA files contain the expected CPU register state for the VM. Their > > content varies based on a few pieces of the stack > > > > - AMD CPU architectural initial state > > - KVM hypervisor VM CPU initialization > > - QEMU userspace VM CPU initialization > > - AMD CPU SKU (family/model/stepping) > > > > The first three pieces of information we can obtain through code > > inspection. The last piece of information we can take on the command > > line. This allows a user to validate a SEV-ES guest merely by providing > > the CPU SKU information, using --cpu-family, --cpu-model, > > --cpu-stepping. This avoids the need to obtain or construct VMSA files > > directly. > > > > Signed-off-by: Daniel P. Berrangé > > --- > > docs/manpages/virt-qemu-sev-validate.rst | 45 +++ > > tools/virt-qemu-sev-validate.py | 475 +++ > > 2 files changed, 520 insertions(+) > > > > diff --git a/docs/manpages/virt-qemu-sev-validate.rst > > b/docs/manpages/virt-qemu-sev-validate.rst > > index 24bca98d28..7ba7323e13 100644 > > --- a/docs/manpages/virt-qemu-sev-validate.rst > > +++ b/docs/manpages/virt-qemu-sev-validate.rst > > @@ -243,6 +243,24 @@ Validate the measurement of a SEV-ES SMP guest booting > > from disk: > > --build-id 13 \ > > --policy 7 > > snip > > +class OVMF(object): > > + > > +OVMF_TABLE_FOOTER_GUID = UUID("96b582de-1fb2-45f7-baea-a366c55a082d") > > +SEV_INFO_BLOCK_GUID = UUID("00f771de-1a7e-4fcb-890e-68c77e2fb44e") > > + > > +def __init__(self): > > +self.entries = {} > > + > > +def load(self, content): > > +expect = OVMF.OVMF_TABLE_FOOTER_GUID.bytes_le > > +actual = content[-48:-32] > > +if expect != actual: > > +raise Exception("OVMF footer GUID not found") > > + > > +tablelen = int.from_bytes(content[-50:-48], byteorder='little') > > + > > +if tablelen == 0: > > +raise Exception("OVMF tables zero length") > > + > > +table = content[-(50 + tablelen):-50] > > 
+ > > +self.parse_table(table) > > + > > +def parse_table(self, data): > > +while len(data) > 0: > > +entryuuid = UUID(bytes_le=data[-16:]) > > +entrylen = int.from_bytes(data[-18:-16], byteorder='little') > > +entrydata = data[-entrylen:-18] > > + > > +self.entries[str(entryuuid)] = entrydata > > + > > +data = data[0:-(18 + entrylen)] > > + > > > I noticed this with your older branch, but the parsing here only works > for the first entry, print(self.entries) will show you. That's all we > need for the script, but this will fix later entry parsing: > > > --- a/tools/virt-qemu-sev-validate.py > +++ b/tools/virt-qemu-sev-validate.py > @@ -480,7 +480,7 @@ class OVMF(object): > if tablelen == 0: > raise Exception("OVMF tables zero length") > > -table = content[-(50 + tablelen):-50] > +table = content[-(32 + tablelen):-50] > > self.parse_table(table) > > @@ -492,7 +492,7 @@ class OVMF(object): > > self.entries[str(entryuuid)] = entrydata > > -data = data[0:-(18 + entrylen)] > +data = data[0:-entrylen] Ah yes, well spotted. > > +def reset_addr(self): > > +if str(OVMF.SEV_INFO_BLOCK_GUID) not in self.entries: > > +raise Exception("SEV info block GUID not found") > > + > > +info = SevInfoBlock() > > +info.unpack(self.entries[str(OVMF.SEV_INFO_BLOCK_GUID)]) > > + > > +return info.reset_addr.value > > unpack() isn't implemented, so this will error. You could implement it > but it's kinda overkill. All you need is: Sigh. It *was* implemented fully. I tested every possible scenario. Then right before sending this, I deleted the unpack code as I thought it wasn't used. /facepalm. 
> > diff --git a/tools/virt-qemu-sev-validate.py > b/tools/virt-qemu-sev-validate.py > index 2c5ad9083d..78d94604d5 100755 > --- a/tools/virt-qemu-sev-validate.py > +++ b/tools/virt-qemu-sev-validate.py > @@ -454,13 +454,6 @@ class VMSA(Struct): > self.cs_base.value = reset_cs > > > -class SevInfoBlock(Struct): > - > -def __init__(self): > -super().__init__(size=4) > -self.register_field("reset_addr", Field.U32) > - > - > class OVMF(object): > > OVMF_TABLE_FOOTER_GUID = UUID("96b582de-1fb2-45f7-baea-a366c55a082d") > @@ -498,10 +491,9 @@ class OVMF(object): > if str(OVMF.SEV_INFO_BLOCK_GUID) not in self.entries: > raise Exception("SEV info block GUID not found") > > -info = SevInfoBlock() > -info.unpack(self.entries[str(OVMF.SEV_INFO_BLOCK_GUID)]) > - > -return info.reset_addr.value > +reset_addr = int.from_bytes( > +self.entries[str(OVMF.SEV_INFO_BLOCK_GUID)], "little") > +
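Putting the corrected parsing together: the OVMF GUIDed table is walked backwards from the end of the firmware image, with 32 reserved bytes, then the footer GUID (16 bytes, little-endian), then a 2-byte little-endian table length that includes the footer's own 18 bytes, then the entries, each laid out as data || len(2) || guid(16). The self-contained sketch below builds a synthetic blob and parses it with both of Cole's fixes applied; the synthetic firmware is for illustration only:

```python
# Self-contained sketch of the OVMF GUIDed-table parsing discussed above,
# with the two off-by-footer fixes applied.
from uuid import UUID

OVMF_TABLE_FOOTER_GUID = UUID("96b582de-1fb2-45f7-baea-a366c55a082d")
SEV_INFO_BLOCK_GUID = UUID("00f771de-1a7e-4fcb-890e-68c77e2fb44e")

def make_entry(guid, payload):
    entrylen = len(payload) + 18                 # data + len(2) + guid(16)
    return payload + entrylen.to_bytes(2, "little") + guid.bytes_le

def make_firmware(entries):
    body = b"".join(make_entry(g, p) for g, p in entries)
    tablelen = len(body) + 18                    # footer len+guid counted too
    return (body + tablelen.to_bytes(2, "little")
            + OVMF_TABLE_FOOTER_GUID.bytes_le + b"\x00" * 32)

def parse_entries(content):
    assert content[-48:-32] == OVMF_TABLE_FOOTER_GUID.bytes_le, "no OVMF footer"
    tablelen = int.from_bytes(content[-50:-48], "little")
    data = content[-(32 + tablelen):-50]         # fixed: was -(50 + tablelen)
    entries = {}
    while len(data) > 0:
        guid = UUID(bytes_le=data[-16:])
        entrylen = int.from_bytes(data[-18:-16], "little")
        entries[str(guid)] = data[-entrylen:-18]
        data = data[:-entrylen]                  # fixed: was -(18 + entrylen)
    return entries

fw = make_firmware([(SEV_INFO_BLOCK_GUID, (0xFFFFFF00).to_bytes(4, "little")),
                    (UUID(int=1), b"other")])
reset_addr = int.from_bytes(parse_entries(fw)[str(SEV_INFO_BLOCK_GUID)], "little")
print(hex(reset_addr))  # 0xffffff00
```

With the original slicing, only the entry adjacent to the footer parsed correctly; the two-entry blob above exercises the later-entry case the review flagged.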
Re: [libvirt PATCH 02/12] tools: support validating SEV firmware boot measurements
On Sun, Oct 16, 2022 at 02:54:47PM -0400, Cole Robinson wrote: > On 10/7/22 7:42 AM, Daniel P. Berrangé wrote: > > The virt-qemu-sev-validate program will compare a reported SEV/SEV-ES > > domain launch measurement to a computed launch measurement. This > > determines whether the domain has been tampered with during launch. > > > > This initial implementation requires all inputs to be provided > > explicitly, and as such can run completely offline, without any > > connection to libvirt. > > > > The tool is placed in the libvirt-client-qemu sub-RPM since it is > > specific to the QEMU driver. > > > > Signed-off-by: Daniel P. Berrangé > > > +try: > > +check_usage(args) > > + > > +attest(args) > > + > > +sys.exit(0) > > +except AttestationFailedException as e: > > +if not args.quiet: > > +print("ERROR: %s" % e, file=sys.stderr) > > +sys.exit(1) > > +except UnsupportedUsageException as e: > > +if not args.quiet: > > +print("ERROR: %s" % e, file=sys.stderr) > > +sys.exit(2) > > +except Exception as e: > > +if args.debug: > > +traceback.print_tb(e.__traceback__) > > +if not args.quiet: > > +print("ERROR: %s" % e, file=sys.stderr) > > +sys.exit(3) > > This only tracebacks on --debug for an unexpected error. I think it's > more useful to have --debug always print backtrace. It helped me > debug usage of the script Ok, I can do that. Do you recall what sort of problems required you to be looking at the debug output? Wondering if there's anything we can do to make it more foolproof for less knowledgeable users? With regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
[PATCH 5/5] qemu: Refresh rx-filters more often
There are a couple of scenarios where we need to reflect a MAC change done in the guest: 1) domain restore from a file (here, we don't store updated MAC in the save file and thus on restore create the macvtap with the original MAC), 2) reconnecting to a running domain (here, the guest might have changed the MAC while we were not running), 3) migration (here, the guest might change the MAC address but we fail to respond to it). Signed-off-by: Michal Privoznik --- src/qemu/qemu_domain.c | 9 ++--- src/qemu/qemu_domain.h | 3 ++- src/qemu/qemu_driver.c | 2 +- src/qemu/qemu_process.c | 27 +++ 4 files changed, 36 insertions(+), 5 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 1b93ebe579..b408ec0607 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -11975,14 +11975,17 @@ syncNicRxFilterMulticast(char *ifname, int qemuDomainSyncRxFilter(virDomainObj *vm, - virDomainNetDef *def) + virDomainNetDef *def, + virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv = vm->privateData; g_autoptr(virNetDevRxFilter) guestFilter = NULL; g_autoptr(virNetDevRxFilter) hostFilter = NULL; int rc; -qemuDomainObjEnterMonitor(vm); +if (qemuDomainObjEnterMonitorAsync(vm, asyncJob) < 0) +return -1; + rc = qemuMonitorQueryRxFilter(priv->mon, def->info.alias, ); qemuDomainObjExitMonitor(vm); if (rc < 0) @@ -11990,7 +11993,7 @@ qemuDomainSyncRxFilter(virDomainObj *vm, if (virDomainNetGetActualType(def) == VIR_DOMAIN_NET_TYPE_DIRECT) { if (virNetDevGetRxFilter(def->ifname, )) { -VIR_WARN("Couldn't get current RX filter for device %s while responding to NIC_RX_FILTER_CHANGED", +VIR_WARN("Couldn't get current RX filter for device %s", def->ifname); return -1; } diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index f436861efc..37e0a90452 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -1101,4 +1101,5 @@ qemuDomainObjWait(virDomainObj *vm); int qemuDomainSyncRxFilter(virDomainObj *vm, - virDomainNetDef *def); + virDomainNetDef 
*def, + virDomainAsyncJob asyncJob); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d5fb2913be..803d2c1771 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -3662,7 +3662,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm, VIR_DEBUG("process NIC_RX_FILTER_CHANGED event for network " "device %s in domain %s", def->info.alias, vm->def->name); -if (qemuDomainSyncRxFilter(vm, def) < 0) +if (qemuDomainSyncRxFilter(vm, def, VIR_ASYNC_JOB_NONE) < 0) goto endjob; endjob: diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 1a9175f40f..fe98601fce 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -7756,6 +7756,26 @@ qemuProcessLaunch(virConnectPtr conn, } +static int +qemuProcessRefreshRxFilters(virDomainObj *vm, +virDomainAsyncJob asyncJob) +{ +size_t i; + +for (i = 0; i < vm->def->nnets; i++) { +virDomainNetDef *def = vm->def->nets[i]; + +if (!virDomainNetGetActualTrustGuestRxFilters(def)) +continue; + +if (qemuDomainSyncRxFilter(vm, def, asyncJob) < 0) +return -1; +} + +return 0; +} + + /** * qemuProcessRefreshState: * @driver: qemu driver data @@ -7787,6 +7807,10 @@ qemuProcessRefreshState(virQEMUDriver *driver, if (qemuProcessRefreshDisks(vm, asyncJob) < 0) return -1; +VIR_DEBUG("Updating rx-filter data"); +if (qemuProcessRefreshRxFilters(vm, asyncJob) < 0) +return -1; + return 0; } @@ -8807,6 +8831,9 @@ qemuProcessReconnect(void *opaque) if (qemuSecurityReserveLabel(driver->securityManager, obj->def, obj->pid) < 0) goto error; +if (qemuProcessRefreshRxFilters(obj, VIR_ASYNC_JOB_NONE) < 0) +goto error; + qemuProcessNotifyNets(obj->def); qemuProcessFiltersInstantiate(obj->def); -- 2.37.3
[PATCH 2/5] qemu: Move parts of NIC_RX_FILTER_CHANGED event handling into a function
Parts of the code that responds to the NIC_RX_FILTER_CHANGED event are going to be re-used. Separate them into a function (qemuDomainSyncRxFilter()) and move the code into qemu_domain.c so that it can be re-used from other places of the driver. There's one slight change though: instead of passing device alias from the just received event to qemuMonitorQueryRxFilter(), I've switched to using the alias stored in our domain definition. But these two are guaranteed to be equal. virDomainDefFindDevice() made sure about that, if nothing else. Signed-off-by: Michal Privoznik --- src/qemu/qemu_domain.c | 251 + src/qemu/qemu_domain.h | 4 + src/qemu/qemu_driver.c | 242 +-- 3 files changed, 256 insertions(+), 241 deletions(-) diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 4c14fc2aef..1b93ebe579 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -11774,3 +11774,254 @@ qemuDomainObjWait(virDomainObj *vm) return 0; } + + +static void +syncNicRxFilterMacAddr(char *ifname, virNetDevRxFilter *guestFilter, + virNetDevRxFilter *hostFilter) +{ +char newMacStr[VIR_MAC_STRING_BUFLEN]; + +if (virMacAddrCmp(>mac, >mac)) { +virMacAddrFormat(>mac, newMacStr); + +/* set new MAC address from guest to associated macvtap device */ +if (virNetDevSetMAC(ifname, >mac) < 0) { +VIR_WARN("Couldn't set new MAC address %s to device %s " + "while responding to NIC_RX_FILTER_CHANGED", + newMacStr, ifname); +} else { +VIR_DEBUG("device %s MAC address set to %s", ifname, newMacStr); +} +} +} + + +static void +syncNicRxFilterGuestMulticast(char *ifname, virNetDevRxFilter *guestFilter, + virNetDevRxFilter *hostFilter) +{ +size_t i, j; +bool found; +char macstr[VIR_MAC_STRING_BUFLEN]; + +for (i = 0; i < guestFilter->multicast.nTable; i++) { +found = false; + +for (j = 0; j < hostFilter->multicast.nTable; j++) { +if (virMacAddrCmp(>multicast.table[i], + >multicast.table[j]) == 0) { +found = true; +break; +} +} + +if (!found) { +virMacAddrFormat(>multicast.table[i], macstr); 
+ +if (virNetDevAddMulti(ifname, >multicast.table[i]) < 0) { +VIR_WARN("Couldn't add new multicast MAC address %s to " + "device %s while responding to NIC_RX_FILTER_CHANGED", + macstr, ifname); +} else { +VIR_DEBUG("Added multicast MAC %s to %s interface", + macstr, ifname); +} +} +} +} + + +static void +syncNicRxFilterHostMulticast(char *ifname, virNetDevRxFilter *guestFilter, + virNetDevRxFilter *hostFilter) +{ +size_t i, j; +bool found; +char macstr[VIR_MAC_STRING_BUFLEN]; + +for (i = 0; i < hostFilter->multicast.nTable; i++) { +found = false; + +for (j = 0; j < guestFilter->multicast.nTable; j++) { +if (virMacAddrCmp(>multicast.table[i], + >multicast.table[j]) == 0) { +found = true; +break; +} +} + +if (!found) { +virMacAddrFormat(>multicast.table[i], macstr); + +if (virNetDevDelMulti(ifname, >multicast.table[i]) < 0) { +VIR_WARN("Couldn't delete multicast MAC address %s from " + "device %s while responding to NIC_RX_FILTER_CHANGED", + macstr, ifname); +} else { +VIR_DEBUG("Deleted multicast MAC %s from %s interface", + macstr, ifname); +} +} +} +} + + +static void +syncNicRxFilterPromiscMode(char *ifname, + virNetDevRxFilter *guestFilter, + virNetDevRxFilter *hostFilter) +{ +bool promisc; +bool setpromisc = false; + +/* Set macvtap promisc mode to true if the guest has vlans defined */ +/* or synchronize the macvtap promisc mode if different from guest */ +if (guestFilter->vlan.nTable > 0) { +if (!hostFilter->promiscuous) { +setpromisc = true; +promisc = true; +} +} else if (hostFilter->promiscuous != guestFilter->promiscuous) { +setpromisc = true; +promisc = guestFilter->promiscuous; +} + +if (setpromisc) { +if (virNetDevSetPromiscuous(ifname, promisc) < 0) { +VIR_WARN("Couldn't set PROMISC flag to %s for device %s " + "while responding to NIC_RX_FILTER_CHANGED", + promisc ? "true" : "false", ifname); +} +} +} + + +static void +syncNicRxFilterMultiMode(char
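The syncNicRxFilterGuestMulticast()/syncNicRxFilterHostMulticast() pair above walks both multicast tables with nested loops, but the net effect is a symmetric set difference: program into the macvtap whatever MACs the guest enabled that the host lacks, and drop whatever the guest removed that the host still has. A compact Python sketch of that logic (illustrative only; the real code is C and applies each change via virNetDevAddMulti()/virNetDevDelMulti()):

```python
# The two sync helpers above amount to a set difference over multicast
# MAC tables: add what the guest enabled, delete what the guest dropped.
def multicast_sync_plan(guest_table, host_table):
    guest, host = set(guest_table), set(host_table)
    to_add = sorted(guest - host)   # guest enabled these, host lacks them
    to_del = sorted(host - guest)   # guest dropped these, host still has them
    return to_add, to_del

guest = ["01:00:5e:00:00:01", "01:00:5e:00:00:fb"]
host = ["01:00:5e:00:00:01", "33:33:00:00:00:01"]
print(multicast_sync_plan(guest, host))
# (['01:00:5e:00:00:fb'], ['33:33:00:00:00:01'])
```

The C version keeps the O(n*m) nested loops since the tables are tiny; the set formulation just makes the intent easier to see.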
[PATCH 0/5] qemu: Refresh rx-filters more often
See the last patch for explanation. I haven't found a GitLab issue for
this, nor an open bug. But I remember somebody complaining about
problems during restore from a save file, on IRC perhaps?

Michal Prívozník (5):
  processNicRxFilterChangedEvent: Free @guestFilter and @hostFilter
    automatically
  qemu: Move parts of NIC_RX_FILTER_CHANGED event handling into a
    function
  qemu: Acquire QUERY job instead of MODIFY when handling
    NIC_RX_FILTER_CHANGED event
  qemu: Refresh state after restore from a save image
  qemu: Refresh rx-filters more often

 src/qemu/qemu_domain.c    | 254 ++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_domain.h    |   5 +
 src/qemu/qemu_driver.c    | 250 +-------------------------------------
 src/qemu/qemu_process.c   |  27 +++++
 src/qemu/qemu_saveimage.c |   2 +
 5 files changed, 291 insertions(+), 247 deletions(-)

-- 
2.37.3
[PATCH 4/5] qemu: Refresh state after restore from a save image
When restoring a domain from a save image, we need to query QEMU for
some runtime information that is not stored in the status XML, or even
if it is, it's not parsed (e.g. virtio-mem actual size, or soon
rx-filters for macvtaps).

During migration, this is done in qemuMigrationDstFinishFresh(), or in
the case of a newly started domain in qemuProcessStart(). Except, the
way the code is written, when restoring from a save image (which is
effectively a migration), the state is never refreshed:
qemuProcessStart() sees an incoming migration and does not refresh the
state, thinking it'll be done in the finish phase. But restoring from a
save image has no finish phase.

Therefore, refresh the state explicitly after the domain was restored
but before vCPUs are resumed.

Signed-off-by: Michal Privoznik
---
 src/qemu/qemu_saveimage.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
index 79567bf17d..ef62303728 100644
--- a/src/qemu/qemu_saveimage.c
+++ b/src/qemu/qemu_saveimage.c
@@ -672,6 +672,8 @@ qemuSaveImageStartVM(virConnectPtr conn,
                                      VIR_DOMAIN_EVENT_STARTED_RESTORED);
     virObjectEventStateQueue(driver->domainEventState, event);
 
+    if (qemuProcessRefreshState(driver, vm, asyncJob) < 0)
+        goto cleanup;
 
     /* If it was running before, resume it now unless caller requested pause. */
     if (header->was_running && !start_paused) {
-- 
2.37.3
[PATCH 1/5] processNicRxFilterChangedEvent: Free @guestFilter and @hostFilter automatically
There's no need to call virNetDevRxFilterFree() explicitly when the
corresponding variables can be declared as
g_autoptr(virNetDevRxFilter).

Signed-off-by: Michal Privoznik
---
 src/qemu/qemu_driver.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5c75000742..afebae3b93 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3817,8 +3817,8 @@ processNicRxFilterChangedEvent(virDomainObj *vm,
     qemuDomainObjPrivate *priv = vm->privateData;
     virDomainDeviceDef dev;
     virDomainNetDef *def;
-    virNetDevRxFilter *guestFilter = NULL;
-    virNetDevRxFilter *hostFilter = NULL;
+    g_autoptr(virNetDevRxFilter) guestFilter = NULL;
+    g_autoptr(virNetDevRxFilter) hostFilter = NULL;
     int ret;
 
     VIR_DEBUG("Received NIC_RX_FILTER_CHANGED event for device %s "
@@ -3826,7 +3826,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm,
               devAlias, vm, vm->def->name);
 
     if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
-        goto cleanup;
+        return;
 
     if (!virDomainObjIsActive(vm)) {
         VIR_DEBUG("Domain is not running");
@@ -3907,10 +3907,6 @@ processNicRxFilterChangedEvent(virDomainObj *vm,
  endjob:
     virDomainObjEndJob(vm);
-
- cleanup:
-    virNetDevRxFilterFree(hostFilter);
-    virNetDevRxFilterFree(guestFilter);
 }
-- 
2.37.3
[PATCH 3/5] qemu: Acquire QUERY job instead of MODIFY when handling NIC_RX_FILTER_CHANGED event
We are not updating the domain XML to the new MAC address, merely
setting the host side of the macvtap. We don't need a MODIFY job for
that; QUERY is just fine. This allows us to process the event should it
occur during migration.

Signed-off-by: Michal Privoznik
---
 src/qemu/qemu_driver.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 86bc35ca92..d5fb2913be 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3625,7 +3625,7 @@ processNicRxFilterChangedEvent(virDomainObj *vm,
               "from domain %p %s",
               devAlias, vm, vm->def->name);
 
-    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
+    if (virDomainObjBeginJob(vm, VIR_JOB_QUERY) < 0)
         return;
 
     if (!virDomainObjIsActive(vm)) {
-- 
2.37.3
Re: [PATCH v2 2/9] qemu: Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for TPM migration
On Mon, Oct 17, 2022 at 11:17:56AM -0400, Stefan Berger wrote:
> 
> On 10/17/22 09:48, Daniel P. Berrangé wrote:
> > On Mon, Oct 17, 2022 at 09:39:52AM -0400, Stefan Berger wrote:
> > 
> > The key is in qemuMigrationSrcIsSafe(), and how it determines if a
> > migration is safe.
> > 
> >   * Disk on local storage, no flags => unsafe, migration error
> >   * Disk on local storage, VIR_MIGRATE_NON_SHARED_DISK => ok,
> >     copies disk storage
> >   * Disk on shared storage, no flags => safe
> >   * Disk on shared storage, VIR_MIGRATE_NON_SHARED_DISK => ok, but
> >     needlessly copies disk storage
> > 
> > The key helper methods are virFileIsSharedFS and virFileIsClusterFS
> > which check the filesystem type for the path against a known list
> > of shared/cluster FS.
> > 
> > > So we won't do it this way for TPM state migration. Instead we can
> > > try to write on the source migration side
> > > 
> > > a) a simple file and detect whether the file is at the destination
> > > b) a file with either a name or content that only the source and
> > >    destination libvirts would know at this point
> > > 
> > > b) is to prevent stale files from becoming indicators for shared
> > >    storage.
> > 
> > No, please use the virFileIsSharedFS/ClusterFS helpers.
> 
> I tried to use virFileIsSharedFS on the source and destination side of
> my NFS setup to see how they work. The issue here is that the NFS
> server side, that exports /var/lib/libvirt/swtpm and is also the
> migration source at some point, says this:

We expect both sides to have the same storage configuration, i.e. both
must be NFS. I'm pretty sure our code for handling disks does not work
when you have a local FS on one side, which is exported to the other
side as NFS.

Conceptually that's not something someone would do in production, since
then you make the target host dependent on the src compute host, which
is poor for resilience.
With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH] NEWS: Mention new channel and redirdev devices in domcaps
On 10/17/22 19:53, Jim Fehlig wrote:
> Signed-off-by: Jim Fehlig
> ---
>  NEWS.rst | 5 +
>  1 file changed, 5 insertions(+)
> 

Reviewed-by: Michal Privoznik

And you get bonus points for remembering to write a NEWS item :-)

Michal