Re: [libvirt] [PATCH v5 4/5] storage: Support chap authentication for iscsi pool
On 07/23/2013 04:56 PM, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 10:47:46AM -0400, John Ferlan wrote: On 07/23/2013 10:18 AM, Ján Tomko wrote: On 07/22/2013 10:31 PM, John Ferlan wrote: --- src/storage/storage_backend_iscsi.c | 111 +++- 1 file changed, 110 insertions(+), 1 deletion(-) I can confirm this works, but it's a shame it doesn't work on autostart. ACK if you clarify the error. Jan The autostart changes require getting a connection to the secret driver which I felt may take more time than I had to figure out how to get to work properly... In any case, I adjusted the message as follows (same in 5/5): diff --git a/src/storage/storage_backend_iscsi.c b/src/storage/storage_backend_i index 388d6ed..ee8dd2e 100644 --- a/src/storage/storage_backend_iscsi.c +++ b/src/storage/storage_backend_iscsi.c @@ -714,7 +714,8 @@ virStorageBackendISCSISetAuth(const char *portal, if (!conn) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("iscsi 'chap' authentication requires connection")); + _("iscsi 'chap' authentication not supported for autostarted pools")); return -1; } I noticed that the nwfilter already unconditionally calls virConnectOpen("qemu:///system"); so we're already in fact suffering from the problem with autostart having a qemu dependency. Given this, I'd support a patch which simply did conn = virConnectOpen(privileged ? "qemu:///system" : "qemu:///session"); in storageDriverAutostart, provided that we ignore any errors from virConnectOpen, and fall back to using NULL for the connection in that case. Obviously this is something we'll still need to fix properly in a future release, but at least it'll make autostart of storage pools with auth work in the common case in the short term for this release. Both secret and qemu drivers are registered after the storage driver on libvirtd startup, so autostarting these pools will only work on storage driver reload.
On libvirtd startup it fails with: qemuConnectOpen:1033 : internal error qemu state driver is not active (And it seems nwfilter only opens the qemu:// connection on reload) Jan -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH] Don't overwrite errors in qemuTranslateDiskSourcePool
Both virStoragePoolFree and virStorageVolFree reset the last error, which might lead to the cryptic message: An error occurred, but the cause is unknown When the volume wasn't found, virStorageVolFree was called with NULL, leading to an error: invalid storage volume pointer in virStorageVolFree This patch changes it to: Storage volume not found: no storage vol with matching name 'tomato' --- src/qemu/qemu_conf.c | 11 +-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 3e7b78a..80b8156 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1248,6 +1248,7 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, char *poolxml = NULL; virStorageVolInfo info; int ret = -1; +virErrorPtr savedError; if (def->type != VIR_DOMAIN_DISK_TYPE_VOLUME) return 0; @@ -1324,8 +1325,14 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, def->srcpool->voltype = info.type; ret = 0; cleanup: -virStoragePoolFree(pool); -virStorageVolFree(vol); +savedError = virSaveLastError(); +if (pool) +virStoragePoolFree(pool); +if (vol) +virStorageVolFree(vol); +virSetError(savedError); +virFreeError(savedError); + VIR_FREE(poolxml); virStoragePoolDefFree(pooldef); return ret; -- 1.8.1.5
[libvirt] [PATCH] Add a colon after 'internal error'
As we do for other errors with an extra string. --- src/util/virerror.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/util/virerror.c b/src/util/virerror.c index b92c6d1..ca25678 100644 --- a/src/util/virerror.c +++ b/src/util/virerror.c @@ -730,7 +730,7 @@ virErrorMsg(virErrorNumber error, const char *info) return NULL; case VIR_ERR_INTERNAL_ERROR: if (info != NULL) - errmsg = _("internal error %s"); + errmsg = _("internal error: %s"); else errmsg = _("internal error"); break; -- 1.8.1.5
Re: [libvirt] [PATCH 1/3] Expose ownership ID parsing
On 24.05.2013 22:25, Martin Kletzander wrote: Parsing 'user:group' is useful even outside the DAC security driver, so expose the most abstract function which has no DAC security driver bits in itself. Signed-off-by: Martin Kletzander mklet...@redhat.com --- src/libvirt_private.syms| 1 + src/security/security_dac.c | 51 +++-- src/util/virutil.c | 56 + src/util/virutil.h | 2 ++ 4 files changed, 62 insertions(+), 48 deletions(-) diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 9d5f74b..1927451 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -1951,6 +1951,7 @@ virIsCapableVport; virIsDevMapperDevice; virManageVport; virParseNumber; +virParseOwnershipIds; virParseVersionString; virPipeReadUntilEOF; virReadFCHost; diff --git a/src/security/security_dac.c b/src/security/security_dac.c index b8d1a92..0264c28 100644 --- a/src/security/security_dac.c +++ b/src/security/security_dac.c @@ -33,6 +33,7 @@ #include "virscsi.h" #include "virstoragefile.h" #include "virstring.h" +#include "virutil.h" #define VIR_FROM_THIS VIR_FROM_SECURITY #define SECURITY_DAC_NAME "dac" @@ -70,52 +71,6 @@ virSecurityDACSetDynamicOwnership(virSecurityManagerPtr mgr, priv->dynamicOwnership = dynamicOwnership; } -static int -parseIds(const char *label, uid_t *uidPtr, gid_t *gidPtr) -{ -int rc = -1; -uid_t theuid; -gid_t thegid; -char *tmp_label = NULL; -char *sep = NULL; -char *owner = NULL; -char *group = NULL; - -if (VIR_STRDUP(tmp_label, label) < 0) -goto cleanup; - -/* Split label */ -sep = strchr(tmp_label, ':'); -if (sep == NULL) { -virReportError(VIR_ERR_INVALID_ARG, - _("Missing separator ':' in DAC label \"%s\""), - label); -goto cleanup; -} -*sep = '\0'; -owner = tmp_label; -group = sep + 1; - -/* Parse owner and group, error message is defined by - * virGetUserID or virGetGroupID. 
- */ -if (virGetUserID(owner, &theuid) < 0 || -virGetGroupID(group, &thegid) < 0) -goto cleanup; - -if (uidPtr) -*uidPtr = theuid; -if (gidPtr) -*gidPtr = thegid; - -rc = 0; - -cleanup: -VIR_FREE(tmp_label); - -return rc; -} - /* returns 1 if label isn't found, 0 on success, -1 on error */ static int virSecurityDACParseIds(virDomainDefPtr def, uid_t *uidPtr, gid_t *gidPtr) @@ -133,7 +88,7 @@ virSecurityDACParseIds(virDomainDefPtr def, uid_t *uidPtr, gid_t *gidPtr) return 1; } -if (parseIds(seclabel->label, &uid, &gid) < 0) +if (virParseOwnershipIds(seclabel->label, &uid, &gid) < 0) return -1; if (uidPtr) @@ -194,7 +149,7 @@ virSecurityDACParseImageIds(virDomainDefPtr def, return 1; } -if (parseIds(seclabel->imagelabel, &uid, &gid) < 0) +if (virParseOwnershipIds(seclabel->imagelabel, &uid, &gid) < 0) return -1; if (uidPtr) diff --git a/src/util/virutil.c b/src/util/virutil.c index 028f1d1..450e5e3 100644 --- a/src/util/virutil.c +++ b/src/util/virutil.c @@ -2071,3 +2071,59 @@ virCompareLimitUlong(unsigned long long a, unsigned long b) return -1; } + +/** + * virParseOwnershipIds: + * + * Parse the usual uid:gid ownership specification into uid_t and + * gid_t passed as parameters. NULL value for those parameters mean + * the information is not needed. Also, none of those values are + * changed in case of any error. + * + * Returns -1 on error, 0 otherwise. + */ +int +virParseOwnershipIds(const char *label, uid_t *uidPtr, gid_t *gidPtr) +{ +int rc = -1; +uid_t theuid; +gid_t thegid; +char *tmp_label = NULL; +char *sep = NULL; +char *owner = NULL; +char *group = NULL; + +if (VIR_STRDUP(tmp_label, label) < 0) +goto cleanup; + +/* Split label */ +sep = strchr(tmp_label, ':'); +if (sep == NULL) { +virReportError(VIR_ERR_INVALID_ARG, + _("Failed to parse uid and gid from '%s'"), This is the only change to the original impl. + label); +goto cleanup; +} +*sep = '\0'; +owner = tmp_label; +group = sep + 1; + +/* Parse owner and group, error message is defined by + * virGetUserID or virGetGroupID. 
+ */ +if (virGetUserID(owner, &theuid) < 0 || +virGetGroupID(group, &thegid) < 0) +goto cleanup; + +if (uidPtr) +*uidPtr = theuid; +if (gidPtr) +*gidPtr = thegid; + +rc = 0; + +cleanup: +VIR_FREE(tmp_label); + +return rc; +} diff --git a/src/util/virutil.h b/src/util/virutil.h index 280a18d..0f6bcc1 100644 --- a/src/util/virutil.h +++ b/src/util/virutil.h @@ -166,4 +166,6 @@ char
Re: [libvirt] [PATCH 0/3] qemu: Fix how files are being opened
On 24.05.2013 22:25, Martin Kletzander wrote: There were some places in the code, where files were being opened with uid:gid of the daemon instead of the qemu process related to the file. First patch exposes the parseIds() function in order for it to be used somewhere else in the code than in the DAC security driver. The next patch fixes how the files are opened and the last one fixes occurrences of open() that should use different uid:gid for opening files. There should maybe be a check for whether the file being opened is an image and whether the label used to open the file should be imagelabel or not. But, the QEMU process opening the file is running as the label (not imagelabel) and accessing the files as such. Martin Kletzander (3): Expose ownership ID parsing Make qemuOpenFile aware of per-VM DAC seclabel. Use qemuOpenFile in qemu_driver.c src/libvirt_private.syms| 1 + src/qemu/qemu_driver.c | 87 +++-- src/security/security_dac.c | 51 ++ src/util/virutil.c | 56 + src/util/virutil.h | 2 ++ 5 files changed, 122 insertions(+), 75 deletions(-) -- 1.8.2.1 ACK series,
Re: [libvirt] [PATCH] Don't overwrite errors in qemuTranslateDiskSourcePool
On 07/24/2013 04:43 PM, Ján Tomko wrote: Both virStoragePoolFree and virStorageVolFree reset the last error, which might lead to the cryptic message: An error occurred, but the cause is unknown When the volume wasn't found, virStorageVolFree was called with NULL, leading to an error: invalid storage volume pointer in virStorageVolFree This patch changes it to: Storage volume not found: no storage vol with matching name 'tomato' --- src/qemu/qemu_conf.c | 11 +-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 3e7b78a..80b8156 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1248,6 +1248,7 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, char *poolxml = NULL; virStorageVolInfo info; int ret = -1; +virErrorPtr savedError; if (def->type != VIR_DOMAIN_DISK_TYPE_VOLUME) return 0; @@ -1324,8 +1325,14 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, def->srcpool->voltype = info.type; ret = 0; cleanup: -virStoragePoolFree(pool); -virStorageVolFree(vol); +savedError = virSaveLastError(); It would be better to save the error only when ret < 0. In other places, we use a format like this: if (ret < 0 && !savedError) savedError = virSaveLastError(); virStoragePoolFree(pool); virStorageVolFree(vol); if (savedError) { virSetError(savedError); virFreeError(savedError); } Guannan
Re: [libvirt] [PATCH] Add a colon after 'internal error'
On 24.07.2013 10:47, Ján Tomko wrote: As we do for other errors with an extra string. --- src/util/virerror.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/util/virerror.c b/src/util/virerror.c index b92c6d1..ca25678 100644 --- a/src/util/virerror.c +++ b/src/util/virerror.c @@ -730,7 +730,7 @@ virErrorMsg(virErrorNumber error, const char *info) return NULL; case VIR_ERR_INTERNAL_ERROR: if (info != NULL) - errmsg = _("internal error %s"); + errmsg = _("internal error: %s"); else errmsg = _("internal error"); break; ACK Michal
Re: [libvirt] [PATCH v5 4/5] storage: Support chap authentication for iscsi pool
On 24/07/13 16:25, Ján Tomko wrote: On 07/23/2013 04:56 PM, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 10:47:46AM -0400, John Ferlan wrote: On 07/23/2013 10:18 AM, Ján Tomko wrote: On 07/22/2013 10:31 PM, John Ferlan wrote: --- src/storage/storage_backend_iscsi.c | 111 +++- 1 file changed, 110 insertions(+), 1 deletion(-) I can confirm this works, but it's a shame it doesn't work on autostart. ACK if you clarify the error. Jan The autostart changes require getting a connection to the secret driver which I felt may take more time than I had to figure out how to get to work properly... In any case, I adjusted the message as follows (same in 5/5): diff --git a/src/storage/storage_backend_iscsi.c b/src/storage/storage_backend_i index 388d6ed..ee8dd2e 100644 --- a/src/storage/storage_backend_iscsi.c +++ b/src/storage/storage_backend_iscsi.c @@ -714,7 +714,8 @@ virStorageBackendISCSISetAuth(const char *portal, if (!conn) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("iscsi 'chap' authentication requires connection")); + _("iscsi 'chap' authentication not supported for autostarted pools")); return -1; } I noticed that the nwfilter already unconditionally calls virConnectOpen("qemu:///system"); so we're already in fact suffering from the problem with autostart having a qemu dependency. Given this, I'd support a patch which simply did conn = virConnectOpen(privileged ? "qemu:///system" : "qemu:///session"); in storageDriverAutostart, provided that we ignore any errors from virConnectOpen, and fall back to using NULL for the connection in that case. Obviously this is something we'll still need to fix properly in a future release, but at least it'll make autostart of storage pools with auth work in the common case in the short term for this release. Both secret and qemu drivers are registered after the storage driver on libvirtd startup, so autostarting these pools will only work on storage driver reload.
On libvirtd startup it fails with: qemuConnectOpen:1033 : internal error qemu state driver is not active oh, that's bad, fortunately we are just entering the freeze. nwfilter is also loaded before the qemu driver too, but it only creates the connection when reloading. changing to load the storage and secret modules after the qemu module should be okay, unless there are other dependencies between those modules in the middle. (And it seems nwfilter only opens the qemu:// connection on reload) conn = virConnectOpen("qemu:///system"); Jan
[libvirt] [PATCH] caps: use -device for primary video when qemu >= 1.6
https://bugzilla.redhat.com/show_bug.cgi?id=981094 Commit 0ad9025ef introduced the qemu flag QEMU_CAPS_DEVICE_VIDEO_PRIMARY for using -device VGA, -device cirrus-vga, -device vmware-svga and -device qxl-vga. In use, for -device qxl-vga, the mouse doesn't display in the guest window, per the description in the above bug. This patch uses -device for the primary video only when qemu >= 1.6, which is safe. --- src/qemu/qemu_capabilities.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 5dc3c9e..08406b8 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -1189,8 +1189,6 @@ virQEMUCapsComputeCmdFlags(const char *help, virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_SHARE_POLICY); } -if (version >= 1002000) -virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY); return 0; } @@ -2424,7 +2422,6 @@ virQEMUCapsInitQMPBasic(virQEMUCapsPtr qemuCaps) virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_OPT); virQEMUCapsSet(qemuCaps, QEMU_CAPS_DUMP_GUEST_CORE); virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_SHARE_POLICY); -virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY); } /* Capabilities that are architecture depending @@ -2597,6 +2594,9 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps, if (qemuCaps->version >= 1003001) virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_WEBSOCKET); +if (qemuCaps->version >= 1006000) +virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY); + if (virQEMUCapsProbeQMPCommands(qemuCaps, mon) < 0) goto cleanup; if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0) -- 1.8.3.1
Re: [libvirt] [PATCH 1/2] Introduce virDBusCallMethod & virDBusMessageRead methods
On Tue, Jul 23, 2013 at 05:02:54PM -0600, Eric Blake wrote: On 07/18/2013 07:27 AM, Daniel P. Berrange wrote: From: Daniel P. Berrange berra...@redhat.com Doing DBus method calls using libdbus.so is tedious in the extreme. systemd developers came up with a nice high level API for DBus method calls (sd_bus_call_method). While systemd doesn't use libdbus.so, their API design can easily be ported to libdbus.so. This patch thus introduces methods virDBusCallMethod virDBusMessageRead, which are based on the code used for sd_bus_call_method and sd_bus_message_read. This code in systemd is under the LGPLv2+, so we're license compatible. This code is probably pretty unintelligible unless you are familiar with the DBus type system. So I added some API docs trying to explain how to use them, as well as test cases to validate that I didn't screw up the adaptation from the original systemd code. Signed-off-by: Daniel P. Berrange berra...@redhat.com --- +static const char virDBusBasicTypes[] = { +DBUS_TYPE_BYTE, +DBUS_TYPE_BOOLEAN, +DBUS_TYPE_INT16, +DBUS_TYPE_UINT16, +DBUS_TYPE_INT32, +DBUS_TYPE_UINT32, +DBUS_TYPE_INT64, +DBUS_TYPE_UINT64, +DBUS_TYPE_DOUBLE, +DBUS_TYPE_STRING, +DBUS_TYPE_OBJECT_PATH, +DBUS_TYPE_SIGNATURE, +DBUS_TYPE_UNIX_FD +}; This fails to build on RHEL 6.4: CC libvirt_util_la-virdbus.lo util/virdbus.c:242: error: 'DBUS_TYPE_UNIX_FD' undeclared here (not in a function) cc1: warnings being treated as errors DBUS_TYPE_UNIX_FD was added sometime after dbus-devel-1.2.24 and before 1.6.12; we'll have to make the code conditional and error out if a client tries to pass an fd when using an older version of dbus. 
Ok, I'll add an #ifdef. Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On Tue, Jul 23, 2013 at 10:22:23PM -0600, Eric Blake wrote: On 07/15/2013 03:04 PM, Richard W.M. Jones wrote: On Mon, Jul 15, 2013 at 05:57:12PM +0800, Fam Zheng wrote: Hi all, QEMU-KVM BZ 955734, and libvirt BZ 905125 are about feature Read-only point-in-time throwaway snapshot. The development is ongoing on upstream, which implements the core functionality by QMP command drive-backup. I want to demonstrate the HMP/QMP commands here for image fleecing tasks (again) and make sure this interface looks ready and satisfying from Libvirt point of view. I'm wondering if we can still get something committed in time for the freeze for 1.1.1. At this point, we're close enough to the freeze, and with no patches submitted in libvirt and the qemu design still under discussion, that I'm worried about whether we are rushing things too much to take a new interface this late in a libvirt release cycle, or whether we should wait until after 1.1.1 before attempting to add things. On the other hand, if we can agree on a sane design now (or at least before rc2, if we miss rc1), then we can commit to that design for this libvirt release, and downstream distros can use libvirt 1.1.1 as a starting point for rebases without worrying about so-name compatibility, by signing up to the efforts of backporting the actual implementation from future upstream qemu and libvirt releases. IMHO we've missed the boat for adding new APIs in this release, particularly given that there isn't even a finalized design implementation ready to review. Daniel
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On 24/07/2013 07:38, Wenchao Xia wrote: Because libvirt would use a local (Unix) socket to communicate with QEMU and pass the file descriptor, there is no need to authenticate the NBD connection. There is no need for ticketing, though if necessary we can make QEMU only accept connections from libvirtd's pid. libvirt and VDSM already do authentication and/or encryption. How do I get the info about IP/port needed to access that snapshot? call virSnapshotGetInfo (or similar API) later? See above. You don't, you use the libvirt channel. Paolo
[libvirt] [PATCH v2] Don't overwrite errors in qemuTranslateDiskSourcePool
Both virStoragePoolFree and virStorageVolFree reset the last error, which might lead to the cryptic message: An error occurred, but the cause is unknown When the volume wasn't found, virStorageVolFree was called with NULL, leading to an error: invalid storage volume pointer in virStorageVolFree This patch changes it to: Storage volume not found: no storage vol with matching name 'tomato' --- v1: https://www.redhat.com/archives/libvir-list/2013-July/msg01522.html v2: only save/restore the error when ret < 0 src/qemu/qemu_conf.c | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c index 3e7b78a..18e926c 100644 --- a/src/qemu/qemu_conf.c +++ b/src/qemu/qemu_conf.c @@ -1248,6 +1248,7 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, char *poolxml = NULL; virStorageVolInfo info; int ret = -1; +virErrorPtr savedError = NULL; if (def->type != VIR_DOMAIN_DISK_TYPE_VOLUME) return 0; @@ -1324,8 +1325,17 @@ qemuTranslateDiskSourcePool(virConnectPtr conn, def->srcpool->voltype = info.type; ret = 0; cleanup: -virStoragePoolFree(pool); -virStorageVolFree(vol); +if (ret < 0) +savedError = virSaveLastError(); +if (pool) +virStoragePoolFree(pool); +if (vol) +virStorageVolFree(vol); +if (savedError) { +virSetError(savedError); +virFreeError(savedError); +} + VIR_FREE(poolxml); virStoragePoolDefFree(pooldef); return ret; -- 1.8.1.5
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On Tue, Jul 23, 2013 at 09:40:56PM -0600, Eric Blake wrote: [replying with useful information from another off-list email] On 07/15/2013 03:04 PM, Richard W.M. Jones wrote: On Mon, Jul 15, 2013 at 05:57:12PM +0800, Fam Zheng wrote: Hi all, QEMU-KVM BZ 955734, and libvirt BZ 905125 are about feature Read-only point-in-time throwaway snapshot. The development is ongoing on upstream, which implements the core functionality by QMP command drive-backup. I want to demonstrate the HMP/QMP commands here for image fleecing tasks (again) and make sure this interface looks ready and satisfying from Libvirt point of view. On 07/15/2013 06:24 AM, Paolo Bonzini wrote: On 15/07/2013 11:57, Fam Zheng wrote: Hi all, QEMU-KVM BZ 955734, and libvirt BZ 905125 are about feature Read-only point-in-time throwaway snapshot. The development is ongoing on upstream, which implements the core functionality by QMP command drive-backup. I want to demonstrate the HMP/QMP commands here for image fleecing tasks (again) and make sure this interface looks ready and satisfying from Libvirt point of view. And since we are at it, here is a possible libvirt API to expose this functionality (cut-and-paste from an old email). If needed, VDSM can provide a similar API and proxy the libvirt API. Would something like this work? int virDomainBlockPeekStart(virDomainPtr dom, const char **disks, unsigned int flags); Make it possible to use virDomainBlockPeek on the given disks with the new VIR_DOMAIN_BLOCK_PEEK_IMAGE flag. It is okay to create multiple snapshot groups, i.e. to invoke the function multiple times with VIR_DOMAIN_BLOCK_PEEK_SNAPSHOT. It is however not okay to specify the same disk multiple times unless all of them are _without_ VIR_DOMAIN_BLOCK_PEEK_SNAPSHOT.
flags: VIR_DOMAIN_BLOCK_PEEK_SNAPSHOT Make an atomic point-in-time snapshot of all the disks included in the list of strings disks, and expose the snapshot via virDomainBlockPeek Note: if the virtual machine is running, this will use nbd-server-start/add/end. If the virtual machine is paused, this will use qemu-nbd. Libvirt should be able to switch transparently from one method to the other. int virDomainBlockPeekStop(virDomainPtr dom); Stop communication with qemu-nbd or the hypervisor. VIR_DOMAIN_BLOCK_PEEK_IMAGE A new flag for virDomainBlockPeek. If specified, virDomainBlockPeek will access the disk image, not the raw file (i.e. it will read data as seen by the guest). This is only valid if virDomainBlockPeekStart has been called before for this disk. I don't much like this retro-fitting of start/stop actions into the virDomainBlockPeek API as a design, particularly the binding of the PEEK_IMAGE flag to the start/stop actions. Conceptually it would be perfectly possible for a hypervisor to implement support for PEEK_IMAGE without these start/stop actions, which are somewhat specific to the need for QEMU to start an NBD driver. The virDomainBlockPeek API is also not particularly efficient as an API, because each read incurs a round-trip over libvirt's RPC service already. We'd then be adding a round-trip over NBD too. I'm wondering if we could instead try to utilize the virStreamPtr APIs for this task. From a libvirt RPC POV this is much more efficient because once you open the region with a stream API, you don't have any round trips at all - the data is pushed out to/from the client async. Now those APIs are currently designed for sequential streaming of entire data regions only, but I wonder if we could extend them somehow to enable seek'ing within the stream. Alternatively perhaps we could just say if you want to read from disjoint regions, that you can just re-open a stream for each region to be processed.
Regards, Daniel
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On Wed, Jul 24, 2013 at 11:12:18AM +0800, Osier Yang wrote: On 24/07/13 01:11, Paolo Bonzini wrote: On 23/07/2013 18:47, John Ferlan wrote: On 07/23/2013 12:18 PM, Paolo Bonzini wrote: On 23/07/2013 18:01, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 05:35:57PM +0200, Paolo Bonzini wrote: On 23/07/2013 16:14, Daniel P. Berrange wrote: Perhaps the default could be specified in a configuration file (and the default should be the safe one). No, that is even worse because now the default is not predictable.. We simply default to host mode and if applications want to use the other mode they can configure the XML as desired. Can we just forbid mode='default' for iSCSI and force the user to specify host vs. direct? That would mean that apps cannot simply configure a guest volume without first checking to find out what type of pool it is, and then specifying this extra arg for iSCSI. IMHO the value of the volume XML is that you don't have to know anything about the pool to be able to configure it - we're completely decoupled. Thinking more about it, it would only be needed for <disk type='volume' device='lun'>. And for that case, some knowledge of the pool is necessary anyway (for one thing, it won't work with filesystem or LVM pools). So if we could forbid mode='default' for that case only, it would be enough as far as I'm concerned. Using default in the mode field would result in the following XML error message (I just quickly changed a test to prove the point): 121) QEMU XML-2-XML disk-source-pool-mode ... libvirt: Domain Config error : XML error: unknown source mode 'default' for volume type disk FAILED Sorry, by mode='default' I really meant no mode at all (I was under the false impression that you could also specify a mode='default' with the same effect as no mode at all). The XML parsing code only looks for mode='direct' or mode='host'. If mode isn't present in the XML, that's when that default comes into play.
Since 'mode' is new there could be configurations where it's not in an XML file, thus a 0 (zero e.g. default) value is provided for the field. Once the XML is parsed we still needed a default when it's going to be added, thus the magic to set the default to HOST is in qemu_conf.c in qemuTranslateDiskSourcePool(): if (pooldef->type == VIR_STORAGE_POOL_ISCSI) { /* Default to use the LUN's path on host */ if (!def->srcpool->mode) def->srcpool->mode = VIR_DOMAIN_DISK_SOURCE_POOL_MODE_HOST; I think this answers your primary concern - correct? No, my concern is that mode='host' is a bad default for the device='lun' case. after going through the thread.. i think the point is the relationship between the host and direct mode. if users used the host mode in the past, but most of them suffered from the persistent reservations, and direct mode is the answer for them, then i intend to agree with paolo that defaulting to direct is better. if they used the host mode broadly, but persistent reservation is just a requirement of a small part of the users. then i don't see any problem to default to host mode. i.e. the point is if direct mode is the answer for host mode. but anyway, we provide good enough documentation, so if one wants to use direct mode, he can still change the xml to use it, any big problems here? though defaulting to direct is more convenient for these users. Defaulting to direct is *NOT* more convenient for all users, only those who happen to have a new enough QEMU, from a vendor who actually decided to compile in the iSCSI support. If they're lacking such a QEMU then the direct mode will prevent the default config from even starting. This can never be an acceptable default choice.
mode=host, while it has some limitations in features, is the only option that is capable of working with any QEMU Daniel
Re: [libvirt] [PATCH v5 4/5] storage: Support chap authentication for iscsi pool
On Wed, Jul 24, 2013 at 10:25:06AM +0200, Ján Tomko wrote: On 07/23/2013 04:56 PM, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 10:47:46AM -0400, John Ferlan wrote: On 07/23/2013 10:18 AM, Ján Tomko wrote: On 07/22/2013 10:31 PM, John Ferlan wrote: --- src/storage/storage_backend_iscsi.c | 111 +++- 1 file changed, 110 insertions(+), 1 deletion(-) I can confirm this works, but it's a shame it doesn't work on autostart. ACK if you clarify the error. Jan The autostart changes require getting a connection to the secret driver which I felt may take more time than I had to figure out how to get to work properly... In any case, I adjusted the message as follows (same in 5/5): diff --git a/src/storage/storage_backend_iscsi.c b/src/storage/storage_backend_i index 388d6ed..ee8dd2e 100644 --- a/src/storage/storage_backend_iscsi.c +++ b/src/storage/storage_backend_iscsi.c @@ -714,7 +714,8 @@ virStorageBackendISCSISetAuth(const char *portal, if (!conn) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - _("iscsi 'chap' authentication requires connection")); + _("iscsi 'chap' authentication not supported for autostarted pools")); return -1; } I noticed that the nwfilter already unconditionally calls virConnectOpen("qemu:///system"); so we're already in fact suffering from the problem with autostart having a qemu dependency. Given this, I'd support a patch which simply did conn = virConnectOpen(privileged ? "qemu:///system" : "qemu:///session"); in storageDriverAutostart, provided that we ignore any errors from virConnectOpen, and fall back to using NULL for the connection in that case. Obviously this is something we'll still need to fix properly in a future release, but at least it'll make autostart of storage pools with auth work in the common case in the short term for this release. Both secret and qemu drivers are registered after the storage driver on libvirtd startup, so autostarting these pools will only work on storage driver reload.
On libvirtd startup it fails with: qemuConnectOpen:1033 : internal error qemu state driver is not active (And it seems nwfilter only opens the qemu:// connection on reload)

Oh damn, yes, that pretty much dooms us. We can't change the order of the drivers either, because autostarting of QEMU guests requires that the storage pools be autostarted already. To fix this would require that we split virStateInitialize into two parts, virStateInitialize() and virStateAutoStart(). That's too big a change to do for this release, but we could do it for the next release without too much trouble.

Daniel

-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 2/5] qemu: only check for PIIX3-specific device addrs on pc-* machinetypes
On Tue, Jul 23, 2013 at 06:18:14PM -0400, Laine Stump wrote: On 07/23/2013 11:24 AM, Daniel P. Berrange wrote: I'm thinking that it would probably be better to move all the re-indented code out into a qemuValidateDevicePCISlotsPIIX3() and just call that function from qemuAssignDevicePCISlots(). That way if we need to add more validation for other machine types in the future, we have a good modular code structure. This would probably make the diff more sane too, since you wouldn't be indenting code. Ah yes, good idea! I'll do that and resend. On Tue, Jul 23, 2013 at 10:44:52AM -0400, Laine Stump wrote: (Note that, according to the qemuxml2argv-pseries-usb-multi test, ppc pseries machines also have a PIIX3 chip (since that test file adds a piix3-uhci usb controller). I don't know if this is really the case or not, but had to include that machine type in the checks in order for make check to succeed with no changes to the test data.) Anyone have better information about this? Does the pseries really have a PIIX3? Or was that just an arbitrary entry added for a test case? Looking at QEMU's GIT hw/ppc/* I see no reference to piix at all. It is only referenced in hw/i386/*. So I think the test case is bogus Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 4/7] qemu: Add monitor APIs to fetch CPUID data from QEMU
On Tue, Jul 23, 2013 at 07:28:38PM +0200, Jiri Denemark wrote: On Tue, Jul 23, 2013 at 17:32:42 +0100, Daniel Berrange wrote: On Tue, Jul 23, 2013 at 06:11:33PM +0200, Jiri Denemark wrote: --- src/qemu/qemu_monitor.c| 21 +++ src/qemu/qemu_monitor.h| 3 + src/qemu/qemu_monitor_json.c | 162 + src/qemu/qemu_monitor_json.h | 6 + tests/Makefile.am | 1 + .../qemumonitorjson-getcpu-empty.data | 2 + .../qemumonitorjson-getcpu-empty.json | 46 ++ .../qemumonitorjson-getcpu-filtered.data | 4 + .../qemumonitorjson-getcpu-filtered.json | 46 ++ .../qemumonitorjson-getcpu-full.data | 4 + .../qemumonitorjson-getcpu-full.json | 46 ++ .../qemumonitorjson-getcpu-host.data | 5 + .../qemumonitorjson-getcpu-host.json | 45 ++ tests/qemumonitorjsontest.c| 74 ++ 14 files changed, 465 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.json ACK, though I believe the design of this monitor API is flawed because it requires you to re-launch QEMU with different accel args Not really, this can be used in tcg too. It's just when we want to get the data for host CPU, we need to enable kvm as tcg knows nothing about that CPU. Which makes sense as kvm (the kernel module) influences how the host CPU will look like. Is there no ioctl() for the KVM module we can just invoke directly to discover what CPU flag filtering it will perform. Presumably QEMU is using some ioctl to discover this, so libvirt ought to be able to too. 
Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] dbus: work with older dbus
On Tue, Jul 23, 2013 at 05:14:20PM -0600, Eric Blake wrote:

dbus 1.2.24 (on RHEL 6) lacks DBUS_TYPE_UNIX_FD; but as we aren't trying to pass one of those anyways, we can just drop support for it in our wrapper. Solves this build error:

  CC     libvirt_util_la-virdbus.lo
util/virdbus.c:242: error: 'DBUS_TYPE_UNIX_FD' undeclared here (not in a function)

* src/util/virdbus.c (virDBusBasicTypes): Drop support for unix fds.

Signed-off-by: Eric Blake ebl...@redhat.com
---
Pushing under the build-breaker rule.

 src/util/virdbus.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/util/virdbus.c b/src/util/virdbus.c
index 9b0977a..4ce6c46 100644
--- a/src/util/virdbus.c
+++ b/src/util/virdbus.c
@@ -1,7 +1,7 @@
 /*
  * virdbus.c: helper for using DBus
  *
- * Copyright (C) 2012 Red Hat, Inc.
+ * Copyright (C) 2012-2013 Red Hat, Inc.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
@@ -239,7 +239,6 @@ static const char virDBusBasicTypes[] = {
     DBUS_TYPE_STRING,
     DBUS_TYPE_OBJECT_PATH,
     DBUS_TYPE_SIGNATURE,
-    DBUS_TYPE_UNIX_FD
 };

 static bool virDBusIsBasicType(char c) {
@@ -1024,6 +1023,9 @@ int virDBusMessageDecode(DBusMessage* msg,
  * '{' - start of a dictionary entry (pair of types)
  * '}' - start of a dictionary entry (pair of types)
  *
+ * At this time, there is no support for Unix fd's ('h'), which only
+ * newer DBus supports.
+ *
  * Passing values in variadic args for basic types is
  * simple, the value is just passed directly using the
  * corresponding C type listed against the type code

Ahh, ACK you beat me to the fix.

Daniel

-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On Wed, Jul 24, 2013 at 10:51:35AM +0100, Daniel P. Berrange wrote: I'm wondering if we could instead try to utilize the virStreamPtr APIs for this task. From libvirt's RPC POV this is much more efficient, because once you open the region with a stream API you don't have any round trips at all - the data is pushed out to/from the client async. Now, those APIs are currently designed for sequential streaming of entire data regions only, but I wonder if we could extend them somehow to enable seeking within the stream. Alternatively, perhaps we could just say that if you want to read from disjoint regions, you re-open a stream for each region to be processed.

It'd be so much easier from a client point of view if you just exposed the NBD Unix socket directly. libvirt already exposes qemu sockets directly (eg. console, virtio-serial sockets). It should forward those sockets from the remote side transparently too.

Rich.

-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported. http://fedoraproject.org/wiki/MinGW
[libvirt] [PATCH] virLXCMonitorClose: Unlock domain while closing monitor
There's a race in the lxc driver causing a deadlock. If a domain is destroyed immediately after being started, the deadlock can occur. When a domain is started, the event loop tries to connect to the monitor. If the connection succeeds, virLXCProcessMonitorInitNotify() is called with @mon->client locked. The first thing that callee does is virObjectLock(vm). So the order of locking is: 1) @mon->client, 2) @vm.

However, if there's another thread executing virDomainDestroy on the very same domain, the first thing done there is locking the @vm. Then, the corresponding libvirt_lxc process is killed and the monitor is closed by calling virLXCMonitorClose(). This callee tries to lock @mon->client too, so the locking order is the reverse of the first case. This situation results in a deadlock and an unresponsive libvirtd (since the event loop is involved). The proper solution is to unlock the @vm in virLXCMonitorClose prior to entering virNetClientClose(). See the backtrace as follows:

Thread 25 (Thread 0x7f1b7c9b8700 (LWP 16312)):
0  0x7f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
1  0x7f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
2  0x7f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
3  0x7f1b82a637cf in virMutexLock (m=0x7f1b3c0038d0) at util/virthreadpthread.c:85
4  0x7f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b3c0038c0) at util/virobject.c:320
5  0x7f1b82b861f6 in virNetClientCloseInternal (client=0x7f1b3c0038c0, reason=3) at rpc/virnetclient.c:696
6  0x7f1b82b862f5 in virNetClientClose (client=0x7f1b3c0038c0) at rpc/virnetclient.c:721
7  0x7f1b6ee12500 in virLXCMonitorClose (mon=0x7f1b3c007210) at lxc/lxc_monitor.c:216
8  0x7f1b6ee129f0 in virLXCProcessCleanup (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:174
9  0x7f1b6ee14106 in virLXCProcessStop (driver=0x7f1b68100240, vm=0x7f1b680ceb70, reason=VIR_DOMAIN_SHUTOFF_DESTROYED) at lxc/lxc_process.c:710
10 0x7f1b6ee1aa36 in lxcDomainDestroyFlags (dom=0x7f1b5c002560, flags=0) at lxc/lxc_driver.c:1291
11 0x7f1b6ee1ab1a in lxcDomainDestroy (dom=0x7f1b5c002560) at lxc/lxc_driver.c:1321
12 0x7f1b82b05be5 in virDomainDestroy (domain=0x7f1b5c002560) at libvirt.c:2303
13 0x7f1b835a7e85 in remoteDispatchDomainDestroy (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50) at remote_dispatch.h:3143
14 0x7f1b835a7d78 in remoteDispatchDomainDestroyHelper (server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0, rerr=0x7f1b7c9b7c30, args=0x7f1b5c004a50, ret=0x7f1b5c0029e0) at remote_dispatch.h:3121
15 0x7f1b82b93704 in virNetServerProgramDispatchCall (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:435
16 0x7f1b82b93263 in virNetServerProgramDispatch (prog=0x7f1b8573af90, server=0x7f1b857419d0, client=0x7f1b8574ae40, msg=0x7f1b8574acf0) at rpc/virnetserverprogram.c:305
17 0x7f1b82b8c0f6 in virNetServerProcessMsg (srv=0x7f1b857419d0, client=0x7f1b8574ae40, prog=0x7f1b8573af90, msg=0x7f1b8574acf0) at rpc/virnetserver.c:163
18 0x7f1b82b8c1da in virNetServerHandleJob (jobOpaque=0x7f1b8574dca0, opaque=0x7f1b857419d0) at rpc/virnetserver.c:184
19 0x7f1b82a64158 in virThreadPoolWorker (opaque=0x7f1b8573cb10) at util/virthreadpool.c:144
20 0x7f1b82a63ae5 in virThreadHelper (data=0x7f1b8574b9f0) at util/virthreadpthread.c:161
21 0x7f1b80532f4a in start_thread () from /lib64/libpthread.so.0
22 0x7f1b7fc4f20d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f1b83546740 (LWP 16297)):
0  0x7f1b80539714 in __lll_lock_wait () from /lib64/libpthread.so.0
1  0x7f1b8053516c in _L_lock_516 () from /lib64/libpthread.so.0
2  0x7f1b80534fbb in pthread_mutex_lock () from /lib64/libpthread.so.0
3  0x7f1b82a637cf in virMutexLock (m=0x7f1b680ceb80) at util/virthreadpthread.c:85
4  0x7f1b82a4ccf2 in virObjectLock (anyobj=0x7f1b680ceb70) at util/virobject.c:320
5  0x7f1b6ee13bd7 in virLXCProcessMonitorInitNotify (mon=0x7f1b3c007210, initpid=4832, vm=0x7f1b680ceb70) at lxc/lxc_process.c:601
6  0x7f1b6ee11fd3 in virLXCMonitorHandleEventInit (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, evdata=0x7f1b8574a7d0, opaque=0x7f1b3c007210) at lxc/lxc_monitor.c:109
7  0x7f1b82b8a196 in virNetClientProgramDispatch (prog=0x7f1b3c001f10, client=0x7f1b3c0038c0, msg=0x7f1b3c003928) at rpc/virnetclientprogram.c:259
8  0x7f1b82b87030 in virNetClientCallDispatchMessage (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1019
9  0x7f1b82b876bb in virNetClientCallDispatch (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1140
10 0x7f1b82b87d41 in virNetClientIOHandleInput (client=0x7f1b3c0038c0) at rpc/virnetclient.c:1312
11 0x7f1b82b88f51 in virNetClientIncomingEvent (sock=0x7f1b3c0044e0, events=1, opaque=0x7f1b3c0038c0) at rpc/virnetclient.c:1832
12
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On Wed, Jul 24, 2013 at 11:12:01AM +0100, Richard W.M. Jones wrote: On Wed, Jul 24, 2013 at 10:51:35AM +0100, Daniel P. Berrange wrote: I'm wondering if we could instead try to utilize the virStreamPtr APIs for this task. From libvirt's RPC POV this is much more efficient, because once you open the region with a stream API you don't have any round trips at all - the data is pushed out to/from the client async. Now those APIs are currently designed for sequential streaming of entire data regions only, but I wonder if we could extend them somehow to enable seeking within the stream. Alternatively, perhaps we could just say that if you want to read from disjoint regions, you re-open a stream for each region to be processed. It'd be so much easier from a client point of view if you just exposed the NBD Unix socket directly. libvirt already exposes qemu sockets directly (eg. console, virtio-serial sockets). It should forward those sockets from the remote side transparently too.

That's a possibility, though it pretty much rules out implementing this functionality for other hypervisors, unless they add a dep on qemu-nbd or another NBD server. E.g. we could potentially design an API that works fine with VMWare ESX, but if we expose NBD as the API, then our VMWare driver is doomed, since I don't expect VMWare to ever implement NBD.

A question around direct exposure of NBD would be that of authentication and data security of the NBD server. QEMU's NBD server has no auth, and if you're accessing it over TCP from a remote host you have no data encryption either. A libvirt stream API could address both of those issues. That all said, I'm not necessarily against just exposing NBD directly. For local use, we could also have an FD-passing based API where the libvirt API call returns a pre-opened FD connected to the NBD server, and simply warn people that exposing NBD over TCP requires a trusted network.
Another option would be to actually tunnel the NBD protocol over the pre-authenticated, secured libvirt RPC protocol. This isn't much different from using the virStreamPtr APIs, but it would perhaps be easier for an existing NBD client to integrate with than trying to add random access to the stream APIs.

Regards, Daniel

-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On 24/07/2013 11:54, Daniel P. Berrange wrote: but anyway, we provide good enough documentation, so if one wants to use direct mode, he can still change the xml to use it, any big problems here? though defaulting to direct is more convenient for these users. Defaulting to direct is *NOT* more convenient for all users, only those who happen to have a new enough QEMU, from a vendor who actually decided to compile in the iSCSI support. If they're lacking such a QEMU then the direct mode will prevent the default config from even starting. This can never be an acceptable default choice. mode=host, while it has some limitations in features, is the only option that is capable of working with any QEMU

s/limitations in features/possibility of wreaking havoc to clusters/

Didn't you break features that worked when you started forbidding migration unless cache=none? This time there is hardly even a feature that worked, because the feature is new.

Paolo
Re: [libvirt] [PATCH] virLXCMonitorClose: Unlock domain while closing monitor
On Wed, Jul 24, 2013 at 12:15:32PM +0200, Michal Privoznik wrote: There's a race in the lxc driver causing a deadlock. If a domain is destroyed immediately after being started, the deadlock can occur. When a domain is started, the event loop tries to connect to the monitor. If the connection succeeds, virLXCProcessMonitorInitNotify() is called with @mon->client locked. The first thing that callee does is virObjectLock(vm). So the order of locking is: 1) @mon->client, 2) @vm. However, if there's another thread executing virDomainDestroy on the very same domain, the first thing done there is locking the @vm. Then, the corresponding libvirt_lxc process is killed and the monitor is closed by calling virLXCMonitorClose(). This callee tries to lock @mon->client too, so the locking order is the reverse of the first case. This situation results in a deadlock and an unresponsive libvirtd (since the event loop is involved). The proper solution is to unlock the @vm in virLXCMonitorClose prior to entering virNetClientClose(). See the backtrace as follows:

Hmm, I think I'd say that the flaw is in the way virLXCProcessMonitorInitNotify is invoked. In the QEMU driver monitor, we unlock the monitor before invoking any callbacks. In the LXC driver monitor we're invoking the callbacks with the monitor lock held. I think we need to make the LXC monitor locking wrt callbacks do what QEMU does, and unlock the monitor. See QEMU_MONITOR_CALLBACK in qemu_monitor.c

Daniel

-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On Wed, Jul 24, 2013 at 12:25:17PM +0200, Paolo Bonzini wrote: Il 24/07/2013 11:54, Daniel P. Berrange ha scritto: but anyway, we provide good enough documentations, so if one wants to use direct mode, he can still change the xml to use it, any big problems here? though defaulting to direct is more convenient for these users. Defaulting to direct is *NOT* more convenient for all users, only those who happen to have a new enough QEMU, from a vendor who actually decided to compile in the iSCSI support. If they're lacking such a QEMU then the direct mode will prevent the default config from even starting. This can never be an acceptable default choice. mode=host, while it huas some limitations in features, is the only option that is capable of working with any QEMU s/limitations in features/possibility of wreaking havoc to clusters/ This isn't a new issue. People have been using KVM with iSCSI in host mode for as long as KVM has existed and the sky hasn't fallen in clusters. As long as it is a documented limitation, that admins can address with a config setting if desired, I don't see that this is suddenly something we must fix at the cost of breaking the default config for QEMU's without iSCSI support. Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On 24/07/2013 12:31, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:25:17PM +0200, Paolo Bonzini wrote: On 24/07/2013 11:54, Daniel P. Berrange wrote: but anyway, we provide good enough documentations, so if one wants to use direct mode, he can still change the xml to use it, any big problems here? though defaulting to direct is more convenient for these users. Defaulting to direct is *NOT* more convenient for all users, only those who happen to have a new enough QEMU, from a vendor who actually decided to compile in the iSCSI support. If they're lacking such a QEMU then the direct mode will prevent the default config from even starting. This can never be an acceptable default choice. mode=host, while it has some limitations in features, is the only option that is capable of working with any QEMU s/limitations in features/possibility of wreaking havoc to clusters/ This isn't a new issue. People have been using KVM with iSCSI in host mode for as long as KVM has existed and the sky hasn't fallen in clusters. As long as it is a documented limitation, that admins can address with a config setting if desired, I don't see that this is suddenly something we must fix at the cost of breaking the default config for QEMU's without iSCSI support.

Understood, that's why I suggested not having a default config at all for type='lun' devices. Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'?

Paolo
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On Wed, Jul 24, 2013 at 12:37:10PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:31, Daniel P. Berrange ha scritto: On Wed, Jul 24, 2013 at 12:25:17PM +0200, Paolo Bonzini wrote: Il 24/07/2013 11:54, Daniel P. Berrange ha scritto: but anyway, we provide good enough documentations, so if one wants to use direct mode, he can still change the xml to use it, any big problems here? though defaulting to direct is more convenient for these users. Defaulting to direct is *NOT* more convenient for all users, only those who happen to have a new enough QEMU, from a vendor who actually decided to compile in the iSCSI support. If they're lacking such a QEMU then the direct mode will prevent the default config from even starting. This can never be an acceptable default choice. mode=host, while it huas some limitations in features, is the only option that is capable of working with any QEMU s/limitations in features/possibility of wreaking havoc to clusters/ This isn't a new issue. People have been using KVM with iSCSI in host mode for as long as KVM has existed and the sky hasn't fallen in clusters. As long as it is a documented limitation, that admins can address with a config setting if desired, I don't see that this is suddenly something we must fix at the cost of breaking the default config for QEMU's without iSCSI support. Understood, that's why I suggested not having a default config at all for type='lun' devices. As I said before, that causes apps to have to do special handling for iSCSI pools, which is not something we want. Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'? It should always fail to start the guest if the pool is not started, regardless of the mode used. 
Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On 24/07/2013 12:39, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:37:10PM +0200, Paolo Bonzini wrote: On 24/07/2013 12:31, Daniel P. Berrange wrote: breaking the default config for QEMU's without iSCSI support. Understood, that's why I suggested not having a default config at all for type='lun' devices. As I said before, that causes apps to have to do special handling for iSCSI pools, which is not something we want.

This is not about iSCSI pools, it is about type='lun' devices. And as I said before, knowing the kind of pool is something you have to do anyway for type='lun' devices, because they won't work if the underlying pool is filesystem- or LVM-based.

Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'? It should always fail to start the guest if the pool is not started, regardless of the mode used.

Another point of mode='direct' is that the pool need not be started.

Paolo
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On Wed, Jul 24, 2013 at 12:41:49PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:39, Daniel P. Berrange ha scritto: On Wed, Jul 24, 2013 at 12:37:10PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:31, Daniel P. Berrange ha scritto: breaking the default config for QEMU's without iSCSI support. Understood, that's why I suggested not having a default config at all for type='lun' devices. As I said before, that causes apps to have to do special handling for iSCSI pools, which is not something we want. This is not about iSCSI pools, it is about type='lun' devices. And as I said before, knowing the kind of pool is something you have to do anyway for type='lun' devices, because they won't work if the underlying pool is filesystem- or LVM-based. Regardless of whether all pools types can use type=lun, I don't want to see special case handling of this attribute. If it is optional in the XML schema, it should be unconditionally optional for any usage. Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'? It should always fail to start the guest if the pool is not started, regardless of the mode used. Another point of mode='direct' is that the pool need not be started. That is not something we can allow in the long term. When we expand our fine grained access control further, we're likely going to need to do ACL checks for the virStorageVolPtr object referenced by the XML config here. We can't do that unless the pool is running. So we cannot set a precedent of allowing use without the pool being active IMHO. Daniel -- |: http://berrange.com -o-http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On 24/07/2013 12:46, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:41:49PM +0200, Paolo Bonzini wrote: On 24/07/2013 12:39, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:37:10PM +0200, Paolo Bonzini wrote: On 24/07/2013 12:31, Daniel P. Berrange wrote: breaking the default config for QEMU's without iSCSI support. Understood, that's why I suggested not having a default config at all for type='lun' devices. As I said before, that causes apps to have to do special handling for iSCSI pools, which is not something we want. This is not about iSCSI pools, it is about type='lun' devices. And as I said before, knowing the kind of pool is something you have to do anyway for type='lun' devices, because they won't work if the underlying pool is filesystem- or LVM-based. Regardless of whether all pool types can use type=lun, I don't want to see special case handling of this attribute. If it is optional in the XML schema, it should be unconditionally optional for any usage. The XML schema can mark it as optional for type!=lun and mandatory for type=lun, but I agree it may be unnecessarily complicated. Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'? It should always fail to start the guest if the pool is not started, regardless of the mode used. Another point of mode='direct' is that the pool need not be started. That is not something we can allow in the long term. When we expand our fine grained access control further, we're likely going to need to do ACL checks for the virStorageVolPtr object referenced by the XML config here. We can't do that unless the pool is running. So we cannot set a precedent of allowing use without the pool being active IMHO.

Can you make the pool active, but at the same time not expose it as devices on the host? LUNs can be discovered by talking directly to the target (similar to the iscsi-ls command included with libiscsi).
In other words, perhaps mode='host' vs. mode='direct' could be a property of the pool rather than the disk? That would certainly work fine for me. Paolo -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On Wed, Jul 24, 2013 at 12:59:27PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:46, Daniel P. Berrange ha scritto: On Wed, Jul 24, 2013 at 12:41:49PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:39, Daniel P. Berrange ha scritto: On Wed, Jul 24, 2013 at 12:37:10PM +0200, Paolo Bonzini wrote: Il 24/07/2013 12:31, Daniel P. Berrange ha scritto: breaking the default config for QEMU's without iSCSI support. Understood, that's why I suggested not having a default config at all for type='lun' devices. As I said before, that causes apps to have to do special handling for iSCSI pools, which is not something we want. This is not about iSCSI pools, it is about type='lun' devices. And as I said before, knowing the kind of pool is something you have to do anyway for type='lun' devices, because they won't work if the underlying pool is filesystem- or LVM-based. Regardless of whether all pools types can use type=lun, I don't want to see special case handling of this attribute. If it is optional in the XML schema, it should be unconditionally optional for any usage. The XML schema can mark it as optional for type!=lun and mandatory for type=lun, but I agree it may be unnecessarily complicated. Also, can libvirt at least detect using a volume on a pool that wasn't started, and fail unless mode='direct'? It should always fail to start the guest if the pool is not started, regardless of the mode used. Another point of mode='direct' is that the pool need not be started. That is not something we can allow in the long term. When we expand our fine grained access control further, we're likely going to need to do ACL checks for the virStorageVolPtr object referenced by the XML config here. We can't do that unless the pool is running. So we cannot set a precedent of allowing use without the pool being active IMHO. Can you make the pool active, but at the same time not expose it as devices on the host? 
LUNs can be discovered by talking directly to the target (similar to the iscsi-ls command included with libiscsi). I've never considered that actually. From the API POV, what we need to know is the list of LUNs and their sizes. We don't require that they have files visible in the host - for example the VMware storage pool impls don't expose files on the host. So adding a mode for the pool which lets you enumerate LUNs, but not expose them in the FS, would be an option. In other words, perhaps mode='host' vs. mode='direct' could be a property of the pool rather than the disk? That would certainly work fine for me. It doesn't necessarily need to be exclusive. If the pool used mode='host' we could still allow mode=host|direct in the guest. Only if the pool used mode=direct would we have to restrict it to mode=direct in the guest. We could make the guest config setting for mode default to match the pool's config. Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
[libvirt] [PATCH 1/5] domifaddr: Implement the public API
Define an new API virDomainInterfacesAddresses, which returns the address information of a running domain's interfaces(s). If no interface name is specified, it returns the information of all interfaces, otherwise it only returns the information of the specificed interface. The address information includes the MAC and IP addresses. The API is going to provide multiple methods by flags, e.g. * Query guest agent * Parse lease file of dnsmasq * DHCP snooping But at this stage, it will only work with guest agent, and flags won't be supported. include/libvirt/libvirt.h.in: * Define virDomainInterfacesAddresses * Define structs virDomainInterface, virDomainIPAddress python/generator.py: * Skip the auto-generation for virDomainInterfacesAddresses src/driver.h: * Define domainInterfacesAddresses src/libvirt.c: * Implement virDomainInterfacesAddresses src/libvirt_public.syms: * Export the new symbol --- include/libvirt/libvirt.h.in | 32 ++ python/generator.py | 1 + src/driver.h | 7 src/libvirt.c| 99 src/libvirt_public.syms | 5 +++ 5 files changed, 144 insertions(+) diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index c0eb25b..183efba 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -2045,6 +2045,38 @@ int virDomainGetInterfaceParameters (virDomainPtr dom, virTypedParameterPtr params, int *nparams, unsigned int flags); +typedef enum { +VIR_IP_ADDR_TYPE_IPV4, +VIR_IP_ADDR_TYPE_IPV6, + +#ifdef VIR_ENUM_SENTINELS +VIR_IP_ADDR_TYPE_LAST +#endif +} virIPAddrType; + + +typedef struct _virDomainInterfaceIPAddress virDomainIPAddress; +typedef virDomainIPAddress *virDomainIPAddressPtr; +struct _virDomainInterfaceIPAddress { +int type; /* virIPAddrType */ +char *addr; /* IP address */ +int prefix; /* IP address prefix */ +}; + +typedef struct _virDomainInterface virDomainInterface; +typedef virDomainInterface *virDomainInterfacePtr; +struct _virDomainInterface { +char *name; /* interface name */ +char *hwaddr; /* hardware 
address */ +unsigned int ip_addrs_count;/* number of items in @ip_addr */ +virDomainIPAddressPtr ip_addrs; /* array of IP addresses */ +}; + +int virDomainInterfacesAddresses (virDomainPtr dom, + virDomainInterfacePtr *ifaces, + unsigned int *ifaces_count, + unsigned int flags); + /* Management of domain block devices */ int virDomainBlockPeek (virDomainPtr dom, diff --git a/python/generator.py b/python/generator.py index 427cebc..ceef9a7 100755 --- a/python/generator.py +++ b/python/generator.py @@ -458,6 +458,7 @@ skip_impl = ( 'virNodeGetMemoryParameters', 'virNodeSetMemoryParameters', 'virNodeGetCPUMap', +'virDomainInterfacesAddresses', 'virDomainMigrate3', 'virDomainMigrateToURI3', ) diff --git a/src/driver.h b/src/driver.h index cc03e9f..771dc35 100644 --- a/src/driver.h +++ b/src/driver.h @@ -518,6 +518,12 @@ typedef int unsigned int flags); typedef int +(*virDrvDomainInterfacesAddresses)(virDomainPtr dom, + virDomainInterfacePtr *ifaces, + unsigned int *ifaces_count, + unsigned int flags); + +typedef int (*virDrvDomainMemoryStats)(virDomainPtr domain, struct _virDomainMemoryStat *stats, unsigned int nr_stats, @@ -1238,6 +1244,7 @@ struct _virDriver { virDrvDomainInterfaceStats domainInterfaceStats; virDrvDomainSetInterfaceParameters domainSetInterfaceParameters; virDrvDomainGetInterfaceParameters domainGetInterfaceParameters; +virDrvDomainInterfacesAddressesdomainInterfacesAddresses; virDrvDomainMemoryStats domainMemoryStats; virDrvDomainBlockPeek domainBlockPeek; virDrvDomainMemoryPeek domainMemoryPeek; diff --git a/src/libvirt.c b/src/libvirt.c index 444c1c3..cb5639c 100644 --- a/src/libvirt.c +++ b/src/libvirt.c @@ -8629,6 +8629,105 @@ error: return -1; } + /** + * virDomainInterfacesAddresses: + * @dom: domain object + * @ifaces: array of @dom interfaces + * @ifaces_count: number of items in @ifaces + * @flags: extra flags; not used yet, so callers should always pass 0 + * + * Return an array of interfaces present in given domain along with + * their IP 
and MAC addresses. Note that single interface can have + * multiple or even 0 IP address. + * + * This API dynamically allocates the virDomainInterfacePtr struct + * based on how many interfaces domain @dom has, usually
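As a rough illustration of the data shape this API hands back, here is a standalone Python sketch that mirrors the proposed virDomainInterface / virDomainIPAddress structs. The Python class names, the summarize() helper, and the sample data are invented for illustration only; this is not the binding added later in this series.

```python
from dataclasses import dataclass, field
from typing import List

# Mirror of the virIPAddrType enum values proposed in libvirt.h.in
VIR_IP_ADDR_TYPE_IPV4 = 0
VIR_IP_ADDR_TYPE_IPV6 = 1

@dataclass
class DomainIPAddress:          # mirrors _virDomainInterfaceIPAddress
    type: int                   # VIR_IP_ADDR_TYPE_IPV4 or _IPV6
    addr: str                   # textual IP address
    prefix: int                 # network prefix length

@dataclass
class DomainInterface:          # mirrors _virDomainInterface
    name: str                   # guest-side interface name
    hwaddr: str                 # MAC address (may be empty)
    ip_addrs: List[DomainIPAddress] = field(default_factory=list)

def summarize(ifaces):
    """Render one 'name hwaddr addr/prefix ...' line per interface."""
    lines = []
    for iface in ifaces:
        addrs = " ".join("%s/%d" % (a.addr, a.prefix) for a in iface.ip_addrs)
        lines.append("%s %s %s" % (iface.name, iface.hwaddr, addrs))
    return lines

ifaces = [DomainInterface("eth0", "52:54:00:12:34:56",
                          [DomainIPAddress(VIR_IP_ADDR_TYPE_IPV4,
                                           "192.168.122.10", 24)])]
print(summarize(ifaces))
```

Note how an interface may carry zero, one, or many addresses, which is exactly why the C API returns a dynamically allocated array per interface.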
Re: [libvirt] [PATCH 12/13] Protection against doing bad stuff to the root group
On Tue, Jul 23, 2013 at 03:58:30PM -0600, Eric Blake wrote: On 07/23/2013 09:21 AM, Daniel P. Berrange wrote: From: Daniel P. Berrange berra...@redhat.com Add protection such that the virCgroupRemove and virCgroupKill* do not do anything to the root cgroup. Killing all PIDs in the root cgroup does not end well. I take it you tried this, on accident :) Yes, due to a bug elsewhere in the code I ended up with an LXC container in the / cgroup. When killing the container libvirt recursively kills all processes in the container's cgroup. When the cgroup is / this is all processes on the host until it kills itself :-) I now have a 30 mile journey to the colo to reboot this machine since I have no remote power control on it :-) Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
[libvirt] [PATCH 3/5] domifaddr: Implement the API for qemu
By querying the qemu guest agent with the QMP command guest-network-get-interfaces and converting the received JSON output to structured objects. src/qemu/qemu_agent.h: * Define qemuAgentGetInterfaces src/qemu/qemu_agent.c: * Implement qemuAgentGetInterface src/qemu/qemu_driver.c: * New function qemuDomainInterfacesAddresses src/remote_protocol-sructs: * Define new structs --- src/qemu/qemu_agent.c | 150 + src/qemu/qemu_agent.h | 4 ++ src/qemu/qemu_driver.c | 57 +++ 3 files changed, 211 insertions(+) diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c index 72bf211..6889195 100644 --- a/src/qemu/qemu_agent.c +++ b/src/qemu/qemu_agent.c @@ -1323,6 +1323,156 @@ cleanup: return ret; } + +int +qemuAgentGetInterfaces(qemuAgentPtr mon, + virDomainInterfacePtr *ifaces, + unsigned int *ifaces_count) +{ +int ret = -1; +size_t i, j; +int size = -1; +virJSONValuePtr cmd; +virJSONValuePtr reply = NULL; +virJSONValuePtr ret_array = NULL; + +cmd = qemuAgentMakeCommand(guest-network-get-interfaces, NULL); + +if (!cmd) +return -1; + +if (qemuAgentCommand(mon, cmd, reply, 1) 0 || +qemuAgentCheckError(cmd, reply) 0) +goto cleanup; + +if (!(ret_array = virJSONValueObjectGet(reply, return))) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, +_(qemu agent didn't provide 'return' field)); +goto cleanup; +} + +if ((size = virJSONValueArraySize(ret_array)) 0) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, +_(qemu agent didn't return an array of interfaces)); +goto cleanup; +} + +*ifaces_count = (unsigned int) size; + +if (VIR_ALLOC_N(*ifaces, size) 0) +goto cleanup; + +for (i = 0; i size; i++) { +virJSONValuePtr tmp_iface = virJSONValueArrayGet(ret_array, i); +virJSONValuePtr ip_addr_arr = NULL; +const char *name, *hwaddr; +int ip_addr_arr_size; + +/* Shouldn't happen but doesn't hurt to check neither */ +if (!tmp_iface) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, +_(something has went really wrong)); +goto cleanup; +} + +/* interface name is required to be presented */ +name = 
virJSONValueObjectGetString(tmp_iface, name); +if (!name) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, +_(qemu agent didn't provide 'name' field)); +goto cleanup; +} + +if (VIR_STRDUP((*ifaces)[i].name, name) 0) +goto cleanup; + +/* hwaddr might be omitted */ +hwaddr = virJSONValueObjectGetString(tmp_iface, hardware-address); +if (hwaddr VIR_STRDUP((*ifaces)[i].hwaddr, hwaddr) 0) +goto cleanup; + +/* as well as IP address which - moreover - + * can be presented multiple times */ +ip_addr_arr = virJSONValueObjectGet(tmp_iface, ip-addresses); +if (!ip_addr_arr) +continue; + +if ((ip_addr_arr_size = virJSONValueArraySize(ip_addr_arr)) 0) +/* Mmm, empty 'ip-address'? */ +continue; + +(*ifaces)[i].ip_addrs_count = (unsigned int) ip_addr_arr_size; + +if (VIR_ALLOC_N((*ifaces)[i].ip_addrs, ip_addr_arr_size) 0) +goto cleanup; + +for (j = 0; j ip_addr_arr_size; j++) { +virJSONValuePtr ip_addr_obj = virJSONValueArrayGet(ip_addr_arr, j); +virDomainIPAddressPtr ip_addr = (*ifaces)[i].ip_addrs[j]; +const char *type, *addr; + +/* Shouldn't happen but doesn't hurt to check neither */ +if (!ip_addr_obj) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, +_(something has went really wrong)); +goto cleanup; +} + +type = virJSONValueObjectGetString(ip_addr_obj, ip-address-type); +if (!type) { +virReportError(VIR_ERR_INTERNAL_ERROR, +_(qemu agent didn't provide 'ip-address-type' + field for interface '%s'), name); +goto cleanup; +} else if (STREQ(type, ipv4)) { +ip_addr-type = VIR_IP_ADDR_TYPE_IPV4; +} else if (STREQ(type, ipv6)) { +ip_addr-type = VIR_IP_ADDR_TYPE_IPV6; +} else { +virReportError(VIR_ERR_INTERNAL_ERROR, +_(unknown ip address type '%s'), +type); +goto cleanup; +} + +addr = virJSONValueObjectGetString(ip_addr_obj, ip-address); +if (!addr) { +virReportError(VIR_ERR_INTERNAL_ERROR, +_(qemu
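For reference, the guest agent reply that this code walks looks roughly like the sample below. This standalone Python sketch performs the same checks as qemuAgentGetInterfaces: 'name' is mandatory, 'hardware-address' and 'ip-addresses' are optional, and unknown address types are rejected. The sample reply is hand-written for illustration, not captured from a real agent.

```python
import json

SAMPLE_REPLY = json.loads("""
{"return": [
  {"name": "lo",
   "hardware-address": "00:00:00:00:00:00",
   "ip-addresses": [
     {"ip-address-type": "ipv4", "ip-address": "127.0.0.1", "prefix": 8}]},
  {"name": "eth0",
   "hardware-address": "52:54:00:12:34:56",
   "ip-addresses": [
     {"ip-address-type": "ipv4", "ip-address": "192.168.122.10", "prefix": 24},
     {"ip-address-type": "ipv6", "ip-address": "fe80::5054:ff:fe12:3456",
      "prefix": 64}]}
]}
""")

def parse_interfaces(reply):
    if "return" not in reply:
        raise ValueError("qemu agent didn't provide 'return' field")
    ifaces = []
    for item in reply["return"]:
        name = item.get("name")
        if name is None:                      # interface name is mandatory
            raise ValueError("qemu agent didn't provide 'name' field")
        iface = {"name": name,
                 "hwaddr": item.get("hardware-address"),   # optional
                 "ip_addrs": []}
        for ip in item.get("ip-addresses", []):            # optional array
            if ip["ip-address-type"] not in ("ipv4", "ipv6"):
                raise ValueError("unknown ip address type '%s'"
                                 % ip["ip-address-type"])
            iface["ip_addrs"].append({"type": ip["ip-address-type"],
                                      "addr": ip["ip-address"],
                                      "prefix": ip["prefix"]})
        ifaces.append(iface)
    return ifaces

ifaces = parse_interfaces(SAMPLE_REPLY)
```

The C code follows the same shape, only with manual allocation (VIR_ALLOC_N / VIR_STRDUP) and a cleanup path instead of exceptions.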
[libvirt] [PATCH 4/5] domifaddr: Add virsh support
Use virDomainInterfacesAddresses in virsh tools/virsh-domain-monitor.c * Introduce new command : domifaddr Example Usage: domifaddr domain-name interface-name (optional) tools/virsh.pod * Document new command --- tools/virsh-domain-monitor.c | 103 +++ tools/virsh.pod | 10 + 2 files changed, 113 insertions(+) diff --git a/tools/virsh-domain-monitor.c b/tools/virsh-domain-monitor.c index 773f96d..d85dfbd 100644 --- a/tools/virsh-domain-monitor.c +++ b/tools/virsh-domain-monitor.c @@ -1873,6 +1873,103 @@ cleanup: } #undef FILTER +/* domifaddr command + */ +static const vshCmdInfo info_domifaddr[] = { +{help, N_(Get network interfaces addresses for a running domain)}, +{desc, N_(Get network interfaces addresses for a running domain)}, +{NULL, NULL} +}; + +static const vshCmdOptDef opts_domifaddr[] = { +{domain, VSH_OT_DATA, VSH_OFLAG_REQ, N_(domain name, id or uuid)}, +{interface, VSH_OT_DATA, VSH_OFLAG_NONE, N_(network interface name)}, +{NULL, 0, 0, NULL} +}; + +static bool +cmdDomIfAddr(vshControl *ctl, const vshCmd *cmd) +{ +virDomainPtr dom = NULL; +const char *interface = NULL; +virDomainInterfacePtr ifaces = NULL; +size_t i, j; +unsigned int ifaces_count = 0; +unsigned int flags = 0; +bool ret = false; + +if (!(dom = vshCommandOptDomain(ctl, cmd, NULL))) +return false; + +if (vshCommandOptString(cmd, interface, interface) 0) { +goto cleanup; +} + +if (virDomainInterfacesAddresses(dom, ifaces, ifaces_count, flags) 0) { +vshError(ctl, _(Failed to query for interfaces addresses)); +goto cleanup; +} + +vshPrintExtra(ctl, %-10s %-17s%s\n%s\n, + _(Name), _(MAC address), _(IP address), + ---); + +for (i = 0; i ifaces_count; i++) { +virDomainInterfacePtr iface = (ifaces[i]); +virBuffer buf = VIR_BUFFER_INITIALIZER; +const char *hwaddr = ; +const char *ip_addr_str = NULL; + +if (interface STRNEQ_NULLABLE(interface, iface-name)) { +virBufferFreeAndReset(buf); +continue; +} + +if (iface-hwaddr) +hwaddr = iface-hwaddr; + +for (j = 0; j iface-ip_addrs_count; j++) { +if (j) 
+virBufferAddChar(buf, ' '); +virBufferAsprintf(buf, %s/%d, + iface-ip_addrs[j].addr, + iface-ip_addrs[j].prefix); +} + +if (virBufferError(buf)) { +virBufferFreeAndReset(buf); +virReportOOMError(); +return ret; +} + +ip_addr_str = virBufferContentAndReset(buf); + +if (!ip_addr_str) +ip_addr_str = ; + +vshPrintExtra(ctl, %-10s %-17s%s\n, + iface-name, hwaddr, ip_addr_str); + +virBufferFreeAndReset(buf); +} + +ret = true; + +cleanup: +for (i = 0; i ifaces_count; i++) { +VIR_FREE(ifaces[i].name); +VIR_FREE(ifaces[i].hwaddr); +for (j = 0; j ifaces[i].ip_addrs_count; j++) +VIR_FREE(ifaces[i].ip_addrs[j].addr); +VIR_FREE(ifaces[i].ip_addrs); +} +VIR_FREE(ifaces); + +if (dom) +virDomainFree(dom); +return ret; +} + const vshCmdDef domMonitoringCmds[] = { {.name = domblkerror, .handler = cmdDomBlkError, @@ -1946,5 +2043,11 @@ const vshCmdDef domMonitoringCmds[] = { .info = info_list, .flags = 0 }, +{.name = domifaddr, + .handler = cmdDomIfAddr, + .opts = opts_domifaddr, + .info = info_domifaddr, + .flags = 0 +}, {.name = NULL} }; diff --git a/tools/virsh.pod b/tools/virsh.pod index 3ff6da1..bf13be4 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -633,6 +633,16 @@ BExplanation of fields (fields appear in the following order): flush_total_times - total time flush operations took (ns) -- other fields provided by hypervisor -- + +=item Bdomifaddr Idomain [Iinterface-device] + +Get a list of interfaces of a running domain along with their IP and MAC +addresses, or limited output just for one interface if Iinterface-device +is specified. Note, that Iinterface-device can be driver dependent, it can +be the name within guest OS or the name you would see in domain XML. +Moreover, the whole command may require a guest agent to be configured +for the queried domain under some drivers, notably qemu. + =item Bdomifstat Idomain Iinterface-device Get network interface stats for a running domain. 
-- 1.7.11.7
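To make the intended output concrete, here is a small Python sketch of the table layout cmdDomIfAddr prints, including the optional filter on one interface name. The function name and the sample data are made up for illustration; the column widths follow the %-10s / %-17s format strings in the patch.

```python
def domifaddr_table(ifaces, only=None):
    """Render the virsh 'domifaddr'-style table for a list of interfaces."""
    out = ["%-10s %-17s%s" % ("Name", "MAC address", "IP address"),
           "-" * 45]
    for iface in ifaces:
        if only and iface["name"] != only:   # optional per-interface filter
            continue
        addrs = " ".join("%s/%d" % (a["addr"], a["prefix"])
                         for a in iface["ip_addrs"])
        out.append("%-10s %-17s%s" % (iface["name"],
                                      iface.get("hwaddr") or "", addrs))
    return "\n".join(out)

sample = [{"name": "eth0", "hwaddr": "52:54:00:12:34:56",
           "ip_addrs": [{"addr": "192.168.122.10", "prefix": 24}]}]
print(domifaddr_table(sample))
```

As in the virsh code, a missing hwaddr degrades to an empty column rather than an error, and filtering on an unknown interface simply yields an empty table body.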
[libvirt] [PATCH 0/5] Introduce API to query IP addresses for given domain
This feature has been requested for a very long time. Since the qemu guest agent gives us reliable results, the wait is now over. The RFC was first proposed by Michal Privoznik: http://www.mail-archive.com/libvir-list@redhat.com/msg51857.html A patch was submitted, using structs: http://www.mail-archive.com/libvir-list@redhat.com/msg57141.html Another patch was submitted, using XML: http://www.mail-archive.com/libvir-list@redhat.com/msg57829.html Neither of the patches was accepted, probably due to lack of extensibility and usability. Hence, we thought of using virTypedParameters for reporting the list of interfaces along with their MAC address and IP addresses. The RFC can be found here: http://www.mail-archive.com/libvir-list@redhat.com/msg79793.html The idea of extensibility was rejected and rendered out of scope for libvirt. Hence, we were back to structs. The API is called virDomainInterfacesAddresses and returns a dynamically allocated array of virDomainInterface structs. The great disadvantage is that once this gets released, it's written in stone and we cannot change the struct or add an item to it. The API supports two methods: * Return information (list of all associated interfaces with MAC address and IP addresses) for all of the domain's interfaces by default (if no interface name is provided) * Return information for the specified interface (if an interface name is provided) The API queries the qemu guest agent to obtain the IP addresses and MAC address of each interface the given domain is associated with.
In the future, support for more flags will be added, e.g. for parsing dnsmasq lease files and for DHCP snooping. nehaljwani (5): domifaddr: Implement the public API domifaddr: Implement the remote protocol domifaddr: Implement the API for qemu domifaddr: Add virsh support domifaddr: Expose python binding daemon/remote.c | 123 examples/python/Makefile.am | 2 +- examples/python/README | 1 + examples/python/domipaddrs.py | 50 ++ include/libvirt/libvirt.h.in| 32 + python/generator.py | 1 + python/libvirt-override-api.xml | 8 ++- python/libvirt-override.c | 117 +++ src/driver.h| 7 ++ src/libvirt.c | 99 ++ src/libvirt_public.syms | 5 ++ src/qemu/qemu_agent.c | 150 src/qemu/qemu_agent.h | 4 ++ src/qemu/qemu_driver.c | 57 +++ src/remote/remote_driver.c | 93 + src/remote/remote_protocol.x| 32 - src/remote_protocol-structs | 24 +++ tools/virsh-domain-monitor.c| 103 +++ tools/virsh.pod | 10 +++ 19 files changed, 914 insertions(+), 4 deletions(-) create mode 100755 examples/python/domipaddrs.py -- 1.7.11.7
[libvirt] [PATCH 2/5] domifaddr: Implement the remote protocol
Implement RPC calls for virDomainInterfacesAddresses daemon/remote.c * Define remoteSerializeDomainInterfacePtr, remoteDispatchDomainInterfacesAddresses src/remote/remote_driver.c * Define remoteDomainInterfacesAddresses src/remote/remote_protocol.x * New RPC procedure: REMOTE_PROC_DOMAIN_INTERFACES_ADDRESSES * Define structs remote_domain_ip_addr, remote_domain_interface, remote_domain_interfaces_addresses_args, remote_domain_interfaces_addresses_ret src/remote_protocol-structs * New structs added --- daemon/remote.c | 123 +++ src/remote/remote_driver.c | 93 src/remote/remote_protocol.x | 32 ++- src/remote_protocol-structs | 24 + 4 files changed, 270 insertions(+), 2 deletions(-) diff --git a/daemon/remote.c b/daemon/remote.c index 03d5557..4c3bb4f 100644 --- a/daemon/remote.c +++ b/daemon/remote.c @@ -5025,6 +5025,129 @@ cleanup: return rv; } +static int +remoteSerializeDomainInterfacePtr(virDomainInterfacePtr ifaces, + unsigned int ifaces_count, + remote_domain_interfaces_addresses_ret *ret) +{ +size_t i, j; + +if (VIR_ALLOC_N(ret-ifaces.ifaces_val, ifaces_count) 0) +return -1; + +ret-ifaces.ifaces_len = ifaces_count; + +for (i = 0; i ifaces_count; i++) { +virDomainInterfacePtr iface = (ifaces[i]); +remote_domain_interface *iface_ret = (ret-ifaces.ifaces_val[i]); + +if ((VIR_STRDUP(iface_ret-name, iface-name)) 0) +goto no_memory; + +if (iface-hwaddr) { +char **hwaddr_p = NULL; +if (VIR_ALLOC(hwaddr_p) 0) +goto no_memory; +if (VIR_STRDUP(*hwaddr_p, iface-hwaddr) 0) +goto no_memory; + +iface_ret-hwaddr = hwaddr_p; +} + +if (VIR_ALLOC_N(iface_ret-ip_addrs.ip_addrs_val, +iface-ip_addrs_count) 0) +goto no_memory; + +iface_ret-ip_addrs.ip_addrs_len = iface-ip_addrs_count; + +for (j = 0; j iface-ip_addrs_count; j++) { +virDomainIPAddressPtr ip_addr = (iface-ip_addrs[j]); +remote_domain_ip_addr *ip_addr_ret = +(iface_ret-ip_addrs.ip_addrs_val[j]); + +if (VIR_STRDUP(ip_addr_ret-addr, ip_addr-addr) 0) +goto no_memory; + +ip_addr_ret-prefix = ip_addr-prefix; 
+ip_addr_ret-type = ip_addr-type; +} +} + +return 0; + +no_memory: +if (ret-ifaces.ifaces_val) { +for (i = 0; i ifaces_count; i++) { +remote_domain_interface *iface_ret = (ret-ifaces.ifaces_val[i]); +VIR_FREE(iface_ret-name); +VIR_FREE(iface_ret-hwaddr); +for (j = 0; j iface_ret-ip_addrs.ip_addrs_len; j++) { +remote_domain_ip_addr *ip_addr = +(iface_ret-ip_addrs.ip_addrs_val[j]); +VIR_FREE(ip_addr-addr); +} +VIR_FREE(iface_ret); +} +VIR_FREE(ret-ifaces.ifaces_val); +} + +return -1; +} + +static int +remoteDispatchDomainInterfacesAddresses( +virNetServerPtr server ATTRIBUTE_UNUSED, +virNetServerClientPtr client, +virNetMessagePtr msg ATTRIBUTE_UNUSED, +virNetMessageErrorPtr rerr, +remote_domain_interfaces_addresses_args *args, +remote_domain_interfaces_addresses_ret *ret) +{ +int rv = -1; +virDomainPtr dom = NULL; +virDomainInterfacePtr ifaces = NULL; +unsigned int ifaces_count = 0; +struct daemonClientPrivate *priv = +virNetServerClientGetPrivateData(client); + +if (!priv-conn) { +virReportError(VIR_ERR_INTERNAL_ERROR, %s, _(connection not open)); +goto cleanup; +} + +if (!(dom = get_nonnull_domain(priv-conn, args-dom))) +goto cleanup; + +if (virDomainInterfacesAddresses(dom, ifaces, + ifaces_count, args-flags) 0) +goto cleanup; + +if (remoteSerializeDomainInterfacePtr(ifaces, ifaces_count, ret) 0) +goto cleanup; + +rv = 0; + +cleanup: +if (rv 0) +virNetMessageSaveError(rerr); +if (dom) +virDomainFree(dom); +if (ifaces) { +size_t i, j; + +for (i = 0; i ifaces_count; i++) { +VIR_FREE(ifaces[i].name); +VIR_FREE(ifaces[i].hwaddr); +for (j = 0; j ifaces[i].ip_addrs_count; j++) +VIR_FREE(ifaces[i].ip_addrs[j].addr); +VIR_FREE(ifaces[i].ip_addrs); +} +VIR_FREE(ifaces); +} +return rv; +} + + /*- Helpers. -*/ diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index f828eef..18f18c7 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -6476,6 +6476,98 @@ done: return rv; } +static int +remoteDomainInterfacesAddresses(virDomainPtr
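The core of this RPC patch is copying the nested array-of-arrays (interfaces, each with its own address array, plus an optional hwaddr modelled as a zero-or-one element array on the wire) into XDR structures and back. This Python sketch shows the same round-trip on plain dicts; field names are illustrative only, not the actual XDR schema.

```python
def serialize(ifaces):
    """Flatten interface dicts into a wire-friendly form."""
    wire = []
    for iface in ifaces:
        wire.append({
            "name": iface["name"],
            # optional hwaddr becomes a 0- or 1-element list, the usual
            # XDR idiom for an optional string
            "hwaddr": [iface["hwaddr"]] if iface.get("hwaddr") else [],
            "ip_addrs": [dict(a) for a in iface["ip_addrs"]],
        })
    return wire

def deserialize(wire):
    """Rebuild the caller-facing structure from the wire form."""
    return [{"name": w["name"],
             "hwaddr": w["hwaddr"][0] if w["hwaddr"] else None,
             "ip_addrs": [dict(a) for a in w["ip_addrs"]]}
            for w in wire]

orig = [{"name": "eth0", "hwaddr": None,
         "ip_addrs": [{"type": "ipv4", "addr": "10.0.0.5", "prefix": 24}]}]
assert deserialize(serialize(orig)) == orig
```

The C version differs mainly in that every allocation can fail, which is why remoteSerializeDomainInterfacePtr needs the no_memory unwind path freeing whatever was already copied.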
[libvirt] [PATCH 5/5] domifaddr: Expose python binding
Expose virDomainInterfacesAddresses to python binding examples/python/Makefile.am: * Add new file domipaddrs.py examples/python/README: * Add documentation for the python example python/libvirt-override-api.xml: * Add new symbol for virDomainInterfacesAddresses python/libvirt-override.c: * Hand written python api --- examples/python/Makefile.am | 2 +- examples/python/README | 1 + examples/python/domipaddrs.py | 50 + python/libvirt-override-api.xml | 8 ++- python/libvirt-override.c | 117 5 files changed, 176 insertions(+), 2 deletions(-) create mode 100755 examples/python/domipaddrs.py diff --git a/examples/python/Makefile.am b/examples/python/Makefile.am index 2cacfa1..d33ee17 100644 --- a/examples/python/Makefile.am +++ b/examples/python/Makefile.am @@ -17,4 +17,4 @@ EXTRA_DIST=\ README \ consolecallback.py \ - dominfo.py domrestore.py domsave.py domstart.py esxlist.py + dominfo.py domrestore.py domsave.py domstart.py esxlist.py domipaddrs.py diff --git a/examples/python/README b/examples/python/README index f4db76c..1285d52 100644 --- a/examples/python/README +++ b/examples/python/README @@ -10,6 +10,7 @@ domsave.py - save all running domU's into a directory domrestore.py - restore domU's from their saved files in a directory esxlist.py - list active domains of an VMware ESX host and print some info. also demonstrates how to use the libvirt.openAuth() method +domipaddrs.py - print domain interfaces along with their MAC and IP addresses The XML files in this directory are examples of the XML format that libvirt expects, and will have to be adapted for your setup. 
They are only needed diff --git a/examples/python/domipaddrs.py b/examples/python/domipaddrs.py new file mode 100755 index 000..cc16c67 --- /dev/null +++ b/examples/python/domipaddrs.py @@ -0,0 +1,50 @@ +#!/usr/bin/env python +# domipaddrds - print domain interfaces along with their MAC and IP addresses + +import libvirt +import sys + +def usage(): +print Usage: %s [URI] DOMAIN % sys.argv[0] +print Print domain interfaces along with their MAC and IP addresses + +uri = None +name = None +args = len(sys.argv) + +if args == 2: +name = sys.argv[1] +elif args == 3: +uri = sys.argv[1] +name = sys.argv[2] +else: +usage() +sys.exit(2) + +conn = libvirt.openReadOnly(uri) +if conn == None: +print Unable to open connection to libvirt +sys.exit(1) + +try: +dom = conn.lookupByName(name) +except libvirt.libvirtError: +print Domain %s not found % name +sys.exit(0) + +ifaces = dom.interfacesAddresses(0) +if (ifaces == None): +print Failed to get domain interfaces +sys.exit(0) + +print {0:10} {1:17}{2}.format(Interface, MAC address, IP Address) + +for (name, val) in ifaces.iteritems(): +print {0:10} {1:17}.format(name, val['hwaddr']), + +if (val['ip_addrs'] None): +print , +for addr in val['ip_addrs']: +print {0}/{1} .format(addr['addr'], addr['prefix']), + +print diff --git a/python/libvirt-override-api.xml b/python/libvirt-override-api.xml index 9a88215..1fcfa82 100644 --- a/python/libvirt-override-api.xml +++ b/python/libvirt-override-api.xml @@ -602,5 +602,11 @@ arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/ arg name='flags' type='int' info='unused, pass 0'/ /function - /symbols +function name='virDomainInterfacesAddresses' file='python' + inforeturns an array of domain interfaces along with their MAC and IP addresses/info + arg name='dom' type='virDomainPtr' info='pointer to the domain'/ + arg name='flags' type='unsigned int' info='extra flags; not used yet, so callers should always pass 0'/ + return type='virDomainInterfacePtr' 
info=dictionary of domain interfaces among with their MAC and IP addresses/ +/function +/symbols /api diff --git a/python/libvirt-override.c b/python/libvirt-override.c index d16b9a2..11f3570 100644 --- a/python/libvirt-override.c +++ b/python/libvirt-override.c @@ -4761,6 +4761,122 @@ cleanup: return py_retval; } + +static PyObject * +libvirt_virDomainInterfacesAddresses(PyObject *self ATTRIBUTE_UNUSED, + PyObject *args) +{ +PyObject *py_retval = VIR_PY_NONE; +virDomainPtr domain; +PyObject *pyobj_domain; +unsigned int flags; +virDomainInterfacePtr ifaces = NULL; +unsigned int ifaces_count = 0; +size_t i, j; +int ret; +bool full_free = true; + +if (!PyArg_ParseTuple(args, (char *) Oi:virDomainInterfacePtr, + pyobj_domain, flags)) +return NULL; + +domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain); + +LIBVIRT_BEGIN_ALLOW_THREADS; +ret
Re: [libvirt] [PATCH v3 4/7] qemu: Translate the iscsi pool/volume disk source
On 24/07/2013 13:03, Daniel P. Berrange wrote: Can you make the pool active, but at the same time not expose it as devices on the host? LUNs can be discovered by talking directly to the target (similar to the iscsi-ls command included with libiscsi). I've never considered that actually. From the API POV, what we need to know is the list of LUNs and their sizes. We don't require that they have files visible in the host - for example the VMware storage pool impls don't expose files on the host. So adding a mode for the pool which lets you enumerate LUNs, but not expose them in the FS, would be an option. In other words, perhaps mode='host' vs. mode='direct' could be a property of the pool rather than the disk? That would certainly work fine for me. It doesn't necessarily need to be exclusive. If the pool used mode='host' we could still allow mode=host|direct in the guest. Only if the pool used mode=direct would we have to restrict it to mode=direct in the guest. We could make the guest config setting for mode default to match the pool's config. Cool! Sounds like a plan. libvirt itself could use libiscsi to support discovery for mode='direct' pools. Paolo
[libvirt] [PATCH V4] add console support in libxl
this patch introduce the console api in libxl driver for both pv and hvm guest. and import and update the libxlMakeChrdevStr function which was deleted in commit dfa1e1dd. Signed-off-by: Bamvor Jian Zhang bjzh...@suse.com --- Changes since V3: implicity forbit dev_name pass to libxl driver. Changes since V2: 1), forbid parallel configure because libxl do not support it 2), only support one serial on libxl driver. 3), also remove console code in libxl driver, AFAICS serial is enough for connecting to libxl console. changes since V1: 1), add virDomainOpenConsoleEnsureACL 3), remove virReportOOMErrorFull when virAsprintf fail. 4), change size_t for non-nagetive number in libxlDomainOpenConsole size_t i; size_t num = 0; 5), fix for make check (1), replace virAsprintf with VIR_STRDUP in two places (2), delete space. src/libxl/libxl_conf.c | 100 +++ src/libxl/libxl_conf.h | 3 ++ src/libxl/libxl_driver.c | 94 3 files changed, 197 insertions(+) diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 4a0fba9..539d537 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -331,6 +331,90 @@ error: } static int +libxlMakeChrdevStr(virDomainChrDefPtr def, char **buf) +{ +const char *type = virDomainChrTypeToString(def-source.type); +int ret; + +if (!type) { +virReportError(VIR_ERR_INTERNAL_ERROR, + %s, _(unexpected chr device type)); +return -1; +} + +switch (def-source.type) { +case VIR_DOMAIN_CHR_TYPE_NULL: +case VIR_DOMAIN_CHR_TYPE_STDIO: +case VIR_DOMAIN_CHR_TYPE_VC: +case VIR_DOMAIN_CHR_TYPE_PTY: +if (VIR_STRDUP(*buf, type) 0) +return -1; +break; + +case VIR_DOMAIN_CHR_TYPE_FILE: +case VIR_DOMAIN_CHR_TYPE_PIPE: +if (virAsprintf(buf, %s:%s, type, +def-source.data.file.path) 0) +return -1; +break; + +case VIR_DOMAIN_CHR_TYPE_DEV: +if (VIR_STRDUP(*buf, def-source.data.file.path) 0) +return -1; +break; + +case VIR_DOMAIN_CHR_TYPE_UDP: { +const char *connectHost = def-source.data.udp.connectHost; +const char *bindHost = def-source.data.udp.bindHost; 
+const char *bindService = def-source.data.udp.bindService; + +if (connectHost == NULL) +connectHost = ; +if (bindHost == NULL) +bindHost = ; +if (bindService == NULL) +bindService = 0; + +ret = virAsprintf(buf, udp:%s:%s@%s:%s, + connectHost, + def-source.data.udp.connectService, + bindHost, + bindService); +if (ret 0) +return -1; +break; + +} +case VIR_DOMAIN_CHR_TYPE_TCP: +if (def-source.data.tcp.protocol == VIR_DOMAIN_CHR_TCP_PROTOCOL_TELNET) +ret = virAsprintf(buf, telnet:%s:%s%s, + def-source.data.tcp.host, + def-source.data.tcp.service, + def-source.data.tcp.listen ? ,server,nowait : ); +else +ret = virAsprintf(buf, tcp:%s:%s%s, + def-source.data.tcp.host, + def-source.data.tcp.service, + def-source.data.tcp.listen ? ,server,nowait : ); + +if (ret 0) +return -1; +break; + +case VIR_DOMAIN_CHR_TYPE_UNIX: +ret = virAsprintf(buf, unix:%s%s, + def-source.data.nix.path, + def-source.data.nix.listen ? ,server,nowait : ); +if (ret 0) +return -1; +break; + +} + +return 0; +} + +static int libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) { libxl_domain_build_info *b_info = d_config-b_info; @@ -403,6 +487,22 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) if (VIR_STRDUP(b_info-u.hvm.boot, bootorder) 0) goto error; +if (def-nserials) { +if (def-nserials 1) { +virReportError(VIR_ERR_INTERNAL_ERROR, + %s, _(Only one serial is supported by libxl)); +goto error; +} +if (libxlMakeChrdevStr(def-serials[0], b_info-u.hvm.serial) 0) +goto error; +} + +if (def-nparallels) { +virReportError(VIR_ERR_INTERNAL_ERROR, + %s, _(Parallel is not supported)); +goto error; +} + /*
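The string formats that libxlMakeChrdevStr emits can be condensed into a small table. This Python sketch reproduces the mapping from the patch above (the dict-based input shape is invented for illustration; the output formats are taken from the virAsprintf calls in the C code).

```python
def make_chrdev_str(src):
    """Build the libxl serial-device string for one chr source, as in
    libxlMakeChrdevStr."""
    t = src["type"]
    if t in ("null", "stdio", "vc", "pty"):
        return t                                   # bare type name
    if t in ("file", "pipe"):
        return "%s:%s" % (t, src["path"])          # type:path
    if t == "dev":
        return src["path"]                         # raw device path
    if t == "udp":                                 # udp:chost:cserv@bhost:bserv
        return "udp:%s:%s@%s:%s" % (src.get("connect_host") or "",
                                    src["connect_service"],
                                    src.get("bind_host") or "",
                                    src.get("bind_service") or "0")
    if t == "tcp":                                 # tcp|telnet:host:service
        proto = "telnet" if src.get("protocol") == "telnet" else "tcp"
        listen = ",server,nowait" if src.get("listen") else ""
        return "%s:%s:%s%s" % (proto, src["host"], src["service"], listen)
    if t == "unix":                                # unix:path[,server,nowait]
        listen = ",server,nowait" if src.get("listen") else ""
        return "unix:%s%s" % (src["path"], listen)
    raise ValueError("unexpected chr device type")

print(make_chrdev_str({"type": "pty"}))  # prints: pty
```

Note the asymmetry the C code also has: only listening tcp/unix sources get the ",server,nowait" suffix, and udp substitutes empty hosts and a "0" bind service for missing values.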
Re: [libvirt] [PATCH] virLXCMonitorClose: Unlock domain while closing monitor
On 24.07.2013 12:29, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:15:32PM +0200, Michal Privoznik wrote: There's a race in the lxc driver causing a deadlock. If a domain is destroyed immediately after being started, the deadlock can occur. When a domain is started, the event loop tries to connect to the monitor. If the connection succeeds, virLXCProcessMonitorInitNotify() is called with @mon->client locked. The first thing that callee does is virObjectLock(vm). So the locking order is: 1) @mon->client, 2) @vm. However, if there's another thread executing virDomainDestroy on the very same domain, the first thing done there is locking the @vm. Then, the corresponding libvirt_lxc process is killed and the monitor is closed by calling virLXCMonitorClose(). This callee tries to lock @mon->client too. So the order is the reverse of the first case. This situation results in a deadlock and an unresponsive libvirtd (since the event loop is involved). The proper solution is to unlock the @vm in virLXCMonitorClose prior to entering virNetClientClose(). See the backtrace as follows: Hmm, I think I'd say that the flaw is in the way virLXCProcessMonitorInitNotify is invoked. In the QEMU driver monitor, we unlock the monitor before invoking any callbacks. In the LXC driver monitor we're invoking the callbacks with the monitor lock held. I think we need to make the LXC monitor locking wrt callbacks do what QEMU does, and unlock the monitor. See QEMU_MONITOR_CALLBACK in qemu_monitor.c Daniel I don't think so. It's not the monitor lock that is causing the deadlock here.
In fact, the monitor is unlocked: Thread 1 (Thread 0x7f35a348e740 (LWP 18839)): #0 0x7f35a0481714 in __lll_lock_wait () from /lib64/libpthread.so.0 #1 0x7f35a047d16c in _L_lock_516 () from /lib64/libpthread.so.0 #2 0x7f35a047cfbb in pthread_mutex_lock () from /lib64/libpthread.so.0 #3 0x7f35a29ab83f in virMutexLock (m=0x7f3588024e80) at util/virthreadpthread.c:85 #4 0x7f35a2994d62 in virObjectLock (anyobj=0x7f3588024e70) at util/virobject.c:320 #5 0x7f358ed5bbd7 in virLXCProcessMonitorInitNotify (mon=0x7f356ab0, initpid=29062, vm=0x7f3588024e70) at lxc/lxc_process.c:601 #6 0x7f358ed59fd3 in virLXCMonitorHandleEventInit (prog=0x7f35600087b0, client=0x7f3560001fd0, evdata=0x7f35a53bc1e0, opaque=0x7f356ab0) at lxc/lxc_monitor.c:109 #7 0x7f35a2ad2206 in virNetClientProgramDispatch (prog=0x7f35600087b0, client=0x7f3560001fd0, msg=0x7f3560002038) at rpc/virnetclientprogram.c:259 #8 0x7f35a2acf0a0 in virNetClientCallDispatchMessage (client=0x7f3560001fd0) at rpc/virnetclient.c:1019 #9 0x7f35a2acf72b in virNetClientCallDispatch (client=0x7f3560001fd0) at rpc/virnetclient.c:1140 #10 0x7f35a2acfdb1 in virNetClientIOHandleInput (client=0x7f3560001fd0) at rpc/virnetclient.c:1312 #11 0x7f35a2ad0fc1 in virNetClientIncomingEvent (sock=0x7f3560008350, events=1, opaque=0x7f3560001fd0) at rpc/virnetclient.c:1832 #12 0x7f35a2ae6238 in virNetSocketEventHandle (watch=47, fd=40, events=1, opaque=0x7f3560008350) at rpc/virnetsocket.c:1695 #13 0x7f35a296f33f in virEventPollDispatchHandles (nfds=22, fds=0x7f35a53bc7a0) at util/vireventpoll.c:498 #14 0x7f35a296fb62 in virEventPollRunOnce () at util/vireventpoll.c:645 #15 0x7f35a296dad1 in virEventRunDefaultImpl () at util/virevent.c:273 #16 0x7f35a2ad69ee in virNetServerRun (srv=0x7f35a53b09d0) at rpc/virnetserver.c:1097 #17 0x7f35a34e5b6b in main (argc=2, argv=0x7fffe188e778) at libvirtd.c:1512 (gdb) up #1 0x7f35a047d16c in _L_lock_516 () from /lib64/libpthread.so.0 (gdb) up #2 0x7f35a047cfbb in pthread_mutex_lock () from 
/lib64/libpthread.so.0 (gdb) #3 0x7f35a29ab83f in virMutexLock (m=0x7f3588024e80) at util/virthreadpthread.c:85 85 pthread_mutex_lock(m-lock); (gdb) #4 0x7f35a2994d62 in virObjectLock (anyobj=0x7f3588024e70) at util/virobject.c:320 320 virMutexLock(obj-lock); (gdb) #5 0x7f358ed5bbd7 in virLXCProcessMonitorInitNotify (mon=0x7f356ab0, initpid=29062, vm=0x7f3588024e70) at lxc/lxc_process.c:601 601 virObjectLock(vm); (gdb) p *mon $1 = {parent = {parent = {magic = 3405643812, refs = 2, klass = 0x7f3588102cb0}, lock = {lock = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' repeats 39 times, __align = 0}}}, vm = 0x7f3588024e70, cb = {destroy = 0x0, eofNotify = 0x0, exitNotify = 0x7f358ed5b928 virLXCProcessMonitorExitNotify, initNotify = 0x7f358ed5bb8b virLXCProcessMonitorInitNotify}, client = 0x7f3560001fd0, program = 0x7f35600087b0} (gdb) Michal -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [Xen-devel] [PATCH] libxl: Correctly initialize vcpu bitmap
On 23.07.2013 23:20, Jim Fehlig wrote: One comment below in addition to Konrad's... Konrad Rzeszutek Wilk wrote: On Mon, Jul 22, 2013 at 12:51:05PM +0200, Stefan Bader wrote: This fixes the basic setup but there is likely more to do if things like manual CPU hirarchy (nodes, cores, threads) to be working. Cross-posting to xen-devel to make sure I am doing things correctly. -Stefan From 1ec5e7ea0d3498b9f61b83e8aed87cc3cae106de Mon Sep 17 00:00:00 2001 From: Stefan Bader stefan.ba...@canonical.com Date: Fri, 19 Jul 2013 15:20:00 +0200 Subject: [PATCH] libxl: Correctly initialize vcpu bitmap The avai_vcpu bitmap has to be allocated before it can be used (using avail_vcpu ? the maximum allowed value for that). Then for each available VCPU the bit in the mask has to be set (libxl_bitmap_set takes a bit position as an argument, not the number of bits to set). Without this, I would always only get one VCPU for guests created through libvirt/libxl. Signed-off-by: Stefan Bader stefan.ba...@canonical.com The libxl calling logic looks Ok to me. 
So from the libxl perspective you can tack on Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com --- src/libxl/libxl_conf.c | 14 +++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 4a0fba9..7592dd2 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -331,7 +331,8 @@ error: } static int -libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) +libxlMakeDomBuildInfo(libxlDriverPrivatePtr driver, virDomainDefPtr def, + libxl_domain_config *d_config) { libxl_domain_build_info *b_info = d_config-b_info; int hvm = STREQ(def-os.type, hvm); @@ -343,8 +344,15 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_HVM); else libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PV); + b_info-max_vcpus = def-maxvcpus; -libxl_bitmap_set((b_info-avail_vcpus), def-vcpus); +if (libxl_cpu_bitmap_alloc(driver-ctx, b_info-avail_vcpus, + def-maxvcpus)) You are using the driver-wide libxl_ctx in libxlDriverPrivate here, but should be using the per-domain libxl_ctx in libxlDomainObjPrivate structure IMO. It looks like libxlBuildDomainConfig, which calls libxlMakeDomBuildInfo, should have just taken virDomainObjPtr instead of virDomainDefPtr. libxlBuildDomainConfig would then have access to the per-domain libxl_ctx, and no longer need the libxlDriverPrivatePtr parameter as well. So more like attached v2. I am not sure libxlDriverPrivatePtr can really be dropped (and maybe should not as part of a fix to this issue). Maybe calling libxl_flask_context_to_sid also should use the per domain context. But at least libxlMakeVfbList-libxlMakeVfb-virPortAllocatorAcquire sounds a bit like it might need the driver context. 
-Stefan From 2d4b7e5ae2644e6f7a2ea930d0899ea426cb9c84 Mon Sep 17 00:00:00 2001 From: Stefan Bader stefan.ba...@canonical.com Date: Fri, 19 Jul 2013 15:20:00 +0200 Subject: [PATCH] libxl: Correctly initialize vcpu bitmap The avail_vcpu bitmap has to be allocated before it can be used (using the maximum allowed value for that). Then for each available VCPU the bit in the mask has to be set (libxl_bitmap_set takes a bit position as an argument, not the number of bits to set). Without this, I would always only get one VCPU for guests created through libvirt/libxl. [v2] Use private ctx from virDomainObjPtr-privateData Signed-off-by: Stefan Bader stefan.ba...@canonical.com --- src/libxl/libxl_conf.c | 19 +++ src/libxl/libxl_conf.h |2 +- src/libxl/libxl_driver.c |2 +- 3 files changed, 17 insertions(+), 6 deletions(-) diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 4a0fba9..f732135 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -331,8 +331,10 @@ error: } static int -libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) +libxlMakeDomBuildInfo(virDomainObjPtr vm, libxl_domain_config *d_config) { +virDomainDefPtr def = vm-def; +libxlDomainObjPrivatePtr priv = vm-privateData; libxl_domain_build_info *b_info = d_config-b_info; int hvm = STREQ(def-os.type, hvm); size_t i; @@ -343,8 +345,15 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_HVM); else libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PV); + b_info-max_vcpus = def-maxvcpus; -libxl_bitmap_set((b_info-avail_vcpus), def-vcpus); +if (libxl_cpu_bitmap_alloc(priv-ctx, b_info-avail_vcpus, + def-maxvcpus)) +goto error; +
[libvirt] [PATCH V4] add console support in libxl
This patch introduces the console API in the libxl driver for both PV and HVM guests, and imports and updates the libxlMakeChrdevStr function which was deleted in commit dfa1e1dd.

Signed-off-by: Bamvor Jian Zhang bjzh...@suse.com
---
Changes since V3: implicitly forbid passing dev_name to the libxl driver, since only one console is supported by libxl.
Changes since V2:
1) forbid parallel configuration because libxl does not support it
2) only support one serial in the libxl driver
3) also remove console code in the libxl driver, AFAICS serial is enough for connecting to the libxl console
Changes since V1:
1) add virDomainOpenConsoleEnsureACL
3) remove virReportOOMErrorFull when virAsprintf fails
4) use size_t for non-negative numbers in libxlDomainOpenConsole: size_t i; size_t num = 0;
5) fixes for make check: (1) replace virAsprintf with VIR_STRDUP in two places, (2) delete space

 src/libxl/libxl_conf.c   | 100 +++
 src/libxl/libxl_conf.h   |   3 ++
 src/libxl/libxl_driver.c |  94 
 3 files changed, 197 insertions(+)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index 4a0fba9..539d537 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -331,6 +331,90 @@ error:
 }
 
 static int
+libxlMakeChrdevStr(virDomainChrDefPtr def, char **buf)
+{
+    const char *type = virDomainChrTypeToString(def->source.type);
+    int ret;
+
+    if (!type) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("unexpected chr device type"));
+        return -1;
+    }
+
+    switch (def->source.type) {
+    case VIR_DOMAIN_CHR_TYPE_NULL:
+    case VIR_DOMAIN_CHR_TYPE_STDIO:
+    case VIR_DOMAIN_CHR_TYPE_VC:
+    case VIR_DOMAIN_CHR_TYPE_PTY:
+        if (VIR_STRDUP(*buf, type) < 0)
+            return -1;
+        break;
+
+    case VIR_DOMAIN_CHR_TYPE_FILE:
+    case VIR_DOMAIN_CHR_TYPE_PIPE:
+        if (virAsprintf(buf, "%s:%s", type,
+                        def->source.data.file.path) < 0)
+            return -1;
+        break;
+
+    case VIR_DOMAIN_CHR_TYPE_DEV:
+        if (VIR_STRDUP(*buf, def->source.data.file.path) < 0)
+            return -1;
+        break;
+
+    case VIR_DOMAIN_CHR_TYPE_UDP: {
+        const char *connectHost = def->source.data.udp.connectHost;
+        const char *bindHost = def->source.data.udp.bindHost;
+        const char *bindService = def->source.data.udp.bindService;
+
+        if (connectHost == NULL)
+            connectHost = "";
+        if (bindHost == NULL)
+            bindHost = "";
+        if (bindService == NULL)
+            bindService = "0";
+
+        ret = virAsprintf(buf, "udp:%s:%s@%s:%s",
+                          connectHost,
+                          def->source.data.udp.connectService,
+                          bindHost,
+                          bindService);
+        if (ret < 0)
+            return -1;
+        break;
+
+    }
+    case VIR_DOMAIN_CHR_TYPE_TCP:
+        if (def->source.data.tcp.protocol == VIR_DOMAIN_CHR_TCP_PROTOCOL_TELNET)
+            ret = virAsprintf(buf, "telnet:%s:%s%s",
+                              def->source.data.tcp.host,
+                              def->source.data.tcp.service,
+                              def->source.data.tcp.listen ? ",server,nowait" : "");
+        else
+            ret = virAsprintf(buf, "tcp:%s:%s%s",
+                              def->source.data.tcp.host,
+                              def->source.data.tcp.service,
+                              def->source.data.tcp.listen ? ",server,nowait" : "");
+
+        if (ret < 0)
+            return -1;
+        break;
+
+    case VIR_DOMAIN_CHR_TYPE_UNIX:
+        ret = virAsprintf(buf, "unix:%s%s",
+                          def->source.data.nix.path,
+                          def->source.data.nix.listen ? ",server,nowait" : "");
+        if (ret < 0)
+            return -1;
+        break;
+
+    }
+
+    return 0;
+}
+
+static int
 libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config)
 {
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -403,6 +487,22 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config)
         if (VIR_STRDUP(b_info->u.hvm.boot, bootorder) < 0)
             goto error;
 
+        if (def->nserials) {
+            if (def->nserials > 1) {
+                virReportError(VIR_ERR_INTERNAL_ERROR,
+                               "%s", _("Only one serial is supported by libxl"));
+                goto error;
+            }
+            if (libxlMakeChrdevStr(def->serials[0], &b_info->u.hvm.serial) < 0)
+                goto error;
+        }
+
+        if (def->nparallels) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           "%s", _("Parallel is not supported"));
+            goto error;
+        }
+
 /*
Re: [libvirt] [PATCH] virLXCMonitorClose: Unlock domain while closing monitor
On Wed, Jul 24, 2013 at 01:43:02PM +0200, Michal Privoznik wrote: On 24.07.2013 12:29, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 12:15:32PM +0200, Michal Privoznik wrote: There's a race in lxc driver causing a deadlock. If a domain is destroyed immediately after started, the deadlock can occur. When domain is started, the even loop tries to connect to the monitor. If the connecting succeeds, virLXCProcessMonitorInitNotify() is called with @mon-client locked. The first thing that callee does, is virObjectLock(vm). So the order of locking is: 1) @mon-client, 2) @vm. However, if there's another thread executing virDomainDestroy on the very same domain, the first thing done here is locking the @vm. Then, the corresponding libvirt_lxc process is killed and monitor is closed via calling virLXCMonitorClose(). This callee tries to lock @mon-client too. So the order is reversed to the first case. This situation results in deadlock and unresponsive libvirtd (since the eventloop is involved). The proper solution is to unlock the @vm in virLXCMonitorClose prior entering virNetClientClose(). See the backtrace as follows: Hmm, I think I'd say that the flaw is in the way virLXCProcessMonitorInitNotify is invoked. In the QEMU driver monitor, we unlock the monitor before invoking any callbacks. In the LXC driver monitor we're invoking the callbacks with the monitor lock held. I think we need to make the LXC monitor locking wrt callbacks do what QEMU does, and unlock the monitor. See QEMU_MONITOR_CALLBACK in qemu_monitor.c Daniel I don't think so. It's not the monitor lock what is causing deadlock here. In fact, the monitor is unlocked: Oh, I was mixing the monitor vs virnetclient locks up. ACK to your original patch. 
Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
[libvirt] [PATCH] Add new virAuth symbols to private.syms
Otherwise libvirtd fails to load the lockd plugin.
---
 src/libvirt_private.syms | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 5a5d112..526010f 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1095,8 +1095,11 @@ virAuditSend;
 
 # util/virauth.h
 virAuthGetConfigFilePath;
+virAuthGetConfigFilePathURI;
 virAuthGetPassword;
+virAuthGetPasswordPath;
 virAuthGetUsername;
+virAuthGetUsernamePath;
 
 # util/virauthconfig.h
-- 
1.8.1.5
Re: [libvirt] [PATCH] Add new virAuth symbols to private.syms
On 07/24/13 14:02, Ján Tomko wrote: Otherwise libvirtd fails to load the lockd plugin. --- src/libvirt_private.syms | 3 +++ 1 file changed, 3 insertions(+) Oops :/ Ack. Peter
[libvirt] [PATCH] Fix virCgroupAvailable() w/o HAVE_GETMNTENT_R defined
virCgroupAvailable() implementation calls getmntent_r without checking if HAVE_GETMNTENT_R is defined, so it fails to build on platforms without getmntent_r support. Make virCgroupAvailable() just return false without HAVE_GETMNTENT_R.
---
 src/util/vircgroup.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index 5251611..d328212 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -69,9 +69,10 @@ typedef enum {
 
 bool virCgroupAvailable(void)
 {
+    bool ret = false;
+#ifdef HAVE_GETMNTENT_R
     FILE *mounts = NULL;
     struct mntent entry;
-    bool ret = false;
     char buf[CGROUP_MAX_VAL];
 
     if (!virFileExists("/proc/cgroups"))
@@ -88,6 +89,7 @@ bool virCgroupAvailable(void)
     }
 
     VIR_FORCE_FCLOSE(mounts);
+#endif
 
     return ret;
 }
-- 
1.7.11.5
Re: [libvirt] [PATCH 0/3] qemu: Fix how files are being opened
On Wed 24 Jul 2013 10:56:56 AM CEST, Michal Privoznik wrote: On 24.05.2013 22:25, Martin Kletzander wrote: There were some places in the code where files were being opened with uid:gid of the daemon instead of the qemu process related to the file. First patch exposes the parseIds() function in order for it to be used somewhere else in the code than in the DAC security driver. The next patch fixes how the files are opened and the last one fixes occurrences of open() that should use different uid:gid for opening files. There maybe should be a check for whether the file being opened is an image and whether the label used to open the file should be imagelabel or not. But the QEMU process opening the file is running as the label (not imagelabel) and accessing the files as such. Martin Kletzander (3): Expose ownership ID parsing; Make qemuOpenFile aware of per-VM DAC seclabel; Use qemuOpenFile in qemu_driver.c src/libvirt_private.syms | 1 + src/qemu/qemu_driver.c | 87 +++-- src/security/security_dac.c | 51 ++ src/util/virutil.c | 56 + src/util/virutil.h | 2 ++ 5 files changed, 122 insertions(+), 75 deletions(-) -- 1.8.2.1 ACK series, Thanks, pushed. Martin
[libvirt] [PATCH] Fix link_addr detection
link_addr detection in configure always reports that link_addr is missing because it uses link_addr(NULL, NULL) in an AC_LINK_IFELSE check with a limited set of headers that doesn't define NULL. Fix by replacing 'NULL' with just '0'.
---
 configure.ac | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index 27531a1..cc9942a 100644
--- a/configure.ac
+++ b/configure.ac
@@ -2401,7 +2401,7 @@ AC_LINK_IFELSE([AC_LANG_PROGRAM(
 #include <net/if_dl.h>
 ]], [[
-link_addr(NULL, NULL)]])],
+link_addr(0, 0)]])],
 [AC_DEFINE([HAVE_DECL_LINK_ADDR], [1],
 [whether link_addr is available])])
-- 
1.7.11.5
Re: [libvirt] [RFC] Image Fleecing for Libvirt (BZ 955734, 905125)
On 2013-07-24 17:44, Paolo Bonzini wrote: On 24/07/2013 07:38, Wenchao Xia wrote: Because libvirt would use a local (Unix) socket to communicate with QEMU and pass the file descriptor, there is no need to authenticate the NBD connection. There is no need for ticketing, though if necessary we can make QEMU only accept connections from libvirtd's pid. libvirt and VDSM already do authentication and/or encryption. How do I get the info about the IP/port needed to access that snapshot? Call virSnapshotGetInfo (or a similar API) later? See above. You don't; you use the libvirt channel. Paolo So I would get a libvirt API like virSnapshotRead(Domain *domain, SnapshotPtr *sn, uint64 sector_num, uint64 sector_len, char *buf)? Then libvirt automatically accesses the snapshot and fills the buffer for the user? An API letting me access it in some way would be my concern. -- Best Regards Wenchao Xia
Re: [libvirt] [PATCH] Fix link_addr detection
On 07/24/2013 03:02 PM, Roman Bogorodskiy wrote: link_addr detection in configure always reports that link_addr is missing because it uses link_addr(NULL, NULL) in AC_LINK_IFELSE check with limited set of headers that doesn't s/doesn't/don't/ define NULL. Fix by replacing 'NULL' with just '0'. --- configure.ac | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/configure.ac b/configure.ac index 27531a1..cc9942a 100644 --- a/configure.ac +++ b/configure.ac @@ -2401,7 +2401,7 @@ AC_LINK_IFELSE([AC_LANG_PROGRAM( #include <net/if_dl.h> ]], [[ -link_addr(NULL, NULL)]])], +link_addr(0, 0)]])], [AC_DEFINE([HAVE_DECL_LINK_ADDR], [1], [whether link_addr is available])]) ACK, fixed and pushed. Martin
Re: [libvirt] [PATCH] Fix virCgroupAvailable() w/o HAVE_GETMNTENT_R defined
On 07/24/2013 02:30 PM, Roman Bogorodskiy wrote: virCgroupAvailable() implementation calls getmntent_r without checking if HAVE_GETMNTENT_R is defined, so it fails to build on platforms without getmntent_r support. Make virCgroupAvailable() just return false without HAVE_GETMNTENT_R. --- src/util/vircgroup.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c index 5251611..d328212 100644 --- a/src/util/vircgroup.c +++ b/src/util/vircgroup.c @@ -69,9 +69,10 @@ typedef enum { bool virCgroupAvailable(void) { +bool ret = false; +#ifdef HAVE_GETMNTENT_R FILE *mounts = NULL; struct mntent entry; -bool ret = false; char buf[CGROUP_MAX_VAL]; if (!virFileExists("/proc/cgroups")) @@ -88,6 +89,7 @@ bool virCgroupAvailable(void) } VIR_FORCE_FCLOSE(mounts); +#endif return ret; } ACKed and pushed. Martin
Re: [libvirt] [PATCH] Add a colon after 'internal error'
On 07/24/2013 11:22 AM, Michal Privoznik wrote: On 24.07.2013 10:47, Ján Tomko wrote: As we do for other errors with an extra string. --- src/util/virerror.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) ACK Thanks, pushed. Jan
Re: [libvirt] [PATCH] Add new virAuth symbols to private.syms
On 07/24/2013 02:08 PM, Peter Krempa wrote: On 07/24/13 14:02, Ján Tomko wrote: Otherwise libvirtd fails to load the lockd plugin. --- src/libvirt_private.syms | 3 +++ 1 file changed, 3 insertions(+) Oops :/ Ack. Thanks, now pushed. Jan
Re: [libvirt] [PATCH] Fix link_addr detection
Martin Kletzander wrote: On 07/24/2013 03:02 PM, Roman Bogorodskiy wrote: link_addr detection in configure always reports that link_addr is missing because it uses link_addr(NULL, NULL) in AC_LINK_IFELSE check with limited set of headers that doesn't s/doesn't/don't/ doesn't was related to set, not headers, but probably that's not the best wording, I'm not a native speaker :-) define NULL. Fix by replacing 'NULL' with just '0'. --- configure.ac | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/configure.ac b/configure.ac index 27531a1..cc9942a 100644 --- a/configure.ac +++ b/configure.ac @@ -2401,7 +2401,7 @@ AC_LINK_IFELSE([AC_LANG_PROGRAM( #include <net/if_dl.h> ]], [[ -link_addr(NULL, NULL)]])], +link_addr(0, 0)]])], [AC_DEFINE([HAVE_DECL_LINK_ADDR], [1], [whether link_addr is available])]) ACK, fixed and pushed. Martin Roman Bogorodskiy
Re: [libvirt] [Xen-devel] [PATCH] libxl: Correctly initialize vcpu bitmap
Stefan Bader wrote: On 23.07.2013 23:20, Jim Fehlig wrote: [...] You are using the driver-wide libxl_ctx in libxlDriverPrivate here, but should be using the per-domain libxl_ctx in libxlDomainObjPrivate structure IMO. It looks like libxlBuildDomainConfig, which calls libxlMakeDomBuildInfo, should have just taken virDomainObjPtr instead of virDomainDefPtr. libxlBuildDomainConfig would then have access to the per-domain libxl_ctx, and no longer need the libxlDriverPrivatePtr parameter as well. So more like attached v2. I am not sure libxlDriverPrivatePtr can really be dropped (and maybe should not as part of a fix to this issue). Maybe calling libxl_flask_context_to_sid also should use the per domain context. Yeah, I think so, but as a separate patch. But at least libxlMakeVfbList-libxlMakeVfb-virPortAllocatorAcquire sounds a bit like it might need the driver context. Ah, right, neglected that call chain when suggesting to remove libxlDriverPrivatePtr. From 2d4b7e5ae2644e6f7a2ea930d0899ea426cb9c84 Mon Sep 17 00:00:00 2001 From: Stefan Bader stefan.ba...@canonical.com Date: Fri, 19 Jul 2013 15:20:00 +0200 Subject: [PATCH] libxl: Correctly initialize vcpu bitmap The avail_vcpu bitmap has to be allocated before it can be used (using the maximum allowed value for that). Then for each available VCPU the bit in the mask has to be set (libxl_bitmap_set takes a bit position as an argument, not the number of bits to set). Without this, I would always only get one VCPU for guests created through libvirt/libxl. [v2] Use private ctx from virDomainObjPtr-privateData No need for history of patch revisions in the commit message. They can be added with 'git send-email --annotate ...'. 
Signed-off-by: Stefan Bader stefan.ba...@canonical.com --- src/libxl/libxl_conf.c | 19 +++ src/libxl/libxl_conf.h |2 +- src/libxl/libxl_driver.c |2 +- 3 files changed, 17 insertions(+), 6 deletions(-) diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c index 4a0fba9..f732135 100644 --- a/src/libxl/libxl_conf.c +++ b/src/libxl/libxl_conf.c @@ -331,8 +331,10 @@ error: } static int -libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) +libxlMakeDomBuildInfo(virDomainObjPtr vm, libxl_domain_config *d_config) { +virDomainDefPtr def = vm-def; +libxlDomainObjPrivatePtr priv = vm-privateData; libxl_domain_build_info *b_info = d_config-b_info; int hvm = STREQ(def-os.type, hvm); size_t i; @@ -343,8 +345,15 @@ libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config) libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_HVM); else libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PV); + b_info-max_vcpus = def-maxvcpus; -libxl_bitmap_set((b_info-avail_vcpus), def-vcpus); +if (libxl_cpu_bitmap_alloc(priv-ctx, b_info-avail_vcpus, + def-maxvcpus)) Fits on one line. ACK to the patch though. I fixed these minor nits and pushed. 
Regards, Jim +goto error; +libxl_bitmap_set_none(b_info-avail_vcpus); +for (i = 0; i def-vcpus; i++) +libxl_bitmap_set((b_info-avail_vcpus), i); + if (def-clock.ntimers 0 def-clock.timers[0]-name == VIR_DOMAIN_TIMER_NAME_TSC) { switch (def-clock.timers[0]-mode) { @@ -795,14 +804,16 @@ libxlMakeCapabilities(libxl_ctx *ctx) int libxlBuildDomainConfig(libxlDriverPrivatePtr driver, - virDomainDefPtr def, libxl_domain_config *d_config) + virDomainObjPtr vm, libxl_domain_config *d_config) { +virDomainDefPtr def = vm-def; + libxl_domain_config_init(d_config); if (libxlMakeDomCreateInfo(driver, def, d_config-c_info) 0) return -1; -if (libxlMakeDomBuildInfo(def, d_config) 0) { +if (libxlMakeDomBuildInfo(vm, d_config) 0) { return -1; } diff --git a/src/libxl/libxl_conf.h b/src/libxl/libxl_conf.h index 2b4a281..942cdd5 100644 --- a/src/libxl/libxl_conf.h +++ b/src/libxl/libxl_conf.h @@ -126,6 +126,6 @@ libxlMakeVfb(libxlDriverPrivatePtr driver, int libxlBuildDomainConfig(libxlDriverPrivatePtr driver, - virDomainDefPtr def, libxl_domain_config *d_config); + virDomainObjPtr vm, libxl_domain_config *d_config); #endif /* LIBXL_CONF_H */ diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c index 358d387..98b1985 100644 --- a/src/libxl/libxl_driver.c +++ b/src/libxl/libxl_driver.c @@ -968,7 +968,7 @@ libxlVmStart(libxlDriverPrivatePtr driver, virDomainObjPtr vm, libxl_domain_config_init(d_config); -if (libxlBuildDomainConfig(driver, vm-def, d_config) 0) +if (libxlBuildDomainConfig(driver, vm, d_config) 0) goto error; if (driver-autoballoon libxlFreeMem(priv,
Re: [libvirt] [PATCH 6/7] qemu: Probe QEMU binary for host CPU
On Tue, Jul 23, 2013 at 05:19:03PM +0100, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 06:11:35PM +0200, Jiri Denemark wrote: Since QEMU and kvm may filter some host CPU features or add efficiently emulated features, asking QEMU binary for host CPU data provides better results when we later use the data for building guest CPUs. --- src/qemu/qemu_capabilities.c | 44 +++- src/qemu/qemu_capabilities.h | 2 ++ 2 files changed, 45 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 9440396..d46a059 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -253,6 +253,7 @@ struct _virQEMUCaps { size_t ncpuDefinitions; char **cpuDefinitions; +virCPUDefPtr hostCPU; size_t nmachineTypes; char **machineTypes; @@ -1757,6 +1758,9 @@ virQEMUCapsPtr virQEMUCapsNewCopy(virQEMUCapsPtr qemuCaps) goto error; } +if (!(ret-hostCPU = virCPUDefCopy(qemuCaps-hostCPU))) +goto error; + if (VIR_ALLOC_N(ret-machineTypes, qemuCaps-nmachineTypes) 0) goto error; if (VIR_ALLOC_N(ret-machineAliases, qemuCaps-nmachineTypes) 0) @@ -1796,6 +1800,7 @@ void virQEMUCapsDispose(void *obj) VIR_FREE(qemuCaps-cpuDefinitions[i]); } VIR_FREE(qemuCaps-cpuDefinitions); +virCPUDefFree(qemuCaps-hostCPU); virBitmapFree(qemuCaps-flags); @@ -2485,7 +2490,6 @@ virQEMUCapsInitQMPCommandNew(const char *binary, -no-user-config, -nodefaults, -nographic, - -M, none, -qmp, monitor, -pidfile, pidfile, -daemonize, @@ -2617,6 +2621,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps, cmd = virQEMUCapsInitQMPCommandNew(qemuCaps-binary, monarg, pidfile, runUid, runGid); +virCommandAddArgList(cmd, -M, none, NULL); if ((ret = virQEMUCapsInitQMPCommandRun(cmd, qemuCaps-binary, pidfile, config, mon, pid)) 0) { @@ -2679,6 +2684,37 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps, if (virQEMUCapsProbeQMPCommandLine(qemuCaps, mon) 0) goto cleanup; +if ((qemuCaps-arch == VIR_ARCH_I686 || + qemuCaps-arch == VIR_ARCH_X86_64) +(virQEMUCapsGet(qemuCaps, 
QEMU_CAPS_KVM) || + virQEMUCapsGet(qemuCaps, QEMU_CAPS_ENABLE_KVM)) +virQEMUCapsGet(qemuCaps, QEMU_CAPS_CPU_HOST) +qemuCaps-nmachineTypes) { +virQEMUCapsInitQMPCommandAbort(cmd, mon, pid, pidfile); + +VIR_DEBUG(Checking host CPU data provided by %s, qemuCaps-binary); +cmd = virQEMUCapsInitQMPCommandNew(qemuCaps-binary, monarg, pidfile, + runUid, runGid); +virCommandAddArgList(cmd, -cpu, host, NULL); +/* -cpu host gives the same CPU for all machine types so we just + * use the first one when probing + */ +virCommandAddArg(cmd, -machine); +virCommandAddArgFormat(cmd, %s,accel=kvm, + qemuCaps-machineTypes[0]); + +if (virQEMUCapsInitQMPCommandRun(cmd, qemuCaps-binary, pidfile, + config, mon, pid) 0) +goto cleanup; + +qemuCaps-hostCPU = qemuMonitorGetCPU(mon, qemuCaps-arch); +if (qemuCaps-hostCPU) { +char *cpu = virCPUDefFormat(qemuCaps-hostCPU, 0); +VIR_DEBUG(Host CPU reported by %s: %s, qemuCaps-binary, cpu); +VIR_FREE(cpu); +} +} This code is causing us to invoke the QEMU binary multiple times, which is something we worked really hard to get away from. I really, really don't like this as an approach. QEMU needs to be able to give us the data we need here without multiple invocations. eg, by allowing the monitor command to specify 'kvm' vs 'qemu' when asking for data, so you can interrogate it without having to re-launch it with different accel=XXX args. The specific information libvirt requires here depend on KVM being initialized, and QEMU code/interfaces currently assume that: 1) there's only 1 machine being initialized, and it is initialized very early; 2) there's only one accelerator being initialized, and it is initialized very early. I have no idea how long it will take for QEMU to provide a QMP interface for late/multiple initialization of machines/accelerators. In the meantime, the way libvirt queries for host CPU capabilities without taking QEMU and kernel capabilities into account is a serious bug we need to solve. Maybe if we are lucky we can
Re: [libvirt] [PATCH 4/7] qemu: Add monitor APIs to fetch CPUID data from QEMU
On Tue, Jul 23, 2013 at 07:32:46PM +0200, Jiri Denemark wrote: On Tue, Jul 23, 2013 at 19:28:38 +0200, Jiri Denemark wrote: On Tue, Jul 23, 2013 at 17:32:42 +0100, Daniel Berrange wrote: On Tue, Jul 23, 2013 at 06:11:33PM +0200, Jiri Denemark wrote: --- src/qemu/qemu_monitor.c| 21 +++ src/qemu/qemu_monitor.h| 3 + src/qemu/qemu_monitor_json.c | 162 + src/qemu/qemu_monitor_json.h | 6 + tests/Makefile.am | 1 + .../qemumonitorjson-getcpu-empty.data | 2 + .../qemumonitorjson-getcpu-empty.json | 46 ++ .../qemumonitorjson-getcpu-filtered.data | 4 + .../qemumonitorjson-getcpu-filtered.json | 46 ++ .../qemumonitorjson-getcpu-full.data | 4 + .../qemumonitorjson-getcpu-full.json | 46 ++ .../qemumonitorjson-getcpu-host.data | 5 + .../qemumonitorjson-getcpu-host.json | 45 ++ tests/qemumonitorjsontest.c| 74 ++ 14 files changed, 465 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.json ACK, though I believe the design of this monitor API is flawed because it requires you to re-launch QEMU with different accel args Not really, this can be used in tcg too. It's just when we want to get the data for host CPU, we need to enable kvm as tcg knows nothing about that CPU. Which makes sense as kvm (the kernel module) influences how the host CPU will look like. 
However, you need to have a CPU to be able to ask for its properties (which kinda makes sense too) and for that you also need a machine with type != none. Which makes sense too, as the CPU may differ depending on machine type (which, however, does not happen for host CPU).

In addition to the -cpu host KVM initialization problem, there is a further problem with the current interfaces provided by QEMU:

1) libvirt needs to query data that depend on the chosen machine-type and CPU model.
2) Some machine-type behavior is code and not introspectable data.
   * Luckily most of the data we need in this case should/will be encoded in the compat_props tables.
   * In either case, we don't have an API to query for machine-type compat_props information yet.
3) CPU model behavior will be modelled as CPU class behavior. Like in the machine-type case, some of the CPU-model-specific behavior may be modelled as code, and not introspectable data.
   * However, we may be able to eventually encode most or all of the CPU-model-specific behavior simply as different per-CPU-class property defaults.
   * In either case, we don't have an API for QOM class introspection yet.

But there's something important in this case: the resulting CPUID data for a specific machine-type + CPU-model combination must always be the same, forever. This means libvirt may even use a static table, or cache this information indefinitely. (Note that I am not talking about -cpu host here, but about all the other CPU models.)

-- Eduardo -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 4/7] qemu: Add monitor APIs to fetch CPUID data from QEMU
On Wed, Jul 24, 2013 at 11:03:03AM +0100, Daniel P. Berrange wrote: On Tue, Jul 23, 2013 at 07:28:38PM +0200, Jiri Denemark wrote: On Tue, Jul 23, 2013 at 17:32:42 +0100, Daniel Berrange wrote: On Tue, Jul 23, 2013 at 06:11:33PM +0200, Jiri Denemark wrote: --- src/qemu/qemu_monitor.c| 21 +++ src/qemu/qemu_monitor.h| 3 + src/qemu/qemu_monitor_json.c | 162 + src/qemu/qemu_monitor_json.h | 6 + tests/Makefile.am | 1 + .../qemumonitorjson-getcpu-empty.data | 2 + .../qemumonitorjson-getcpu-empty.json | 46 ++ .../qemumonitorjson-getcpu-filtered.data | 4 + .../qemumonitorjson-getcpu-filtered.json | 46 ++ .../qemumonitorjson-getcpu-full.data | 4 + .../qemumonitorjson-getcpu-full.json | 46 ++ .../qemumonitorjson-getcpu-host.data | 5 + .../qemumonitorjson-getcpu-host.json | 45 ++ tests/qemumonitorjsontest.c| 74 ++ 14 files changed, 465 insertions(+) create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-empty.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-filtered.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-full.json create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.data create mode 100644 tests/qemumonitorjsondata/qemumonitorjson-getcpu-host.json ACK, though I believe the design of this monitor API is flawed because it requires you to re-launch QEMU with different accel args Not really, this can be used in tcg too. It's just when we want to get the data for host CPU, we need to enable kvm as tcg knows nothing about that CPU. Which makes sense as kvm (the kernel module) influences how the host CPU will look like. Is there no ioctl() for the KVM module we can just invoke directly to discover what CPU flag filtering it will perform. 
Presumably QEMU is using some ioctl to discover this, so libvirt ought to be able to too.

Yes, there is: GET_SUPPORTED_CPUID. But availability of some features may depend on QEMU capabilities as well. In those cases libvirt may need to combine the GET_SUPPORTED_CPUID results with what it knows about QEMU capabilities. But this should work as long as we report and document QEMU capabilities/options that affect CPU features very clearly.

That may be an appropriate way to go, if you don't mind having low-level KVM ioctl() code inside libvirt that duplicates some QEMU logic. (But we still have the problem of querying/reporting CPU feature details that depend on machine-type+CPU-model [that is not addressed by this series yet]. See my previous message about it.)

-- Eduardo
[libvirt] [PATCH v3] xml: introduce startupPolicy for chardev device
[Problem]
Currently, a guest OS's messages can be logged to a local disk of the host OS by creating chardevs with the options below.

  -chardev file,id=charserial0,path=<log file's path>
  -device isa-serial,chardev=chardevserial0,id=serial0

When a hardware failure happens in that disk, qemu-kvm can't create the chardevs, and in this case the guest OS doesn't boot up. In practice, there are users who don't want the guest OS to go down merely because of a hardware failure of a log disk. Therefore, qemu should offer some way to boot the guest OS even if the log disk is broken.

[Solution]
This patch supports startupPolicy for chardevs. The startupPolicy is introduced only for the case where the chardev is a file, because this patch aims at making the guest OS boot up when a hardware failure happens. In the other cases (pty, dev, pipe and unix) it is not introduced because they don't access hardware. The policy works as follows.
- If the value is optional, the guest OS boots up by dropping the chardev.
- If other values are specified (the default), the guest OS fails to boot up.

Description of the original startupPolicy attribute:
http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=e5a84d74a278

Signed-off-by: Seiji Aguchi seiji.agu...@hds.com
Signed-off-by: Eric Blake ebl...@redhat.com
---
Changes from v2:
- To pass make check, add the following.
  - Add serial source optional testing.
  - Check if startupPolicy is NULL in virDomainChrSourceDefParseXML().
  - Add XML formatting of startupPolicy in virDomainChrSourceDefFormat().
Patch v2 and comments from Eric Blake:
- https://www.redhat.com/archives/libvir-list/2013-May/msg01814.html
- https://www.redhat.com/archives/libvir-list/2013-May/msg01943.html
---
 docs/formatdomain.html.in                     | 16 -
 docs/schemas/domaincommon.rng                 |  3 ++
 src/conf/domain_conf.c                        | 22 +++-
 src/conf/domain_conf.h                        |  1 +
 src/qemu/qemu_process.c                       | 25 +-
 .../qemuxml2argv-serial-source-optional.args  |  9 +
 .../qemuxml2argv-serial-source-optional.xml   | 35 
 tests/qemuxml2argvtest.c                      |  2 +
 tests/qemuxml2xmltest.c                       |  1 +
 tests/virt-aa-helper-test                     |  3 ++
 10 files changed, 113 insertions(+), 4 deletions(-)
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-serial-source-optional.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-serial-source-optional.xml

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 7601aaa..5c9d4fb 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -4281,13 +4281,27 @@ qemu-kvm -net nic,model=? /dev/null
       <p>
         A file is opened and all data sent to the character
         device is written to the file.
+        <span class="since">Since 1.0.6</span>, it is possible to define
+        policy on what happens if the file is not accessible when
+        booting or migrating. This is done by
+        a <code>startupPolicy</code> attribute:
       </p>
+      <ul>
+        <li>If the value is mandatory (the default), the guest fails
+          to boot or migrate if the file is not found.</li>
+        <li>If the value is optional, a missing file at boot or
+          migration is substituted with /dev/null, so the guest still sees
+          the device but the host no longer tracks guest data on the device.</li>
+        <li>If the value is requisite, the file is required for
+          booting, but optional on migration.</li>
+      </ul>
+
       <pre>
 ...
       &lt;devices&gt;
         &lt;serial type="file"&gt;
-          &lt;source path="/var/log/vm/vm-serial.log"/&gt;
+          &lt;source path="/var/log/vm/vm-serial.log" startupPolicy="optional"/&gt;
           &lt;target port="1"/&gt;
         &lt;/serial&gt;
       &lt;/devices&gt;

diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 745b959..10b3365 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -2817,6 +2817,9 @@
       </optional>
       <optional>
         <attribute name="path"/>
+        <optional>
+          <ref name="startupPolicy"/>
+        </optional>
       </optional>
       <optional>
         <attribute name="host"/>

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 10cb7f6..279ff9e 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -6819,6 +6819,7 @@ virDomainChrSourceDefParseXML(virDomainChrSourceDefPtr def,
     char *path = NULL;
     char *mode = NULL;
     char *protocol = NULL;
+    char *startupPolicy = NULL;
     int remaining = 0;

     while (cur != NULL) {
@@ -6839,6 +6840,9 @@ virDomainChrSourceDefParseXML(virDomainChrSourceDefPtr def,
                     !(flags & VIR_DOMAIN_XML_INACTIVE)))
                     path = virXMLPropString(cur, "path");
+                if (startupPolicy == NULL &&
+                    def->type == VIR_DOMAIN_CHR_TYPE_FILE)
+
Re: [libvirt] [PATCH v5 4/5] storage: Support chap authentication for iscsi pool
On 07/24/2013 05:57 AM, Daniel P. Berrange wrote: On Wed, Jul 24, 2013 at 10:25:06AM +0200, Ján Tomko wrote: ...snip... Both secret and qemu drivers are registered after the storage driver on libvirtd startup, so autostarting these pools will only work on storage driver reload. On libvirtd startup it fails with: qemuConnectOpen:1033 : internal error qemu state driver is not active (And it seems nwfilter only opens the qemu:// connection on reload)

Oh damn, yes, that pretty much dooms us. We can't change the order of the drivers either, because autostarting of QEMU guests requires that the storage pools be autostarted already. To fix this would require that we split virStateInitialize into two parts, virStateInitialize() and virStateAutoStart(). That's too big a change to do for this release, but we could do it for the next release without too much trouble. Daniel

Could we just do it for the storage driver? It seems you are indicating that the following would suffice, right? Or does this problem go deeper? John

diff --git a/src/driver.h b/src/driver.h
index cc03e9f..b416902 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -1801,6 +1801,9 @@ typedef int
                         void *opaque);

 typedef int
+(*virDrvStateAutoStart)(void);
+
+typedef int
 (*virDrvStateCleanup)(void);

 typedef int
@@ -1815,6 +1818,7 @@
 typedef virStateDriver *virStateDriverPtr;
 struct _virStateDriver {
     const char *name;
     virDrvStateInitialize stateInitialize;
+    virDrvStateAutoStart stateAutoStart;
     virDrvStateCleanup stateCleanup;
     virDrvStateReload stateReload;
     virDrvStateStop stateStop;

diff --git a/src/libvirt.c b/src/libvirt.c
index 444c1c3..e0bd7aa 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -808,7 +808,11 @@ virRegisterStateDriver(virStateDriverPtr driver)
  * @callback: callback to invoke to inhibit shutdown of the daemon
  * @opaque: data to pass to @callback
  *
- * Initialize all virtualization drivers.
+ * Initialize all virtualization drivers.
Accomplished in two phases,
+ * the first being state and structure initialization followed by any
+ * auto start supported by the driver. This is done to ensure dependencies
+ * that some drivers may have will exist. Such as the storage driver's need
+ * to use the secret driver.
  *
  * Returns 0 if all succeed, -1 upon any failure.
  */
@@ -836,6 +840,22 @@ int virStateInitialize(bool privileged,
             }
         }
     }
+
+    for (i = 0; i < virStateDriverTabCount; i++) {
+        if (virStateDriverTab[i]->stateAutoStart) {
+            VIR_DEBUG("Running global auto start for %s state driver",
+                      virStateDriverTab[i]->name);
+            if (virStateDriverTab[i]->stateAutoStart() < 0) {
+                virErrorPtr err = virGetLastError();
+                VIR_ERROR(_("Auto start for %s driver failed: %s"),
+                          virStateDriverTab[i]->name,
+                          err && err->message ? err->message :
+                          _("Unknown problem"));
+                if (virStateDriverTab[i]->stateCleanup)
+                    ignore_value(virStateDriverTab[i]->stateCleanup());
+                return -1;
+            }
+        }
+    }
     return 0;
 }

diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 4f0c631..390e196 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -182,7 +182,6 @@ storageStateInitialize(bool privileged,
                        driverState->configDir, driverState->autostartDir) < 0)
         goto error;
-    storageDriverAutostart(driverState);
     storageDriverUnlock(driverState);

     return 0;
@@ -195,6 +194,20 @@ error:
 }

 /**
+ * storageStateAutoStart:
+ *
+ * Function to auto start the storage driver
+ */
+static int
+storageStateAutoStart(void)
+{
+    storageDriverLock(driverState);
+    storageDriverAutostart(driverState);
+    storageDriverUnlock(driverState);
+    return 0;
+}
+
+/**
  * storageStateReload:
  *
  * Function to restart the storage driver, it will recheck the configuration
@@ -2599,6 +2612,7 @@ static virStorageDriver storageDriver = {
 static virStateDriver stateDriver = {
     .name = "Storage",
     .stateInitialize = storageStateInitialize,
+    .stateAutoStart = storageStateAutoStart,
     .stateCleanup = storageStateCleanup,
     .stateReload = storageStateReload,
 };
-- libvir-list mailing
list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] caps: use -device for primary video when qemu >= 1.6
[adding qemu-devel]

On 07/24/2013 03:41 AM, Guannan Ren wrote:

https://bugzilla.redhat.com/show_bug.cgi?id=981094

The commit 0ad9025ef introduced the qemu flag QEMU_CAPS_DEVICE_VIDEO_PRIMARY for using -device VGA, -device cirrus-vga, -device vmware-svga and -device qxl-vga. In use, for -device qxl-vga, the mouse doesn't display in the guest window, as per the desciption in the above bug.

s/desciption/description/

As someone that was hit by the qemu bug of no mouse cursor display on a qxl guest, I'm more than willing to agree to this patch. Upstream qemu has a patch in their queue to fix the qxl mouse display bug for qemu 1.6.

This patch tries to use -device for primary video when qemu >= 1.6, which is safe.
---
 src/qemu/qemu_capabilities.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

ACK. I hate hard-coded version checks for capability bits, but don't know if we can do any better. It would be nice if we had some other capability bit that we could query in QMP to know for certain whether '-device qxl-vga' properly works without eating the mouse cursor.
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 5dc3c9e..08406b8 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -1189,8 +1189,6 @@ virQEMUCapsComputeCmdFlags(const char *help,
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_SHARE_POLICY);
     }
-    if (version >= 1002000)
-        virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY);

     return 0;
 }
@@ -2424,7 +2422,6 @@ virQEMUCapsInitQMPBasic(virQEMUCapsPtr qemuCaps)
     virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_OPT);
     virQEMUCapsSet(qemuCaps, QEMU_CAPS_DUMP_GUEST_CORE);
     virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_SHARE_POLICY);
-    virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY);
 }

 /* Capabilities that are architecture depending
@@ -2597,6 +2594,9 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
     if (qemuCaps->version >= 1003001)
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_VNC_WEBSOCKET);
+    if (qemuCaps->version >= 1006000)
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_DEVICE_VIDEO_PRIMARY);
+
     if (virQEMUCapsProbeQMPCommands(qemuCaps, mon) < 0)
         goto cleanup;
     if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0)
--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH v2] Don't overwrite errors in qemuTranslateDiskSourcePool
On 07/24/2013 03:47 AM, Ján Tomko wrote: Both virStoragePoolFree and virStorageVolFree reset the last error, which might lead to the cryptic message: An error occurred, but the cause is unknown When the volume wasn't found, virStorageVolFree was called with NULL, leading to an error: invalid storage volume pointer in virStorageVolFree This patch changes it to: Storage volume not found: no storage vol with matching name 'tomato' --- v1: https://www.redhat.com/archives/libvir-list/2013-July/msg01522.html v2: only save/restore the error when ret 0 src/qemu/qemu_conf.c | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) ACK. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH V4] add console support in libxl
Bamvor Jian Zhang wrote:

This patch introduces the console API in the libxl driver for both PV and HVM guests, and imports and updates the libxlMakeChrdevStr function which was deleted in commit dfa1e1dd.

Signed-off-by: Bamvor Jian Zhang bjzh...@suse.com
---
Changes since V3: implicitly forbid dev_name being passed to the libxl driver, since only one console is supported by libxl.
Changes since V2:
1) forbid parallel configuration because libxl does not support it
2) only support one serial in the libxl driver
3) also remove console code in the libxl driver; AFAICS serial is enough for connecting to the libxl console
Changes since V1:
1) add virDomainOpenConsoleEnsureACL
3) remove virReportOOMErrorFull when virAsprintf fails
4) use size_t for non-negative numbers in libxlDomainOpenConsole: size_t i; size_t num = 0;
5) fixes for make check:
  (1) replace virAsprintf with VIR_STRDUP in two places
  (2) delete spaces

 src/libxl/libxl_conf.c   | 100 +++
 src/libxl/libxl_conf.h   |   3 ++
 src/libxl/libxl_driver.c |  94 
 3 files changed, 197 insertions(+)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index 4a0fba9..539d537 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -331,6 +331,90 @@ error:
 }

 static int
+libxlMakeChrdevStr(virDomainChrDefPtr def, char **buf)
+{
+    const char *type = virDomainChrTypeToString(def->source.type);
+    int ret;
+
+    if (!type) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       "%s", _("unexpected chr device type"));

The error should be VIR_ERR_CONFIG_UNSUPPORTED.

+        return -1;
+    }
+
+    switch (def->source.type) {
+        case VIR_DOMAIN_CHR_TYPE_NULL:
+        case VIR_DOMAIN_CHR_TYPE_STDIO:
+        case VIR_DOMAIN_CHR_TYPE_VC:
+        case VIR_DOMAIN_CHR_TYPE_PTY:

Super nit: a majority of libvirt code has 'switch' and 'case' at the same indentation. I realize there are inconsistencies even within this file, but new code should follow the predominant style.
+            if (VIR_STRDUP(*buf, type) < 0)
+                return -1;
+            break;
+
+        case VIR_DOMAIN_CHR_TYPE_FILE:
+        case VIR_DOMAIN_CHR_TYPE_PIPE:
+            if (virAsprintf(buf, "%s:%s", type,
+                            def->source.data.file.path) < 0)
+                return -1;
+            break;
+
+        case VIR_DOMAIN_CHR_TYPE_DEV:
+            if (VIR_STRDUP(*buf, def->source.data.file.path) < 0)
+                return -1;
+            break;
+
+        case VIR_DOMAIN_CHR_TYPE_UDP: {
+            const char *connectHost = def->source.data.udp.connectHost;
+            const char *bindHost = def->source.data.udp.bindHost;
+            const char *bindService = def->source.data.udp.bindService;
+
+            if (connectHost == NULL)
+                connectHost = "";
+            if (bindHost == NULL)
+                bindHost = "";
+            if (bindService == NULL)
+                bindService = "0";
+
+            ret = virAsprintf(buf, "udp:%s:%s@%s:%s",
+                              connectHost,
+                              def->source.data.udp.connectService,
+                              bindHost,
+                              bindService);
+            if (ret < 0)
+                return -1;
+            break;
+
+        }
+        case VIR_DOMAIN_CHR_TYPE_TCP:
+            if (def->source.data.tcp.protocol == VIR_DOMAIN_CHR_TCP_PROTOCOL_TELNET)

These long lines could be shortened by having a function-local virDomainChrSourceDef variable.

+                ret = virAsprintf(buf, "telnet:%s:%s%s",
+                                  def->source.data.tcp.host,
+                                  def->source.data.tcp.service,
+                                  def->source.data.tcp.listen ? ",server,nowait" : "");
+            else
+                ret = virAsprintf(buf, "tcp:%s:%s%s",
+                                  def->source.data.tcp.host,
+                                  def->source.data.tcp.service,
+                                  def->source.data.tcp.listen ? ",server,nowait" : "");
+
+            if (ret < 0)

No need for 'ret' here. See my attached diff, which contains a bit of logic simplification here.

+                return -1;
+            break;
+
+        case VIR_DOMAIN_CHR_TYPE_UNIX:
+            ret = virAsprintf(buf, "unix:%s%s",
+                              def->source.data.nix.path,
+                              def->source.data.nix.listen ? ",server,nowait" : "");
+            if (ret < 0)
+                return -1;
+            break;

There should be a default case to report an error for unsupported types.

+
+    }
+
+    return 0;
+}
+
+static int
 libxlMakeDomBuildInfo(virDomainDefPtr def, libxl_domain_config *d_config)
 {
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -403,6
Re: [libvirt] [PATCH] virLXCMonitorClose: Unlock domain while closing monitor
On 07/24/2013 04:15 AM, Michal Privoznik wrote: There's a race in lxc driver causing a deadlock. If a domain is destroyed immediately after started, the deadlock can occur. When domain is started, the even loop tries to connect to the monitor. If the s/even/event/ connecting succeeds, virLXCProcessMonitorInitNotify() is called with @mon-client locked. The first thing that callee does, is virObjectLock(vm). So the order of locking is: 1) @mon-client, 2) @vm. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
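The lock-ordering rule behind Michal's fix can be sketched in miniature: once the event-loop path takes @mon->client before @vm, the destroy path must not hold @vm while it waits for the client lock. The names below are stand-ins, not libvirt's real structures, and this is an illustration of the general rule rather than the actual patch.

```c
#include <pthread.h>

/* Two locks standing in for @mon->client and @vm. */
static pthread_mutex_t client_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Event-loop path: locks client first, then vm — like
 * virLXCProcessMonitorInitNotify() being called with the client held. */
static int monitor_notify(void)
{
    pthread_mutex_lock(&client_lock);
    pthread_mutex_lock(&vm_lock);
    /* ... deliver the event to the domain ... */
    pthread_mutex_unlock(&vm_lock);
    pthread_mutex_unlock(&client_lock);
    return 0;
}

/* Destroy path: by acquiring in the same client -> vm order (instead of
 * holding vm while waiting for client), it can never deadlock against
 * monitor_notify(). */
static int monitor_close(void)
{
    pthread_mutex_lock(&client_lock);
    pthread_mutex_lock(&vm_lock);
    /* ... tear down the monitor ... */
    pthread_mutex_unlock(&vm_lock);
    pthread_mutex_unlock(&client_lock);
    return 0;
}
```

If the destroy path instead locked vm then client while the event loop held client and wanted vm, each thread would wait on the lock the other holds — the classic ABBA deadlock the patch avoids by unlocking the domain around the monitor close.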
Re: [libvirt] [PATCH] python: virConnect: fix destructor
On 06/17/2013 04:40 AM, Sandro Bonazzola wrote:

Fixed virConnect destructor checking for attribute existance before trying to use it.

s/existance/existence/

Avoids: Exception AttributeError: AttributeError(virConnect instance has no attribute 'domainEventCallbacks',) in bound method virConnect.__del__ of libvirt.virConnect instance at 0x4280f38 ignored

However, something still doesn't work: Exception TypeError: TypeError('NoneType' object is not callable,) in bound method virConnect.__del__ of libvirt.virConnect object at 0x37576d0 ignored

Does that mean this patch still needs work? Is this something we still need to fix before the 1.1.1 release?

Signed-off-by: Sandro Bonazzola sbona...@redhat.com
---
 python/libvirt-override-virConnect.py | 32 +---
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/python/libvirt-override-virConnect.py b/python/libvirt-override-virConnect.py
index 5495b70..28d6d41 100644
--- a/python/libvirt-override-virConnect.py
+++ b/python/libvirt-override-virConnect.py
@@ -1,11 +1,12 @@
     def __del__(self):
-        try:
-            for cb,opaque in self.domainEventCallbacks.items():
-                del self.domainEventCallbacks[cb]
-            del self.domainEventCallbacks
-            libvirtmod.virConnectDomainEventDeregister(self._o, self)
-        except AttributeError:
-            pass
+        if hasattr(self, 'domainEventCallbacks'):
+            try:
+                for cb,opaque in self.domainEventCallbacks.items():
+                    del self.domainEventCallbacks[cb]
+                del self.domainEventCallbacks
+                libvirtmod.virConnectDomainEventDeregister(self._o, self)
+            except AttributeError:
+                pass
         if self._o != None:
             libvirtmod.virConnectClose(self._o)
@@ -14,14 +15,15 @@
     def domainEventDeregister(self, cb):
         Removes a Domain Event Callback.
De-registering for a domain callback will disable delivery of this event type
-        try:
-            del self.domainEventCallbacks[cb]
-            if len(self.domainEventCallbacks) == 0:
-                del self.domainEventCallbacks
-                ret = libvirtmod.virConnectDomainEventDeregister(self._o, self)
-                if ret == -1: raise libvirtError ('virConnectDomainEventDeregister() failed', conn=self)
-        except AttributeError:
-            pass
+        if hasattr(self, 'domainEventCallbacks'):
+            try:
+                del self.domainEventCallbacks[cb]
+                if len(self.domainEventCallbacks) == 0:
+                    del self.domainEventCallbacks
+                    ret = libvirtmod.virConnectDomainEventDeregister(self._o, self)
+                    if ret == -1: raise libvirtError ('virConnectDomainEventDeregister() failed', conn=self)
+            except AttributeError:
+                pass

     def domainEventRegister(self, cb, opaque):
         Adds a Domain Event Callback. Registering for a domain
--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org
[libvirt] [PATCH] virdbustest: Don't pass number of arguments as long long
since sizeof(size_t) != sizeof(long long) on 32bit archs. This unbreaks virdbustest which otherwise fails like: (gdb) bt #0 __strlen_sse2_bsf () at ../sysdeps/i386/i686/multiarch/strlen-sse2-bsf.S:50 #1 0x405907d2 in ?? () from /lib/i386-linux-gnu/libdbus-1.so.3 #2 0x4057c140 in ?? () from /lib/i386-linux-gnu/libdbus-1.so.3 #3 0x4057e7ec in dbus_message_iter_append_basic () from /lib/i386-linux-gnu/libdbus-1.so.3 #4 0x400742ec in virDBusMessageIterEncode (args=0xbfd4b8f0 k\321\004\b., types=0x804d260 , rootiter=0xbfd4b844) at util/virdbus.c:560 #5 virDBusMessageEncodeArgs (msg=msg@entry=0x893c278, types=types@entry=0x804d25c sais, args=args@entry=0xbfd4b8d8 r\320\004\b\003) at util/virdbus.c:921 #6 0x40075917 in virDBusMessageEncode (msg=0x893c278, types=0x804d25c sais) at util/virdbus.c:959 #7 0x0804a4a1 in testMessageArray (args=0x0) at virdbustest.c:195 #8 0x0804c404 in virtTestRun (title=title@entry=0x804cfcb Test message array , nloops=nloops@entry=1, body=body@entry=0x804a3f0 testMessageArray, data=data@entry=0x0) at testutils.c:168 #9 0x08049346 in mymain () at virdbustest.c:384 #10 0x0804cb2e in virtTestMain (argc=argc@entry=1, argv=argv@entry=0xbfd4bb24, func=func@entry=0x80492c0 mymain) at testutils.c:764 #11 0x080491af in main (argc=1, argv=0xbfd4bb24) at virdbustest.c:393 --- tests/virdbustest.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/virdbustest.c b/tests/virdbustest.c index e054716..ac57422 100644 --- a/tests/virdbustest.c +++ b/tests/virdbustest.c @@ -195,7 +195,7 @@ static int testMessageArray(const void *args ATTRIBUTE_UNUSED) if (virDBusMessageEncode(msg, sais, in_str1, - (long long)3, in_int32a, in_int32b, in_int32c, + (size_t)3, in_int32a, in_int32b, in_int32c, in_str2) 0) { VIR_DEBUG(Failed to encode arguments); goto cleanup; -- 1.8.3.2 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] tests: PCI controller checks
On 07/22/2013 08:48 AM, Ján Tomko wrote: Check if PCI bridges with duplicate indexes are rejected. PCI root controllers with non-zero indexes or addresses should also be rejected. --- .../qemuxml2argv-pci-bridge-duplicate-index.xml | 16 tests/qemuxml2argvdata/qemuxml2argv-pci-root-address.xml | 16 .../qemuxml2argv-pci-root-nonzero-index.xml | 14 ++ tests/qemuxml2argvtest.c | 6 ++ 4 files changed, 52 insertions(+) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-pci-bridge-duplicate-index.xml create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-pci-root-address.xml create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-pci-root-nonzero-index.xml ACK. Adding test cases is always safe, even in freeze. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] Cannot compile libvirt
Hi, I am new to libvirt and I am trying to compile libvirt from the git repository. I mainly plan to use it for managing ESX server. However I get the following error during make. *libvirt undefined reference to virNetTLSInit* Here are the steps which I took. 1. Get the code. git clone git://libvirt.org/libvirt.git 2. Run autogen ./autogen.sh --without-xen --with-esx --without-numactl --without-remote --without-libvirtd --without-one --without-openvz --without-qemu --without-uml --without-phyp --without-vbox --without-lxc --without-one --without-test --without-network --without-selinux --with-python 3. make Looks like the following commit somehow separated the gnutls symbols. *commit 83d7e4e4607a96b6959a6b0afd44a10a783fd928 Author: Daniel P. Berrange berra...@redhat.com Date: Wed Mar 13 13:37:29 2013 + Use separate symbol file for GNUTLS symbols* How can I successfully compile libvirt ? Thanks, Abhishek -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] virdbustest: Don't pass number of arguments as long long
On 07/24/2013 03:29 PM, Guido Günther wrote: since sizeof(size_t) != sizeof(long long) on 32bit archs. This unbreaks virdbustest which otherwise fails like: +++ b/tests/virdbustest.c @@ -195,7 +195,7 @@ static int testMessageArray(const void *args ATTRIBUTE_UNUSED) if (virDBusMessageEncode(msg, sais, in_str1, - (long long)3, in_int32a, in_int32b, in_int32c, + (size_t)3, in_int32a, in_int32b, in_int32c, This fix looks correct, but it's annoying that we have to cast the 'a' length argument in every caller. I'm wondering if a better fix would be to virDBusMessageEncode to take an 'int' instead of a 'size_t' arg for a length; even though that is technically an arbitrary limitation on 64-bit platforms, I seriously doubt anyone plans to use dBus to send more than 2G of objects. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
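The breakage Guido's backtrace shows is easy to reproduce in miniature: when a variadic consumer reads a count back with va_arg(ap, size_t), the caller must push exactly a size_t — if it pushes a long long instead, the two sizes disagree on 32-bit targets and every following argument is read from the wrong offset. A toy sketch of the pattern (the function below is invented for illustration; it is not libvirt's virDBusMessageEncode):

```c
#include <stdarg.h>
#include <stddef.h>

/* A consumer that, like the dbus encoder's array handling, pulls the
 * element count out of the va_list as a size_t.  The caller must cast
 * the literal to (size_t); passing (long long)3 would be undefined
 * behavior wherever sizeof(long long) != sizeof(size_t). */
static int sum_n_ints(int dummy, ...)
{
    va_list ap;
    va_start(ap, dummy);

    size_t n = va_arg(ap, size_t);   /* type must match what the caller pushed */
    int sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += va_arg(ap, int);

    va_end(ap);
    return sum;
}
```

Called as `sum_n_ints(0, (size_t)3, 1, 2, 3)` this returns 6; Eric's suggestion of switching the API to a plain `int` count would remove the need for the cast at every call site, at the cost of an (irrelevant in practice) 2G-element limit.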
Re: [libvirt] [PATCH 1/4] Use separate macros for failure/success in vol-to-argv test
On 07/22/2013 08:52 AM, Ján Tomko wrote: Reindent them to put the input volume on a separate line. --- tests/storagevolxml2argvtest.c | 64 +- 1 file changed, 44 insertions(+), 20 deletions(-) diff --git a/tests/storagevolxml2argvtest.c b/tests/storagevolxml2argvtest.c index 89c233f..4f4bf7d 100644 --- a/tests/storagevolxml2argvtest.c +++ b/tests/storagevolxml2argvtest.c @@ -161,7 +161,8 @@ mymain(void) int ret = 0; unsigned int flags = VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA; -#define DO_TEST(shouldFail, pool, vol, inputvol, cmdline, flags, imgformat) \ +#define DO_TEST_FULL(shouldFail, pool, vol, inputvol, cmdline, flags, \ + imgformat) \ Is it worth lining up the \ to a single column? But that's cosmetic, I don't care if you do that or not. ACK. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 2/4] Move pool XML out of storagevolxml2argvdata
On 07/22/2013 08:52 AM, Ján Tomko wrote: Reuse the pool definition from storagepoolxml2xmlin. --- tests/storagevolxml2argvdata/pool-dir.xml | 18 -- tests/storagevolxml2argvtest.c| 2 +- 2 files changed, 1 insertion(+), 19 deletions(-) delete mode 100644 tests/storagevolxml2argvdata/pool-dir.xml ACK. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 3/4] Move volume XMLs out of storagevolxml2argvdata
On 07/22/2013 08:52 AM, Ján Tomko wrote: Reuse the XML files in storagevolxml2xmlin. (This requires changing a few backing files to /dev/null, since virStorageBackendCreateQemuImgCmd checks for its presence) Fair enough; it's a bit annoying that we rely on the actual file system, but /dev/null is safe. --- tests/storagevolxml2argvdata/vol-file.xml | 20 -- ACK. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH 4/4] Add inputpool to storagevolxml2argvtest
On 07/22/2013 08:53 AM, Ján Tomko wrote: This allows testing the command line for cloning file-based volumes into logical volumes and vice versa. --- .../storagevolxml2argvdata/logical-from-qcow2.argv | 2 + .../storagevolxml2argvdata/qcow2-from-logical.argv | 2 + tests/storagevolxml2argvtest.c | 90 +- 3 files changed, 74 insertions(+), 20 deletions(-) create mode 100644 tests/storagevolxml2argvdata/logical-from-qcow2.argv create mode 100644 tests/storagevolxml2argvdata/qcow2-from-logical.argv ACK. -- Eric Blake eblake redhat com+1-919-301-3266 Libvirt virtualization library http://libvirt.org signature.asc Description: OpenPGP digital signature -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] Call for Proposals: 2013 Linux Plumbers Virtualization Microconference
Reminder, there's one week left to submit proposals for the virtualization micro-conference at LPC. Please see below for details and note the update to submit proposals through the Linux Plumbers website: http://www.linuxplumbersconf.org/2013/ocw/events/LPC2013/proposals/new Thanks, Alex On Sun, 2013-07-14 at 15:59 -0600, Alex Williamson wrote: On Fri, 2013-07-12 at 14:38 -0600, Alex Williamson wrote: The Call for Proposals for the 2013 Linux Plumbers Virtualization Microconference is now open. This uconf is being held as part of Linux Plumbers Conference in New Orleans, Louisiana, USA September 18-20th and is co-located with LinuxCon North America. For more information see: http://www.linuxplumbersconf.org/2013/ The tentative deadline for proposals is August 1st. To submit a topic please email a brief abstract to lpc2013-virt...@codemonkey.ws If you require travel assistance (extremely limited) in order to attend, please note that in your submission. Also, please keep an eye on: http://www.linuxplumbersconf.org/2013/submitting-topic/ http://www.linuxplumbersconf.org/2013/participate/ We've setup the above email submission as an interim approach until the LPC program committee brings the official submission tool online. I'll send a follow-up message when that occurs, but please send your proposals as soon as possible. Thanks, And the official tool is now online. Please see: http://www.linuxplumbersconf.org/2013/microconference-discussion-topic-bof-submissions-now-open/ for instructions to propose a discussion topic for the virtualization microconference. Thanks, Alex -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list