Re: GSoC'20 Interested Student: Adding support to Jailhouse Hypervisor

2020-03-30 Thread PRAKHAR BANSAL
Hi Jan,

Thanks for confirming that I can proceed with the project proposal.

Also, I tried installing Jailhouse on my VM after enabling VT-x/EPT and
IOMMU for the VM (guest OS: Ubuntu 18.04) on the VMware Fusion hypervisor
with macOS on the host side.
However, the Jailhouse hardware check failed because of missing
*Preemption timer and Virtualize APIC access*. Could you please suggest
whether this is a hardware limitation? Is there any workaround here?
My laptop's processor is a quad-core Intel i7.


Also, could you please suggest whether I could talk to you through an IRC
or Slack channel? It is a bit hard to communicate over email every time.

Thanks,
Prakhar


On Mon, Mar 30, 2020 at 6:15 AM Jan Kiszka  wrote:

> On 30.03.20 10:02, PRAKHAR BANSAL wrote:
> > Hi Jan,
> >
> > On Sat, Mar 28, 2020 at 4:12 AM Jan Kiszka  > > wrote:
> >
> > On 28.03.20 08:47, PRAKHAR BANSAL wrote:
> >  > Hi Jan,
> >  >
> >  > Thanks for the reply!
> >  >
> >  > I was only considering the command-line tool "code" for reference
> >  > to the Jailhouse kernel API (ioctl calls), because I didn't find
> >  > documentation of the Jailhouse kernel APIs.
> >
> > Right, the IOCTL API is not documented so far. It is currently only
> > used inside the Jailhouse project. It needs to be formalized once
> > there are external users like a libvirt driver.
> >
> > That might be a nice small contribution task: Create some
> > Documentation/driver-interfaces.md that describes the IOCTLs along
> > with their parameter structures and that also includes the current
> > sysfs-entries.txt as a section. Then send this as a patch here. I'll
> > help out when details are not clear from reading the code.
> >
> > Sure. I will do that.
> >
> >  >
> >  > For the second part, as you mentioned, Jailhouse can only create
> >  > cells with the constraints defined in the root cell configuration.
> >  > I have a few questions regarding that.
> >  >
> >  > 1. Is there a way to know if Jailhouse is enabled on the host and
> >  > to get the root cell configuration(s) from Jailhouse through an
> >  > API? This could be used while binding libvirt to the Jailhouse
> >  > hypervisor.
> >
> > Look at
> > https://github.com/siemens/jailhouse/blob/master/Documentation/sysfs-entries.txt
> > for what is reported as runtime information. Full configurations can't
> > be read back at this point. This might be reconsidered in the light of
> > [1], but I wouldn't plan for that yet.
> >
> >
> > Ok, sure. I am looking into it.
> >
> >
> >  >
> >  > 2. If Jailhouse is not enabled (again, can we know this using some
> >  > API?), can libvirt enable/disable Jailhouse during the libvirt
> >  > binding of the Jailhouse driver, given a set of Jailhouse cell
> >  > configurations describing a complete partitioned system?
> >
> > With the API above and a given configuration set, yes. The config set
> > would have to be provided to the libvirt driver in some to-be-defined
> > way (e.g. /etc/libvirt/jailhouse.conf -> /etc/libvirt/jailhouse/*.cell).
> >
> > Cool, got it. Thanks!
> >
> >  >
> >  > 3. I was wondering, as you mentioned that the libvirt driver should
> >  > check for a mismatch of the cell configuration with the root cell
> >  > configuration: isn't that done by Jailhouse itself? If yes, then
> >  > libvirt can just pass the cell creation requests on to Jailhouse
> >  > and return the response to the user as is, rather than the driver
> >  > doing any such mismatch check.
> >
> > With matching I'm referring to a libvirt user request like "create a
> > domain with 2 CPUs", while there are no cells left that have more than
> > one CPU. Or "give the domain 1G RAM", and you need to find an available
> > cell with that much memory. Those are simple examples. A request that
> > states "connect the domain to the host network A" implies that a cell
> > has a shared-memory link to, say, the root cell that can be configured
> > to bridge this. But let's keep that for later and start as simple as
> > possible.
> >
> >
> > Do I need to match the libvirt user-requested cell config with only root
> > cells or with all cells present at that time?
>
> With all non-root cells - the root cell will be occupied already (it
> runs libvirt e.g.).
>
> >
> > I wanted to ask you a favor regarding the proposal, as the deadline is
> > approaching. Could I prepare a proposal for this project based on our
> > discussion here and improve it later based on feedback comments after
> > the deadline? I understand that I was late in starting the project
> > search and selection.
>
> Sure, please go ahead.
>
> Jan
>
> >
> > Thanks,
> > Prakhar
> >
> >
> > Jan
> >
> > [1]
> 

[GSoC][RPC] Project Proposal: "Introducing Job control to the storage driver"

2020-03-30 Thread Prathamesh Chavan
GSoC Proposal - 2020


Project Name: Introducing Job control to the storage driver
===

About Me:
=

Name:   Prathamesh Chavan
University: Indian Institute of Technology Kharagpur
Major:  Computer Science and Engineering (Dual Degree)
Email:  pc44...@gmail.com
Blog:   pratham-pc.github.io
Contact:+91-993-235-8333
Time Zone:  IST (UTC +5:30)


Background:
===

I am a final-year dual degree (B.Tech & M.Tech) student in the Department
of Computer Science and Engineering at IIT Kharagpur. During my first year
of college, I was introduced to open source through the Kharagpur Open
Source Society, and later I became part of it.

As my master's thesis project, I am working on a tiered file system with
software wear management for NVM technologies. I have always wanted to get
involved in storage technologies and their development process, and Google
Summer of Code is a great way to achieve that.

I was part of Google Summer of Code in my second year of college, under
the Git organization. It was my first experience with a large codebase.
Information related to it is available in GSoC 2017's archive[1].

Last summer, I also interned at Nutanix. I worked on Logbay, a
configuration-based data collection, archiving and streaming tool used by
all services on the Nutanix Controller VM (CVM). I added multi-platform
support to Logbay by identifying all of its dependencies on the other
CVM-based services and introducing an interface between these dependencies
and Logbay-Core, allowing it to be used on platforms where they aren't
available. I also implemented the interface on a Dev-VM and added
multi-port support, allowing multiple instances of Logbay to run on a
single Dev-VM to simulate a multi-node cluster, which let developers test
their changes on their Dev-VM itself.


The Project:


Introducing job control to the storage driver
Summary: Implement abstract job control and use it to improve storage driver.
Mentor: Pavel Hrdina

Abstract:
=

Currently, libvirt supports job cancellation and progress reporting on
domains. That is, if there is a long-running job on a domain, e.g. a
migration, libvirt reports how much data has already been transferred to
the destination and how much still needs to be transferred. However,
libvirt lacks such reporting in the storage area, which libvirt developers
refer to as the storage driver. The aim is to report progress on several
storage tasks, like volume wiping, file allocation and others.


Job Control in Domain:
==

In src/qemu/qemu_domain.h we find the struct qemuDomainJobObj, the job
object for domains. It is used to coordinate between jobs, to identify
which API call owns the job object, and to hold additional information
about normal jobs, agent jobs and async jobs. This qemuDomainJobObj is
part of another struct, qemuDomainObjPrivate, which is the object the
driver's APIs mainly interact with when running jobs.

Whenever an API call is made, depending upon the type of the job, specific
locks are acquired and then the job is carried out. Exact design details
regarding the implementation of such APIs are present in
`src/qemu/THREADS.txt`.


Job Control in Storage:
===

Whenever an API call is made on a storage volume, the member `in_use` of
the struct `virStorageVolDef` is used as a reference counter. This allows
us to check whether the storage volume is already in use, and whether the
current API call can be carried out. Once the API call exits, the
reference counter is decremented.


Reporting of job-progress as done in case of Domains:
=

Additionally, when an async job is running, it carries a
qemuDomainJobInfo, which stores the progress data of the async job;
another qemuDomainJobInfo stores the statistics of the most recently
completed job.

The functions virDomainGetJobInfo() and virDomainGetJobStats(), present in
libvirt-domain.c, help extract information about the progress of a
background job on a domain.


Plan to implement something similar in Storage Jobs:


Firstly, it is important to bring the notion of jobs to the storage
driver. Right now the API calls execute directly once the required mutex
locks are acquired, which gives the other API calls little information
about what is currently running or holding the locks. Further, the domain
jobs carry a lot of additional information which would be useful for the
storage API calls as well.

The first step is identifying which API calls operate on storage volumes
in the storage driver, and classifying them into something similar to
normal jobs and async jobs (the long-running ones). Also, some of the API
calls will not be acquiring a job (ones which 

[PATCH v3 1/3] qemu: capabilities: add QEMU_CAPS_VIRTFS_MULTIDEVS

2020-03-30 Thread Christian Schoenebeck
The QEMU 9pfs 'multidevs' option exists since QEMU 4.2. Probe QEMU's
supported command line options to check whether this option is really
available and, if so, enable the new QEMU_CAPS_VIRTFS_MULTIDEVS
capability on the libvirt side.

Signed-off-by: Christian Schoenebeck 
---
 src/qemu/qemu_capabilities.c  | 2 ++
 src/qemu/qemu_capabilities.h  | 1 +
 tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml | 1 +
 tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml   | 1 +
 tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml  | 1 +
 tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml | 1 +
 tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml   | 1 +
 tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml  | 1 +
 8 files changed, 9 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index a95a60c36a..5514f8eabf 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -567,6 +567,7 @@ VIR_ENUM_IMPL(virQEMUCaps,
   "query-named-block-nodes.flat",
   "blockdev-snapshot.allow-write-only-overlay",
   "blockdev-reopen",
+  "virtfs-multidevs",
 );
 
 
@@ -3156,6 +3157,7 @@ static struct virQEMUCapsCommandLineProps 
virQEMUCapsCommandLine[] = {
 { "chardev", "fd", QEMU_CAPS_CHARDEV_FD_PASS },
 { "overcommit", NULL, QEMU_CAPS_OVERCOMMIT },
 { "smp-opts", "dies", QEMU_CAPS_SMP_DIES },
+{ "fsdev", "multidevs", QEMU_CAPS_VIRTFS_MULTIDEVS },
 };
 
 static int
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index f0961e273c..bd9f9f3537 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -548,6 +548,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for 
syntax-check */
 QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT, /* query-named-block-nodes 
supports the 'flat' option */
 QEMU_CAPS_BLOCKDEV_SNAPSHOT_ALLOW_WRITE_ONLY, /* blockdev-snapshot has the 
'allow-write-only-overlay' feature */
 QEMU_CAPS_BLOCKDEV_REOPEN, /* 'blockdev-reopen' qmp command is supported */
+QEMU_CAPS_VIRTFS_MULTIDEVS, /* fsdev.multidevs */
 
 QEMU_CAPS_LAST /* this must always be the last item */
 } virQEMUCapsFlags;
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml 
b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
index 640ce29c8c..4874721305 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml
@@ -179,6 +179,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4001050
   0
   61700242
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml 
b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
index 37776e1bbe..e2df6e3af0 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml
@@ -138,6 +138,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4001050
   0
   39100242
diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml 
b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
index 83e804ea36..d8b0de46cd 100644
--- a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml
@@ -223,6 +223,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4002000
   0
   43100242
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml 
b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
index e52c60607d..3a695fbe79 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml
@@ -181,6 +181,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4002050
   0
   61700241
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml 
b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
index 85fdb21c56..2c194301db 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml
@@ -189,6 +189,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4002050
   0
   42900241
diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml 
b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
index d773f7e356..95fa0813dd 100644
--- a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
+++ b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml
@@ -226,6 +226,7 @@
   
   
   
+  <flag name='virtfs-multidevs'/>
   4002050
   0
   43100241
-- 
2.20.1




[PATCH v3 3/3] qemu: add support for 'multidevs' option

2020-03-30 Thread Christian Schoenebeck
This option prevents misbehaviours on the guest if a QEMU 9pfs export
contains multiple devices, due to the potential file ID collisions this
may otherwise cause.

Signed-off-by: Christian Schoenebeck 
---
 src/qemu/qemu_command.c   |  7 +++
 src/qemu/qemu_domain.c| 12 +
 .../virtio-9p-multidevs.x86_64-latest.args| 45 +++
 tests/qemuxml2argvtest.c  |  2 +
 4 files changed, 66 insertions(+)
 create mode 100644 
tests/qemuxml2argvdata/virtio-9p-multidevs.x86_64-latest.args

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index d1b689dfd3..5b774973b7 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -2632,6 +2632,13 @@ qemuBuildFSStr(virDomainFSDefPtr fs)
 } else if (fs->accessmode == VIR_DOMAIN_FS_ACCESSMODE_SQUASH) {
 virBufferAddLit(, ",security_model=none");
 }
+if (fs->multidevs == VIR_DOMAIN_FS_MULTIDEVS_REMAP) {
+virBufferAddLit(, ",multidevs=remap");
+} else if (fs->multidevs == VIR_DOMAIN_FS_MULTIDEVS_FORBID) {
+virBufferAddLit(, ",multidevs=forbid");
+} else if (fs->multidevs == VIR_DOMAIN_FS_MULTIDEVS_WARN) {
+virBufferAddLit(, ",multidevs=warn");
+}
 } else if (fs->fsdriver == VIR_DOMAIN_FS_DRIVER_TYPE_HANDLE) {
 /* removed since qemu 4.0.0 see v3.1.0-29-g93aee84f57 */
 virBufferAddLit(, "handle");
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index adda17a49f..21a01f2908 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -8569,6 +8569,13 @@ qemuDomainDeviceDefValidateFS(virDomainFSDefPtr fs,
_("only supports mount filesystem type"));
 return -1;
 }
+if (fs->multidevs != VIR_DOMAIN_FS_MULTIDEVS_DEFAULT &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_VIRTFS_MULTIDEVS))
+{
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("multidevs is not supported with this QEMU binary"));
+return -1;
+}
 
 switch ((virDomainFSDriverType) fs->fsdriver) {
 case VIR_DOMAIN_FS_DRIVER_TYPE_DEFAULT:
@@ -8621,6 +8628,11 @@ qemuDomainDeviceDefValidateFS(virDomainFSDefPtr fs,
_("virtiofs is not supported with this QEMU 
binary"));
 return -1;
 }
+if (fs->multidevs != VIR_DOMAIN_FS_MULTIDEVS_DEFAULT) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("virtiofs does not support multidevs"));
+return -1;
+}
 if (qemuDomainDefValidateVirtioFSSharedMemory(def) < 0)
 return -1;
 break;
diff --git a/tests/qemuxml2argvdata/virtio-9p-multidevs.x86_64-latest.args 
b/tests/qemuxml2argvdata/virtio-9p-multidevs.x86_64-latest.args
new file mode 100644
index 00..5a2e1fafe1
--- /dev/null
+++ b/tests/qemuxml2argvdata/virtio-9p-multidevs.x86_64-latest.args
@@ -0,0 +1,45 @@
+LC_ALL=C \
+PATH=/bin \
+HOME=/tmp/lib/domain--1-QEMUGuest1 \
+USER=test \
+LOGNAME=test \
+XDG_DATA_HOME=/tmp/lib/domain--1-QEMUGuest1/.local/share \
+XDG_CACHE_HOME=/tmp/lib/domain--1-QEMUGuest1/.cache \
+XDG_CONFIG_HOME=/tmp/lib/domain--1-QEMUGuest1/.config \
+QEMU_AUDIO_DRV=none \
+/usr/bin/qemu-system-x86_64 \
+-name guest=QEMUGuest1,debug-threads=on \
+-S \
+-object secret,id=masterKey0,format=raw,\
+file=/tmp/lib/domain--1-QEMUGuest1/master-key.aes \
+-machine pc,accel=tcg,usb=off,dump-guest-core=off \
+-cpu qemu64 \
+-m 214 \
+-overcommit mem-lock=off \
+-smp 1,sockets=1,cores=1,threads=1 \
+-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \
+-display none \
+-no-user-config \
+-nodefaults \
+-chardev socket,id=charmonitor,fd=1729,server,nowait \
+-mon chardev=charmonitor,id=monitor,mode=control \
+-rtc base=utc \
+-no-shutdown \
+-no-acpi \
+-boot strict=on \
+-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
+-fsdev local,security_model=mapped,multidevs=remap,id=fsdev-fs0,\
+path=/export/fs0 \
+-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=fs0,bus=pci.0,addr=0x2 \
+-fsdev local,security_model=mapped,multidevs=forbid,id=fsdev-fs1,\
+path=/export/fs1 \
+-device virtio-9p-pci,id=fs1,fsdev=fsdev-fs1,mount_tag=fs1,bus=pci.0,addr=0x3 \
+-fsdev local,security_model=mapped,multidevs=warn,id=fsdev-fs2,\
+path=/export/fs2 \
+-device virtio-9p-pci,id=fs2,fsdev=fsdev-fs2,mount_tag=fs2,bus=pci.0,addr=0x4 \
+-chardev pty,id=charserial0 \
+-device isa-serial,chardev=charserial0,id=serial0 \
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xc \
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,\
+resourcecontrol=deny \
+-msg timestamp=on
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 4d44286b5a..e0fc3ddf67 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -3122,6 +3122,8 @@ mymain(void)
 DO_TEST_CAPS_ARCH_LATEST("x86_64-default-cpu-kvm-q35-4.2", "x86_64");
 

[PATCH v3 0/3] add support for QEMU 9pfs 'multidevs' option

2020-03-30 Thread Christian Schoenebeck
QEMU 4.2 added a new option 'multidevs' for 9pfs. The following patch adds
support for this new option to libvirt.

In short, what this is about: to distinguish files uniquely from each
other, numeric file IDs are typically used for comparison; in practice
this is the combination of a file's device ID and its inode number.
Unfortunately, the 9p protocol's QID field used for this purpose is
currently too small to fit both the device ID and the inode number, which
is a problem if one 9pfs export contains multiple devices and may thus
lead to misbehaviours on the guest (e.g. with Samba file servers) due to
potential file ID collisions.

To mitigate this problem with 9pfs, a 'multidevs' option was introduced in
QEMU 4.2 to define how to deal with it; e.g. multidevs=remap will cause
QEMU's 9pfs implementation to remap all inodes from the host side to
different inode numbers on the guest side in a way that prevents file ID
collisions.

NOTE: In the libvirt documentation changes of this patch I simply assumed
"since 6.2.0", so the final libvirt version number would need to be
adjusted in that text if necessary.

See QEMU discussion with following Message-ID for details:
8a2ffe17fda3a86b9a5a437e1245276881f1e235.1567680121.git.qemu_...@crudebyte.com

v2->v3:

  * Rebased to master (SHA-1 e4bf03b8ff).

  * Auto sense QEMU capability for command line option fsdev.multidevs
instead of checking for QEMU version 4.2. [patch 1]

  * Auto regenerated capabilities data for all archs (fixes capability
test). [patch 1]

  * Added XML test. [patch 2]

  * Added argv test. [patch 3]

Message-ID of v2: cover.1584723662.git.qemu_...@crudebyte.com

Christian Schoenebeck (3):
  qemu: capabilities: add QEMU_CAPS_VIRTFS_MULTIDEVS
  conf: add 'multidevs' option
  qemu: add support for 'multidevs' option

 docs/formatdomain.html.in | 40 -
 docs/schemas/domaincommon.rng | 10 
 src/conf/domain_conf.c| 29 ++
 src/conf/domain_conf.h| 13 +
 src/qemu/qemu_capabilities.c  |  2 +
 src/qemu/qemu_capabilities.h  |  1 +
 src/qemu/qemu_command.c   |  7 +++
 src/qemu/qemu_domain.c| 12 
 .../caps_4.2.0.aarch64.xml|  1 +
 .../qemucapabilitiesdata/caps_4.2.0.s390x.xml |  1 +
 .../caps_4.2.0.x86_64.xml |  1 +
 .../caps_5.0.0.aarch64.xml|  1 +
 .../qemucapabilitiesdata/caps_5.0.0.ppc64.xml |  1 +
 .../caps_5.0.0.x86_64.xml |  1 +
 .../virtio-9p-multidevs.x86_64-latest.args| 45 +++
 .../qemuxml2argvdata/virtio-9p-multidevs.xml  | 53 ++
 tests/qemuxml2argvtest.c  |  2 +
 .../virtio-9p-multidevs.x86_64-latest.xml | 56 +++
 tests/qemuxml2xmltest.c   |  2 +
 19 files changed, 277 insertions(+), 1 deletion(-)
 create mode 100644 
tests/qemuxml2argvdata/virtio-9p-multidevs.x86_64-latest.args
 create mode 100644 tests/qemuxml2argvdata/virtio-9p-multidevs.xml
 create mode 100644 
tests/qemuxml2xmloutdata/virtio-9p-multidevs.x86_64-latest.xml

-- 
2.20.1




[PATCH v3 2/3] conf: add 'multidevs' option

2020-03-30 Thread Christian Schoenebeck
Introduce new 'multidevs' option for filesystem.

  <filesystem type='mount' accessmode='passthrough' multidevs='remap'>
    ...
  </filesystem>
This option prevents misbehaviours on the guest if a QEMU 9pfs export
contains multiple devices, due to the potential file ID collisions this
may otherwise cause.

Signed-off-by: Christian Schoenebeck 
---
 docs/formatdomain.html.in | 40 -
 docs/schemas/domaincommon.rng | 10 
 src/conf/domain_conf.c| 29 ++
 src/conf/domain_conf.h| 13 +
 .../qemuxml2argvdata/virtio-9p-multidevs.xml  | 53 ++
 .../virtio-9p-multidevs.x86_64-latest.xml | 56 +++
 tests/qemuxml2xmltest.c   |  2 +
 7 files changed, 202 insertions(+), 1 deletion(-)
 create mode 100644 tests/qemuxml2argvdata/virtio-9p-multidevs.xml
 create mode 100644 
tests/qemuxml2xmloutdata/virtio-9p-multidevs.x86_64-latest.xml

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 28e779b90a..43e8a557c0 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -3967,7 +3967,7 @@
 source name='my-vm-template'/
 target dir='/'/
   /filesystem
-  filesystem type='mount' accessmode='passthrough'
+  filesystem type='mount' accessmode='passthrough' multidevs='remap'
 driver type='path' wrpolicy='immediate'/
 source dir='/export/to/guest'/
 target dir='/import/from/host'/
@@ -4092,6 +4092,44 @@
   for more details.
   
 
+  
+  The filesystem element has an optional attribute multidevs
+  which specifies how to deal with a filesystem export containing more than
+  one device, in order to avoid file ID collisions on guest when using 9pfs
+  (since 6.2.0, requires QEMU 4.2).
+  This attribute is not available for virtiofs. The possible values are:
+  
+
+
+default
+
+Use QEMU's default setting (which currently is warn).
+
+remap
+
+This setting allows guest to access multiple devices per export without
+encountering misbehaviours. Inode numbers from host are automatically
+remapped on guest to actively prevent file ID collisions if guest
+accesses one export containing multiple devices.
+
+forbid
+
+Only allow to access one device per export by guest. Attempts to access
+additional devices on the same export will cause the individual
+filesystem access by guest to fail with an error and being logged 
(once)
+as error on host side.
+
+warn
+
+This setting resembles the behaviour of 9pfs prior to QEMU 4.2, that is
+no action is performed to prevent any potential file ID collisions if 
an
+export contains multiple devices, with the only exception: a warning is
+logged (once) on host side now. This setting may lead to misbehaviours
+on guest side if more than one device is exported per export, due to 
the
+potential file ID collisions this may cause on guest side in that case.
+
+
+
   
 
   
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 5de17593c1..6bfb8e8d27 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -2700,6 +2700,16 @@
 
   
 
+
+  
+
+  default
+  remap
+  forbid
+  warn
+
+  
+
 
   
 
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 27bc5a797b..011d244ed6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -502,6 +502,14 @@ VIR_ENUM_IMPL(virDomainFSModel,
   "virtio-non-transitional",
 );
 
+VIR_ENUM_IMPL(virDomainFSMultidevs,
+  VIR_DOMAIN_FS_MULTIDEVS_LAST,
+  "default",
+  "remap",
+  "forbid",
+  "warn",
+);
+
 VIR_ENUM_IMPL(virDomainFSCacheMode,
   VIR_DOMAIN_FS_CACHE_MODE_LAST,
   "default",
@@ -11385,6 +11393,7 @@ virDomainFSDefParseXML(virDomainXMLOptionPtr xmlopt,
 g_autofree char *usage = NULL;
 g_autofree char *units = NULL;
 g_autofree char *model = NULL;
+g_autofree char *multidevs = NULL;
 
 ctxt->node = node;
 
@@ -11423,6 +11432,17 @@ virDomainFSDefParseXML(virDomainXMLOptionPtr xmlopt,
 }
 }
 
+multidevs = virXMLPropString(node, "multidevs");
+if (multidevs) {
+if ((def->multidevs = virDomainFSMultidevsTypeFromString(multidevs)) < 
0) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+   _("unknown multidevs '%s'"), multidevs);
+goto error;
+}
+} else {
+def->multidevs = VIR_DOMAIN_FS_MULTIDEVS_DEFAULT;
+}
+
 if (virDomainParseScaledValue("./space_hard_limit[1]",
   NULL, ctxt, 

[libvirt PATCH 2/2] gitlab: Enable improved ccache usage

2020-03-30 Thread Andrea Bolognani
Setting CC="ccache cc" works in most cases, but sometimes it will
break the build: in particular, we have experienced issues in the
past with that approach when using cgo to build our Go bindings.

A more robust approach is to have a directory containing symlinks
from the compiler name to the ccache binary: in that case, ccache
itself will invoke the compiler, and the build system will be none
the wiser.

Since libvirt-jenkins-ci commit 2563aebb6c5c, container images
contain a suitable symlink directory, so all that's needed to
enable the new approach is to add this directory to $PATH.

Since we're touching this anyway, we make a few more changes:
$CCACHE_DIR is no longer created manually, because ccache will take care
of creating it for us if it doesn't already exist; the ccache setup is
moved out of the job templates and into script_variables, removing
unnecessary duplication; and a limit is set on the size of the cache
(500 MB, which is twice the amount used by a fresh build on my Fedora 31
machine).

Signed-off-by: Andrea Bolognani 
---
 .gitlab-ci.yml | 12 
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 3203434b99..99e7b510c7 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -8,6 +8,10 @@ stages:
 
.script_variables: &script_variables |
   export MAKEFLAGS="-j$(getconf _NPROCESSORS_ONLN)"
+  export CCACHE_BASEDIR="$(pwd)"
+  export CCACHE_DIR="$CCACHE_BASEDIR/ccache"
+  export CCACHE_MAXSIZE="500M"
+  export PATH="$CCACHE_WRAPPERSDIR:$PATH"
 
 # Common templates
 
@@ -20,10 +24,6 @@ stages:
 key: "$CI_JOB_NAME"
   before_script:
 - *script_variables
-- mkdir -p ccache
-- export CC="ccache gcc"
-- export CCACHE_BASEDIR=${PWD}
-- export CCACHE_DIR=${PWD}/ccache
   script:
 - mkdir build
 - cd build
@@ -48,10 +48,6 @@ stages:
 key: "$CI_JOB_NAME"
   before_script:
 - *script_variables
-- mkdir -p ccache
-- export CC="ccache ${ABI}-gcc"
-- export CCACHE_BASEDIR=${PWD}
-- export CCACHE_DIR=${PWD}/ccache
   script:
 - mkdir build
 - cd build
-- 
2.25.1



[libvirt PATCH 1/2] gitlab: Don't define $MAKE

2020-03-30 Thread Andrea Bolognani
Since libvirt-jenkins-ci commit 27cfddee8835, paths to build tools
such as ninja and make are exported in the container's environment
and can be used directly.

Signed-off-by: Andrea Bolognani 
---
 .gitlab-ci.yml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index beeae1df05..3203434b99 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -1,5 +1,4 @@
 variables:
-  MAKE: make
   GIT_DEPTH: 100
 
 stages:
-- 
2.25.1



[libvirt PATCH 0/2] gitlab: Clean up and improve ccache usage

2020-03-30 Thread Andrea Bolognani
Test pipeline: https://gitlab.com/abologna/libvirt/pipelines/131128028

Andrea Bolognani (2):
  gitlab: Don't define $MAKE
  gitlab: Enable improved ccache usage

 .gitlab-ci.yml | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

-- 
2.25.1



Re: [libvirt PATCH] travis: delete all Linux jobs

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 07:05:26PM +0200, Andrea Bolognani wrote:
> On Mon, 2020-03-30 at 17:04 +0100, Daniel P. Berrangé wrote:
> > The Fedora rawhide job started failing with the latest container build.
> > Since we now have broad CI coverage on GitLab CI, there's not a strong
> > reason to continue using Travis for Linux jobs. Deleting the redundant
> > jobs is a better use of time than trying to debug the failure.
> 
> Especially since it works fine both on GitLab, using the same
> container image, and even locally, using the very same build script.
> 
>   Reviewed-by: Andrea Bolognani 
> 
> and safe for freeze.
> 
> My only concern is that our ci/Makefile scaffolding will bitrot now
> that it will no longer be exercised directly through CI... Perhaps we
> could leave a single Linux job enabled on Travis CI for the sole
> purpose of preventing that?

True, but I'd rather eliminate the duplicate failures being reported by
Travis that we're already getting from GitLab, and thus stop looking at
Travis as much as possible, so that if it fails we know it is macOS
related.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt PATCH] travis: delete all Linux jobs

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 17:04 +0100, Daniel P. Berrangé wrote:
> The Fedora rawhide job started failing with the latest container build.
> Since we now have broad CI coverage on GitLab CI, there's not a strong
> reason to continue using Travis for Linux jobs. Deleting the redundant
> jobs is a better use of time than trying to debug the failure.

Especially since it works fine both on GitLab, using the same
container image, and even locally, using the very same build script.

  Reviewed-by: Andrea Bolognani 

and safe for freeze.

My only concern is that our ci/Makefile scaffolding will bitrot now
that it will no longer be exercised directly through CI... Perhaps we
could leave a single Linux job enabled on Travis CI for the sole
purpose of preventing that?

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH v2 05/11] qemu: Introduce virQEMUDriverGetEmbedRoot

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 06:20:28PM +0200, Michal Prívozník wrote:
> On 30. 3. 2020 13:16, Daniel P. Berrangé wrote:
> > On Thu, Mar 26, 2020 at 04:15:09PM +0100, Michal Privoznik wrote:
> >> This function returns embeddedRoot member of the driver
> >> structure.
> >>
> >> Signed-off-by: Michal Privoznik 
> >> Reviewed-by: Andrea Bolognani 
> >> Reviewed-by: Daniel Henrique Barboza 
> >> ---
> >>  src/qemu/qemu_conf.c | 12 
> >>  src/qemu/qemu_conf.h |  1 +
> >>  2 files changed, 13 insertions(+)
> >>
> >> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
> >> index 9786e19f8f..fdc6f53ad3 100644
> >> --- a/src/qemu/qemu_conf.c
> >> +++ b/src/qemu/qemu_conf.c
> >> @@ -1230,6 +1230,18 @@ virQEMUDriverIsPrivileged(virQEMUDriverPtr driver)
> >>  return driver->privileged;
> >>  }
> >>  
> >> +/* virQEMUDriverGetEmbedRoot:
> >> + * @driver: the QEMU driver
> >> + *
> >> + * Returns root directory specified in connection URI for embed
> >> + * mode, NULL otherwise.
> >> + */
> >> +const char *
> >> +virQEMUDriverGetEmbedRoot(virQEMUDriverPtr driver)
> >> +{
> >> +return driver->embeddedRoot;
> >> +}
> > 
> > I don't really see the benefit in this method. The embeddedRoot
> > field is immutable so we can just access it directly with
> > no need for API wrappers, as we often do when accessing the
> > privileged field.
> 
> I just wanted to follow what virQEMUDriverIsPrivileged() is doing. I
> don't care honestly.

Hmm, I had forgotten  virQEMUDriverIsPrivileged() exists - we don't
seem to be using that consistently either.

My preference is to have neither of those methods.

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



Re: [PATCH v2 02/11] qemu: Drop two layers of nesting of memoryBackingDir

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 06:16:29PM +0200, Michal Prívozník wrote:
> On 30. 3. 2020 13:09, Daniel P. Berrangé wrote:
> > On Thu, Mar 26, 2020 at 04:15:06PM +0100, Michal Privoznik wrote:
> >> Initially introduced in v3.10.0-rc1~172.
> >>
> >> When generating a path for memory-backend-file or -mem-path, qemu
> >> driver will use the following pattern:
> >>
> >>   $memoryBackingDir/libvirt/qemu/$id-$shortName
> >>
> >> where $memoryBackingDir defaults to /var/lib/libvirt/qemu/ram but
> >> can be overridden in qemu.conf. Anyway, the "/libvirt/qemu/" part
> >> looks redundant, because it's already contained in the default,
> >> or creates unnecessary nesting if overridden in qemu.conf.
> > 
> > I think this was copied from our earlier huge pages code
> > which used /dev/hugepages, and then added "/libvirt/qemu"
> > to it.
> > 
> > Now we're stripping off "/libvirt/qemu" though, we're liable
> > to a filename clashes if someone edits qemu.conf to point to
> > /dev/shm.
> > 
> > IOW, even though "/libvirt/qemu" is redundant when using our
> > default path, I think we need to keep it to avoid clashing
> > with custom paths.
> 
> Alright. So what if I'd squash this in?
> 
> diff --git i/src/qemu/qemu_conf.c w/src/qemu/qemu_conf.c
> index 9786e19f8f..2ead5d5920 100644
> --- i/src/qemu/qemu_conf.c
> +++ w/src/qemu/qemu_conf.c
> @@ -970,7 +970,18 @@ static int
>  virQEMUDriverConfigLoadMemoryEntry(virQEMUDriverConfigPtr cfg,
>                                     virConfPtr conf)
>  {
> -    return virConfGetValueString(conf, "memory_backing_dir",
> -                                 &cfg->memoryBackingDir);
> +    char *dir = NULL;
> +    int rc;
> +
> +    if ((rc = virConfGetValueString(conf, "memory_backing_dir", &dir)) < 0) {
> +        return -1;
> +    } else if (rc > 0) {
> +        VIR_FREE(cfg->memoryBackingDir);
> +        cfg->memoryBackingDir = g_strdup_printf("%s/libvirt/qemu", dir);
> +        return 1;
> +    }
> +
> +    return 0;
>  }
> 
> 
> This way we would drop "/libvirt/qemu" in the default configuration and keep
> it if the user overrides it in qemu.conf.

Yeah, I think that works.

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



Re: [PATCH v2 05/11] qemu: Introduce virQEMUDriverGetEmbedRoot

2020-03-30 Thread Michal Prívozník
On 30. 3. 2020 13:16, Daniel P. Berrangé wrote:
> On Thu, Mar 26, 2020 at 04:15:09PM +0100, Michal Privoznik wrote:
>> This function returns embeddedRoot member of the driver
>> structure.
>>
>> Signed-off-by: Michal Privoznik 
>> Reviewed-by: Andrea Bolognani 
>> Reviewed-by: Daniel Henrique Barboza 
>> ---
>>  src/qemu/qemu_conf.c | 12 
>>  src/qemu/qemu_conf.h |  1 +
>>  2 files changed, 13 insertions(+)
>>
>> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
>> index 9786e19f8f..fdc6f53ad3 100644
>> --- a/src/qemu/qemu_conf.c
>> +++ b/src/qemu/qemu_conf.c
>> @@ -1230,6 +1230,18 @@ virQEMUDriverIsPrivileged(virQEMUDriverPtr driver)
>>  return driver->privileged;
>>  }
>>  
>> +/* virQEMUDriverGetEmbedRoot:
>> + * @driver: the QEMU driver
>> + *
>> + * Returns root directory specified in connection URI for embed
>> + * mode, NULL otherwise.
>> + */
>> +const char *
>> +virQEMUDriverGetEmbedRoot(virQEMUDriverPtr driver)
>> +{
>> +return driver->embeddedRoot;
>> +}
> 
> I don't really see the benefit in this method. The embeddedRoot
> field is immutable so we can just access it directly with
> no need for API wrappers, as we often do when accessing the
> privileged field.

I just wanted to follow what virQEMUDriverIsPrivileged() is doing. I
don't care honestly.

Michal



Re: [PATCH v2 02/11] qemu: Drop two layers of nesting of memoryBackingDir

2020-03-30 Thread Michal Prívozník
On 30. 3. 2020 13:09, Daniel P. Berrangé wrote:
> On Thu, Mar 26, 2020 at 04:15:06PM +0100, Michal Privoznik wrote:
>> Initially introduced in v3.10.0-rc1~172.
>>
>> When generating a path for memory-backend-file or -mem-path, qemu
>> driver will use the following pattern:
>>
>>   $memoryBackingDir/libvirt/qemu/$id-$shortName
>>
>> where $memoryBackingDir defaults to /var/lib/libvirt/qemu/ram but
>> can be overridden in qemu.conf. Anyway, the "/libvirt/qemu/" part
>> looks redundant, because it's already contained in the default,
>> or creates unnecessary nesting if overridden in qemu.conf.
> 
> I think this was copied from our earlier huge pages code
> which used /dev/hugepages, and then added "/libvirt/qemu"
> to it.
> 
> Now we're stripping off "/libvirt/qemu" though, we're liable
> to a filename clashes if someone edits qemu.conf to point to
> /dev/shm.
> 
> IOW, even though "/libvirt/qemu" is redundant when using our
> default path, I think we need to keep it to avoid clashing
> with custom paths.

Alright. So what if I'd squash this in?

diff --git i/src/qemu/qemu_conf.c w/src/qemu/qemu_conf.c
index 9786e19f8f..2ead5d5920 100644
--- i/src/qemu/qemu_conf.c
+++ w/src/qemu/qemu_conf.c
@@ -970,7 +970,18 @@ static int
 virQEMUDriverConfigLoadMemoryEntry(virQEMUDriverConfigPtr cfg,
                                    virConfPtr conf)
 {
-    return virConfGetValueString(conf, "memory_backing_dir",
-                                 &cfg->memoryBackingDir);
+    char *dir = NULL;
+    int rc;
+
+    if ((rc = virConfGetValueString(conf, "memory_backing_dir", &dir)) < 0) {
+        return -1;
+    } else if (rc > 0) {
+        VIR_FREE(cfg->memoryBackingDir);
+        cfg->memoryBackingDir = g_strdup_printf("%s/libvirt/qemu", dir);
+        return 1;
+    }
+
+    return 0;
 }


This way we would drop "/libvirt/qemu" in the default configuration and keep
it if the user overrides it in qemu.conf.

Michal



[libvirt PATCH] travis: delete all Linux jobs

2020-03-30 Thread Daniel P . Berrangé
The Fedora rawhide job started failing with the latest container build.
Since we now have broad CI coverage on GitLab CI, there's not a strong
reason to continue using Travis for Linux jobs. Deleting the redundant
jobs is a better use of time than trying to debug the failure.

Signed-off-by: Daniel P. Berrangé 
---
 .travis.yml | 47 ---
 1 file changed, 47 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index f400f76118..0548b28b01 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -18,53 +18,6 @@ addons:
 
 matrix:
   include:
-- services:
-- docker
-  env:
-- IMAGE="ubuntu-1804"
-- MAKE_ARGS="syntax-check distcheck"
-  script:
-- make -C ci/ ci-build@$IMAGE CI_MAKE_ARGS="$MAKE_ARGS"
-- services:
-- docker
-  env:
-- IMAGE="centos-7"
-- MAKE_ARGS="syntax-check distcheck"
-  script:
-- make -C ci/ ci-build@$IMAGE CI_MAKE_ARGS="$MAKE_ARGS"
-- services:
-- docker
-  env:
-- IMAGE="debian-9"
-- MAKE_ARGS="syntax-check distcheck"
-  script:
-- make -C ci/ ci-build@$IMAGE CI_MAKE_ARGS="$MAKE_ARGS"
-- services:
-- docker
-  env:
-- IMAGE="fedora-31"
-- MAKE_ARGS="syntax-check distcheck"
-  script:
-- make -C ci/ ci-build@$IMAGE CI_MAKE_ARGS="$MAKE_ARGS"
-- services:
-- docker
-  env:
-- IMAGE="fedora-rawhide"
-- MAKE_ARGS="syntax-check distcheck"
-  script:
-- make -C ci/ ci-build@$IMAGE CI_MAKE_ARGS="$MAKE_ARGS"
-- services:
-- docker
-  env:
-- IMAGE="fedora-30-cross-mingw32"
-  script:
-- make -C ci/ ci-build@$IMAGE
-- services:
-- docker
-  env:
-- IMAGE="fedora-30-cross-mingw64"
-  script:
-- make -C ci/ ci-build@$IMAGE
 - compiler: clang
   language: c
   os: osx
-- 
2.24.1



Re: [PATCH v2 4/4] qemu: add support for 'multidevs' option

2020-03-30 Thread Christian Schoenebeck
On Montag, 30. März 2020 16:45:08 CEST Ján Tomko wrote:
> >> A change to qemuxml2argvtest is needed.
> >> 
> >> If you already added the XML file to qemuxml2argvdata in the previous
> >> commit, all you need to do to generate the output file is:
> >> * add a new DO_TEST_CAPS_LATEST line to qemuxml2argvtest.c
> >
> >Looking at the existing test cases, it probably makes sense to extend
> >qemuxml2argvdata/virtio-options.xml instead of adding a separate XML file,
> >that is:
> That one is aiming at testing the iommu and ats options common for
> virtio devices.
> 
> The 'fs9p' test would be a better candidate for adding these,
> but I'd still rather add a new file - to show that the command line
> for the old config remains unchanged.

Convinced. I'll change it.

Best regards,
Christian Schoenebeck





Re: RFC: qemu: use uuid instead of name for misc filenames

2020-03-30 Thread nshirokovskiy



On 30.03.2020 13:41, Daniel P. Berrangé wrote:
> On Sun, Mar 29, 2020 at 02:33:41PM +0300, nshirokovskiy wrote:
>>
>>
>> On 26.03.2020 20:50, Daniel P. Berrangé wrote:
>>> On Fri, Feb 28, 2020 at 10:09:41AM +0300, Nikolay Shirokovskiy wrote:
 On 27.02.2020 16:48, Daniel P. Berrangé wrote:
> On Thu, Feb 27, 2020 at 03:57:04PM +0300, Nikolay Shirokovskiy wrote:
>> Hi, everyone.
>>
>>  
>>
>> I'm working on supporting domain renaming when it has snapshots, which is
>> not supported now. And it strikes me things will be much simpler to manage
>> on renaming if we use uuid in filenames instead of domain names.
>>  
>>
>>>
>>>
>>>
>> 4. No issues with long domain names and filename length limit
>>
>>  
>>
>> If the above conversion makes sense I guess the good time to apply it is
>> on domain start (and rename to support renaming with snapshots).
>>
>
> The above has not considered the benefit that using the VM name
> has. Essentially the UUID is good for machines, the VM name is
> good for humans.  Seeing the guest XML files, or VM log files
> using a filename based on UUID instead of name is a *really*
> unappealing idea to me. 

 I agree. But we can also keep symlinks with domain names for configs/logs
 etc. This can be done as a separate tool, as I suggested in the letter, or
 by maintaining symlinks always. The idea is that failing in this symlinking
 won't affect daemon functionality, as symlinks are for humans.
>>>
>>> I've just realized that there is potential overlap between what we're
>>> discussing in this thread, and in the thread about localhost migration:
>>>
>>>   https://www.redhat.com/archives/libvir-list/2020-February/msg00061.html
>>>
>>> In the localhost migration case, we need to be able to startup a new
>>> guest with the same name as an existing guest.  The way we can achieve
>>> that is by thinking of localhost migration as being a pair of domain
>>> rename operations.
>>>
>>> ie, consider guest "foo" we want to localhost-migrate
>>>
>>>  - Start target guest "foo-incoming"
>>>  - Run live migration from "foo" -> "foo-incoming"
>>>  - Migration completes, CPUs stop
>>>  - Rename "foo" to "foo-outgoing"
>>>  - Rename "foo-incoming" to "foo"
>>>  - Tidy up migration state
>>>  - Destroy source guest "foo-outgoing"
>>
>> I think local migration does not fit really nicely in this scheme:
>>
>> - one can not treat outgoing and incoming VMs as just regular VMs as
>>   one can not put them into same list as they have same UUID
> 
> Yes, that is a tricky issue, but one that we need to solve, as the need
> to have a completely separate of list VMs is the thing I dislike the
> most about the local migration patches.
> 
> One option is to make the target VM have a different UUID by pickling
> its UUID. eg have a migration UUID generated on daemon startup.
> 0466e1ae-a71a-4e75-89ca-c3591a4cf220.  Then XOR this migration UUID
> with the source VM's UUID. So during live migration the target VM
> will appear with this XOR'd UUID, and once completed, it will get
> the real UUID again.
> 
> A different option is to not keep the target VM in the domain list
> at all. Instead  virDomainObjPtr, could have a pointer to a second
> virDomainObjPtr which stores the target VM temporarily.

Both choices have their issues/advantages.

With the first approach the incoming VM is visible as a regular one. This
can be beneficial in that one can inspect the VM for debugging purposes just
like a regular one. On the other hand the appearance of the VM can
be unexpected to mgmt, so some mgmt may even try to destroy it. So the other
approach looks more transparent from the mgmt POV.

I should say in Virtuozzo we have a patch series for local migration.
I decided not to send it to the list as previously the decision was
that the trade-off between complexity and utility is not on the feature side.

I decided to use the latter approach to keep the link to the peer.
The link is bidirectional, thus it is very simple to get the peer object
from both sides.

I want to mention another decision that turned out to be successful:
using the same mutex for both domain objects. This way we don't need
to care about "locking order"/"re-locking to keep the locking order"
to avoid deadlocks. Accessing the peer object is as simple as vm->peer->...
And this plays nicely with the current migration code, where we drop
the lock at the end of a migration phase so that the next phase can
take the same lock.

As to handling clashing resources, we support the following cases:

- tap interfaces
- tcp/unix chardev backends
- 

Re: [PATCH v2 4/4] qemu: add support for 'multidevs' option

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Christian Schoenebeck wrote:

On Freitag, 27. März 2020 18:16:37 CEST Ján Tomko wrote:

On a Friday in 2020, Christian Schoenebeck wrote:
>This option prevents misbehaviours on guest if a qemu 9pfs export
>contains multiple devices, due to the potential file ID collisions
>this otherwise may cause.
>
>Signed-off-by: Christian Schoenebeck 
>---
>
> src/qemu/qemu_command.c |  7 +++
> src/qemu/qemu_domain.c  | 12 
> 2 files changed, 19 insertions(+)

A change to qemuxml2argvtest is needed.

If you already added the XML file to qemuxml2argvdata in the previous
commit, all you need to do to generate the output file is:
* add a new DO_TEST_CAPS_LATEST line to qemuxml2argvtest.c


Looking at the existing test cases, it probably makes sense to extend
qemuxml2argvdata/virtio-options.xml instead of adding a separate XML file,
that is:


That one is aiming at testing the iommu and ats options common for
virtio devices.

The 'fs9p' test would be a better candidate for adding these,
but I'd still rather add a new file - to show that the command line
for the old config remains unchanged.

Jano



diff --git a/tests/qemuxml2argvdata/virtio-options.xml b/tests/
qemuxml2argvdata/virtio-options.xml
index dd9a4f4a01..7ea624dfad 100644
--- a/tests/qemuxml2argvdata/virtio-options.xml
+++ b/tests/qemuxml2argvdata/virtio-options.xml
@@ -47,6 +47,21 @@
  
  

+
+  
+  
+  
+
+
+  
+  
+  
+
+
+  
+  
+  
+

  
  


* run the test with VIR_TEST_REGENERATE_OUTPUT=1


Ah, that's the one! :) I was already wondering whether other people are
auto-generating the files I manually changed so far.

I'll post a rebased v3 with the things discussed so far. Thanks Ján!

Best regards,
Christian Schoenebeck




signature.asc
Description: PGP signature


Re: [PATCH v2 2/4] conf: Add support for http(s) query strings

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

On Mon, Mar 30, 2020 at 16:18:26 +0200, Ján Tomko wrote:

On a Monday in 2020, Peter Krempa wrote:
> Add a new attribute for holding the query part for http(s) disks.
>
> Signed-off-by: Peter Krempa 
> ---
> docs/formatdomain.html.in  | 7 ++-
> docs/schemas/domaincommon.rng  | 6 ++
> src/conf/domain_conf.c | 5 +
> src/util/virstoragefile.c  | 1 +
> src/util/virstoragefile.h  | 1 +
> tests/qemuxml2argvdata/disk-network-http.xml   | 2 +-
> .../qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml | 2 +-
> 7 files changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
> index 6dbf449698..aaeb05961f 100644
> --- a/docs/formatdomain.html.in
> +++ b/docs/formatdomain.html.in
> @@ -2847,7 +2847,7 @@
>   /disk
>   disk type='network' device='cdrom'
> driver name='qemu' type='raw'/
> -source protocol="http" name="url_path"
> +source protocol="http" name="url_path" query="foo=bar&amp;baz=flurb"
>   host name="hostname" port="80"/
>   cookies
> cookie name="test"somevalue/cookie
> @@ -3113,6 +3113,11 @@
>   ('tls' Since 4.5.0)
>   
>
> +  For protocols http and https an
> +  optional attribute query specifies the query string.
> +  (Since 6.2.0)

Should this be 6.3.0? Or are we calling this a bugfix?


This is breaking certain versions of virt-v2v which didn't switch to
nbdkit yet so I'm willing to call this a bugfix.



Fine by me.

Jano




Re: [PATCH v2 2/4] conf: Add support for http(s) query strings

2020-03-30 Thread Peter Krempa
On Mon, Mar 30, 2020 at 16:18:26 +0200, Ján Tomko wrote:
> On a Monday in 2020, Peter Krempa wrote:
> > Add a new attribute for holding the query part for http(s) disks.
> > 
> > Signed-off-by: Peter Krempa 
> > ---
> > docs/formatdomain.html.in  | 7 ++-
> > docs/schemas/domaincommon.rng  | 6 ++
> > src/conf/domain_conf.c | 5 +
> > src/util/virstoragefile.c  | 1 +
> > src/util/virstoragefile.h  | 1 +
> > tests/qemuxml2argvdata/disk-network-http.xml   | 2 +-
> > .../qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml | 2 +-
> > 7 files changed, 21 insertions(+), 3 deletions(-)
> > 
> > diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
> > index 6dbf449698..aaeb05961f 100644
> > --- a/docs/formatdomain.html.in
> > +++ b/docs/formatdomain.html.in
> > @@ -2847,7 +2847,7 @@
> >   /disk
> >   disk type='network' device='cdrom'
> > driver name='qemu' type='raw'/
> > -source protocol="http" name="url_path"
> > +source protocol="http" name="url_path" query="foo=bar&amp;baz=flurb"
> >   host name="hostname" port="80"/
> >   cookies
> > cookie name="test"somevalue/cookie
> > @@ -3113,6 +3113,11 @@
> >   ('tls' Since 4.5.0)
> >   
> > 
> > +  For protocols http and https an
> >   optional attribute query specifies the query string.
> > +  (Since 6.2.0)
> 
> Should this be 6.3.0? Or are we calling this a bugfix?

This is breaking certain versions of virt-v2v which didn't switch to
nbdkit yet so I'm willing to call this a bugfix. 



Re: [PATCH v2 4/4] virStorageSourceParseBackingURI: Preserve query string of URI for http(s)

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

For http/https URIs we need to preserve the query part as it may be
important to refer to the image.

Signed-off-by: Peter Krempa 
---
src/util/virstoragefile.c | 13 ++---
tests/virstoragetest.c|  4 ++--
2 files changed, 12 insertions(+), 5 deletions(-)



Reviewed-by: Ján Tomko 

Jano




Re: [PATCH v2 3/4] qemuBlockStorageSourceGetURI: Pass through query component

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

If the storage source has the query part set, format it in the output.

Signed-off-by: Peter Krempa 
---
src/qemu/qemu_block.c| 2 ++
src/qemu/qemu_domain.c   | 9 +
tests/qemuxml2argvdata/disk-cdrom-network.args   | 4 ++--
.../disk-cdrom-network.x86_64-2.12.0.args| 4 ++--
.../disk-cdrom-network.x86_64-latest.args| 3 ++-
tests/qemuxml2argvdata/disk-cdrom-network.xml| 2 +-
.../disk-network-http.x86_64-latest.args | 5 +++--
7 files changed, 21 insertions(+), 8 deletions(-)



Reviewed-by: Ján Tomko 

Jano




Re: [PATCH v2 2/4] conf: Add support for http(s) query strings

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

Add a new attribute for holding the query part for http(s) disks.

Signed-off-by: Peter Krempa 
---
docs/formatdomain.html.in  | 7 ++-
docs/schemas/domaincommon.rng  | 6 ++
src/conf/domain_conf.c | 5 +
src/util/virstoragefile.c  | 1 +
src/util/virstoragefile.h  | 1 +
tests/qemuxml2argvdata/disk-network-http.xml   | 2 +-
.../qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml | 2 +-
7 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 6dbf449698..aaeb05961f 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -2847,7 +2847,7 @@
  /disk
  disk type='network' device='cdrom'
driver name='qemu' type='raw'/
-source protocol="http" name="url_path"
+source protocol="http" name="url_path" query="foo=bar&amp;baz=flurb"
  host name="hostname" port="80"/
  cookies
cookie name="test"somevalue/cookie
@@ -3113,6 +3113,11 @@
  ('tls' Since 4.5.0)
  

+  For protocols http and https an
+  optional attribute query specifies the query string.
+  (Since 6.2.0)


Should this be 6.3.0? Or are we calling this a bugfix?


+  
+
  For "iscsi" (since 1.0.4), the
  name attribute may include a logical unit number,
  separated from the target's name by a slash (e.g.,
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 5de17593c1..1807df521c 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -1869,6 +1869,9 @@
  


+
+  
+



@@ -1894,6 +1897,9 @@
  


+
+  
+



diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 27bc5a797b..914e03c705 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -9482,6 +9482,10 @@ virDomainDiskSourceNetworkParse(xmlNodePtr node,
/* config file currently only works with remote disks */
src->configFile = virXPathString("string(./config/@file)", ctxt);

+if (src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTP ||
+src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTPS)
+src->query = virXMLPropString(node, "query");
+
if (virDomainStorageNetworkParseHosts(node, &src->hosts, &src->nhosts) < 0)
return -1;

@@ -24591,6 +24595,7 @@ virDomainDiskSourceFormatNetwork(virBufferPtr attrBuf,
path = g_strdup_printf("%s/%s", src->volume, src->path);

virBufferEscapeString(attrBuf, " name='%s'", path ? path : src->path);
+virBufferEscapeString(attrBuf, " query='%s'", src->query);

if (src->haveTLS != VIR_TRISTATE_BOOL_ABSENT &&
!(flags & VIR_DOMAIN_DEF_FORMAT_MIGRATABLE &&
diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index c43e52d1f6..bccef863b4 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2418,6 +2418,7 @@ virStorageSourceCopy(const virStorageSource *src,
def->compat = g_strdup(src->compat);
def->tlsAlias = g_strdup(src->tlsAlias);
def->tlsCertdir = g_strdup(src->tlsCertdir);
+def->query = g_strdup(src->query);


The string also needs to be freed in virStorageSourceClear.



if (src->sliceStorage)
def->sliceStorage = virStorageSourceSliceCopy(src->sliceStorage);


Reviewed-by: Ján Tomko 

Jano




Re: [PATCH v2 1/4] docs: formatdomain: Mention missing protocols

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

http, https, ftp, ftps, and tftp were not mentioned in the
documentation. Note that 'ssh' is still omitted as it's used only
internally.

Signed-off-by: Peter Krempa 
---
docs/formatdomain.html.in | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)



Reviewed-by: Ján Tomko 

Jano




[PATCH v2 1/4] docs: formatdomain: Mention missing protocols

2020-03-30 Thread Peter Krempa
http, https, ftp, ftps, and tftp were not mentioned in the
documentation. Note that 'ssh' is still omitted as it's used only
internally.

Signed-off-by: Peter Krempa 
---
 docs/formatdomain.html.in | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 28e779b90a..6dbf449698 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -3096,10 +3096,11 @@
   
   The protocol attribute specifies the protocol to
   access to the requested image. Possible values are "nbd",
-  "iscsi", "rbd", "sheepdog", "gluster" or "vxhs".
+  "iscsi", "rbd", "sheepdog", "gluster", "vxhs", "http", "https",
+  "ftp", ftps", or "tftp".

-  If the protocol attribute is "rbd", "sheepdog",
-  "gluster", or "vxhs", an additional attribute name
+  For any protocol other than nbd
+  an additional attribute name
   is mandatory to specify which volume/image will be used.
   

-- 
2.24.1



[PATCH v2 2/4] conf: Add support for http(s) query strings

2020-03-30 Thread Peter Krempa
Add a new attribute for holding the query part for http(s) disks.

Signed-off-by: Peter Krempa 
---
 docs/formatdomain.html.in  | 7 ++-
 docs/schemas/domaincommon.rng  | 6 ++
 src/conf/domain_conf.c | 5 +
 src/util/virstoragefile.c  | 1 +
 src/util/virstoragefile.h  | 1 +
 tests/qemuxml2argvdata/disk-network-http.xml   | 2 +-
 .../qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml | 2 +-
 7 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 6dbf449698..aaeb05961f 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -2847,7 +2847,7 @@
   /disk
   disk type='network' device='cdrom'
 driver name='qemu' type='raw'/
-source protocol="http" name="url_path"
+source protocol="http" name="url_path" query="foo=bar&amp;baz=flurb"
   host name="hostname" port="80"/
   cookies
 cookie name="test"somevalue/cookie
@@ -3113,6 +3113,11 @@
   ('tls' Since 4.5.0)
   

+  For protocols http and https an
+  optional attribute query specifies the query string.
+  (Since 6.2.0)
+  
+
   For "iscsi" (since 1.0.4), the
   name attribute may include a logical unit number,
   separated from the target's name by a slash (e.g.,
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 5de17593c1..1807df521c 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -1869,6 +1869,9 @@
   
 
 
+
+  
+
 
 
 
@@ -1894,6 +1897,9 @@
   
 
 
+
+  
+
 
 
 
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 27bc5a797b..914e03c705 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -9482,6 +9482,10 @@ virDomainDiskSourceNetworkParse(xmlNodePtr node,
 /* config file currently only works with remote disks */
 src->configFile = virXPathString("string(./config/@file)", ctxt);

+if (src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTP ||
+src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTPS)
+src->query = virXMLPropString(node, "query");
+
 if (virDomainStorageNetworkParseHosts(node, &src->hosts, &src->nhosts) < 0)
 return -1;

@@ -24591,6 +24595,7 @@ virDomainDiskSourceFormatNetwork(virBufferPtr attrBuf,
 path = g_strdup_printf("%s/%s", src->volume, src->path);

 virBufferEscapeString(attrBuf, " name='%s'", path ? path : src->path);
+virBufferEscapeString(attrBuf, " query='%s'", src->query);

 if (src->haveTLS != VIR_TRISTATE_BOOL_ABSENT &&
 !(flags & VIR_DOMAIN_DEF_FORMAT_MIGRATABLE &&
diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index c43e52d1f6..bccef863b4 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2418,6 +2418,7 @@ virStorageSourceCopy(const virStorageSource *src,
 def->compat = g_strdup(src->compat);
 def->tlsAlias = g_strdup(src->tlsAlias);
 def->tlsCertdir = g_strdup(src->tlsCertdir);
+def->query = g_strdup(src->query);

 if (src->sliceStorage)
 def->sliceStorage = virStorageSourceSliceCopy(src->sliceStorage);
diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h
index 9427057f94..7939c09cd5 100644
--- a/src/util/virstoragefile.h
+++ b/src/util/virstoragefile.h
@@ -284,6 +284,7 @@ struct _virStorageSource {
 char *snapshot; /* for storage systems supporting internal snapshots */
 char *configFile; /* some storage systems use config file as part of
  the source definition */
+char *query; /* query string for HTTP based protocols */
 size_t nhosts;
 virStorageNetHostDefPtr hosts;
 size_t ncookies;
diff --git a/tests/qemuxml2argvdata/disk-network-http.xml 
b/tests/qemuxml2argvdata/disk-network-http.xml
index 93e6617433..3abf499019 100644
--- a/tests/qemuxml2argvdata/disk-network-http.xml
+++ b/tests/qemuxml2argvdata/disk-network-http.xml
@@ -42,7 +42,7 @@
 
 
   
-  
+  
 
 
 
diff --git a/tests/qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml 
b/tests/qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml
index cf36331286..aaae21c5af 100644
--- a/tests/qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml
+++ b/tests/qemuxml2xmloutdata/disk-network-http.x86_64-latest.xml
@@ -49,7 +49,7 @@
 
 
   
-  
+  
 
 
 
-- 
2.24.1



[PATCH v2 3/4] qemuBlockStorageSourceGetURI: Pass through query component

2020-03-30 Thread Peter Krempa
If the storage source has the query part set, format it in the output.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_block.c| 2 ++
 src/qemu/qemu_domain.c   | 9 +
 tests/qemuxml2argvdata/disk-cdrom-network.args   | 4 ++--
 .../disk-cdrom-network.x86_64-2.12.0.args| 4 ++--
 .../disk-cdrom-network.x86_64-latest.args| 3 ++-
 tests/qemuxml2argvdata/disk-cdrom-network.xml| 2 +-
 .../disk-network-http.x86_64-latest.args | 5 +++--
 7 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 648c3f1026..2ce14615a0 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -437,6 +437,8 @@ qemuBlockStorageSourceGetURI(virStorageSourcePtr src)
 }
 }

+uri->query = g_strdup(src->query);
+
 uri->server = g_strdup(src->hosts->name);

 return g_steal_pointer(&uri);
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index adda17a49f..dd48b6fff3 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7132,6 +7132,15 @@ qemuDomainValidateStorageSource(virStorageSourcePtr src,
 }
 }

+if (src->query &&
+(actualType != VIR_STORAGE_TYPE_NETWORK ||
+ (src->protocol != VIR_STORAGE_NET_PROTOCOL_HTTPS &&
+  src->protocol != VIR_STORAGE_NET_PROTOCOL_HTTP))) {
+virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+   _("query is supported only with HTTP(S) protocols"));
+return -1;
+}
+
 return 0;
 }

diff --git a/tests/qemuxml2argvdata/disk-cdrom-network.args 
b/tests/qemuxml2argvdata/disk-cdrom-network.args
index be19bad68a..81ff324a0f 100644
--- a/tests/qemuxml2argvdata/disk-cdrom-network.args
+++ b/tests/qemuxml2argvdata/disk-cdrom-network.args
@@ -30,8 +30,8 @@ id=drive-ide0-0-0,readonly=on \
 -drive file=ftps://host.name:990/url/path/file.iso,format=raw,if=none,\
 id=drive-ide0-0-1,readonly=on \
 -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 \
--drive file=https://host.name:443/url/path/file.iso,format=raw,if=none,\
-id=drive-ide0-1-0,readonly=on \
+-drive 'file=https://host.name:443/url/path/file.iso?test=val,format=raw,\
+if=none,id=drive-ide0-1-0,readonly=on' \
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
 -drive file=tftp://host.name:69/url/path/file.iso,format=raw,if=none,\
 id=drive-ide0-1-1,readonly=on \
diff --git a/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-2.12.0.args 
b/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-2.12.0.args
index 1ece3d6f46..81f6b400aa 100644
--- a/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-2.12.0.args
+++ b/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-2.12.0.args
@@ -32,8 +32,8 @@ id=drive-ide0-0-0,readonly=on \
 -drive file=ftps://host.name:990/url/path/file.iso,format=raw,if=none,\
 id=drive-ide0-0-1,readonly=on \
 -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 \
--drive file=https://host.name:443/url/path/file.iso,format=raw,if=none,\
-id=drive-ide0-1-0,readonly=on \
+-drive 'file=https://host.name:443/url/path/file.iso?test=val,format=raw,\
+if=none,id=drive-ide0-1-0,readonly=on' \
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
 -drive file=tftp://host.name:69/url/path/file.iso,format=raw,if=none,\
 id=drive-ide0-1-1,readonly=on \
diff --git a/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-latest.args b/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-latest.args
index 0b4ac07f07..2515b256d0 100644
--- a/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-latest.args
+++ b/tests/qemuxml2argvdata/disk-cdrom-network.x86_64-latest.args
@@ -37,7 +37,8 @@ file=/tmp/lib/domain--1-QEMUGuest1/master-key.aes \
 -blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw",\
 "file":"libvirt-3-storage"}' \
 -device ide-cd,bus=ide.0,unit=1,drive=libvirt-3-format,id=ide0-0-1 \
--blockdev '{"driver":"https","url":"https://host.name:443/url/path/file.iso",\
+-blockdev '{"driver":"https",\
+"url":"https://host.name:443/url/path/file.iso?test=val",\
 "node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
 -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw",\
 "file":"libvirt-2-storage"}' \
diff --git a/tests/qemuxml2argvdata/disk-cdrom-network.xml b/tests/qemuxml2argvdata/disk-cdrom-network.xml
index 0bdc0e1883..44473f8ad4 100644
--- a/tests/qemuxml2argvdata/disk-cdrom-network.xml
+++ b/tests/qemuxml2argvdata/disk-cdrom-network.xml
@@ -39,7 +39,7 @@
 
 
   
-      <source protocol='https' name='/url/path/file.iso'>
+      <source protocol='https' name='/url/path/file.iso' query='test=val'>
 
   
   
diff --git a/tests/qemuxml2argvdata/disk-network-http.x86_64-latest.args b/tests/qemuxml2argvdata/disk-network-http.x86_64-latest.args
index ed44424dc3..7b4674a588 100644
--- a/tests/qemuxml2argvdata/disk-network-http.x86_64-latest.args
+++ b/tests/qemuxml2argvdata/disk-network-http.x86_64-latest.args
@@ -55,8 +55,9 

[PATCH v2 0/4] Preserve query component of path for HTTP disks

2020-03-30 Thread Peter Krempa
v2:
- fixed docs to mention the http/ftp/tftp protocols
- the query part is stored separately to prevent ambiguity in the use of
the '?' sign
- added checks that a query is valid only for HTTP(S)
- added tests for pre-blockdev configurations
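The validity check mentioned in the changelog can be sketched roughly as follows. This is a Python illustration of the logic, not the libvirt C code; the function name, protocol strings, and error message are placeholders.

```python
# Hypothetical sketch of the "query only with HTTP(S)" check added by this
# series: a query string is rejected unless the disk is a network disk
# using the http or https protocol.

HTTP_PROTOCOLS = {"http", "https"}

def validate_query(disk_type, protocol, query):
    """Return an error message, or None if the configuration is acceptable."""
    if query is None:
        return None  # no query component, nothing to check
    if disk_type != "network" or protocol not in HTTP_PROTOCOLS:
        return "query is supported only with HTTP(S) protocols"
    return None
```

In the real patch the equivalent check reports VIR_ERR_CONFIG_UNSUPPORTED and returns -1 instead of an error string.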

Peter Krempa (4):
  docs: formatdomain: Mention missing protocols
  conf: Add support for http(s) query strings
  qemuBlockStorageSourceGetURI: Pass through query component
  virStorageSourceParseBackingURI: Preserve query string of URI for
http(s)

 docs/formatdomain.html.in  | 14 ++
 docs/schemas/domaincommon.rng  |  6 ++
 src/conf/domain_conf.c |  5 +
 src/qemu/qemu_block.c  |  2 ++
 src/qemu/qemu_domain.c |  9 +
 src/util/virstoragefile.c  | 14 +++---
 src/util/virstoragefile.h  |  1 +
 tests/qemuxml2argvdata/disk-cdrom-network.args |  4 ++--
 .../disk-cdrom-network.x86_64-2.12.0.args  |  4 ++--
 .../disk-cdrom-network.x86_64-latest.args  |  3 ++-
 tests/qemuxml2argvdata/disk-cdrom-network.xml  |  2 +-
 .../disk-network-http.x86_64-latest.args   |  5 +++--
 tests/qemuxml2argvdata/disk-network-http.xml   |  2 +-
 .../disk-network-http.x86_64-latest.xml|  2 +-
 tests/virstoragetest.c |  4 ++--
 15 files changed, 58 insertions(+), 19 deletions(-)

-- 
2.24.1



[PATCH v2 4/4] virStorageSourceParseBackingURI: Preserve query string of URI for http(s)

2020-03-30 Thread Peter Krempa
For http/https URIs we need to preserve the query part as it may be
important to refer to the image.
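The parsing behaviour this patch introduces can be sketched in Python for illustration (the real code is C in virstoragefile.c, using the STRPREFIX/STRSKIP helpers); the dictionary layout below is invented:

```python
# Sketch: keep the query verbatim for http(s) URIs, otherwise honour only
# a "socket=" query, mirroring the existing special case for other
# network protocols.
from urllib.parse import urlsplit

def parse_backing_uri(uri):
    parts = urlsplit(uri)
    src = {"protocol": parts.scheme, "path": parts.path,
           "query": None, "socket": None}
    if parts.query:
        if parts.scheme in ("http", "https"):
            src["query"] = parts.query                  # preserved as-is
        elif parts.query.startswith("socket="):          # STRPREFIX/STRSKIP
            src["socket"] = parts.query[len("socket="):]
    return src
```

For example, `parse_backing_uri("https://host.name:443/url/path/file.iso?test=val")` keeps the query, while a `?socket=` query on a non-HTTP scheme still populates the socket path.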

Signed-off-by: Peter Krempa 
---
 src/util/virstoragefile.c | 13 ++---
 tests/virstoragetest.c|  4 ++--
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index bccef863b4..069bc8d776 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2852,9 +2852,16 @@ virStorageSourceParseBackingURI(virStorageSourcePtr src,
 return -1;
 }

-/* handle socket stored as a query */
-if (uri->query)
-src->hosts->socket = g_strdup(STRSKIP(uri->query, "socket="));
+if (uri->query) {
+if (src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTP ||
+src->protocol == VIR_STORAGE_NET_PROTOCOL_HTTPS) {
+src->query = g_strdup(uri->query);
+} else {
+/* handle socket stored as a query */
+if (STRPREFIX(uri->query, "socket="))
+src->hosts->socket = g_strdup(STRSKIP(uri->query, "socket="));
+}
+}

 /* uri->path is NULL if the URI does not contain slash after host:
  * transport://host:port */
diff --git a/tests/virstoragetest.c b/tests/virstoragetest.c
index 10d5421150..6e8ebeba13 100644
--- a/tests/virstoragetest.c
+++ b/tests/virstoragetest.c
@@ -1632,7 +1632,7 @@ mymain(void)
"\"file.url\": 
\"https://host/folder/esx6.5-rhel7.7-x86%5f64/esx6.5-rhel7.7-x86%5f64-flat.vmdk?dcPath=data=esx6.5-matrix\",;
"\"file.timeout\": 2000"
  "}",
-   "\n"
+   "\n"
"  \n"
"  \n"
"  \n"
@@ -1647,7 +1647,7 @@ mymain(void)
"\"file.url\": 
\"https://host/folder/esx6.5-rhel7.7-x86%5f64/esx6.5-rhel7.7-x86%5f64-flat.vmdk?dcPath=data=esx6.5-matrix\",;
"\"file.timeout\": 2000"
  "}",
-   "\n"
+   "\n"
"  \n"
"  \n"
"  \n"
-- 
2.24.1



Re: [PATCH] qemu-shim: add a --with-config option

2020-03-30 Thread Marc-André Lureau
Hi

On Mon, Mar 30, 2020 at 1:54 PM Daniel P. Berrangé  wrote:
>
> On Sat, Mar 28, 2020 at 12:38:57AM +0100, marcandre.lur...@redhat.com wrote:
> > From: Marc-André Lureau 
> >
> > Add an option to ease setting up a VM with existing libvirt qemu.conf.
> >
> > Fixes:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1817776
> >
> > Signed-off-by: Marc-André Lureau 
> > ---
> >  docs/manpages/virt-qemu-run.rst |  6 ++
> >  src/libvirt_private.syms|  1 -
> >  src/qemu/qemu_shim.c| 36 +
> >  3 files changed, 42 insertions(+), 1 deletion(-)
> >
> > diff --git a/docs/manpages/virt-qemu-run.rst b/docs/manpages/virt-qemu-run.rst
> > index b06c311b1d..3196b4c8f4 100644
> > --- a/docs/manpages/virt-qemu-run.rst
> > +++ b/docs/manpages/virt-qemu-run.rst
> > @@ -71,6 +71,12 @@ is a path to the XML description of the secret, whose 
> > UUID should
> >  match a secret referenced in the guest domain XML. The ``VALUE-FILE``
> >  is a path containing the raw value of the secret.
> >
> > +``-c``, ``--with-config``
> > +
> > +Copy the libvirt qemu.conf configuration to the root directory.  If
> > +there is already a qemu.conf file in the $ROOT/etc directory, exit
> > +with failure.
>
> As discussed in the bugzilla, we really don't want to be using the
> global qemu.conf file in combination with the embedded driver. We
> are not intending to guarantee that the embedded driver will share
> the same defaults as the global drivers.
>
> In general we want to eliminate the need to touch qemu.conf at all.

With qemu:///embed, is the $root/etc/qemu.conf supposed to be user-configurable?

If not, then it should not even be accessed.

Otherwise, having an option to ease setting it up would be welcome! If
not from the user/system libvirt config, then from a given path (for me,
it will likely always be my user libvirt config anyway).




Re: [PATCH 0/2] qemu: Fix --keep-relative for block-commit/pull with blockdev

2020-03-30 Thread Ján Tomko

On a Monday in 2020, Peter Krempa wrote:

Peter Krempa (2):
 qemu: block: Support VIR_DOMAIN_BLOCK_COMMIT/PULL/REBASE_RELATIVE with
   blockdev
 qemuDomainSnapshotDiskPrepareOne: Don't load the relative path with
   blockdev

src/qemu/qemu_block.c  | 46 ++
src/qemu/qemu_block.h  |  5 +
src/qemu/qemu_driver.c | 13 ++--
3 files changed, 62 insertions(+), 2 deletions(-)



Reviewed-by: Ján Tomko 

Jano




[libvirt-dockerfiles PATCH] Refresh after recent changes

2020-03-30 Thread Andrea Bolognani
Paths to commands such as $MAKE and $NINJA are now exposed in the
container's environment, and the symlinks necessary to make ccache
work transparently are also included.

The corresponding libvirt-jenkins-ci commit is 2563aebb6c5c.

Signed-off-by: Andrea Bolognani 
---
Pushed under the Dockerfile update rule.
Plain text diff included for your convenience.

 buildenv-libosinfo-centos-7.zip   | Bin 1665 -> 1782 bytes
 buildenv-libosinfo-centos-8.zip   | Bin 594 -> 716 bytes
 buildenv-libosinfo-debian-10.zip  | Bin 663 -> 783 bytes
 buildenv-libosinfo-debian-9.zip   | Bin 683 -> 804 bytes
 buildenv-libosinfo-debian-sid.zip | Bin 663 -> 783 bytes
 ...denv-libosinfo-fedora-30-cross-mingw32.zip | Bin 629 -> 754 bytes
 ...denv-libosinfo-fedora-30-cross-mingw64.zip | Bin 631 -> 757 bytes
 buildenv-libosinfo-fedora-30.zip  | Bin 544 -> 668 bytes
 buildenv-libosinfo-fedora-31.zip  | Bin 544 -> 668 bytes
 buildenv-libosinfo-fedora-rawhide.zip | Bin 564 -> 688 bytes
 buildenv-libosinfo-opensuse-151.zip   | Bin 570 -> 694 bytes
 buildenv-libosinfo-ubuntu-1604.zip| Bin 686 -> 808 bytes
 buildenv-libosinfo-ubuntu-1804.zip| Bin 686 -> 808 bytes
 buildenv-libvirt-centos-7.zip | Bin 1888 -> 2006 bytes
 buildenv-libvirt-centos-8.zip | Bin 802 -> 928 bytes
 buildenv-libvirt-debian-10-cross-aarch64.zip  | Bin 982 -> 1110 bytes
 buildenv-libvirt-debian-10-cross-armv6l.zip   | Bin 974 -> 1102 bytes
 buildenv-libvirt-debian-10-cross-armv7l.zip   | Bin 983 -> 1108 bytes
 buildenv-libvirt-debian-10-cross-i686.zip | Bin 981 -> 1105 bytes
 buildenv-libvirt-debian-10-cross-mips.zip | Bin 974 -> 1103 bytes
 buildenv-libvirt-debian-10-cross-mips64el.zip | Bin 985 -> 1114 bytes
 buildenv-libvirt-debian-10-cross-mipsel.zip   | Bin 982 -> 1105 bytes
 buildenv-libvirt-debian-10-cross-ppc64le.zip  | Bin 985 -> 1115 bytes
 buildenv-libvirt-debian-10-cross-s390x.zip| Bin 977 -> 1105 bytes
 buildenv-libvirt-debian-10.zip| Bin 908 -> 1032 bytes
 buildenv-libvirt-debian-9-cross-aarch64.zip   | Bin 1018 -> 1143 bytes
 buildenv-libvirt-debian-9-cross-armv6l.zip| Bin 1009 -> 1134 bytes
 buildenv-libvirt-debian-9-cross-armv7l.zip| Bin 1014 -> 1140 bytes
 buildenv-libvirt-debian-9-cross-mips.zip  | Bin 1010 -> 1134 bytes
 buildenv-libvirt-debian-9-cross-mips64el.zip  | Bin 1021 -> 1145 bytes
 buildenv-libvirt-debian-9-cross-mipsel.zip| Bin 1014 -> 1137 bytes
 buildenv-libvirt-debian-9-cross-ppc64le.zip   | Bin 1017 -> 1147 bytes
 buildenv-libvirt-debian-9-cross-s390x.zip | Bin 1013 -> 1137 bytes
 buildenv-libvirt-debian-9.zip | Bin 938 -> 1063 bytes
 buildenv-libvirt-debian-sid-cross-aarch64.zip | Bin 986 -> 1110 bytes
 buildenv-libvirt-debian-sid-cross-armv6l.zip  | Bin 978 -> 1102 bytes
 buildenv-libvirt-debian-sid-cross-armv7l.zip  | Bin 982 -> 1108 bytes
 buildenv-libvirt-debian-sid-cross-i686.zip| Bin 981 -> 1104 bytes
 ...denv-libvirt-debian-sid-cross-mips64el.zip | Bin 985 -> 1114 bytes
 buildenv-libvirt-debian-sid-cross-mipsel.zip  | Bin 974 -> 1102 bytes
 buildenv-libvirt-debian-sid-cross-ppc64le.zip | Bin 986 -> 1115 bytes
 buildenv-libvirt-debian-sid-cross-s390x.zip   | Bin 977 -> 1105 bytes
 buildenv-libvirt-debian-sid.zip   | Bin 908 -> 1032 bytes
 buildenv-libvirt-fedora-30-cross-mingw32.zip  | Bin 897 -> 1031 bytes
 buildenv-libvirt-fedora-30-cross-mingw64.zip  | Bin 899 -> 1034 bytes
 buildenv-libvirt-fedora-30.zip| Bin 776 -> 904 bytes
 buildenv-libvirt-fedora-31.zip| Bin 776 -> 904 bytes
 buildenv-libvirt-fedora-rawhide.zip   | Bin 796 -> 925 bytes
 buildenv-libvirt-opensuse-151.zip | Bin 785 -> 914 bytes
 buildenv-libvirt-ubuntu-1604.zip  | Bin 942 -> 1069 bytes
 buildenv-libvirt-ubuntu-1804.zip  | Bin 943 -> 1069 bytes
 51 files changed, 0 insertions(+), 0 deletions(-)

diff --git a/buildenv-libosinfo-centos-7.zip b/buildenv-libosinfo-centos-7.zip
index e4f5b87..185f258 100644
--- a/buildenv-libosinfo-centos-7.zip
+++ b/buildenv-libosinfo-centos-7.zip
@@ -80,9 +80,18 @@ WEiJKtQrZDJloqtyi/mmRa1VsV7RYR0VPJjhK/R8EQ7Ysshy\n\
 vala \
 vim && \
 yum autoremove -y && \
-yum clean all -y
+yum clean all -y && \
+mkdir -p /usr/libexec/ccache-wrappers && \
+ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
+ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/$(basename /usr/bin/gcc)
 
 RUN pip3 install \
  meson==0.49.0
 
 ENV LANG "en_US.UTF-8"
+
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja-build"
+ENV PYTHON "/usr/bin/python3"
+
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
diff --git a/buildenv-libosinfo-centos-8.zip b/buildenv-libosinfo-centos-8.zip
index 53a49d0..ce7c071 100644
--- a/buildenv-libosinfo-centos-8.zip
+++ b/buildenv-libosinfo-centos-8.zip
@@ 

Re: [libvirt-jenkins-ci PATCH 3/7] lcitool: Create ccache wrappers outside of $HOME

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 13:19 +0100, Daniel P. Berrangé wrote:
> On Mon, Mar 30, 2020 at 02:17:15PM +0200, Andrea Bolognani wrote:
> > I also considered /usr/libexec/ccache-wrappers - what would you think
> > of that option?
> 
> I don't mind either that or your original

Okay, I'll go with /usr/libexec then, because it's slightly shorter
and I decided I like it a tiny bit better :)

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [libvirt-jenkins-ci PATCH 7/7] lcitool: Create and expose ccache wrappers

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 02:31:40PM +0200, Andrea Bolognani wrote:
> On Mon, 2020-03-30 at 13:23 +0100, Daniel P. Berrangé wrote:
> > On Mon, Mar 30, 2020 at 02:21:19PM +0200, Andrea Bolognani wrote:
> > > We could still expose the path as something like
> > > 
> > >   ENV CCACHE_WRAPPERSDIR "/usr/local/share/ccache-wrappers"
> > > 
> > > so that the CI rules would be able to do
> > > 
> > >   - export PATH="$CCACHE_WRAPPERSDIR:$PATH"
> > > 
> > > instead of hardcoding the exact path. What do you think?
> > 
> > Sure, works for me.
> 
> Cool! Can I have a Reviewed-by, then? O:-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt-jenkins-ci PATCH 7/7] lcitool: Create and expose ccache wrappers

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 13:23 +0100, Daniel P. Berrangé wrote:
> On Mon, Mar 30, 2020 at 02:21:19PM +0200, Andrea Bolognani wrote:
> > We could still expose the path as something like
> > 
> >   ENV CCACHE_WRAPPERSDIR "/usr/local/share/ccache-wrappers"
> > 
> > so that the CI rules would be able to do
> > 
> >   - export PATH="$CCACHE_WRAPPERSDIR:$PATH"
> > 
> > instead of hardcoding the exact path. What do you think?
> 
> Sure, works for me.

Cool! Can I have a Reviewed-by, then? O:-)

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH 5/5] cpu: Introduce getHost support for ARM

2020-03-30 Thread Daniel P . Berrangé
Re-adding libvir-list to the CC line.

On Mon, Mar 30, 2020 at 08:20:44PM +0800, Zhenyu Zheng wrote:
> Hi, yes, I think we can do that using inline assembly; I can check it out
> if you think it is a better solution. Do you have any suggestions for the
> features (CPU flags) part? It seems that ARM does not have a single
> location/register that holds all the flags, so we would have to query a
> lot of different registers to check for features, which could get quite
> messy.

Perhaps there is a way to record the location/register info in the XML
against each feature name, so that the code itself can stay simple and
just be driven from the metadata ?

I'm not familiar enough with Arm to know how feasible this is, though,
so I will have to leave that to others to give an opinion.
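The metadata-driven idea suggested above could look something like this. This is a Python sketch, not libvirt code; the implementer table is a small illustrative subset and the function names are invented:

```python
# Sketch: parse the ARM /proc/cpuinfo fields shown in the thread and
# translate the hex implementer code through a lookup table, the way a
# cpu_map XML data file could drive the real C code.

CPU_IMPLEMENTERS = {0x41: "ARM", 0x43: "Cavium", 0x51: "Qualcomm"}  # subset

def parse_arm_cpuinfo(text):
    """Parse 'key : value' lines into a dict."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

def implementer_name(info):
    """Map the hex 'CPU implementer' field to a human-readable vendor."""
    code = int(info["CPU implementer"], 16)
    return CPU_IMPLEMENTERS.get(code, "unknown")

SAMPLE = """\
processor   : 0
CPU implementer : 0x43
CPU architecture: 8
CPU part    : 0x0a1
"""
```

With the sample from the thread, `implementer_name(parse_arm_cpuinfo(SAMPLE))` resolves 0x43 to a vendor name purely from the metadata table, keeping the parsing code itself generic.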

> 
> On Mon, Mar 30, 2020 at 8:01 PM Daniel P. Berrangé 
> wrote:
> 
> > On Mon, Mar 30, 2020 at 07:32:36PM +0800, Zhenyu Zheng wrote:
> > > Hi Daniel,
> > >
> > > Thanks for the review and reply, my first implementation was going to
> > > gather data from /proc/cpuinfo, but unlike X86, we can only get this kind
> > > of info:
> > >
> > >   processor   : 0
> > >   BogoMIPS: 200.00
> > >   Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
> > >   CPU implementer : 0x43
> > >   CPU architecture: 8
> > >   CPU variant : 0x1
> > >   CPU part: 0x0a1
> > >   CPU revision: 1
> > >
> > > so we have to perform some translation to produce human-readable
> > > information, and I mentioned that 'lscpu' has done that too. So Andrea
> > > Bolognani
> > > suggested that maybe we can use it directly, to avoid re-implementing
> > > the translation. Here is the discussion:
> > > https://www.redhat.com/archives/libvir-list/2020-March/msg00812.html
> >
> > On x86 we get majority of info straight from calling the CPUID instruction,
> > not /proc/cpuinfo, and use our XML data files in src/cpu_map to translate
> > things into human readable names.  I see you're adding XML data files
> > with names in the earlier patches. Is it possible to add the hex values
> > for the CPU implementer/architecture/variant/part to these XML files so
> > we can directly map them in libvirt, in the same way we do for x86?

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt-jenkins-ci PATCH 7/7] lcitool: Create and expose ccache wrappers

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 02:21:19PM +0200, Andrea Bolognani wrote:
> On Mon, 2020-03-30 at 12:52 +0100, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 08:34:59PM +0100, Andrea Bolognani wrote:
> > >  sys.stdout.write(textwrap.dedent("""
> > >  ENV LANG "en_US.UTF-8"
> > > +ENV PATH "/usr/local/share/ccache-wrappers:$PATH"
> > 
> > I don't think we should do this though.
> > 
> > When doing local container builds "make ci-build@BLAH" the container
> > content is throwaway, so using ccache is adding extra I/O to store
> > the cache arifacts but will not speed up compile as the cache will
> > always be initially empty.
> > 
> > I think we only want to set $PATH in the .gitlab-ci.yml file where
> > we have explicitly provided a persistent cache to use.
> 
> Sounds reasonable.
> 
> We could still expose the path as something like
> 
>   ENV CCACHE_WRAPPERSDIR "/usr/local/share/ccache-wrappers"
> 
> so that the CI rules would be able to do
> 
>   - export PATH="$CCACHE_WRAPPERSDIR:$PATH"
> 
> instead of hardcoding the exact path. What do you think?

Sure, works for me.


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt-jenkins-ci PATCH 7/7] lcitool: Create and expose ccache wrappers

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 12:52 +0100, Daniel P. Berrangé wrote:
> On Fri, Mar 27, 2020 at 08:34:59PM +0100, Andrea Bolognani wrote:
> >  sys.stdout.write(textwrap.dedent("""
> >  ENV LANG "en_US.UTF-8"
> > +ENV PATH "/usr/local/share/ccache-wrappers:$PATH"
> 
> I don't think we should do this though.
> 
> When doing local container builds "make ci-build@BLAH" the container
> content is throwaway, so using ccache is adding extra I/O to store
> the cache artifacts but will not speed up compiles, as the cache will
> always be initially empty.
> 
> I think we only want to set $PATH in the .gitlab-ci.yml file where
> we have explicitly provided a persistent cache to use.

Sounds reasonable.

We could still expose the path as something like

  ENV CCACHE_WRAPPERSDIR "/usr/local/share/ccache-wrappers"

so that the CI rules would be able to do

  - export PATH="$CCACHE_WRAPPERSDIR:$PATH"

instead of hardcoding the exact path. What do you think?
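The wrapper setup being discussed can be sketched as follows. This is an illustrative Python model of what lcitool does with `ln -s`, using placeholder paths under a temporary directory rather than the real `/usr/libexec/ccache-wrappers`:

```python
# Sketch: populate a wrapper directory with symlinks named after the
# compilers, all pointing at the ccache binary, so that prepending the
# directory to $PATH transparently routes compiles through ccache.
import os
import tempfile

def create_ccache_wrappers(wrappersdir, ccache, compilers):
    os.makedirs(wrappersdir, exist_ok=True)
    for comp in compilers:
        # wrapper name is the compiler's basename, like $(basename {cc})
        link = os.path.join(wrappersdir, os.path.basename(comp))
        os.symlink(ccache, link)
    return sorted(os.listdir(wrappersdir))

root = tempfile.mkdtemp()
ccache = os.path.join(root, "ccache")
open(ccache, "w").close()  # stand-in for /usr/bin/ccache
wrappers = create_ccache_wrappers(os.path.join(root, "ccache-wrappers"),
                                  ccache, ["cc", "/usr/bin/gcc"])
```

CI jobs would then do `export PATH="$CCACHE_WRAPPERSDIR:$PATH"` so that invoking `cc` or `gcc` hits the symlink, and ccache execs the real compiler based on the name it was invoked under.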

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [libvirt-jenkins-ci PATCH 3/7] lcitool: Create ccache wrappers outside of $HOME

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 02:17:15PM +0200, Andrea Bolognani wrote:
> On Mon, 2020-03-30 at 12:47 +0100, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 08:34:55PM +0100, Andrea Bolognani wrote:
> > > +- name: 'Create ccache wrappers'
> > > +  file:
> > > +path: /usr/local/share/ccache-wrappers
> > > +state: directory
> > 
> > I weakly suggest "/usr/local/share/ccache/bin"
> 
> I thought about that path, but then decided against it because
> /usr/local/share/ccache belongs to the ccache package itself on
> FreeBSD (which install all ports under /usr/local) and even on Linux
> when installing it from source without overriding the default prefix,
> so even though there would be no conflicts right now I'd rather steer
> clear of it.

Ah ok, I forgot FreeBSD puts ports into /usr/local.

> I also considered /usr/libexec/ccache-wrappers - what would you think
> of that option?

I don't mind either that or your original


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt-jenkins-ci PATCH 2/7] lcitool: Configure ccache using environment variables

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 02:11:29PM +0200, Andrea Bolognani wrote:
> On Mon, 2020-03-30 at 12:45 +0100, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 08:34:54PM +0100, Andrea Bolognani wrote:
> > > +export CCACHE_MAXSIZE="2G"
> > 
> > I was wondering what a good size for ccache would be. Is there any history
> > to why we picked 2G ?  Having it too big didn't really matter for the
> > Jenkins builders as it is kept local.  For GitLab the cache is downloaded
> > at start of the job off cloud cstorage. So we want it large enough to fit
> > a libvirt.git compile but small enough that outdated cruft gets purged
> > reasonably quickly, so we don't waste time in GitLab CI downloading GB's
> > of data that is no longer needed in the cache.
> > 
> > NB, this is NOT an objection to this patch, as 2GB is a pre-existing value
> > we used. Just want to know how we should consider tuning it in future.
> 
> I think we just scaled it down (from the default of 5 GiB) so that
> it would use most of the disk space that remained free in the VM's
> 15 GiB disk, while leaving some leeway to allow for repositories to
> grow. Nothing more scientific than that, I'm afraid.

Ok, I'll see if I can get some usage stats out of ccache for our CI jobs on
GitLab.

> Note that, for VMs, we're building not just libvirt but a bunch of
> other projects, so if we wanted to tweak it we'd have to take that
> into account as well and not size it for just libvirt itself.

True, but I imagine in terms of object size all the other projects probably
barely reach 5% of the main libvirt.git build size, so likely lost in the
noise.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [GSoC] Interested Student for the Project: 'Introducing job control to the storage driver'

2020-03-30 Thread Martin Kletzander

On Mon, Mar 30, 2020 at 12:23:32PM +0200, Michal Prívozník wrote:

On 29. 3. 2020 0:43, Martin Kletzander wrote:

On Wed, Mar 25, 2020 at 02:59:16PM +0100, Michal Prívozník wrote:

On 23. 3. 2020 10:45, Prathamesh Chavan wrote:

Hi Everyone,


Hey!

It's nice to see somebody new interested in libvirt.



I'm Prathamesh Chavan, a final-year student studying Computer Science
and Engineering at IIT Kharagpur. I was part of GSoC'17 under the Git
organization and interned at Nutanix, India, last summer.
I'm also currently working on a tiered file system with software wear
management for NVM technologies as my master's project.
I'm interested in contributing to the project "Introducing job
control to the storage driver" under Google Summer of Code 2020.
I'm currently going through the codebase and was able to build it
successfully.
It would be great to be assigned a bite-sized task for this project,
which could help me understand it better,


There is no assignment process, just pick anything you want. Adapting to
GLib is pretty popular as it usually removes some lines :-) But feel
free to pick anything you would like.

And for the GSoC project itself; we currently have in_use member for
virStorageVolDef structure which serves as a refcounter. Then we have
some mutexes to guard access to the counter. Whenever a 'long running'
storage API is called (e.g. virStorageVolWipe(),
virStorageVolDownload(), ...) the refcounter is increased at the
beginning and then decreased at the end. Then, virStorageVolDelete()
checks this refcounter to see if there is some other thread using it.

But we already have something similar implemented for virDomainObj -
jobs. Whenever an API wants to work with a domain object [1], it shall
acquire a job. This job then prevents other threads from modifying the
object in the meantime. Threads wanting to work on the same object will
serialize. The whole job control is described in src/qemu/THREADS.txt, so
I would start there.
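The in_use refcounter scheme described above can be modelled in a few lines. This is a Python toy, not libvirt code; the class and method names are invented, and the real implementation lives in the C storage driver:

```python
# Toy model of the storage driver's in_use refcounter: long-running APIs
# bump the counter while they work, and delete() refuses to proceed while
# any other thread still holds a reference.
import threading

class Volume:
    def __init__(self):
        self._lock = threading.Lock()
        self.in_use = 0  # guarded refcounter, like virStorageVolDef

    def begin_use(self):
        """Called at the start of e.g. a wipe or download operation."""
        with self._lock:
            self.in_use += 1

    def end_use(self):
        with self._lock:
            self.in_use -= 1

    def delete(self):
        """virStorageVolDelete()-style check: refuse while volume is busy."""
        with self._lock:
            return self.in_use == 0
```

The GSoC project would replace this ad-hoc counter with proper job control, where acquiring a job serializes threads up front instead of each API checking the counter itself.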

Michal

1: actually, that is not entirely true. Acquiring a job is required only
for those APIs which want to unlock the domain object in the middle.
Therefore, some 'short' APIs have no job acquiring implemented (e.g.
qemuDomainGetInfo()). But some other 'short' APIs do (e.g.
qemuDomainReset()).



What if you have a function which changes something, or even looks
something up, and it does not need to communicate with QEMU at all, but
it is called while some other async job is waiting for a monitor? I know
we discussed this multiple times and I always forget some minor detail,
so forgive me if I'm asking this for the 27th time already. But
shouldn't that function also have a job?


Oh yeah, I wasn't trying to be exhaustive and cover all corner cases
when introducing jobs to somebody who hears about the concept for the
first time. If a function modifies internal state then it should grab
_MODIFY job. This is, as you correctly say, to mutually exclude with
another thread that is doing some other modification.

But frankly, I don't remember all the details. I recall discussing
whether each API should grab a job, but I just don't remember the details.


Another question is whether this should also be the case for merely
reading and returning some info about the domain: shouldn't it also
take a job, even if it is just a QEMU_JOB_QUERY? Otherwise we'd have to
make sure that, before each unlock of a domain (even during a job), all
data integrity is kept.


I'm not completely sure what you are asking here. And how it relates to
storage driver.



I'm just trying to minimize the complexity: it is best if all APIs
working with an object take a job, no exceptions (to make it simpler to
follow). I think we're in the same boat ;-)


Michal




Re: [libvirt-jenkins-ci PATCH 3/7] lcitool: Create ccache wrappers outside of $HOME

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 12:47 +0100, Daniel P. Berrangé wrote:
> On Fri, Mar 27, 2020 at 08:34:55PM +0100, Andrea Bolognani wrote:
> > +- name: 'Create ccache wrappers'
> > +  file:
> > +path: /usr/local/share/ccache-wrappers
> > +state: directory
> 
> I weakly suggest "/usr/local/share/ccache/bin"

I thought about that path, but then decided against it because
/usr/local/share/ccache belongs to the ccache package itself on
FreeBSD (which install all ports under /usr/local) and even on Linux
when installing it from source without overriding the default prefix,
so even though there would be no conflicts right now I'd rather steer
clear of it.

I also considered /usr/libexec/ccache-wrappers - what would you think
of that option?

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [libvirt-jenkins-ci PATCH 2/7] lcitool: Configure ccache using environment variables

2020-03-30 Thread Andrea Bolognani
On Mon, 2020-03-30 at 12:45 +0100, Daniel P. Berrangé wrote:
> On Fri, Mar 27, 2020 at 08:34:54PM +0100, Andrea Bolognani wrote:
> > +export CCACHE_MAXSIZE="2G"
> 
> I was wondering what a good size for ccache would be. Is there any history
> to why we picked 2G ?  Having it too big didn't really matter for the
> Jenkins builders as it is kept local.  For GitLab the cache is downloaded
> at the start of the job off cloud storage. So we want it large enough to fit
> a libvirt.git compile but small enough that outdated cruft gets purged
> reasonably quickly, so we don't waste time in GitLab CI downloading GB's
> of data that is no longer needed in the cache.
> 
> NB, this is NOT an objection to this patch, as 2GB is a pre-existing value
> we used. Just want to know how we should consider tuning it in future.

I think we just scaled it down (from the default of 5 GiB) so that
it would use most of the disk space that remained free in the VM's
15 GiB disk, while leaving some leeway to allow for repositories to
grow. Nothing more scientific than that, I'm afraid.

Note that, for VMs, we're building not just libvirt but a bunch of
other projects, so if we wanted to tweak it we'd have to take that
into account as well and not size it for just libvirt itself.

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH 5/5] cpu: Introduce getHost support for ARM

2020-03-30 Thread Daniel P . Berrangé
On Mon, Mar 30, 2020 at 07:32:36PM +0800, Zhenyu Zheng wrote:
> Hi Daniel,
> 
> Thanks for the review and reply, my first implementation was going to
> gather data from /proc/cpuinfo, but unlike X86, we can only get this kind
> of info:
> 
>   processor   : 0
>   BogoMIPS: 200.00
>   Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
>   CPU implementer : 0x43
>   CPU architecture: 8
>   CPU variant : 0x1
>   CPU part: 0x0a1
>   CPU revision: 1
> 
> so we have to perform some translation to produce human-readable
> information, and I mentioned that 'lscpu' has done that too. So Andrea
> Bolognani
> suggested that maybe we can use it directly, to avoid re-implementing
> the translation. Here is the discussion:
> https://www.redhat.com/archives/libvir-list/2020-March/msg00812.html

On x86 we get majority of info straight from calling the CPUID instruction,
not /proc/cpuinfo, and use our XML data files in src/cpu_map to translate
things into human readable names.  I see you're adding XML data files
with names in the earlier patches. Is it possible to add the hex values
for the CPU implementer/architecture/variant/part to these XML files so
we can directly map them in libvirt, in the same way we do for x86?


Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [PATCH] qemu-shim: add a --with-config option

2020-03-30 Thread Daniel P . Berrangé
On Sat, Mar 28, 2020 at 12:38:57AM +0100, marcandre.lur...@redhat.com wrote:
> From: Marc-André Lureau 
> 
> Add an option to ease setting up a VM with existing libvirt qemu.conf.
> 
> Fixes:
> https://bugzilla.redhat.com/show_bug.cgi?id=1817776
> 
> Signed-off-by: Marc-André Lureau 
> ---
>  docs/manpages/virt-qemu-run.rst |  6 ++
>  src/libvirt_private.syms|  1 -
>  src/qemu/qemu_shim.c| 36 +
>  3 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/manpages/virt-qemu-run.rst b/docs/manpages/virt-qemu-run.rst
> index b06c311b1d..3196b4c8f4 100644
> --- a/docs/manpages/virt-qemu-run.rst
> +++ b/docs/manpages/virt-qemu-run.rst
> @@ -71,6 +71,12 @@ is a path to the XML description of the secret, whose UUID 
> should
>  match a secret referenced in the guest domain XML. The ``VALUE-FILE``
>  is a path containing the raw value of the secret.
>  
> +``-c``, ``--with-config``
> +
> +Copy the libvirt qemu.conf configuration to the root directory.  If
> +there is already a qemu.conf file in the $ROOT/etc directory, exit
> +with failure.

As discussed in the bugzilla, we really don't want to be using the
global qemu.conf file in combination with the embedded driver. We
are not intending to guarantee that the embedded driver will share
the same defaults as the global drivers.

In general we want to eliminate the need to touch qemu.conf at all.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt-jenkins-ci PATCH 7/7] lcitool: Create and expose ccache wrappers

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:59PM +0100, Andrea Bolognani wrote:
> VM-based builds have used ccache by default for a very long time,
> and now container-based builds will too.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/guests/lcitool b/guests/lcitool
> index 117e1a5..011fc07 100755
> --- a/guests/lcitool
> +++ b/guests/lcitool
> @@ -651,6 +651,8 @@ class Application:
>  varmap = self._dockerfile_build_varmap_rpm(facts, mappings, 
> pip_mappings, projects, cross_arch)
>  
>  varmap["package_manager"] = facts["package_manager"]
> +varmap["cc"] = facts["cc"]
> +varmap["ccache"] = facts["ccache"]
>  varmap["make"] = facts["make"]
>  varmap["ninja"] = facts["ninja"]
>  varmap["python"] = facts["python"]
> @@ -864,6 +866,21 @@ class Application:
>  "{package_manager} clean all -y",
>  ])
>  
> +commands.extend([
> +"mkdir -p /usr/local/share/ccache-wrappers",
> +])
> +
> +if cross_arch:
> +commands.extend([
> +"ln -s {ccache} 
> /usr/local/share/ccache-wrappers/{cross_abi}-cc",
> +"ln -s {ccache} 
> /usr/local/share/ccache-wrappers/{cross_abi}-$(basename {cc})",
> +])
> +else:
> +commands.extend([
> +"ln -s {ccache} /usr/local/share/ccache-wrappers/cc",
> +"ln -s {ccache} /usr/local/share/ccache-wrappers/$(basename 
> {cc})",
> +])
> +
>  script = "\nRUN " + (" && \\\n".join(commands)) + "\n"
>  sys.stdout.write(script.format(**varmap))

This part makes sense.

> @@ -900,6 +917,7 @@ class Application:
>  
>  sys.stdout.write(textwrap.dedent("""
>  ENV LANG "en_US.UTF-8"
> +ENV PATH "/usr/local/share/ccache-wrappers:$PATH"

I don't think we should do this though.

When doing local container builds ("make ci-build@BLAH") the container
content is throwaway, so using ccache adds extra I/O to store the
cache artifacts but will not speed up compilation, as the cache will
always be initially empty.

I think we only want to set $PATH in the .gitlab-ci.yml file where
we have explicitly provided a persistent cache to use.
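The wrapper-name logic in the quoted lcitool hunk can be sketched in Python; `ccache_wrapper_names` and its arguments are illustrative, not lcitool's real API:

```python
import os

def ccache_wrapper_names(cc, cross_abi=None):
    """Compute the symlink names that should point at the ccache binary.

    Mirrors the quoted lcitool hunk: native builds get "cc" plus the
    compiler's basename; cross builds get the ABI-prefixed variants.
    """
    base = os.path.basename(cc)
    if cross_abi:
        return ["%s-cc" % cross_abi, "%s-%s" % (cross_abi, base)]
    return ["cc", base]
```

So a native gcc build creates "cc" and "gcc" wrappers, while an aarch64
cross build creates "aarch64-linux-gnu-cc" and "aarch64-linux-gnu-gcc";
putting the wrapper directory first in $PATH then routes every compiler
invocation through ccache.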


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 5/7] lcitool: Use commands[] for deb-based distros

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:57PM +0100, Andrea Bolognani wrote:
> It's nicer to use, and will make further changes easier.
> 
> This commit is better viewed with 'git show -w'.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 29 ++---
>  1 file changed, 14 insertions(+), 15 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 6/7] lcitool: Use cross_commands[] for all distros

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:58PM +0100, Andrea Bolognani wrote:
> It's nicer to use, and consistent with how we're building the native
> part of the Dockerfile.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 33 +++--
>  1 file changed, 19 insertions(+), 14 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 4/7] lcitool: Refactor cross_arch handling a bit

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:56PM +0100, Andrea Bolognani wrote:
> It will make further changes easier.
> 
> This commit is better viewed with 'git show -w'.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 34 --
>  1 file changed, 16 insertions(+), 18 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 3/7] lcitool: Create ccache wrappers outside of $HOME

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:55PM +0100, Andrea Bolognani wrote:
> Having the wrappers inside $HOME works fine when we have control
> over which user will run the build, as is the case for the current
> Jenkins-based environment, but not when the user is chosen by an
> external entity, as is the case for container-based builds.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/playbooks/update/main.yml|  1 +
>  guests/playbooks/update/tasks/global.yml| 14 ++
>  guests/playbooks/update/tasks/users.yml | 21 -
>  guests/playbooks/update/templates/bashrc.j2 |  2 +-
>  4 files changed, 16 insertions(+), 22 deletions(-)
>  create mode 100644 guests/playbooks/update/tasks/global.yml

> diff --git a/guests/playbooks/update/tasks/global.yml 
> b/guests/playbooks/update/tasks/global.yml
> new file mode 100644
> index 000..504a549
> --- /dev/null
> +++ b/guests/playbooks/update/tasks/global.yml
> @@ -0,0 +1,14 @@
> +---
> +- name: 'Create ccache wrappers'
> +  file:
> +path: /usr/local/share/ccache-wrappers
> +state: directory

I weakly suggest "/usr/local/share/ccache/bin"

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 2/7] lcitool: Configure ccache using environment variables

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:54PM +0100, Andrea Bolognani wrote:
> We're going to stop creating $HOME/.ccache soon, so we need an
> alternative way to configure ccache; environment variables work
> perfectly fine.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/playbooks/update/tasks/users.yml  | 7 ---
>  guests/playbooks/update/templates/bashrc.j2  | 1 +
>  guests/playbooks/update/templates/ccache.conf.j2 | 1 -
>  3 files changed, 1 insertion(+), 8 deletions(-)
>  delete mode 100644 guests/playbooks/update/templates/ccache.conf.j2

Reviewed-by: Daniel P. Berrangé 


> diff --git a/guests/playbooks/update/templates/bashrc.j2 
> b/guests/playbooks/update/templates/bashrc.j2
> index 9cea90c..898d30b 100644
> --- a/guests/playbooks/update/templates/bashrc.j2
> +++ b/guests/playbooks/update/templates/bashrc.j2
> @@ -5,6 +5,7 @@ export NINJA="{{ ninja }}"
>  export PYTHON="{{ python }}"
>  
>  export MAKEFLAGS="-j{{ install_vcpus|int + 1 }}"
> +export CCACHE_MAXSIZE="2G"

I was wondering what a good size for ccache would be. Is there any history
to why we picked 2G?  Having it too big didn't really matter for the
Jenkins builders as the cache is kept local.  For GitLab the cache is
downloaded at the start of the job from cloud storage. So we want it large
enough to fit a libvirt.git compile but small enough that outdated cruft
gets purged reasonably quickly, so we don't waste time in GitLab CI
downloading GBs of data that are no longer needed in the cache.

NB, this is NOT an objection to this patch, as 2GB is a pre-existing value
we used. Just want to know how we should consider tuning it in future.

Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 1/7] lcitool: Improve ccache symlinks creation

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 08:34:53PM +0100, Andrea Bolognani wrote:
> Instead of having the same code snippet twice, once for Linux and
> once for FreeBSD, the only actual difference being the name of the
> compiler, include the compiler's path in the inventory among other
> paths and then figure out the correct name for the symlink based
> on that information.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/host_vars/libvirt-centos-7/main.yml|  1 +
>  guests/host_vars/libvirt-centos-8/main.yml|  1 +
>  guests/host_vars/libvirt-debian-10/main.yml   |  1 +
>  guests/host_vars/libvirt-debian-9/main.yml|  1 +
>  guests/host_vars/libvirt-debian-sid/main.yml  |  1 +
>  guests/host_vars/libvirt-fedora-30/main.yml   |  1 +
>  guests/host_vars/libvirt-fedora-31/main.yml   |  1 +
>  .../host_vars/libvirt-fedora-rawhide/main.yml |  1 +
>  guests/host_vars/libvirt-freebsd-11/main.yml  |  1 +
>  guests/host_vars/libvirt-freebsd-12/main.yml  |  1 +
>  .../libvirt-freebsd-current/main.yml  |  1 +
>  .../host_vars/libvirt-opensuse-151/main.yml   |  1 +
>  guests/host_vars/libvirt-ubuntu-1604/main.yml |  1 +
>  guests/host_vars/libvirt-ubuntu-1804/main.yml |  1 +
>  guests/playbooks/update/tasks/users.yml   | 19 +--
>  15 files changed, 15 insertions(+), 18 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 3/3] lcitool: Include some paths in the generated Dockerfiles

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 06:26:27PM +0100, Andrea Bolognani wrote:
> These paths are a reflection of the contents of the container
> image, so it makes sense to expose them in the container's
> environment.
> 
> This will allow the GitLab CI integration for both libvirt and
> libosinfo to stop setting these values themselves and start
> relying on them being already present in the environment instead.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 7 +++
>  1 file changed, 7 insertions(+)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 2/3] lcitool: Refactor varmap generation a bit

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 06:26:26PM +0100, Andrea Bolognani wrote:
> Some values don't depend on whether the underlying OS uses deb or
> rpm as the package format, so we can set them in common code.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/lcitool | 15 +--
>  1 file changed, 9 insertions(+), 6 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [libvirt-jenkins-ci PATCH 1/3] lcitool: Include paths in the inventory

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 06:26:25PM +0100, Andrea Bolognani wrote:
> Figuring them out at runtime is neat, but in practice they don't
> change frequently enough for it to be necessary; more importantly,
> including them in the inventory means we can use them in the
> Dockerfile generator in addition to the Ansible playbooks.
> 
> Signed-off-by: Andrea Bolognani 
> ---
>  guests/host_vars/libvirt-centos-7/main.yml|  9 +++
>  guests/host_vars/libvirt-centos-8/main.yml|  9 +++
>  guests/host_vars/libvirt-debian-10/main.yml   |  9 +++
>  guests/host_vars/libvirt-debian-9/main.yml|  9 +++
>  guests/host_vars/libvirt-debian-sid/main.yml  |  9 +++
>  guests/host_vars/libvirt-fedora-30/main.yml   |  9 +++
>  guests/host_vars/libvirt-fedora-31/main.yml   |  9 +++
>  .../host_vars/libvirt-fedora-rawhide/main.yml |  9 +++
>  guests/host_vars/libvirt-freebsd-11/main.yml  |  9 +++
>  guests/host_vars/libvirt-freebsd-12/main.yml  |  9 +++
>  .../libvirt-freebsd-current/main.yml  |  9 +++
>  .../host_vars/libvirt-opensuse-151/main.yml   |  9 +++
>  guests/host_vars/libvirt-ubuntu-1604/main.yml |  9 +++
>  guests/host_vars/libvirt-ubuntu-1804/main.yml |  9 +++
>  guests/playbooks/update/main.yml  |  1 -
>  guests/playbooks/update/tasks/paths.yml   | 71 ---
>  16 files changed, 126 insertions(+), 72 deletions(-)
>  delete mode 100644 guests/playbooks/update/tasks/paths.yml


Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH 1/3] virStorageSourceParseBackingURI: Preserve query parts of URI

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 04:51:19PM +0100, Peter Krempa wrote:
> For non-NBD URIs we need to preserve the query part as it may be
> important to refer to the image. If the query doesn't start with
> 'socket=' concatenate it to the name of the image.

I can see that this makes sense for HTTP URIs (and other schemes with
QEMU's curl driver), as the query string can easily be part of the
identifier needed to resolve the image. These query params are not things
which map directly to QEMU driver options for CURL (if there are some
such query params, we should strip them), and thus don't make sense to
map to XML attributes (indeed it is impossible to map them since they
can be completely arbitrary).

For other URIs schemes though, this feels like it opens the door to
passthrough of arbitrary args to QEMU. eg we shouldn't allow this
for gluster/rbd/iscsi/sheepdog/etc drivers - in all of those any
query parameters should be mapped to explicit attributes and not
included in the path name.
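The split being discussed — treat a `socket=` query specially, fold any other query back into the image name — can be sketched as follows (hypothetical helper, not libvirt's actual parser):

```python
from urllib.parse import urlsplit

def split_backing_uri(uristr):
    """Return (image_name, unix_socket) for a backing-store URI.

    Sketch of the behaviour in the patch: a query starting with
    "socket=" selects a UNIX socket, while any other query string is
    kept as part of the image name (as for curl-style http/https URIs).
    """
    uri = urlsplit(uristr)
    name = uri.path.lstrip("/")
    socket = None
    if uri.query:
        if uri.query.startswith("socket="):
            socket = uri.query[len("socket="):]
        else:
            name = "%s?%s" % (name, uri.query)
    return name, socket
```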

> 
> Signed-off-by: Peter Krempa 
> ---
>  src/util/virstoragefile.c | 15 +++
>  tests/virstoragetest.c|  4 ++--
>  2 files changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
> index c43e52d1f6..caf5de2d2c 100644
> --- a/src/util/virstoragefile.c
> +++ b/src/util/virstoragefile.c
> @@ -2818,6 +2818,7 @@ virStorageSourceParseBackingURI(virStorageSourcePtr src,
>  {
>  g_autoptr(virURI) uri = NULL;
>  const char *path = NULL;
> +g_autofree char *pathquery = NULL;
>  VIR_AUTOSTRINGLIST scheme = NULL;
> 
>  if (!(uri = virURIParse(uristr))) {
> @@ -2851,10 +2852,6 @@ virStorageSourceParseBackingURI(virStorageSourcePtr 
> src,
>  return -1;
>  }
> 
> -/* handle socket stored as a query */
> -if (uri->query)
> -src->hosts->socket = g_strdup(STRSKIP(uri->query, "socket="));
> -
>  /* uri->path is NULL if the URI does not contain slash after host:
>   * transport://host:port */
>  if (uri->path)
> @@ -2862,6 +2859,16 @@ virStorageSourceParseBackingURI(virStorageSourcePtr 
> src,
>  else
>  path = "";
> 
> +/* handle socket stored as a query */
> +if (uri->query) {
> +if (STRPREFIX(uri->query, "socket=")) {
> +src->hosts->socket = g_strdup(STRSKIP(uri->query, "socket="));
> +} else {
> +pathquery = g_strdup_printf("%s?%s", path, uri->query);
> +path = pathquery;
> +}
> +}
> +
>  /* possibly skip the leading slash  */
>  if (path[0] == '/')
>  path++;
> diff --git a/tests/virstoragetest.c b/tests/virstoragetest.c
> index 10d5421150..f0bb46a04c 100644
> --- a/tests/virstoragetest.c
> +++ b/tests/virstoragetest.c
> @@ -1632,7 +1632,7 @@ mymain(void)
> "\"file.url\": 
> \"https://host/folder/esx6.5-rhel7.7-x86%5f64/esx6.5-rhel7.7-x86%5f64-flat.vmdk?dcPath=data&dsName=esx6.5-matrix\","
> "\"file.timeout\": 2000"
>   "}",
> -   "<source protocol='https' name='folder/esx6.5-rhel7.7-x86_64/esx6.5-rhel7.7-x86_64-flat.vmdk'>\n"
> +   "<source protocol='https' name='folder/esx6.5-rhel7.7-x86_64/esx6.5-rhel7.7-x86_64-flat.vmdk?dcPath=data&amp;dsName=esx6.5-matrix'>\n"
> "  \n"
> "  \n"
> "  \n"
> @@ -1647,7 +1647,7 @@ mymain(void)
> "\"file.url\": 
> \"https://host/folder/esx6.5-rhel7.7-x86%5f64/esx6.5-rhel7.7-x86%5f64-flat.vmdk?dcPath=data&dsName=esx6.5-matrix\","
> "\"file.timeout\": 2000"
>   "}",
> -   "<source protocol='https' name='folder/esx6.5-rhel7.7-x86_64/esx6.5-rhel7.7-x86_64-flat.vmdk'>\n"
> +   "<source protocol='https' name='folder/esx6.5-rhel7.7-x86_64/esx6.5-rhel7.7-x86_64-flat.vmdk?dcPath=data&amp;dsName=esx6.5-matrix'>\n"
> "  \n"
> "  \n"
> "  \n"
> -- 
> 2.24.1
> 

Regards,
Daniel



Re: [PATCH 5/5] cpu: Introduce getHost support for ARM

2020-03-30 Thread Zhenyu Zheng
Hi Daniel,

Thanks for the review and reply. My first implementation was going to
gather data from /proc/cpuinfo, but unlike x86, we can only get this kind
of info:

  processor   : 0
  BogoMIPS: 200.00
  Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
  CPU implementer : 0x43
  CPU architecture: 8
  CPU variant : 0x1
  CPU part: 0x0a1
  CPU revision: 1

so we have to perform some translation to produce human-readable
information, and I mentioned that 'lscpu' has done that too. So Andrea
Bolognani suggested that maybe we can use it directly, to avoid
re-implementing the translation. Here is the discussion:
https://www.redhat.com/archives/libvir-list/2020-March/msg00812.html
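The translation step amounts to parsing lscpu's "Key: value" output; a minimal Python sketch (field names taken from the patch; the output format is assumed stable, which is exactly the fragility the review raises):

```python
def parse_lscpu_fields(output, wanted=("Vendor ID", "Model name", "Flags")):
    """Extract selected "Key: value" fields from lscpu-style output."""
    fields = {}
    for line in output.splitlines():
        # Split each line on the first colon, as the C patch does with
        # strchr(cur, ':'), and keep only the fields we care about.
        key, sep, value = line.partition(":")
        if sep and key.strip() in wanted:
            fields[key.strip()] = value.strip()
    return fields
```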

BR,

Zhenyu

On Mon, Mar 30, 2020 at 7:27 PM Daniel P. Berrangé 
wrote:

> On Fri, Mar 27, 2020 at 04:52:25PM +0800, Zhenyu Zheng wrote:
> > Introduce getHost support for ARM, using data from the 'lscpu'
> > command's output. 'util-linux/lscpu' has provided a very good
> > translation of ARM CPU data starting from release v2.32; use it
> > directly to avoid re-implementing the translation.
>
> I'm a bit wary of this approach, because parsing the output of
> command line tools is generally fragile / liable to break in
> future releases, since few tools guarantee that their output
> format is stable for machine parsing.
>
> How hard would it be to probe this directly in libvirt, as we
> do on the x86 arch?
>
> >
> > Signed-off-by: Zhenyu Zheng 
> > ---
> >  src/cpu/cpu_arm.c | 198 +-
> >  1 file changed, 197 insertions(+), 1 deletion(-)
> >
> > diff --git a/src/cpu/cpu_arm.c b/src/cpu/cpu_arm.c
> > index 969025b5cf..30d9c0ae2e 100644
> > --- a/src/cpu/cpu_arm.c
> > +++ b/src/cpu/cpu_arm.c
> > @@ -22,8 +22,11 @@
> >  #include 
> >
> >  #include "viralloc.h"
> > +#include "virlog.h"
> >  #include "cpu.h"
> >  #include "cpu_map.h"
> > +#include "vircommand.h"
> > +#include "virfile.h"
> >  #include "virstring.h"
> >  #include "virxml.h"
> >  #include "cpu_map.h"
> > @@ -31,6 +34,14 @@
> >
> >  #define VIR_FROM_THIS VIR_FROM_CPU
> >
> > +VIR_LOG_INIT("cpu.cpu_arm");
> > +
> > +static const char *lsCpuPath = "/usr/bin/lscpu";
> > +
> > +#define LSCPU lsCpuPath
> > +#define MAX_LSCPU_SIZE (1024*1024)   /* 1MB limit for lscpu output */
> > +
> > +
> >  static const virArch archs[] = {
> >  VIR_ARCH_ARMV6L,
> >  VIR_ARCH_ARMV7B,
> > @@ -464,13 +475,198 @@ virCPUarmValidateFeatures(virCPUDefPtr cpu)
> >  return 0;
> >  }
> >
> > +static int
> > +armCpuDataFromLsCpu(virCPUarmData *data)
> > +{
> > +int ret = -1;
> > +char *outbuf = NULL;
> > +char *eol = NULL;
> > +const char *cur;
> > +virCommandPtr cmd = NULL;
> > +g_autofree char *lscpu = NULL;
> > +
> > +if (!data)
> > +return ret;
> > +
> > +lscpu = virFindFileInPath("lscpu");
> > +cmd = virCommandNew(lscpu);
> > +virCommandSetOutputBuffer(cmd, &outbuf);
> > +
> > +if (virCommandRun(cmd, NULL) < 0)
> > +goto cleanup;
> > +
> > +if ((cur = strstr(outbuf, "Vendor ID")) == NULL) {
> > +virReportError(VIR_ERR_INTERNAL_ERROR,
> > +   _("there is no \"Vendor ID\" info in %s command
> result"), LSCPU);
> > +goto cleanup;
> > +}
> > +cur = strchr(cur, ':') + 1;
> > +eol = strchr(cur, '\n');
> > +virSkipSpaces(&cur);
> > +if (!eol)
> > +goto cleanup;
> > +
> > +data->vendor_id = g_strndup(cur, eol - cur);
> > +
> > +if ((cur = strstr(outbuf, "Model name")) == NULL) {
> > +virReportError(VIR_ERR_INTERNAL_ERROR,
> > +   _("there is no \"Model name\" info in %s command
> result"), LSCPU);
> > +goto cleanup;
> > +}
> > +cur = strchr(cur, ':') + 1;
> > +eol = strchr(cur, '\n');
> > +virSkipSpaces(&cur);
> > +if (!eol)
> > +goto cleanup;
> > +
> > +data->model_name = g_strndup(cur, eol - cur);
> > +
> > +if ((cur = strstr(outbuf, "Flags")) == NULL) {
> > +virReportError(VIR_ERR_INTERNAL_ERROR,
> > +   _("there is no \"Flags\" info in %s command
> result"), LSCPU);
> > +goto cleanup;
> > +}
> > +cur = strchr(cur, ':') + 1;
> > +eol = strchr(cur, '\n');
> > +virSkipSpaces(&cur);
> > +if (!eol)
> > +goto cleanup;
> > +
> > +data->features = g_strndup(cur, eol - cur);
> > +
> > +ret = 0;
> > +
> > + cleanup:
> > +virCommandFree(cmd);
> > +VIR_FREE(outbuf);
> > +return ret;
> > +}
> > +
> > +static int
> > +armCpuDataParseFeatures(virCPUDefPtr cpu,
> > +const virCPUarmData *cpuData)
> > +{
> > +int ret = -1;
> > +size_t i;
> > +char **features;
> > +
> > +if (!cpu || !cpuData)
> > +return ret;
> > +
> > +if (!(features = virStringSplitCount(cpuData->features, " ",
> > + 0, &cpu->nfeatures)))
> > +return ret;
> > +if (cpu->nfeatures) {
> > +if 

Re: [PATCH 5/5] cpu: Introduce getHost support for ARM

2020-03-30 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 04:52:25PM +0800, Zhenyu Zheng wrote:
> Introduce getHost support for ARM, using data from the 'lscpu'
> command's output. 'util-linux/lscpu' has provided a very good
> translation of ARM CPU data starting from release v2.32; use it
> directly to avoid re-implementing the translation.

I'm a bit wary of this approach, because parsing the output of
command line tools is generally fragile / liable to break in
future releases, since few tools guarantee that their output
format is stable for machine parsing.

How hard would it be to probe this directly in libvirt, as we
do on the x86 arch?
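Probing directly would mean decoding the raw /proc/cpuinfo IDs (like the "CPU implementer"/"CPU part" pair shown earlier in the thread) against a maintained table. A rough sketch — the table entries here are illustrative; a real implementation would carry the full mapping in libvirt's CPU map rather than these two hard-coded rows:

```python
# Illustrative lookup tables; a real implementation needs the complete
# implementer/part mapping that util-linux's lscpu ships.
IMPLEMENTERS = {0x41: "ARM", 0x43: "Cavium"}
PARTS = {(0x43, 0x0A1): "ThunderX 88XX"}

def decode_cpuinfo(text):
    """Translate raw /proc/cpuinfo hex IDs into human-readable names."""
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    impl = int(info["CPU implementer"], 16)
    part = int(info["CPU part"], 16)
    # Fall back to the raw hex value when the table has no entry.
    return (IMPLEMENTERS.get(impl, hex(impl)),
            PARTS.get((impl, part), hex(part)))
```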

> 
> Signed-off-by: Zhenyu Zheng 
> ---
>  src/cpu/cpu_arm.c | 198 +-
>  1 file changed, 197 insertions(+), 1 deletion(-)
> 
> diff --git a/src/cpu/cpu_arm.c b/src/cpu/cpu_arm.c
> index 969025b5cf..30d9c0ae2e 100644
> --- a/src/cpu/cpu_arm.c
> +++ b/src/cpu/cpu_arm.c
> @@ -22,8 +22,11 @@
>  #include 
>  
>  #include "viralloc.h"
> +#include "virlog.h"
>  #include "cpu.h"
>  #include "cpu_map.h"
> +#include "vircommand.h"
> +#include "virfile.h"
>  #include "virstring.h"
>  #include "virxml.h"
>  #include "cpu_map.h"
> @@ -31,6 +34,14 @@
>  
>  #define VIR_FROM_THIS VIR_FROM_CPU
>  
> +VIR_LOG_INIT("cpu.cpu_arm");
> +
> +static const char *lsCpuPath = "/usr/bin/lscpu";
> +
> +#define LSCPU lsCpuPath
> +#define MAX_LSCPU_SIZE (1024*1024)   /* 1MB limit for lscpu output */
> +
> +
>  static const virArch archs[] = {
>  VIR_ARCH_ARMV6L,
>  VIR_ARCH_ARMV7B,
> @@ -464,13 +475,198 @@ virCPUarmValidateFeatures(virCPUDefPtr cpu)
>  return 0;
>  }
>  
> +static int
> +armCpuDataFromLsCpu(virCPUarmData *data)
> +{
> +int ret = -1;
> +char *outbuf = NULL;
> +char *eol = NULL;
> +const char *cur;
> +virCommandPtr cmd = NULL;
> +g_autofree char *lscpu = NULL;
> +
> +if (!data)
> +return ret;
> +
> +lscpu = virFindFileInPath("lscpu");
> +cmd = virCommandNew(lscpu);
> +virCommandSetOutputBuffer(cmd, &outbuf);
> +
> +if (virCommandRun(cmd, NULL) < 0) 
> +goto cleanup;
> +
> +if ((cur = strstr(outbuf, "Vendor ID")) == NULL) {
> +virReportError(VIR_ERR_INTERNAL_ERROR,
> +   _("there is no \"Vendor ID\" info in %s command 
> result"), LSCPU);
> +goto cleanup;
> +}
> +cur = strchr(cur, ':') + 1;
> +eol = strchr(cur, '\n');
> +virSkipSpaces(&cur);
> +if (!eol)
> +goto cleanup;
> +
> +data->vendor_id = g_strndup(cur, eol - cur);
> +
> +if ((cur = strstr(outbuf, "Model name")) == NULL) {
> +virReportError(VIR_ERR_INTERNAL_ERROR,
> +   _("there is no \"Model name\" info in %s command 
> result"), LSCPU);
> +goto cleanup;
> +}
> +cur = strchr(cur, ':') + 1;
> +eol = strchr(cur, '\n');
> +virSkipSpaces(&cur);
> +if (!eol)
> +goto cleanup;
> +
> +data->model_name = g_strndup(cur, eol - cur);
> +
> +if ((cur = strstr(outbuf, "Flags")) == NULL) {
> +virReportError(VIR_ERR_INTERNAL_ERROR,
> +   _("there is no \"Flags\" info in %s command result"), 
> LSCPU);
> +goto cleanup;
> +}
> +cur = strchr(cur, ':') + 1;
> +eol = strchr(cur, '\n');
> +virSkipSpaces(&cur);
> +if (!eol)
> +goto cleanup;
> +
> +data->features = g_strndup(cur, eol - cur);
> +
> +ret = 0;
> +
> + cleanup:
> +virCommandFree(cmd);
> +VIR_FREE(outbuf);
> +return ret;
> +}
> +
> +static int
> +armCpuDataParseFeatures(virCPUDefPtr cpu,
> +const virCPUarmData *cpuData)
> +{
> +int ret = -1;
> +size_t i;
> +char **features;
> +
> +if (!cpu || !cpuData)
> +return ret;
> +
> +if (!(features = virStringSplitCount(cpuData->features, " ",
> + 0, &cpu->nfeatures)))
> +return ret;
> +if (cpu->nfeatures) {
> +if (VIR_ALLOC_N(cpu->features, cpu->nfeatures) < 0)
> +goto error;
> +
> +for (i = 0; i < cpu->nfeatures; i++) {
> +cpu->features[i].policy = VIR_CPU_FEATURE_REQUIRE;
> +cpu->features[i].name = g_strdup(features[i]);
> +}
> +}
> +
> +ret = 0;
> +
> + cleanup:
> +virStringListFree(features);
> +return ret;
> +
> + error:
> +for (i = 0; i < cpu->nfeatures; i++)
> +VIR_FREE(cpu->features[i].name);
> +VIR_FREE(cpu->features);
> +cpu->nfeatures = 0;
> +goto cleanup;
> +}
> +
> +static int
> +armDecode(virCPUDefPtr cpu,
> +  const virCPUarmData *cpuData,
> +  virDomainCapsCPUModelsPtr models)
> +{
> +virCPUarmMapPtr map;
> +virCPUarmModelPtr model;
> +virCPUarmVendorPtr vendor = NULL;
> +
> +if (!cpuData || !(map = virCPUarmGetMap()))
> +return -1;
> +
> +if (!(model = armModelFind(map, cpuData->model_name))) {
> +virReportError(VIR_ERR_OPERATION_FAILED,
> +   _("Cannot find CPU model with name %s"),
> +  

Re: [PATCH v2 11/11] qemu: Make auto dump path generation embed driver aware

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:15PM +0100, Michal Privoznik wrote:
> So far, libvirt generates the following path for automatic dumps:
> 
>   $autoDumpPath/$id-$shortName-$timestamp
> 
> where $autoDumpPath is where libvirt stores dumps of guests (e.g.
> /var/lib/libvirt/qemu/dump), $id is domain ID and $shortName is
> shortened version of domain name. So for instance, the generated
> path may look something like this:
> 
>   /var/lib/libvirt/qemu/dump/1-QEMUGuest-2020-03-25-10:40:50
> 
> While in the case of the embed driver the following path would be
> generated by default:
> 
>   $root/lib/libvirt/qemu/dump/1-QEMUGuest-2020-03-25-10:40:50
> 
> which does not clash with other embed drivers, we allow users to
> override the default and have all embed drivers use the same
> prefix. This can create clashing paths. Fortunately, we can reuse
> the approach for machined name generation
> (v6.1.0-178-gc9bd08ee35) and include part of hash of the root in
> the generated path.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/qemu/qemu_driver.c | 6 ++
>  1 file changed, 6 insertions(+)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH v2 10/11] qemu: Make memory path generation embed driver aware

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:14PM +0100, Michal Privoznik wrote:
> So far, libvirt generates the following path for memory:
> 
>   $memoryBackingDir/$id-$shortName/ram-nodeN
> 
> where $memoryBackingDir is the path where QEMU mmaps() memory for
> the guest (e.g. /var/lib/libvirt/qemu/ram), $id is domain ID
> and $shortName is shortened version of domain name. So for
> instance, the generated path may look something like this:
> 
>   /var/lib/libvirt/qemu/ram/1-QEMUGuest/ram-node0
> 
> While in the case of the embed driver the following path would be
> generated by default:
> 
>   $root/lib/qemu/ram/1-QEMUGuest/ram-node0
> 
> which does not clash with other embed drivers, we allow users to
> override the default and have all embed drivers use the same
> prefix. This can create clashing paths. Fortunately, we can reuse
> the approach for machined name generation
> (v6.1.0-178-gc9bd08ee35) and include part of hash of the root in
> the generated path.
> 
> Note, the important change is in qemuGetMemoryBackingBasePath().
> The rest is needed to pass driver around.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/qemu/qemu_command.c | 15 +++
>  src/qemu/qemu_conf.c| 21 ++---
>  src/qemu/qemu_conf.h|  8 
>  src/qemu/qemu_process.c |  5 ++---
>  4 files changed, 27 insertions(+), 22 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH v2 09/11] qemu: Make hugepages path generation embed driver aware

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:13PM +0100, Michal Privoznik wrote:
> So far, libvirt generates the following path for hugepages:
> 
>   $mnt/libvirt/qemu/$id-$shortName
> 
> where $mnt is the mount point of hugetlbfs corresponding to
> hugepages of desired size (e.g. /dev/hugepages), $id is domain ID
> and $shortName is shortened version of domain name. So for
> instance, the generated path may look something like this:
> 
>   /dev/hugepages/libvirt/qemu/1-QEMUGuest
> 
> But this won't work with embed driver really, because if there
> are two instances of embed driver, and they both want to start a
> domain with the same name and with hugepages, both drivers will
> generate the same path which is not desired. Fortunately, we can
> reuse the approach for machined name generation
> (v6.1.0-178-gc9bd08ee35) and include part of hash of the root in
> the generated path.
> 
> Note, the important change is in qemuGetBaseHugepagePath(). The
> rest is needed to pass driver around.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/qemu/qemu_command.c |  4 ++--
>  src/qemu/qemu_conf.c| 24 +---
>  src/qemu/qemu_conf.h| 10 ++
>  src/qemu/qemu_driver.c  |  2 +-
>  src/qemu/qemu_process.c |  2 +-
>  5 files changed, 27 insertions(+), 15 deletions(-)

Reviewed-by: Daniel P. Berrangé 
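The clash-avoidance scheme these commit messages describe — embedding part of a hash of the embed driver's root in the generated path — can be sketched as follows (the hash function, truncation length, and path layout here are illustrative, not libvirt's exact format):

```python
import hashlib

def per_domain_path(base, root, dom_id, short_name, hash_len=8):
    """Build a per-domain path that is unique per embed-driver root.

    Two embed driver instances with different roots but the same domain
    name and ID now generate distinct paths under a shared base
    directory (e.g. a common hugetlbfs mount point).
    """
    digest = hashlib.sha256(root.encode("utf-8")).hexdigest()[:hash_len]
    return "%s/libvirt/qemu/%s-%d-%s" % (base, digest, dom_id, short_name)
```

With this, overriding the prefix so that all embed drivers share one base directory no longer lets two drivers collide on the same domain name.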


Regards,
Daniel



[PATCH 2/2] qemuDomainSnapshotDiskPrepareOne: Don't load the relative path with blockdev

2020-03-30 Thread Peter Krempa
Since we are refreshing the relative paths when doing the blockjobs we
no longer need to load them upfront when doing the snapshot.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_driver.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ae38b9c914..78024614cf 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -15167,8 +15167,9 @@ qemuDomainSnapshotDiskPrepareOne(virQEMUDriverPtr 
driver,
 dd->initialized = true;

 /* relative backing store paths need to be updated so that relative
- * block commit still works */
-if (reuse) {
+ * block commit still works. With blockdev we must update it when doing
+ * commit anyways so it's skipped here */
+if (reuse && !blockdev) {
 if (supportsBacking) {
 g_autofree char *backingStoreStr = NULL;

-- 
2.24.1



[PATCH 0/2] qemu: Fix --keep-relative for block-commit/pull with blockdev

2020-03-30 Thread Peter Krempa
Peter Krempa (2):
  qemu: block: Support VIR_DOMAIN_BLOCK_COMMIT/PULL/REBASE_RELATIVE with
blockdev
  qemuDomainSnapshotDiskPrepareOne: Don't load the relative path with
blockdev

 src/qemu/qemu_block.c  | 46 ++
 src/qemu/qemu_block.h  |  5 +
 src/qemu/qemu_driver.c | 13 ++--
 3 files changed, 62 insertions(+), 2 deletions(-)

-- 
2.24.1



[PATCH 1/2] qemu: block: Support VIR_DOMAIN_BLOCK_COMMIT/PULL/REBASE_RELATIVE with blockdev

2020-03-30 Thread Peter Krempa
Preservation of the relative relationship requires us to load the
backing store strings from the disk images. With blockdev we stopped
detecting the backing chain if it's specified in the XML, so the relative
links were not loaded at that point. To preserve the pre-blockdev
functionality without accessing the backing chain unnecessarily during
VM startup, we must refresh the relative links when a relative block
commit or block pull is requested.

https://bugzilla.redhat.com/show_bug.cgi?id=1818655

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_block.c  | 46 ++
 src/qemu/qemu_block.h  |  5 +
 src/qemu/qemu_driver.c |  8 
 3 files changed, 59 insertions(+)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 648c3f1026..fe5a0a6a7d 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -3378,3 +3378,49 @@ qemuBlockStorageSourceGetCookieString(virStorageSourcePtr src)

 return virBufferContentAndReset(&buf);
 }
+
+
+/**
+ * qemuBlockUpdateRelativeBacking:
+ * @vm: domain object
+ * @src: starting point of the update
+ * @topsrc: top level image in the backing chain (used to get security label)
+ *
+ * Reload data necessary for keeping backing store links starting from @src
+ * relative.
+ */
+int
+qemuBlockUpdateRelativeBacking(virDomainObjPtr vm,
+   virStorageSourcePtr src,
+   virStorageSourcePtr topsrc)
+{
+qemuDomainObjPrivatePtr priv = vm->privateData;
+virQEMUDriverPtr driver = priv->driver;
+virStorageSourcePtr n;
+
+for (n = src; virStorageSourceHasBacking(n); n = n->backingStore) {
+g_autofree char *backingStoreStr = NULL;
+int rc;
+
+if (n->backingStore->relPath)
+break;
+
+if (!virStorageFileSupportsBackingChainTraversal(n))
+continue;
+
+if (qemuDomainStorageFileInit(driver, vm, n, topsrc) < 0)
+return -1;
+
+rc = virStorageFileGetBackingStoreStr(n, &backingStoreStr);
+
+virStorageFileDeinit(n);
+
+if (rc < 0)
+return rc;
+
+if (backingStoreStr && virStorageIsRelative(backingStoreStr))
+n->backingStore->relPath = g_steal_pointer(&backingStoreStr);
+}
+
+return 0;
+}
diff --git a/src/qemu/qemu_block.h b/src/qemu/qemu_block.h
index 8b57ffd8a5..2ad2ce1a1f 100644
--- a/src/qemu/qemu_block.h
+++ b/src/qemu/qemu_block.h
@@ -261,3 +261,8 @@ qemuBlockStorageSourceNeedsStorageSliceLayer(const virStorageSource *src);

 char *
 qemuBlockStorageSourceGetCookieString(virStorageSourcePtr src);
+
+int
+qemuBlockUpdateRelativeBacking(virDomainObjPtr vm,
+   virStorageSourcePtr src,
+   virStorageSourcePtr topsrc);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 570dc059e9..ae38b9c914 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17466,6 +17466,10 @@ qemuDomainBlockPullCommon(virDomainObjPtr vm,
 goto endjob;
 }

+if (blockdev &&
+qemuBlockUpdateRelativeBacking(vm, disk->src, disk->src) < 0)
+goto endjob;
+
 if (virStorageFileGetRelativeBackingPath(disk->src->backingStore,
  baseSource,
  ) < 0)
@@ -18593,6 +18597,10 @@ qemuDomainBlockCommit(virDomainPtr dom,
 goto endjob;
 }

+if (blockdev && top_parent &&
+qemuBlockUpdateRelativeBacking(vm, top_parent, disk->src) < 0)
+goto endjob;
+
 if (virStorageFileGetRelativeBackingPath(topSource, baseSource,
  ) < 0)
 goto endjob;
-- 
2.24.1



Re: [PATCH v2 07/11] qemu: use virQEMUDriverGetEmbedRoot() to access embeddedRoot

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:11PM +0100, Michal Privoznik wrote:
> Now that we have a shiny accessor for driver->embeddedRoot, use it
> instead of accessing the struct member directly.

Is this really better ? 

> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_driver.c  | 6 +++---
>  src/qemu/qemu_process.c | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 5cbf2f2478..664ea4eeda 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -1173,7 +1173,7 @@ static virDrvOpenStatus qemuConnectOpen(virConnectPtr conn,
>  return VIR_DRV_OPEN_ERROR;
>  }
>  
> -if (qemu_driver->embeddedRoot) {
> +if (virQEMUDriverGetEmbedRoot(qemu_driver)) {
>  const char *root = virURIGetParam(conn->uri, "root");
>  if (!root)
>  return VIR_DRV_OPEN_ERROR;
> @@ -1184,11 +1184,11 @@ static virDrvOpenStatus qemuConnectOpen(virConnectPtr conn,
>  return VIR_DRV_OPEN_ERROR;
>  }
>  
> -if (STRNEQ(root, qemu_driver->embeddedRoot)) {
> +if (STRNEQ(root, virQEMUDriverGetEmbedRoot(qemu_driver))) {
>  virReportError(VIR_ERR_INTERNAL_ERROR,
> _("Cannot open embedded driver at path '%s', "
>   "already open with path '%s'"),
> -   root, qemu_driver->embeddedRoot);
> +   root, virQEMUDriverGetEmbedRoot(qemu_driver));
>  return VIR_DRV_OPEN_ERROR;
>  }
>  } else {
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index fcb47e8596..b0b2258850 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -6715,7 +6715,7 @@ qemuProcessLaunch(virConnectPtr conn,
> _("Domain autodestroy requires a connection handle"));
>  return -1;
>  }
> -if (driver->embeddedRoot) {
> +if (virQEMUDriverGetEmbedRoot(driver)) {
>  virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> _("Domain autodestroy not supported for embedded drivers yet"));
>  return -1;
> -- 
> 2.24.1
> 

Regards,
Daniel



Re: [PATCH v2 08/11] Revert "qemu_conf: Track embed root dir"

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:12PM +0100, Michal Privoznik wrote:
> This reverts commit 06a19921b6a522cd7b4d352c9320909752947de3.
> 
> What I hadn't realized when writing this ^^ commit is that the
> virQEMUDriver structure already stores the root directory path.
> And since the pointer is immutable it can be accessed right from
> the structure and thus there is no need to duplicate it in the
> driver config.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Andrea Bolognani 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/qemu/qemu_conf.c | 2 --
>  src/qemu/qemu_conf.h | 2 --
>  2 files changed, 4 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH v2 05/11] qemu: Introduce virQEMUDriverGetEmbedRoot

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:09PM +0100, Michal Privoznik wrote:
> This function returns embeddedRoot member of the driver
> structure.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Andrea Bolognani 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/qemu/qemu_conf.c | 12 
>  src/qemu/qemu_conf.h |  1 +
>  2 files changed, 13 insertions(+)
> 
> diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
> index 9786e19f8f..fdc6f53ad3 100644
> --- a/src/qemu/qemu_conf.c
> +++ b/src/qemu/qemu_conf.c
> @@ -1230,6 +1230,18 @@ virQEMUDriverIsPrivileged(virQEMUDriverPtr driver)
>  return driver->privileged;
>  }
>  
> +/* virQEMUDriverGetEmbedRoot:
> + * @driver: the QEMU driver
> + *
> + * Returns root directory specified in connection URI for embed
> + * mode, NULL otherwise.
> + */
> +const char *
> +virQEMUDriverGetEmbedRoot(virQEMUDriverPtr driver)
> +{
> +return driver->embeddedRoot;
> +}

I don't really see the benefit in this method. The embeddedRoot
field is immutable so we can just be accessing it directly with
no need for APIs wrappers, as we often do when accesing the
privileged field.


Regards,
Daniel



Re: GSoC'20 Interested Student: Adding support to Jailhouse Hypervisor

2020-03-30 Thread Jan Kiszka

On 30.03.20 10:02, PRAKHAR BANSAL wrote:

Hi Jan,

On Sat, Mar 28, 2020 at 4:12 AM Jan Kiszka <jan.kis...@web.de> wrote:

On 28.03.20 08:47, PRAKHAR BANSAL wrote:
 > Hi Jan,
 >
 > Thanks for the reply!
 >
 > I was only considering the command-line tool "code" for reference to the
 > Jailhouse kernel API (ioctl calls) because I didn't find documentation
 > of the Jailhouse kernel APIs.

Right, the IOCTL API is not documented so far. It is currently only used
inside the Jailhouse project. This needs to be formalized when there
shall be external users like a libvirt driver.

That might be a nice small contribution task: Create some
Documentation/driver-interfaces.md that describes the IOCTLs along with
their parameter structures and that also includes the current
sysfs-entries.txt as a section. Then send this as patch here. I'll help
out when details are not clear from reading the code.

Sure. I will do that.

 >
 > For the second part as you mentioned that Jailhouse can only create
 > cells with the constraints defined in the root cell configuration. I
 > have a few questions regarding that.
 >
 > 1. Is there a way to know if Jailhouse is enabled on the host and get
 > the root cell configuration(s) from Jailhouse through an API? This can
 > be used while binding libvirt to the Jailhouse hypervisor.

Look at

https://github.com/siemens/jailhouse/blob/master/Documentation/sysfs-entries.txt
for what is reported as runtime information. Full configurations can't
be read back at this point. This might be reconsidered in the light of
[1], but I wouldn't plan for that yet.


Ok, sure. I am looking into it.


 >
 > 2. If Jailhouse is not enabled (again, can we know this using some API?),
 > then can libvirt enable/disable Jailhouse during the libvirt binding of
 > the Jailhouse driver, with a given set of Jailhouse cell configurations
 > describing a complete partitioned system?

With the API above and a given configuration set, yes. The config set
would have to be provided to the libvirt driver in some to-be-defined
way (e.g. /etc/libvirt/jailhouse.conf -> /etc/libvirt/jailhouse/*.cell).

Cool, got it. Thanks!

 >
 > 3. I was wondering: you mentioned that the libvirt driver should check
 > for a mismatch of the cell configuration with the root cell configuration,
 > but isn't that done by Jailhouse itself? If yes, then libvirt can just
 > pass the cell creation requests on to Jailhouse and return the response
 > to the user as-is, rather than the driver doing any such mismatch check.

With matching I'm referring to a libvirt user request like "create a
domain with 2 CPUs", while there are no cells left that have more than
one CPU. Or "give the domain 1G RAM", and you need to find an available
cell with that much memory. Those are simple examples. A request that
states "connect the domain to the host network A" implies that a cell
has a shared-memory link to, say, the root cell that can be configured
to bridge this. But let's keep that for later and start as simple as
possible.


Do I need to match the libvirt user-requested cell config with only root
cells or with all cells present at that time?


With all non-root cells - the root cell will be occupied already (it
runs libvirt e.g.).



I wanted to ask you for a favor regarding the proposal, as the deadline is
approaching. Could I prepare a proposal for this project based on our
discussion here and improve it later based on feedback after the
deadline? I understand that I got a late start on the project search
and selection.


Sure, please go ahead.

Jan



Thanks,
Prakhar


Jan

[1]

https://groups.google.com/d/msgid/jailhouse-dev/CADiTV-1QiRhSWZnw%2BkHhJMO-BoA4sAcOmTkQE7ZWbHkGh3Jexw%40mail.gmail.com

 >
 > -Prakhar
 >
 >  > On Thu, Mar 26, 2020 at 1:49 AM Jan Kiszka <jan.kis...@web.de> wrote:
 >
 >     Hi Prakhar,
 >
 >     On 25.03.20 05:36, PRAKHAR BANSAL wrote:
 >      > Hi Jan,
 >      >
 >      > Thanks for the reply. I looked deeper into the libvirt and
 >      > Jailhouse source code and found the following two things that
 >      > seem relevant to the project I am interested in.
 >      >
 >      > - Libvirt driver interface at [libvirt.git] / src / driver.h
 >      > - Jailhouse tool, which is using the ioctl API of the Jailhouse,
 >      >   available at

Re: [PATCH v2 04/11] virDomainDriverGenerateMachineName: Factor out embed path hashing

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:08PM +0100, Michal Privoznik wrote:
> The code that generates "qemu-embed-$hash" is going to be useful
> in more places. Separate it out into a function.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Andrea Bolognani 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/hypervisor/domain_driver.c | 42 ++
>  src/hypervisor/domain_driver.h |  4 
>  src/libvirt_private.syms   |  1 +
>  3 files changed, 33 insertions(+), 14 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH v2 03/11] conf: Move virDomainGenerateMachineName to hypervisor/

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:07PM +0100, Michal Privoznik wrote:
> The virDomainGenerateMachineName() function doesn't belong in
> src/conf/ really, because it has nothing to do with domain XML
> parsing. It landed there because of lack of better place in the
> past. But now that we have src/hypervisor/ the function should
> live there. At the same time, the function name is changed to
> match new location.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Andrea Bolognani 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  src/conf/domain_conf.c | 72 -
>  src/conf/domain_conf.h |  7 
>  src/hypervisor/domain_driver.c | 74 ++
>  src/hypervisor/domain_driver.h |  7 
>  src/libvirt_private.syms   |  2 +-
>  src/lxc/lxc_domain.c   |  3 +-
>  src/qemu/qemu_domain.c |  7 ++--
>  tests/virsystemdtest.c |  5 ++-
>  8 files changed, 91 insertions(+), 86 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH v2 02/11] qemu: Drop two layers of nesting of memoryBackingDir

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:06PM +0100, Michal Privoznik wrote:
> Initially introduced in v3.10.0-rc1~172.
> 
> When generating a path for memory-backend-file or -mem-path, qemu
> driver will use the following pattern:
> 
>   $memoryBackingDir/libvirt/qemu/$id-$shortName
> 
> where $memoryBackingDir defaults to /var/lib/libvirt/qemu/ram but
> can be overridden in qemu.conf. Anyway, the "/libvirt/qemu/" part
> looks redundant, because it's already contained in the default,
> or creates unnecessary nesting if overridden in qemu.conf.

I think this was copied from our earlier huge pages code
which used /dev/hugepages, and then added "/libvirt/qemu"
to it.

Now that we're stripping off "/libvirt/qemu", though, we're liable
to filename clashes if someone edits qemu.conf to point to
/dev/shm.

IOW, even though "/libvirt/qemu" is redundant when using our
default path, I think we need to keep it to avoid clashing
with custom paths.


Regards,
Daniel



Re: [PATCH v2 01/11] tests: Fix virQEMUDriverConfigNew() calling with respect to @root

2020-03-30 Thread Daniel P. Berrangé
On Thu, Mar 26, 2020 at 04:15:05PM +0100, Michal Privoznik wrote:
> The virQEMUDriverConfigNew() accepts path to root directory for
> embed mode as an argument. If the argument is not NULL it uses
> the passed value as prefix for some internal paths (e.g.
> cfg->libDir). If it is NULL though, it looks if the other
> argument, @privileged is true or false and generates internal
> paths accordingly. But when calling the function from the test
> suite, instead of passing NULL for @root, an empty string is
> passed. Fortunately, this doesn't create a problem because in
> both problematic cases the generated paths are "fixed" to point
> somewhere into the build dir, or the code which is tested doesn't
> access them.
> 
> Signed-off-by: Michal Privoznik 
> Reviewed-by: Andrea Bolognani 
> Reviewed-by: Daniel Henrique Barboza 
> ---
>  tests/domaincapstest.c | 2 +-
>  tests/testutilsqemu.c  | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH 3/3] virSecurityDeviceLabelDefNew: Avoid NULL dereference

2020-03-30 Thread Daniel P. Berrangé
On Mon, Mar 30, 2020 at 11:25:00AM +0200, Michal Privoznik wrote:
> While it is impossible for VIR_ALLOC() to return an error, we
> should be consistent with the rest of the code and not continue
> initializing the virSecurityDeviceLabelDef structure.
> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/util/virseclabel.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé 

Regards,
Daniel



Re: [GSoC] Interested Student for the Project: 'Introducing job control to the storage driver'

2020-03-30 Thread Prathamesh Chavan
On Sun, Mar 29, 2020 at 5:13 AM Martin Kletzander  wrote:
>
> On Wed, Mar 25, 2020 at 02:59:16PM +0100, Michal Prívozník wrote:
> >On 23. 3. 2020 10:45, Prathamesh Chavan wrote:
> >> Hi Everyone,
> >
> >Hey!
> >
> >It's nice to see somebody new interested in libvirt.
> >
> >>
> >> I'm Prathamesh Chavan, a final-year student studying Computer
> >> Science and Engineering at IIT Kharagpur. I've been part of GSoC'17
> >> under the Git Organization and have an internship experience at
> >> Nutanix, India during last summer.
> >> I'm also currently working on a tiered file system with software wear
> >> management for NVM technologies as my master's project.
> >> I was interested in contributing to the project: "Introducing job
> >> control to the storage driver" under Google Summer of Code - 2020.
> >> I was currently going through the codebase and also was successfully
> >> able to build it.
> >> It would be great if I was assigned a byte-task regarding the same
> >> which could help me understand the project better,
> >
> >There is no assignment process, just pick anything you want. Adapting to
> >GLib is pretty popular as it usually removes some lines :-) But feel
> >free to pick anything you would like.
> >
> >And for the GSoC project itself; we currently have in_use member for
> >virStorageVolDef structure which serves as a refcounter. Then we have
> >some mutexes to guard access to the counter. Whenever a 'long running'
> >storage API is called (e.g. virStorageVolWipe(),
> >virStorageVolDownload(), ...) the refcounter is increased at the
> >beginning and then decreased at the end. Then, virStorageVolDelete()
> >checks this refcounter to see if there is some other thread using it.
> >
> >But we already have something similar implemented for virDomainObj -
> >jobs. Whenever an API wants to work with a domain object [1], it shall
> >acquire a job. This job then prevents other threads from modifying the
> >object meanwhile. The threads wanting to work over the same object will
> >serialize. The whole job control is described in src/qemu/THREADS.txt so
> >I would start there.
> >
> >Michal
> >
> >1: actually, that is not entirely true. Acquiring a job is required only
> >for those APIs which want to unlock the domain object in the middle.
> >Therefore, some 'short' APIs have no job acquiring implemented (e.g.
> >qemuDomainGetInfo()). But some other 'short' APIs do (e.g.
> >qemuDomainReset()).
> >
>
> What if you have a function which changes something, or even looks up
> something, and it does not need to communicate with QEMU at all, but it
> is called when some other async job is waiting for a monitor.  I know we
> discussed this multiple times and I always forget some minor detail, so
> forgive me if I'm asking this for the 27th time already.  But shouldn't
> that function also have a job?  Another

I think the API currently allows normal jobs in the presence of an
ongoing async job (given the normal job is compatible with the async
one). And I think it was this commit (qemu: Allow all query commands to
be run during long jobs) [1] which allowed us to query (or run
synchronous jobs) while an async long-running job is going on.

Also, regarding the GSoC project, you mentioned earlier that a
reference counter is currently being used in the storage driver.
The project will rather aim to remove this reference counter and
implement a job-control routine, as was done for the domain jobs in
this commit (qemu: completely rework reference counting) [2].

I've gone through the way the domain jobs are being handled in qemu.
And right now I'm going through the storage drivers of libvirt,
particularly the way virStorageVolDef is being handled, and how its
values are read and changed.
If there is anything else I should be looking into, any suggestions
would be really helpful.

Thanks,
Prathamesh Chavan

[1]:  https://github.com/libvirt/libvirt/commit/361842881e0
[2]: https://github.com/libvirt/libvirt/commit/540c339a253




Re: [PATCH 1/3] Don't pass NULL to yajl_free()

2020-03-30 Thread Daniel P. Berrangé
On Mon, Mar 30, 2020 at 11:24:58AM +0200, Michal Privoznik wrote:
> Unfortunately, yajl_free() is not a NOP on NULL. It really does
> expect a valid pointer. Therefore, check whether the pointer we
> want to pass to it is NULL or not.
> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/util/virjson.c | 2 +-
>  tools/nss/libvirt_nss_leases.c | 3 ++-
>  tools/nss/libvirt_nss_macs.c   | 3 ++-
>  3 files changed, 5 insertions(+), 3 deletions(-)

Reviewed-by: Daniel P. Berrangé 


Regards,
Daniel



Re: [PATCH 2/3] virQEMUCapsNewBinary: Avoid NULL dereference

2020-03-30 Thread Daniel P. Berrangé
On Mon, Mar 30, 2020 at 11:24:59AM +0200, Michal Privoznik wrote:
> Even with GLib it is still possible for virQEMUCapsNew() to
> return NULL because it calls virQEMUCapsInitialize() which is a
> wrapper over pthread_once() which may fail. At least, we still
> check for its retval. If it so happens that the virQEMUCapsNew()
> fails and returns NULL, we should not dereference it.
> 
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_capabilities.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)

Reviewed-by: Daniel P. Berrangé 

Regards,
Daniel



Re: RFC: qemu: use uuid instead of name for misc filenames

2020-03-30 Thread Daniel P. Berrangé
On Sun, Mar 29, 2020 at 02:33:41PM +0300, nshirokovskiy wrote:
> 
> 
> On 26.03.2020 20:50, Daniel P. Berrangé wrote:
> > On Fri, Feb 28, 2020 at 10:09:41AM +0300, Nikolay Shirokovskiy wrote:
> >> On 27.02.2020 16:48, Daniel P. Berrangé wrote:
> >>> On Thu, Feb 27, 2020 at 03:57:04PM +0300, Nikolay Shirokovskiy wrote:
>  Hi, everyone.
> 
>   
> 
>  I'm working on supporting domain renaming when it has snapshots which is 
>  not
>  supported now. And it strikes me things will be much simplier to manage 
>  on  
>  renaming if we use uuid in filenames instead of domain names.
>   
> 
> > 
> > 
> > 
>  4. No issues with long domain names and filename length limit
> 
>   
> 
>  If the above conversion makes sense I guess the good time to apply it is 
> 
>  on domain start (and rename to support renaming with snapshots). 
> 
> >>>
> >>> The above has not considered the benefit that using the VM name
> >>> has. Essentially the UUID is good for machines, the VM name is
> >>> good for humans.  Seeing the guest XML files, or VM log files
> >>> using a filename based on UUID instead of name is a *really*
> >>> unappealing idea to me. 
> >>
> >> I agree. But we can also keep symlinks with domain names for configs/logs 
> >> etc
> >> This can be done as a separate tool as I suggested in the letter or 
> >> maintain
> >> symlinks always. The idea is failing in this symlinking won't affect daemon
> >> functionality as symlinks are for humans)
> > 
> > I've just realized that there is potential overlap between what we're
> > discussing in this thread, and in the thread about localhost migration:
> > 
> >   https://www.redhat.com/archives/libvir-list/2020-February/msg00061.html
> > 
> > In the localhost migration case, we need to be able to startup a new
> > guest with the same name as an existing guest.  The way we can achieve
> > that is by thinking of localhost migration as being a pair of domain
> > rename operations.
> > 
> > ie, consider guest "foo" we want to localhost-migrate
> > 
> >  - Start target guest "foo-incoming"
> >  - Run live migration from "foo" -> "foo-incoming"
> >  - Migration completes, CPUs stop
> >  - Rename "foo" to "foo-outgoing"
> >  - Rename "foo-incoming" to "foo"
> >  - Tidy up migration state
> >  - Destroy source guest "foo-outgoing"
> 
> I think local migration does not really fit nicely in this scheme:
> 
> - one can not treat outgoing and incoming VMs as just regular VMs, as
>   one can not put them into the same list since they have the same UUID

Yes, that is a tricky issue, but one that we need to solve, as the need
to have a completely separate list of VMs is the thing I dislike the
most about the local migration patches.

One option is to make the target VM have a different UUID by pickling
its UUID. eg have a migration UUID generated on daemon startup.
0466e1ae-a71a-4e75-89ca-c3591a4cf220.  Then XOR this migration UUID
with the source VM's UUID. So during live migration the target VM
will appear with this XOR'd UUID, and once completed, it will get
the real UUID again.

A different option is to not keep the target VM in the domain list
at all. Instead  virDomainObjPtr, could have a pointer to a second
virDomainObjPtr which stores the target VM temporarily.

> - it is not just a mere rename. In the example reflected in [1], the path
>   given by mgmt is not subjected to the rename operation. The switch
>   has to be done by local-migration-specific code.
> 
> [1] https://www.redhat.com/archives/libvir-list/2020-February/msg01026.html

That is true right now, but I'm thinking that we need to solve this
particular case, even for the domain rename operation. For a graphics
backend using UNIX socket, the socket path is almost certainly going
to be based off the guest name. So when renaming a guest we really want
to be able to rename the UNIX socket.

Thus I think that we need to introduce a new way to do UNIX sockets for
graphics, using an "autopath=yes|no" attribute, in the same way that we
have autoport=yes|no for TCP sockets.  With autopath=yes, then the path
can be changed during a domain rename operation,  and also handled for
local migration.
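A hypothetical sketch of what such XML could look like — this is NOT existing libvirt schema; the autopath attribute is only being proposed in this thread:

```xml
<!-- Illustration only: "autopath" does not exist in libvirt today.
     With autopath='yes' libvirt would own the socket path and could
     regenerate it on domain rename or local migration. -->
<graphics type='vnc'>
  <listen type='socket'
          socket='/var/lib/libvirt/qemu/domain-1-foo/vnc.sock'
          autopath='yes'/>
</graphics>
```

This mirrors the existing autoport=yes|no semantics for TCP listens, where libvirt rather than the user owns the concrete value.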

> > In both this thread and the localhost migration thread, we seem to have
> > come towards a view that symlinks are the only viable way to deal with
> > the naming problem for resources on disk that are based on VM name.
> > 
> > IOW, it would be desirable if whatever solution we take for symlink mgmt
> > will solve the localhost migration and domain rename problems at the same
> > time.
> 
> Agree, symlinks approach itself seems 

Re: [PATCH python] tox: Test with Python 3.6, 3.7 and 3.8

2020-03-30 Thread Michal Prívozník
On 29. 3. 2020 9:30, Radostin Stoyanov wrote:
> Support for Python 2.X has been dropped with commit b22e4f2.
> 
> Signed-off-by: Radostin Stoyanov 
> ---
>  tox.ini | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Michal Privoznik 

and pushed.

Michal



Re: [libvirt PATCH v3 10/12] gitlab: introduce use of ccache for speeding up rebuilds

2020-03-30 Thread Daniel P. Berrangé
On Mon, Mar 30, 2020 at 12:11:19PM +0200, Andrea Bolognani wrote:
> On Fri, 2020-03-27 at 17:20 +, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
> > > Another question is, once we start doing cascading builds, what to do
> > > with stuff like (from the .bashrc template used by lcitool)
> > 
> > I don't think we will do cascading builds in the way we've done
> > in Jenkins, because there was a lot of pointless redundancy in
> > our setup, resulting in us testing the wrong things.
> > 
> > Take the Go binding for example. Go doesn't have the same kind of
> > portability issues that C does, so testing the compile across the
> > many distros is not directly needed.  Similarly we only ever tested
> > it against the latest libvirt git master, despite the code being
> > able to compile against many older versions.
> > 
> > So the two dimensions for Go that we actually need are testing against
> > multiple Go versions, and testing against multiple libvirt versions.
> > 
> > Testing against multiple distros is a crude indirect way of testing
> > several Go versions, without us actually understanding which versions
> > we really are testing.
> 
> Agreed that we could be smarter and more comprehensive in what we
> test, especially when it comes to language bindings; at the same
> time it's useful to test against the latest codebase for the various
> dependencies, so we should make sure we don't lose that coverage.
> 
> > What we did in the Travis config for Go was much more useful in
> > what dimensions it tested:
> > 
> >   https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
> > 
> > The same applies for the other language bindings too.
> > 
> > The other reason to not try to chain up builds is that it doesn't
> > align with the forking model of contribution.  If someone does a
> > fork of the libvirt-go binding, they want to be able to run tests
> > on that in isolation. They shouldn't have to first do a fork of
> > libvirt and run build, in order to them run builds on the go
> > binding.
> 
> Of course that wouldn't be acceptable.
> 
> So far I'm aware of two approaches for chaining, one of which is
> currently in use and the other one which IIUC was prototyped but
> never actually deployed:
> 
>   * the CI job for each project includes build instructions for all
> projects it depends on, eg. the libvirt-dbus job would start by
> fetching, building and installing libvirt, then moving on to
> doing the same for libvirt-glib, then finally get to building
> and testing libvirt-dbus itself. This is the approach libosinfo
> is currently using;
> 
>   * the CI job for each project would result in a container image
> that has the same contents as the one used for building, plus
> a complete installation of the project itself, eg. the libvirt
> job would generate an image that has libvirt installed, the
> libvirt-glib job would use that image and generate one that has
> both libvirt and libvirt-glib installed, and finally libvirt-dbus
> would use this last image as build environment.
> 
> If I understand correctly, you're suggesting a third approach:
> 
>   * the CI job for each project uses an image that contains all its
> dependencies, including the ones that are maintained under the
> libvirt umbrella, installed from distro packages.
> 
> Did I get that right? Or did you have something else in mind?

I'm suggesting option 1 and/or 3, depending on the support scenario.
In the cases where the project needs to test against libvirt git master,
it should clone and build libvirt.git, and then build itself against that.
In the case where the project needs to test against existing releases in
distros, it should have container images that include the pre-built libvirt.

The Perl binding only supports building against libvirt Git, so option 1
is sufficient. The Go & Python bindings support building against historic
versions, so option 1 & 3 are both needed.

> > Where we do need chaining is to trigger these builds. ie, when
> > a libvirt changes hit master, we want to trigger pipelines in
> > any dependant projects to validate that they're not seeing a
> > regression.  GitLab has a way to configure pipeline triggers
> > to do this.
> 
> I'm not sure how this part would fit into the rest, but let's just
> ignore it for the moment O:-)

Consider the builds are self-contained. libvirt-python CI gets triggered
when a change is committed to libvirt-python.git.  We also need to have
CI triggered in libvirt-python, when a change is committed to libvirt.git, 
so we need to use the gitlab triggers for this.

> > > In some cases that'll make things easier; in other cases, you're
> > > still going to have to change the libvirt-jenkins-ci repository to
> > > eg. alter the build environment in some way, then rebuild the images
> > > and change the build steps accordingly, except instead of having
> > > changes to the 

Re: [GSoC] Interested Student for the Project: 'Introducing job control to the storage driver'

2020-03-30 Thread Michal Prívozník
On 29. 3. 2020 0:43, Martin Kletzander wrote:
> On Wed, Mar 25, 2020 at 02:59:16PM +0100, Michal Prívozník wrote:
>> On 23. 3. 2020 10:45, Prathamesh Chavan wrote:
>>> Hi Everyone,
>>
>> Hey!
>>
>> It's nice to see somebody new interested in libvirt.
>>
>>>
>>> I'm Prathamesh Chavan, a final year student studying Computer
>>> Science and Engineering at IIT Kharagpur. I've been part of GSoC'17
>>> under the Git Organization and have an internship experience at
>>> Nutanix, India during last summer.
>>> I'm also currently working on a tiered file system with software wear
>>> management for NVM technologies as my master's project.
>>> I was interested in contributing to the project: "Introducing job
>>> control to the storage driver" under Google Summer of Code - 2020.
>>> I am currently going through the codebase and have successfully
>>> built it.
>>> It would be great if I were assigned a bite-sized task, which could
>>> help me understand the project better,
>>
>> There is no assignment process, just pick anything you want. Adapting to
>> GLib is pretty popular as it usually removes some lines :-) But feel
>> free to pick anything you would like.
>>
>> And for the GSoC project itself; we currently have in_use member for
>> virStorageVolDef structure which serves as a refcounter. Then we have
>> some mutexes to guard access to the counter. Whenever a 'long running'
>> storage API is called (e.g. virStorageVolWipe(),
>> virStorageVolDownload(), ...) the refcounter is increased at the
>> beginning and then decreased at the end. Then, virStorageVolDelete()
>> checks this refcounter to see if there is some other thread using it.
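A rough, self-contained sketch of the in_use refcounter pattern described above; the names and structure are illustrative stand-ins, not libvirt's actual virStorageVolDef code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch of a mutex-guarded refcounter: long-running
 * APIs bump the counter, and delete is refused while it is non-zero. */
typedef struct {
    pthread_mutex_t lock;
    unsigned int inUse;   /* number of long-running APIs using the volume */
} DemoVol;

static void demoVolUseBegin(DemoVol *vol)
{
    pthread_mutex_lock(&vol->lock);
    vol->inUse++;                       /* e.g. start of a wipe/download */
    pthread_mutex_unlock(&vol->lock);
}

static void demoVolUseEnd(DemoVol *vol)
{
    pthread_mutex_lock(&vol->lock);
    vol->inUse--;
    pthread_mutex_unlock(&vol->lock);
}

/* Delete succeeds only if no other thread is using the volume. */
static bool demoVolDelete(DemoVol *vol)
{
    bool ok;
    pthread_mutex_lock(&vol->lock);
    ok = (vol->inUse == 0);
    pthread_mutex_unlock(&vol->lock);
    return ok;
}
```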
>>
>> But we already have something similar implemented for virDomainObj -
>> jobs. Whenever an API wants to work with a domain object [1], it shall
>> acquire a job. This job then prevents other threads from modifying the
>> object in the meantime. Threads wanting to work on the same object will
>> serialize. The whole job control is described in src/qemu/THREADS.txt so
>> I would start there.
>>
>> Michal
>>
>> 1: actually, that is not entirely true. Acquiring a job is required only
>> for those APIs which want to unlock the domain object in the middle.
>> Therefore, some 'short' APIs have no job acquiring implemented (e.g.
>> qemuDomainGetInfo()). But some other 'short' APIs do (e.g.
>> qemuDomainReset()).
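For contrast, a minimal sketch of the job-acquisition pattern from src/qemu/THREADS.txt that the project would port to the storage driver; again simplified and hypothetical, not the real libvirt implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch of qemu-driver-style job control: only one thread
 * may hold the job on an object at a time; others block on a condition
 * variable until it is released. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t jobCond;
    bool jobActive;
} DemoObj;

static void demoBeginJob(DemoObj *obj)
{
    pthread_mutex_lock(&obj->lock);
    while (obj->jobActive)                  /* serialize with other jobs */
        pthread_cond_wait(&obj->jobCond, &obj->lock);
    obj->jobActive = true;
    pthread_mutex_unlock(&obj->lock);       /* the object lock may now be
                                             * dropped while the job runs */
}

static void demoEndJob(DemoObj *obj)
{
    pthread_mutex_lock(&obj->lock);
    obj->jobActive = false;
    pthread_cond_signal(&obj->jobCond);     /* wake one waiting thread */
    pthread_mutex_unlock(&obj->lock);
}
```

The key difference from the refcounter is that a job gives exclusive ownership for the duration of the API, so the object lock can safely be dropped in the middle.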
>>
> 
> What if you have a function which changes something, or even looks up
> something, and it does not need to communicate with QEMU at all, but it
> is called when some other async job is waiting for a monitor.  I know we
> discussed this multiple times and I always forget some minor detail, so
> forgive me if I'm asking this for the 27th time already.  But shouldn't
> that function also have a job?

Oh yeah, I wasn't trying to be exhaustive and cover all corner cases
when introducing jobs to somebody who hears about the concept for the
first time. If a function modifies internal state then it should grab
_MODIFY job. This is, as you correctly say, to ensure mutual exclusion
with another thread that is doing some other modification.

But frankly, I don't remember all the details. I recall discussing
whether each API should grab a job, but I just don't remember the details.

> Another question is if this should also be the case for just reading
> and returning some info about the domain: shouldn't it also take a job,
> even if it is just a QEMU_JOB_QUERY?  Otherwise we'd have to make sure
> that before each unlock of a domain (even during a job) all the data
> integrity is kept.

I'm not completely sure what you are asking here, or how it relates to
the storage driver.

Michal



Re: [PATCH v2 4/4] qemu: add support for 'multidevs' option

2020-03-30 Thread Christian Schoenebeck
On Friday, 27 March 2020 18:16:37 CEST Ján Tomko wrote:
> On a Friday in 2020, Christian Schoenebeck wrote:
> >This option prevents misbehaviours on guest if a qemu 9pfs export
> >contains multiple devices, due to the potential file ID collisions
> >this otherwise may cause.
> >
> >Signed-off-by: Christian Schoenebeck 
> >---
> >
> > src/qemu/qemu_command.c |  7 +++
> > src/qemu/qemu_domain.c  | 12 
> > 2 files changed, 19 insertions(+)
> 
> A change to qemuxml2argvtest is needed.
> 
> If you already added the XML file to qemuxml2argvdata in the previous
> commit, all you need to do to generate the output file is:
> * add a new DO_TEST_CAPS_LATEST line to qemuxml2argvtest.c

Looking at the existing test cases, it probably makes sense to extend 
qemuxml2argvdata/virtio-options.xml instead of adding a separate XML file, 
that is:

diff --git a/tests/qemuxml2argvdata/virtio-options.xml b/tests/qemuxml2argvdata/virtio-options.xml
index dd9a4f4a01..7ea624dfad 100644
--- a/tests/qemuxml2argvdata/virtio-options.xml
+++ b/tests/qemuxml2argvdata/virtio-options.xml
@@ -47,6 +47,21 @@
   
   
 
+
+  
+  
+  
+
+
+  
+  
+  
+
+
+  
+  
+  
+
 
   
   

> * run the test with VIR_TEST_REGENERATE_OUTPUT=1

Ah, that's the one! :) I was already wondering whether other people are
auto-generating the files I have been changing manually so far.

I'll post a rebased v3 with the things discussed so far. Thanks Ján!

Best regards,
Christian Schoenebeck





Re: [libvirt PATCH v3 10/12] gitlab: introduce use of ccache for speeding up rebuilds

2020-03-30 Thread Andrea Bolognani
On Fri, 2020-03-27 at 17:20 +, Daniel P. Berrangé wrote:
> On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
> > Another question is, once we start doing cascading builds, what to do
> > with stuff like (from the .bashrc template used by lcitool)
> 
> I don't think we will do cascading builds in the way we've done
> in Jenkins, because there was a lot of pointless redundancy in
> our setup, resulting in us testing the wrong things.
> 
> Take the Go binding for example. Go doesn't have the same kind of
> portability issues that C does, so testing the compile across the
> many distros is not directly needed.  Similarly we only ever tested
> it against the latest libvirt git master, despite the code being
> able to compile against many older versions.
> 
> So the two dimensions for Go that we actually need are testing against
> multiple Go versions, and testing against multiple libvirt versions.
> 
> Testing against multiple distros is a crude indirect way of testing
> several Go versions, without us actually understanding which versions
> we really are testing.

Agreed that we could be smarter and more comprehensive in what we
test, especially when it comes to language bindings; at the same
time it's useful to test against the latest codebase for the various
dependencies, so we should make sure we don't lose that coverage.

> What we did in the Travis config for Go was much more useful in
> what dimensions it tested:
> 
>   https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
> 
> The same applies for the other language bindings too.
> 
> The other reason to not try to chain up builds is that it doesn't
> align with the forking model of contribution.  If someone does a
> fork of the libvirt-go binding, they want to be able to run tests
> on that in isolation. They shouldn't have to first do a fork of
> libvirt and run a build, in order to then run builds on the Go
> binding.

Of course that wouldn't be acceptable.

So far I'm aware of two approaches for chaining, one of which is
currently in use and the other one which IIUC was prototyped but
never actually deployed:

  * the CI job for each project includes build instructions for all
projects it depends on, eg. the libvirt-dbus job would start by
fetching, building and installing libvirt, then moving on to
doing the same for libvirt-glib, then finally get to building
and testing libvirt-dbus itself. This is the approach libosinfo
is currently using;

  * the CI job for each project would result in a container image
that has the same contents as the one used for building, plus
a complete installation of the project itself, eg. the libvirt
job would generate an image that has libvirt installed, the
libvirt-glib job would use that image and generate one that has
both libvirt and libvirt-glib installed, and finally libvirt-dbus
would use this last image as build environment.

If I understand correctly, you're suggesting a third approach:

  * the CI job for each project uses an image that contains all its
dependencies, including the ones that are maintained under the
libvirt umbrella, installed from distro packages.

Did I get that right? Or did you have something else in mind?

> Where we do need chaining is to trigger these builds. ie, when
> a libvirt changes hit master, we want to trigger pipelines in
> any dependant projects to validate that they're not seeing a
> regression.  GitLab has a way to configure pipeline triggers
> to do this.

I'm not sure how this part would fit into the rest, but let's just
ignore it for the moment O:-)

> > In some cases that'll make things easier; in other cases, you're
> > still going to have to change the libvirt-jenkins-ci repository to
> > eg. alter the build environment in some way, then rebuild the images
> > and change the build steps accordingly, except instead of having
> > changes to the build environment and build recipe appear as two
> > subsequent commits in the same repository, now they will be dozens
> > of commits spread across as many repositories.
> 
> Eventually I'd like to get the container image builds into the main
> repos too.  ie instead of libvirt-dockerfiles.git, we should commit
> the dockerfiles into each project's git repo. The GitLab CI job can
> generate (and cache) the container images directly, avoiding a need
> for us to send builds via quay.io separately.

This will again result in the situation where a single update to
lcitool might result in a couple dozen commits to a couple dozen
repositories, but since it will be entirely mechanical and likely
fall under the same "Dockerfile update rule" as pushes to the
libvirt-dockerfiles repo currently fall under, I think it should
be reasonably manageable.

Will the container images built this way be made available outside of
the GitLab CI infrastructure? We still want people to be able to
run 'make ci-build@...' locally.

Will the GitLab registry allow us to store a lot 

Re: [PATCH 2/3] qemuBlockStorageSourceGetURI: Properly preserve query component

2020-03-30 Thread Jiri Denemark
On Fri, Mar 27, 2020 at 16:51:20 +0100, Peter Krempa wrote:
> Look for the questionmark in the name and move the contents into the
> query portion so that we format the URI back properly.
> 
> Signed-off-by: Peter Krempa 
> ---
>  src/qemu/qemu_block.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
> index 648c3f1026..c51fb2b6ea 100644
> --- a/src/qemu/qemu_block.c
> +++ b/src/qemu/qemu_block.c
> @@ -429,12 +429,18 @@ qemuBlockStorageSourceGetURI(virStorageSourcePtr src)
>  }
> 
>  if (src->path) {
> +char *query;
>  if (src->volume) {
>  uri->path = g_strdup_printf("/%s/%s", src->volume, src->path);
>  } else {
>  uri->path = g_strdup_printf("%s%s", src->path[0] == '/' ? "" : "/",
>  src->path);
>  }
> +
> +if ((query = strchr(uri->path, '?'))) {
> +uri->query = g_strdup(query + 1);
> +*query = '\0';
> +}

This would be quite fragile, I'm afraid. We should only do this when we
know src->path came from a URI with a query string; otherwise we could
match a random '?' that is simply part of a file's name.
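A tiny standalone illustration of the ambiguity Jirka points out, using hypothetical paths rather than libvirt code: splitting on the first '?' works for a genuine URI query, but silently truncates a file whose name happens to contain '?'.

```c
#include <assert.h>
#include <string.h>

/* Split a path at the first '?' in place, returning the query part
 * (or NULL). Mirrors the strchr() approach from the patch, in isolation. */
static char *split_query(char *path)
{
    char *query = strchr(path, '?');
    if (!query)
        return NULL;
    *query = '\0';
    return query + 1;
}
```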

Jirka



[PATCH 2/3] virQEMUCapsNewBinary: Avoid NULL dereference

2020-03-30 Thread Michal Privoznik
Even with GLib it is still possible for virQEMUCapsNew() to
return NULL because it calls virQEMUCapsInitialize() which is a
wrapper over pthread_once() which may fail. At least, we still
check for its retval. If it so happens that the virQEMUCapsNew()
fails and returns NULL, we should not dereference it.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_capabilities.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index a95a60c36a..3afe8a7b2c 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -1713,7 +1713,8 @@ virQEMUCapsNewBinary(const char *binary)
 {
 virQEMUCapsPtr qemuCaps = virQEMUCapsNew();
 
-qemuCaps->binary = g_strdup(binary);
+if (qemuCaps)
+qemuCaps->binary = g_strdup(binary);
 
 return qemuCaps;
 }
-- 
2.24.1



[PATCH 1/3] Don't pass NULL to yajl_free()

2020-03-30 Thread Michal Privoznik
Unfortunately, yajl_free() is not a NOP on NULL. It really does
expect a valid pointer. Therefore, check whether the pointer we
want to pass to it is NULL or not.

Signed-off-by: Michal Privoznik 
---
 src/util/virjson.c | 2 +-
 tools/nss/libvirt_nss_leases.c | 3 ++-
 tools/nss/libvirt_nss_macs.c   | 3 ++-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/src/util/virjson.c b/src/util/virjson.c
index be472d49e4..dc662bf8e9 100644
--- a/src/util/virjson.c
+++ b/src/util/virjson.c
@@ -1838,7 +1838,7 @@ virJSONValueFromString(const char *jsonstring)
 if (!hand) {
 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Unable to create JSON parser"));
-goto cleanup;
+return NULL;
 }
 
 /* Yajl 2 is nice enough to default to rejecting trailing garbage. */
diff --git a/tools/nss/libvirt_nss_leases.c b/tools/nss/libvirt_nss_leases.c
index 7c431e4d53..015bbc4ab6 100644
--- a/tools/nss/libvirt_nss_leases.c
+++ b/tools/nss/libvirt_nss_leases.c
@@ -426,7 +426,8 @@ findLeases(const char *file,
 *addrs = NULL;
 *naddrs = 0;
 }
-yajl_free(parser);
+if (parser)
+yajl_free(parser);
 free(parserState.entry.ipaddr);
 free(parserState.entry.macaddr);
 free(parserState.entry.hostname);
diff --git a/tools/nss/libvirt_nss_macs.c b/tools/nss/libvirt_nss_macs.c
index 05d096a348..d4b165eef6 100644
--- a/tools/nss/libvirt_nss_macs.c
+++ b/tools/nss/libvirt_nss_macs.c
@@ -278,7 +278,8 @@ findMACs(const char *file,
 *macs = NULL;
 *nmacs = 0;
 }
-yajl_free(parser);
+if (parser)
+yajl_free(parser);
 for (i = 0; i < parserState.entry.nmacs; i++)
 free(parserState.entry.macs[i]);
 free(parserState.entry.macs);
-- 
2.24.1



[PATCH 3/3] virSecurityDeviceLabelDefNew: Avoid NULL dereference

2020-03-30 Thread Michal Privoznik
While it is impossible for VIR_ALLOC() to return an error, we
should be consistent with the rest of the code and not continue
initializing the virSecurityDeviceLabelDef structure.

Signed-off-by: Michal Privoznik 
---
 src/util/virseclabel.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/util/virseclabel.c b/src/util/virseclabel.c
index a2b5ebf6b7..2141d84210 100644
--- a/src/util/virseclabel.c
+++ b/src/util/virseclabel.c
@@ -77,7 +77,7 @@ virSecurityDeviceLabelDefNew(const char *model)
 
 if (VIR_ALLOC(seclabel) < 0) {
 virSecurityDeviceLabelDefFree(seclabel);
-seclabel = NULL;
+return NULL;
 }
 
 seclabel->model = g_strdup(model);
-- 
2.24.1



[PATCH 0/3] Random almost trivial fixes

2020-03-30 Thread Michal Privoznik
*** BLURB HERE ***

Michal Prívozník (3):
  Don't pass NULL to yajl_free()
  virQEMUCapsNewBinary: Avoid NULL dereference
  virSecurityDeviceLabelDefNew: Avoid NULL dereference

 src/qemu/qemu_capabilities.c   | 3 ++-
 src/util/virjson.c | 2 +-
 src/util/virseclabel.c | 2 +-
 tools/nss/libvirt_nss_leases.c | 3 ++-
 tools/nss/libvirt_nss_macs.c   | 3 ++-
 5 files changed, 8 insertions(+), 5 deletions(-)

-- 
2.24.1



Re: [libvirt PATCH] tools: explain that '^' means 'Ctrl' for console escape sequence

2020-03-30 Thread Daniel P . Berrangé
On Fri, Mar 27, 2020 at 08:40:31PM +0100, Andrea Bolognani wrote:
> On Fri, 2020-03-27 at 16:52 +0100, Michal Prívozník wrote:
> > On 27. 3. 2020 16:42, Daniel P. Berrangé wrote:
> > > > > +if (priv->escapeChar[0] == '^') {
> > > > > +vshPrintExtra(ctl, " (Ctrl + %c)", priv->escapeChar[1]);
> > > > > +}
> > > 
> > > I'll remove the {} to keep syntax-check happy about single line "if"
> > 
> > Speaking of which. That exception, while it might have served a
> > purpose in the past, now that the code base has grown in size I think
> > the fewer exceptions to formatting style we have the better. So maybe
> > we can start thinking about requiring braces for every if()? Just a
> > Friday evening thought.
> 
> I'm all for it.

Braces (or not) around single-line ifs is not something I'd suggest we
spend any time changing in isolation, as it is disruptive for no clear
win.
 
> We should really just figure out a clang-format configuration that
> approximates reasonably well our current code style, throw out
> every exception that can't be represented, and just tell people to
> run the tool before submitting code.
> 
> Incidentally, that's the approach both Rust and Go have followed
> since the very beginning.

Yeah, I think it is really a good approach, as it removes any kind of
ambiguity. A global switch to clang-format is the time at which we could
consider changing single line if brace policy.

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



Re: [libvirt PATCH] docs: Clarify semantics of model/@usable attribute in dom caps

2020-03-30 Thread Andrea Bolognani
On Fri, 2020-03-27 at 21:17 +0100, Jiri Denemark wrote:
> The documentation could confuse people to expect that CPU models with
> usable='no' attribute are not usable at all on the current host. In
> fact, they can only be used if some features are explicitly disabled.
> 
> Signed-off-by: Jiri Denemark 
> ---
>  docs/formatdomaincaps.html.in | 8 +---
>  1 file changed, 5 insertions(+), 3 deletions(-)

Reviewed-by: Andrea Bolognani 

and safe for freeze.

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [PATCH] util: virdaemon: fix compilation on mingw

2020-03-30 Thread Michal Prívozník
On 27. 3. 2020 18:40, Rafael Fonseca wrote:
> The daemons are not supported on Win32 and therefore were not compiled
> on that platform. However, with the daemon code sharing, all the code in
> utils *is* compiled and it failed because `waitpid`, `fork`, and
> `setsid` are not available. So, as before, let's not build them on
> Win32 and make the code more portable by using existing vir* wrappers.
> 
> Signed-off-by: Rafael Fonseca 
> ---
>  src/util/virdaemon.c | 46 +++-
>  1 file changed, 41 insertions(+), 5 deletions(-)

Reviewed-by: Michal Privoznik 

and pushed.

Michal



Re: GSoC'20 Interested Student: Adding support to Jailhouse Hypervisor

2020-03-30 Thread PRAKHAR BANSAL
Hi Jan,

On Sat, Mar 28, 2020 at 4:12 AM Jan Kiszka  wrote:

> On 28.03.20 08:47, PRAKHAR BANSAL wrote:
> > Hi Jan,
> >
> > Thanks for the reply!
> >
> > I was only considering the command-line tool "code" for reference to the
> > Jailhouse kernel API(ioctl calls) because I didn't find a documentation
> > of the Jailhouse kernel APIs.
>
> Right, the IOCTL API is not documented so far. It is currently only used
> inside the Jailhouse project. This needs to be formalized when there
> shall be external users like a libvirt driver.
>
> That might be a nice small contribution task: Create some
> Documentation/driver-interfaces.md that describes the IOCTLs along with
> their parameter structures and that also includes the current
> sysfs-entries.txt as a section. Then send this as patch here. I'll help
> out when details are not clear from reading the code.
>
>   Sure. I will do that.

> >
> > For the second part as you mentioned that Jailhouse can only create
> > cells with the constraints defined in the root cell configuration. I
> > have a few questions regarding that.
> >
> > 1. Is there a way to know if Jailhouse is enabled on the host and get
> > the root cell configuration(s) from Jailhouse through an API? This can
> > be used while binding the libvirt to the Jailhouse hypervisor.
>
> Look at
>
> https://github.com/siemens/jailhouse/blob/master/Documentation/sysfs-entries.txt
> for what is reported as runtime information. Full configurations can't
> be read back at this point. This might be reconsidered in the light of
> [1], but I wouldn't plan for that yet.
>

  Ok, sure. I am looking into it.

>
> >
> > 2. If Jailhouse is not enabled(again can we know this using some API)
> > then, can libvirt enable/disable Jailhouse during the libvirt binding of
> > the Jailhouse driver with a given set of Jailhouse cell configurations
> > describing a complete partitioned system?
>
> With the API above and a given configuration set, yes. The config set
> would have to be provided to the libvirt driver in some to-be-defined
> way (e.g. /etc/libvirt/jailhouse.conf -> /etc/libvirt/jailhouse/*.cell).
>
>  Cool, got it. Thanks!


> >
> > 3. I was wondering, as you mentioned that libvirt driver should check
> > for mismatch of the cell configuration with the root cell configuration,
> > the question is, isn't that done by Jailhouse itself? If yes, then
> > libvirt can just pass on the cell creation requests to Jailhouse and
> > return the response to the user as it is, rather than driver doing any
> > such mismatch check.
>
> With matching I'm referring to a libvirt user request like "create a
> domain with 2 CPUs", while there are no cells left that have more than
> one CPU. Or "give the domain 1G RAM", and you need to find an available
> cell with that much memory. Those are simple examples. A request that
> states "connect the domain to the host network A" implies that a cell
> has a shared-memory link to, say, the root cell that can be configured
> to bridge this. But let's keep that for later and start as simple as
> possible.
>

  Do I need to match the libvirt user-requested cell config with only root
cells or with all cells present at that time?

I wanted to ask you a favor, as the proposal deadline is approaching.
Could I prepare a proposal for this project based on our discussion here
and improve it later based on feedback after the deadline? I understand
that I started late on the project search and selection.

Thanks,
Prakhar


>
> Jan
>
> [1]
>
> https://groups.google.com/d/msgid/jailhouse-dev/CADiTV-1QiRhSWZnw%2BkHhJMO-BoA4sAcOmTkQE7ZWbHkGh3Jexw%40mail.gmail.com
>
> >
> > -Prakhar
> >
> > On Thu, Mar 26, 2020 at 1:49 AM Jan Kiszka  > > wrote:
> >
> > Hi Prakhar,
> >
> > On 25.03.20 05:36, PRAKHAR BANSAL wrote:
> >  > Hi Jan,
> >  >
> >  > Thanks for the reply. I looked deeper into the libvirt and
> Jailhouse
> >  > source code and found following two things that seem relevant to
> the
> >  > project I am interested in.
> >  >
> >  > - Libvirt driver interface at [libvirt.git]
> >  >  / src
> >  >  /
> > driver.h
> >  >
> > <
> https://libvirt.org/git/?p=libvirt.git;a=blob_plain;f=src/driver.h;hb=HEAD
> >
> >  > - Jailhouse tool, which is using the ioctl API of the Jailhouse,
> >  > available at
> >  >
> https://github.com/siemens/jailhouse/blob/master/tools/jailhouse.c.
> >  >
> >  > With the help of the above two, it looks like, a libvirt driver
> > for the
> >  > Jailhouse can be implemented. Let me know if I am moving in the
> right
> >  > direction so far.
> >
> >   From the Jailhouse perspective, it is important to not consider the
> > command line tool an interface anymore (like in the first prototype)
> but
> > build on top of the 

Re: On the need to move to a merge request workflow

2020-03-30 Thread Andrea Bolognani
On Fri, 2020-03-27 at 14:15 +0100, Peter Krempa wrote:
> - review is terrible
> - threads are only single level
> - responses can't be quoted (okay, they can, but you have to copy and
>   paste what you want to quote and then select that it's a quote)

FYI, you can just select the part you're interested in and then
hit 'r'.

-- 
Andrea Bolognani / Red Hat / Virtualization