On 7/17/19 7:20 PM, Daniel Henrique Barboza wrote:
After this commit, QEMU domains with PCI hostdevs running with
managed=true started to fail to launch with the following error:
error : virProcessRunInFork:1170 : internal error: child reported (status=125):
unable to open /dev/vfio/1: Device
On Wed, 2019-07-17 at 18:14 +0100, Daniel P. Berrangé wrote:
> On Wed, Jul 17, 2019 at 03:49:12PM +0200, Andrea Bolognani wrote:
> > If we've been asked not to produce any output, we can bail
> > early: doing so means we don't need to increase indentation
> > for subsequent code, and in some cases
On 7/18/19 8:46 AM, Michal Privoznik wrote:
On 7/17/19 7:20 PM, Daniel Henrique Barboza wrote:
After this commit, QEMU domains with PCI hostdevs running with
managed=true started to fail to launch with the following error:
error : virProcessRunInFork:1170 : internal error: child reported
On Wed, Jul 17, 2019 at 05:38:51PM -0400, Cole Robinson wrote:
> On 7/17/19 12:49 PM, Laine Stump wrote:
> > On 7/14/19 8:03 PM, Cole Robinson wrote:
> >> There are several unresolved RFEs for the bridge
> >> driver that are essentially requests to add XML wrappers
> >> for underlying dnsmasq
See 2/2 for explanation.
Michal Prívozník (2):
virSecurityManagerMetadataLock: Expand the comment on deadlocks
virSecurityManagerMetadataLock: Skip over duplicate paths
src/security/security_manager.c | 28 ++--
1 file changed, 26 insertions(+), 2 deletions(-)
--
If there are two identical paths on the list, we need to lock
the path only once, because when we try to lock it the second time
open() fails. And even if it didn't, locking it the second time
would fail for sure. After all, it is sufficient to lock each
path just once to satisfy the caller.
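For illustration, a minimal sketch of that approach, assuming the paths are
sorted first so that duplicates end up adjacent; lock_one() is a hypothetical
stand-in for the open()-and-lock step, not the real libvirt helper:

#include <stdlib.h>
#include <string.h>

static int lock_one(const char *path)
{
    /* hypothetical stand-in for the real open() + lock step */
    (void)path;
    return 0;
}

static int cmppaths(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

static int lock_all_paths(const char **paths, size_t npaths)
{
    size_t i;

    qsort(paths, npaths, sizeof(*paths), cmppaths);

    for (i = 0; i < npaths; i++) {
        /* same path as the previous entry: it is already locked */
        if (i > 0 && strcmp(paths[i], paths[i - 1]) == 0)
            continue;
        if (lock_one(paths[i]) < 0)
            return -1;
    }
    return 0;
}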
Document why we need to sort paths while it's still fresh in my
memory.
Signed-off-by: Michal Privoznik
---
src/security/security_manager.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/security/security_manager.c b/src/security/security_manager.c
index
On Thu, Jul 18, 2019 at 11:14:48AM +0200, Michal Privoznik wrote:
> Document why we need to sort paths while it's still fresh in my
> memory.
>
> Signed-off-by: Michal Privoznik
> ---
> src/security/security_manager.c | 7 ++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
Reviewed-by:
On Thu, Jul 18, 2019 at 11:14:49AM +0200, Michal Privoznik wrote:
> If there are two identical paths on the list, we need to lock
> the path only once, because when we try to lock it the second time
> open() fails. And even if it didn't, locking it the second time
> would fail for sure. After
On 7/18/19 5:44 AM, Julio Faracco wrote:
This commit is similar to 596aa144. It fixes an uninitialized
variable to avoid a garbage value. In this case, it uses time 't' = 0 if
an error occurs in virTimeMillisNowRaw.
Signed-off-by: Julio Faracco
---
src/util/virtime.c | 2 +-
1 file changed, 1
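For context, a sketch of the pattern such a fix usually takes (assumed shape
only, not the actual hunk in virtime.c; now_millis() stands in for
virTimeMillisNowRaw()):

#include <stdio.h>

static int now_millis(unsigned long long *nowms)
{
    (void)nowms;
    return -1;   /* pretend the clock query failed */
}

static void report_time(void)
{
    unsigned long long now = 0;   /* initialized: no garbage if the call fails */

    if (now_millis(&now) < 0) {
        /* error path: 'now' stays a well-defined 0, as described above */
    }

    printf("t=%llu\n", now);
}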
On Mon, Jul 15, 2019 at 04:27:04PM +0200, Michal Privoznik wrote:
On 7/15/19 1:07 PM, Erik Skultety wrote:
On Wed, Jun 19, 2019 at 01:50:14PM +0200, Michal Privoznik wrote:
On 6/19/19 12:59 PM, Erik Skultety wrote:
On Tue, Jun 18, 2019 at 03:46:15PM +0200, Michal Privoznik wrote:
There are
Signed-off-by: Ilias Stamatis
---
src/test/test_driver.c | 131 +
1 file changed, 131 insertions(+)
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index fcb80c9e47..2907c043cb 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
On 7/18/19 5:48 AM, Michal Privoznik wrote:
On 7/18/19 8:46 AM, Michal Privoznik wrote:
On 7/17/19 7:20 PM, Daniel Henrique Barboza wrote:
After this commit, QEMU domains with PCI hostdevs running with
managed=true started to fail to launch with the following error:
error :
Patchew URL: https://patchew.org/QEMU/20190717173937.18747-1-js...@redhat.com/
Hi,
This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
#!/bin/bash
make
Signed-off-by: Ilias Stamatis
---
src/test/test_driver.c | 24
1 file changed, 24 insertions(+)
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 2e33a9dd55..90e1ede7c4 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -6716,6 +6716,29
Thanks for the quick patches. They fixed the problem I was having
with a domain using a PCI multifunction card, 4 hostdevs that
belong to IOMMU group 1. The problem was triggered when QEMU
tried to open the path /dev/vfio/1 for the second time.
For both patches:
Tested-by: Daniel Henrique Barboza
On 7/18/19 11:28 AM, Daniel P. Berrangé wrote:
On Thu, Jul 18, 2019 at 11:14:49AM +0200, Michal Privoznik wrote:
If there are two identical paths on the list, we need to lock
the path only once, because when we try to lock it the second time
open() fails. And even if it didn't, locking it the
On Tue, Jun 18, 2019 at 03:46:14PM +0200, Michal Privoznik wrote:
Yet again, a couple of patches I have on my local branch for the feature
I'm working on, which can go separately.
Michal Prívozník (2):
virsh-completer: Separate comma list construction into a function
tools: Introduce
On Fri, 2019-05-03 at 14:51 +0200, Andrea Bolognani wrote:
> Signed-off-by: Andrea Bolognani
> ---
> .gitpublish | 4
> 1 file changed, 4 insertions(+)
> create mode 100644 .gitpublish
Now pushed under the trivial rule.
--
Andrea Bolognani / Red Hat / Virtualization
The libosinfo project would like to start using these container
images in their own CI pipeline, so they need the corresponding
build dependencies to be included.
Signed-off-by: Andrea Bolognani
---
refresh | 5 +
1 file changed, 5 insertions(+)
diff --git a/refresh b/refresh
index
See patch 2/3 for more information.
Andrea Bolognani (3):
refresh: Store projects in a more convenient format
refresh: Add libosinfo-related projects
Refresh after adding libosinfo-related projects
buildenv-centos-7.Dockerfile | 11 +
Signed-off-by: Andrea Bolognani
---
refresh | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/refresh b/refresh
index 1db0fc8..81df77e 100755
--- a/refresh
+++ b/refresh
@@ -58,12 +58,16 @@ class Dockerfile:
self.os = stem
self.cross_arch
Signed-off-by: Andrea Bolognani
---
buildenv-centos-7.Dockerfile | 11 +
buildenv-debian-10-cross-aarch64.Dockerfile | 15
buildenv-debian-10-cross-armv6l.Dockerfile| 15
buildenv-debian-10-cross-armv7l.Dockerfile| 15
On Thu, 2019-07-18 at 14:20 +0200, Andrea Bolognani wrote:
> The libosinfo project would like to start using these container
> images in their own CI pipeline, so they need the corresponding
> build dependencies to be included.
>
> Signed-off-by: Andrea Bolognani
> ---
> refresh | 5 +
> 1
Andrea,
On Thu, 2019-07-18 at 14:20 +0200, Andrea Bolognani wrote:
> See patch 2/3 for more information.
>
> Andrea Bolognani (3):
> refresh: Store projects in a more convenient format
> refresh: Add libosinfo-related projects
> Refresh after adding libosinfo-related projects
>
>
On Thu, Jul 18, 2019 at 02:20:34PM +0200, Andrea Bolognani wrote:
> The libosinfo project would like to start using these container
> images in their own CI pipeline, so they need the corresponding
> build dependencies to be included.
>
> Signed-off-by: Andrea Bolognani
> ---
> refresh | 5
On 7/18/19 8:30 AM, Michal Privoznik wrote:
On 7/18/19 11:28 AM, Daniel P. Berrangé wrote:
On Thu, Jul 18, 2019 at 11:14:49AM +0200, Michal Privoznik wrote:
If there are two identical paths on the list, we need to lock
the path only once, because when we try to lock it the second time
On Thu, 2019-07-18 at 13:35 +0100, Daniel P. Berrangé wrote:
> On Thu, Jul 18, 2019 at 02:20:34PM +0200, Andrea Bolognani wrote:
> > self.projects = [
> > +"libosinfo",
> > "libvirt",
> > +"osinfo-db",
> > +"osinfo-db-tools",
> > ]
On Sun, Jul 14, 2019 at 08:03:59PM -0400, Cole Robinson wrote:
Just the plumbing, no real implementation yet
Signed-off-by: Cole Robinson
---
src/conf/network_conf.c | 22 --
src/conf/network_conf.h | 21 -
src/network/bridge_driver.c | 2 +-
3
On Sun, Jul 14, 2019 at 08:04:00PM -0400, Cole Robinson wrote:
This maps to XML like:
...
To dnsmasq config options
...
foo=bar
cname=*.foo.example.com,master.example.com
Signed-off-by: Cole Robinson
---
docs/schemas/network.rng | 11 ++
virCommandMassCloseGetFDsLinux fails when running libvirtd under valgrind
with the following message:
libvirt: error : internal error: unable to set FD as open: 1024
This is because valgrind opens a few file descriptors beyond the limit:
65701125 lr-x--. 1 root root 64 Jul 18 14:48 1024 ->
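A rough sketch of the scenario with hypothetical names (the real
virCommandMassCloseGetFDsLinux differs in detail): a bitmap sized to the FD
limit is filled from /proc/self/fd, and descriptors that valgrind opens above
that limit have to be skipped rather than treated as an error:

#include <dirent.h>
#include <stdlib.h>

static int collect_open_fds(unsigned char *bitmap, size_t limit)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *ent;

    if (!dir)
        return -1;

    while ((ent = readdir(dir))) {
        char *end;
        long fd = strtol(ent->d_name, &end, 10);

        if (end == ent->d_name || *end != '\0' || fd < 0)
            continue;                 /* "." and ".." entries */
        if ((size_t)fd >= limit)
            continue;                 /* e.g. FDs valgrind opened above the limit */
        bitmap[fd / 8] |= 1u << (fd % 8);
    }

    closedir(dir);
    return 0;
}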
On Thu, Jul 18, 2019 at 03:05:16PM +0200, Andrea Bolognani wrote:
> On Thu, 2019-07-18 at 13:35 +0100, Daniel P. Berrangé wrote:
> > On Thu, Jul 18, 2019 at 02:20:34PM +0200, Andrea Bolognani wrote:
> > > self.projects = [
> > > +"libosinfo",
> > > "libvirt",
> >
There are a couple of functions that are meant to be exposed but
are missing the syms file adjustment.
Signed-off-by: Michal Privoznik
---
Pushed under trivial rule.
src/libvirt_private.syms | 5 +
1 file changed, 5 insertions(+)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
On 7/18/19 3:12 PM, Peter Krempa wrote:
virCommandMassCloseGetFDsLinux fails when running libvirtd under valgrind
with the following message:
libvirt: error : internal error: unable to set FD as open: 1024
This is because valgrind opens a few file descriptors beyond the limit:
65701125 lr-x--.
On Thu, Jul 18, 2019 at 11:36:27AM +0200, Martin Kletzander wrote:
> On Mon, Jul 15, 2019 at 04:27:04PM +0200, Michal Privoznik wrote:
> > On 7/15/19 1:07 PM, Erik Skultety wrote:
> > > On Wed, Jun 19, 2019 at 01:50:14PM +0200, Michal Privoznik wrote:
> > > > On 6/19/19 12:59 PM, Erik Skultety
On Tue, Jun 18, 2019 at 03:46:16PM +0200, Michal Privoznik wrote:
> This is a very simple completer for completing --cap argument of
> nodedev-list command.
>
> Signed-off-by: Michal Privoznik
> ---
Reviewed-by: Erik Skultety
Hi,
I have a PoC that enables partial coldplug assignment of multifunction
PCI devices with managed mode. At this moment, Libvirt can't handle
this scenario - the code will detach only the hostdevs from the XML,
when in fact the whole IOMMU needs to be detached. This can be
verified by the fact
When querying storage metadata after a block job we re-run
virStorageFileGetMetadata on the top level storage file. This means that
the workers (virStorageFileGetMetadataInternal) must not overwrite any
pointers without freeing them.
This was not considered for src->compat and src->features. Fix
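Schematically, the fix pattern looks like the following (types and member
names simplified here; this is not the actual virStorageSource layout):

#include <stdlib.h>
#include <string.h>

struct meta {
    char *compat;     /* stands in for src->compat */
    char *features;   /* stands in for src->features, type simplified */
};

/* May be called repeatedly on the same struct, like the worker above:
 * free whatever a previous run stored before overwriting it. */
static int fill_meta(struct meta *m, const char *compat, const char *features)
{
    free(m->compat);
    m->compat = NULL;
    free(m->features);
    m->features = NULL;

    if (compat && !(m->compat = strdup(compat)))
        return -1;
    if (features && !(m->features = strdup(features)))
        return -1;
    return 0;
}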
While verifying that legacy block jobs are not broken I've found a
memleak.
Peter Krempa (2):
util: storage: Clean up label use in virStorageFileGetMetadataInternal
util: storage: Don't leak metadata on repeated calls of
virStorageFileGetMetadata
src/util/virstoragefile.c | 36
The function does not do any cleanup, so replace the 'cleanup' label
with a direct return of -1 and the 'done' label with a return of 0.
Signed-off-by: Peter Krempa
---
src/util/virstoragefile.c | 27 +++
1 file changed, 11 insertions(+), 16 deletions(-)
diff --git
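To illustrate the kind of change described above (schematic only;
check_magic(), has_backing() and check_version() are made-up helpers,
not the real probing code):

static int check_magic(const char *buf)   { return buf && buf[0] ? 0 : -1; }
static int has_backing(const char *buf)   { return buf[1] != '\0'; }
static int check_version(const char *buf) { return buf[2] == '2' ? 0 : -1; }

/* Before: 'cleanup' and 'done' labels even though nothing needs releasing. */
static int probe_old(const char *buf)
{
    int ret = -1;

    if (check_magic(buf) < 0)
        goto cleanup;
    if (!has_backing(buf))
        goto done;             /* nothing more to parse, still a success */
    if (check_version(buf) < 0)
        goto cleanup;

 done:
    ret = 0;
 cleanup:
    return ret;
}

/* After: with no cleanup to perform, return directly. */
static int probe_new(const char *buf)
{
    if (check_magic(buf) < 0)
        return -1;
    if (!has_backing(buf))
        return 0;
    if (check_version(buf) < 0)
        return -1;
    return 0;
}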
On Thu, Jul 18, 2019 at 04:42:00PM +0200, Peter Krempa wrote:
While verifying that legacy block jobs are not broken I've found a
memleak.
Peter Krempa (2):
util: storage: Clean up label use in virStorageFileGetMetadataInternal
util: storage: Don't leak metadata on repeated calls of
On 7/18/19 4:42 PM, Peter Krempa wrote:
While verifying that legacy block jobs are not broken I've found a
memleak.
Peter Krempa (2):
util: storage: Clean up label use in virStorageFileGetMetadataInternal
util: storage: Don't leak metadata on repeated calls of
Shutting down the daemon after 30 seconds of being idle is a little bit
too aggressive. Especially when using 'virsh' in single-shot mode, as
opposed to interactive shell mode, it would not be unusual to have
more than 30 seconds between commands. This will lead to the daemon
shutting down and
Ping
On Mon, Jul 08, 2019 at 03:13:09PM +0100, Daniel P. Berrangé wrote:
> Instead of having each caller pass in the desired logfile name, pass in
> the binary name instead. The logging code can then just derive a logfile
> name by appending ".log".
>
> Signed-off-by: Daniel P. Berrangé
> ---
>
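As a hedged sketch of what that derivation could look like (assumed shape
only; build_log_filename() and the log directory are made up for
illustration, not taken from the actual patch):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

/* Given the binary name (e.g. "virtlogd"), derive "<logdir>/<binary>.log". */
static char *build_log_filename(const char *logdir, const char *binary)
{
    char *path = NULL;

    if (asprintf(&path, "%s/%s.log", logdir, binary) < 0)
        return NULL;
    return path;
}

A caller would then pass just its binary name and get back e.g.
"/var/log/libvirt/virtlogd.log".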
On 7/18/19 4:59 AM, Daniel P. Berrangé wrote:
On Wed, Jul 17, 2019 at 05:38:51PM -0400, Cole Robinson wrote:
On 7/17/19 12:49 PM, Laine Stump wrote:
On 7/14/19 8:03 PM, Cole Robinson wrote:
There are several unresolved RFEs for the bridge
driver that are essentially requests to add XML
On Thu, Jul 18, 2019 at 03:53:32PM +0100, Daniel P. Berrangé wrote:
Ping
On Mon, Jul 08, 2019 at 03:13:09PM +0100, Daniel P. Berrangé wrote:
Instead of having each caller pass in the desired logfile name, pass in
the binary name instead. The logging code can then just derive a logfile
name by
On Thu, Jul 18, 2019 at 05:14:13PM +0200, Ján Tomko wrote:
> On Thu, Jul 18, 2019 at 03:53:32PM +0100, Daniel P. Berrangé wrote:
> > Ping
> >
> > On Mon, Jul 08, 2019 at 03:13:09PM +0100, Daniel P. Berrangé wrote:
> > > Instead of having each caller pass in the desired logfile name, pass in
> > >
On 7/18/19 10:29 AM, Daniel Henrique Barboza wrote:
Hi,
I have a PoC that enables partial coldplug assignment of multifunction
PCI devices with managed mode. At this moment, Libvirt can't handle
this scenario - the code will detach only the hostdevs from the XML,
when in fact the whole IOMMU
Since fb9f6ce6253 we have been including a libxml header file in the
network driver but never linking against libxml. This hasn't caused an
immediate problem because in the end the network driver links
with libvirt.la. But apparently, it's causing a build issue on
old Ubuntu.
Signed-off-by: Michal Privoznik
---
On 7/18/19 12:29 PM, Laine Stump wrote:
On 7/18/19 10:29 AM, Daniel Henrique Barboza wrote:
Hi,
I have a PoC that enables partial coldplug assignment of multifunction
PCI devices with managed mode. At this moment, Libvirt can't handle
this scenario - the code will detach only the hostdevs
The block job event handler qemuProcessHandleBlockJob looks at the block
job data to see whether the job requires synchronous handling. Since the
block job event may arrive before we continue the job handling (if the
job has no data to copy) we could hit the state when the job is still
set as
While this function does start a block job in the case when we are not able
to get our internal data for it, the handler sets the job state to
QEMU_BLOCKJOB_STATE_RUNNING anyway, thus qemuBlockJobStartupFinalize
would just unref the job.
Since the other usage of qemuBlockJobStartupFinalize in the
Some of the refactors which allowed carrying more data with blockjobs
created a race condition where a too-quick job would actually delete the
data. This is a problem for legacy block jobs only.
Peter Krempa (3):
qemu: process: Don't use qemuBlockJobStartupFinalize in
Report in logs when we don't find existing block job data and create it
just to handle the job.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0438dd2878..482f915b67 100644
---
In the effort to use blockdev we need a few tweaks for the support APIs.
Peter Krempa (8):
qemu: driver: blockdevize qemuDomainGetBlockJobInfo
qemu: Make checks in qemuDomainBlockPivot depend on data of the job
qemu: blockjob: Add block job states for abort and pivot operations
qemu: Use
Use the stored job name rather than passing in the disk alias when
referring to the job, which allows the same code to work also when
-blockdev will be used.
Note that this API does not require the change to use 'query-job' as it
will only ever work with blockjobs bound to disks due to the
Now that we track the job separately we watch only for the abort of the
one single block job so the comment is no longer accurate. Also
describing asynchronous operation is not really necessary.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 6 --
1 file changed, 6 deletions(-)
When initiating a pivot or abort of a block job we need to track which
one was initiated. Currently this is done via data stashed in
virDomainDiskDef. Add the possibility to track this also together with the
job itself.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_blockjob.c | 9 -
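Purely as an illustration of that idea (names and values are hypothetical,
not the actual qemu_blockjob definitions):

/* Track the requested operation on the job itself instead of on the disk. */
typedef enum {
    EXAMPLE_BLOCKJOB_STATE_NEW,
    EXAMPLE_BLOCKJOB_STATE_RUNNING,
    EXAMPLE_BLOCKJOB_STATE_ABORTING,   /* user asked for an abort */
    EXAMPLE_BLOCKJOB_STATE_PIVOTING,   /* user asked to pivot to the mirror */
    EXAMPLE_BLOCKJOB_STATE_COMPLETED,
    EXAMPLE_BLOCKJOB_STATE_FAILED,
} exampleBlockJobState;

struct exampleBlockJob {
    char *name;                   /* job name as known to qemu */
    exampleBlockJobState state;   /* updated when an abort or pivot is initiated */
};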
Use job-complete/job-abort instead of the blockjob-* variants for
blockdev.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7a69a0e084..12ae31b9a7
As the error message is now available and we know whether the job failed
we can report an error straight away rather than having the user check
the event.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 15 +++
1 file changed, 15 insertions(+)
diff --git
With -blockdev:
- we track the job and check it after restart
- have the ability to ask qemu to persist it to collect result
- have the ability to report errors.
This solves all points the comment outlined so remove it. Also all jobs
handle the disk state modification along with the event so
Make decisions based on the configuration of the job rather than the data
stored with the disk.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 24 +---
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
Set the correct job states after the operation is requested in qemu.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_blockjob.c | 4 +++-
src/qemu/qemu_driver.c | 8 +---
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/src/qemu/qemu_blockjob.c b/src/qemu/qemu_blockjob.c
index
On 7/18/19 11:56 AM, Daniel Henrique Barboza wrote:
On 7/18/19 12:29 PM, Laine Stump wrote:
On 7/18/19 10:29 AM, Daniel Henrique Barboza wrote:
Hi,
I have a PoC that enables partial coldplug assignment of multifunction
PCI devices with managed mode. At this moment, Libvirt can't handle
this
On 7/18/19 6:20 AM, no-re...@patchew.org wrote:
> PASS 17 test-bdrv-drain /bdrv-drain/graph-change/drain_all
> =
> ==10263==ERROR: AddressSanitizer: heap-use-after-free on address
> 0x6122c1f0 at pc 0x555fd5bd7cb6 bp
On 7/12/19 12:23 PM, Stefan Berger wrote:
This series of patches addresses the RFE in BZ 1728030:
https://bugzilla.redhat.com/show_bug.cgi?id=1728030
This series of patches adds support for vTPM state encryption by passing
the read-end of a pipe's file descriptor to 'swtpm_setup' and 'swtpm'
On 7/18/19 2:18 PM, Laine Stump wrote:
On 7/18/19 11:56 AM, Daniel Henrique Barboza wrote:
On 7/18/19 12:29 PM, Laine Stump wrote:
On 7/18/19 10:29 AM, Daniel Henrique Barboza wrote:
Hi,
I have a PoC that enables partial coldplug assignment of multifunction
PCI devices with managed mode.
Signed-off-by: Jonathon Jongsma
---
docs/news.xml | 9 +
1 file changed, 9 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 07d4575a65..1134309ec2 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -54,6 +54,15 @@
+
+
+ qemu: Introduce a
We have 3 pieces of code that do roughly the same thing, but the
behavior varies depending on where it is called:
- virPCIDeviceReattach(). This is where the actual re-attach
work happens;
- virHostdevReattachPCIDevice(). This is a static function from
virhostdev.c that calls virPCIDeviceReattach() after
There are two places in virhostdev that execute a re-attach operation
on all PCI devices of a virPCIDeviceListPtr array:
virHostdevPreparePCIDevices and virHostdevReAttachPCIDevices. The
difference is that the code inside virHostdevPreparePCIDevices uses
virPCIDeviceReattach(), while inside
This code that executes virPCIDeviceReset on all virPCIDevicePtr
objects of a given virPCIDeviceListPtr list is replicated twice
in the code. Putting it in a helper function helps with
readability.
Signed-off-by: Daniel Henrique Barboza
---
src/util/virhostdev.c | 54
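A hedged sketch of what such a helper could look like (hypothetical types
and names; the real code operates on virPCIDeviceListPtr and calls
virPCIDeviceReset):

#include <stddef.h>

struct pci_device;                              /* stands in for virPCIDevicePtr */
int pci_device_reset(struct pci_device *dev);   /* stands in for the real reset call */

/* Reset every device in the list, bailing out on the first failure. */
static int reset_all_devices(struct pci_device **devs, size_t ndevs)
{
    size_t i;

    for (i = 0; i < ndevs; i++) {
        if (pci_device_reset(devs[i]) < 0)
            return -1;
    }
    return 0;
}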
These are cleanups that I made together with an attempt to
enable partial PCI multifunction assignment with managed=true.
That work will be scrapped after discussions with Laine in
[1], but these cleanups kind of make sense on their own, so
here they are.
[1]
On 7/18/19 2:58 PM, Daniel Henrique Barboza wrote:
On 7/18/19 2:18 PM, Laine Stump wrote:
But to back up a bit - what is it about managed='yes' that makes you
want to do it that way instead of managed='no'? Do you really ever
need the devices to be bound to the host driver? Or are you just
On Thu, 18 Jul 2019 17:08:23 -0400
Laine Stump wrote:
> On 7/18/19 2:58 PM, Daniel Henrique Barboza wrote:
> >
> > On 7/18/19 2:18 PM, Laine Stump wrote:
> >
> >> But to back up a bit - what is it about managed='yes' that makes you
> >> want to do it that way instead of managed='no'? Do you
On 7/18/19 8:53 AM, Daniel P. Berrangé wrote:
> Shutting down the daemon after 30 seconds of being idle is a little bit
> too aggressive. Especially when using 'virsh' in single-shot mode, as
> opposed to interactive shell mode, it would not be unusual to have
> more than 30 seconds between
virCgroupRemove returns -1 when removing the cgroup fails.
But there is retry code to remove the cgroup in qemuProcessStop:
 retry:
    if ((ret = qemuRemoveCgroup(vm)) < 0) {
        if (ret == -EBUSY && (retries++ < 5)) {
            usleep(200*1000);
            goto retry;
        }
Hi guys,
On Fri, Jul 12, 2019 at 13:06, Peter Krempa wrote:
>
> Add the job structure to the table when instantiating a new job and
> remove it when it terminates/fails.
>
> Signed-off-by: Peter Krempa
> ---
> src/qemu/qemu_blockjob.c | 29 ++---
>