On Fri, Nov 29, 2019 at 3:02 PM Jiri Denemark wrote:
>
> Gluster 6.0 is not built on i686 for RHEL-8, which prevents libvirt from
> building. Let's just disable gluster there as all we need are client
> libraries anyway.
>
> Signed-off-by: Jiri Denemark
> ---
> libvirt.spec.in | 8 ++++++++
> 1 file changed, 8 insertions(+)
We tolerate image format detection during block copy in very specific
circumstances, but the code didn't error out on failure of the format
detection.
Signed-off-by: Peter Krempa
---
src/qemu/qemu_driver.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git
Commit 4b58fdf280a which enabled block copy also for network
destinations needed to limit when the 'mirror' storage source is
initialized in cases when we e.g. don't have an appropriate backend.
Limiting it just to virStorageFileSupportsCreate is too restrictive as
for example we can't precreate
Peter Krempa (2):
qemu: blockcopy: Report error on image format detection failure
qemu: blockcopy: Fix conditions when virStorageSource should be
initialized
src/qemu/qemu_driver.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
--
2.23.0
--
On Fri, Nov 29, 2019 at 02:04:35PM +0100, Michal Privoznik wrote:
On 11/29/19 1:30 PM, Ján Tomko wrote:
Reviewed-by: Ján Tomko
Thanks, I'd like to push this one during the freeze since it fixes a
crash. Are you okay with that?
Yes, that's what the freeze is for.
Jano
Michal
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
On Mon, Nov 25, 2019 at 05:49:35PM +0100, Michal Privoznik wrote:
In v5.9.0-273-g8ecab214de I've tried to fix a lock ordering
problem, but introduced a crasher. Problem is that because the
client lock is unlocked (in order to honour lock ordering) the
stream we are currently checking in
Han Han (2):
conf: Allow virtio scsi to use unit 16383
rng: Extend the valid range of drive unit
docs/schemas/domaincommon.rng | 2 +-
src/conf/domain_conf.c| 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--
2.23.0
In qemu, 16383 is the top valid value for virtio scsi unit number:
$ /usr/libexec/qemu-kvm \
-device \
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x6 \
-drive \
file=A.qcow2,format=qcow2,if=none,id=drive \
-device \
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=16383,drive=drive,id=disk
VNC server
In the last commit, we allowed the unit number of virtio scsi disks to
use the range 0..16383. Extend the range in the rng file to match that.
Signed-off-by: Han Han
---
docs/schemas/domaincommon.rng | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/schemas/domaincommon.rng
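With the extended schema, a guest disk can address the full unit range QEMU accepts. A sketch of the domain XML such an address would take (the file path and device name here are only illustrative):

```xml
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/A.qcow2'/>
  <target dev='sdb' bus='scsi'/>
  <!-- unit may now go up to 16383, matching QEMU's virtio-scsi limit -->
  <address type='drive' controller='0' bus='0' target='0' unit='16383'/>
</disk>
```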
On Fri, Nov 29, 2019 at 10:46:47 +0100, Ján Tomko wrote:
> On Fri, Nov 29, 2019 at 09:59:28AM +0100, Peter Krempa wrote:
> > The design of the stats fields returned for VIR_DOMAIN_STATS_IOTHREAD
> > domain statistics groups deviates from the established pattern. In this
> > instance it's
On Fri, Nov 29, 2019 at 09:59:28AM +0100, Peter Krempa wrote:
The design of the stats fields returned for VIR_DOMAIN_STATS_IOTHREAD
domain statistics groups deviates from the established pattern. In this
instance it's impossible to infer which values for the
iothread fields will be reported
With this patch, users can cold-unplug some sound devices using the
"virsh detach-device vm sound.xml --config" command.
Signed-off-by: Jidong Xia
---
src/conf/domain_conf.c | 48
src/conf/domain_conf.h | 3 +++
src/libvirt_private.syms | 2 ++
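A sketch of how the cold unplug described above would be driven; the sound.xml content below is a hypothetical example and must match a device already present in the persistent domain definition:

```xml
<!-- sound.xml: describes the device to remove from the persistent config -->
<sound model='ich6'/>
```

The device is then removed from the persistent configuration with `virsh detach-device vm sound.xml --config` and is gone on the next boot of the guest.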
On Fri, Nov 29, 2019 at 09:59:26AM +0100, Peter Krempa wrote:
The original implementation used QEMU_ADD_COUNT_PARAM which added the
'count' suffix, but 'cnt' was documented. Fix the documentation to
conform with the original implementation.
Signed-off-by: Peter Krempa
---
src/libvirt-domain.c
On Fri, Nov 29, 2019 at 09:59:27AM +0100, Peter Krempa wrote:
In commit 2ccb5335dc4 I've refactored how we fill the typed parameters
for domain statistics. The commit introduced a regression in the
formatting of stats for IOthreads by using the array index to label the
entries as it's common for
The design of the stats fields returned for VIR_DOMAIN_STATS_IOTHREAD
domain statistics groups deviates from the established pattern. In this
instance it's impossible to infer which values for the
iothread fields will be reported back because they have no
connection to the iothread.count
Fix a regression in the numbering of the output in the statistics for
iothreads and improve a few aspects so that the numbering is usable.
Peter Krempa (3):
lib: Fix documentation for the count field of
VIR_DOMAIN_STATS_IOTHREAD
qemu: Fix indexes in statistics of iothreads
qemu: Report which
In commit 2ccb5335dc4 I've refactored how we fill the typed parameters
for domain statistics. The commit introduced a regression in the
formatting of stats for IOthreads by using the array index to label the
entries as it's common for all other types of statistics rather than
the iothread IDs used
The original implementation used QEMU_ADD_COUNT_PARAM which added the
'count' suffix, but 'cnt' was documented. Fix the documentation to
conform with the original implementation.
Signed-off-by: Peter Krempa
---
src/libvirt-domain.c | 10 +-
1 file changed, 5 insertions(+), 5
On 11/28/19 3:48 PM, Erik Skultety wrote:
This is a very simple and straightforward implementation of the opposite
of what buildPool does for the disk backend.
The background for this change comes from an existing test case in TCK
which does use the delete method for a pool of type disk, but it
On 11/28/19 2:04 PM, Daniel P. Berrangé wrote:
>
I don't recall the exact details, but I remember I had to disable
clearing capabilities temporarily (I vaguely recall it had something to
do with device assignment). What I am trying to say is that clearing
capabilities may sometimes get in our
23 matches