virtio_vqs forgot to free on the error path in realize(). Fix that.
The asan stack:
Direct leak of 14336 byte(s) in 1 object(s) allocated from:
#0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
#1 0x7f58b858249d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d)
#2
From: Peter Maydell
Coverity points out (CID 1421984) that we are leaking the
memory returned by qemu_allocate_irqs(). We can avoid this
leak by switching to using qdev_init_gpio_in(); the base
class finalize will free the irqs that this allocates under
the hood.
Signed-off-by: Peter Maydell
From: Mark Cave-Ayland
The PCI level calculation was accidentally left in when rebasing from a
previous patchset. Since both IRQs are driven separately, the value
being passed into the IRQ handler should be used directly.
Signed-off-by: Mark Cave-Ayland
Message-id:
From: Sven Schnelle
While working on the Tulip driver I tried to write some Teledisk images to
a floppy image, which didn't work. It turned out that Teledisk checks the written
data by issuing a READ command to the FDC but running the DMA controller
in VERIFY mode. As we ignored the DMA request in
From: Mark Cave-Ayland
This prevents the memory from qemu_allocate_irqs() from being leaked which
can in some cases be spotted by Coverity (CID 1421984).
Signed-off-by: Mark Cave-Ayland
Message-id: 20200324210519.2974-3-mark.cave-ayl...@ilande.co.uk
Signed-off-by: John Snow
---
hw/ide/via.c
From: Mark Cave-Ayland
This prevents the memory from qemu_allocate_irqs() from being leaked which
can in some cases be spotted by Coverity (CID 1421984).
Signed-off-by: Mark Cave-Ayland
Message-id: 20200324210519.2974-4-mark.cave-ayl...@ilande.co.uk
Signed-off-by: John Snow
---
The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d:
Merge remote-tracking branch
'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging (2020-03-26
20:55:54 +)
are available in the Git repository at:
https://github.com/jnsnow/qemu.git
On 3/27/20 1:59 PM, Alberto Garcia wrote:
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images
On Fri 27 Mar 2020 07:57:40 PM CET, Eric Blake wrote:
>> +/* If the image does not support QCOW_OFLAG_ZERO then discarding
>> + * clusters could expose stale data from the backing file. */
>> +if (s->qcow_version < 3 && bs->backing) {
>> +return -ENOTSUP;
>> +}
>
> Hmm.
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images the cluster is simply deallocated, exposing
On 3/27/20 11:48 AM, Alberto Garcia wrote:
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images
On 3/27/20 1:43 PM, Alberto Garcia wrote:
On Fri 27 Mar 2020 07:13:04 PM CET, Eric Blake wrote:
+for qcow2_compat in 0.10 1.1; do
+echo "# Create an image with compat=$qcow2_compat without a backing file"
+_make_test_img -o "compat=$qcow2_compat" 128k
+
+echo "# Fill all clusters
On 3/27/20 12:43 PM, Daniel P. Berrangé wrote:
I don't think it is acceptable to do this loop here. The gnutls_bye()
function triggers several I/O operations which could block. Looping
like this means we busy-wait, blocking this thread for as long as I/O
is blocking on the socket.
Hmm, good
Patchew URL:
https://patchew.org/QEMU/20200327161936.2225989-1-ebl...@redhat.com/
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN
On Fri 27 Mar 2020 07:13:04 PM CET, Eric Blake wrote:
>> +for qcow2_compat in 0.10 1.1; do
>> +echo "# Create an image with compat=$qcow2_compat without a backing
>> file"
>> +_make_test_img -o "compat=$qcow2_compat" 128k
>> +
>> +echo "# Fill all clusters with data and then discard
On 3/27/20 11:48 AM, Alberto Garcia wrote:
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images
On Fri, Mar 27, 2020 at 12:42:21PM -0500, Eric Blake wrote:
> On 3/27/20 11:35 AM, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote:
> > > Although the remote end should always be tolerant of a socket being
> > > arbitrarily closed, there are situations
On Fri, Mar 27, 2020 at 12:29:39PM -0500, Eric Blake wrote:
> On 3/27/20 11:40 AM, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote:
> > > Gnutls documents that while many apps simply yank out the underlying
> > > transport at the end of communication in the
On 3/27/20 11:35 AM, Daniel P. Berrangé wrote:
On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote:
Although the remote end should always be tolerant of a socket being
arbitrarily closed, there are situations where it is a lot easier if
the remote end can be guaranteed to read EOF even
On Fri, 27 Mar 2020 at 15:20, Kevin Wolf wrote:
>
> The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d:
>
> Merge remote-tracking branch
> 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging
> (2020-03-26 20:55:54 +)
>
> are available in the Git
On 3/27/20 11:40 AM, Daniel P. Berrangé wrote:
On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote:
Gnutls documents that while many apps simply yank out the underlying
transport at the end of communication in the name of efficiency, this
is indistinguishable from a malicious actor
A discard request deallocates the selected clusters so they read back
as zeroes. This is done by clearing the cluster offset field and
setting QCOW_OFLAG_ZERO in the L2 entry.
This flag is however only supported when qcow_version >= 3. In older
images the cluster is simply deallocated, exposing
On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote:
> Gnutls documents that while many apps simply yank out the underlying
> transport at the end of communication in the name of efficiency, this
> is indistinguishable from a malicious actor terminating the connection
> prematurely. Since
On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote:
> Although the remote end should always be tolerant of a socket being
> arbitrarily closed, there are situations where it is a lot easier if
> the remote end can be guaranteed to read EOF even before the socket
> has closed. In
On Fri 27 Mar 2020 03:14:42 PM CET, Kevin Wolf wrote:
>> +echo "# Create a backing image and fill it with data"
>> +BACKING_IMG="$TEST_IMG.base"
>> +TEST_IMG="$BACKING_IMG" _make_test_img 128k
>> +$QEMU_IO -c 'write -P 0xff 0 128k' "$BACKING_IMG" | _filter_qemu_io
>> +
>> +for qcow2_compat in 0.10
Gnutls documents that while many apps simply yank out the underlying
transport at the end of communication in the name of efficiency, this
is indistinguishable from a malicious actor terminating the connection
prematurely. Since our channel I/O code already supports the notion of
a graceful
Gnutls documents that applications that want to distinguish between a
clean end-of-communication and a malicious client abruptly tearing the
underlying transport out of under our feet need to use gnutls_bye().
Our channel code is already set up to allow shutdown requests, but we
weren't forwarding
I'm currently trying to debug a hang in 'nbdkit nbd' as a client when
TLS is in use, when dealing with three processes:
nbdkit example1[1] <=tls=> nbdkit nbd[2] <=plain=> qemu-nbd --list[3]
In theory, when process [3] exits, the server side of process [2] is
supposed to disconnect as client from
Although the remote end should always be tolerant of a socket being
arbitrarily closed, there are situations where it is a lot easier if
the remote end can be guaranteed to read EOF even before the socket
has closed. In particular, when using gnutls, if we fail to inform
the remote end about an
From: Chen Qun
The Clang static code analyzer shows a warning:
block/iscsi.c:1920:9: warning: Value stored to 'flags' is never read
flags &= ~BDRV_O_RDWR;
^
iscsi_allocmap_init() only checks BDRV_O_NOCACHE, which
is the same in both flags and bs->open_flags.
This reverts commit 7e6c4ff792734e196c8ca82564c56b5e7c6288ca.
The fix was incomplete as it only protected against requests waiting for
themselves, but not against requests waiting for each other. We need a
different solution.
Signed-off-by: Kevin Wolf
Message-Id:
mirror_wait_for_free_in_flight_slot() just picks a random operation to
wait for. However, a MirrorOp is already in s->ops_in_flight when
mirror_co_read() waits for free slots, so if not enough slots are
immediately available, an operation can end up waiting for itself, or
two or more operations
These fields were already removed in commit c3c10f72, but then commit
b58deb34 revived them probably due to bad merge conflict resolution.
They are still unused, so remove them again.
Fixes: b58deb344ddff3b9d8b265bf73a65274767ee5f4
Signed-off-by: Kevin Wolf
Message-Id:
The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d:
Merge remote-tracking branch
'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging (2020-03-26
20:55:54 +)
are available in the Git repository at:
git://repo.or.cz/qemu/kevin.git
From: Vladimir Sementsov-Ogievskiy
bdrv_root_attach_child promises to drop the child_bs reference on failure.
It does so on the first handled failure path, but not on the second. Fix
that.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Message-Id: <20200324155921.23822-1-vsement...@virtuozzo.com>
From: Minwoo Im
The given argument for this trace should be cqid, not sqid.
Signed-off-by: Minwoo Im
Message-Id: <20200324140646.8274-1-minwoo.im@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Stefano Garzarella
Signed-off-by: Kevin Wolf
---
hw/block/trace-events | 2 +-
1
Am 24.03.2020 um 13:16 hat Alberto Garcia geschrieben:
> A discard request deallocates the selected clusters so they read back
> as zeroes. This is done by clearing the cluster offset field and
> setting QCOW_OFLAG_ZERO in the L2 entry.
>
> This flag is however only supported when qcow_version >=
27.03.2020 12:40, Denis Plotnikov wrote:
On 27.03.2020 11:43, Vladimir Sementsov-Ogievskiy wrote:
Should we somehow note in the qcow2 spec that we use the streamed version of zstd
with a specific end byte?
We didn't do it for zlib. zstd does it the same way as zlib, saving the
compression output to
> I *think* the second patch also fixes the hangs on backup abort that I and
> Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent
> before too.
After more testing, I am 100% sure the bug (or another one) is still there.
Here is how to trigger:
1. use latest qemu sources from
On 27.03.2020 11:43, Vladimir Sementsov-Ogievskiy wrote:
Should we somehow note in the qcow2 spec that we use the streamed version of
zstd with a specific end byte?
We didn't do it for zlib. zstd does it the same way as zlib, saving the
compression output to some buffer.
23.03.2020 17:25, Denis
On Fri, Mar 27, 2020 at 12:56:19PM +0800, Pan Nengyuan wrote:
> virtio_vqs forgot to free on the error path in realize(). Fix that.
>
> The asan stack:
> Direct leak of 14336 byte(s) in 1 object(s) allocated from:
> #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
>
On 3/27/2020 4:41 PM, Stefano Garzarella wrote:
> On Fri, Mar 27, 2020 at 11:56:49AM +0800, Pan Nengyuan wrote:
>> virtio_vqs forgot to free on the error path in realize(). Fix that.
>>
>> The asan stack:
>> Direct leak of 14336 byte(s) in 1 object(s) allocated from:
>> #0 0x7f58b93fd970 in
Should we somehow note in the qcow2 spec that we use the streamed version of zstd
with a specific end byte?
23.03.2020 17:25, Denis Plotnikov wrote:
zstd significantly reduces cluster compression time.
It provides better compression performance while maintaining
the same level of compression ratio in
On Fri, Mar 27, 2020 at 11:56:49AM +0800, Pan Nengyuan wrote:
> virtio_vqs forgot to free on the error path in realize(). Fix that.
>
> The asan stack:
> Direct leak of 14336 byte(s) in 1 object(s) allocated from:
> #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970)
>
Wait - maybe this was a bug in my test setup - I am unable to reproduce now..
@Stefan Reiter: Are you able to trigger this?
> > I *think* the second patch also fixes the hangs on backup abort that I and
> > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent
> > before too.
But I need to mention that it takes some time to reproduce this. This time I
ran/aborted about 500 backup jobs until it triggered.
> > I *think* the second patch also fixes the hangs on backup abort that I and
> > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent
> > before
> I *think* the second patch also fixes the hangs on backup abort that I and
> Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent
> before too.
No, I still get this freeze:
0 0x7f0aa4866916 in __GI_ppoll (fds=0x7f0a12935c40, nfds=2,
timeout=, timeout@entry=0x0,
26.03.2020 18:56, Stefan Reiter wrote:
All code-paths leading to backup_clean (via job_clean) have the job's
context already acquired. The job's context is guaranteed to be the same
as the one used by backup_top via backup_job_create.
As we already discussed, this is not quite right. So, may