[PATCH v4 1/2] virtio-blk: delete vqs on the error path in realize()

2020-03-27 Thread Pan Nengyuan
realize() forgot to free virtio_vqs on the error path. Fix that. The asan stack: Direct leak of 14336 byte(s) in 1 object(s) allocated from: #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) #1 0x7f58b858249d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d) #2
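
The shape of the fix, as a minimal sketch: tear down the queues that realize() already created before bailing out. Queue count and variable names here are assumptions, not the actual hunk:

    /* Hypothetical error path in virtio_blk_device_realize() */
    for (i = 0; i < conf->num_queues; i++) {
        virtio_del_queue(vdev, i);      /* free each allocated vq */
    }
    virtio_cleanup(vdev);               /* undo virtio_init() */
    return;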

[PULL 2/5] hw/ide/sii3112: Use qdev gpio rather than qemu_allocate_irqs()

2020-03-27 Thread John Snow
From: Peter Maydell Coverity points out (CID 1421984) that we are leaking the memory returned by qemu_allocate_irqs(). We can avoid this leak by switching to using qdev_init_gpio_in(); the base class finalize will free the irqs that this allocates under the hood. Signed-off-by: Peter Maydell
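
The conversion pattern, sketched with a hypothetical handler name and line count (not the actual sii3112 hunk):

    /* Before: the qemu_irq array returned by qemu_allocate_irqs() is
     * heap-allocated and never freed, hence the Coverity report. */
    qemu_irq *irqs = qemu_allocate_irqs(irq_handler, s, 2);

    /* After: qdev owns the GPIO input lines and the base class
     * finalize frees them automatically. */
    qdev_init_gpio_in(DEVICE(dev), irq_handler, 2);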

[PULL 3/5] via-ide: don't use PCI level for legacy IRQs

2020-03-27 Thread John Snow
From: Mark Cave-Ayland The PCI level calculation was accidentally left in when rebasing from a previous patchset. Since both IRQs are driven separately, the value being passed into the IRQ handler should be used directly. Signed-off-by: Mark Cave-Ayland Message-id:

[PULL 1/5] fdc/i8257: implement verify transfer mode

2020-03-27 Thread John Snow
From: Sven Schnelle While working on the Tulip driver I tried to write some Teledisk images to a floppy image, which didn't work. Turned out that Teledisk checks the written data by issuing a READ command to the FDC but running the DMA controller in VERIFY mode. As we ignored the DMA request in

[PULL 4/5] via-ide: use qdev gpio rather than qemu_allocate_irqs()

2020-03-27 Thread John Snow
From: Mark Cave-Ayland This prevents the memory from qemu_allocate_irqs() from being leaked which can in some cases be spotted by Coverity (CID 1421984). Signed-off-by: Mark Cave-Ayland Message-id: 20200324210519.2974-3-mark.cave-ayl...@ilande.co.uk Signed-off-by: John Snow --- hw/ide/via.c

[PULL 5/5] cmd646-ide: use qdev gpio rather than qemu_allocate_irqs()

2020-03-27 Thread John Snow
From: Mark Cave-Ayland This prevents the memory from qemu_allocate_irqs() from being leaked which can in some cases be spotted by Coverity (CID 1421984). Signed-off-by: Mark Cave-Ayland Message-id: 20200324210519.2974-4-mark.cave-ayl...@ilande.co.uk Signed-off-by: John Snow ---

[PULL 0/5] Ide patches

2020-03-27 Thread John Snow
The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d: Merge remote-tracking branch 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging (2020-03-26 20:55:54 +0000) are available in the Git repository at: https://github.com/jnsnow/qemu.git

Re: [PATCH v4] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Eric Blake
On 3/27/20 1:59 PM, Alberto Garcia wrote: A discard request deallocates the selected clusters so they read back as zeroes. This is done by clearing the cluster offset field and setting QCOW_OFLAG_ZERO in the L2 entry. This flag is however only supported when qcow_version >= 3. In older images

Re: [PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Alberto Garcia
On Fri 27 Mar 2020 07:57:40 PM CET, Eric Blake wrote: >> +/* If the image does not support QCOW_OFLAG_ZERO then discarding >> + * clusters could expose stale data from the backing file. */ >> +if (s->qcow_version < 3 && bs->backing) { >> +return -ENOTSUP; >> +} > > Hmm.

[PATCH v4] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Alberto Garcia
A discard request deallocates the selected clusters so they read back as zeroes. This is done by clearing the cluster offset field and setting QCOW_OFLAG_ZERO in the L2 entry. This flag is however only supported when qcow_version >= 3. In older images the cluster is simply deallocated, exposing
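
The core of the fix is the check quoted in the v3 review above:

    /* If the image does not support QCOW_OFLAG_ZERO then discarding
     * clusters could expose stale data from the backing file. */
    if (s->qcow_version < 3 && bs->backing) {
        return -ENOTSUP;
    }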

Re: [PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Eric Blake
On 3/27/20 11:48 AM, Alberto Garcia wrote: A discard request deallocates the selected clusters so they read back as zeroes. This is done by clearing the cluster offset field and setting QCOW_OFLAG_ZERO in the L2 entry. This flag is however only supported when qcow_version >= 3. In older images

Re: [PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Eric Blake
On 3/27/20 1:43 PM, Alberto Garcia wrote: On Fri 27 Mar 2020 07:13:04 PM CET, Eric Blake wrote: +for qcow2_compat in 0.10 1.1; do +echo "# Create an image with compat=$qcow2_compat without a backing file" +_make_test_img -o "compat=$qcow2_compat" 128k + +echo "# Fill all clusters

Re: [PATCH 2/3] io: Support shutdown of TLS channel

2020-03-27 Thread Eric Blake
On 3/27/20 12:43 PM, Daniel P. Berrangé wrote: I don't think it is acceptable to do this loop here. The gnutls_bye() function triggers several I/O operations which could block. Looping like this means we busy-wait, blocking this thread for as long as I/O is blocking on the socket. Hmm, good

Re: [PATCH 0/3] nbd: Try for cleaner TLS shutdown

2020-03-27 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20200327161936.2225989-1-ebl...@redhat.com/ Hi, This series failed the docker-quick@centos7 build test. Please find the testing commands and their output below. If you have Docker installed, you can probably reproduce it locally. === TEST SCRIPT BEGIN

Re: [PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Alberto Garcia
On Fri 27 Mar 2020 07:13:04 PM CET, Eric Blake wrote: >> +for qcow2_compat in 0.10 1.1; do >> +echo "# Create an image with compat=$qcow2_compat without a backing >> file" >> +_make_test_img -o "compat=$qcow2_compat" 128k >> + >> +echo "# Fill all clusters with data and then discard

Re: [PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Eric Blake
On 3/27/20 11:48 AM, Alberto Garcia wrote: A discard request deallocates the selected clusters so they read back as zeroes. This is done by clearing the cluster offset field and setting QCOW_OFLAG_ZERO in the L2 entry. This flag is however only supported when qcow_version >= 3. In older images

Re: [PATCH 3/3] nbd: Use shutdown(SHUT_WR) after last item sent

2020-03-27 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 12:42:21PM -0500, Eric Blake wrote: > On 3/27/20 11:35 AM, Daniel P. Berrangé wrote: > > On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote: > > > Although the remote end should always be tolerant of a socket being > > > arbitrarily closed, there are situations

Re: [PATCH 2/3] io: Support shutdown of TLS channel

2020-03-27 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 12:29:39PM -0500, Eric Blake wrote: > On 3/27/20 11:40 AM, Daniel P. Berrangé wrote: > > On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote: > > > Gnutls documents that while many apps simply yank out the underlying > > > transport at the end of communication in the

Re: [PATCH 3/3] nbd: Use shutdown(SHUT_WR) after last item sent

2020-03-27 Thread Eric Blake
On 3/27/20 11:35 AM, Daniel P. Berrangé wrote: On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote: Although the remote end should always be tolerant of a socket being arbitrarily closed, there are situations where it is a lot easier if the remote end can be guaranteed to read EOF even

Re: [PULL 0/6] Block layer patches

2020-03-27 Thread Peter Maydell
On Fri, 27 Mar 2020 at 15:20, Kevin Wolf wrote: > > The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d: > > Merge remote-tracking branch > 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging > (2020-03-26 20:55:54 +0000) > > are available in the Git

Re: [PATCH 2/3] io: Support shutdown of TLS channel

2020-03-27 Thread Eric Blake
On 3/27/20 11:40 AM, Daniel P. Berrangé wrote: On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote: Gnutls documents that while many apps simply yank out the underlying transport at the end of communication in the name of efficiency, this is indistinguishable from a malicious actor

[PATCH v3] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Alberto Garcia
A discard request deallocates the selected clusters so they read back as zeroes. This is done by clearing the cluster offset field and setting QCOW_OFLAG_ZERO in the L2 entry. This flag is however only supported when qcow_version >= 3. In older images the cluster is simply deallocated, exposing

Re: [PATCH 2/3] io: Support shutdown of TLS channel

2020-03-27 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 11:19:35AM -0500, Eric Blake wrote: > Gnutls documents that while many apps simply yank out the underlying > transport at the end of communication in the name of efficiency, this > is indistinguishable from a malicious actor terminating the connection > prematurely. Since

Re: [PATCH 3/3] nbd: Use shutdown(SHUT_WR) after last item sent

2020-03-27 Thread Daniel P. Berrangé
On Fri, Mar 27, 2020 at 11:19:36AM -0500, Eric Blake wrote: > Although the remote end should always be tolerant of a socket being > arbitrarily closed, there are situations where it is a lot easier if > the remote end can be guaranteed to read EOF even before the socket > has closed. In

Re: [PATCH v2] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Alberto Garcia
On Fri 27 Mar 2020 03:14:42 PM CET, Kevin Wolf wrote: >> +echo "# Create a backing image and fill it with data" >> +BACKING_IMG="$TEST_IMG.base" >> +TEST_IMG="$BACKING_IMG" _make_test_img 128k >> +$QEMU_IO -c 'write -P 0xff 0 128k' "$BACKING_IMG" | _filter_qemu_io >> + >> +for qcow2_compat in 0.10

[PATCH 2/3] io: Support shutdown of TLS channel

2020-03-27 Thread Eric Blake
Gnutls documents that while many apps simply yank out the underlying transport at the end of communication in the name of efficiency, this is indistinguishable from a malicious actor terminating the connection prematurely. Since our channel I/O code already supports the notion of a graceful

[PATCH 1/3] crypto: Add qcrypto_tls_shutdown()

2020-03-27 Thread Eric Blake
Gnutls documents that applications that want to distinguish between a clean end-of-communication and a malicious client abruptly tearing the underlying transport out of under our feet need to use gnutls_bye(). Our channel code is already set up to allow shutdown requests, but we weren't forwarding
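
A minimal sketch of forwarding a write-side shutdown to gnutls; only gnutls_bye() and its constants are real gnutls API, the surrounding plumbing is hypothetical:

    /* Send the TLS close_notify for the write direction so the peer
     * can tell a clean EOF from a maliciously truncated connection. */
    int ret = gnutls_bye(session, GNUTLS_SHUT_WR);
    if (ret == GNUTLS_E_AGAIN || ret == GNUTLS_E_INTERRUPTED) {
        /* Non-blocking transport: retry when the channel is writable
         * again rather than busy-waiting (see the review of 2/3). */
    }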

[PATCH 0/3] nbd: Try for cleaner TLS shutdown

2020-03-27 Thread Eric Blake
I'm currently trying to debug a hang in 'nbdkit nbd' as a client when TLS is in use, when dealing with three processes: nbdkit example1[1] <=tls=> nbdkit nbd[2] <=plain=> qemu-nbd --list[3] In theory, when process [3] exits, the server side of process [2] is supposed to disconnect as client from

[PATCH 3/3] nbd: Use shutdown(SHUT_WR) after last item sent

2020-03-27 Thread Eric Blake
Although the remote end should always be tolerant of a socket being arbitrarily closed, there are situations where it is a lot easier if the remote end can be guaranteed to read EOF even before the socket has closed. In particular, when using gnutls, if we fail to inform the remote end about an
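
The plain-socket analogue of the idea, as a self-contained sketch:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Half-close the write side after the last request so the peer
     * reads EOF immediately, while replies can still be drained. */
    static void finish_writes(int sockfd, const void *msg, size_t len)
    {
        char buf[4096];

        send(sockfd, msg, len, 0);
        shutdown(sockfd, SHUT_WR);   /* peer now sees EOF on read */
        while (recv(sockfd, buf, sizeof(buf), 0) > 0) {
            /* drain remaining replies until the peer closes too */
        }
        close(sockfd);
    }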

[PULL 1/6] block/iscsi:use the flags in iscsi_open() prevent Clang warning

2020-03-27 Thread Kevin Wolf
From: Chen Qun The Clang static code analyzer shows a warning: block/iscsi.c:1920:9: warning: Value stored to 'flags' is never read flags &= ~BDRV_O_RDWR; ^ iscsi_allocmap_init() only checks BDRV_O_NOCACHE, which is the same in both flags and bs->open_flags.
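
The before/after shape, sketched; the allocmap call and its signature are assumptions based on the preview, not the actual hunk:

    /* Before: 'flags' is masked but never read again (dead store),
     * because the later call re-reads bs->open_flags instead. */
    flags &= ~BDRV_O_RDWR;
    iscsi_allocmap_init(iscsilun, bs->open_flags);

    /* After: consume the masked value, which silences the warning
     * and keeps the BDRV_O_NOCACHE check behaviour unchanged. */
    flags &= ~BDRV_O_RDWR;
    iscsi_allocmap_init(iscsilun, flags);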

[PULL 4/6] Revert "mirror: Don't let an operation wait for itself"

2020-03-27 Thread Kevin Wolf
This reverts commit 7e6c4ff792734e196c8ca82564c56b5e7c6288ca. The fix was incomplete as it only protected against requests waiting for themselves, but not against requests waiting for each other. We need a different solution. Signed-off-by: Kevin Wolf Message-Id:

[PULL 5/6] mirror: Wait only for in-flight operations

2020-03-27 Thread Kevin Wolf
mirror_wait_for_free_in_flight_slot() just picks a random operation to wait for. However, a MirrorOp is already in s->ops_in_flight when mirror_co_read() waits for free slots, so if not enough slots are immediately available, an operation can end up waiting for itself, or two or more operations

[PULL 6/6] qcow2: Remove unused fields from BDRVQcow2State

2020-03-27 Thread Kevin Wolf
These fields were already removed in commit c3c10f72, but then commit b58deb34 revived them probably due to bad merge conflict resolution. They are still unused, so remove them again. Fixes: b58deb344ddff3b9d8b265bf73a65274767ee5f4 Signed-off-by: Kevin Wolf Message-Id:

[PULL 0/6] Block layer patches

2020-03-27 Thread Kevin Wolf
The following changes since commit cfe68ae025f704f336d7dd3d1903ce37b445831d: Merge remote-tracking branch 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging (2020-03-26 20:55:54 +0000) are available in the Git repository at: git://repo.or.cz/qemu/kevin.git

[PULL 2/6] block: fix bdrv_root_attach_child forget to unref child_bs

2020-03-27 Thread Kevin Wolf
From: Vladimir Sementsov-Ogievskiy bdrv_root_attach_child promises to drop the child_bs reference on failure. It does so on the first handled failure path, but not on the second. Fix that. Signed-off-by: Vladimir Sementsov-Ogievskiy Message-Id: <20200324155921.23822-1-vsement...@virtuozzo.com>
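
The contract being fixed, sketched (the exact second failure path differs in the real function):

    /* bdrv_root_attach_child() promises to drop the child_bs
     * reference on failure; do so on every error return, not just
     * the first one. */
    if (ret < 0) {
        bdrv_unref(child_bs);
        return NULL;
    }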

[PULL 3/6] nvme: Print 'cqid' for nvme_del_cq

2020-03-27 Thread Kevin Wolf
From: Minwoo Im The given argument for this trace should be cqid, not sqid. Signed-off-by: Minwoo Im Message-Id: <20200324140646.8274-1-minwoo.im@gmail.com> Reviewed-by: Philippe Mathieu-Daudé Reviewed-by: Stefano Garzarella Signed-off-by: Kevin Wolf --- hw/block/trace-events | 2 +- 1

Re: [PATCH v2] qcow2: Forbid discard in qcow2 v2 images with backing files

2020-03-27 Thread Kevin Wolf
Am 24.03.2020 um 13:16 hat Alberto Garcia geschrieben: > A discard request deallocates the selected clusters so they read back > as zeroes. This is done by clearing the cluster offset field and > setting QCOW_OFLAG_ZERO in the L2 entry. > > This flag is however only supported when qcow_version >=

Re: [PATCH v9 3/4] qcow2: add zstd cluster compression

2020-03-27 Thread Vladimir Sementsov-Ogievskiy
27.03.2020 12:40, Denis Plotnikov wrote: On 27.03.2020 11:43, Vladimir Sementsov-Ogievskiy wrote: Should we note somehow in the qcow2 spec that we use the streamed version of zstd with a specific end byte? We didn't do it for zlib. zstd does it the same way as zlib: it saves the compression output to

Re: [PATCH v2 0/3] Fix some AIO context locking in jobs

2020-03-27 Thread Dietmar Maurer
> I *think* the second patch also fixes the hangs on backup abort that I and > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent > before too. After more tests, I am 100% sure the bug (or another one) is still there. Here is how to trigger: 1. use latest qemu sources from

[PATCH v3 1/2] virtio-blk: delete vqs on the error path in realize()

2020-03-27 Thread Pan Nengyuan
realize() forgot to free virtio_vqs on the error path. Fix that. The asan stack: Direct leak of 14336 byte(s) in 1 object(s) allocated from: #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) #1 0x7f58b858249d in g_malloc0 (/lib64/libglib-2.0.so.0+0x5249d) #2

Re: [PATCH v9 3/4] qcow2: add zstd cluster compression

2020-03-27 Thread Denis Plotnikov
On 27.03.2020 11:43, Vladimir Sementsov-Ogievskiy wrote: Should we note somehow in the qcow2 spec that we use the streamed version of zstd with a specific end byte? We didn't do it for zlib. zstd does it the same way as zlib: it saves the compression output to some buffer. 23.03.2020 17:25, Denis

Re: [PATCH v2 1/2] virtio-blk: delete vqs on the error path in realize()

2020-03-27 Thread Stefano Garzarella
On Fri, Mar 27, 2020 at 12:56:19PM +0800, Pan Nengyuan wrote: > realize() forgot to free virtio_vqs on the error path. Fix that. > > The asan stack: > Direct leak of 14336 byte(s) in 1 object(s) allocated from: > #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) >

Re: [PATCH 1/2] virtio-blk: delete vqs on the error path in realize()

2020-03-27 Thread Pan Nengyuan
On 3/27/2020 4:41 PM, Stefano Garzarella wrote: > On Fri, Mar 27, 2020 at 11:56:49AM +0800, Pan Nengyuan wrote: >> realize() forgot to free virtio_vqs on the error path. Fix that. >> >> The asan stack: >> Direct leak of 14336 byte(s) in 1 object(s) allocated from: >> #0 0x7f58b93fd970 in

Re: [PATCH v9 3/4] qcow2: add zstd cluster compression

2020-03-27 Thread Vladimir Sementsov-Ogievskiy
Should we note somehow in the qcow2 spec that we use the streamed version of zstd with a specific end byte? 23.03.2020 17:25, Denis Plotnikov wrote: zstd significantly reduces cluster compression time. It provides better compression performance maintaining the same level of the compression ratio in
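
For reference, a streamed zstd frame via the public libzstd API; ZSTD_e_end is what appends the frame epilogue the thread refers to as the end byte (a self-contained sketch, not the patch itself):

    #include <zstd.h>

    /* Compress src into a single streamed zstd frame. dst_cap should
     * be >= ZSTD_compressBound(src_len) so one ZSTD_e_end call is
     * guaranteed to finish. Returns the compressed size, 0 on error. */
    static size_t compress_one_frame(void *dst, size_t dst_cap,
                                     const void *src, size_t src_len)
    {
        ZSTD_CCtx *cctx = ZSTD_createCCtx();
        ZSTD_inBuffer in = { src, src_len, 0 };
        ZSTD_outBuffer out = { dst, dst_cap, 0 };
        size_t ret;

        if (!cctx) {
            return 0;
        }
        /* ZSTD_e_end flushes all input and writes the frame epilogue */
        ret = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_end);
        ZSTD_freeCCtx(cctx);
        return (ZSTD_isError(ret) || ret != 0) ? 0 : out.pos;
    }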

Re: [PATCH 1/2] virtio-blk: delete vqs on the error path in realize()

2020-03-27 Thread Stefano Garzarella
On Fri, Mar 27, 2020 at 11:56:49AM +0800, Pan Nengyuan wrote: > realize() forgot to free virtio_vqs on the error path. Fix that. > > The asan stack: > Direct leak of 14336 byte(s) in 1 object(s) allocated from: > #0 0x7f58b93fd970 in __interceptor_calloc (/lib64/libasan.so.5+0xef970) >

Re: [PATCH v2 0/3] Fix some AIO context locking in jobs

2020-03-27 Thread Dietmar Maurer
Wait - maybe this was a bug in my test setup - I am unable to reproduce now... @Stefan Reiter: Are you able to trigger this? > > I *think* the second patch also fixes the hangs on backup abort that I and > > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent > > before too.

Re: [PATCH v2 0/3] Fix some AIO context locking in jobs

2020-03-27 Thread Dietmar Maurer
But I need to mention that it takes some time to reproduce this. This time I ran/aborted about 500 backup jobs until it triggered. > > I *think* the second patch also fixes the hangs on backup abort that I and > > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent > > before

Re: [PATCH v2 0/3] Fix some AIO context locking in jobs

2020-03-27 Thread Dietmar Maurer
> I *think* the second patch also fixes the hangs on backup abort that I and > Dietmar noticed in v1, but I'm not sure, they were somewhat intermittent > before too. No, I still get this freeze: #0 0x7f0aa4866916 in __GI_ppoll (fds=0x7f0a12935c40, nfds=2, timeout=<optimized out>, timeout@entry=0x0,

Re: [PATCH v2 1/3] backup: don't acquire aio_context in backup_clean

2020-03-27 Thread Vladimir Sementsov-Ogievskiy
26.03.2020 18:56, Stefan Reiter wrote: All code-paths leading to backup_clean (via job_clean) have the job's context already acquired. The job's context is guaranteed to be the same as the one used by backup_top via backup_job_create. As we already discussed, this is not quite right. So, may