On 06.03.2018 at 17:35, Peter Lieven wrote:
> On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
>> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>>> * Peter Lieven (p...@kamp.de) wrote:
On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
> On Thu, Feb 22, 2018 at
I have applied this patch and when I run the following qmp commands I do
not see the crash anymore, but there is still something wrong because only
/root/a is opened from qemu. It looks like nbd-server-stop is also getting
rid of the nodes added with blockdev-snapshot-sync, therefore is then not
On 02/28/2018 12:04 PM, Kevin Wolf wrote:
> On 24.02.2018 at 00:51, John Snow wrote:
>> Some jobs upon finalization may need to perform some work that can
>> still fail. If these jobs are part of a transaction, it's important
>> that these callbacks fail the entire transaction.
>>
>> We
On Tue, Feb 06, 2018 at 10:52:04AM +, Daniel P. Berrangé wrote:
> Consider passing a JSON based block driver to "qemu-img commit"
>
> $ qemu-img commit 'json:{"driver":"qcow2","file":{"driver":"gluster",\
> "volume":"gv0","path":"sn1.qcow2",
>
On 02/05/2018 03:22 PM, Max Reitz wrote:
> This series implements .bdrv_refresh_filename() for the ssh block
> driver, along with an appropriate .bdrv_dirname() so we don't chop off
> query strings for backing files with relative filenames.
>
> This series depends on my “block: Fix some
Commit 2019ba0a0197 ("block: Add AioContextNotifier functions to BB")
added blk_add/remove_aio_context_notifier() and implemented them by
passing through the bdrv_*() equivalent.
This doesn't work across bdrv_append(), which detaches child->bs and
re-attaches it to a new BlockDriverState. When
This test case adds an NBD server export and then invokes
blockdev-snapshot-sync, which changes the BlockDriverState node that the
NBD server's BlockBackend points to. This is an interesting scenario to
test and exercises the code path fixed by the previous commit.
Signed-off-by: Stefan Hajnoczi
The blockdev-snapshot-sync command uses bdrv_append() to update all parents to
point at the external snapshot node. This breaks BlockBackend's
blk_add/remove_aio_context_notifier(), which doesn't expect a BDS change.
Patch 1 fixes this by tracking AioContext notifiers in BlockBackend.
See the
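The idea in patch 1, tracking the notifiers in the BlockBackend so they survive a node swap, can be modeled roughly as follows. This is a toy sketch with invented names and fixed-size arrays, not QEMU's actual BlockBackend/BDS API:

```c
#include <stddef.h>

#define MAX_NOTIFIERS 4

typedef void (*AioNotifyFn)(void *opaque);

/* Stand-in for a BlockDriverState that notifiers get attached to. */
typedef struct BDS {
    AioNotifyFn attached[MAX_NOTIFIERS];
    void *opaque[MAX_NOTIFIERS];
    size_t n;
} BDS;

/* Stand-in for a BlockBackend: it remembers its own notifiers. */
typedef struct BlockBackend {
    BDS *bs;
    AioNotifyFn notifiers[MAX_NOTIFIERS];
    void *opaques[MAX_NOTIFIERS];
    size_t n;
} BlockBackend;

static void bds_attach(BDS *bs, AioNotifyFn fn, void *opaque)
{
    /* No bounds checking in this toy model. */
    bs->attached[bs->n] = fn;
    bs->opaque[bs->n] = opaque;
    bs->n++;
}

/* Record the notifier in the BlockBackend, then mirror it on the BDS. */
static void blk_add_notifier(BlockBackend *blk, AioNotifyFn fn, void *opaque)
{
    blk->notifiers[blk->n] = fn;
    blk->opaques[blk->n] = opaque;
    blk->n++;
    bds_attach(blk->bs, fn, opaque);
}

/* On a node swap (what bdrv_append() does), re-register everything
 * on the new node instead of losing the registrations. */
static void blk_set_bs(BlockBackend *blk, BDS *new_bs)
{
    size_t i;
    blk->bs = new_bs;
    for (i = 0; i < blk->n; i++) {
        bds_attach(new_bs, blk->notifiers[i], blk->opaques[i]);
    }
}
```

The point is only the ownership change: because the BlockBackend, not the node, is the source of truth, detaching and re-attaching child->bs cannot drop the notifiers.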
On Tue, Mar 06, 2018 at 16:00:17 +, Stefan Hajnoczi wrote:
> On Wed, Feb 07, 2018 at 03:50:35PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > Vladimir Sementsov-Ogievskiy (2):
> > block/accounting: introduce latency histogram
> > qapi: add block latency histogram interface
(snip)
> > 5
On 06.03.2018 at 14:47, Stefan Hajnoczi wrote:
> On Wed, Feb 28, 2018 at 09:11:32PM +0100, Max Reitz wrote:
> > On 2018-02-28 19:08, Max Reitz wrote:
> > > On 2018-02-27 17:17, Stefan Hajnoczi wrote:
> > >> On Mon, Feb 26, 2018 at 06:03:13PM +0100, Max Reitz wrote:
> > >>> There are
On 06.03.2018 at 17:14, Alberto Garcia wrote:
> Hey,
>
> here's the new version of this series. It fixes a leak reported by
> Kevin and adds a couple of error_report_err() to make use of the
> message returned by qcow2_validate_table().
Thanks, applied to the block branch.
Kevin
On 06.03.2018 17:45, Alberto Garcia wrote:
> Signed-off-by: Alberto Garcia
> ---
> tests/qemu-iotests/051.pc.out | 20
> tests/qemu-iotests/186.out | 22 +++---
> 2 files changed, 3 insertions(+), 39 deletions(-)
>
> diff --git
On 01.03.2018 at 17:36, Paolo Bonzini wrote:
> Check and invalidate_cache share some parts of the implementation
> with the regular I/O path. This is sometimes complicated because the
> I/O path wants to use a CoMutex but that is not possible outside coroutine
> context. By moving
On 06.03.2018 at 17:07, Stefan Hajnoczi wrote:
> On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
>> * Peter Lieven (p...@kamp.de) wrote:
>>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
> I stumbled
ping
On Mon 26 Feb 2018 03:36:23 PM CET, Alberto Garcia wrote:
> L2 entries for compressed clusters have a field that indicates the
> number of sectors used to store the data in the image.
>
> That's however not the size of the compressed data itself, just the
> number of sectors where that data
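The field in question can be decoded roughly as follows, following the qcow2 spec's L2 entry layout for compressed clusters with 64 KB clusters (cluster_bits = 16, so the split point is x = 62 - (cluster_bits - 8) = 54). Function names here are invented for illustration:

```c
#include <stdint.h>

/* For a compressed cluster L2 entry with 64 KB clusters:
 * bits 0..53 hold the host offset of the compressed data and
 * bits 54..61 hold the sector-count field Berto is referring to
 * (per the spec, the number of additional 512-byte sectors). */
#define COMPRESSED_SPLIT_BIT 54

static uint64_t compressed_host_offset(uint64_t l2_entry)
{
    return l2_entry & ((1ULL << COMPRESSED_SPLIT_BIT) - 1);
}

static unsigned compressed_sector_field(uint64_t l2_entry)
{
    return (l2_entry >> COMPRESSED_SPLIT_BIT) &
           ((1u << (62 - COMPRESSED_SPLIT_BIT)) - 1);
}
```

As the snippet notes, this field bounds the on-disk footprint in sectors; it is not the byte size of the compressed data itself.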
On Mon, Mar 05, 2018 at 05:04:52PM +0100, Max Reitz wrote:
> On 2018-03-05 16:59, Stefan Hajnoczi wrote:
> > There is a race between the test's 'query-migrate' QMP command after the
> > QMP 'STOP' event and completing the migration:
> >
> > The test case invokes 'query-migrate' upon receiving
'qemu-img check' cannot detect if a snapshot's L1 table is corrupted.
This patch checks the table's offset and size and reports corruption
if the values are not valid.
This patch doesn't add code to fix that corruption yet, only to detect
and report it.
Signed-off-by: Alberto Garcia
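The kind of offset/size check this patch adds can be sketched as below. This is a hedged toy model, not QEMU's qcow2_validate_table() (which takes more context such as the BlockDriverState, entry size, table name, and an Error **); the names and the 64 KB cluster size are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define CLUSTER_SIZE   65536ULL
#define L1_ENTRY_SIZE  8ULL
/* Modeled on QCOW_MAX_L1_SIZE (32 MB of table) divided by entry size. */
#define MAX_L1_ENTRIES (32ULL * 1024 * 1024 / L1_ENTRY_SIZE)

static bool l1_table_is_valid(uint64_t offset, uint64_t entries,
                              uint64_t image_size)
{
    uint64_t bytes = entries * L1_ENTRY_SIZE;

    if (entries > MAX_L1_ENTRIES) {
        return false;                /* table implausibly large */
    }
    if (offset % CLUSTER_SIZE) {
        return false;                /* table must be cluster-aligned */
    }
    if (bytes > image_size || offset > image_size - bytes) {
        return false;                /* table extends past end of file */
    }
    return true;
}
```

Writing the range check as `offset > image_size - bytes` (after ruling out `bytes > image_size`) avoids unsigned overflow in `offset + bytes`.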
The inactive-l2 overlap check iterates over the L1 tables from all
snapshots, but it does not validate them first.
We now have a function to take care of this, so let's use it.
Signed-off-by: Alberto Garcia
Reviewed-by: Eric Blake
---
This function deletes a snapshot from disk, removing its entry from
the snapshot table, freeing its L1 table and decreasing the refcounts
of all clusters.
The L1 table offset and size are however not validated. If we use
invalid values in this function we'll probably corrupt the image even
more,
This function checks that the offset and size of a table are valid.
While the offset checks are fine, the size check is too generic, since
it only verifies that the total size in bytes fits in a 64-bit
integer. In practice all tables used in qcow2 have much smaller size
limits, so the size needs
Hey,
here's the new version of this series. It fixes a leak reported by
Kevin and adds a couple of error_report_err() to make use of the
message returned by qcow2_validate_table().
Regards,
Berto
Changes:
v2:
- Patch 3: Don't leak l1_table and report the error returned by
This function copies a snapshot's L1 table into the active one without
validating it first.
We now have a function to take care of this, so let's use it.
Signed-off-by: Alberto Garcia
Cc: Eric Blake
---
block/qcow2-snapshot.c | 9 +
This function iterates over all snapshots of a qcow2 file in order to
expand all zero clusters, but it does not validate the snapshots' L1
tables first.
We now have a function to take care of this, so let's use it.
We can also take the opportunity to replace the sector-based
bdrv_read() with
This function checks that the size of a snapshot's L1 table is not too
large, but it doesn't validate the offset.
We now have a function to take care of this, so let's use it.
Signed-off-by: Alberto Garcia
Reviewed-by: Eric Blake
---
block/qcow2-snapshot.c
On 05.03.2018 at 15:52, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
>> On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
>>> On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015
and
On Mon, Mar 05, 2018 at 02:52:16PM +, Dr. David Alan Gilbert wrote:
> * Peter Lieven (p...@kamp.de) wrote:
> > On 05.03.2018 at 12:45, Stefan Hajnoczi wrote:
> > > On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
> > >> I stumbled across the MAX_INFLIGHT_IO field that was
On Wed, Feb 07, 2018 at 03:50:35PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> v2:
>
> 01: add block_latency_histogram_clear()
> 02: fix spelling (sorry =()
> some rewordings
> remove histogram if latency parameter unspecified
>
> Vladimir Sementsov-Ogievskiy (2):
> block/accounting:
On Wed, Feb 07, 2018 at 03:50:37PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 8225308904..74fe3fe9c4 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -451,6 +451,74 @@
> 'status':
On Wed, Feb 07, 2018 at 03:50:36PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Introduce latency histogram statistics for block devices.
> For each accounted operation type latency region [0, +inf) is
> divided into subregions by several points. Then, calculate
> hits for each subregion.
>
>
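The splitting scheme described above can be sketched as follows. This is a toy model; the struct and function names are invented, not the block/accounting API the series adds. N boundary points split [0, +inf) into N+1 intervals, and each sample increments the counter of the interval it lands in:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const uint64_t *boundaries; /* sorted split points, e.g. in ns */
    uint64_t *bins;             /* nboundaries + 1 counters */
    size_t nboundaries;
} LatencyHistogram;

/* Find the interval a latency falls into: bin i covers
 * [boundaries[i-1], boundaries[i]), with open-ended first and last bins. */
static size_t histogram_bin(const LatencyHistogram *h, uint64_t latency)
{
    size_t i = 0;
    while (i < h->nboundaries && latency >= h->boundaries[i]) {
        i++;
    }
    return i;
}

static void histogram_account(LatencyHistogram *h, uint64_t latency)
{
    h->bins[histogram_bin(h, latency)]++;
}
```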
On Tue 06 Mar 2018 04:11:17 PM CET, Kevin Wolf wrote:
> I've finished the review now, the rest looks correct.
>
> The only other thing I wondered is about the cases where you pass a
> NULL errp because the callers don't get an Error parameter, so they
> can't pass it on. Some of these callers
On 06.03.2018 at 16:01, Alberto Garcia wrote:
> On Tue 06 Mar 2018 03:54:26 PM CET, Kevin Wolf wrote:
> >> @@ -2092,11 +2092,18 @@ int qcow2_expand_zero_clusters(BlockDriverState *bs,
> >> }
> >>
> >> for (i = 0; i < s->nb_snapshots; i++) {
> >> -int l1_sectors =
On Tue 06 Mar 2018 03:54:26 PM CET, Kevin Wolf wrote:
>> @@ -2092,11 +2092,18 @@ int qcow2_expand_zero_clusters(BlockDriverState *bs,
>> }
>>
>> for (i = 0; i < s->nb_snapshots; i++) {
>> -int l1_sectors = DIV_ROUND_UP(s->snapshots[i].l1_size *
>> -
On 01.03.2018 at 17:27, Alberto Garcia wrote:
> This function iterates over all snapshots of a qcow2 file in order to
> expand all zero clusters, but it does not validate the snapshots' L1
> tables first.
>
> We now have a function to take care of this, so let's use it.
>
> We can also
On 06/03/2018 14:59, Kevin Wolf wrote:
> On 01.03.2018 at 17:36, Paolo Bonzini wrote:
>> Check and invalidate_cache share some parts of the implementation
>> with the regular I/O path. This is sometimes complicated because the
>> I/O path wants to use a CoMutex but that is not possible
On Tue 06 Mar 2018 03:06:46 PM CET, Kevin Wolf wrote:
>> So although having a way to repair this kind of corruption would be
>> nice, we should still allow the user to try to open the image and
>> read data from it without requiring a potentially destructive
>> operation first.
>
> Generally
On 01.03.2018 at 17:27, Alberto Garcia wrote:
> Hey ho!
>
> As we have already discussed in the mailing list, the offset and size
> values of snapshots' L1 tables are almost never validated in the QEMU
> code.
>
> One way to deal with this is to validate them when the image is being
>
On 01.03.2018 at 17:36, Paolo Bonzini wrote:
> Check and invalidate_cache share some parts of the implementation
> with the regular I/O path. This is sometimes complicated because the
> I/O path wants to use a CoMutex but that is not possible outside coroutine
> context. By moving
On Wed, Feb 28, 2018 at 09:11:32PM +0100, Max Reitz wrote:
> On 2018-02-28 19:08, Max Reitz wrote:
> > On 2018-02-27 17:17, Stefan Hajnoczi wrote:
> >> On Mon, Feb 26, 2018 at 06:03:13PM +0100, Max Reitz wrote:
> >>> There are filesystems (among which is tmpfs) that have a hard time
> >>>
This patch tweaks TestParallelOps in iotest 030 so it allocates data
in smaller regions (256KB/512KB instead of 512KB/1MB) and the
block-stream job in test_stream_commit() only needs to copy data that
is at the very end of the image.
This way when the block-stream job is awakened it will finish
On Tue, Feb 20, 2018 at 06:04:02PM +0100, Peter Lieven wrote:
> I remember we discussed a long time ago to limit the stack usage of all
> functions that are executed in a coroutine
> context to a very low value to be able to safely limit the coroutine stack
> size as well.
>
> I checked through
On 18.01.2018 at 11:31, Daniel P. Berrange wrote:
> If the bdrv_reopen_prepare helper isn't provided, the qemu-img commit
> command fails to re-open the base layer after committing changes into
> it. Provide a no-op implementation for the LUKS driver, since there
> is not any custom work
On 06.03.2018 at 11:53, Stefan Hajnoczi wrote:
> Nested BDRV_POLL_WHILE() calls can occur. Currently
> assert(!bs_->wakeup) will fail when this happens.
>
> This patch converts bs->wakeup from bool to a counter.
>
> Nesting works correctly because outer BDRV_POLL_WHILE() callers
On 06/03/2018 11:53, Stefan Hajnoczi wrote:
> Nested BDRV_POLL_WHILE() calls can occur. Currently
> assert(!bs_->wakeup) will fail when this happens.
>
> This patch converts bs->wakeup from bool to a counter.
>
> Nesting works correctly because outer BDRV_POLL_WHILE() callers evaluate
> the
Nested BDRV_POLL_WHILE() calls can occur. Currently
assert(!bs_->wakeup) will fail when this happens.
This patch converts bs->wakeup from bool to a counter.
Nesting works correctly because outer BDRV_POLL_WHILE() callers evaluate
the condition again after the inner caller completes (invoking
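The bool-to-counter change can be modeled like this. It is an assumed shape for illustration; the real BDRV_POLL_WHILE() is a macro over AioContext polling, not these helper functions:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* With a bool, a nested poll would trip assert(!wakeup). With a
 * counter, each nesting level takes and releases its own reference,
 * and the outer level simply re-evaluates its condition afterwards. */
static atomic_int wakeup_count;

static void poll_enter(void)
{
    /* was: assert(!wakeup); wakeup = true; */
    atomic_fetch_add(&wakeup_count, 1);
}

static void poll_exit(void)
{
    atomic_fetch_sub(&wakeup_count, 1);
}

static bool poll_pending(void)
{
    return atomic_load(&wakeup_count) > 0;
}
```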
Ping
On Fri, Feb 16, 2018 at 01:53:12PM +, Daniel P. Berrangé wrote:
> Ping, can this be queued in the block tree, since it appears the no-op impl
> is ok ?
>
> On Thu, Jan 18, 2018 at 10:31:43AM +, Daniel P. Berrange wrote:
> > If the bdrv_reopen_prepare helper isn't provided, the
On 6 March 2018 at 10:24, Kevin Wolf wrote:
> On 06.03.2018 at 11:09, Peter Maydell wrote:
>> On 5 March 2018 at 17:51, Kevin Wolf wrote:
>> > The following changes since commit
>> > 86f4c7e05b1c44dbe1b329a51f311f10aef6ff34:
>> >
>> > Merge
On Mon 05 Mar 2018 05:59:47 PM CET, Max Reitz wrote:
> Secondly, I've reverted 3d5d319e1221082 to test this, and I could
> reproduce failure exactly once. Since then, no luck (in like 20
> attempts, I think)...
Oh, I see. I was bisecting this and it seems that 1a63a907507fbbcf made
this problem
On 06.03.2018 at 11:09, Peter Maydell wrote:
> On 5 March 2018 at 17:51, Kevin Wolf wrote:
> > The following changes since commit 86f4c7e05b1c44dbe1b329a51f311f10aef6ff34:
> >
> > Merge remote-tracking branch
> > 'remotes/pmaydell/tags/pull-target-arm-20180302' into
On 5 March 2018 at 17:51, Kevin Wolf wrote:
> The following changes since commit 86f4c7e05b1c44dbe1b329a51f311f10aef6ff34:
>
> Merge remote-tracking branch
> 'remotes/pmaydell/tags/pull-target-arm-20180302' into staging (2018-03-02
> 14:37:10 +)
>
> are available in the
On Tue, Feb 06, 2018 at 10:52:04AM +, Daniel P. Berrangé wrote:
> Consider passing a JSON based block driver to "qemu-img commit"
>
> $ qemu-img commit 'json:{"driver":"qcow2","file":{"driver":"gluster",\
> "volume":"gv0","path":"sn1.qcow2",
>