On 04/11/2017 06:29 PM, Eric Blake wrote:
> We are gradually moving away from sector-based interfaces, towards
> byte-based. In the common case, allocation is unlikely to ever use
> values that are not naturally sector-aligned, but it is possible
> that byte-based values will let us be more
[adjust a paragraph of the original commit message]
...
Note that we have an inherent limitation in the BDRV_BLOCK_* return
values: BDRV_BLOCK_OFFSET_VALID can only return the start of a
sector, even if we later relax the interface to query for the status
starting at an intermediate byte; document
On 04/18/2017 02:22 PM, Kevin Wolf wrote:
> Am 14.04.2017 um 06:17 hat Denis V. Lunev geschrieben:
>> [skipped...]
>>
>>> Hi Denis,
>>>
>>> I've read this entire thread now and I really like Berto's summary which
>>> I think is one of the best recaps of existing qcow2 problems and this
>>>
On 04/17/2017 08:33 PM, Eric Blake wrote:
> We are gradually moving away from sector-based interfaces, towards
> byte-based. Update the file protocol driver accordingly.
>
> Signed-off-by: Eric Blake
> ---
> block/file-posix.c | 47
On 04/17/2017 08:33 PM, Eric Blake wrote:
> There are patches floating around to add NBD_CMD_BLOCK_STATUS,
> but NBD wants to report status on byte granularity (even if the
> reporting will probably be naturally aligned to sectors or even
> much higher levels). I've therefore started the task of
For the tests that use the common.qemu functions for running a QEMU
process, _cleanup_qemu must be called in the exit function.
If it is not, and the qemu process aborts, then not all of the droppings
are cleaned up (e.g. pidfile, fifos).
This updates those tests that did not have a cleanup in
On 04/18/2017 02:31 PM, Jeff Cody wrote:
> On Tue, Apr 18, 2017 at 01:44:43PM -0500, Eric Blake wrote:
>> On 04/18/2017 12:45 PM, Jeff Cody wrote:
>>> For the tests that use the common.qemu functions for running a QEMU
>>> process, _cleanup_qemu must be called in the exit function.
>>>
>>> If it
On Tue, Apr 18, 2017 at 01:44:43PM -0500, Eric Blake wrote:
> On 04/18/2017 12:45 PM, Jeff Cody wrote:
> > For the tests that use the common.qemu functions for running a QEMU
> > process, _cleanup_qemu must be called in the exit function.
> >
> > If it is not, and the qemu process aborts, then not
On 04/18/2017 12:45 PM, Jeff Cody wrote:
> For the tests that use the common.qemu functions for running a QEMU
> process, _cleanup_qemu must be called in the exit function.
>
> If it is not, and the qemu process aborts, then not all of the droppings
> are cleaned up (e.g. pidfile, fifos).
>
>
On 04/18/2017 02:52 PM, Alberto Garcia wrote:
> On Thu 13 Apr 2017 05:17:21 PM CEST, Denis V. Lunev wrote:
>> On 04/13/2017 06:04 PM, Alberto Garcia wrote:
>>> On Thu 13 Apr 2017 03:30:43 PM CEST, Denis V. Lunev wrote:
Yes, block size should be increased. I am perfectly in agreement with
On 04/18/2017 12:31 PM, Jeff Cody wrote:
> For the tests that use the common.qemu functions for running a QEMU
> process, _cleanup_qemu must be called in the exit function.
>
> If it is not, and the qemu process aborts, then not all of the droppings
> are cleaned up (e.g. pidfile, fifos).
>
>
On Thu, Apr 13, 2017 at 05:43:34PM +0200, Max Reitz wrote:
> The block layer takes care of removing the bs->file child if the block
> driver's bdrv_open()/bdrv_file_open() implementation fails. The block
> driver therefore does not need to do so, and indeed should not unless it
> sets bs->file to
Hi,
This series failed automatic build test. Please find the testing commands and
their output below. If you have docker installed, you can probably reproduce it
locally.
Type: series
Message-id: 20170401155751.14322-1-mre...@redhat.com
Subject: [Qemu-devel] [RFC for-3.0 0/4] block: Add
Hi,
This series failed build test on s390x host. Please find the details below.
Type: series
Message-id: 20170401155751.14322-1-mre...@redhat.com
Subject: [Qemu-devel] [RFC for-3.0 0/4] block: Add qcow2-rust block driver
=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked
woot! Happy birthday!
On Tue, Apr 18, 2017 at 7:58 PM Max Reitz wrote:
> The issues of using C are well understood and nobody likes it. Let's use
> a better language. C++ is not a better language, Rust is. Everybody
> loves Rust. Rust is good. Rust is hip. It will attract
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20170401155751.14322-1-mre...@redhat.com
Subject: [Qemu-devel] [RFC for-3.0 0/4] block: Add qcow2-rust block driver
=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
This patch adds an FFI interface for block drivers written in Rust, the
language of the future. It's a very good interface, in fact, it's the
best interface there is. We are truly making QEMU great again!
Signed-off-by: Max Reitz
---
configure |
The rust qcow2 driver is now actually MUCH BETTER than the LEGACY
CROOKED qcow2 driver, so let's not beat around the bush and just
register it as a block driver. Has always been my opinion, never said
anything different. The QEMU project will deal with the C issue in a
decisive way. Q.M.U.
The issues of using C are well understood and nobody likes it. Let's use
a better language. C++ is not a better language, Rust is. Everybody
loves Rust. Rust is good. Rust is hip. It will attract developers, it
will improve code quality, it will improve performance, it will even
improve your
Currently, this MUCH MORE SECURE block driver than the LEGACY C qcow2
driver (SAD!) only has read support. But this makes it actually much
less likely to destroy your data, so this is a GOOD thing.
Signed-off-by: Max Reitz
---
block/rust/src/lib.rs | 3
Some people may want to lead you to believe write support may destroy
your data. These entirely BASELESS and FALSE accusations are COMPLETE
and UTTER LIES because you actually cannot use this driver yet at all,
as it does not register itself as a qemu block driver.
This is a very modest approach,
On 04/18/2017 02:59 AM, Fam Zheng wrote:
> Mirror calculates job len from current I/O progress:
>
>     s->common.len = s->common.offset +
>                     (cnt + s->sectors_in_flight) * BDRV_SECTOR_SIZE;
>
> The final "len" of a failed mirror job in iotests 109 depends on the
> subtle
Am 18.04.2017 um 16:47 hat Stefan Hajnoczi geschrieben:
> On Wed, Apr 12, 2017 at 11:18:19AM +0200, Kevin Wolf wrote:
> > after getting assertion failure reports for block migration in the last
> > minute, we just hacked around it by commenting out op blocker assertions
> > for the 2.9 release,
On Tue, 04/18 16:46, Kevin Wolf wrote:
> Am 18.04.2017 um 16:30 hat Fam Zheng geschrieben:
> > The recursive bdrv_drain_recurse may run a block job completion BH that
> > drops nodes. The coming changes will make that more likely and
> > use-after-free
> > would happen without this patch.
> >
> >
On Tue, Apr 18, 2017 at 10:30:42PM +0800, Fam Zheng wrote:
> v4: Split patch, and fix the unsafe bdrv_unref. [Paolo]
>
> Fam Zheng (2):
> block: Walk bs->children carefully in bdrv_drain_recurse
> block: Drain BH in bdrv_drained_begin
>
> block/io.c| 23 ---
>
On Wed, Apr 12, 2017 at 11:18:19AM +0200, Kevin Wolf wrote:
> after getting assertion failure reports for block migration in the last
> minute, we just hacked around it by commenting out op blocker assertions
> for the 2.9 release, but now we need to see how to fix things properly.
> Luckily,
Am 18.04.2017 um 16:30 hat Fam Zheng geschrieben:
> The recursive bdrv_drain_recurse may run a block job completion BH that
> drops nodes. The coming changes will make that more likely and use-after-free
> would happen without this patch.
>
> Stash the bs pointer and use bdrv_ref/bdrv_unref in
On 18/04/2017 16:30, Fam Zheng wrote:
> v4: Split patch, and fix the unsafe bdrv_unref. [Paolo]
>
> Fam Zheng (2):
> block: Walk bs->children carefully in bdrv_drain_recurse
> block: Drain BH in bdrv_drained_begin
>
> block/io.c| 23 ---
>
On Wed, Apr 12, 2017 at 05:51:23PM +0800, 858585 jemmy wrote:
> Is it this bug?
> https://bugs.launchpad.net/qemu/+bug/1681688
Yes. This discussion is about the long-term fix instead of a short-term
hack for QEMU 2.9.
Stefan
During block job completion, nothing is preventing
block_job_defer_to_main_loop_bh from being called in a nested
aio_poll(), which is troublesome, such as in this code path:
qmp_block_commit
commit_active_start
bdrv_reopen
bdrv_reopen_multiple
The recursive bdrv_drain_recurse may run a block job completion BH that
drops nodes. The coming changes will make that more likely and use-after-free
would happen without this patch.
Stash the bs pointer and use bdrv_ref/bdrv_unref in addition to
QLIST_FOREACH_SAFE to prevent such a case from
v4: Split patch, and fix the unsafe bdrv_unref. [Paolo]
Fam Zheng (2):
block: Walk bs->children carefully in bdrv_drain_recurse
block: Drain BH in bdrv_drained_begin
block/io.c| 23 ---
include/block/block.h | 22 ++
2 files changed, 34
On Tue, 04/18 14:36, Paolo Bonzini wrote:
>
>
> On 18/04/2017 12:39, Fam Zheng wrote:
> > +    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > +        BlockDriverState *bs = child->bs;
> > +        assert(bs->refcnt > 0);
> > +        bdrv_ref(bs);
> > +        waited |=
On 03/13/2017 09:39 PM, Fam Zheng wrote:
> Signed-off-by: Fam Zheng
> ---
> qemu-io.c | 28 +---
> 1 file changed, 21 insertions(+), 7 deletions(-)
>
> @@ -108,6 +112,7 @@ static void open_help(void)
> " -r, -- open file read-only\n"
> " -s, -- use
On 03/13/2017 09:39 PM, Fam Zheng wrote:
> Signed-off-by: Fam Zheng
> ---
> qemu-img.c | 148
> +++--
> 1 file changed, 114 insertions(+), 34 deletions(-)
>
> @@ -2711,9 +2751,10 @@ static int img_map(int argc, char
On 03/13/2017 09:39 PM, Fam Zheng wrote:
> This flag clears out the "consistent read" permission that blk_new_open
> requests.
>
> Signed-off-by: Fam Zheng
> ---
> block/block-backend.c | 2 +-
> include/block/block.h | 1 +
> 2 files changed, 2 insertions(+), 1 deletion(-)
>
On 18/04/2017 12:39, Fam Zheng wrote:
> +    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> +        BlockDriverState *bs = child->bs;
> +        assert(bs->refcnt > 0);
> +        bdrv_ref(bs);
> +        waited |= bdrv_drain_recurse(bs);
> +        bdrv_unref(bs);
> }
I think this
Kevin Wolf wrote:
> Hi all,
Hi
> after getting assertion failure reports for block migration in the last
> minute, we just hacked around it by commenting out op blocker assertions
> for the 2.9 release, but now we need to see how to fix things properly.
> Luckily,
* Kevin Wolf (kw...@redhat.com) wrote:
> Signed-off-by: Kevin Wolf
> ---
> tests/qemu-iotests/181 | 117
> +
> tests/qemu-iotests/181.out | 38 +++
> tests/qemu-iotests/group | 1 +
> 3 files changed, 156
On 04/18/2017 04:33 AM, Eric Blake wrote:
> We are gradually moving away from sector-based interfaces, towards
> byte-based. Update the parallels driver accordingly. Note that
> the internal function block_status() is still sector-based, because
> it is still in use by other sector-based
On Thu 13 Apr 2017 05:17:21 PM CEST, Denis V. Lunev wrote:
> On 04/13/2017 06:04 PM, Alberto Garcia wrote:
>> On Thu 13 Apr 2017 03:30:43 PM CEST, Denis V. Lunev wrote:
>>> Yes, block size should be increased. I am perfectly in agreement with
>>> you. But I think that we could do that by plain
Am 18.04.2017 um 12:27 schrieb Denis V. Lunev:
From: Anton Nefedov
We should wait for the other coroutines on the error path, i.e. when one of
the coroutines terminates with an I/O error, before cleaning up the common
structures. Otherwise we would crash in a lot of different
Am 14.04.2017 um 06:17 hat Denis V. Lunev geschrieben:
> [skipped...]
>
> > Hi Denis,
> >
> > I've read this entire thread now and I really like Berto's summary which
> > I think is one of the best recaps of existing qcow2 problems and this
> > discussion so far.
> >
> > I understand your opinion
On Tue, Apr 18, 2017 at 06:39:48PM +0800, Fam Zheng wrote:
> During block job completion, nothing is preventing
> block_job_defer_to_main_loop_bh from being called in a nested
> aio_poll(), which is troublesome, such as in this code path:
>
> qmp_block_commit
> commit_active_start
>
Am 18.04.2017 um 12:39 hat Fam Zheng geschrieben:
> During block job completion, nothing is preventing
> block_job_defer_to_main_loop_bh from being called in a nested
> aio_poll(), which is troublesome, such as in this code path:
>
> qmp_block_commit
> commit_active_start
>
From: Anton Nefedov
We should wait for the other coroutines on the error path, i.e. when one of
the coroutines terminates with an I/O error, before cleaning up the common
structures. Otherwise we would crash in a lot of different places. This behaviour
was introduced by commit
On Mon, 04/10 09:57, Stefan Hajnoczi wrote:
> On Tue, Mar 21, 2017 at 11:16:23AM +0800, Fam Zheng wrote:
> > @@ -1713,21 +1714,22 @@ void bdrv_format_default_perms(BlockDriverState
> > *bs, BdrvChild *c,
> > perm |= BLK_PERM_CONSISTENT_READ;
> > shared &= ~(BLK_PERM_WRITE |
On Tue, 04/18 10:18, Paolo Bonzini wrote:
>
>
> On 17/04/2017 10:27, Fam Zheng wrote:
> > At this point it's even unclear to me what should be the plan for 2.9. v1
> > IMO
> > was the least intrusive, but didn't cover bdrv_drain_all_begin. v2 has this
> > controversial "aio_poll(ctx_, false)",
On 17/04/2017 10:27, Fam Zheng wrote:
> At this point it's even unclear to me what should be the plan for 2.9. v1 IMO
> was the least intrusive, but didn't cover bdrv_drain_all_begin. v2 has this
> controversial "aio_poll(ctx_, false)",
v1 has it too:
-bdrv_drain_recurse(bs);
+while
On 17/04/2017 05:33, Fam Zheng wrote:
> BDRV_POLL_WHILE in both IOThread and main loop has aio_context_acquire(ctx)
> around it; in the branch where main loop calls aio_poll(ctx, false),
there is also no aio_context_release(ctx). So I think it is protected by the
> AioContext lock, and is
Mirror calculates job len from current I/O progress:
    s->common.len = s->common.offset +
                    (cnt + s->sectors_in_flight) * BDRV_SECTOR_SIZE;
The final "len" of a failed mirror job in iotests 109 depends on the
subtle timing of the completion of read and write issued in the