an I/O error in one of the children. This is serious enough that I'll
send the patch to fix this crash to qemu-stable as well.
Regards,
Berto
v2:
- Patch 3: Rename clock_type to event_clock_type [Max]
v1: https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg02161.html
- Original version
Alberto Garcia
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
---
tests/qemu-iotests/146 | 129 +
tests/qemu-iotests/146.out | 5 ++
tests/qemu-iotests/group | 1 +
3 files changed, 135 insertions(+)
create mode 100644 tests/qemu-iotests/146
create mode 100644 tests/qemu-iotests/146.out
quorum_aio_cb() emits the QUORUM_REPORT_BAD event if there's
an I/O error in a Quorum child. However sacb->aiocb must be
correctly initialized for this to happen. read_quorum_children() and
read_fifo_child() are not doing this, which results in a QEMU crash.
Signed-off-by: Alberto Garcia
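The missing initialization can be modeled with a self-contained sketch. All names below are illustrative stand-ins, not QEMU's real types or code: the point is only that the completion callback dereferences state which must be set before the request can complete.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not QEMU's real structures. */
typedef struct QuorumChildRequest {
    int child_index;                    /* which quorum child */
    struct QuorumChildRequest *aiocb;   /* must be set before completion */
} QuorumChildRequest;

static int last_bad_child = -1;

/* Models quorum_aio_cb(): reporting the bad child on error requires
 * sacb->aiocb to have been initialized first. */
static void quorum_aio_cb(QuorumChildRequest *sacb, int error)
{
    if (error) {
        assert(sacb->aiocb != NULL);    /* NULL here is the reported crash */
        last_bad_child = sacb->aiocb->child_index;
    }
}

/* The fix: initialize sacb->aiocb before the request can complete. */
static void read_child(QuorumChildRequest *sacb, int index, int simulated_error)
{
    sacb->child_index = index;
    sacb->aiocb = sacb;                 /* the initialization the bug omitted */
    quorum_aio_cb(sacb, simulated_error);
}
```

With the assignment in place the error path has valid state to report instead of dereferencing an uninitialized pointer.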
> +    return;
I think it should be "does not have a child" or "does not have any
child".
With that corrected,
Reviewed-by: Alberto Garcia
Berto
On Fri 11 Mar 2016 02:31:31 AM CET, Wen Congyang wrote:
> On 03/10/2016 08:13 PM, Alberto Garcia wrote:
>> quorum_aio_cb() emits the QUORUM_REPORT_BAD event if there's
>> an I/O error in a Quorum child. However sacb->aiocb must be
>> correctly initialized for this to happen.
On Thu 10 Mar 2016 09:35:32 PM CET, Eric Blake wrote:
>> @@ -572,6 +572,10 @@ static unsigned int qapi_event_throttle_hash(const void
>> *key)
>> hash += g_str_hash(qdict_get_str(evstate->data, "id"));
>> }
>>
>> +if (evstate->event == QAPI_EVENT_QUORUM_REPORT_BAD) {
>> +
On Thu 10 Mar 2016 03:49:40 AM CET, Changlong Xie wrote:
> @@ -81,6 +82,8 @@ typedef struct BDRVQuorumState {
> bool rewrite_corrupted;/* true if the driver must rewrite-on-read
> corrupted
> * block if Quorum is reached.
> */
> +u
drivers. So it is experimental now.
>
> Signed-off-by: Wen Congyang
> Signed-off-by: zhanghailiang
> Signed-off-by: Gonglei
> Signed-off-by: Changlong Xie
> Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Fri 11 Mar 2016 03:14:47 PM CET, Max Reitz wrote:
> The newly added type parameter for the QUORUM_REPORT_BAD event changed
> the output of iotest 081, so the reference should be amended
> accordingly.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Mon 14 Mar 2016 02:57:31 AM CET, Changlong Xie wrote:
>> And now that we're at it, shouldn't we call quorum_report_bad() in
>> FIFO mode as well? Or is there any reason not to do it?
>
> IMO, no reason not to do it.
I'll send a patch to fix that.
Thanks,
Berto
read-pattern=fifo. This patch fixes this problem.
This applies on top of Kevin's block branch.
Regards,
Berto
Alberto Garcia (2):
quorum: Emit QUORUM_REPORT_BAD for reads in fifo mode
iotests: Test QUORUM_REPORT_BAD in fifo mode
block/quorum.c | 17 +
tests/qemu-
Signed-off-by: Alberto Garcia
---
tests/qemu-iotests/148 | 17 +++--
tests/qemu-iotests/148.out | 4 ++--
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/tests/qemu-iotests/148 b/tests/qemu-iotests/148
index 30bc379..d066ec3 100644
--- a/tests/qemu-iotests/148
If there's an I/O error in one of Quorum children then QEMU
should emit QUORUM_REPORT_BAD. However this is not working with
read-pattern=fifo. This patch fixes this problem.
Signed-off-by: Alberto Garcia
---
block/quorum.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
On Mon 14 Mar 2016 07:02:08 AM CET, Changlong Xie wrote:
>>> @@ -81,6 +82,8 @@ typedef struct BDRVQuorumState {
>>> bool rewrite_corrupted;/* true if the driver must rewrite-on-read
>>> corrupted
>>> * block if Quorum is reached.
>>>
On Thu 17 Mar 2016 10:56:09 AM CET, Wen Congyang wrote:
>> We should have the failure modes documented, and how you'll use it
>> after failover etc Without that it's really difficult to tell if this
>> naming is right.
>
> For COLO, children.0 is the real disk, children.1 is replication
> driver.
even more.
Although the patch is one year old, I decided to keep all the R-b
lines because nothing has really changed in this code since then and I
don't believe the change is controversial.
Berto
Alberto Garcia (1):
block: never cancel a streaming job without running stream_complete()
bloc
On Thu 17 Mar 2016 02:22:40 AM CET, Wen Congyang wrote:
> @@ -81,6 +82,8 @@ typedef struct BDRVQuorumState {
> bool rewrite_corrupted;/* true if the driver must rewrite-on-read
> corrupted
> * block if Quorum is reached.
>
intermediate node.
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
Reviewed-by: Fam Zheng
Reviewed-by: Stefan Hajnoczi
---
block/stream.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/block/stream.c b/block/stream.c
index cafaa07..eea3938 100644
--- a/block/stream.c
Alberto Garcia (3):
block: never cancel a streaming job without running stream_complete()
qemu-iotests: fix test_stream_partial()
qemu-iotests: add no-op streaming test
block/stream.c | 11 ++-
tests/qemu-iotests/030 | 21 -
tests/qemu-iotests/030.out
This patch tests that in a partial block-stream operation, no data is
ever copied from the base image.
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
---
tests/qemu-iotests/030 | 18 ++
tests/qemu-iotests/030.out | 4 ++--
2 files changed, 20 insertions(+), 2 deletions(-)
intermediate image before the test, so there's something
to copy and the test is meaningful.
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
---
tests/qemu-iotests/030 | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
On Wed 23 Mar 2016 10:03:41 AM CET, Kevin Wolf wrote:
>> This series would mess up my own I/O throttling cleanups that have
>> been posted in February and have hardly seen a review for one month.
>
> Which cleanups? The ones that you hid in an I/O path locking series?
> Whose v2 didn't even inclu
On Thu 24 Mar 2016 05:39:21 PM CET, Paolo Bonzini wrote:
> The return value is unused and I am not sure why it would be useful.
Yeah, this is not needed since 0b06ef3bdd177.
Reviewed-by: Alberto Garcia
> while (qemu_co_enter_next(&bs->throttled_reqs[i])) {
> -
On Thu 24 Mar 2016 05:39:22 PM CET, Paolo Bonzini wrote:
> @@ -335,6 +346,11 @@ void throttle_group_config(BlockDriverState *bs,
> ThrottleConfig *cfg)
> }
> throttle_config(ts, tt, cfg);
> qemu_mutex_unlock(&tg->lock);
> +
> +aio_context_acquire(bdrv_get_aio_context(bs));
> +
On Tue 29 Mar 2016 05:51:22 PM CEST, Max Reitz wrote:
>> It sounds like the argument here, and in Max's thread on
>> query-block-node-tree, is that we DO have cases where order matters, and
>> so we need a way for the hot-add operation to explicitly specify where
>> in the list a child is inserted
> Signed-off-by: Paolo Bonzini
I like the idea, thanks for the patch!
Reviewed-by: Alberto Garcia
> +void bdrv_no_throttling_end(BlockDriverState *bs)
> +{
> +--bs->io_limits_disabled;
> }
Not very important, but you could assert that this doesn't go below 0.
Berto
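The suggested assertion can be sketched in a self-contained form (simplified struct and function names modeled on the quoted patch, not the real QEMU code):

```c
#include <assert.h>

/* Simplified stand-in for the real BDS; only the nesting counter matters. */
typedef struct BlockDriverState {
    unsigned io_limits_disabled;   /* 0 means throttling is being enforced */
} BlockDriverState;

void bdrv_no_throttling_begin(BlockDriverState *bs)
{
    bs->io_limits_disabled++;
}

void bdrv_no_throttling_end(BlockDriverState *bs)
{
    assert(bs->io_limits_disabled > 0);  /* catch unbalanced end calls */
    --bs->io_limits_disabled;
}
```

Since the field acts as a nesting counter, the assertion turns an unbalanced begin/end pairing into an immediate failure instead of a silent wraparound.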
On Thu 24 Mar 2016 08:07:17 PM CET, Max Reitz wrote:
> There are two reasons why I fear we may not want this:
>
> The first is that the node graph is more or less something internal to
> qemu. Its actual structure may (and most probably will) change over
> time. We do want to be able to let the use
On Wed 30 Mar 2016 05:07:15 PM CEST, Max Reitz wrote:
>> I also have another (not directly related) question: why not simply
>> use the node name when removing children? I understood that the idea
>> was that it's possible to have the same node attached twice to the
>> same Quorum, but can you actu
Signed-off-by: Alberto Garcia
---
blockjob.c | 13 +
include/block/blockjob.h | 14 ++
2 files changed, 27 insertions(+)
diff --git a/blockjob.c b/blockjob.c
index 9fc37ca..3557048 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -50,6 +50,16 @@ struct BlockJobTxn {
int
bdrv_drain_all() pauses all block jobs by using bdrv_next() to iterate
over all top-level BlockDriverStates. Therefore the code is unable to
find block jobs in other nodes.
This patch uses block_job_next() to iterate over all block jobs.
Signed-off-by: Alberto Garcia
---
block/io.c | 21
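The iteration change can be sketched with a simplified global job list (illustrative types, not QEMU's actual list implementation): walking the jobs directly finds every job regardless of where its node sits in the graph.

```c
#include <stddef.h>

/* Simplified model of a global job list, block_job_next() style. */
typedef struct BlockJob {
    int pause_count;
    struct BlockJob *next;
} BlockJob;

static BlockJob *all_jobs;   /* head of the global job list */

/* NULL argument returns the first job; otherwise the next one. */
static BlockJob *block_job_next(BlockJob *job)
{
    return job ? job->next : all_jobs;
}

/* Pauses every job, wherever its node sits in the graph. */
static void pause_all_block_jobs(void)
{
    BlockJob *job = NULL;
    while ((job = block_job_next(job))) {
        job->pause_count++;
    }
}
```

Iterating top-level BlockDriverStates instead would skip any job attached to a node that is not a root, which is exactly the gap the patch closes.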
qmp_query_block_jobs() uses bdrv_next() to look for block jobs, but
this function can only find those in top-level BlockDriverStates.
This patch uses block_job_next() instead.
Signed-off-by: Alberto Garcia
---
blockdev.c | 19 ---
1 file changed, 8 insertions(+), 11 deletions(-)
for
that job.
Signed-off-by: Alberto Garcia
---
blockdev.c | 18 ++
blockjob.c | 5 +++--
docs/qmp-events.txt | 8
qapi/block-core.json | 20 ++--
4 files changed, 27 insertions(+), 24 deletions(-)
diff --git a/blockdev.c b/b
bdrv_close_all() cancels all block jobs by iterating over all
BlockDriverStates. This patch simplifies the code by iterating
directly over the block jobs using block_job_next().
Signed-off-by: Alberto Garcia
---
block.c | 25 ++---
1 file changed, 6 insertions(+), 19 deletions(-)
chain.
See here for details on why that is currently not supported:
[Qemu-block] RFC: Status of the intermediate block streaming work
https://lists.gnu.org/archive/html/qemu-block/2015-12/msg00180.html
Finally, this also unblocks the stream operation in backing files.
Signed-off-by: Alberto Garcia
This adds test_stream_intermediate(), similar to test_stream() but
streams to the intermediate image instead.
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
---
tests/qemu-iotests/030 | 18 +-
tests/qemu-iotests/030.out | 4 ++--
2 files changed, 19 insertions(+), 3 deletions(-)
This test case checks that it's not possible to perform two
block-stream operations if there are nodes involved in both.
Signed-off-by: Alberto Garcia
---
tests/qemu-iotests/030 | 60 ++
tests/qemu-iotests/030.out | 4 ++--
2 files changed
Even if there are no common nodes involved, we currently don't support
several operations at the same time in the same backing chain.
Signed-off-by: Alberto Garcia
---
tests/qemu-iotests/030 | 21 +
tests/qemu-iotests/030.out | 4 ++--
2 files changed, 23 insertions(+), 2 deletions(-)
Signed-off-by: Alberto Garcia
Reviewed-by: Max Reitz
Reviewed-by: Eric Blake
---
docs/live-block-ops.txt | 31 ---
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/docs/live-block-ops.txt b/docs/live-block-ops.txt
index a257087..a05d869 100644
--- a/docs/live-block-ops.txt
of all 'block-job-*' commands can now take a
node name.
- The BlockJobInfo type and all BLOCK_JOB_* events report the node
name in the 'device' field if the node does not have a device name.
- All intermediate nodes are blocked (and checked for blockers) during
the streamin
from bdrv_lookup_bs() and
no longer returns DeviceNotFound, iotest 030 is updated to expect
GenericError instead.
Signed-off-by: Alberto Garcia
---
blockdev.c | 31 +++
qapi/block-core.json | 10 +++---
tests/qemu-iotests/030 | 2 +-
3 files ch
On Mon 04 Apr 2016 05:26:02 PM CEST, Kevin Wolf wrote:
> The problem exists whenever a BDS has more users than just its BB, for
> example it is used as a backing file for another node.
The code seems correct, but I don't understand this use case.
Berto
On Wed 06 Apr 2016 11:59:47 AM CEST, Kevin Wolf wrote:
> Let me draw some ASCII art. I'm drawing I/O throttling like a filter
> to make the difference clear; in reality it's part of the BDS now or
> of the BB in 2.7, so don't let this confuse you.
>
> Let's assume these command line options:
>
> -d
> the upcoming move of throttling into BlockBackend,
> such configurations won't be possible anyway.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
y, so we can just set bs_snapshot to NULL and let bdrv_open()
> do the rest.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
Hi,
I was away for a few days but I just came back. There's a ton of
patches that I'll try to review these days but I also have a question.
Let's create a QCOW2 image and pass a runtime option such as
l2-cache-size:
$ qemu -drive if=virtio,file=hd0.qcow2,l2-cache-size=10M
Now we make a snapshot
On Wed 27 Apr 2016 02:04:33 PM CEST, Max Reitz wrote:
>> -bs = NULL;
>> -while ((bs = bdrv_next(bs))) {
>> -AioContext *aio_context = bdrv_get_aio_context(bs);
>> +while ((job = block_job_next(job))) {
>> +AioContext *aio_context = bdrv_get_aio_context(job->bs);
>
> Tech
On Wed 27 Apr 2016 02:30:55 PM CEST, Max Reitz wrote:
>> @@ -3694,18 +3696,18 @@ static BlockJob *find_block_job(const char *device,
>> AioContext **aio_context,
>>
>> *aio_context = NULL;
>>
>> -blk = blk_by_name(device);
>> -if (!blk) {
>> +bs = bdrv_lookup_bs(device_or_node
On Wed 27 Apr 2016 03:48:26 PM CEST, Max Reitz wrote:
>> +# Attach the drive to the VM
>> +self.vm = iotests.VM()
>> +self.vm.add_drive("blkdebug::" + self.imgs[-1], ','.join(opts))
>
> Any special reason for blkdebug? For me it works just fine without.
Oh, I think it's jus
On Wed 27 Apr 2016 03:04:57 PM CEST, Max Reitz wrote:
>> +/* If we are streaming to an intermediate image, we need to block
>> + * the active layer. Due to a race condition, having several block
>> + * jobs running in the same chain is broken and we currently don't
>> + * support i
On Wed 27 Apr 2016 03:34:19 PM CEST, Max Reitz wrote:
>> +/* Look for the top-level node that contains 'bs' in its chain */
>> +active = NULL;
>> +do {
>> +active = bdrv_next(active);
>> +} while (active && !bdrv_chain_contains(active, bs));
>
> Alternatively, you could ite
On Fri 29 Apr 2016 12:01:08 PM CEST, Kevin Wolf wrote:
>> This is another feature that was "logically" part of the
>> BlockBackend, but implemented as a BlockDriverState feature. It was
>> always kept on top using swap_feature_fields().
>>
>> This series moves it to be actually implemented in the
On Fri 29 Apr 2016 04:32:41 PM CEST, Kevin Wolf wrote:
> However, I'd like to give you a heads-up that this will technically
> conflict with my series that removes BlockDriverState.blk because that
> changes the bdrv_next() signature.
>
> Nothing dramatic, but I guess it would make sense to decide
On Fri 29 Apr 2016 04:38:58 PM CEST, Kevin Wolf wrote:
> This is essentially the same as I'm doing here:
> http://repo.or.cz/qemu/kevin.git/commitdiff/6b545b21e3dfe2e3927cfb6bbdcc1b233c67630c
Oh, I see.
> I think I like having a separate block_job_cancel_sync_all() function
> like I did instead o
On Fri 29 Apr 2016 05:18:26 PM CEST, Kevin Wolf wrote:
> This patch errors out if we can't find the active layer. Sounds safe
> and appropriate for an initial version. The real solution isn't to
> improve the magic to find the root node, but to remove the need to
> find it (by getting the new op bl
On Fri 29 Apr 2016 05:11:07 PM CEST, Kevin Wolf wrote:
>> +if (active == NULL) {
>> +error_setg(errp, "Cannot find top level node for '%s'", device);
>> +goto out;
>> +}
>
> Hm... On the one hand, I really like that you don't expect the user to
> provide the active layer in
On Tue 03 May 2016 03:18:39 PM CEST, Kevin Wolf wrote:
>> > On the other hand, this code assumes that there is only a single
>> > top-level node. This isn't necessarily true any more these days.
>>
>> Hmm... if you give me an example I can test that scenario.
>
> Simply reference the same node twice
On Tue 03 May 2016 03:23:24 PM CEST, Kevin Wolf wrote:
>> c) we fix bdrv_reopen() so we can actually run both jobs at the same
>>time. I'm wondering if pausing all block jobs between
>>bdrv_reopen_prepare() and bdrv_reopen_commit() would do the
>>trick. Opinions?
>
> I would have to rea
On Tue 03 May 2016 03:48:47 PM CEST, Kevin Wolf wrote:
> The solution to that should be simply changing the order of things:
>
> 1. bdrv_drained_begin()
> 2. bdrv_reopen_queue()
> 3. bdrv_reopen_multiple()
> * Instead of bdrv_drain_all(), assert that no requests are pending
No requests are pending
manipulations while I/O throttling is enabled.
> It would have been possible to keep things working with some temporary
> hacks, but quite cumbersome, so it's not worth the hassle. We'll fix
> things again in a minute.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Eric Blake
Reviewed-by: Alberto Garcia
Berto
> Instead of exposing the whole struct layout in the public header
> file, this patch introduces an embedded public struct where such
> information can be added and a pair of functions to convert between
> BlockBackend and BlockBackendPublic.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
because we made sure that throttling can only be
> enabled on BDSes which have a BB attached.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Fri 22 Apr 2016 07:42:33 PM CEST, Kevin Wolf wrote:
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Fri 22 Apr 2016 07:42:35 PM CEST, Kevin Wolf wrote:
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
d when the BDS is detached from the BB.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
oup)
s/bdrv_set_io_limits/blk_set_io_limits/ in the comment above.
Otherwise the patch looks good to me.
Reviewed-by: Alberto Garcia
Berto
On Fri 22 Apr 2016 07:42:40 PM CEST, Kevin Wolf wrote:
> bdrv_move_feature_fields() and swap_feature_fields() are empty now, they
> can be removed.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
least one BDS
> - * directly used by a block job */
> -assert(bs);
> -}
> }
You could add an assert(QTAILQ_EMPTY(&all_bdrv_states)) as it was
suggested some days ago.
The rest looks good.
Reviewed-by: Alberto Garcia
Berto
On Fri 29 Apr 2016 05:00:57 PM CEST, Kevin Wolf wrote:
>> - Block jobs can now be identified by the node name of their
>> BlockDriverState in addition to the device name. Since both device
>> and node names live in the same namespace there's no ambiguity.
>
> Careful, we know this is a part of our
On Fri 22 Apr 2016 07:42:37 PM CEST, Kevin Wolf wrote:
> BlockBackends use it to get a back pointer from BdrvChild to
> BlockBackend in any BdrvChildRole callbacks.
Hmmm... can anyone else use this, other than BlockBackend? If not, why
is it opaque?
Berto
On Fri 22 Apr 2016 07:42:37 PM CEST, Kevin Wolf wrote:
> BlockBackends use it to get a back pointer from BdrvChild to
> BlockBackend in any BdrvChildRole callbacks.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
ns to drain the
> throttled requests for BlockBackend parents.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Fri 06 May 2016 07:54:36 PM CEST, John Snow wrote:
@@ -3694,18 +3696,18 @@ static BlockJob *find_block_job(const char
*device, AioContext **aio_context,
*aio_context = NULL;
-blk = blk_by_name(device);
-if (!blk) {
+bs = bdrv_lookup_b
e parents don't influence each other any more.
>
> Conflicts:
> block.c
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Wed 04 May 2016 11:39:19 AM CEST, Kevin Wolf wrote:
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Wed 13 Apr 2016 10:33:08 AM CEST, Changlong Xie wrote:
Sorry for the late reply!
The patch looks good. I have some additional comments on top of what Max
wrote, nothing serious though :)
> @@ -67,6 +68,9 @@ typedef struct QuorumVotes {
> typedef struct BDRVQuorumState {
> BdrvChild **children;
On Tue 10 May 2016 09:36:38 AM CEST, Changlong Xie wrote:
> From: Wen Congyang
>
> Signed-off-by: Wen Congyang
> Signed-off-by: zhanghailiang
> Signed-off-by: Gonglei
> Signed-off-by: Changlong Xie
Reviewed-by: Alberto Garcia
Berto
On Tue 10 May 2016 10:39:21 AM CEST, Kevin Wolf wrote:
>>s->children = g_renew(BdrvChild *, s->children, ++s->num_children);
>>s->children[s->num_children] = child;
>
> Without having checked the context, this code is not equivalent. You
> need to access s->children[s->num_children
On Fri 22 Apr 2016 07:42:42 PM CEST, Kevin Wolf wrote:
> Checking whether there are throttled requests requires going to the
> associated BlockBackend, which we want to avoid. All users of
> bdrv_requests_pending() already call bdrv_parent_drained_begin()
> first,
There's a couple of assert(!bdrv_
On Fri 22 Apr 2016 07:42:34 PM CEST, Kevin Wolf wrote:
> typedef struct BlockBackendPublic {
> -/* I/O throttling */
> +/* I/O throttling.
> + * throttle_state tells us if this BDS has I/O limits configured.
> + * io_limits_disabled tells us if they are currently being enforced */
then re-enabled any more
> during graph reconfiguration. This fixes the temporary breakage of I/O
> throttling when used with live snapshots or block jobs that manipulate
> the graph.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Wed 11 May 2016 04:45:33 AM CEST, Fam Zheng wrote:
> Currently we only recurse to bs->file, which will miss the children in quorum
> and VMDK.
>
> Recurse into the whole subtree to avoid that.
>
> Signed-off-by: Fam Zheng
Reviewed-by: Alberto Garcia
Berto
On Wed 11 May 2016 04:45:34 AM CEST, Fam Zheng wrote:
> Now they are invalidated by the block layer, so it's not necessary to
> do this in block drivers' implementations of .bdrv_invalidate_cache.
>
> Signed-off-by: Fam Zheng
Reviewed-by: Alberto Garcia
Berto
On Thu 12 May 2016 05:28:34 PM CEST, Kevin Wolf wrote:
> Am 12.05.2016 um 17:13 hat Alberto Garcia geschrieben:
>> On Thu 12 May 2016 05:04:51 PM CEST, Kevin Wolf wrote:
>> > Am 12.05.2016 um 15:47 hat Alberto Garcia geschrieben:
>> >> On Tue 03 May 2016 03:4
> bdrv_{at,de}tach_aio_context() in Quorum, blkverify and VMDK.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Tue 17 May 2016 04:47:57 PM CEST, Kevin Wolf wrote:
>> > Right, but if the coroutine yields, we jump back to the caller, which
>> > looks like this:
>> >
>> > co = qemu_coroutine_create(bdrv_flush_co_entry);
>> > qemu_coroutine_enter(co, &rwco);
>> > while (rwco.ret == NOT_DONE) {
>>
better now, thanks!
Reviewed-by: Alberto Garcia
Berto
> doesn't work that well any more when block jobs use
> BlockBackends internally because then they will lose their BDS before
> being cancelled.
>
> This patch changes bdrv_close_all() to first cancel all jobs and then
> remove all root BDSes from the remaining BBs.
>
> Signed
> Signed-off-by: Kevin Wolf
> Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
> what job->blk points to. At the
> moment block jobs are too tightly coupled with their BDS, so that moving
> a job to another BDS isn't easily possible; therefore, we need to just
> manually undo this change afterwards.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Tue 24 May 2016 03:47:26 PM CEST, Kevin Wolf wrote:
> This changes the streaming block job to use the job's BlockBackend for
> performing the COR reads. job->bs isn't used by the streaming code any
> more afterwards.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
On Tue 24 May 2016 03:47:31 PM CEST, Kevin Wolf wrote:
> Now that we pass the job to the function, bs is implied by that.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Tue 24 May 2016 03:47:34 PM CEST, Kevin Wolf wrote:
> There is a single remaining user in qemu-img, which can be trivially
> converted to using BlockJob.blk instead.
>
> Signed-off-by: Kevin Wolf
> Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Wed 25 May 2016 02:29:14 PM CEST, Kevin Wolf wrote:
> Also add trace points now that the function can be directly called.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Alberto Garcia
Berto
qmp_query_block_jobs() uses bdrv_next() to look for block jobs, but
this function can only find those in top-level BlockDriverStates.
This patch uses block_job_next() instead.
Signed-off-by: Alberto Garcia
---
blockdev.c | 20
1 file changed, 8 insertions(+), 12 deletions(-)
If the base or overlay images need to be reopened in read-write mode
but the block_job_create() call fails then no one will put those
images back in read-only mode.
We can solve this problem easily by calling block_job_create() first.
Signed-off-by: Alberto Garcia
---
block/commit.c | 11
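The reordering can be sketched as follows (illustrative names and a simplified failure flag, not the real commit_start() code): because the job is created before any image is switched to read-write, a failing block_job_create() leaves nothing to roll back.

```c
#include <assert.h>

/* Illustrative model of an image whose read-only state must be preserved
 * if job creation fails. */
typedef struct Image {
    int read_only;
} Image;

/* Stand-in for block_job_create(): fails on demand. */
static int block_job_create(int should_fail)
{
    return should_fail ? -1 : 0;
}

/* Create the job first; only reopen read-write once that succeeded. */
static int commit_start(Image *base, int job_create_fails)
{
    if (block_job_create(job_create_fails) < 0) {
        return -1;               /* base->read_only left untouched */
    }
    base->read_only = 0;         /* reopen read-write only after success */
    return 0;
}
```

Doing the reopen first would require an explicit undo path on the error branch, which is exactly what the original code was missing.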
a fix that was already proposed in the list.
- Patch 4 is quite simple and it deals with the failure of
block_job_create() in commit_start().
Regards,
Berto
Alberto Garcia (4):
block: use the block job list in bdrv_drain_all()
block: use the block job list in qmp_query_block_jobs()
been paused after it comes back from sleeping.
Signed-off-by: Alberto Garcia
Suggested-by: Kevin Wolf
---
blockjob.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index c095cc5..01b896b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -361,10
bdrv_drain_all() pauses all block jobs by using bdrv_next() to iterate
over all top-level BlockDriverStates. Therefore the code is unable to
find block jobs in other nodes.
This patch uses block_job_next() to iterate over all block jobs.
Signed-off-by: Alberto Garcia
---
block/io.c | 24
On Tue 26 Apr 2016 11:32:00 PM CEST, Max Reitz wrote:
> bdrv_refresh_filename() should invoke itself recursively on all
> children, not just on file.
>
> With that change, we can remove the manual invocations in blkverify and
> quorum.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia