On Mon, Jun 18, 2018 at 08:14:10PM -0400, John Snow wrote:
>
>
> On 06/18/2018 02:02 PM, Amol Surati wrote:
> > On Mon, Jun 18, 2018 at 12:05:15AM +0530, Amol Surati wrote:
> >> This patch fixes the assumption that io_buffer_size is always a perfect
> >> multiple of the sector size. The
Ping...
On 2018/6/12 7:26, Jie Wang wrote:
> If laio_init() fails to create linux_aio and returns NULL, a NULL pointer
> dereference will occur when laio_attach_aio_context() dereferences
> linux_aio in aio_get_linux_aio(). Let's avoid it and report an error.
>
> Signed-off-by: Jie Wang
> ---
>
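The fix described in the commit message amounts to checking the initializer's return value before anything can dereference it. A minimal standalone sketch of that pattern (the names `ctx_create` and `attach_ctx` are illustrative stand-ins, not QEMU's actual API):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for an initializer that can fail, like laio_init(). */
static void *ctx_create(int should_fail)
{
    if (should_fail) {
        return NULL;            /* e.g. resource exhaustion */
    }
    return malloc(16);
}

/* The caller checks for NULL and reports the error up front, instead of
 * storing NULL and crashing later when the context is dereferenced. */
static int attach_ctx(void **slot, int should_fail)
{
    void *ctx = ctx_create(should_fail);
    if (!ctx) {
        fprintf(stderr, "failed to create AIO context\n");
        return -1;
    }
    *slot = ctx;
    return 0;
}
```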
On 06/18/2018 02:02 PM, Amol Surati wrote:
> On Mon, Jun 18, 2018 at 12:05:15AM +0530, Amol Surati wrote:
>> This patch fixes the assumption that io_buffer_size is always a perfect
>> multiple of the sector size. The assumption is the cause of the firing
>> of 'assert(n * 512 == s->sg.size);'.
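The assertion fires when a transfer's byte count is not a whole number of 512-byte sectors. A toy illustration of how the invariant can be preserved by rounding a buffer limit down to whole sectors (illustrative only, not the actual patch):

```c
#include <assert.h>

/* Largest byte count <= limit that is a whole number of 512-byte
 * sectors; a transfer sized this way satisfies n * 512 == size. */
static unsigned whole_sectors(unsigned limit)
{
    return limit & ~511u;       /* round down to a multiple of 512 */
}
```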
On 06/17/2018 08:13 AM, air icy wrote:
>
> Hi,
> QEMU 'hw/ide/core.c:871' Denial of Service Vulnerability in version
> qemu-2.12.0
> run the program in qemu-2.12.0:
>
> #define _GNU_SOURCE
> #include
> #include
> #include
> #include
> #include
> #include
> #include
> #include
>
On 06/14/2018 09:29 AM, Markus Armbruster wrote:
block_crypto_open_opts_init() and block_crypto_create_opts_init()
contain a virtual visit of QCryptoBlockOptions and
QCryptoBlockCreateOptions less member "format", respectively.
Change their callers to put member "format" in the QDict, so they
On Mon, Jun 18, 2018 at 12:05:15AM +0530, Amol Surati wrote:
> This patch fixes the assumption that io_buffer_size is always a perfect
> multiple of the sector size. The assumption is the cause of the firing
> of 'assert(n * 512 == s->sg.size);'.
>
> Signed-off-by: Amol Surati
> ---
The
On Mon, Jun 18, 2018 at 02:13:52PM -0400, John Snow wrote:
>
> On 06/18/2018 02:02 PM, Amol Surati wrote:
> > On Mon, Jun 18, 2018 at 12:05:15AM +0530, Amol Surati wrote:
> >> This patch fixes the assumption that io_buffer_size is always a perfect
> >> multiple of the sector size. The assumption
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20180618164504.24488-1-kw...@redhat.com
Subject: [Qemu-devel] [PULL 00/35] Block layer patches
=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
total=$(git log --oneline
On 06/18/2018 02:02 PM, Amol Surati wrote:
> On Mon, Jun 18, 2018 at 12:05:15AM +0530, Amol Surati wrote:
>> This patch fixes the assumption that io_buffer_size is always a perfect
>> multiple of the sector size. The assumption is the cause of the firing
>> of 'assert(n * 512 == s->sg.size);'.
From: Max Reitz
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Reviewed-by: Alberto Garcia
Message-id: 20180613181823.13618-15-mre...@redhat.com
Signed-off-by: Max Reitz
---
tests/qemu-iotests/151 | 120 +
tests/qemu-iotests/151.out | 5 ++
From: Max Reitz
This patch allows the user to specify whether to use active or only
background mode for mirror block jobs. Currently, this setting will
remain constant for the duration of the entire block job.
Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Message-id:
From: Max Reitz
This will allow us to access the block job data when the mirror block
driver becomes more complex.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Message-id: 20180613181823.13618-11-mre...@redhat.com
Signed-off-by: Max Reitz
---
block/mirror.c | 12
1 file
From: Max Reitz
This new parameter allows the caller to just query the next dirty
position without moving the iterator.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Reviewed-by: John Snow
Message-id: 20180613181823.13618-8-mre...@redhat.com
Signed-off-by: Max Reitz
---
From: Max Reitz
Add a function that wraps hbitmap_iter_next() and always calls it in
non-advancing mode first, and in advancing mode next. The result should
always be the same.
By using this function everywhere we called hbitmap_iter_next() before,
we should get good test coverage for
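The idea of running every lookup in both modes and comparing the results can be sketched with a toy bit-array iterator (`iter_next` here is an illustrative model of hbitmap_iter_next()'s advancing flag, not the real API):

```c
#include <assert.h>

/* Toy iterator over a bit array; 'advance' models the flag described
 * above: when false, the next set bit is only peeked at. */
typedef struct { const int *bits; int n; int pos; } Iter;

static int iter_next(Iter *it, int advance)
{
    int i;
    for (i = it->pos; i < it->n; i++) {
        if (it->bits[i]) {
            if (advance) {
                it->pos = i + 1;
            }
            return i;
        }
    }
    if (advance) {
        it->pos = it->n;
    }
    return -1;
}

/* Wrapper in the spirit of the patch: peek first, then advance, and
 * assert that both modes agree. */
static int iter_next_checked(Iter *it)
{
    int peeked = iter_next(it, 0);
    int taken = iter_next(it, 1);
    assert(peeked == taken);
    return taken;
}
```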
From: Max Reitz
Currently, bdrv_replace_node() refuses to create loops from one BDS to
itself if the BDS to be replaced is the backing node of the BDS to
replace it: Say there is a node A and a node B. Replacing B by A means
making all references to B point to A. If B is a child of A (i.e. A
From: Max Reitz
This patch implements active synchronous mirroring. In active mode, the
passive mechanism will still be in place and is used to copy all
initially dirty clusters off the source disk; but every write request
will write data both to the source and the target disk, so the source
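Setting aside the real synchronization, the core of active mode is that a guest write is applied to source and target as part of the same request. A toy in-memory model of that behavior (illustrative, not the mirror driver's actual code):

```c
#include <assert.h>
#include <string.h>

/* Toy model of active mirroring: every guest write is applied to both
 * the source and the target, so newly written data needs no separate
 * copy pass to reach the target. */
static void active_mirror_write(unsigned char *source, unsigned char *target,
                                size_t offset, const unsigned char *data,
                                size_t len)
{
    memcpy(source + offset, data, len);  /* normal guest write */
    memcpy(target + offset, data, len);  /* mirrored synchronously */
}
```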
From: Max Reitz
This patch makes the mirror code differentiate between simply waiting
for any operation to complete (mirror_wait_for_free_in_flight_slot())
and specifically waiting for all operations touching a certain range of
the virtual disk to complete (mirror_wait_on_conflicts()).
From: Max Reitz
With this, the mirror_top_bs is no longer just a technically required
node in the BDS graph but actually represents the block job operation.
Also, drop MirrorBlockJob.source, as we can reach it through
mirror_top_bs->backing.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Am 18.06.2018 um 17:50 hat Stefan Hajnoczi geschrieben:
> On Tue, Jun 12, 2018 at 07:26:25AM +0800, Jie Wang wrote:
> > If laio_init() fails to create linux_aio and returns NULL, a NULL
> > pointer dereference will occur when laio_attach_aio_context()
> > dereferences linux_aio in aio_get_linux_aio(). Let's
From: Max Reitz
This new function allows the caller to look for a consecutively dirty
area in a dirty bitmap.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Reviewed-by: John Snow
Message-id: 20180613181823.13618-10-mre...@redhat.com
Signed-off-by: Max Reitz
---
include/block/dirty-bitmap.h | 2 ++
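A toy version of "find a consecutively dirty area" over a plain bit array (illustrative; the real function operates on HBitmaps and byte offsets):

```c
#include <assert.h>

/* Locate the first run of set bits at or after *start; on success,
 * report its position and length and return 1, otherwise return 0. */
static int next_dirty_area(const int *bits, int n, int *start, int *count)
{
    int i = *start, j;
    while (i < n && !bits[i]) {
        i++;                    /* skip the clean prefix */
    }
    if (i == n) {
        return 0;               /* no dirty area found */
    }
    for (j = i; j < n && bits[j]; j++) {
        ;                       /* extend over the consecutive dirty run */
    }
    *start = i;
    *count = j - i;
    return 1;
}
```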
From: Max Reitz
Signed-off-by: Max Reitz
Message-id: 20180613181823.13618-12-mre...@redhat.com
Reviewed-by: Kevin Wolf
Signed-off-by: Max Reitz
---
include/qemu/job.h | 15 +++
job.c | 5 +
2 files changed, 20 insertions(+)
diff --git a/include/qemu/job.h
From: Greg Kurz
Removing a drive with drive_del while it is being used to run an I/O
intensive workload can cause QEMU to crash.
An AIO flush can yield at some point:
blk_aio_flush_entry()
blk_co_flush(blk)
bdrv_co_flush(blk->root->bs)
...
qemu_coroutine_yield()
and let the HMP
From: Max Reitz
When converting mirror's I/O to coroutines, we are going to need a point
where these coroutines are created. mirror_perform() is going to be
that point.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Reviewed-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Jeff Cody
In the future, bdrv_drained_all_begin/end() will drain all individual
nodes separately rather than whole subtrees. This means that we don't
want to propagate the drain to all parents any more: If the parent is a
BDS, it will already be drained separately. Recursing to all parents is
unnecessary
bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different.
If the graph changes so that root nodes are added or removed, we would
have to compensate for this.
We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're
done with propagating the drain through the graph and are doing the
single final BDRV_POLL_WHILE().
Just schedule the coroutine with the callback and increase bs->in_flight
to make sure that the polling phase will wait for
This adds a test case that goes wrong if bdrv_drain_invoke() calls
aio_poll().
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 102 +---
1 file changed, 88 insertions(+), 14 deletions(-)
diff --git a/tests/test-bdrv-drain.c
bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop
From: Max Reitz
Attach a CoQueue to each in-flight operation so if we need to wait for
any we can use it to wait instead of just blindly yielding and hoping
for some operation to wake us.
A later patch will use this infrastructure to allow requests accessing
the same area of the virtual disk to
From: Max Reitz
In order to talk to the source BDS (and maybe in the future to the
target BDS as well) directly, we need to convert our existing AIO
requests into coroutine I/O requests.
Signed-off-by: Max Reitz
Reviewed-by: Fam Zheng
Message-id: 20180613181823.13618-3-mre...@redhat.com
This tests both adding and removing a node between bdrv_drain_all_begin()
and bdrv_drain_all_end(), and enables the existing detach test for
drain_all.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 75 +++--
1 file changed, 73 insertions(+), 2
From: Max Reitz
This patch adds two bdrv-drain tests for what happens if some BDS goes
away during the drainage.
The basic idea is that you have a parent BDS with some child nodes.
Then, you drain one of the children. Because of that, the party who
actually owns the parent decides to (A)
Before we can introduce a single polling loop for all nodes in
bdrv_drain_all_begin(), we must make sure to run it outside of coroutine
context like we already do for bdrv_do_drained_begin().
Signed-off-by: Kevin Wolf
---
block/io.c | 22 +-
1 file changed, 17 insertions(+),
bdrv_do_drained_begin() is only safe if we have a single
BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow
that parent callbacks introduce a nested polling loop that could cause
graph changes while we're traversing the graph.
Split off bdrv_do_drained_begin_quiesce(), which
Anything can happen inside BDRV_POLL_WHILE(), including graph
changes that may interfere with its callers (e.g. child list iteration
in recursive callers of bdrv_do_drained_begin).
Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the
end of bdrv_do_drained_begin() to avoid such
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 130
1 file changed, 130 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 0c8f632f2d..abb287e597 100644
--- a/tests/test-bdrv-drain.c
+++
If bdrv_do_drained_begin() polls during its subtree recursion, the graph
can change and mess up the bs->children iteration. Test that this
doesn't happen.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 38 +-
1 file changed, 29 insertions(+), 9
For bdrv_drain(), recursively waiting for child node requests is
pointless because we didn't quiesce their parents, so new requests could
come in anyway. Letting the function work only on a single node makes it
more consistent.
For subtree drains and drain_all, we already have the recursion in
We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.
This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider
bdrv_do_drain_begin/end() already implement everything that
bdrv_drain_all_begin/end() need and currently still do manually: Disable
external events, call parent drain callbacks, call block driver
callbacks.
It also does two more things:
The first is incrementing bs->quiesce_counter.
Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.
This patch
The following changes since commit 2ef2f16781af9dee6ba6517755e9073ba5799fa2:
Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20180615a'
into staging (2018-06-15 18:13:35 +0100)
are available in the git repository at:
git://repo.or.cz/qemu/kevin.git tags/for-upstream
for
Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(),
coroutine context is automatically left with a BH, preventing the
deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now
that we even removed the old polling code as dead code, it's obvious
that it's
As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext
we're polling (which it always is for bdrv_drain()) so that aio_poll()
is woken up in
All involved nodes are already idle, we called bdrv_do_drain_begin() on
them.
The comment in the code suggested that this was not correct because the
completion of a request on one node could spawn a new request on a
different node (which might have been drained before, so we wouldn't
drain the
All callers pass false for the 'recursive' parameter now. Remove it.
Signed-off-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
---
block/io.c | 13 +++--
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/block/io.c b/block/io.c
index b355009f2c..b75d68886a 100644
---
Am 18.06.2018 um 17:28 hat Alberto Garcia geschrieben:
> On Mon 18 Jun 2018 04:15:07 PM CEST, Kevin Wolf wrote:
>
> >> @@ -2850,7 +2850,8 @@ static BlockReopenQueue
> >> *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
> >> int flags,
> >>
Am 18.06.2018 um 17:06 hat Alberto Garcia geschrieben:
> On Mon 18 Jun 2018 04:38:01 PM CEST, Kevin Wolf wrote:
> > Am 14.06.2018 um 17:49 hat Alberto Garcia geschrieben:
> >> This patch allows the user to change the backing file of an image that
> >> is being reopened. Here's what it does:
> >>
On 06/18/2018 06:36 PM, Alberto Garcia wrote:
On Fri 08 Jun 2018 02:32:28 PM CEST, Ari Sundholm wrote:
The guest OS may perform writes which are aligned to the logical
sector size instead of the physical one, so logging at this granularity
records the writes performed on the block device most
On Fri 08 Jun 2018 02:32:28 PM CEST, Ari Sundholm wrote:
> The guest OS may perform writes which are aligned to the logical
> sector size instead of the physical one, so logging at this granularity
> records the writes performed on the block device most faithfully.
>
> Signed-off-by: Ari Sundholm
On Mon 18 Jun 2018 04:15:07 PM CEST, Kevin Wolf wrote:
>> @@ -2850,7 +2850,8 @@ static BlockReopenQueue
>> *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
>> int flags,
>> const BdrvChildRole
On 2018-06-13 20:18, Max Reitz wrote:
> This series implements an active and synchronous mirroring mode.
>
> You can read the cover letter of v4 here (I don’t like to copy-paste
> that because people who reviewed previous versions know it already and
> this saves them from having to look out for
On Mon 18 Jun 2018 04:38:01 PM CEST, Kevin Wolf wrote:
> Am 14.06.2018 um 17:49 hat Alberto Garcia geschrieben:
>> This patch allows the user to change the backing file of an image that
>> is being reopened. Here's what it does:
>>
>> - In bdrv_reopen_queue_child(): if the 'backing' option
Am 14.06.2018 um 17:49 hat Alberto Garcia geschrieben:
> This patch allows the user to change the backing file of an image that
> is being reopened. Here's what it does:
>
> - In bdrv_reopen_queue_child(): if the 'backing' option points to an
>image different from the current backing file
Am 14.06.2018 um 17:49 hat Alberto Garcia geschrieben:
> The bdrv_reopen_queue() function is used to create a queue with the
> BDSs that are going to be reopened and their new options. Once the
> queue is ready bdrv_reopen_multiple() is called to perform the
> operation.
>
> The original options
On 2018-06-14 17:22, Kevin Wolf wrote:
> Am 13.06.2018 um 20:18 hat Max Reitz geschrieben:
>> In order to talk to the source BDS (and maybe in the future to the
>> target BDS as well) directly, we need to convert our existing AIO
>> requests into coroutine I/O requests.
>>
>> Signed-off-by: Max
On Fri, Jun 15, 2018 at 06:54:23PM +0100, Dr. David Alan Gilbert wrote:
> * Daniel P. Berrangé (berra...@redhat.com) wrote:
> > From: "Daniel P. Berrange"
> >
> > The QEMU instance that runs as the server for the migration data
> > transport (ie the target QEMU) needs to be able to configure