20.12.2017 18:49, Vladimir Sementsov-Ogievskiy wrote:
Add a special state in which QMP operations on the bitmap are disabled.
It is needed during bitmap migration. The "frozen" state is not
appropriate here, because it implies that the bitmap is unchanged.
Signed-off-by: Vladimir Sementsov-Ogievskiy
21.12.2017 21:32, Eric Blake wrote:
On 12/21/2017 09:28 AM, Markus Armbruster wrote:
Looks like you forgot to cc: Eric. Fixing...
Or that he did cc me, and the mailman bug ate the cc line because of
my list subscription settings. (Have I mentioned that I hate that
mailman 2 bug, but don't
On Thu, Dec 21, 2017 at 05:58:47PM -0500, John Snow wrote:
>
>
> On 12/21/2017 05:13 PM, Vasiliy Tolstov wrote:
> > Hi! Today my server had a forced reboot and one of my VMs can't start
> > with the message:
> > qcow2: Marking image as corrupt: L2 table offset 0x3f786d6c207600
> > unaligned (L1 index:
22.12.2017 05:03, John Snow wrote:
On 12/20/2017 10:49 AM, Vladimir Sementsov-Ogievskiy wrote:
Add a special state in which QMP operations on the bitmap are disabled.
It is needed during bitmap migration. The "frozen" state is not
appropriate here, because it implies that the bitmap is unchanged.
As of
15.12.2017 19:38, Max Reitz wrote:
Hi everyone,
Kevin, Markus, and I had a small personal meeting over the last 1.5
days and discussed a couple of things about the block layer and its QAPI
entanglements.
Here's a rather rough sketch on what we talked about:
[...]
== Single block job for
On Thu, Dec 21, 2017 at 05:44:11PM -0500, John Snow wrote:
> I don't think there's a legitimate reason to open directories as if
> they were files. This prevents QEMU from opening and attempting to probe
> a directory inode, which can break in exciting ways. One of those ways
> is lseek on
22.12.2017 16:43, Kevin Wolf wrote:
On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
The test creates two VMs (vm_a, vm_b), creates a dirty bitmap in
the first one, does several writes to the corresponding device and
then migrates vm_a to vm_b with dirty bitmaps.
For now, only
On 22.12.2017 at 01:11, Jack Schwartz wrote:
> BLOCK_IO_ERROR events currently contain a "reason" string which is
> the strerror(errno) of the error. This enhancement adds the numeric
> errno value to those events as well, since a number is easier to parse
> for the error type than a string.
>
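For illustration, such an extended event might look like the sketch
below. The member name "errno" and all field values here are
hypothetical examples, not taken from the patch; the surrounding
fields follow the existing BLOCK_IO_ERROR event shape:

```json
{"event": "BLOCK_IO_ERROR",
 "data": {"device": "ide0-hd0", "operation": "write",
          "action": "stop",
          "reason": "No space left on device",
          "errno": 28},
 "timestamp": {"seconds": 1513872017, "microseconds": 0}}
```

A management tool could then switch on the integer instead of matching
the locale-independent but still free-form "reason" string.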
On Mon, Dec 18, 2017 at 02:09:08PM +0300, Denis V. Lunev wrote:
> From: Klim Kireev
>
> This dependency is required for adequate support of Parallels images.
> Typically the disk consists of several images which are glued together
> by an XML disk descriptor. Also, the XML hides inside
On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
> The test creates two VMs (vm_a, vm_b), creates a dirty bitmap in
> the first one, does several writes to the corresponding device and
> then migrates vm_a to vm_b with dirty bitmaps.
>
> For now, only migration through shared storage
On 18.12.2017 at 18:14, Thomas Huth wrote:
> Remove the deprecated "-drive boot" and "-hdachs" options and properly
> mark some other deprecated options in the deprecation chapter.
>
> Thomas Huth (3):
> block: Remove the obsolete -drive boot=on|off parameter
> block: Remove the
On 21.12.2017 at 23:44, John Snow wrote:
> I don't think there's a legitimate reason to open directories as if
> they were files. This prevents QEMU from opening and attempting to probe
> a directory inode, which can break in exciting ways. One of those ways
> is lseek on ext4/xfs, which
On 15.12.2017 at 09:04, Fam Zheng wrote:
> Signed-off-by: Fam Zheng
The updated test case passes even without your fix. I can reproduce the
problem manually and verify that patch 1 fixes it, but the test case
seems ineffective.
And having a closer look at the test,
22.12.2017 16:39, Kevin Wolf wrote:
On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
Consider migration with shared storage. Persistent bitmaps are stored
on bdrv_inactivate. Then, on destination
process_incoming_migration_bh() calls bdrv_invalidate_cache_all() which
leads
On 15.12.2017 at 09:04, Fam Zheng wrote:
> Management tools create overlays of running guests with qemu-img:
>
> $ qemu-img create -b /image/in/use.qcow2 -f qcow2 /overlay/image.qcow2
>
> but this doesn't work anymore due to image locking:
>
> qemu-img: /overlay/image.qcow2:
On 21.12.2017 at 23:47, Peter Maydell wrote:
> On 21 December 2017 at 15:26, Kevin Wolf wrote:
> > The following changes since commit 4da5c51cac8363f86ec92dc99c38f9382d617647:
> >
> > Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2017-12-20'
> > into
On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
> Consider migration with shared storage. Persistent bitmaps are stored
> on bdrv_inactivate. Then, on destination
> process_incoming_migration_bh() calls bdrv_invalidate_cache_all() which
> leads to
The following changes since commit 281f327487c9c9b1599f93c589a408bbf4a651b8:
Merge remote-tracking branch 'remotes/vivier/tags/m68k-for-2.12-pull-request'
into staging (2017-12-22 00:11:36 +)
are available in the git repository at:
git://repo.or.cz/qemu/kevin.git tags/for-upstream
for
Commit 1f4ad7d fixed 'qemu-img info' for raw images that are currently
in use as a mirror target. It is not enough for image formats, though,
as these still unconditionally request BLK_PERM_CONSISTENT_READ.
As this permission is geared towards whether the guest-visible data is
consistent, and has
From: John Snow
VPC has some difficulty creating geometries of a particular size.
However, we can indeed force it to use a literal one, so let's
do that for the sake of test 197, which is testing some specific
offsets.
Signed-off-by: John Snow
Reviewed-by:
From: Fam Zheng
Management tools create overlays of running guests with qemu-img:
$ qemu-img create -b /image/in/use.qcow2 -f qcow2 /overlay/image.qcow2
but this doesn't work anymore due to image locking:
qemu-img: /overlay/image.qcow2: Failed to get shared "write" lock
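As a stopgap until the locking fix lands, qemu-img versions of this era
may support skipping the backing-file open entirely during create via
the -u ("unsafe", no backing-chain check) flag; whether a given build
has it should be checked against its own qemu-img documentation. A
sketch, with an explicit -F since the backing file cannot be probed:

```
$ qemu-img create -u -b /image/in/use.qcow2 -F qcow2 -f qcow2 /overlay/image.qcow2
```

Since the backing image is never opened, no shared "write" lock on it
is requested, at the cost of losing the sanity checks on the chain.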
Test that drain sections are correctly propagated through the graph.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 74 +
1 file changed, 74 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
22.12.2017 18:43, Kevin Wolf wrote:
On 22.12.2017 at 15:25, Vladimir Sementsov-Ogievskiy wrote:
22.12.2017 16:39, Kevin Wolf wrote:
On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
Consider migration with shared storage. Persistent bitmaps are stored
on
The device is drained, so there is no point in waiting for requests at
the end of the drained section. Remove the bdrv_drain_recurse() calls
there.
The bdrv_drain_recurse() calls were introduced in commit 481cad48e5e
in order to call the .bdrv_co_drain_end() driver callback. This is now
done by a
Block jobs already paused themselves when their main BlockBackend
entered a drained section. This is not good enough: We also want to
pause a block job and may not submit new requests if, for example, the
mirror target node should be drained.
This implements .drained_begin/end callbacks in
bdrv_drained_begin() waits for the completion of requests in the whole
subtree, but it only actually keeps its immediate bs parameter quiesced
until bdrv_drained_end().
Add a version that keeps the whole subtree drained. As of this commit,
graph changes cannot be allowed during a subtree drained
Add a subtree drain version to the existing test cases.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 57 +
1 file changed, 57 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 6da66ae841..9098b77ab4 100644
--- a/tests/test-bdrv-drain.c
+++
bdrv_drained_begin() doesn't increase bs->quiesce_counter recursively
and also doesn't notify other parent nodes of children, which both means
that the child nodes are not actually drained, and bdrv_drained_begin()
is providing useful functionality only on a single node.
To keep things
From: Fam Zheng
Signed-off-by: Fam Zheng
Signed-off-by: Kevin Wolf
---
include/block/block_int.h | 1 -
block/io.c | 18 --
2 files changed, 19 deletions(-)
diff --git a/include/block/block_int.h
From: Thomas Huth
It's been marked as deprecated since QEMU v2.10.0, and so far nobody
complained that we should keep it, so let's remove this legacy option
now to simplify the code quite a bit.
Signed-off-by: Thomas Huth
Reviewed-by: John Snow
Block jobs must be paused if any of the involved nodes are drained.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 121
1 file changed, 121 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
bdrv_do_drained_begin() restricts the call of parent callbacks and
aio_disable_external() to the outermost drain section, but the block
driver callbacks are always called. bdrv_do_drained_end() must match
this behaviour, otherwise nodes stay drained even if begin/end calls
were balanced.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 80 +
1 file changed, 80 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 690b585b4d..d760e2b243 100644
--- a/tests/test-bdrv-drain.c
+++
Drain requests are propagated to child nodes, parent nodes and directly
to the AioContext. The order in which this happened was different
between all combinations of drain/drain_all and begin/end.
The correct order is to keep children only drained when their parents
are also drained. This means
Commit 15afd94a047 added code to acquire and release the AioContext in
qemuio_command(). This means that the lock is taken twice now in the
call path from hmp_qemu_io(). This causes BDRV_POLL_WHILE() to hang for
any requests issued to nodes in a non-mainloop AioContext.
Dropping the first locking
Now that the bdrv_drain_invoke() calls are pulled up to the callers of
bdrv_drain_recurse(), the 'begin' parameter isn't needed any more.
Signed-off-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
---
block/io.c | 12 ++--
1 file changed, 6
Removing a quorum child node with x-blockdev-change results in a quorum
driver state that cannot be recreated with create options because it
would require a list with gaps. This causes trouble in at least
.bdrv_refresh_filename().
Document this problem so that we won't accidentally mark the
From: Edgar Kaziakhmedov
Since bdrv_co_preadv does all necessary checks, including
reading after the end of the backing file, avoid duplicating
the verification before the bdrv_co_preadv call.
Signed-off-by: Edgar Kaziakhmedov
The existing test is for bdrv_drain_all_begin/end() only. Generalise the
test case so that it can be run for the other variants as well. At the
moment this is only bdrv_drain_begin/end(), but in a while, we'll add
another one.
Also, add a backing file to the test node to test whether the
From: Thomas Huth
Looks like we forgot to announce the deprecation of these options in
the corresponding chapter of the qemu-doc text, so let's do that now.
Signed-off-by: Thomas Huth
Reviewed-by: John Snow
Reviewed-by: Markus Armbruster
This is currently only working correctly for bdrv_drain(), not for
bdrv_drain_all(). Leave a comment for the drain_all case, we'll address
it later.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 45 +
1 file changed, 45
This is in preparation for subtree drains, i.e. drained sections that
affect not only a single node, but recursively all child nodes, too.
Calling the parent callbacks for drain is pointless when we just came
from that parent node recursively and leads to multiple increases of
bs->quiesce_counter
If bdrv_do_drained_begin/end() are called in coroutine context, they
first use a BH to get out of the coroutine context. Call some existing
tests again from a coroutine to cover this code path.
Signed-off-by: Kevin Wolf
---
tests/test-bdrv-drain.c | 59
We need to remember how many of the drain sections a node is in were
recursive (i.e. subtree drain rather than node drain), so that they
can be correctly applied when children are added or removed during the
drained section.
With this change, it is safe to modify the graph even inside a
This change separates bdrv_drain_invoke(), which calls the BlockDriver
drain callbacks, from bdrv_drain_recurse(). Instead, the function
performs its own recursion now.
One reason for this is that bdrv_drain_recurse() can be called multiple
times by bdrv_drain_all_begin(), but the callbacks may
bdrv_drain_all_begin() used to call the .bdrv_co_drain_begin() driver
callback inside its polling loop. This means that how many times it got
called for each node depended on how long it had to poll the event loop.
This is obviously not right and results in nodes that stay drained even
after
From: Thomas Huth
It's not working anymore since QEMU v1.3.0 - time to remove it now.
Signed-off-by: Thomas Huth
Reviewed-by: John Snow
Reviewed-by: Markus Armbruster
Signed-off-by: Kevin Wolf
---
This adds a test case that the BlockDriver callbacks for drain are
called in bdrv_drained_all_begin/end(), and that both of them are called
exactly once.
Signed-off-by: Kevin Wolf
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Eric Blake
---
From: Doug Gale
Add trace output for commands, errors, and undefined behavior.
Add guest error log output for undefined behavior.
Report invalid undefined accesses to MMIO.
Annotate unlikely error checks with unlikely.
Signed-off-by: Doug Gale
Reviewed-by:
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
block/io.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/block/io.c b/block/io.c
index cf780c3cb0..b94740b8ff 100644
--- a/block/io.c
+++ b/block/io.c
@@ -330,6 +330,12 @@ void
Block jobs are already paused using the BdrvChildRole drain callbacks,
so we don't need an additional block_job_pause_all() call.
Signed-off-by: Kevin Wolf
---
block/io.c | 4
tests/test-bdrv-drain.c | 10 --
2 files changed, 4 insertions(+), 10
The bdrv_reopen*() implementation doesn't like it if the graph is
changed between queuing nodes for reopen and actually reopening them
(one of the reasons is that queuing can be recursive).
So instead of draining the device only in bdrv_reopen_multiple(),
require that callers already drained all
Since commit bde70715, base is the only node that is reopened in
commit_start(). This means that the code, which still involves an
explicit BlockReopenQueue, can now be simplified by using bdrv_reopen().
Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
---
On 22.12.2017 at 15:25, Vladimir Sementsov-Ogievskiy wrote:
> 22.12.2017 16:39, Kevin Wolf wrote:
> > On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
> > > Consider migration with shared storage. Persistent bitmaps are stored
> > > on bdrv_inactivate. Then, on
On 12/22/2017 02:53 AM, Vladimir Sementsov-Ogievskiy wrote:
21.12.2017 21:32, Eric Blake wrote:
On 12/21/2017 09:28 AM, Markus Armbruster wrote:
Looks like you forgot to cc: Eric. Fixing...
Or that he did cc me, and the mailman bug ate the cc line because of
my list subscription settings.
On 22.12.2017 at 17:12, Vladimir Sementsov-Ogievskiy wrote:
> 22.12.2017 18:43, Kevin Wolf wrote:
> > On 22.12.2017 at 15:25, Vladimir Sementsov-Ogievskiy wrote:
> > > 22.12.2017 16:39, Kevin Wolf wrote:
> > > > On 12.12.2017 at 17:04, Vladimir Sementsov-Ogievskiy wrote:
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20171222151846.28110-1-kw...@redhat.com
Subject: [Qemu-devel] [PULL v3 00/35] Block layer patches
=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
total=$(git log