Paolo Bonzini pbonz...@redhat.com writes:
On 01/06/2015 13:16, Dmitry Monakhov wrote:
259,0 31 385 0.719283423 10729 Q WS 29376775 + 248 [qemu-io]
259,0 31 388 0.719287600 10729 Q WS 29377023 + 8 [qemu-io]
259,0 31 391
On Mon, Jun 01, 2015 at 09:46:39AM +0800, Fam Zheng wrote:
On Fri, 05/29 13:37, Stefan Hajnoczi wrote:
On Fri, May 29, 2015 at 10:22:13AM +0800, Fam Zheng wrote:
mirror_exit does the replacing, which requires source and target to be
in sync; unfortunately we can't guarantee that before we
From: Laszlo Ersek ler...@redhat.com
Even if board code decides not to request the creation of the FDC (keyed
off board-level factors, to be determined later), we should create the FDC
nevertheless if the user passes '-drive if=floppy' on the command line.
Otherwise '-drive if=floppy' would
On Fr, 2015-05-29 at 16:53 +0200, Michael S. Tsirkin wrote:
On Fri, May 29, 2015 at 09:51:20AM +0200, Gerd Hoffmann wrote:
Make features 64bit wide everywhere. Exception: command line flags
remain 32bit and are copied into the lower 32 bits of host_features at
initialization time.
On
Paolo Bonzini pbonz...@redhat.com writes:
On 13/05/2015 18:46, Denis V. Lunev wrote:
I agree with this. Kernel guys are aware and maybe we will have
the fix after a while... I have heard (not tested) that the performance
loss over multi-queue SSD is around 30%.
I came up with this patch... can
On Mon, Jun 01, 2015 at 09:23:28AM +0200, Gerd Hoffmann wrote:
On Fr, 2015-05-29 at 16:53 +0200, Michael S. Tsirkin wrote:
On Fri, May 29, 2015 at 09:51:20AM +0200, Gerd Hoffmann wrote:
Make features 64bit wide everywhere. Exception: command line flags
remain 32bit and are copied into
On 01/06/2015 12:34, Dmitry Monakhov wrote:
Yes. The improvement is not huge, but it can be detected for old qemu.
unpatched kernel: 728 MiB/sec ± 20 MiB/sec
patched kernel: 748 MiB/sec ± 10 MiB/sec
Ok, so about 3-4%. What does the blktrace look like with
Make features 64bit wide everywhere.
On migration a full 64bit guest_features field is sent if one of the
high bits is set, in addition to the lower 32bit guest_features field
which must stay for compatibility reasons. That way we send the lower
32 feature bits twice, but the code is simpler
On Mon 01 Jun 2015 06:09:17 PM CEST, Max Reitz mre...@redhat.com wrote:
The L2 cache must cover at least two L2 tables, because during COW two
L2 tables are accessed simultaneously.
Reported-by: Alexander Graf ag...@suse.de
Cc: qemu-stable qemu-sta...@nongnu.org
Signed-off-by: Max Reitz
This adds a test case to test 103 for performing a COW operation in a
qcow2 image using an L2 cache with minimal size (which should be at
least two clusters so the COW can access both source and destination
simultaneously).
Signed-off-by: Max Reitz mre...@redhat.com
---
tests/qemu-iotests/103
The L2 cache must cover at least two L2 tables, because during COW two
L2 tables are accessed simultaneously.
Reported-by: Alexander Graf ag...@suse.de
Cc: qemu-stable qemu-sta...@nongnu.org
Signed-off-by: Max Reitz mre...@redhat.com
---
block/qcow2.h | 3 ++-
1 file changed, 2 insertions(+), 1
This series fixes MIN_L2_CACHE_SIZE (which should not be 1, but 2
(clusters)), and introduces a new constant, DEFAULT_L2_CACHE_CLUSTERS,
so the default cache size is no longer always a fixed size in bytes but
is also guaranteed to be able to hold a sane amount of L2 tables (which
was determined to
On 01.06.15 18:09, Max Reitz wrote:
The L2 cache must cover at least two L2 tables, because during COW two
L2 tables are accessed simultaneously.
Reported-by: Alexander Graf ag...@suse.de
Cc: qemu-stable qemu-sta...@nongnu.org
Signed-off-by: Max Reitz mre...@redhat.com
Tested-by:
On Mon 01 Jun 2015 06:09:19 PM CEST, Max Reitz wrote:
If a relatively large cluster size is chosen, the default of 1 MB L2
cache is not really appropriate. In this case, unless overridden by the
user, the default cache size should not be determined by its size in
bytes but by the number of L2
Source and target are in sync when we leave the mirror_run loop, and they
should remain so until bdrv_swap. Before block_job_defer_to_main_loop
was introduced, it was easy to prove that. Now that tricky things
can happen after mirror_run returns and before mirror_exit runs, for
example, ioeventfd
Lock immediately follows aio_context_acquire, so unlock right before
the corresponding aio_context_release.
Signed-off-by: Fam Zheng f...@redhat.com
---
blockdev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/blockdev.c b/blockdev.c
index fdc5a17..a8d5b10 100644
--- a/blockdev.c
+++
For various purposes, BDS users call bdrv_drain or bdrv_drain_all to make sure
there are no pending requests during a series of operations on the BDS. But in
the middle of operations, the caller may 1) yield from a coroutine (mirror_run);
2) defer the next part of work to a BH (mirror_run); 3)
Should more ops be added to differentiate code between dataplane and
non-dataplane, the new saved_ops approach will be cleaner than messing
with N pointers.
Signed-off-by: Fam Zheng f...@redhat.com
Reviewed-by: Max Reitz mre...@redhat.com
---
hw/block/dataplane/virtio-blk.c | 13 -
So that the NBD export will not process more requests.
Signed-off-by: Fam Zheng f...@redhat.com
---
nbd.c | 21 +
1 file changed, 21 insertions(+)
diff --git a/nbd.c b/nbd.c
index 06b501b..854d6a5 100644
--- a/nbd.c
+++ b/nbd.c
@@ -160,6 +160,8 @@ struct NBDExport {