On 06/05/2015 03:45, Fam Zheng wrote:
This is not enough, you also have to do the discard in block/mirror.c,
otherwise the destination image could even become fully provisioned!
I wasn't sure what happens if src and dest have different can_write_zeroes_with_unmap values, but your argument is
On Tue, May 05, 2015 at 11:55:49AM -0400, John Snow wrote:
On 05/05/2015 06:25 AM, Stefan Hajnoczi wrote:
On Wed, Apr 29, 2015 at 06:51:08PM -0400, John Snow wrote:
This is a feature that should be very easy to add on top of the existing
incremental feature, since it's just a difference in how
On 06/05/2015 11:50, Fam Zheng wrote:
#   src can_write_zeroes_with_unmap   target can_write_zeroes_with_unmap
1   true                              true
2   true
Am 05.05.2015 um 19:01 hat Antoni Villalonga geschrieben:
Hi,
This is my first email to this list ;)
I can reproduce this bug with v2.2 and v2.3. I'm not sure about the results
after testing with v2.1 (it doesn't show errors but seems to still be broken).
% qemu-img convert -f raw -O vmdk -o
I just wonder whether bdrv_is_allocated_above works as intended; migrating an image in qcow2 format runs into such a backtrace:
#0  0x7f9e73822c6d in lseek64 () at ../sysdeps/unix/syscall-template.S:82
#1  0x7f9e765f08e4 in find_allocation (bs=<value optimized out>, sector_num=<value optimized out>,
On Wed, 05/06 12:01, Kevin Wolf wrote:
Am 05.05.2015 um 19:01 hat Antoni Villalonga geschrieben:
Hi,
This is my first email to this list ;)
I can reproduce this bug with v2.2 and v2.3. I'm not sure about the results
after testing with v2.1 (doesn't show errors but seems to be still
Preventing device from submitting IO is useful around various nested
poll. Op blocker is a good place to put this flag.
Devices would submit IO requests through blk_* block backend interface,
which calls blk_check_request to check the validity. Return -EBUSY if
the operation is blocked, in which
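The blocking rule described above can be sketched minimally as follows (the struct and function names here are illustrative, not QEMU's real API): a per-backend "blocked" flag that a blk_check_request-style validity check consults, rejecting guest IO with -EBUSY while the operation is blocked.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical miniature of the idea: a flag set around the nested
 * poll, checked before any request is allowed through. */
typedef struct {
    bool io_blocked;
} Backend;

static int check_request(Backend *blk)
{
    if (blk->io_blocked) {
        return -EBUSY;   /* reject guest IO during the nested poll */
    }
    return 0;            /* request may proceed */
}
```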
Reported by Paolo.
Unlike the iohandler in the main loop, iothreads currently process the event
notifier used as the virtio-blk ioeventfd in all nested aio_poll calls. This is
dangerous without proper protection, because guest requests could sneak into
the block layer where they mustn't.
For example, a QMP
We don't want new requests from the guest, so block the operation around the
nested poll.
Signed-off-by: Fam Zheng f...@redhat.com
---
block/io.c | 12
1 file changed, 12 insertions(+)
diff --git a/block/io.c b/block/io.c
index 1ce62c4..d369de3 100644
--- a/block/io.c
+++ b/block/io.c
The qcow2 L2/refcount cache contains one separate table for each cache
entry. Doing one allocation per table adds unnecessary overhead and it
also requires us to store the address of each table separately.
Since the size of the cache is constant during its lifetime, it's
better to have an array
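The single-allocation idea can be sketched like this (illustrative names, not the actual qcow2 structures): one contiguous buffer holds every table, so entry i's table is just base address plus i times the table size, and no per-table pointers need to be stored.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature of the change: one allocation for all
 * cached tables instead of one malloc per table. */
typedef struct {
    size_t table_size;
    size_t num_tables;
    unsigned char *table_array;   /* single contiguous allocation */
} Cache;

static int cache_init(Cache *c, size_t table_size, size_t num_tables)
{
    c->table_size = table_size;
    c->num_tables = num_tables;
    /* calloc takes size_t operands, avoiding the int overflow the
     * cover letter mentions */
    c->table_array = calloc(num_tables, table_size);
    return c->table_array ? 0 : -1;
}

static void *cache_table(Cache *c, size_t i)
{
    /* address computed, not stored */
    return c->table_array + i * c->table_size;
}
```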
A cache miss means that the whole array was traversed and the entry
we were looking for was not found, so there's no need to traverse it
again in order to select an entry to replace.
Signed-off-by: Alberto Garcia be...@igalia.com
---
block/qcow2-cache.c | 45
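The one-pass idea described above can be illustrated with a small sketch (not the actual qcow2 code): while scanning for the wanted offset, also track the least-recently-used entry, so a miss already knows the victim without a second traversal.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    int64_t offset;        /* -1 means the slot is empty */
    uint64_t lru_counter;
} CacheEntry;

/* Returns the matching index on a hit, or the replacement victim's
 * index on a miss; *hit tells the caller which case occurred. */
static int lookup_or_victim(CacheEntry *e, int n, int64_t offset, int *hit)
{
    int victim = 0;
    for (int i = 0; i < n; i++) {
        if (e[i].offset == offset) {
            *hit = 1;
            return i;
        }
        /* remember the least-recently-used entry as we go */
        if (e[i].lru_counter < e[victim].lru_counter) {
            victim = i;
        }
    }
    *hit = 0;
    return victim;
}
```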
On 04.05.2015 21:39, Eric Blake wrote:
On 05/04/2015 01:15 PM, Max Reitz wrote:
Expose the two new options for controlling the memory usage of the
overlap check implementation via QAPI.
Signed-off-by: Max Reitz mre...@redhat.com
---
qapi/block-core.json | 37
Since all tables are now stored together, it is possible to obtain
the position of a particular table directly from its address, so the
operation becomes O(1).
Signed-off-by: Alberto Garcia be...@igalia.com
---
block/qcow2-cache.c | 32 +++-
1 file changed, 15
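The O(1) computation follows from the contiguous layout; a sketch with illustrative names (not the real qcow2 identifiers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* With all tables in one array, a table's index is recovered from
 * its address alone.  Arithmetic is done on uint8_t * rather than
 * void *, as noted in the series changelog. */
static size_t table_index(const uint8_t *table_array, size_t table_size,
                          const void *table)
{
    return (size_t)((const uint8_t *)table - table_array) / table_size;
}
```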
New version of the qcow2 cache patches:
v2:
- Don't do pointer arithmetic on void *
- Rename table_addr() to qcow2_cache_get_table_addr()
- Add qcow2_cache_get_table_idx()
- Cast cache size to size_t to prevent overflows
- Make qcow2_cache_put() a void function
- Don't store the cluster size in
Fix pointer declaration to make it consistent with the rest of the
code.
Signed-off-by: Alberto Garcia be...@igalia.com
---
block/qcow2-cache.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
index 0f629c4..bea43c1 100644
---
On 06/05/2015 14:20, Fam Zheng wrote:
Does non-dataplane need to do anything, since it uses iohandlers rather
than aio_set_event_notifier_handler?
I guess it's not needed for this specific bug. See this as an attempt at a
general-purpose pause mechanism for the device, as an investment for the
On 04.05.2015 21:32, Eric Blake wrote:
On 05/04/2015 01:15 PM, Max Reitz wrote:
Later, a mechanism to set a limit on how much memory may be used for the
overlap prevention structures will be introduced. If that limit is about
to be exceeded, a QMP event should be emitted. This very event is
The current cache algorithm traverses the array starting always from
the beginning, so the average number of comparisons needed to perform
a lookup is proportional to the size of the array.
By using a hash of the offset as the starting point, lookups are
faster and independent from the array
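The hashed starting point can be sketched as follows (illustrative names and hash, not the actual qcow2 code): begin the scan at a position derived from the offset and wrap around, so a hot entry is usually found near its hash slot regardless of the array size.

```c
#include <assert.h>
#include <stdint.h>

/* Returns the index holding `offset`, or -1 on a miss.  The scan
 * starts at a hash of the offset instead of index 0. */
static int lookup_from_hash(const int64_t *offsets, int n, int64_t offset,
                            unsigned cluster_bits)
{
    int start = (int)((offset >> cluster_bits) % n);
    for (int k = 0; k < n; k++) {
        int i = (start + k) % n;   /* wrap around the array */
        if (offsets[i] == offset) {
            return i;              /* hit */
        }
    }
    return -1;                     /* miss: full traversal done */
}
```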
Before a freed cluster can be reused, pending discards for this cluster
must be processed.
The original assumption was that this was not a problem because discards
are only cached during discard/write zeroes operations, which are
synchronous so that no concurrent write requests can cause cluster
Signed-off-by: Fam Zheng f...@redhat.com
---
blockdev.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/blockdev.c b/blockdev.c
index 5eaf77e..859fa2e 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1398,6 +1398,7 @@ typedef struct ExternalSnapshotState {
BlockDriverState *old_bs;
Forward the call to bdrv_op_blocker_add_notifier.
Signed-off-by: Fam Zheng f...@redhat.com
---
block.c | 4 ++--
block/block-backend.c | 6 ++
include/sysemu/block-backend.h | 2 ++
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/block.c
The current algorithm to evict entries from the cache always gives
preference to those in the lowest positions. As the size of the cache
increases, the chances of the later elements being removed decrease
exponentially.
In a scenario with random I/O and lots of cache misses, entries in
On Thu, Apr 30, 2015 at 01:11:45PM +0300, Alberto Garcia wrote:
Fix pointer declaration to make it consistent with the rest of the
code.
Signed-off-by: Alberto Garcia be...@igalia.com
---
block/qcow2-cache.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
:)
Reviewed-by:
On 06/05/2015 18:12, Max Reitz wrote:
I very much think it would be worth fixing, if there wasn't the problem
with legitimate use cases throwing unnecessary warnings.
Right.
I remember having a discussion with Kevin about this series (v1)
regarding qcow2 on LVM; I think my point was that
On 06.05.2015 18:20, Paolo Bonzini wrote:
On 06/05/2015 18:12, Max Reitz wrote:
I very much think it would be worth fixing, if there wasn't the problem
with legitimate use cases throwing unnecessary warnings.
Right.
I remember having a discussion with Kevin about this series (v1)
regarding
On 06.05.2015 17:30, Paolo Bonzini wrote:
On 06/05/2015 15:04, Max Reitz wrote:
Introducing a warning for a normal QEMU invocation is a bit weird.
What is the point of this series? Were users confused that they hit
ENOSPC?
Users were confused when exporting a qcow2 image using nbd-server
CC-ing qemu-block and Stefan Weil (maintainer of vdi).
On 06.05.2015 19:23, phoeagon wrote:
Thanks for your input.
So I changed it to:
1. Only call bdrv_flush when bdrv_pwrite was successful.
2. Update the return value of vdi_co_write only if bdrv_flush was
unsuccessful.
One of both
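The two rules above reduce to a small error-propagation pattern; a sketch with made-up names (the real code calls bdrv_pwrite and bdrv_flush inside vdi_co_write):

```c
#include <assert.h>
#include <errno.h>

/* Combines a write result and a flush result per the two rules:
 * a failed write returns immediately (the real code would then skip
 * the flush entirely), and only a failed flush overrides a
 * successful write's return value. */
static int write_then_flush(int write_ret, int flush_ret)
{
    int ret = write_ret;
    if (ret < 0) {
        return ret;          /* rule 1: don't flush after a failed write */
    }
    if (flush_ret < 0) {
        ret = flush_ret;     /* rule 2: only a failed flush updates ret */
    }
    return ret;
}
```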
On 05/02/2015 12:47 AM, John Snow wrote:
On 04/03/2015 07:05 AM, Paolo Bonzini wrote:
On 03/04/2015 12:01, Wen Congyang wrote:
Signed-off-by: Wen Congyang we...@cn.fujitsu.com
Signed-off-by: zhanghailiang zhang.zhanghaili...@huawei.com
Signed-off-by: Gonglei arei.gong...@huawei.com
---
From: phoeagon phoea...@gmail.com
In reference to
b0ad5a455d7e5352d4c86ba945112011dbeadfb8~078a458e077d6b0db262c4b05fee51d01de2d1d2,
metadata writes to qcow2/cow/qcow/vpc/vmdk are all synced prior to succeeding
writes.
bdrv_flush is called only when the write is successful.
Signed-off-by:
On 06/05/2015 15:04, Max Reitz wrote:
Introducing a warning for a normal QEMU invocation is a bit weird.
What is the point of this series? Were users confused that they hit
ENOSPC?
Users were confused when exporting a qcow2 image using nbd-server
instead of qemu-img, and then accessing
On Tue, May 05, 2015 at 03:06:52PM +0200, Alberto Garcia wrote:
On Fri 01 May 2015 04:31:52 PM CEST, Stefan Hajnoczi wrote:
int qcow2_cache_put(BlockDriverState *bs, Qcow2Cache *c, void **table)
{
-int i;
+int i = (*table - c->table_array) / c->table_size;
-for (i = 0;
On Wed, 05/06 16:22, Paolo Bonzini wrote:
On 06/05/2015 13:23, Fam Zheng wrote:
void bdrv_op_block(BlockDriverState *bs, BlockOpType op, Error *reason)
{
BdrvOpBlocker *blocker;
assert((int) op >= 0 && op < BLOCK_OP_TYPE_MAX);
+bdrv_op_blocker_notify(bs, op, reason,
On Tue, May 05, 2015 at 01:20:19PM +0200, Kevin Wolf wrote:
Am 05.05.2015 um 12:28 hat Stefan Hajnoczi geschrieben:
On Mon, May 04, 2015 at 12:58:13PM +0200, Kevin Wolf wrote:
Am 01.05.2015 um 16:23 hat Stefan Hajnoczi geschrieben:
On Thu, Apr 30, 2015 at 01:11:40PM +0300, Alberto Garcia