On Wed, Apr 29, 2020 at 12:18:13PM +0300, Roman Kagan wrote:
> Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> 32-bit for logical_block_size, physical_block_size, and min_io_size.
> However, the properties in BlockConf are defined as uint16_t limiting
> the values to 32768.
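For illustration, a standalone sketch (not QEMU code) of why a uint16_t property tops out at 32768 rather than 65535: block sizes are constrained to powers of two (an assumption based on how block sizes are normally validated), and 1 << 15 is the largest power of two that fits.

```c
#include <assert.h>
#include <stdint.h>

/* Block sizes must be powers of two, so a uint16_t property cannot
 * express anything larger than 1 << 15 = 32768, even though
 * UINT16_MAX itself is 65535. */
static uint32_t max_pow2_in_uint16(void)
{
    uint32_t p = 1;
    while ((p << 1) <= UINT16_MAX) {
        p <<= 1;
    }
    return p;
}
```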
From: Philippe Mathieu-Daudé
List softmmu fuzz targets in 'make help' output:
$ make help
...
Architecture specific targets:
aarch64-softmmu/all - Build for aarch64-softmmu
aarch64-softmmu/fuzz - Build fuzzer for aarch64-softmmu
alpha-softmmu/all -
From: Philippe Mathieu-Daudé
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20200514143433.18569-4-phi...@redhat.com
Signed-off-by: Stefan Hajnoczi
---
tests/qtest/fuzz/i440fx_fuzz.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tests/qtest/fuzz/i440fx_fuzz.c
On 18.05.2020 at 20:26, John Snow wrote:
>
>
> On 5/15/20 4:48 AM, Kevin Wolf wrote:
> > On 14.05.2020 at 22:21, John Snow wrote:
> >>
> >>
> >> On 5/12/20 3:49 PM, Philippe Mathieu-Daudé wrote:
> >>> Handlers don't need to modify the IDEDMA structure.
> >>> Make it const.
>
On Mon, May 18, 2020 at 15:49:02 -0500, Eric Blake wrote:
> On 5/13/20 10:49 PM, John Snow wrote:
[...]
> > +
> > +/* NB: new bitmap is anonymous and enabled */
> > +cluster_size = bdrv_dirty_bitmap_granularity(target_bitmap);
> > +new_bitmap = bdrv_create_dirty_bitmap(bs,
On Mon, May 18, 2020 at 10:53:59AM +0100, Dr. David Alan Gilbert wrote:
> * Dima Stepanov (dimas...@yandex-team.ru) wrote:
> > On Mon, May 18, 2020 at 10:50:39AM +0800, Jason Wang wrote:
> > >
> > > On 2020/5/16 12:54 AM, Dima Stepanov wrote:
> > > >On Thu, May 14, 2020 at 03:34:24PM +0800, Jason
On 18.05.2020 at 18:12, Thomas Huth wrote:
> On 15/05/2020 23.15, Vladimir Sementsov-Ogievskiy wrote:
> > Rename bitmaps migration tests and move them to tests subdirectory to
> > demonstrate new human-friendly test naming.
> >
> > Signed-off-by: Vladimir Sementsov-Ogievskiy
> > ---
> >
The io_uring file descriptor monitoring implementation has an internal
list of fd handlers that are pending submission to io_uring.
fdmon_io_uring_destroy() deletes all fd handlers on the list.
Don't delete fd handlers directly in fdmon_io_uring_destroy() for two
reasons:
1. This duplicates the
From: Philippe Mathieu-Daudé
These typedefs are not used. Use a simple structure,
remove the typedefs.
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20200514143433.18569-5-phi...@redhat.com
Signed-off-by: Stefan Hajnoczi
---
tests/qtest/fuzz/i440fx_fuzz.c | 10 --
1 file changed,
The following changes since commit 013a18edbbc59cdad019100c7d03c0494642b74c:
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200514' into staging (2020-05-14 16:17:55 +0100)
are available in the Git repository at:
https://github.com/stefanha/qemu.git
From: Philippe Mathieu-Daudé
Extract the generic pciconfig_fuzz_qos() method from
i440fx_fuzz_qos(). This will help to write tests not
specific to the i440FX controller.
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20200514143433.18569-6-phi...@redhat.com
Signed-off-by: Stefan Hajnoczi
From: Philippe Mathieu-Daudé
Extract generic ioport_fuzz_qtest() method from
i440fx_fuzz_qtest(). This will help to write tests
not specific to the i440FX controller.
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20200514143433.18569-7-phi...@redhat.com
Signed-off-by: Stefan Hajnoczi
---
From: Philippe Mathieu-Daudé
Some devices' availability depends on CONFIG options.
Use these options to only link tests when the requested device
is available.
Signed-off-by: Philippe Mathieu-Daudé
Message-id: 20200514143433.18569-2-phi...@redhat.com
Signed-off-by: Stefan Hajnoczi
---
The glib event loop does not call fdmon_io_uring_wait() so fd handlers
waiting to be submitted build up in the list. There is no benefit in
using io_uring when the glib GSource is being used, so disable it
instead of implementing a more complex fix.
This fixes a memory leak where AioHandlers
On Wed, May 13, 2020 at 01:56:18PM +0800, Jason Wang wrote:
>
> On 2020/5/13 12:15 PM, Michael S. Tsirkin wrote:
> >On Tue, May 12, 2020 at 12:35:30PM +0300, Dima Stepanov wrote:
> >>On Tue, May 12, 2020 at 11:32:50AM +0800, Jason Wang wrote:
> >>>On 2020/5/11 5:25 PM, Dima Stepanov wrote:
> On
On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
> 27.04.2020 17:39, Vladimir Sementsov-Ogievskiy wrote:
> > It's safer to expand the in_flight request to start before entering the
> > coroutine in synchronous wrappers, due to the following (theoretical)
> > problem:
> >
> > Consider
18.05.2020 23:36, Eric Blake wrote:
On 5/15/20 7:40 AM, Vladimir Sementsov-Ogievskiy wrote:
The important thing for bitmap migration is to select the destination block
node to obtain the migrated bitmap.
Pre-patch, on the source we use bdrv_get_device_or_node_name() to identify
the node, and on the target we
On 02.05.2020 at 00:00, Eric Blake wrote:
> On 4/27/20 9:39 AM, Vladimir Sementsov-Ogievskiy wrote:
> > It's safer to expand the in_flight request to start before entering the
> > coroutine in synchronous wrappers and end after the BDRV_POLL_WHILE loop.
> > Note that qemu_coroutine_enter may only
27.04.2020 17:38, Vladimir Sementsov-Ogievskiy wrote:
Hi all!
This is inspired by Kevin's
"block: Fix blk->in_flight during blk_wait_while_drained()" series.
So, like it's now done for block-backends, let's expand
in_flight-protected sections for bdrv_ interfaces, including
coroutine_enter and
On Fri, May 15, 2020 at 07:54:57PM +0300, Dima Stepanov wrote:
> On Thu, May 14, 2020 at 03:34:24PM +0800, Jason Wang wrote:
> >
> > On 2020/5/13 5:47 PM, Dima Stepanov wrote:
> > >>> case CHR_EVENT_CLOSED:
> > >>> /* a close event may happen during a read/write, but vhost
> > >>>
19.05.2020 13:52, Kevin Wolf wrote:
On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
27.04.2020 17:39, Vladimir Sementsov-Ogievskiy wrote:
It's safer to expand the in_flight request to start before entering the
coroutine in synchronous wrappers, due to the following (theoretical)
* Dima Stepanov (dimas...@yandex-team.ru) wrote:
> On Mon, May 18, 2020 at 10:53:59AM +0100, Dr. David Alan Gilbert wrote:
> > * Dima Stepanov (dimas...@yandex-team.ru) wrote:
> > > On Mon, May 18, 2020 at 10:50:39AM +0800, Jason Wang wrote:
> > > >
> > > > On 2020/5/16 12:54 AM, Dima Stepanov
19.05.2020 03:27, John Snow wrote:
On 5/18/20 3:33 PM, Vladimir Sementsov-Ogievskiy wrote:
18.05.2020 21:23, John Snow wrote:
On 5/18/20 2:14 PM, Vladimir Sementsov-Ogievskiy wrote:
14.05.2020 08:53, John Snow wrote:
move python/qemu/*.py to python/qemu/lib/*.py.
To create a namespace
On 27.04.2020 at 16:39, Vladimir Sementsov-Ogievskiy wrote:
> It's safer to expand the in_flight request to start before entering the
> coroutine in synchronous wrappers, due to the following (theoretical)
> problem:
>
> Consider write.
> It's possible that qemu_coroutine_enter only schedules
On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
> 19.05.2020 13:52, Kevin Wolf wrote:
> > On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
> > > 27.04.2020 17:39, Vladimir Sementsov-Ogievskiy wrote:
> > > > It's safer to expand the in_flight request to start
19.05.2020 14:16, Kevin Wolf wrote:
On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 13:52, Kevin Wolf wrote:
On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
27.04.2020 17:39, Vladimir Sementsov-Ogievskiy wrote:
It's safer to expand
19.05.2020 12:07, Kevin Wolf wrote:
On 18.05.2020 at 18:12, Thomas Huth wrote:
On 15/05/2020 23.15, Vladimir Sementsov-Ogievskiy wrote:
Rename bitmaps migration tests and move them to tests subdirectory to
demonstrate new human-friendly test naming.
Signed-off-by: Vladimir
On 19.05.2020 at 13:32, Vladimir Sementsov-Ogievskiy wrote:
> 19.05.2020 12:07, Kevin Wolf wrote:
> > On 18.05.2020 at 18:12, Thomas Huth wrote:
> > > On 15/05/2020 23.15, Vladimir Sementsov-Ogievskiy wrote:
> > > > Rename bitmaps migration tests and move them to tests
On Mon, 18 May 2020 at 18:07, Kevin Wolf wrote:
>
> The following changes since commit debe78ce14bf8f8940c2bdf3ef387505e9e035a9:
>
> Merge remote-tracking branch 'remotes/rth/tags/pull-fpu-20200515' into
> staging (2020-05-15 19:51:16 +0100)
>
> are available in the Git repository at:
>
>
14.05.2020 17:26, Kevin Wolf wrote:
On 14.05.2020 at 15:21, Thomas Lamprecht wrote:
On 5/12/20 4:43 PM, Kevin Wolf wrote:
Stefan (Reiter), after looking a bit closer at this, I think there is no
bug in QEMU, but the bug is in your coroutine code that calls block
layer functions
On 19.05.2020 15:32, Vladimir Sementsov-Ogievskiy wrote:
14.05.2020 17:26, Kevin Wolf wrote:
On 14.05.2020 at 15:21, Thomas Lamprecht wrote:
On 5/12/20 4:43 PM, Kevin Wolf wrote:
Stefan (Reiter), after looking a bit closer at this, I think there
is no
bug in QEMU, but the bug is
On 19.05.2020 at 15:54, Denis Plotnikov wrote:
>
>
> On 19.05.2020 15:32, Vladimir Sementsov-Ogievskiy wrote:
> > 14.05.2020 17:26, Kevin Wolf wrote:
> > > On 14.05.2020 at 15:21, Thomas Lamprecht wrote:
> > > > On 5/12/20 4:43 PM, Kevin Wolf wrote:
> > > > > Stefan (Reiter),
Include actions for --add, --remove, --clear, --enable, --disable, and
--merge (note that --clear is a bit of fluff, because the same can be
accomplished by removing a bitmap and then adding a new one in its
place, but it matches the QMP commands that exist). Listing is omitted,
because it does not
In qemu_luring_poll_cb() we are not using the cqe peeked from the
CQ ring. We are using io_uring_peek_cqe() only to see if there
are cqes ready, so we can replace it with io_uring_cq_ready().
Signed-off-by: Stefano Garzarella
---
block/io_uring.c | 9 +++--
1 file changed, 3 insertions(+),
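As a standalone sketch of the idea (mocked ring indices, not liburing's actual structures): io_uring_cq_ready() boils down to comparing the producer and consumer indices, which is all that is needed when the peeked cqe itself goes unused.

```c
#include <assert.h>
#include <stdint.h>

/* Mocked CQ ring indices; stand-ins for liburing's internal ring. */
struct mock_ring {
    uint32_t head;   /* consumer side */
    uint32_t tail;   /* producer side */
};

/* Analogue of io_uring_cq_ready(): how many completions are waiting.
 * Unsigned wraparound keeps tail - head correct even after overflow. */
static unsigned mock_cq_ready(const struct mock_ring *r)
{
    return r->tail - r->head;
}
```

A non-zero return means "there are cqes ready", which is the only fact the polling callback needed from the peek.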
19.05.2020 14:25, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 14:16, Kevin Wolf wrote:
On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 13:52, Kevin Wolf wrote:
On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
27.04.2020 17:39, Vladimir
19.05.2020 17:33, Kevin Wolf wrote:
On 19.05.2020 at 16:01, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 14:25, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 14:16, Kevin Wolf wrote:
On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 13:52, Kevin Wolf
On 19.05.2020 at 16:01, Vladimir Sementsov-Ogievskiy wrote:
> 19.05.2020 14:25, Vladimir Sementsov-Ogievskiy wrote:
> > 19.05.2020 14:16, Kevin Wolf wrote:
> > > On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
> > > > 19.05.2020 13:52, Kevin Wolf wrote:
> > > > > Am
On Tue, 19 May 2020 at 09:01, Stefan Hajnoczi wrote:
>
> The following changes since commit 013a18edbbc59cdad019100c7d03c0494642b74c:
>
> Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200514' into staging (2020-05-14 16:17:55 +0100)
>
> are available in the Git
On 5/19/20 10:48 AM, Vladimir Sementsov-Ogievskiy wrote:
The other option is doing what you suggested. There is nothing in the
qcow2 on-disk format that would prevent this, but we would have to
extend the qcow2 driver to allow I/O to inactive L1 tables. This sounds
like a non-trivial amount of
On 29.04.2020 at 11:18, Roman Kagan wrote:
> Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> 32-bit for logical_block_size, physical_block_size, and min_io_size.
> However, the properties in BlockConf are defined as uint16_t limiting
> the values to 32768.
>
>
On 19.05.2020 17:18, Kevin Wolf wrote:
On 19.05.2020 at 15:54, Denis Plotnikov wrote:
On 19.05.2020 15:32, Vladimir Sementsov-Ogievskiy wrote:
14.05.2020 17:26, Kevin Wolf wrote:
On 14.05.2020 at 15:21, Thomas Lamprecht wrote:
On 5/12/20 4:43 PM, Kevin Wolf wrote:
On 19.05.2020 at 17:05, Denis Plotnikov wrote:
> On 19.05.2020 17:18, Kevin Wolf wrote:
> > On 19.05.2020 at 15:54, Denis Plotnikov wrote:
> > >
> > > On 19.05.2020 15:32, Vladimir Sementsov-Ogievskiy wrote:
> > > > 14.05.2020 17:26, Kevin Wolf wrote:
> > > > > Am 14.05.2020 um
19.05.2020 18:29, Kevin Wolf wrote:
On 19.05.2020 at 17:05, Denis Plotnikov wrote:
On 19.05.2020 17:18, Kevin Wolf wrote:
On 19.05.2020 at 15:54, Denis Plotnikov wrote:
On 19.05.2020 15:32, Vladimir Sementsov-Ogievskiy wrote:
14.05.2020 17:26, Kevin Wolf wrote:
Am
As recently documented [1], io_uring_enter(2) syscall can return an
error (errno=EINTR) if the operation was interrupted by a delivery
of a signal before it could complete.
This should happen when IORING_ENTER_GETEVENTS flag is used, for
example during io_uring_submit_and_wait() or during
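A minimal self-contained sketch of the corresponding fix: treat EINTR as "try again". Here mock_enter() is a stand-in for the real io_uring_enter(2)-based call, failing once as if a signal arrived.

```c
#include <assert.h>
#include <errno.h>

static int attempts;

/* Stand-in for the io_uring_enter(2)-based submit/wait call: fails
 * with -EINTR on the first attempt, as if a signal arrived. */
static int mock_enter(void)
{
    return attempts++ == 0 ? -EINTR : 0;
}

/* The fix: -EINTR means the syscall was interrupted before it could
 * complete, so simply resubmit rather than reporting an error. */
static int submit_and_wait(void)
{
    int ret;
    do {
        ret = mock_enter();
    } while (ret == -EINTR);
    return ret;
}
```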
Missing "Signed-off-by: Gerd Hoffmann ",
otherwise:
Reviewed-by: Philippe Mathieu-Daudé
On 5/15/20 5:04 PM, Gerd Hoffmann wrote:
---
include/hw/block/fdc.h | 1 +
include/hw/i386/pc.h | 1 -
hw/block/fdc.c | 26 +-
hw/i386/pc.c | 25
ping
On Wed 15 Apr 2020 09:02:07 PM CEST, Alberto Garcia wrote:
> Although we cannot create these images with qemu-img it is still
> possible to do it using an external tool. QEMU should refuse to open
> them until the data-file-raw bit is cleared with 'qemu-img check'.
>
> Signed-off-by: Alberto
On 5/19/20 12:52 PM, Eric Blake wrote:
On 5/19/20 11:18 AM, Eric Blake wrote:
Include actions for --add, --remove, --clear, --enable, --disable, and
--merge (note that --clear is a bit of fluff, because the same can be
+ case 'g':
+ granularity = cvtnum(optarg);
+
On 5/19/20 12:56 PM, Vladimir Sementsov-Ogievskiy wrote:
We have a few bdrv_*() functions that can either spawn a new coroutine
and wait for it with BDRV_POLL_WHILE() or use a fastpath if they are
already running in a coroutine. All of them duplicate basically the
same code.
Factor
On 5/19/20 12:56 PM, Masayoshi Mizuma wrote:
Hello,
I would like to discard any changes once the qemu guest OS is done.
I can do that with snapshot and drive option.
However, snapshot option doesn't work for the device which set by
blockdev option like as:
$QEMU --enable-kvm \
-m 1024
Include actions for --add, --remove, --clear, --enable, --disable, and
--merge (note that --clear is a bit of fluff, because the same can be
accomplished by removing a bitmap and then adding a new one in its
place, but it matches the QMP commands that exist). Listing is omitted,
because it does not
Hi all!
v2:
01: wording, grammar, keep comment
02-03: add Kevin's r-bs
05: test-output rebased on compression type qcow2 extension
I wanted to understand what the real difference is between
bdrv_block_status_above
and bdrv_is_allocated_above; IMHO bdrv_is_allocated_above should work
In order to reuse bdrv_common_block_status_above in
bdrv_is_allocated_above, let's support include_base parameter.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
---
block/io.c | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git
bdrv_co_block_status_above has several problems with handling short
backing files:
1. With want_zeros=true, it may return ret with BDRV_BLOCK_ZERO but
without the BDRV_BLOCK_ALLOCATED flag, when actually the short backing file
which produces these after-EOF zeros is inside the requested backing
sequence.
2.
We are going to reuse bdrv_common_block_status_above in
bdrv_is_allocated_above. bdrv_is_allocated_above may be called with
include_base == false and still bs == base (e.g. from img_rebase()).
So, support this corner case.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: Kevin Wolf
19.05.2020 23:21, Eric Blake wrote:
On 5/19/20 2:54 PM, Vladimir Sementsov-Ogievskiy wrote:
This leads to the following effect:
./qemu-img create -f qcow2 base.qcow2 2M
./qemu-io -c "write -P 0x1 0 2M" base.qcow2
./qemu-img create -f qcow2 -b base.qcow2 mid.qcow2 1M
./qemu-img create -f
QEMU block drivers are supposed to support aio_poll() from I/O
completion callback functions. This means completion processing must be
re-entrant.
The standard approach is to schedule a BH during completion processing
and cancel it at the end of processing. If aio_poll() is invoked by a
callback
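The pattern can be sketched with toy stand-ins for QEMU's bottom-half API (the real code would use aio_bh_new() and friends; the names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for QEMU's bottom-half (BH) scheduling. */
static bool bh_scheduled;
static int rounds;

static void bh_schedule(void) { bh_scheduled = true; }
static void bh_cancel(void)   { bh_scheduled = false; }

/* Completion processing schedules the BH up front, so that a nested
 * aio_poll() from a completion callback re-enters processing via the
 * BH instead of recursing, then cancels it once this invocation has
 * drained everything itself. */
static void process_completions(void)
{
    bh_schedule();
    rounds++;        /* ... reap CQ entries, invoke callbacks ... */
    bh_cancel();
}
```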
This series allows aio_poll() to work from I/O request completion callbacks.
QEMU block drivers are supposed to support this because some code paths rely on
this behavior.
There was no measurable performance difference with nested aio_poll() support.
This patch series also contains cleanups that
Do not access a CQE after incrementing q->cq.head and releasing q->lock.
It is unlikely that this causes problems in practice but it's a latent
bug.
The reason why it should be safe at the moment is that completion
processing is not re-entrant and the CQ doorbell isn't written until the
end of
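A self-contained sketch of the safe pattern (the struct and ring here are mocks, not QEMU's actual NVMe structures): copy the entry out of the ring before advancing the head.

```c
#include <assert.h>
#include <stdint.h>

#define CQ_SIZE 4

struct mock_cqe {
    uint32_t cid;
    int32_t  status;
};

static struct mock_cqe cq[CQ_SIZE];
static unsigned cq_head;

/* Copy the entry *before* advancing cq_head: once the head moves (and
 * any lock is dropped), the slot may be overwritten by the device, so
 * the in-ring entry must not be dereferenced again. */
static struct mock_cqe pop_cqe(void)
{
    struct mock_cqe c = cq[cq_head % CQ_SIZE];  /* local copy first */
    cq_head++;                                  /* slot is now reusable */
    return c;
}
```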
A lot of CPU time is spent simply locking/unlocking q->lock during
polling. Check for completion outside the lock to make q->lock disappear
from the profile.
Signed-off-by: Stefan Hajnoczi
---
block/nvme.c | 12
1 file changed, 12 insertions(+)
diff --git a/block/nvme.c
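The idea can be sketched with C11 atomics and a toy queue (the names are illustrative, not block/nvme.c's actual fields): peek at the producer index lock-free and only take the lock when there may be work.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy queue state; stand-ins for the driver's q->cq and q->lock. */
static atomic_uint cq_tail;     /* advanced by the "device" */
static unsigned cq_head;        /* advanced by completion processing */
static bool lock_taken;

/* Polling callback: check the producer index without the lock so an
 * idle poll never touches the lock at all; lock only to reap. */
static bool poll_cb(void)
{
    if (atomic_load(&cq_tail) == cq_head) {
        return false;                    /* nothing ready, lock-free */
    }
    lock_taken = true;                   /* lock(q->lock) */
    cq_head = atomic_load(&cq_tail);     /* process completions */
    lock_taken = false;                  /* unlock(q->lock) */
    return true;
}
```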
Passing around both BDRVNVMeState and NVMeQueuePair is unwieldy. Reduce
the number of function arguments by keeping the BDRVNVMeState pointer in
NVMeQueuePair. This will come in handy when a BH is introduced in a
later patch and only one argument can be passed to it.
Signed-off-by: Stefan
Existing users access free_req_queue under q->lock. Document this.
Signed-off-by: Stefan Hajnoczi
---
block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/nvme.c b/block/nvme.c
index 3ad4f27e1c..e32bff26ff 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -57,7
nvme_process_completion() explicitly checks cid so the assertion that
follows is always true:
if (cid == 0 || cid > NVME_QUEUE_SIZE) {
...
continue;
}
assert(cid <= NVME_QUEUE_SIZE);
Signed-off-by: Stefan Hajnoczi
---
block/nvme.c | 1 -
1 file changed, 1 deletion(-)
diff
On 5/19/20 11:18 AM, Eric Blake wrote:
Include actions for --add, --remove, --clear, --enable, --disable, and
--merge (note that --clear is a bit of fluff, because the same can be
+case 'g':
+granularity = cvtnum(optarg);
+if (granularity < 0) {
+
bdrv_is_allocated_above wrongly handles short backing files: it reports
after-EOF space as UNALLOCATED, which is wrong: on read the data is
generated at the level of the short backing file (if all overlays have
unallocated areas at that place).
Reusing bdrv_common_block_status_above fixes the issue
These cases are fixed by previous patches around block_status and
is_allocated.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/274 | 20
tests/qemu-iotests/274.out | 65 ++
2 files changed, 85 insertions(+)
diff --git
There are three issues with the current NVMeRequest->busy field:
1. The busy field is accidentally accessed outside q->lock when request
submission fails.
2. Waiters on free_req_queue are not woken when a request is returned
early due to submission failure.
3. Finding a free request involves
Ping
The whole series was reviewed by Eric, with only one grammar fix needed in
the 02 commit message (and a possible drop of ret2, up to the maintainer).
07.05.2020 11:47, Vladimir Sementsov-Ogievskiy wrote:
Hi all!
v2 (by Eric's review):
01: moved to the start of the series, add Eric's r-b
02: new
We have a few bdrv_*() functions that can either spawn a new coroutine
and wait for it with BDRV_POLL_WHILE() or use a fastpath if they are
already running in a coroutine. All of them duplicate basically the
same code.
Factor the common code into a new function bdrv_run_co().
Signed-off-by:
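A toy model of the factored-out helper (with stand-ins for qemu_in_coroutine(), qemu_coroutine_enter() and BDRV_POLL_WHILE(); these are not the real signatures):

```c
#include <assert.h>
#include <stdbool.h>

typedef void coroutine_entry_fn(void *opaque);

static bool in_coroutine;   /* stand-in for qemu_in_coroutine() */
static int polls;           /* counts BDRV_POLL_WHILE-style waits */

/* Common wrapper: call the entry point directly when already in
 * coroutine context; otherwise "spawn" it and poll for completion. */
static void run_co(coroutine_entry_fn *entry, void *opaque)
{
    if (in_coroutine) {
        entry(opaque);  /* fast path: already in a coroutine */
    } else {
        entry(opaque);  /* qemu_coroutine_enter(qemu_coroutine_create(...)) */
        polls++;        /* BDRV_POLL_WHILE(bs, !done) */
    }
}

/* Example entry point used below. */
static void set_done(void *opaque)
{
    *(bool *)opaque = true;
}
```

Each synchronous wrapper then shrinks to building its argument struct and calling the helper, instead of open-coding the spawn-or-call decision.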
On Tue, May 19, 2020 at 04:08:26PM +0200, Kevin Wolf wrote:
> On 29.04.2020 at 11:18, Roman Kagan wrote:
> > Devices (virtio-blk, scsi, etc.) and the block layer are happy to use
> > 32-bit for logical_block_size, physical_block_size, and min_io_size.
> > However, the properties in
On 5/19/20 2:54 PM, Vladimir Sementsov-Ogievskiy wrote:
This leads to the following effect:
./qemu-img create -f qcow2 base.qcow2 2M
./qemu-io -c "write -P 0x1 0 2M" base.qcow2
./qemu-img create -f qcow2 -b base.qcow2 mid.qcow2 1M
./qemu-img create -f qcow2 -b mid.qcow2 top.qcow2 2M
Region
On 5/19/20 2:55 PM, Vladimir Sementsov-Ogievskiy wrote:
bdrv_is_allocated_above wrongly handles short backing files: it reports
after-EOF space as UNALLOCATED which is wrong,
You haven't convinced me of that claim.
as on read the data is
generated on the level of short backing file (if all
On 5/19/20 2:54 PM, Vladimir Sementsov-Ogievskiy wrote:
bdrv_co_block_status_above has several problems with handling short
backing files:
1. With want_zeros=true, it may return ret with BDRV_BLOCK_ZERO but
without BDRV_BLOCK_ALLOCATED flag, when actually short backing file
which produces these
20.05.2020 00:13, Eric Blake wrote:
On 5/19/20 2:55 PM, Vladimir Sementsov-Ogievskiy wrote:
These cases are fixed by previous patches around block_status and
is_allocated.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/274 | 20
On 5/19/20 4:13 PM, Vladimir Sementsov-Ogievskiy wrote:
19.05.2020 23:41, Eric Blake wrote:
On 5/19/20 2:54 PM, Vladimir Sementsov-Ogievskiy wrote:
bdrv_co_block_status_above has several problems with handling short
backing files:
1. With want_zeros=true, it may return ret with
19.05.2020 23:41, Eric Blake wrote:
On 5/19/20 2:54 PM, Vladimir Sementsov-Ogievskiy wrote:
bdrv_co_block_status_above has several problems with handling short
backing files:
1. With want_zeros=true, it may return ret with BDRV_BLOCK_ZERO but
without BDRV_BLOCK_ALLOCATED flag, when actually
On 5/19/20 2:55 PM, Vladimir Sementsov-Ogievskiy wrote:
These cases are fixed by previous patches around block_status and
is_allocated.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/274 | 20
tests/qemu-iotests/274.out | 65
On 5/19/20 10:51 AM, Philippe Mathieu-Daudé wrote:
> Missing "Signed-off-by: Gerd Hoffmann ",
> otherwise:
>
> Reviewed-by: Philippe Mathieu-Daudé
>
> On 5/15/20 5:04 PM, Gerd Hoffmann wrote:
If you add the S-O-B:
Acked-by: John Snow
>> ---
>> include/hw/block/fdc.h | 1 +
>>
On Tue, May 19, 2020 at 01:41:08PM -0500, Eric Blake wrote:
> On 5/19/20 12:56 PM, Masayoshi Mizuma wrote:
> > Hello,
> >
> > I would like to discard any changes once the qemu guest OS is done.
> > I can do that with snapshot and drive option.
> > However, snapshot option doesn't work for the
Hello,
I would like to discard any changes once the qemu guest OS is done.
I can do that with snapshot and drive option.
However, snapshot option doesn't work for the device which set by
blockdev option like as:
$QEMU --enable-kvm \
-m 1024 \
-nographic \
-serial mon:stdio \
On 5/19/20 4:25 PM, Vladimir Sementsov-Ogievskiy wrote:
$ ./qemu-img map --output=json top.qcow2
[{ "start": 0, "length": 1048576, "depth": 2, "zero": false, "data":
true, "offset": 327680},
{ "start": 1048576, "length": 1048576, "depth": 0, "zero": true,
"data": false}]
I think what we