On 2019-01-25 07:01, Thomas Huth wrote:
> On 2019-01-24 18:23, Stefano Garzarella wrote:
>> If the WRITE_ZEROES feature is enabled, we check this
>> command in the test_basic().
>>
>> Signed-off-by: Stefano Garzarella
>> ---
>> tests/virtio-blk-test.c | 63
On 2019-01-24 18:23, Stefano Garzarella wrote:
> If the WRITE_ZEROES feature is enabled, we check this
> command in the test_basic().
>
> Signed-off-by: Stefano Garzarella
> ---
> tests/virtio-blk-test.c | 63 +
> 1 file changed, 63 insertions(+)
>
> diff
On 2019-01-24 18:37, Mark Cave-Ayland wrote:
> On 24/01/2019 17:15, Laurent Vivier wrote:
>
>> On 24/01/2019 18:02, Thomas Huth wrote:
>>> On 2018-11-02 16:22, Mark Cave-Ayland wrote:
(MCA: here's the latest version of the q800 patchset. I hope that I've
addressed most of the comments
On 2018-11-02 16:22, Mark Cave-Ayland wrote:
> From: Laurent Vivier
I'd suggest adding a patch description that contains the text that
Laurent provided as a reply to this patch in v5:
8< --
There is no DMA in Quadra 800, so the CPU
* Kevin Wolf (kw...@redhat.com) wrote:
> Am 24.01.2019 um 11:49 hat Dr. David Alan Gilbert geschrieben:
> > * Kevin Wolf (kw...@redhat.com) wrote:
> > > Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > > > 23.01.2019 18:48, Max Reitz wrote:
> > > > > Hi,
> > > > >
> > > > >
On Thu, Jan 24, 2019 at 6:55 PM Dr. David Alan Gilbert
wrote:
>
> * Stefano Garzarella (sgarz...@redhat.com) wrote:
> > This patch adds support for the DISCARD and WRITE ZEROES commands,
> > which have been introduced in the virtio-blk protocol to achieve
> > better performance when using an SSD backend.
On 1/24/19 8:34 AM, Alberto Garcia wrote:
> On Thu 24 Jan 2019 11:11:06 AM CET, Alberto Garcia wrote:
>> On Wed 23 Jan 2019 06:00:49 PM CET, Max Reitz wrote:
>>> Hi,
>>>
>>> 093 and 136 seem really flaky to me. I can reproduce that by running:
>>
>> That's interesting, I can make 093 fail quite ea
On 1/24/19 4:15 AM, Kevin Wolf wrote:
>> But how do we fix QEMU so that it doesn't crash? Maybe we should forbid
>> some transitions (FINISH_MIGRATE -> RUNNING),
>> or at least have qmp_cont error out if the runstate is FINISH_MIGRATE?
>
> I wonder whether the QAPI schema should have a field 'run-states' for
> commands, an
* Stefano Garzarella (sgarz...@redhat.com) wrote:
> This patch adds support for the DISCARD and WRITE ZEROES commands,
> which have been introduced in the virtio-blk protocol to achieve
> better performance when using an SSD backend.
>
> Signed-off-by: Stefano Garzarella
Hi,
Do you need to make those
On 24/01/2019 17:15, Laurent Vivier wrote:
> On 24/01/2019 18:02, Thomas Huth wrote:
>> On 2018-11-02 16:22, Mark Cave-Ayland wrote:
>>> (MCA: here's the latest version of the q800 patchset. I hope that I've
>>> addressed most of the comments, plus this will now boot into the Debian
>>> install
This patch adds support for the DISCARD and WRITE ZEROES commands,
which have been introduced in the virtio-blk protocol to achieve
better performance when using an SSD backend.
Signed-off-by: Stefano Garzarella
---
hw/block/virtio-blk.c | 79 +++
1 file changed,
This series adds support for the DISCARD and WRITE ZEROES commands
and extends virtio-blk-test to exercise the WRITE_ZEROES command when
the feature is enabled.
RFC because I'm not sure whether the "case" conditions that I used in
virtio-blk.c are clean enough.
This series requires the new virtio headers fr
If the WRITE_ZEROES feature is enabled, we check this
command in the test_basic().
Signed-off-by: Stefano Garzarella
---
tests/virtio-blk-test.c | 63 +
1 file changed, 63 insertions(+)
diff --git a/tests/virtio-blk-test.c b/tests/virtio-blk-test.c
index
On 24/01/2019 18:02, Thomas Huth wrote:
> On 2018-11-02 16:22, Mark Cave-Ayland wrote:
>> (MCA: here's the latest version of the q800 patchset. I hope that I've
>> addressed most of the comments, plus this will now boot into the Debian
>> installer correctly when applied to git master.
>
> Any
On 2018-11-02 16:22, Mark Cave-Ayland wrote:
> (MCA: here's the latest version of the q800 patchset. I hope that I've
> addressed most of the comments, plus this will now boot into the Debian
> installer correctly when applied to git master.
Any update on this series? Why did it get stalled aga
Am 24.01.2019 um 17:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 24.01.2019 17:17, Kevin Wolf wrote:
> > Depending on the exact image layout and the storage backend (tmpfs is
> > known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> > save us a lot of time e.g. during a mi
24.01.2019 17:17, Kevin Wolf wrote:
> Depending on the exact image layout and the storage backend (tmpfs is
> known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> save us a lot of time e.g. during a mirror block job or qemu-img convert
> with a fragmented source image (.bdrv_co
On 1/24/19 8:17 AM, Kevin Wolf wrote:
> Depending on the exact image layout and the storage backend (tmpfs is
> known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> save us a lot of time e.g. during a mirror block job or qemu-img convert
> with a fragmented source image (.bdrv_
24.01.2019 18:39, Kevin Wolf wrote:
> Am 24.01.2019 um 15:37 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 23.01.2019 15:04, Vladimir Sementsov-Ogievskiy wrote:
>>> 22.01.2019 21:57, Kevin Wolf wrote:
Am 11.01.2019 um 12:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 11.01.2019 13:4
On 1/24/19 9:39 AM, Kevin Wolf wrote:
>>>
>>> Hmm, and one more idea from Den:
>>>
>>> We can detect a preallocated image by comparing the allocated size of the
>>> real file with the number of non-zero qcow2 refcounts. So, if the real
>>> allocation is much less than the allocation from the qcow2 point of view,
>>> we'll enable lsee
24.01.2019 18:31, Kevin Wolf wrote:
> Am 24.01.2019 um 15:36 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 23.01.2019 19:33, Kevin Wolf wrote:
>>> Am 23.01.2019 um 12:53 hat Vladimir Sementsov-Ogievskiy geschrieben:
22.01.2019 21:57, Kevin Wolf wrote:
> Am 11.01.2019 um 12:40 hat Vladim
On Thu, 24 Jan 2019 at 10:29, Stefan Hajnoczi wrote:
>
> The following changes since commit f6b06fcceef465de0cf2514c9f76fe0192896781:
>
> Merge remote-tracking branch 'remotes/kraxel/tags/ui-20190121-pull-request'
> into staging (2019-01-23 17:57:47 +)
>
> are available in the Git repositor
Am 24.01.2019 um 15:36 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 23.01.2019 19:33, Kevin Wolf wrote:
> > Am 23.01.2019 um 12:53 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >> 22.01.2019 21:57, Kevin Wolf wrote:
> >>> Am 11.01.2019 um 12:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >
Am 24.01.2019 um 16:22 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 24.01.2019 18:11, Kevin Wolf wrote:
> > Am 24.01.2019 um 15:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >> 24.01.2019 17:17, Kevin Wolf wrote:
> >>> Depending on the exact image layout and the storage backend (tmpfs is
>
Am 24.01.2019 um 15:37 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 23.01.2019 15:04, Vladimir Sementsov-Ogievskiy wrote:
> > 22.01.2019 21:57, Kevin Wolf wrote:
> >> Am 11.01.2019 um 12:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >>> 11.01.2019 13:41, Kevin Wolf wrote:
> Am 10.01.20
24.01.2019 18:11, Kevin Wolf wrote:
> Am 24.01.2019 um 15:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 24.01.2019 17:17, Kevin Wolf wrote:
>>> Depending on the exact image layout and the storage backend (tmpfs is
>>> known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
>>>
Am 24.01.2019 um 15:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 24.01.2019 17:17, Kevin Wolf wrote:
> > Depending on the exact image layout and the storage backend (tmpfs is
> > known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> > save us a lot of time e.g. during a mi
24.01.2019 17:17, Kevin Wolf wrote:
> Depending on the exact image layout and the storage backend (tmpfs is
> known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
> save us a lot of time e.g. during a mirror block job or qemu-img convert
> with a fragmented source image (.bdrv_co
23.01.2019 19:33, Kevin Wolf wrote:
> Am 23.01.2019 um 12:53 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 22.01.2019 21:57, Kevin Wolf wrote:
>>> Am 11.01.2019 um 12:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
11.01.2019 13:41, Kevin Wolf wrote:
> Am 10.01.2019 um 14:20 hat Vladim
23.01.2019 15:04, Vladimir Sementsov-Ogievskiy wrote:
> 22.01.2019 21:57, Kevin Wolf wrote:
>> Am 11.01.2019 um 12:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
>>> 11.01.2019 13:41, Kevin Wolf wrote:
Am 10.01.2019 um 14:20 hat Vladimir Sementsov-Ogievskiy geschrieben:
> drv_co_block_st
On Thu 24 Jan 2019 11:11:06 AM CET, Alberto Garcia wrote:
> On Wed 23 Jan 2019 06:00:49 PM CET, Max Reitz wrote:
>> Hi,
>>
>> 093 and 136 seem really flaky to me. I can reproduce that by running:
>
> That's interesting, I can make 093 fail quite easily now (I haven't
> tested the other one yet), b
23.01.2019 17:36, Eric Blake wrote:
> On 1/23/19 2:20 AM, Vladimir Sementsov-Ogievskiy wrote:
>
+hbitmap_set(job->copy_bitmap, cluster, last_cluster - cluster + 1);
>>>
>>> Why the +1? Shouldn't the division for last_cluster round up instead?
>>>
+
Depending on the exact image layout and the storage backend (tmpfs is
known to have very slow SEEK_HOLE/SEEK_DATA), caching lseek results can
save us a lot of time e.g. during a mirror block job or qemu-img convert
with a fragmented source image (.bdrv_co_block_status on the protocol
layer can be c
On Wed, 23 Jan 2019 at 21:23, Stefan Hajnoczi wrote:
>
> v2:
> * Add Patch 2 to call memory_region_flush_rom_device() from pflash devices
>[Peter]
>
> This series adds the Non-Volatile Memory Controller, which controls access to
> the User Information Control Registers (UICR), Factory Informa
24.01.2019 15:25, Vladimir Sementsov-Ogievskiy wrote:
> qmp_cont fails if the VM is in RUN_STATE_FINISH_MIGRATE, so let's wait for
> the final RUN_STATE_POSTMIGRATE. Also, while we're at it, check the
> qmp_cont result.
>
> Reported-by: Max Reitz
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> tests/qem
On Thu, 24 Jan 2019 at 13:38, Peter Maydell wrote:
>
> On Wed, 23 Jan 2019 at 21:23, Stefan Hajnoczi wrote:
> >
> > v2:
> > * Add Patch 2 to call memory_region_flush_rom_device() from pflash devices
> >[Peter]
> >
> > This series adds the Non-Volatile Memory Controller, which controls access
* Vladimir Sementsov-Ogievskiy (vsement...@virtuozzo.com) wrote:
> qmp_cont in RUN_STATE_FINISH_MIGRATE may move the VM to
> RUN_STATE_RUNNING before the migration actually finishes. So, when the
> migration thread tries to go to RUN_STATE_POSTMIGRATE, assuming the
> transition RUN_STATE_FINISH_MIGRATE->
qmp_cont fails if the VM is in RUN_STATE_FINISH_MIGRATE, so let's wait for
the final RUN_STATE_POSTMIGRATE. Also, while we're at it, check the
qmp_cont result.
Reported-by: Max Reitz
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/169 | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(
Hi.
It's a simple fix for problems reported in "Aborts in iotest 169"
by Max:
https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg05907.html
In that thread, Kevin described that the underlying problem is bigger
and needs more effort:
https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg06136.html
S
qmp_cont in RUN_STATE_FINISH_MIGRATE may move the VM to
RUN_STATE_RUNNING before the migration actually finishes. So, when the
migration thread tries to go to RUN_STATE_POSTMIGRATE, assuming the
transition RUN_STATE_FINISH_MIGRATE->RUN_STATE_POSTMIGRATE, it will
crash, as the current state is RUN_STATE_RUNNI
Am 24.01.2019 um 11:49 hat Dr. David Alan Gilbert geschrieben:
> * Kevin Wolf (kw...@redhat.com) wrote:
> > Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > > 23.01.2019 18:48, Max Reitz wrote:
> > > > Hi,
> > > >
> > > > When running 169 in parallel (e.g. like so:
> > > >
24.01.2019 13:15, Kevin Wolf wrote:
> Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 23.01.2019 18:48, Max Reitz wrote:
>>> Hi,
>>>
>>> When running 169 in parallel (e.g. like so:
>>>
>>> $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
>>> $ while TEST_DIR=/tmp/t1
Hi Stefan,
On 1/23/19 10:22 PM, Stefan Hajnoczi wrote:
> pflash devices should mark the memory region dirty and invalidate TBs
> after directly writing to the RAM backing the ROM device.
>
> Note that pflash_cfi01_get_memory() is used by several machine types to
> populate ROM contents directly.
* Kevin Wolf (kw...@redhat.com) wrote:
> Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > 23.01.2019 18:48, Max Reitz wrote:
> > > Hi,
> > >
> > > When running 169 in parallel (e.g. like so:
> > >
> > > $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
> > > $ while
Am 11.01.2019 um 17:45 hat Paolo Bonzini geschrieben:
> Whenever the allocation length of a SCSI request is shorter than the size of
> the
> VPD page list, page_idx is used blindly to index into r->buf. Even though
> the stores in the insertion sort are protected against overflows, the same is
>
* Vladimir Sementsov-Ogievskiy (vsement...@virtuozzo.com) wrote:
> 24.01.2019 13:10, Dr. David Alan Gilbert wrote:
> > * Vladimir Sementsov-Ogievskiy (vsement...@virtuozzo.com) wrote:
> >> 24.01.2019 12:29, Vladimir Sementsov-Ogievskiy wrote:
> >>> 23.01.2019 18:48, Max Reitz wrote:
> Hi,
> >>
The following changes since commit f6b06fcceef465de0cf2514c9f76fe0192896781:
Merge remote-tracking branch 'remotes/kraxel/tags/ui-20190121-pull-request'
into staging (2019-01-23 17:57:47 +)
are available in the Git repository at:
git://github.com/stefanha/qemu.git tags/block-pull-reques
Am 24.01.2019 um 11:32 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 24.01.2019 13:15, Kevin Wolf wrote:
> > Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >> 23.01.2019 18:48, Max Reitz wrote:
> >>> Hi,
> >>>
> >>> When running 169 in parallel (e.g. like so:
> >>>
> >>> $
From: Vladimir Sementsov-Ogievskiy
Drop CoSleepCB structure. It's actually unused.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Message-id: 20190122143113.20331-1-vsement...@virtuozzo.com
Signed-off-by: Stefan Hajnoczi
---
util/qemu-coroutine-sleep.c | 27 ++-
1 file ch
24.01.2019 13:15, Kevin Wolf wrote:
> Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 23.01.2019 18:48, Max Reitz wrote:
>>> Hi,
>>>
>>> When running 169 in parallel (e.g. like so:
>>>
>>> $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
>>> $ while TEST_DIR=/tmp/t1
On Tue, Jan 22, 2019 at 01:19:26PM +0100, Kevin Wolf wrote:
> If QEMU was configured with a driver in --block-drv-ro-whitelist, trying
> to use that driver read-write resulted in an error message even if
> auto-read-only=on was set.
>
> Consider auto-read-only=on for the whitelist checking and use
Hot-unplug a scsi-hd using an iothread. The previous patch fixes a
segfault in this scenario.
This patch adds a regression test.
Suggested-by: Alberto Garcia
Suggested-by: Kevin Wolf
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Alberto Garcia
Message-id: 20190114133257.30299-3-stefa...@redhat
The following QMP command leads to a crash when iothreads are used:
{ 'execute': 'device_del', 'arguments': {'id': 'data'} }
The backtrace involves the queue restart coroutine where
tgm->throttle_state is a NULL pointer because
throttle_group_unregister_tgm() has already been called:
(gdb) b
24.01.2019 13:10, Dr. David Alan Gilbert wrote:
> * Vladimir Sementsov-Ogievskiy (vsement...@virtuozzo.com) wrote:
>> 24.01.2019 12:29, Vladimir Sementsov-Ogievskiy wrote:
>>> 23.01.2019 18:48, Max Reitz wrote:
Hi,
When running 169 in parallel (e.g. like so:
$ while TEST_DI
On Wed 23 Jan 2019 06:00:49 PM CET, Max Reitz wrote:
> Hi,
>
> 093 and 136 seem really flaky to me. I can reproduce that by running:
That's interesting, I can make 093 fail quite easily now (I haven't
tested the other one yet), but I don't think this happened earlier. I'll
try to figure out what'
* Vladimir Sementsov-Ogievskiy (vsement...@virtuozzo.com) wrote:
> 24.01.2019 12:29, Vladimir Sementsov-Ogievskiy wrote:
> > 23.01.2019 18:48, Max Reitz wrote:
> >> Hi,
> >>
> >> When running 169 in parallel (e.g. like so:
> >>
> >> $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
> >> $ wh
Am 24.01.2019 um 10:29 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 23.01.2019 18:48, Max Reitz wrote:
> > Hi,
> >
> > When running 169 in parallel (e.g. like so:
> >
> > $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
> > $ while TEST_DIR=/tmp/t1 ./check -T -qcow2 169; do; done
> > $
On 2019-01-23 22:22, Stefan Hajnoczi wrote:
> From: Steffen Görtz
>
> Signed-off-by: Steffen Görtz
> Signed-off-by: Stefan Hajnoczi
> ---
> tests/microbit-test.c | 97 +++
> 1 file changed, 97 insertions(+)
>
> diff --git a/tests/microbit-test.c b/tests
24.01.2019 12:29, Vladimir Sementsov-Ogievskiy wrote:
> 23.01.2019 18:48, Max Reitz wrote:
>> Hi,
>>
>> When running 169 in parallel (e.g. like so:
>>
>> $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
>> $ while TEST_DIR=/tmp/t1 ./check -T -qcow2 169; do; done
>> $ while TEST_DIR=/tmp/t2
24.01.2019 10:48, Denis Plotnikov wrote:
> When there is a Backup Block Job running and a shutdown command is sent to
> a guest, the guest crashes due to assert(!bs->walking_aio_notifiers).
Clarification: not an ordinary backup, but a fleecing scheme: backup with
sync=none, when the source is a backing for t
23.01.2019 18:48, Max Reitz wrote:
> Hi,
>
> When running 169 in parallel (e.g. like so:
>
> $ while TEST_DIR=/tmp/t0 ./check -T -qcow2 169; do; done
> $ while TEST_DIR=/tmp/t1 ./check -T -qcow2 169; do; done
> $ while TEST_DIR=/tmp/t2 ./check -T -qcow2 169; do; done
> $ while TEST_DIR=/tmp/t3 ./
Am 23.01.2019 um 17:16 hat Alberto Garcia geschrieben:
> On Wed 23 Jan 2019 04:47:30 PM CET, Paolo Bonzini wrote:
> >> You mean a common function with the code below?
> >>
> +ctx = blk_get_aio_context(sd->conf.blk);
> +if (ctx != s->ctx && ctx != qemu_get_aio_context()) {
23.01.2019 21:08, Dr. David Alan Gilbert wrote:
> * Max Reitz (mre...@redhat.com) wrote:
>> On 23.01.19 17:35, Dr. David Alan Gilbert wrote:
>>> * Luiz Capitulino (lcapitul...@redhat.com) wrote:
On Wed, 23 Jan 2019 17:12:35 +0100
Max Reitz wrote:
> On 23.01.19 17:04, Luiz Capitu
On Wed, Jan 23, 2019 at 15:19:53 -0600, Eric Blake wrote:
> The existing qemu-nbd --partition code claims to handle logical
> partitions up to 8, since its introduction in 2008 (commit 7a5ca86).
> However, the implementation is bogus (actual MBR logical partitions
> form a sort of linked list, with