On Wed, Jan 31, 2018 at 03:22:49PM +0100, Kevin Wolf wrote:
> Am 31.01.2018 um 15:12 hat Stefan Hajnoczi geschrieben:
> > There should be a separate paragraph in docs/qemu-block-drivers.texi
> > explaining that share-rw=on can be used safely with format=raw if the
> > guests are configured to
On 1/2/2018 5:29 PM, Alberto Garcia wrote:
On Thu 01 Feb 2018 03:16:31 PM CET, Anton Nefedov wrote:
The normal bdrv_co_pwritev() use is either
- BDRV_REQ_ZERO_WRITE reset and iovector provided
- BDRV_REQ_ZERO_WRITE set and iovector == NULL
while
- the flag reset and iovector == NULL
On Wed 31 Jan 2018 09:11:48 PM CET, Max Reitz wrote:
> On 2018-01-26 15:59, Alberto Garcia wrote:
>> After the previous patch we're now always using l2_load() in
>> get_cluster_table() regardless of whether a new L2 table has to be
>> allocated or not.
>>
>> This patch refactors that part of the
On Sat 20 Jan 2018 04:44:11 PM CET, Max Reitz wrote:
> Instead of converting all "backing": null instances into "backing": "",
> handle a null value directly in bdrv_open_inherit().
>
> This enables explicitly null backing links for json:{} filenames.
>
> Signed-off-by: Max Reitz
On 02/01/2018 08:16 AM, Anton Nefedov wrote:
> The normal bdrv_co_pwritev() use is either
> - BDRV_REQ_ZERO_WRITE reset and iovector provided
s/reset/clear/
> - BDRV_REQ_ZERO_WRITE set and iovector == NULL
>
> while
> - the flag reset and iovector == NULL is an assertion failure
again
>
On Thu 01 Feb 2018 03:16:31 PM CET, Anton Nefedov wrote:
> The normal bdrv_co_pwritev() use is either
> - BDRV_REQ_ZERO_WRITE reset and iovector provided
> - BDRV_REQ_ZERO_WRITE set and iovector == NULL
>
> while
> - the flag reset and iovector == NULL is an assertion failure
> in
On Wed 31 Jan 2018 06:11:27 PM CET, Anton Nefedov wrote:
> On 31/1/2018 6:11 PM, Alberto Garcia wrote:
>> On Thu 18 Jan 2018 06:49:02 PM CET, Anton Nefedov wrote:
>>
>>> -static bool coroutine_fn wait_serialising_requests(BdrvTrackedRequest *self)
>>> +static bool coroutine_fn
The normal bdrv_co_pwritev() use is either
- BDRV_REQ_ZERO_WRITE reset and iovector provided
- BDRV_REQ_ZERO_WRITE set and iovector == NULL
while
- the flag reset and iovector == NULL is an assertion failure
in bdrv_co_do_zero_pwritev()
- the flag set and iovector provided is in fact
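A minimal sketch of the combinations listed in the entry above (the helper, the flag value and the opaque QEMUIOVector type are placeholders for illustration, not QEMU's actual code):

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BDRV_REQ_ZERO_WRITE 0x2        /* assumed value, for illustration only */

typedef struct QEMUIOVector QEMUIOVector;   /* opaque stand-in */

/* Hypothetical check of the flag/iovector combinations described above. */
static void check_pwritev_args(unsigned int flags, const QEMUIOVector *qiov)
{
    bool zero_write = flags & BDRV_REQ_ZERO_WRITE;

    if (!zero_write) {
        assert(qiov != NULL);    /* normal write: data must be supplied */
    }
    /*
     * zero_write && qiov == NULL is the other normal case;
     * zero_write && qiov != NULL is the combination the thread discusses.
     */
}

int main(void)
{
    check_pwritev_args(BDRV_REQ_ZERO_WRITE, NULL);   /* valid combination */
    return 0;
}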
On Sat 20 Jan 2018 04:44:12 PM CET, Max Reitz wrote:
> We have a clear replacement, so let's deprecate it.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
On Wed, Jan 31, 2018 at 03:31:27PM +0100, Kevin Wolf wrote:
> Am 31.01.2018 um 14:56 hat Stefan Hajnoczi geschrieben:
> > On Tue, Jan 30, 2018 at 05:54:56PM +0100, Kevin Wolf wrote:
> > > Am 30.01.2018 um 16:38 hat Stefan Hajnoczi geschrieben:
> > > > Commit
Commit dce8921b2baaf95974af8176406881872067adfa ("iothread: Stop threads
before main() quits") introduced iothread_stop_all() to avoid the
following virtio-scsi assertion failure:
assert(blk_get_aio_context(d->conf.blk) == s->ctx);
Back then the assertion failed because when bdrv_close_all()
On Wed 31 Jan 2018 09:07:48 PM CET, Max Reitz wrote:
> On 2018-01-26 15:59, Alberto Garcia wrote:
>> This patch updates l2_allocate() to support the qcow2 cache returning
>> L2 slices instead of full L2 tables.
>>
>> The old code simply gets an L2 table from the cache and initializes it
>> with
On 02/01/2018 08:36 AM, Eric Blake wrote:
> On 02/01/2018 08:16 AM, Anton Nefedov wrote:
>> The normal bdrv_co_pwritev() use is either
>> - BDRV_REQ_ZERO_WRITE reset and iovector provided
>
> s/reset/clear/
>
>> - BDRV_REQ_ZERO_WRITE set and iovector == NULL
>>
>> while
>> - the flag reset
On 18/01/2018 07:43, Paolo Bonzini wrote:
> Check and invalidate_cache share some parts of the implementation
> with the regular I/O path. This is sometimes complicated because the
> I/O path wants to use a CoMutex but that is not possible outside coroutine
> context. By moving things to
On 18/01/2018 15:19, Eric Blake wrote:
> On 01/18/2018 06:43 AM, Paolo Bonzini wrote:
>> It is called from qcow2_invalidate_cache in coroutine context (incoming
> Why is a qcow2 function calling qed code? Definitely a copy-paste bug,
> but I'm not sure of the right fix.
This reference was of
On 2018-02-01 01:55, Fam Zheng wrote:
> On Thu, Feb 1, 2018 at 2:58 AM, Max Reitz wrote:
>> On 2018-01-30 07:25, Fam Zheng wrote:
>>> This reverts commit 76bf133c4 which updated the reference output, and
>>> fixed the reference image, because the code path we want to exercise
On 2018-02-01 11:40, Alberto Garcia wrote:
> On Wed 31 Jan 2018 09:11:48 PM CET, Max Reitz wrote:
>> On 2018-01-26 15:59, Alberto Garcia wrote:
>>> After the previous patch we're now always using l2_load() in
>>> get_cluster_table() regardless of whether a new L2 table has to be
>>> allocated or
On 2018-01-26 15:59, Alberto Garcia wrote:
> handle_copied() loads an L2 table and limits the number of checked
> clusters to the amount that fits inside that table. Since we'll be
> loading L2 slices instead of full tables we need to update that limit.
>
> Apart from that, this function doesn't
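The limit change discussed above amounts to arithmetic along these lines (variable names are assumed for illustration, not taken from the actual qcow2 code):

#include <assert.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Cap nb_clusters so the request does not run past the end of the
 * current L2 slice (previously: the end of the full L2 table). */
static uint64_t limit_to_l2_slice(uint64_t nb_clusters,
                                  uint64_t l2_index,       /* index within the slice */
                                  uint64_t slice_entries)  /* entries per slice */
{
    return MIN(nb_clusters, slice_entries - l2_index);
}

int main(void)
{
    /* 100 clusters requested, but only 12 entries left in this slice. */
    assert(limit_to_l2_slice(100, 500, 512) == 12);
    return 0;
}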
On 2018-02-01 14:34, Anton Nefedov wrote:
> On 31/1/2018 8:31 PM, Max Reitz wrote:
>> On 2018-01-30 13:34, Anton Nefedov wrote:
>>> Offtop: does REQ_ZERO_WRITE override REQ_WRITE_COMPRESSED in this
>>> function? at least with !REQ_MAY_UNMAP it looks wrong
>>
>> Looks like zero detection will
On 2018-01-26 15:59, Alberto Garcia wrote:
> There's a loop in this function that iterates over the L2 entries in a
> table, so now we need to assert that it remains within the limits of
> an L2 slice.
>
> Apart from that, this function doesn't need any additional changes, so
> this patch simply
On 2018-01-26 15:59, Alberto Garcia wrote:
> qcow2_update_snapshot_refcount() increases the refcount of all
> clusters of a given snapshot. In order to do that it needs to load all
> its L2 tables and iterate over their entries. Since we'll be loading
> L2 slices instead of full tables we need to
On 2018-01-26 15:59, Alberto Garcia wrote:
> expand_zero_clusters_in_l1() expands zero clusters as a necessary step
> to downgrade qcow2 images to a version that doesn't support metadata
> zero clusters. This function takes an L1 table (which may or may not
> be active) and iterates over all its
On 2018-01-26 15:59, Alberto Garcia wrote:
> This function doesn't need any changes to support L2 slices, but since
> it's now dealing with slices instead of full tables, the l2_table
> variable is renamed for clarity.
>
> Signed-off-by: Alberto Garcia
> Reviewed-by: Eric Blake
On 2018-01-26 16:00, Alberto Garcia wrote:
> This function doesn't need any changes to support L2 slices, but since
> it's now dealing with slices instead of full tables, the l2_table
> variable is renamed for clarity.
>
> Signed-off-by: Alberto Garcia
> Reviewed-by: Eric Blake
On 2018-01-26 16:00, Alberto Garcia wrote:
> expand_zero_clusters_in_l1() is used when downgrading qcow2 images
> from v3 to v2 (compat=0.10). This is one of the functions that needed
> more changes to support L2 slices, so this patch extends iotest 061 to
> test downgrading a qcow2 image using a
On 2018-02-01 10:51, Alberto Garcia wrote:
> On Wed 31 Jan 2018 08:48:08 PM CET, Max Reitz wrote:
>> On 2018-01-26 15:59, Alberto Garcia wrote:
>>> The BDRVQcow2State structure contains an l2_size field, which stores
>>> the number of 64-bit entries in an L2 table.
>>>
>>> For efficiency reasons
On 2018-02-01 16:43, Alberto Garcia wrote:
> On Thu 01 Feb 2018 04:23:09 PM CET, Anton Nefedov wrote:
However, I'm wondering whether this is the best approach. The old
L2 table is probably not going to be used after this function, so
we're basically polluting the cache here. That
On 2018-01-26 15:59, Alberto Garcia wrote:
> At the moment it doesn't really make a difference whether we call
> qcow2_get_refcount() before or after reading the L2 table, but if we
> want to support L2 slices we'll need to read the refcount first.
>
> This patch simply changes the order of those
On 2018-01-26 16:00, Alberto Garcia wrote:
> The l2-cache-entry-size setting can only contain values that are
> powers of two between 512 and the cluster size.
>
> Signed-off-by: Alberto Garcia
> ---
> tests/qemu-iotests/103 | 17 +
>
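The constraint being tested (a power of two between 512 and the cluster size) reduces to a check like the following, sketched here for illustration rather than copied from the actual option parser:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool l2_cache_entry_size_valid(uint64_t entry_size, uint64_t cluster_size)
{
    bool is_pow2 = entry_size != 0 && (entry_size & (entry_size - 1)) == 0;

    return is_pow2 && entry_size >= 512 && entry_size <= cluster_size;
}

int main(void)
{
    assert(l2_cache_entry_size_valid(4096, 65536));    /* OK */
    assert(!l2_cache_entry_size_valid(3000, 65536));   /* not a power of two */
    assert(!l2_cache_entry_size_valid(131072, 65536)); /* larger than the cluster */
    return 0;
}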
On Tue, Jan 30, 2018 at 04:56:12PM +0100, Kevin Wolf wrote:
> Am 30.01.2018 um 13:38 hat Stefan Hajnoczi geschrieben:
> > On Mon, Jan 29, 2018 at 04:41:07PM +0100, Kevin Wolf wrote:
> > > Am 24.01.2018 um 12:31 hat Stefan Hajnoczi geschrieben:
> > > > On Mon, Jan 22, 2018 at 09:01:49AM -0600, Mark
On 02/01/2018 11:59 AM, Max Reitz wrote:
>>> Hm, now this fails again on my 32 bit build. :-(
>>>
>>> The issue there is that you get a "Cannot allocate memory" when trying
>>> to open the file. My current fix was 2291712c39111a732 which simply
>>> converted that to "Invalid argument", but
On 2018-01-26 16:00, Alberto Garcia wrote:
> Now that the code is ready to handle L2 slices we can finally add an
> option to allow configuring their size.
>
> An L2 slice is the portion of an L2 table that is read by the qcow2
> cache. Until now the cache was always reading full L2 tables, and
>
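To make the table/slice relationship concrete, a small arithmetic sketch (the sizes below are example values, not defaults quoted from the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster_size = 65536;                        /* example: 64 KB clusters */
    uint64_t l2_entry_size = 8;                           /* L2 entries are 64 bits */
    uint64_t l2_size = cluster_size / l2_entry_size;      /* entries per L2 table */
    uint64_t slice_size = 4096;                           /* example l2-cache-entry-size */
    uint64_t slice_entries = slice_size / l2_entry_size;  /* entries per slice */
    uint64_t slices_per_table = l2_size / slice_entries;

    printf("entries per table: %" PRIu64 ", per slice: %" PRIu64
           ", slices per table: %" PRIu64 "\n",
           l2_size, slice_entries, slices_per_table);
    return 0;
}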
On Mon, Jan 29, 2018 at 10:13:02AM -0600, Mark Kanda wrote:
>
>
> On 1/29/2018 9:41 AM, Kevin Wolf wrote:
> > Am 24.01.2018 um 12:31 hat Stefan Hajnoczi geschrieben:
> > > On Mon, Jan 22, 2018 at 09:01:49AM -0600, Mark Kanda wrote:
> > > > Add a BlockDriverState NULL check to
On 2018-02-01 14:13, Alberto Garcia wrote:
> On Wed 31 Jan 2018 09:07:48 PM CET, Max Reitz wrote:
>> On 2018-01-26 15:59, Alberto Garcia wrote:
>>> This patch updates l2_allocate() to support the qcow2 cache returning
>>> L2 slices instead of full L2 tables.
>>>
>>> The old code simply gets an L2
On 2018-01-26 15:59, Alberto Garcia wrote:
> handle_alloc() loads an L2 table and limits the number of checked
> clusters to the amount that fits inside that table. Since we'll be
> loading L2 slices instead of full tables we need to update that limit.
>
> Apart from that, this function doesn't
On 2018-01-26 15:59, Alberto Garcia wrote:
> zero_single_l2() limits the number of clusters to be zeroed to the
> amount that fits inside an L2 table. Since we'll be loading L2 slices
> instead of full tables we need to update that limit.
Same as last patch, maybe we should rename the function
On 2018-01-26 15:59, Alberto Garcia wrote:
> Adding support for L2 slices to qcow2_update_snapshot_refcount() needs
> (among other things) an extra loop that iterates over all slices of
> each L2 table.
>
> Putting all changes in one patch would make it hard to read because
> all semantic changes
On 2018-01-26 16:00, Alberto Garcia wrote:
> This function doesn't need any changes to support L2 slices, but since
> it's now dealing with slices instead of full tables, the l2_table
> variable is renamed for clarity.
>
> Signed-off-by: Alberto Garcia
> Reviewed-by: Eric Blake
On 2018-01-26 16:00, Alberto Garcia wrote:
> This function doesn't need any changes to support L2 slices, but since
> it's now dealing with slices instead of full tables, the l2_table
> variable is renamed for clarity.
>
> Signed-off-by: Alberto Garcia
> Reviewed-by: Eric Blake
On 2018-01-26 16:00, Alberto Garcia wrote:
> This test tries reopening a qcow2 image with valid and invalid
> options. This patch adds l2-cache-entry-size to the set.
>
> Signed-off-by: Alberto Garcia
> ---
> tests/qemu-iotests/137 | 5 +
> tests/qemu-iotests/137.out |
On Thu 01 Feb 2018 04:23:09 PM CET, Anton Nefedov wrote:
>>> However, I'm wondering whether this is the best approach. The old
>>> L2 table is probably not going to be used after this function, so
>>> we're basically polluting the cache here. That was bad enough so
>>> far, but now that actually
On 1/2/2018 4:13 PM, Alberto Garcia wrote:
On Wed 31 Jan 2018 09:07:48 PM CET, Max Reitz wrote:
On 2018-01-26 15:59, Alberto Garcia wrote:
This patch updates l2_allocate() to support the qcow2 cache returning
L2 slices instead of full L2 tables.
The old code simply gets an L2 table from the
On Thu 01 Feb 2018 03:40:51 PM CET, Eric Blake wrote:
>>> --- a/block/io.c
>>> +++ b/block/io.c
>>> @@ -1701,7 +1701,7 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
>>> */
>>>     tracked_request_begin(&req, bs, offset, bytes, BDRV_TRACKED_WRITE);
>>>
>>> -if (!qiov) {
>>> +if
Now that CoQueues can use a QemuMutex for thread-safety, there is no
need for curl to roll its own coroutine queue. Coroutines can be
placed directly on the queue instead of using a list of CURLAIOCBs.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
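A rough, self-contained analogue of the pattern (using pthreads instead of QEMU's coroutine and CoQueue primitives, purely to illustrate waiters parked directly on a queue guarded by one lock):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue = PTHREAD_COND_INITIALIZER;  /* stand-in for the CoQueue */
static int free_slots = 1;

static void *worker(void *arg)
{
    pthread_mutex_lock(&lock);
    while (free_slots == 0) {
        pthread_cond_wait(&queue, &lock);   /* drops the lock while parked */
    }
    free_slots--;
    pthread_mutex_unlock(&lock);

    printf("worker %ld got a slot\n", (long)(intptr_t)arg);

    pthread_mutex_lock(&lock);
    free_slots++;
    pthread_cond_signal(&queue);            /* wake the next waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (long i = 0; i < 2; i++) {
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    }
    for (int i = 0; i < 2; i++) {
        pthread_join(t[i], NULL);
    }
    return 0;
}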
QemuLockable is a polymorphic lock type that takes an object and
knows which function to use for locking and unlocking. The
implementation could use C11 _Generic, but since the support is
not very widespread I am instead using __builtin_choose_expr and
__builtin_types_compatible_p, which are
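A minimal sketch of the dispatch technique named above, with made-up lock types rather than QEMU's QemuLockable (only the use of __builtin_choose_expr and __builtin_types_compatible_p is the point):

#include <pthread.h>
#include <stdio.h>

typedef struct { pthread_mutex_t m; } MyMutex;
typedef struct { pthread_spinlock_t s; } MySpin;

static void my_mutex_lock(MyMutex *l) { pthread_mutex_lock(&l->m); }
static void my_spin_lock(MySpin *l)   { pthread_spin_lock(&l->s); }

/* Pick, at compile time, the lock function whose argument type matches 'x'.
 * The branch that is not selected is never evaluated. */
#define GENERIC_LOCK(x)                                                  \
    __builtin_choose_expr(                                               \
        __builtin_types_compatible_p(typeof(x), MyMutex *),              \
        my_mutex_lock((MyMutex *)(void *)(x)),                           \
        my_spin_lock((MySpin *)(void *)(x)))

int main(void)
{
    MyMutex mu;

    pthread_mutex_init(&mu.m, NULL);
    GENERIC_LOCK(&mu);                   /* resolves to my_mutex_lock() */
    pthread_mutex_unlock(&mu.m);
    printf("locked and unlocked through the generic macro\n");
    return 0;
}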
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20180201212917.18131-1-pbonz...@redhat.com
Subject: [Qemu-devel] [PATCH v4 0/5] coroutine-lock: polymorphic CoQueue
=== TEST SCRIPT BEGIN ===
#!/bin/bash
BASE=base
n=1
In preparation for adding a similar test using QemuLockable, add a very
simple testcase that has two interleaved calls to lock and unlock.
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Paolo Bonzini
---
tests/test-coroutine.c | 50
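One plausible shape for such a testcase, sketched with plain pthread mutexes instead of QEMU's primitives (the "interleaved" ordering below is an assumption about what the test covers):

#include <assert.h>
#include <pthread.h>

int main(void)
{
    pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    /* Interleaved: lock a, lock b, unlock a, unlock b. */
    assert(pthread_mutex_lock(&a) == 0);
    assert(pthread_mutex_lock(&b) == 0);
    assert(pthread_mutex_unlock(&a) == 0);
    assert(pthread_mutex_unlock(&b) == 0);
    return 0;
}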
There are cases in which a queued coroutine must be restarted from
non-coroutine context (with qemu_co_enter_next). In these cases,
qemu_co_enter_next also needs to be thread-safe, but it cannot use a
CoMutex and so cannot use qemu_co_queue_wait. This happens in curl (which
right now is rolling its
There are cases in which a queued coroutine must be restarted from
non-coroutine context (with qemu_co_enter_next). In these cases,
qemu_co_enter_next also needs to be thread-safe, but it cannot use
a CoMutex and so cannot use qemu_co_queue_wait. Use QemuLockable so
that the CoQueue can
qemu_co_queue_next does not need to release and re-acquire the mutex,
because the queued coroutine does not run immediately. However, this
does not hold for qemu_co_enter_next. Now that qemu_co_queue_wait
can synchronize (via QemuLockable) with code that is not running in
coroutine context, it's
On 2018-01-26 15:59, Alberto Garcia wrote:
> The qcow2_truncate() code is mostly independent from whether
> we're using L2 slices or full L2 tables, but in full and
> falloc preallocation modes new L2 tables are allocated using
> qcow2_alloc_cluster_link_l2(). Therefore the code needs to be
>
On 2018-01-26 15:59, Alberto Garcia wrote:
> Adding support for L2 slices to expand_zero_clusters_in_l1() needs
> (among other things) an extra loop that iterates over all slices of
> each L2 table.
>
> Putting all changes in one patch would make it hard to read because
> all semantic changes
On Wed 31 Jan 2018 08:48:08 PM CET, Max Reitz wrote:
> On 2018-01-26 15:59, Alberto Garcia wrote:
>> The BDRVQcow2State structure contains an l2_size field, which stores
>> the number of 64-bit entries in an L2 table.
>>
>> For efficiency reasons we want to be able to load slices instead of
>>