Re: [Qemu-devel] [Qemu-block] [PATCH] qcow2: Stop overwriting compressed clusters one by one

2019-09-12 Thread Kevin Wolf
Am 12.09.2019 um 09:25 hat Alberto Garcia geschrieben:
> On Thu 12 Sep 2019 01:33:05 AM CEST, John Snow  wrote:
> >> This restriction comes from commit 095a9c58ce12afeeb90c2 from 2018.
> >
> > You accidentally typed a reasonably modern date. It's from *2008*!
> 
> Oh my, and I reviewed the message 3 times ... if this one gets committed
> please correct the date.

It was actually one of the very first QEMU patches I reviewed. :-)

Thanks, fixed the year and applied to the block branch.

Kevin



Re: [Qemu-devel] [Qemu-block] [PATCH] qcow2: Stop overwriting compressed clusters one by one

2019-09-12 Thread Alberto Garcia
On Thu 12 Sep 2019 01:33:05 AM CEST, John Snow  wrote:
>> This restriction comes from commit 095a9c58ce12afeeb90c2 from 2018.
>
> You accidentally typed a reasonably modern date. It's from *2008*!

Oh my, and I reviewed the message 3 times ... if this one gets committed
please correct the date.

Berto



Re: [Qemu-devel] [Qemu-block] [PATCH] qcow2: Stop overwriting compressed clusters one by one

2019-09-11 Thread John Snow
On 9/11/19 11:16 AM, Alberto Garcia wrote:
> handle_alloc() tries to find as many contiguous clusters needing
> copy-on-write as possible, so that all of them can be allocated at the
> same time.
> 
> However, compressed clusters are only overwritten one by one, so let's
> say that we have an image with 1024 consecutive compressed clusters:
> 
>qemu-img create -f qcow2 hd.qcow2 64M
>for f in `seq 0 64 65472`; do
>   qemu-io -c "write -c ${f}k 64k" hd.qcow2
>done
> 
> In this case trying to overwrite the whole image with one large write
> request results in 1024 separate allocations:
> 
>qemu-io -c "write 0 64M" hd.qcow2
> 
> This restriction comes from commit 095a9c58ce12afeeb90c2 from 2018.

You accidentally typed a reasonably modern date. It's from *2008*!

> Nowadays QEMU can overwrite multiple compressed clusters just fine,
> and in fact it already does: as long as the first cluster that
> handle_alloc() finds is not compressed, all other compressed clusters
> in the same batch will be overwritten in one go:
> 
>qemu-img create -f qcow2 hd.qcow2 64M
>qemu-io -c "write -z 0 64k" hd.qcow2
>for f in `seq 64 64 65472`; do
>   qemu-io -c "write -c ${f}k 64k" hd.qcow2
>done
> 
> Compared to the previous one, overwriting this image on my computer
> goes from 8.35s down to 230ms.
> 
> Signed-off-by: Alberto Garcia 
> ---
>  block/qcow2-cluster.c | 8 +---
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> index f09cc992af..dcacd3c450 100644
> --- a/block/qcow2-cluster.c
> +++ b/block/qcow2-cluster.c
> @@ -1351,13 +1351,7 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
>      }
>  
>      entry = be64_to_cpu(l2_slice[l2_index]);
> -
> -    /* For the moment, overwrite compressed clusters one by one */
> -    if (entry & QCOW_OFLAG_COMPRESSED) {
> -        nb_clusters = 1;
> -    } else {
> -        nb_clusters = count_cow_clusters(bs, nb_clusters, l2_slice, l2_index);
> -    }
> +    nb_clusters = count_cow_clusters(bs, nb_clusters, l2_slice, l2_index);
>  
>      /* This function is only called when there were no non-COW clusters, so if
>       * we can't find any unallocated or COW clusters either, something is
> 

Well, given that count_cow_clusters already works this way, this doesn't
break anything that wasn't broken before, at least.

Reviewed-by: John Snow
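
The effect of the patch can be modelled with a small standalone sketch (this is
not QEMU code; the function and variable names are invented purely for
illustration). Before the patch, a run of compressed clusters is overwritten
with one allocation call per cluster; after it, count_cow_clusters() batches
the whole contiguous run:

```python
def allocation_batches(cluster_is_compressed, pre_patch):
    """Count the allocation calls needed to overwrite all clusters.

    cluster_is_compressed: list of bools, one per guest cluster.
    pre_patch: True models the old behaviour, where a compressed first
    cluster forced nb_clusters = 1; False models the batched behaviour.
    (Simplified model: every cluster in the run needs COW, as in the
    1024-compressed-cluster image from the commit message.)
    """
    i = 0
    batches = 0
    n = len(cluster_is_compressed)
    while i < n:
        if pre_patch and cluster_is_compressed[i]:
            i += 1   # old code: one compressed cluster per allocation
        else:
            i = n    # new code: the whole contiguous run in one go
        batches += 1
    return batches

clusters = [True] * 1024  # 1024 consecutive compressed clusters (64 MB image)
print(allocation_batches(clusters, pre_patch=True))   # 1024 allocations
print(allocation_batches(clusters, pre_patch=False))  # 1 allocation
```

This mirrors the 8.35s-to-230ms difference reported in the commit message:
the work per overwrite drops from 1024 separate allocations to a single one.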