Hi!

14.08.2020 17:57, Alberto Garcia wrote:
> Hi,
>
> the patch is self-explanatory, but I'm using the cover letter to raise
> a couple of related questions.
>
> Since commit c8bb23cbdbe / QEMU 4.1.0 (and if the storage backend
> allows it) writing to an image created with preallocation=metadata can
> be slower (20% in my tests) than writing to an image with no
> preallocation at all.
>
> So:
>
> a) shall we include a warning in the documentation ("note that this
>    preallocation mode can result in worse performance")?

I think the best thing to do is to make it work fast in all cases, if possible 
(I assume that would be your patch + a positive answer to [b]? Or not?) :)

Andrey recently added a benchmark with some cases where c8bb23cbdbe brings 
benefits:
[PATCH v6] scripts/simplebench: compare write request performance
<1594741846-475697-1-git-send-email-andrey.shinkev...@virtuozzo.com>
queued in Eduardo's python-next: 
https://github.com/ehabkost/qemu/commit/9519f87d900b0ef30075c749fa097bd93471553f

So, as a first step, could you post your tests, so we can add them to this 
benchmark? Or post a patch to simplebench on top of Eduardo's python-next.
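
Something like the following is what I have in mind. This is a rough 
standalone sketch, not the actual simplebench API; the image name, size 
and the qemu-img bench workload parameters are made up for illustration:

  #!/usr/bin/env python3
  # Sketch: time sequential writes into a fresh qcow2 image with and
  # without metadata preallocation. Workload parameters are arbitrary.

  import subprocess
  import time

  def bench_write(prealloc):
      img = 'test.qcow2'
      subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                      '-o', f'preallocation={prealloc}', img, '1G'],
                     check=True, stdout=subprocess.DEVNULL)
      start = time.monotonic()
      # 4000 sequential 64k writes, cache=none
      subprocess.run(['qemu-img', 'bench', '-w', '-t', 'none',
                      '-c', '4000', '-s', '64k', '-f', 'qcow2', img],
                     check=True, stdout=subprocess.DEVNULL)
      return time.monotonic() - start

  for mode in ('off', 'metadata'):
      print(f'preallocation={mode}: {bench_write(mode):.2f} s')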


> b) why don't we also initialize preallocated clusters with
>    QCOW_OFLAG_ZERO? (at least when there are no subclusters involved,
>    i.e. no backing file). This would make reading from them (and
>    writing to them, after this patch) faster.

Probably they are not guaranteed to be zero on all filesystems? But I think at 
least in some cases (99% :) we could mark them as ZERO. Honestly, I may not be 
aware of the actual reasons.
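
For what it's worth, here is a quick way to look at how such clusters are 
reported. qemu-img map --output=json is real; what its "zero" field shows 
for preallocated clusters may depend on the filesystem underneath, so this 
is only a sketch for observing the current behaviour:

  #!/usr/bin/env python3
  # Sketch: dump block status of an image created with
  # preallocation=metadata and print which extents read as zeroes.

  import json
  import subprocess

  subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                  '-o', 'preallocation=metadata', 'test.qcow2', '1G'],
                 check=True, stdout=subprocess.DEVNULL)

  res = subprocess.run(['qemu-img', 'map', '--output=json', 'test.qcow2'],
                       check=True, capture_output=True, text=True)
  for extent in json.loads(res.stdout):
      print(extent['start'], extent['length'],
            'zero' if extent['zero'] else 'data')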


> Berto

> Alberto Garcia (1):
>   qcow2: Skip copy-on-write when allocating a zero cluster
>
>  include/block/block.h |  2 +-
>  block/commit.c        |  2 +-
>  block/io.c            | 20 +++++++++++++++++---
>  block/mirror.c        |  3 ++-
>  block/qcow2.c         | 26 ++++++++++++++++----------
>  block/replication.c   |  2 +-
>  block/stream.c        |  2 +-
>  qemu-img.c            |  2 +-
>  8 files changed, 40 insertions(+), 19 deletions(-)



--
Best regards,
Vladimir
