On 11.02.21 13:53, Vladimir Sementsov-Ogievskiy wrote:
> 10.02.2021 20:11, Max Reitz wrote:
>> On 29.01.21 17:50, Vladimir Sementsov-Ogievskiy wrote:
>>> Introduce a new option: compressed-cache-size, with a default of 64
>>> clusters (so as not to be less than the default of 64 max-workers for
>>> the backup job).
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <[email protected]>
>>> ---
>>>  qapi/block-core.json   |  8 +++-
>>>  block/qcow2.h          |  4 ++
>>>  block/qcow2-refcount.c | 13 +++++++
>>>  block/qcow2.c          | 87 ++++++++++++++++++++++++++++++++++++++++--
>>>  4 files changed, 108 insertions(+), 4 deletions(-)
>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>> index 9f555d5c1d..e0be6657f3 100644
>>> --- a/qapi/block-core.json
>>> +++ b/qapi/block-core.json
>>> @@ -3202,6 +3202,11 @@
>>>  #                         an image, the data file name is loaded from the image
>>>  #                         file. (since 4.0)
>>>  #
>>> +# @compressed-cache-size: The maximum size of the compressed write cache in
>>> +#                         bytes. If positive, it must not be less than the
>>> +#                         cluster size. 0 disables the feature. The default
>>> +#                         is 64 * cluster_size. (since 6.0)
>> Do we need this, really? If you don’t use compression, the cache
>> won’t use any memory, right? Do you plan on using this option?
>> I’d just set it to a sane default.
> OK for me
>> OTOH, “a sane default” poses two questions, namely whether 64 *
>> cluster_size is reasonable – with subclusters, the cluster size may be
>> rather high, so 64 * cluster_size may well be like 128 MB. Are 64
>> clusters really necessary for reasonable performance?
>> Second, I think I could live with a rather high default if clusters
>> are flushed as soon as they are full. OTOH, as I briefly touched on,
>> in practice, I suppose compressed images are just written to
>> constantly, so even if clusters are flushed as soon as they are full,
>> the cache will still remain full all the time.
>>
>> Different topic: Why is the cache disableable? I thought there are no
>> downsides?
> To compare performance, for example.
Well :D
Doesn’t seem like a reason to expose it to the outside, though, I don’t
know.
Max