--- Begin Message ---
>
> This patch increases the cache to 1GB, enough to handle an 8TB image
>
> compared with the default 32MB cache.
> fio benchmark, 4k randread/randwrite:
>
> 256GB image, 32MB cache: 40000 iops
> 1TB image, 32MB cache: 2500 iops
> 8TB image, 32MB cache: 2500 iops
> 1TB image, 1GB cache: 40000 iops
> 8TB image, 1GB cache: 40000 iops
>
>
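For reference, the numbers above follow directly from the sizing rule in
QEMU's docs/qcow2-cache.txt: fully covering an image needs one 8-byte L2
entry per cluster, i.e. disk_size / cluster_size * 8 bytes of L2 cache.
A rough Python sketch (the helper name is mine):

    # one 8-byte L2 entry per cluster of the virtual disk
    def l2_cache_full_coverage(disk_size, cluster_size):
        return disk_size // cluster_size * 8

    TiB, GiB, MiB, KiB = 1024**4, 1024**3, 1024**2, 1024
    print(l2_cache_full_coverage(8 * TiB, 64 * KiB) / GiB)    # 1.0 -> the 1GB max
    print(l2_cache_full_coverage(1 * TiB, 64 * KiB) / MiB)    # 128.0
    print(l2_cache_full_coverage(256 * GiB, 64 * KiB) / MiB)  # 32.0

So the default 32MB only fully covers a 256GB image with 64k clusters,
which matches the iops drop between 256GB and 1TB above.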
>> have you benchmarked this?
Yes, the results are in the commit message above (2500 -> 40000 iops
with a 1TB image with 64k clusters; same results with 128k clusters).
>> if so, did you compare it with using the
>> smaller cache-entry variant described in the file you linked:
I haven't tested it (to be honest I don't fully understand that part,
but the default l2-cache-entry-size is already smaller, it's 4k, and we
use a 64k or 128k cluster size).
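As far as I understand it, l2-cache-entry-size only changes the
granularity of the cache, not the total size needed for full coverage:
an entry of N bytes holds N/8 L2 entries, one per cluster. A quick
arithmetic sketch (names are mine):

    # guest data mapped by a single cache entry
    def guest_data_per_entry(entry_size, cluster_size):
        return (entry_size // 8) * cluster_size

    print(guest_data_per_entry(4096, 64 * 1024) / 1024**2)   # 32.0 MiB
    print(guest_data_per_entry(4096, 128 * 1024) / 1024**2)  # 64.0 MiB

So smaller entries mainly reduce thrashing when the cache is too small
for the working set, which shouldn't matter if we size the cache to
cover the whole image anyway.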
>> we also know the image size here, so we could use a capped, derived
>> value?
>>
>> what if the disk is resized?
One problem is disk resize, because the cache size can't be increased
without a restart. That's why I think it's better to use a big cache
size. (It's really a max value.)
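If we did want the capped, derived value, I'd expect it to look
something like this (a sketch only; the 1GB cap is this patch's max and
the function name is hypothetical):

    def derived_l2_cache(disk_size, cluster_size, cap=1024**3):
        needed = disk_size // cluster_size * 8  # 8 bytes of L2 per cluster
        return min(needed, cap)

But a resize beyond what the derived value covers would then stay slow
until the VM restarts, which is the argument for just using the max.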
>> what about image files with bigger clusters?
I have tried bigger cluster sizes (so less metadata, less memory), but
snapshot performance is not great. For example, a 1MB cluster gives a
32KB subcluster on snapshot (vs a 4KB subcluster with a 128KB cluster),
so for a 4k write you need to rewrite 32KB.
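The 32KB comes from extended_l2 splitting each cluster into a fixed 32
subclusters, so the copy-on-write unit after a snapshot is
cluster_size / 32, e.g.:

    for cluster in (128 * 1024, 1024 * 1024):
        print(cluster // 1024, "KB cluster ->",
              cluster // 32 // 1024, "KB subcluster")
    # 128 KB cluster -> 4 KB subcluster
    # 1024 KB cluster -> 32 KB subcluster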
Maybe extended_l2=on on the main image could reduce the needed cache
memory, but from my tests it doesn't seem to help; I still need to
increase the cache (I'll try to retest it).
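One possible reason, if I read the qcow2 spec right: an extended L2
entry is 128 bits instead of 64, so at the same cluster size the L2
metadata (and the cache needed to cover it) doubles; extended_l2 only
saves cache memory if the cluster size grows with it. Sketch:

    def l2_cache_needed(disk_size, cluster_size, extended_l2=False):
        entry = 16 if extended_l2 else 8  # extended L2 entries are 128-bit
        return disk_size // cluster_size * entry

    TiB, MiB, KiB = 1024**4, 1024**2, 1024
    print(l2_cache_needed(1 * TiB, 128 * KiB) / MiB)        # 64.0
    print(l2_cache_needed(1 * TiB, 128 * KiB, True) / MiB)  # 128.0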
There is some good info in the subcluster allocation article (including
in the video presentation):
https://blogs.igalia.com/berto/2020/12/03/subcluster-allocation-for-qcow2-images/
--- End Message ---