On 2025/7/16 06:10, Matthew Wilcox wrote:
> I've started looking at how the page cache can help filesystems handle
> compressed data better. Feedback would be appreciated! I'll probably
> say a few things which are obvious to anyone who knows how compressed
> files work, but I'm trying to be explicit about my assumptions.
>
> First, I believe that all filesystems work by compressing fixed-size
> plaintext into variable-sized compressed blocks. This would be a good
> point to stop reading and tell me about counterexamples.
I don't think that's the case for btrfs, unless by "fixed-size" you mean
the block size, and in that case a single block won't be compressed at all...
In btrfs, we support compressing the plaintext from 2 blocks to 128KiB
(the 128KiB limit is an artificial one).
> From what I've been reading, all your filesystems want to allocate
> extra pages in the page cache in order to store the excess data
> retrieved along with the page that you're actually trying to read.
> That's because compressing in larger chunks leads to better compression.
We don't. We just grab dirty pages up to 128KiB, and we can handle
smaller ranges, as small as two blocks.
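As a rough sketch of that behaviour (the names and helper below are made
up for illustration, not the real btrfs code), the range selection boils
down to something like:

#define MAX_COMPRESSED_RANGE	(128 * 1024)	/* the (artificial) 128KiB cap */

/*
 * Illustrative only: pick how much of a dirty range to compress as one
 * unit, anywhere from two blocks up to 128KiB; anything smaller than two
 * blocks is simply written uncompressed.
 */
static unsigned int pick_compress_len(unsigned int dirty_len,
				      unsigned int blocksize)
{
	if (dirty_len < 2 * blocksize)
		return 0;	/* below two blocks: don't compress */
	if (dirty_len > MAX_COMPRESSED_RANGE)
		return MAX_COMPRESSED_RANGE;
	return dirty_len;
}

So a 4KiB-block filesystem would compress anything from 8KiB upwards,
128KiB at a time.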
> There's some discrepancy between filesystems over whether you need
> scratch space for decompression. Some filesystems read the compressed
> data into the pagecache and decompress in-place, while other filesystems
> read the compressed data into scratch pages and decompress into the
> page cache.
Btrfs goes the scratch pages way. Decompression in-place looks a little
tricky to me. E.g. what if there is only one compressed page, and it
decompresses to 4 pages?
Won't the plaintext overwrite the compressed data halfway through?
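To make the concern concrete, here is a small illustrative check (not
taken from any filesystem; the margin formula is made up) of when in-place
decompression can be safe at all: the compressed bytes have to sit at the
tail of the destination, past a safety margin, which is the layout LZ4
documents for its in-place mode. The example above (one 4KiB compressed
page at the front of a 16KiB destination) fails the first test immediately.

#include <linux/types.h>

/* Hypothetical safety margin; real decompressors define their own. */
#define INPLACE_MARGIN(csize)	((csize) / 256 + 32)

static bool inplace_layout_is_safe(size_t dsize, size_t csize,
				   size_t buf_size, size_t coffset)
{
	/* The compressed data must sit at the very end of the buffer ... */
	if (coffset + csize != buf_size)
		return false;
	/*
	 * ... and the buffer must extend a safety margin beyond the
	 * decompressed size, so the output cursor can never overtake the
	 * not-yet-read input.
	 */
	return buf_size >= dsize + INPLACE_MARGIN(csize);
}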
> There also seems to be some discrepancy between filesystems over whether
> the decompression involves vmap() of all the memory allocated or whether
> the decompression routines can handle doing kmap_local() on individual
> pages.
Btrfs is the latter case.
All the decompression/compression routines support swapping the
input/output buffers when one of them is full.
So kmap_local() is completely feasible.
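A rough sketch of that pattern (illustrative only, not the actual btrfs
code; error handling and stream setup are omitted): the streaming
decompressor reads from one kmap_local()ed input page at a time, writes
into a workspace buffer, and whichever side runs dry gets swapped for the
next page.

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/zlib.h>

static void copy_out(struct page *dst, const void *buf, size_t len)
{
	void *kaddr = kmap_local_page(dst);

	memcpy(kaddr, buf, len);
	kunmap_local(kaddr);
}

static int decompress_pages(z_stream *strm, unsigned char *workspace_buf,
			    struct page **in_pages, unsigned int nr_in,
			    struct page **out_pages, unsigned int nr_out)
{
	unsigned int in_idx = 0, out_idx = 0;
	void *in = kmap_local_page(in_pages[0]);
	int ret;

	strm->next_in = in;
	strm->avail_in = PAGE_SIZE;
	strm->next_out = workspace_buf;
	strm->avail_out = PAGE_SIZE;

	do {
		ret = zlib_inflate(strm, Z_NO_FLUSH);

		if (strm->avail_out == 0 && out_idx < nr_out) {
			/* Output buffer full: flush it to the next page. */
			copy_out(out_pages[out_idx++], workspace_buf, PAGE_SIZE);
			strm->next_out = workspace_buf;
			strm->avail_out = PAGE_SIZE;
		}
		if (strm->avail_in == 0 && in_idx + 1 < nr_in) {
			/* Input page consumed: map the next compressed page. */
			kunmap_local(in);
			in = kmap_local_page(in_pages[++in_idx]);
			strm->next_in = in;
			strm->avail_in = PAGE_SIZE;
		}
	} while (ret == Z_OK);

	kunmap_local(in);
	if (ret != Z_STREAM_END)
		return -EIO;
	/* Flush whatever is left in the workspace buffer (partial last page). */
	if (strm->avail_out < PAGE_SIZE && out_idx < nr_out)
		copy_out(out_pages[out_idx], workspace_buf,
			 PAGE_SIZE - strm->avail_out);
	return 0;
}

Because only the input page stays mapped across iterations (the destination
page is mapped and unmapped entirely inside copy_out()), the stack ordering
rule for kmap_local() mappings is respected.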
Thanks,
Qu
> So, my proposal is that filesystems tell the page cache that their minimum
> folio size is the compression block size. That seems to be around 64k,
> so not an unreasonable minimum allocation size. That removes all the
> extra code in filesystems to allocate extra memory in the page cache.
> It means we don't attempt to track dirtiness at a sub-folio granularity
> (there's no point, we have to write back the entire compressed block
> at once). We also get a single virtually contiguous block ... if you're
> willing to ditch HIGHMEM support. Or there's a proposal to introduce a
> vmap_file() which would give us a virtually contiguous chunk of memory
> (and could be trivially turned into a noop for the case of trying to
> vmap a single large folio).
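For what it's worth, a minimal sketch of what that could look like for a
filesystem with a 64KiB compression block, assuming the existing
mapping_set_folio_min_order() helper is the right entry point (the constant
and function below are made up for illustration):

#include <linux/pagemap.h>

#define COMPRESS_BLOCK_SHIFT	16	/* assumed 64KiB compression block */

static void example_enable_min_folio_size(struct inode *inode)
{
	/*
	 * Every folio in this mapping will be at least 64KiB, so one
	 * compressed block always lands in a single folio and dirtiness
	 * never has to be tracked below that granularity.
	 */
	mapping_set_folio_min_order(inode->i_mapping,
				    COMPRESS_BLOCK_SHIFT - PAGE_SHIFT);
}

A filesystem would call something like this when it sets up the inode's
address_space.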