> > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > zswap with this setup, do you know what the actual number is?
>
> I'm not entirely sure, but I'd guess each 64K page contains 1 byte of 'a' and 65535 bytes
> of zero. A single page like that compresses down to roughly 20–30 bytes
> (a short literal plus a very long zero run, plus frame/header overhead).
> So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> "11 to 15 kilobytes" ballpark comes from.
>
> > Especially with different compressors? If it's close to 64K, this
> > might be a problem.
>
> Yes, good point. When I switch to the 'zstd' compressor, it doesn't work.
>
> > Maybe we can fill half of each page with increasing values? It should
> > still be compressible but not too compressible.
>
> I tried that; this method works with the lzo algorithm but not with zstd.
> Anyway, I am still investigating.

Do you mean the compressibility is still very high on zstd? I vaguely
remember filling a page with repeating patterns (e.g. alphabet
letters) seemed to produce a decent compression ratio, but I don't
remember the specifics.
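For what it's worth, candidate fill patterns are easy to sanity-check in userspace before wiring them into the test. A quick sketch below uses Python's stdlib zlib as a stand-in compressor (lzo and zstd aren't in the stdlib, so the exact ratios will differ from what zswap sees, but the relative trends are suggestive):

```python
import zlib

PAGE_SIZE = 64 * 1024  # 64K pages, as discussed in the thread


def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size (lower = more compressible)."""
    return len(zlib.compress(data)) / len(data)


# Pattern 1: one 'a' byte followed by zeros -- the original fill, extremely compressible.
mostly_zero = b"a" + b"\x00" * (PAGE_SIZE - 1)

# Pattern 2: first half filled with increasing byte values, second half zero.
increasing = bytes(i % 256 for i in range(PAGE_SIZE // 2)) + b"\x00" * (PAGE_SIZE // 2)

# Pattern 3: repeating alphabet letters across the whole page.
alphabet = (b"abcdefghijklmnopqrstuvwxyz" * (PAGE_SIZE // 26 + 1))[:PAGE_SIZE]

for name, data in [("mostly_zero", mostly_zero),
                   ("increasing", increasing),
                   ("alphabet", alphabet)]:
    print(f"{name}: {ratio(data):.4f}")
```

Note that all three patterns are periodic, so any LZ-family compressor will still shrink them a lot; a pattern that compresses "decently but not too well" probably needs some pseudo-random (but deterministically seeded) content mixed in.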

I am pretty sure an LLM could figure out what values will work for
different compression algorithms :)
