On Wed, Mar 25, 2026 at 02:17:42PM +0800, Li Wang wrote:
> > In summary, the problem is that 'zswpwb' does not update when zswap is
> > executed under the zstd algorithm. I'd been debugging this issue
> > separately from the kernel side.

Please ignore the test logs and conclusion above.

> I forgot to mention, this issue only observed on systems with a 64K
> pagesize (ppc64le, aarch64). I changed the aarch64 page size to 4K,
> and it passed the test every time.

Well, finally, I think I've found the root cause of the
test_no_invasive_cgroup_shrink failure.

The test sets up two cgroups:
  wb_group, which is expected to trigger zswap writeback;
  control_group, which should have pages in zswap but must not experience
  any writeback.

However, the data patterns used for each group are reversed:

wb_group uses allocate_bytes(), which only writes a single byte per page
(mem[i] = 'a'). The rest of each page is effectively zero. This data is
trivially compressible, especially by zstd, so the compressed pages easily
fit within zswap.max and writeback is never triggered.

control_group, on the other hand, uses getrandom() to dirty 1/4 of each
page, producing data that is much harder to compress. Ironically, this is
the group that does not need to trigger writeback.

So the test has the hard-to-compress data in the wrong cgroup. The fix is
to swap the allocation patterns: wb_group should use the partially random
data to ensure its compressed pages exceed zswap.max and trigger writeback,
while control_group only needs simple, easily compressible data to occupy
zswap.

I have confirmed this: with the two patterns reversed, the test passes
every time on both lzo and zstd.

Will fix in next patch version.

-- 
Regards,
Li Wang
