From: Ran Xiaokai <[email protected]>

Commit f735eebe55f8 ("memcg: multi-memcg percpu charge cache") changed
the percpu charge cache to support multiple memory cgroups
(NR_MEMCG_STOCK) instead of a single memcg per CPU.

Prior to the multi-memcg stock change, the tolerance was calculated as:
  PAGE_SIZE * MEMCG_CHARGE_BATCH * num_cpus

With NR_MEMCG_STOCK slots per CPU, the worst-case discrepancy is now:
  PAGE_SIZE * MEMCG_CHARGE_BATCH * NR_MEMCG_STOCK * num_cpus

Update the test tolerance to include the NR_MEMCG_STOCK factor to
prevent false-positive test failures.

Fixes: f735eebe55f8 ("memcg: multi-memcg percpu charge cache")
Signed-off-by: Ran Xiaokai <[email protected]>
---
 tools/testing/selftests/cgroup/test_kmem.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_kmem.c b/tools/testing/selftests/cgroup/test_kmem.c
index eeabd34bf083..15b8bb424cb5 100644
--- a/tools/testing/selftests/cgroup/test_kmem.c
+++ b/tools/testing/selftests/cgroup/test_kmem.c
@@ -19,12 +19,19 @@
 
 
 /*
- * Memory cgroup charging is performed using percpu batches 64 pages
- * big (look at MEMCG_CHARGE_BATCH), whereas memory.stat is exact. So
- * the maximum discrepancy between charge and vmstat entries is number
- * of cpus multiplied by 64 pages.
+ * Memory cgroup charging is performed using per-CPU batches to reduce
+ * accounting overhead. Each cache slot can hold up to MEMCG_CHARGE_BATCH
+ * pages for a specific memcg. The per-CPU charge cache supports multiple
+ * memcgs simultaneously (NR_MEMCG_STOCK slots).
+ *
+ * While memory.stat reports exact usage, per-CPU charges are pending
+ * until flushed. Therefore, the maximum discrepancy between charge and
+ * vmstat entries is:
+ *
+ *   PAGE_SIZE * MEMCG_CHARGE_BATCH * NR_MEMCG_STOCK * num_cpus
  */
-#define MAX_VMSTAT_ERROR (4096 * 64 * get_nprocs())
+#define NR_MEMCG_STOCK 7
+#define MAX_VMSTAT_ERROR (4096 * 64 * NR_MEMCG_STOCK * get_nprocs())
 
 #define KMEM_DEAD_WAIT_RETRIES        80
 
-- 
2.25.1
