On Wed, Jan 14, 2026 at 03:40:36PM +0800, Jiayuan Chen wrote:
> From: Jiayuan Chen <[email protected]>
> 
> Currently, kswapd_failures is reset in multiple places (kswapd,
> direct reclaim, PCP freeing, memory-tiers), but there is no way to
> trace when and why it is reset, which makes memory reclaim issues
> difficult to debug.
> 
> This patch:
> 
> 1. Introduce pgdat_reset_kswapd_failures() as a wrapper function that
>    centralizes the kswapd_failures reset logic.
> 
> 2. Add reset_kswapd_failures_reason enum to distinguish reset sources:
>    - RESET_KSWAPD_FAILURES_KSWAPD: reset from kswapd context
>    - RESET_KSWAPD_FAILURES_DIRECT: reset from direct reclaim
>    - RESET_KSWAPD_FAILURES_PCP: reset from PCP page freeing
>    - RESET_KSWAPD_FAILURES_OTHER: reset from other paths
> 
> 3. Add tracepoints for better observability:
>    - mm_vmscan_reset_kswapd_failures: traces each reset with its reason
>    - mm_vmscan_kswapd_reclaim_fail: traces each kswapd reclaim failure
> 
>    Simplified sketches of the new helper and the reset event follow.
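> 
> A minimal sketch of (1) and (2), simplified from the actual diff;
> treating kswapd_failures as an atomic_t is an assumption here, as its
> exact type varies by tree:
> 
> enum reset_kswapd_failures_reason {
> 	RESET_KSWAPD_FAILURES_KSWAPD,	/* reset from kswapd context */
> 	RESET_KSWAPD_FAILURES_DIRECT,	/* reset from direct reclaim */
> 	RESET_KSWAPD_FAILURES_PCP,	/* reset from PCP page freeing */
> 	RESET_KSWAPD_FAILURES_OTHER,	/* reset from other paths */
> };
> 
> /* Central helper: record why the counter is cleared, then clear it. */
> static inline void
> pgdat_reset_kswapd_failures(pg_data_t *pgdat,
> 			    enum reset_kswapd_failures_reason reason)
> {
> 	trace_mm_vmscan_reset_kswapd_failures(pgdat->node_id, reason);
> 	atomic_set(&pgdat->kswapd_failures, 0);
> }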
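> 
> The reset event prints the reason symbolically, which is where the
> "reason=PCP" / "reason=DIRECT" strings in the trace below come from.
> A simplified TRACE_EVENT for include/trace/events/vmscan.h
> (mm_vmscan_kswapd_reclaim_fail is analogous, carrying the failures
> count instead of a reason):
> 
> TRACE_EVENT(mm_vmscan_reset_kswapd_failures,
> 
> 	TP_PROTO(int nid, int reason),
> 
> 	TP_ARGS(nid, reason),
> 
> 	TP_STRUCT__entry(
> 		__field(int, nid)
> 		__field(int, reason)
> 	),
> 
> 	TP_fast_assign(
> 		__entry->nid = nid;
> 		__entry->reason = reason;
> 	),
> 
> 	TP_printk("nid=%d reason=%s", __entry->nid,
> 		  __print_symbolic(__entry->reason,
> 				   { RESET_KSWAPD_FAILURES_KSWAPD, "KSWAPD" },
> 				   { RESET_KSWAPD_FAILURES_DIRECT, "DIRECT" },
> 				   { RESET_KSWAPD_FAILURES_PCP,    "PCP" },
> 				   { RESET_KSWAPD_FAILURES_OTHER,  "OTHER" }))
> );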
> 
> ---
> Test results:
> 
> $ trace-cmd record -e vmscan:mm_vmscan_reset_kswapd_failures \
>                    -e vmscan:mm_vmscan_kswapd_reclaim_fail
> $ # generate memory pressure
> $ trace-cmd report
> cpus=4
> kswapd1-73  [002]  24.863112: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=1
> kswapd1-73  [002]  24.863472: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=2
> kswapd1-73  [002]  24.863813: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=3
> kswapd1-73  [002]  24.864141: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=4
> kswapd1-73  [002]  24.864462: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=5
> kswapd1-73  [002]  24.864779: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=6
> kswapd1-73  [002]  24.865103: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=7
> kswapd1-73  [002]  24.865421: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=8
> kswapd1-73  [002]  24.865737: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=9
> kswapd1-73  [002]  24.866070: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=10
> kswapd1-73  [002]  24.866385: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=11
> kswapd1-73  [002]  24.866701: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=12
> kswapd1-73  [002]  24.867016: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=13
> kswapd1-73  [002]  24.867333: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=14
> kswapd1-73  [002]  24.867649: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=15
> kswapd1-73  [002]  24.867965: mm_vmscan_kswapd_reclaim_fail: nid=1 failures=16
> kswapd0-72  [001]  25.020464: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=1
> kswapd0-72  [001]  25.021054: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=2
> kswapd0-72  [001]  25.021628: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=3
> kswapd0-72  [001]  25.022217: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=4
> kswapd0-72  [001]  25.022790: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=5
> kswapd0-72  [001]  25.023366: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=6
> kswapd0-72  [001]  25.023937: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=7
> kswapd0-72  [001]  25.024511: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=8
> kswapd0-72  [001]  25.025092: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=9
> kswapd0-72  [001]  25.025665: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=10
> kswapd0-72  [001]  25.026249: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=11
> kswapd0-72  [001]  25.026824: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=12
> kswapd0-72  [001]  25.027398: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=13
> kswapd0-72  [001]  25.027976: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=14
> kswapd0-72  [001]  25.028554: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=15
> kswapd0-72  [001]  25.029140: mm_vmscan_kswapd_reclaim_fail: nid=0 failures=16
> ann-416     [002]  25.577925: mm_vmscan_reset_kswapd_failures: nid=0 reason=PCP
> dd-417      [002]  35.111721: mm_vmscan_reset_kswapd_failures: nid=1 reason=DIRECT
> 
> Signed-off-by: Jiayuan Chen <[email protected]>
> Signed-off-by: Jiayuan Chen <[email protected]>

Thanks for adding this.

Acked-by: Shakeel Butt <[email protected]>
