On 19/01/2026 09.49, Leon Hwang wrote:
On 5/1/26 00:43, Jakub Kicinski wrote:
On Fri, 2 Jan 2026 12:43:46 +0100 Jesper Dangaard Brouer wrote:
On 02/01/2026 08.17, Leon Hwang wrote:
Introduce a new tracepoint to track stalled page pool releases,
providing better observability for page pool lifecycle issues.
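For context, such a stall tracepoint would typically be a TRACE_EVENT placed
next to the existing events in include/trace/events/page_pool.h and invoked
from page_pool_release_retry(). The sketch below is only illustrative; the
event name, fields and print format are assumptions, not necessarily what the
patch actually adds:

TRACE_EVENT(page_pool_release_stalled,

	TP_PROTO(const struct page_pool *pool, s32 inflight, u32 stall_sec),

	TP_ARGS(pool, inflight, stall_sec),

	TP_STRUCT__entry(
		__field(const struct page_pool *, pool)
		__field(s32, inflight)
		__field(u32, stall_sec)
	),

	TP_fast_assign(
		__entry->pool = pool;
		__entry->inflight = inflight;
		__entry->stall_sec = stall_sec;
	),

	/* One line per stalled retry: which pool, how many pages are still
	 * in flight, and roughly how long the release has been pending.
	 */
	TP_printk("page_pool=%p inflight=%d stalled=%us",
		  __entry->pool, __entry->inflight, __entry->stall_sec)
);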
In general I like/support adding this tracepoint for the "debuggability" of
page pool lifecycle issues.
For "observability" @Kuba added a netlink scheme[1][2] for page_pool[3],
which gives us the ability to get events and list page_pools from userspace.
I've not used this myself (yet), so I need input from others: is this
something that others have been using for page pool lifecycle issues?
My input here is the least valuable (since one may expect the person
who added the code to use it) - but FWIW, yes, we do use the PP stats to
monitor PP lifecycle issues at Meta. That said, we only monitor for the
accumulation of leaked memory from orphaned pages, as the whole reason
for adding this code was that in practice the page may be sitting in
a socket rx queue (or defer free queue, etc.). IOW, a PP which is not
getting destroyed for a long time is not necessarily a kernel issue.
What monitoring tool did people running this in production add metrics to?
People at CF recommend that I/we add this to prometheus/node_exporter.
Perhaps somebody else already added this to some other FOSS tool?
https://github.com/prometheus/node_exporter
I need input from @Kuba/others, as the "page-pool-get" docs[4] state that "Only
Page Pools associated with a net_device can be listed". Don't we want
the ability to list "invisible" page_pools to allow debugging such issues?
[1] https://docs.kernel.org/userspace-api/netlink/intro-specs.html
[2] https://docs.kernel.org/userspace-api/netlink/index.html
[3] https://docs.kernel.org/netlink/specs/netdev.html
[4] https://docs.kernel.org/netlink/specs/netdev.html#page-pool-get
The documentation should probably be updated :(
I think what I meant is that most _drivers_ didn't link their PP to the
netdev via params when the API was added. So if the user doesn't see the
page pools - the driver is probably not well maintained.
In practice, the only page pools which are not accessible / visible via the
API are page pools from already destroyed network namespaces (assuming
their netdevs were also destroyed and not re-parented to init_net).
Which I'd think is a rare case?
Looking at the code, I see that the NETDEV_CMD_PAGE_POOL_CHANGE_NTF netlink
notification is only generated once (in page_pool_destroy) and not when
we retry in page_pool_release_retry (like this patch does). In that sense,
this patch/tracepoint is catching something more than netlink provides.
First I thought we could add a netlink notification, but I can imagine
cases where this could generate too many netlink messages, e.g. a netdev
with 128 RX queues generating these every second for every RX queue.
FWIW yes, we can add more notifications. Tho, as I mentioned at the
start of my reply - the expectation is that page pools waiting for
a long time to be destroyed is something that _will_ happen in
production.
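If we did add one, a minimal sketch could throttle it to the current
once-a-minute warn cadence so the 128-RX-queue case doesn't flood netlink.
This assumes a hypothetical page_pool_release_stalled_notify() helper that
reuses the machinery already sending NETDEV_CMD_PAGE_POOL_CHANGE_NTF, and it
elides the existing pr_warn() handling:

static void page_pool_release_retry(struct work_struct *wq)
{
	struct delayed_work *dwq = to_delayed_work(wq);
	struct page_pool *pool = container_of(dwq, typeof(*pool), release_dw);
	int inflight;

	inflight = page_pool_release(pool);
	if (!inflight)
		return;

	/* Only notify at the existing DEFER_WARN_INTERVAL (60s) cadence,
	 * not on every one-second retry, to keep the message rate bounded
	 * even with many queues/pools stuck at once.
	 */
	if (time_after_eq(jiffies, pool->defer_warn)) {
		page_pool_release_stalled_notify(pool);	/* hypothetical */
		pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
	}

	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}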
Guess I've talked myself into liking this change. What do other
maintainers think? (e.g. the balance between the netlink scheme and debugging)
We added the Netlink API to mute the pr_warn() in all practical cases.
If Xiang Mei is seeing the pr_warn(), I think we should start by asking
what kernel and driver they are using, and what the usage pattern is :(
As I mentioned, most commonly the pr_warn() will trigger because the driver
doesn't link the pp to a netdev.
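For illustration, "linking the pp to a netdev" is just a matter of filling in
.netdev (and ideally .napi) in page_pool_params when the driver creates its
pools. The sketch below is not from any particular driver; the function name
and the ring/napi plumbing are made up:

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <net/page_pool/helpers.h>

static struct page_pool *example_create_rx_page_pool(struct net_device *netdev,
						      struct device *dma_dev,
						      struct napi_struct *napi,
						      unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dma_dev,	/* device doing the DMA mapping */
		.napi		= napi,
		.netdev		= netdev,	/* makes the pool visible via page-pool-get */
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	return page_pool_create(&pp_params);
}

With .netdev set, the pool shows up in the page-pool-get dump, which is why an
unlinked pool is the common way to still hit the pr_warn().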
Hi Jakub, Jesper,
Thanks for the discussion. Since netlink notifications are only emitted
at page_pool_destroy(), the tracepoint still provides additional
debugging visibility for prolonged page_pool_release_retry() cases.
Steven has reviewed the tracepoint [1]. Any further feedback would be
appreciated.
This change looks good as-is:
Acked-by: Jesper Dangaard Brouer <[email protected]>
Your patch[0] is marked as "Changes Requested".
I suggest you send a V4 with my Acked-by added.
--Jesper
[0]
https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/