On 2026/1/2 19:43, Jesper Dangaard Brouer wrote:
>
>
> On 02/01/2026 08.17, Leon Hwang wrote:
>> Introduce a new tracepoint to track stalled page pool releases,
>> providing better observability for page pool lifecycle issues.
>>
>
> In general I like/support adding this tracepoint for "debuggability"
> of page pool lifecycle issues.
>
> For "observability" @Kuba added a netlink scheme[1][2] for page_pool[3],
> which gives us the ability to get events and list page_pools from
> userspace.
> I've not used this myself (yet), so I need input on whether others
> have been using this for page pool lifecycle issues?
>
> Need input from @Kuba/others, as the "page-pool-get" docs[4] state
> that "Only Page Pools associated with a net_device can be listed".
> Don't we want the ability to list "invisible" page_pools to allow
> debugging issues?
>
> [1] https://docs.kernel.org/userspace-api/netlink/intro-specs.html
> [2] https://docs.kernel.org/userspace-api/netlink/index.html
> [3] https://docs.kernel.org/netlink/specs/netdev.html
> [4] https://docs.kernel.org/netlink/specs/netdev.html#page-pool-get
>
> Looking at the code, I see that the NETDEV_CMD_PAGE_POOL_CHANGE_NTF
> netlink notification is only generated once (in page_pool_destroy) and
> not when we retry in page_pool_release_retry (like this patch does).
> In that sense, this patch/tracepoint is catching something more than
> netlink provides. First I thought we could add a netlink notification,
> but I can imagine cases where this could generate too many netlink
> messages, e.g. a netdev with 128 RX queues generating these every
> second for every RX queue.
>
> I guess I've talked myself into liking this change. What do other
> maintainers think? (e.g. the balance between the netlink scheme and
> debugging)
>
Hi Jesper,

Thanks for the thoughtful review and for sharing the context around the
existing netlink-based observability.

I ran into a real-world issue where stalled pages were still referenced
by dangling TCP sockets. I wrote up the investigation in more detail in
my blog post “let page inflight” [1] (unfortunately only available in
Chinese at the moment).

In practice, the hardest part was identifying *who* was still holding
references to the inflight pages. With the current tooling, it is very
difficult to introspect the active users of a page once it becomes
stalled.

If we can expose more information about the current page users, such as
the user type and a user pointer, it becomes much easier to debug these
issues with BPF-based tools. For example, by tracing the existing
page_pool_state_hold and page_pool_state_release tracepoints, tools like
bpftrace [2] or bpfsnoop [3] (which I implemented) can correlate
inflight page pointers with their active users. This significantly
lowers the barrier to diagnosing page pool lifecycle problems.
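
As a rough sketch (not part of the patch, and note that the tracepoint
field is named netmem on recent kernels and page on older ones), a
bpftrace script along these lines can surface pages that were held but
never released:

  bpftrace -e '
  tracepoint:page_pool:page_pool_state_hold {
          /* remember which pool took a reference on this page */
          @inflight[args->netmem] = args->pool;
  }
  tracepoint:page_pool:page_pool_state_release {
          delete(@inflight[args->netmem]);
  }
  END {
          /* whatever is left was still inflight when tracing stopped */
          print(@inflight);
  }'

With the extra user type/pointer information, the value stored per page
could identify the actual holder instead of just the pool.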

As you noted, the existing netlink notifications are generated only at
page_pool_destroy, and not during retries in page_pool_release_retry. In
that sense, the proposed tracepoint captures a class of issues that
netlink does not currently cover, and does so without the risk of
generating excessive userspace events.
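
Until such a tracepoint lands, the retries can already be observed
crudely from userspace by probing the retry worker itself. A minimal
sketch, assuming page_pool_release_retry is not inlined and its symbol
is visible to kprobes:

  bpftrace -e '
  kprobe:page_pool_release_retry {
          /* arg0 is the work_struct embedded in struct page_pool, so
           * it identifies the stalled pool up to a fixed offset */
          @retries[arg0] = count();
  }'

This only shows that a pool is stuck retrying, not who still holds its
pages, which is why exposing the active users matters.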

Thanks again for the feedback, and I’m happy to refine the approach
based on further input from you, Kuba, or other maintainers.

Links:
[1] https://blog.leonhw.com/post/linux-networking-6-inflight-page/
[2] https://github.com/bpftrace/bpftrace/
[3] https://github.com/bpfsnoop/bpfsnoop/

Thanks,
Leon
[...]