On 2026/2/5 21:25, David Hildenbrand (Arm) wrote:
On 2/2/26 16:52, Lance Yang wrote:
On 2026/2/2 23:09, Peter Zijlstra wrote:
On Mon, Feb 02, 2026 at 10:37:39PM +0800, Lance Yang wrote:
PT_RECLAIM=y does have IPIs for unshare/collapse — those paths call
tlb_flush_unshared_tables() (for hugetlb unshare) and collapse_huge_page()
(in khugepaged collapse), which already send IPIs today (broadcast to all
CPUs via tlb_remove_table_sync_one()).
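
For reference, the broadcast today is essentially the following (paraphrased
from mm/mmu_gather.c, so details may differ across kernel versions):

	/* The IPI handler is deliberately empty: merely delivering the
	 * interrupt forces every CPU out of any IRQ-disabled lockless
	 * walk that might still see the old entries. */
	static void tlb_remove_table_smp_sync(void *arg)
	{
		/* Simply deliver the interrupt */
	}

	void tlb_remove_table_sync_one(void)
	{
		/* wait=1: return only after all CPUs ran the handler */
		smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
	}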
What PT_RECLAIM=y doesn't need an IPI for is table freeing
(__tlb_remove_table_one() uses call_rcu() instead). But table modification
(unshare, collapse) still needs an IPI to synchronize with lockless
walkers, regardless of PT_RECLAIM.
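
For contrast, a rough sketch of that PT_RECLAIM freeing path, which defers
the actual free via RCU instead of IPIs (again paraphrased, not exact
upstream code):

	static void __tlb_remove_table_one_rcu(struct rcu_head *head)
	{
		struct ptdesc *ptdesc;

		ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
		__tlb_remove_table(ptdesc);
	}

	static void __tlb_remove_table_one(void *table)
	{
		struct ptdesc *ptdesc = table;

		/* Free only after a grace period; no IPI needed here. */
		call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu);
	}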
So PT_RECLAIM=y is not broken; it already has IPIs where needed. This
series just makes those IPIs targeted instead of broadcast. Does that
clarify?
Oh bah, reading is hard. I had missed that they had more table_sync_one()
calls, rather than remove_table_one().
So you *can* replace table_sync_one() with rcu_sync(), which will provide
the same guarantees. It's just a 'little' bit slower on the update side,
but does not incur the read-side cost.
Yep, we could replace the IPI with synchronize_rcu() on the sync side:

- Currently: TLB flush → send IPI → wait for walkers to finish
- With synchronize_rcu(): TLB flush → synchronize_rcu() → wait for the
  grace period
Lockless walkers (e.g. GUP-fast) use local_irq_disable();
synchronize_rcu() also waits for regions with preemption/interrupts
disabled, so it should work, IIUC.
And then, the trade-off would be:

- Read side: zero cost (no per-CPU tracking)
- Write side: wait for an RCU grace period (potentially slower)

For collapse/unshare, that write-side latency might be acceptable :)
@David, what do you think?
Given that we just fixed the write-side latency from breaking Oracle's
databases completely, we have to be a bit careful here :)
Yep, agreed.
The thing is: on many x86 configs we don't need *any* TLB flushes or RCU
syncs.
Right. Looks like that is low-hanging fruit. I'll send that out
separately :)
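
Something along these lines, purely illustrative (tlb_flush_uses_ipi() is
an invented helper for this sketch, not an existing kernel function):

	void tlb_remove_table_sync_one(void)
	{
		/*
		 * If the TLB flush itself is IPI-based (native x86 without
		 * broadcast invalidation), it already kicked every CPU out
		 * of IRQ-disabled lockless walks, so the extra broadcast
		 * below would be redundant.
		 */
		if (tlb_flush_uses_ipi())
			return;
		smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
	}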
So "how much slower" are we talking about, especially on bigger/loaded
systems?
Unfortunately the numbers are pretty bad. On an x86-64 64-core system
under high load, each synchronize_rcu() is about *22.9* ms on average ...
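
For context, one simple way to sample that latency from a kernel module
(a hypothetical probe, not necessarily the benchmark used here):

	u64 t0, t1;

	t0 = ktime_get_ns();
	synchronize_rcu();
	t1 = ktime_get_ns();
	pr_info("synchronize_rcu(): %llu us\n", (t1 - t0) / 1000);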
So for now, neither approach looks good: tracking on the read side adds
cost to GUP-fast, and syncing on the write side (e.g. synchronize_rcu())
is too slow on large systems.