On 16/01/2026 17:36, Edgecombe, Rick P wrote:
> On Fri, 2026-01-16 at 17:28 +0000, Nikita Kalyazin wrote:
>>> I imagine this feature is really targeted towards machines running
>>> a bunch of untrusted VMs, so cloud hypervisors really. In that case
>>> the direct map will probably be carved up pretty quick. Did you
>>> consider just breaking the full direct map to 4k at the start when
>>> it's in use?
>> That's an interesting point, I haven't thought about it from this
>> perspective. We should run some tests internally to see if it'd
>> help. This will likely change with support for huge pages coming in
>> though.
> The thing is, those no_flush() helpers actually still flush if they
> need to split a page. Plus if they need to clear out lazy vmalloc
> aliases it could be another flush. There are probably a lot of
> opportunities to reduce flushing even beyond pre-split.
>
> Just curious... as far as performance, have you tested this on a big
> multi-socket system, where that flushing will hurt more? It's something
> that has always been a fear for these direct map unmapping solutions.

Yes, this is a problem that we'd like to address; we have been
discussing it in [1]. On x86 we see flushing slow down memory
population by 5-7x. We are thinking of making use of the no-direct-map
memory allocator that Brendan is working on [2].
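
To make the flush-reduction idea concrete, the rough shape we have in
mind looks like the sketch below (illustrative only, not code from this
series: gmem_zap_direct_map() is a made-up name, the pages are assumed
to be a physically contiguous run, and error unwinding is elided). All
of the set_direct_map_invalid_noflush() calls are done first, and a
single kernel TLB flush is issued at the end instead of one flush per
page:

static int gmem_zap_direct_map(struct page *page, unsigned long nr)
{
        unsigned long start = (unsigned long)page_address(page);
        unsigned long i;
        int ret;

        for (i = 0; i < nr; i++) {
                /*
                 * No flush requested here, although as noted above
                 * this can still flush internally if the direct map
                 * has to be split.
                 */
                ret = set_direct_map_invalid_noflush(page + i);
                if (ret)
                        return ret;
        }

        /* Drop lazy vmalloc aliases, then one flush for the whole range. */
        vm_unmap_aliases();
        flush_tlb_kernel_range(start, start + nr * PAGE_SIZE);

        return 0;
}

The remaining cost is the splits themselves, which is where the
pre-split idea would help further.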
[1] https://lore.kernel.org/lkml/[email protected]
[2] https://lore.kernel.org/kvm/[email protected]
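
Coming back to the pre-split suggestion: one variant we could measure
is to break the direct map covering each guest_memfd folio down to 4K
once at allocation time, so that the later invalidations never need to
split (again only a sketch with a made-up name; set_memory_4k() is the
existing x86-only helper, and this splits per folio rather than the
whole direct map up front):

static int gmem_presplit_direct_map(struct folio *folio)
{
        unsigned long addr = (unsigned long)folio_address(folio);

        /* Break any 2M/1G direct-map mappings of this range down to 4K. */
        return set_memory_4k(addr, folio_nr_pages(folio));
}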