On Mon, Jun 1, 2020 at 12:48 PM Nitin Gupta <nigu...@nvidia.com> wrote:
>
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. The Linux kernel currently does
> on-demand compaction as we request more hugepages, but this style of
> compaction incurs very high latency. Experiments with one-time full
> memory compaction (followed by hugepage allocations) show that the
> kernel is able to restore a highly fragmented memory state to a fairly
> compacted state in under 1 second on a 32G system. Such data suggests
> that more proactive compaction can help us allocate a large fraction
> of memory as hugepages while keeping allocation latencies low.
>

> Signed-off-by: Nitin Gupta <nigu...@nvidia.com>
> Reviewed-by: Vlastimil Babka <vba...@suse.cz>
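
For reference, here is a minimal userspace sketch of the kind of
experiment described above: trigger a one-time full compaction via
/proc/sys/vm/compact_memory (needs root), then time hugepage-backed
faults. The use of THP via MADV_HUGEPAGE and the 1 GiB region size are
illustrative assumptions, not the exact 32G setup from the changelog.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define ALLOC_SIZE (1UL << 30)  /* 1 GiB region, touched in 2 MiB steps */

static void trigger_full_compaction(void)
{
        /* Writing "1" to this file compacts memory in all zones. */
        int fd = open("/proc/sys/vm/compact_memory", O_WRONLY);

        if (fd < 0) {
                perror("open compact_memory");
                return;
        }
        if (write(fd, "1", 1) != 1)
                perror("write compact_memory");
        close(fd);
}

int main(void)
{
        struct timespec t0, t1;
        size_t off;
        char *buf;

        trigger_full_compaction();

        buf = mmap(NULL, ALLOC_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* Ask the kernel to back this range with transparent hugepages. */
        madvise(buf, ALLOC_SIZE, MADV_HUGEPAGE);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (off = 0; off < ALLOC_SIZE; off += 2UL << 20)
                buf[off] = 1;  /* fault in one page per 2 MiB chunk */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("fault-in took %.3f ms\n",
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
        return 0;
}

Whether the faults actually land on hugepages (check AnonHugePages in
/proc/meminfo) depends on how fragmented memory is, which is exactly
what the one-time compaction, and proactive compaction in general, is
meant to address.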

(+CC Khalid)

Can this be queued for upstream inclusion now? Sorry, I'm a bit
rusty on the upstream flow these days.

Thanks,
Nitin
