On 03/10/2018 16:34, Vitaly Kuznetsov wrote:
> Dave Hansen <dave.han...@linux.intel.com> writes:
> 
>> On 10/03/2018 06:52 AM, Vitaly Kuznetsov wrote:
>>> It is more than just memmaps (e.g. forking udev process doing memory
>>> onlining also needs memory) but yes, the main idea is to make the
>>> onlining synchronous with hotplug.
>>
>> That's a good theoretical concern.
>>
>> But, is it a problem we need to solve in practice?
> 
> Yes, unfortunately. It was previously discovered that when we try to
> hotplug tons of memory to a low memory system (this is a common scenario
> with VMs) we end up with OOM because for all new memory blocks we need
> to allocate page tables, struct pages, ... and we need memory to do
> that. The userspace program doing memory onlining also needs memory to
> run, and in case it prefers to fork to handle hundreds of notifications
> ... well, it may get OOMkilled before it manages to online anything.
> 
> Allocating all kernel objects from the newly hotplugged blocks would
> definitely help to manage the situation, but as I said this won't
> completely solve the 'forking udev' problem (it will likely remain in
> 'extreme' cases only; we could probably work around it by onlining
> with a dedicated process which doesn't do memory allocation).
> 
> 
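For reference, the usual userspace trigger is a udev rule that writes the
block's sysfs state attribute directly, which at least avoids spawning an
external helper per event. A hypothetical rule (file name is illustrative)
would look roughly like:

```
# /etc/udev/rules.d/40-memory-online.rules (hypothetical path):
# online memory blocks as soon as they appear, via direct attribute
# assignment rather than a forked RUN+= helper.
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
```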

I guess the problem is even worse. We always have two phases

1. add memory - requires memory allocation
2. online memory - might require memory allocations e.g. for slab/slub

So if we have just added memory but don't have sufficient memory to start a
user space process to trigger onlining, then in some scenarios we most
likely also don't have sufficient memory to online that memory right away.
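Phase 2 can in principle be driven by a small shell loop instead of a
forking helper. Against a real kernel the files live under
/sys/devices/system/memory/memory*/state and writing them requires root;
the sketch below simulates that layout in a temp directory so the loop
itself can be shown end to end:

```shell
# Simulate the sysfs memory-block layout in a temp dir (block numbers
# are made up; on real hardware they depend on the block size).
root=$(mktemp -d)
mkdir -p "$root/memory32" "$root/memory33"
echo offline > "$root/memory32/state"
echo online  > "$root/memory33/state"

# The onlining loop itself: against the real kernel, $root would be
# /sys/devices/system/memory instead.
for st in "$root"/memory*/state; do
    if [ "$(cat "$st")" = "offline" ]; then
        echo online > "$st"
    fi
done

cat "$root/memory32/state"   # -> online
```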

We would have to allocate all new memory for phases 1 and 2 from the memory
to be onlined. I guess the latter part is less trivial.

So while onlining the memory from the kernel might make things a little
more robust, there would still be a chance of OOM / onlining failures.

-- 

Thanks,

David / dhildenb
