Gilles Chanteperdrix wrote:
>>  Some embedded platforms have small TLBs compared to the VM hungriness
>>  of certain real-time tasks. H_HUGETLB would rely on HugeTLB[1] to back
>>  the allocation. Scattered accesses to this memory would suffer less
>>  from the pressure of minor page-faults/TLB refills, which is a good
>>  thing(tm) for real-time.
>I do not understand the need for a kernel option and a special
>filesystem; why doesn't the kernel use these hugetlb pages for large
>allocations?

You must be speaking of an ideal world, with almighty smart OS ;-)

I do not have the answer to your question. I guess we could find 
some lengthy discussion over lkml about the virtues and side effects 
of automagically using HugeTLB for large allocations. Maybe the kernel
hackers were not confident enough in HugeTLB, and just tolerated an
optional subsystem requested by evil big-iron applications. Rem: not all 
MMUs (hardware) and/or Linux arches (software) have HugeTLB available.
Maybe they thought it was not worth wasting hard-to-find 
contiguous memory, while lazy allocation pays off so well when 
a process allocates more memory than it is really using.

As far as I understand, HugeTLB is only an opt-in feature, for when 
performance or predictability is expected. To put it another way, in the 
Xenomai world, HugeTLB can reclaim part of the performance lost when going 
from kernel space to user space. In kernel space, RAM is generally covered 
by a few large TLB entries or similar. This is not the case in user space 
with 4K page allocations. Of course, the issue appears only with big 
working sets.


Xenomai-core mailing list
