Thanks Mike. I read the doc, but it is not explicit that unused files still
count against the huge page pool.
On Tuesday, July 17, 2018, 4:57:04 PM PDT, Mike Kravetz
<[email protected]> wrote:
On 07/17/2018 12:05 PM, David Frank wrote:
> Hi,
> According to the instructions, I have to mount a directory on hugetlbfs
> and create files in it to use the mmap huge page feature. But the issue
> is that the files in the huge directory take up the huge pages configured
> through
> vm.nr_hugepages =
>
> even when the files are not in use.
>
> When the total size of the files in the huge directory equals
> vm.nr_hugepages * huge page size, mmap fails with 'cannot allocate memory'
> if the file to be mapped is in the huge directory or the call uses the
> MAP_HUGETLB flag.
>
> Basically, I have to move the files out of the huge directory to free up
> huge pages.
>
> Am I missing anything here?
>
No, that is working as designed.
hugetlbfs filesystems are generally pre-allocated with nr_hugepages
huge pages, and that is the upper limit of huge pages available. You can
use overcommit/surplus pages to try to exceed the limit, but that
comes with its own set of potential issues.
If you have not done so already, please see Documentation/vm/hugetlbpage.txt
in the kernel source tree.
--
Mike Kravetz
_______________________________________________
Kernelnewbies mailing list
[email protected]
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies