On Wed, Jul 30, 2014 at 5:22 PM, <[email protected]> wrote:

> On Wed, 30 Jul 2014 15:06:39 -0500, Xin Tong said:
>
>
> > 2. modify the kernel (maybe extensively) to allocate 2MB pages by default.
>
> How fast do you run out of memory if you do that every time you actually
> only need a few 4K pages?  (In other words - think why that isn't the
> default behavior already :)
>

I am planning to use this only for workloads with very large memory
footprints, e.g. Hadoop, TPC-C, etc.

BTW, I see the Linux kernel uses hugetlbfs to manage huge pages. Every API
call (mmap, shmget, etc.) ends up creating a hugetlbfs file before the huge
pages can be allocated. Why can't huge pages be allocated the same way as
4K pages? What's the point of having hugetlbfs?
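
For reference, this is roughly the mmap(MAP_HUGETLB) path I am asking
about. A minimal, untested sketch; MAP_HUGETLB is from the mmap(2) man
page, and the 2MB size assumes the default huge page size on x86-64:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)   /* one 2MB huge page */

    int main(void)
    {
            /* Anonymous mapping backed by huge pages; the caller never
             * opens a hugetlbfs file explicitly. */
            void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                              -1, 0);
            if (addr == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* touch the page so it is actually faulted in */
            *(char *)addr = 0;

            munmap(addr, LENGTH);
            return 0;
    }

As far as I can tell, the kernel still backs even this anonymous mapping
through its internal hugetlbfs mount, which is exactly the part I am
wondering about.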

Xin
_______________________________________________
Kernelnewbies mailing list
[email protected]
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
