On Thu, 14 Jun 2012 17:40:31 +0200, telenn barz wrote:
> Hi all,
>
> This post is a request for clarification on some features of
> libhugetlbfs. I realize that this mailing list is not intended for
> this kind of request for help, but after searching unsuccessfully in the
> mailing list archive, reading the man pages of hugetlbfs, hugeadm and
> hugectl, as well as the excellent article series on LWN.net (the
> second one in particular: http://lwn.net/Articles/375096/), I
> didn't find any other relevant place to ask for help. So sorry if I
> bother you with my noob questions, and thanks in advance to anyone
> who would take a little time to answer.
>
> My questions are related to automatic backing of memory regions, when
> the system supports multiple page sizes. Say for instance 4, 16, 64,
> 256 KB, 1, 4, 16, 64, 256 MB, 1, 4 GB page sizes. We also make the
> assumption that pools of each page size have been configured
> (hugeadm), and that the application has been pre-linked to
> libhugetlbfs.
>
> Question 1:
>
> Does libhugetlbfs optimally back text, data and bss segments? I mean,
> if a text segment is 5,259,264 bytes, will it be mapped with a
> combination of "4 MB + 1 MB + 16 KB" page sizes? In other words:
> when using hugectl, is it allowed to repeat the "--text",
> "--data", "--bss" options with different page sizes, or does it only
> work with a given page size?

Each segment will only work with a single page size. This page size can 
be selected at run time by passing the desired page size to the 
appropriate segment flag (e.g. --text=4M --bss=16K).  If no page size is 
specified, the system default huge page size is used.  So in your example, 
with a text segment of 5,259,264 bytes and a system default of 4MB huge 
pages, 2*4MB pages will be used.  Note that repeating a --text option 
does not combine page sizes; each repetition simply overwrites the 
previously requested page size.
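For example, a sketch of the invocations described above (flag spellings
as in the hugectl man page; the binary name ./app is a placeholder):

```shell
# Back the text segment with 4MB huge pages and the BSS with 16KB
# pages; the data segment, left unspecified, gets the system default.
hugectl --text=4M --bss=16K ./app

# Repeating a flag does not combine page sizes: the last value wins,
# so this requests 1MB (not 4MB + 1MB) pages for the text segment.
hugectl --text=4M --text=1M ./app
```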

>
> Question 2:
> Same question for the heap.

The rules for the heap are the same as for the other segments with 
respect to huge page size.
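A sketch of heap backing, again with one page size per segment (the
--heap flag and the HUGETLB_MORECORE variable are documented in the
hugectl and libhugetlbfs man pages; the 1M value and ./app are just
illustrative):

```shell
# Back malloc()/brk() allocations with 1MB huge pages.
hugectl --heap=1M ./app

# Roughly equivalent via the environment, for a process running with
# libhugetlbfs loaded; once the configured pool is exhausted,
# allocations fall back to normal base pages.
HUGETLB_MORECORE=1M LD_PRELOAD=libhugetlbfs.so ./app
```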

>
> Question 3:
> Is it possible to limit the number of huge pages to be allocated per
> process for the heap (knowing that once this limit is reached, next
> allocations will fallback to the default page size, 4 KB) ?

There is a set of patches under discussion to add a cgroup controller 
for huge pages; AFAIK it has not been merged yet, though I believe it is 
close (see here: https://lkml.org/lkml/2012/6/9/22).  Until then, the 
best way I can think of to limit huge page usage is to give each user a 
separate hugetlbfs mount point, each with its own size limit.
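A sketch of the per-mount-point approach (the pagesize=, size= and uid=
mount options are from the kernel hugetlbfs documentation; HUGETLB_PATH
is from the libhugetlbfs environment variables; all paths, sizes and
uids here are placeholders):

```shell
# One mount point per user, each capped at a different total size.
mount -t hugetlbfs -o pagesize=2M,size=256M,uid=1000 none /mnt/huge-alice
mount -t hugetlbfs -o pagesize=2M,size=128M,uid=1001 none /mnt/huge-bob

# Point libhugetlbfs at a specific mount; heap allocations beyond the
# 256M cap will fall back to base pages.
HUGETLB_PATH=/mnt/huge-alice hugectl --heap ./app
```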

>
> Regards,
> Telenn
>
>



_______________________________________________
Libhugetlbfs-devel mailing list
Libhugetlbfs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libhugetlbfs-devel
