On Fri, Nov 07, 2008 at 09:08:14AM +1100, David Gibson wrote:
> On Thu, Nov 06, 2008 at 11:19:18AM -0600, Adam Litke wrote:
> > On Thu, 2008-11-06 at 16:20 +0000, Mel Gorman wrote:
> > > +.SH NAME
> > > +hugetlbfs_unlinked_fd, hugetlbfs_unlinked_fd_for_size \- Open an unlinked file on a hugetlbfs filesystem
> > > +.SH SYNOPSIS
> > > +.B #include <hugetlbfs.h>
> > > +.br
> > > +
> > > +.br
> > > +.B int hugetlbfs_unlinked_fd(void);
> > > +.br
> > > +.B int hugetlbfs_unlinked_fd_for_size(long page_size);
> > > +.SH DESCRIPTION
> > > +
> > > +\fBhugetlbfs_unlinked_fd()\fR and its page-size-aware companion
> > > +\fBhugetlbfs_unlinked_fd_for_size()\fR discover a suitable hugetlbfs mount
> > > +point, open a file, unlink it and return the descriptor. This is mainly
> > > +used internally by libhugetlbfs to create memory regions with mmap(). An
> > > +example usage might be to create a heap by allocating an unlinked file
> > > +descriptor and using mmap() and munmap() to grow and shrink the heap
> > > +respectively.
> > > +
> > > +.SH RETURN VALUE
> > > +
> > > +Both functions return a valid file descriptor on success and -1 on failure.
> > > +errno is not necessarily set to anything useful when an error occurs.
> > > +
> > > +.SH NOTES
> > > +
> > > +In the current version of libhugetlbfs, it is difficult to
> > > +programmatically determine why an unlinked file could not be created.
> > > +The user will have to read messages written to the console to diagnose
> > > +and correct the error.
> > 
> > 
> > I think it might be useful to describe in the man page why an _unlinked_
> > file descriptor is desired, since this isn't immediately obvious. It makes
> > the fd behave more like anonymous memory (the huge pages are freed after
> > an unmap and the data will not persist), and another process cannot open
> > the same file to snoop the data.
> 
> Uh.. the pages won't be freed on unmap, only on close().
> 

In the MAP_PRIVATE case, they get freed, and in the MAP_SHARED case, one would
have to call ftruncate(). Yeah, it's not amazingly easy or anything, but it's
doable. Here is a program I linked directly against the libhugetlbfs .o files
just to get the read_meminfo() function. It shows a heap being created, and the
pages being consumed and then freed as the heap grows and shrinks.

I can explain it more in the page.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libhugetlbfs_privutils.h>
#include <hugetlbfs.h>
#include <sys/mman.h>
#include <string.h>

#define HEAPSTART (void *)0xA0000000

int main() {
        int fd;
        int i;
        char *buf, *extent;
        size_t len = gethugepagesize();

        printf("Hugepages free at start: %ld\n", read_meminfo("HugePages_Free:"));

        /* Create fd for hugetlbfs-backed heap */
        fd = hugetlbfs_unlinked_fd();
        if (fd < 0) {
                printf("Failed to create unlinked fd on hugetlbfs\n");
                perror("unlinked_fd");
                exit(EXIT_FAILURE);
        }

        /* Create initial mapping */
        buf = mmap(HEAPSTART, len, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                exit(EXIT_FAILURE);
        }

        /* Grow mapping */
        for (i = 1; i < 10; i++) {
                fflush(NULL);
                extent = mmap(buf + gethugepagesize()*i,
                        gethugepagesize(),
                        PROT_READ|PROT_WRITE,
                        MAP_SHARED|MAP_FIXED,
                        fd, gethugepagesize()*i);
                if (extent == MAP_FAILED) {
                        perror("extent mmap");
                        exit(EXIT_FAILURE);
                }

                fflush(NULL);
                memset(buf, 0, gethugepagesize()*i);
                printf("Hugepages free after growth %d: %ld\n",
                        i, read_meminfo("HugePages_Free:"));
        }

        /* Shrink mapping */
        for (i = 9; i >= 1; i--) {
                munmap(buf + gethugepagesize()*i, gethugepagesize());
                ftruncate(fd, gethugepagesize()*i);
                printf("Hugepages free after shrink %d: %ld\n",
                        i, read_meminfo("HugePages_Free:"));
        }

        close(fd);
        return 0;
}

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
