On Wed, 2008-11-05 at 14:02 +0000, Mel Gorman wrote:
> Currently the caller of get_huge_pages() is expected to align the length to
> the hugepage boundary. This requires the caller to know alignment is required,
> have an ALIGN() macro and know which pagesize is in use. As well as being
> a useless and unnecessary burden, it prevents anything useful being done
> with the wasted bytes. This patch relieves the caller of some responsibility.
>
> Signed-off-by: Mel Gorman <[EMAIL PROTECTED]>
Acked-by: Adam Litke <[EMAIL PROTECTED]>

> ---
>  alloc.c |   32 ++++++++++++++++++++------------
>  1 files changed, 20 insertions(+), 12 deletions(-)
>
> diff --git a/alloc.c b/alloc.c
> index 750b2cb..b87a60d 100644
> --- a/alloc.c
> +++ b/alloc.c
> @@ -69,16 +69,17 @@ static void *fallback_base_pages(size_t len, ghp_t flags)
>   * flags: Flags specifying the behaviour of the function
>   *
>   * This function allocates a region of memory backed by huge pages and
> - * at least hugepage-aligned. This is not a suitable drop-in for malloc()
> - * and is only suitable in the event the length is expected to be
> - * hugepage-aligned. However, a malloc-like library could use this function
> - * to create additional heap similar in principal to what morecore does for
> - * glibc malloc.
> + * at least hugepage-aligned. This is not a suitable drop-in for malloc().
> + * As the length is always aligned to a hugepage-boundary, on average
> + * half a hugepage will be wasted unless care is taken. The intention is that
> + * a malloc-like library uses this function to create additional heap similar
> + * in principal to what morecore does for glibc malloc.
>   */
>  void *get_huge_pages(size_t len, ghp_t flags)
>  {
>  	void *buf;
>  	int heap_fd;
> +	size_t aligned_len, wasteage;
>
>  	/* Create a file descriptor for the new region */
>  	heap_fd = hugetlbfs_unlinked_fd();
> @@ -87,15 +88,22 @@ void *get_huge_pages(size_t len, ghp_t flags)
>  		return NULL;
>  	}
>
> +	/* Align the len parameter */
> +	aligned_len = ALIGN(len, gethugepagesize());
> +	wasteage = aligned_len - len;
> +	if (wasteage != 0)
> +		DEBUG("get_huge_pages: Wasted %zd bytes due to alignment\n",
> +			wasteage);
> +
>  	/* Map the requested region */
> -	buf = mmap(NULL, len, PROT_READ|PROT_WRITE,
> -		MAP_PRIVATE, heap_fd, 0);
> +	buf = mmap(NULL, aligned_len, PROT_READ|PROT_WRITE,
> +		MAP_PRIVATE, heap_fd, 0);
>  	if (buf == MAP_FAILED) {
>  		close(heap_fd);
>
>  		/* Try falling back to base pages if allowed */
>  		if (flags & GHP_FALLBACK)
> -			return fallback_base_pages(len, flags);
> +			return fallback_base_pages(aligned_len, flags);
>
>  		WARNING("get_huge_pages: New region mapping failed (flags: 0x%lX): %s\n",
>  			flags, strerror(errno));
> @@ -103,19 +111,19 @@ void *get_huge_pages(size_t len, ghp_t flags)
>  	}
>
>  	/* Fault the region to ensure accesses succeed */
> -	if (hugetlbfs_prefault(heap_fd, buf, len) != 0) {
> -		munmap(buf, len);
> +	if (hugetlbfs_prefault(heap_fd, buf, aligned_len) != 0) {
> +		munmap(buf, aligned_len);
>  		close(heap_fd);
>
>  		/* Try falling back to base pages if allowed */
>  		if (flags & GHP_FALLBACK)
> -			return fallback_base_pages(len, flags);
> +			return fallback_base_pages(aligned_len, flags);
>  	}
>
>  	/* Close the file so we do not have to track the descriptor */
>  	if (close(heap_fd) != 0) {
>  		WARNING("Failed to close new heap fd: %s\n", strerror(errno));
> -		munmap(buf, len);
> +		munmap(buf, aligned_len);
>  		return NULL;
>  	}

--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center
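For illustration, here is a minimal caller sketch showing what this change buys: the caller passes the raw length and no longer needs its own ALIGN() macro or a gethugepagesize() call. get_huge_pages(), free_huge_pages() and GHP_DEFAULT are the libhugetlbfs names; the rest of the program is a hypothetical example, not code from the patch.

    #include <stdio.h>
    #include <string.h>
    #include <hugetlbfs.h>

    int main(void)
    {
    	/* Deliberately not a multiple of the hugepage size; with this
    	 * patch the library rounds the length up internally instead of
    	 * requiring the caller to do it.
    	 */
    	size_t len = 3 * 1024 * 1024;
    	void *buf;

    	buf = get_huge_pages(len, GHP_DEFAULT);
    	if (buf == NULL) {
    		fprintf(stderr, "get_huge_pages failed\n");
    		return 1;
    	}

    	memset(buf, 0, len);	/* only touch the bytes actually requested */
    	free_huge_pages(buf);
    	return 0;
    }

Before the patch, the same caller would have had to compute something like ALIGN(len, gethugepagesize()) itself and pass the rounded-up value, with the slack bytes simply lost to it.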