On (10/10/08 14:36), [EMAIL PROTECTED] didst pronounce:
> On Fri, Oct 10, 2008 at 02:20:54PM +0100, Mel Gorman wrote:
> > On (09/10/08 20:20), Andy Whitcroft didst pronounce:
> > > From: Adam Litke <[EMAIL PROTECTED]>
> > > 
> > > Now that libhugetlbfs supports multiple huge page sizes, it would be nice to
> > > have a way to discover the huge page sizes that are supported by the system.
> > > This new call should adhere to the semantics established by precedent
> > > implementations (see
> > > http://docs.sun.com/app/docs/doc/816-5168/getpagesizes-3c?a=view).  The
> > > following patch implements a gethugepagesizes() call with these prevailing
> > > semantics as described below:
> > > 
> > > NAME
> > >   gethugepagesizes() - Get the system supported huge page sizes
> > > 
> > > SYNOPSIS
> > >   #include <hugetlbfs.h>
> > > 
> > >   int gethugepagesizes(long pagesizes[], int n_elem)
> > > 
> > > DESCRIPTION
> > >   The  gethugepagesizes()  function  returns  either the number of system
> > >   supported huge page sizes or the sizes  themselves.   If  pagesizes  is
> > >   NULL and n_elem is 0, then the number of huge pages the system supports
> > >   is returned.  Otherwise, pagesizes is filled with at most  n_elem  page
> > >   sizes.
> > > 
> > > RETURN VALUE
> > >   On  success, either the number of huge page sizes supported by the sys-
> > >   tem or the number of huge page sizes stored in pagesizes  is  returned.
> > >   On failure, -1 is returned and errno is set appropriately.
> > > 
> > > ERRORS
> > >   EINVAL  n_elem  is  less  than  zero or n_elem is greater than zero and
> > >                   pagesizes  is NULL.
> > > 
> > >   Also see opendir(3) for other possible values for errno.
> > > 
> > > Open discussion points:
> > > 
> > > This call will return all huge page sizes as reported by the kernel.  Not all
> > > of these sizes may be usable by the programmer since mount points may not be
> > > available for all sizes.  To test whether a size will be usable by
> > > libhugetlbfs, hugetlbfs_find_path_for_size() can be called on a specific size
> > > to see if a mount point is configured.  I think this requirement is sensible.
> > > The only problem I see is that hugetlbfs_find_path_for_size() is not an
> > > obvious function name for this usage.
> > > 
> > > Signed-off-by: Adam Litke <[EMAIL PROTECTED]>
> > > ---
> > >  hugetlbfs.h              |    1 +
> > >  hugeutils.c              |   65 +++++++++++++
> > >  tests/Makefile           |    3 +-
> > >  tests/gethugepagesizes.c |  226 ++++++++++++++++++++++++++++++++++++++++++++++
> > >  tests/run_tests.sh       |    1 +
> > >  5 files changed, 295 insertions(+), 1 deletions(-)
> > >  create mode 100644 tests/gethugepagesizes.c
> > > 
> > > diff --git a/hugetlbfs.h b/hugetlbfs.h
> > > index fbff43b..985e138 100644
> > > --- a/hugetlbfs.h
> > > +++ b/hugetlbfs.h
> > > @@ -22,6 +22,7 @@
> > >  #define HUGETLBFS_MAGIC  0x958458f6
> > >  
> > >  long gethugepagesize(void);
> > > +int gethugepagesizes(long pagesizes[], int n_elem);
> > >  int hugetlbfs_test_path(const char *mount);
> > >  long hugetlbfs_test_pagesize(const char *mount);
> > >  const char *hugetlbfs_find_path(void);
> > > diff --git a/hugeutils.c b/hugeutils.c
> > > index 21f788d..a48fa36 100644
> > > --- a/hugeutils.c
> > > +++ b/hugeutils.c
> > > @@ -552,6 +552,71 @@ long gethugepagesize(void)
> > >   return hpage_size;
> > >  }
> > >  
> > > +int gethugepagesizes(long pagesizes[], int n_elem)
> > > +{
> > > + long default_size;
> > > + DIR *sysfs;
> > > + struct dirent *ent;
> > > + int nr_sizes = 0;
> > > +
> > > + if (n_elem < 0) {
> > > +         errno = EINVAL;
> > > +         return -1;
> > > + }
> > > +
> > > + if (n_elem > 0 && pagesizes == NULL) {
> > > +         errno = EINVAL;
> > > +         return -1;
> > > + }
> > > +
> > > + errno = 0;
> > > +
> > > + /* Get the system default size from /proc/meminfo */
> > > + default_size = read_meminfo("Hugepagesize:") * 1024;
> > > + if (default_size < 0)
> > > +         return 0;
> > > +
> > > + if (n_elem && pagesizes)
> > > +         pagesizes[nr_sizes] = default_size;
> > > + nr_sizes++;
> > > + if (n_elem && (nr_sizes == n_elem))
> > > +         return nr_sizes;
> > > +
> > > + /*
> > > +  * Scan sysfs to look for other sizes.
> > > +  * Non-existing dir is not an error, we got one size from /proc/meminfo.
> > > +  */
> > > + sysfs = opendir(SYSFS_HUGEPAGES_DIR);
> > > + if (!sysfs) {
> > > +         if (errno == ENOENT) {
> > > +                 errno = 0;
> > > +                 return nr_sizes;
> > > +         } else 
> > > +                 return -1;
> > 
> > If the sysfs entry exists but we fail to open it, errno will be set to
> > whatever opendir() chose and we fail. I agree with failing, but maybe
> > errno should be set to EACCES to indicate that other page sizes
> > existed but we couldn't read them?
> 
> We are assuming opendir() will leave it as something sane.  EACCES
> doesn't really tell us what was wrong.
> 

Ok, I will cover that in the man page then

> > > + }
> > > + while ((ent = readdir(sysfs)) && 
> > > +                         ((n_elem == 0) || (nr_sizes < n_elem))) {
> > > +         long size;
> > > +
> > > +         if (strncmp(ent->d_name, "hugepages-", 10))
> > > +                 continue;
> > > +
> > > +         size = strtol(ent->d_name + 10, NULL, 10);
> > > +         if (size == LONG_MIN || size == LONG_MAX)
> > > +                 continue;
> > > +         size *= 1024; /* Convert from KB to Bytes */
> > > +
> > > +         if (size == default_size)
> > > +                 continue;
> > > +         if (n_elem && pagesizes)
> > > +                 pagesizes[nr_sizes] = size;
> > > +         nr_sizes++;
> > > + }
> > > + closedir(sysfs);
> > > +
> > > + return nr_sizes;
> > > +}
> > > +
> > >  int hugetlbfs_test_path(const char *mount)
> > >  {
> > >   struct statfs64 sb;
> > > diff --git a/tests/Makefile b/tests/Makefile
> > > index beee18e..fdbe864 100644
> > > --- a/tests/Makefile
> > > +++ b/tests/Makefile
> > > @@ -7,7 +7,8 @@ LIB_TESTS = gethugepagesize test_root find_path unlinked_fd misalign \
> > >   truncate_reserve_wraparound truncate_sigbus_versus_oom \
> > >   map_high_truncate_2 truncate_above_4GB direct \
> > >   misaligned_offset brk_near_huge task-size-overrun stack_grow_into_huge \
> > > - counters quota heap-overflow get_huge_pages shmoverride_linked
> > > + counters quota heap-overflow get_huge_pages shmoverride_linked \
> > > + gethugepagesizes
> > >  LIB_TESTS_64 = straddle_4GB huge_at_4GB_normal_below \
> > >   huge_below_4GB_normal_above
> > >  NOLIB_TESTS = malloc malloc_manysmall dummy heapshrink shmoverride_unlinked
> > > diff --git a/tests/gethugepagesizes.c b/tests/gethugepagesizes.c
> > > new file mode 100644
> > > index 0000000..34e9957
> > > --- /dev/null
> > > +++ b/tests/gethugepagesizes.c
> > > @@ -0,0 +1,226 @@
> > > +/*
> > > + * libhugetlbfs - Easy use of Linux hugepages
> > > + * Copyright (C) 2008 Adam Litke, IBM Corporation.
> > > + *
> > > + * This library is free software; you can redistribute it and/or
> > > + * modify it under the terms of the GNU Lesser General Public License
> > > + * as published by the Free Software Foundation; either version 2.1 of
> > > + * the License, or (at your option) any later version.
> > > + *
> > > + * This library is distributed in the hope that it will be useful, but
> > > + * WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > > + * Lesser General Public License for more details.
> > > + *
> > > + * You should have received a copy of the GNU Lesser General Public
> > > + * License along with this library; if not, write to the Free Software
> > > + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
> > > + */
> > > +
> > > +#define _GNU_SOURCE
> > > +
> > > +#include <stdlib.h>
> > > +#include <stdio.h>
> > > +#include <unistd.h>
> > > +#include <dirent.h>
> > > +#include <dlfcn.h>
> > > +#include <sys/types.h>
> > > +#include <sys/stat.h>
> > > +#include <hugetlbfs.h>
> > > +
> > > +#include "hugetests.h"
> > > +
> > > +#define REAL_SYSFS_DIR   "/sys/kernel/mm/hugepages/"
> > > +char fake_sysfs[] = "/tmp/sysfs-XXXXXX";
> > > +DIR *(*real_opendir)(const char *name);
> > > +int cleanup_dir = 0;
> > > +
> > > +long (*real_read_meminfo)(const char *tag);
> > > +enum {
> > > + OVERRIDE_OFF,           /* Pass-through to real function */
> > > + OVERRIDE_ON,            /* Override with local function */
> > > + OVERRIDE_MISSING,       /* Emulate missing support */
> > > +};
> > > +int meminfo_state = OVERRIDE_OFF;
> > > +int sysfs_state = OVERRIDE_OFF;
> > > +
> > > +/*
> > > + * Override opendir so we'll open the fake sysfs dir if intended
> > > + */
> > > +DIR *opendir(const char *name)
> > > +{
> > > + if (!real_opendir)
> > > +         real_opendir = dlsym(RTLD_NEXT, "opendir");
> > > +
> > > + /* Only override calls to the sysfs dir */
> > > + if (strcmp(name, REAL_SYSFS_DIR))
> > > +         return real_opendir(name);
> > > +
> > > + switch (sysfs_state) {
> > > + case OVERRIDE_OFF:
> > > +         return real_opendir(name);
> > > + case OVERRIDE_ON:
> > > +         /* Only safe to override if fake_sysfs was set up */
> > > +         if (cleanup_dir)
> > > +                 return real_opendir(fake_sysfs);
> > > +         else
> > > +                 FAIL("Trying to override opendir before initializing "
> > > +                         "fake_sysfs directory\n");
> > > + default:
> > > +         errno = ENOENT;
> > > +         return NULL;
> > > + }
> > > +}
> > > +
> > > +/*
> > > + * Override read_meminfo to simulate various conditions
> > > + */
> > > +long read_meminfo(const char *tag)
> > > +{
> > > + if (!real_read_meminfo)
> > > +         real_read_meminfo = dlsym(RTLD_NEXT, "read_meminfo");
> > > +
> > > + /* Only override calls that check the page size */
> > > + if (strcmp(tag, "Hugepagesize:"))
> > > +         return real_read_meminfo(tag);
> > > +
> > > + switch (meminfo_state) {
> > > +         case OVERRIDE_OFF:      return real_read_meminfo(tag);
> > > +         case OVERRIDE_ON:       return 16 * 1024;
> > > +         default:                return -1;
> > > + }
> > > +}
> > > +
> > > +void cleanup_fake_sysfs(void)
> > > +{
> > > + DIR *dir;
> > > + struct dirent *ent;
> > > + char fname[PATH_MAX+1];
> > > +
> > > + cleanup_dir = 0;
> > > + dir = real_opendir(fake_sysfs);
> > > + if (!dir)
> > > +         FAIL("opendir %s: %s", fake_sysfs, strerror(errno));
> > > +
> > > + while ((ent = readdir(dir))) {
> > > +         if (strncmp(ent->d_name, "hugepages-", 10))
> > > +                 continue;
> > > +         snprintf(fname, PATH_MAX, "%s/%s", fake_sysfs,
> > > +                 ent->d_name);
> > > +         if (rmdir(fname))
> > > +                 FAIL("rmdir %s: %s", fake_sysfs, strerror(errno));
> > > + }
> > > + closedir(dir);
> > > + if (rmdir(fake_sysfs))
> > > +         FAIL("rmdir %s: %s", fake_sysfs, strerror(errno));
> > > +}
> > > +
> > > +void setup_fake_sysfs(long sizes[], int n_elem)
> > > +{
> > > + int i;
> > > + char fname[PATH_MAX+1];
> > > +
> > > + if (cleanup_dir)
> > > +         cleanup_fake_sysfs();
> > > +
> > > + if (!mkdtemp(fake_sysfs))
> > > +         FAIL("mkdtemp: %s", strerror(errno));
> > > + cleanup_dir = 1;
> > > +
> > > + for (i = 0; i < n_elem; i++) {
> > > +         snprintf(fname, PATH_MAX, "%s/hugepages-%lukB", fake_sysfs,
> > > +                         sizes[i] / 1024);
> > > +         if (mkdir(fname, 0700))
> > > +                 FAIL("mkdir %s: %s", fname, strerror(errno)); 
> > > + }
> > > +}
> > > +
> > > +void cleanup(void)
> > > +{
> > > + if (cleanup_dir)
> > > +         cleanup_fake_sysfs();
> > > +}
> > > +
> > > +#define INIT_SIZES(a, v1, v2, v3) {a[0] = v1; a[1] = v2; a[2] = v3; a[3] = -1;}
> > > +
> > > +void expect_sizes(int line, int expected, int actual,
> > > +                 long expected_sizes[], long actual_sizes[])
> > > +{
> > > + int i, j;
> > > + if (expected != actual)
> > > +         FAIL("Line %i: Wrong number of sizes returned -- expected %i "
> > > +                 "got %i", line, expected, actual);
> > > +
> > > + for (i = 0; i < expected; i++) {
> > > +         for (j = 0; j < actual; j++)
> > > +                 if (expected_sizes[i] == actual_sizes[j])
> > > +                         break;
> > > +         if (j >= actual)
> > > +                 FAIL("Line %i: Expected size %li not found in actual "
> > > +                         "results", line, expected_sizes[i]);
> > > + }
> > > +}        
> > > +
> > > +int main(int argc, char *argv[])
> > > +{
> > > + long expected_sizes[4], actual_sizes[4], meminfo_size;
> > > + int nr_sizes;
> > > +
> > > + test_init(argc, argv);
> > > +
> > > + /*
> > > +  * ===
> > > +  * Argument error checking tests
> > > +  * ===
> > > +  */
> > > + meminfo_state = OVERRIDE_OFF;
> > > + sysfs_state = OVERRIDE_OFF;
> > > + if (gethugepagesizes(actual_sizes, -1) != -1 || errno != EINVAL)
> > > +         FAIL("Mishandled params (n_elem < 0)");
> > > + if (gethugepagesizes(NULL, 1) != -1 || errno != EINVAL)
> > > +         FAIL("Mishandled params (pagesizes == NULL, n_elem > 0)");
> > > +
> > > + /*
> > > +  * ===
> > > +  * Test some corner cases using a fake system configuration
> > > +  * ===
> > > +  */
> > > +
> > > + /*
> > > +  * Check handling when /proc/meminfo indicates no huge page support
> > > +  */
> > > + meminfo_state = OVERRIDE_MISSING;
> > > + if (gethugepagesizes(actual_sizes, 1) != 0 || errno != 0)
> > > +         FAIL("Incorrect handling when huge page support is missing");
> > > +
> > > + /*
> > > +  * When the sysfs hierarchy is not present ...
> > > +  */
> > > + sysfs_state = OVERRIDE_MISSING;
> > > +
> > > + /* ... only the meminfo size is returned. */
> > > + meminfo_state = OVERRIDE_ON;
> > > + meminfo_size = read_meminfo("Hugepagesize:") * 1024;
> > > + INIT_SIZES(expected_sizes, meminfo_size, -1, -1);
> > > +
> > > + /* Use 2 to give the function the chance to return too many sizes */
> > > + nr_sizes = gethugepagesizes(actual_sizes, 2);
> > > + expect_sizes(__LINE__, 1, nr_sizes, expected_sizes, actual_sizes);
> > > + 
> > > + /*
> > > +  * When sysfs defines additional sizes ...
> > > +  */
> > > + INIT_SIZES(expected_sizes, meminfo_size, 1024, 2048);
> > > + setup_fake_sysfs(expected_sizes, 3);
> > > + sysfs_state = OVERRIDE_ON;
> > > +
> > > + /* ... make sure all sizes are returned without duplicates */
> > > + nr_sizes = gethugepagesizes(actual_sizes, 4);
> > > + expect_sizes(__LINE__, 3, nr_sizes, expected_sizes, actual_sizes);
> > > +
> > > + /* ... we can check how many sizes are supported. */
> > > + if (gethugepagesizes(NULL, 0) != 3)
> > > +         FAIL("Unable to check the number of supported sizes");
> > > +
> > > + PASS();
> > > +}
> > > diff --git a/tests/run_tests.sh b/tests/run_tests.sh
> > > index d616731..4fe6eed 100755
> > > --- a/tests/run_tests.sh
> > > +++ b/tests/run_tests.sh
> > > @@ -249,6 +249,7 @@ functional_tests () {
> > >  
> > >  # Library tests requiring kernel hugepage support
> > >      run_test gethugepagesize
> > > +    run_test gethugepagesizes
> > >      run_test HUGETLB_VERBOSE=1 empty_mounts
> > >      run_test HUGETLB_VERBOSE=1 large_mounts
> > >  
> > 
> > Other than errno possibly being something stupid due to opendir()
> > failing
> 
> We assume errno is only valid when we return -1.  As such, as long as
> opendir() sets it to something we are ok.  Not sure we really care what
> the result is, other than us wanting to say EINVAL.
> 
> > Acked-by: Mel Gorman <[EMAIL PROTECTED]>
> 
> -apw
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
