On Mon, 2008-10-06 at 15:14 +0100, Mel Gorman wrote:
> On (03/10/08 18:37), Andy Whitcroft didst pronounce:
> > From: Adam Litke <[EMAIL PROTECTED]>
> >
> > Now that libhugetlbfs supports multiple huge page sizes, it would be
> > nice to have a way to discover the huge page sizes that are supported
> > by the system.  This new call should adhere to the semantics
> > established by precedent implementations (see
> > http://docs.sun.com/app/docs/doc/816-5168/getpagesizes-3c?a=view).
> > The following patch implements a gethugepagesizes() call with these
> > prevailing semantics, as described below:
>
> I still believe it should adhere to the semantics and signature of that
> function for portability between Solaris and Linux reasons.  Just mandate
> that the first element returned is the base page size and skip over it.
>
> Having a similar, yet sufficiently different, version of a function is
> just awkward.
>
> > NAME
> > 	gethugepagesizes() - Get the system supported huge page sizes
> >
> > SYNOPSIS
> > 	#include <hugetlbfs.h>
> >
> > 	int gethugepagesizes(long pagesizes[], int n_elem)
> >
> > DESCRIPTION
> > 	The gethugepagesizes() function returns either the number of
> > 	system supported huge page sizes or the sizes themselves.  If
> > 	pagesizes is NULL and n_elem is 0, then the number of huge page
> > 	sizes the system supports is returned.  Otherwise, pagesizes is
> > 	filled with at most n_elem page sizes.
> >
> > RETURN VALUE
> > 	On success, either the number of huge page sizes supported by
> > 	the system or the number of huge page sizes stored in pagesizes
> > 	is returned.  On failure, -1 is returned and errno is set
> > 	appropriately.
> >
> > ERRORS
> > 	EINVAL	n_elem is less than zero, or n_elem is greater than zero
> > 		and pagesizes is NULL.
> >
> > 	Also see opendir(3) for other possible values for errno.
> >
> > Open discussion points:
> >
> > This call will return all huge page sizes as reported by the kernel.
> > Not all of these sizes may be usable by the programmer, since mount
> > points may not be available for all sizes.  To test whether a size
> > will be usable by libhugetlbfs, hugetlbfs_find_path_for_size() can be
> > called on a specific size to see if a mount point is configured.  I
> > think this requirement is sensible.  The only problem I see is that
> > hugetlbfs_find_path_for_size() is not an obvious function name for
> > this usage.
> >
> > To achieve maximum compatibility with other operating systems, it has
> > been suggested that we should really be implementing getpagesizes(),
> > not gethugepagesizes().  That is to say, our function should return
> > the base page size followed by all supported huge page sizes.  This
> > may be a good idea, but returning the base page size and huge page
> > sizes in one collection would suggest to the caller that each size
> > can be used in a similar fashion.  Unfortunately, the usage models
> > for base pages and huge pages are very divergent.  I think some more
> > discussion on this idea is warranted.
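To make the calling convention described above concrete, here is a minimal
sketch of the usual two-step query (ask for the count, then fill an array).
It assumes nothing beyond the prototype added by this patch and the sizes
being returned in bytes; treat it as illustrative, not as part of the patch.

#include <stdio.h>
#include <stdlib.h>
#include <hugetlbfs.h>

int main(void)
{
	long *sizes;
	int i, nr;

	/* First call: NULL/0 asks how many huge page sizes the system supports */
	nr = gethugepagesizes(NULL, 0);
	if (nr < 0) {
		perror("gethugepagesizes");
		return 1;
	}
	if (nr == 0) {
		printf("no huge page support on this system\n");
		return 0;
	}

	sizes = malloc(nr * sizeof(long));
	if (!sizes)
		return 1;

	/* Second call: fill at most nr entries with sizes in bytes */
	nr = gethugepagesizes(sizes, nr);
	for (i = 0; i < nr; i++)
		printf("huge page size: %ld bytes\n", sizes[i]);

	free(sizes);
	return 0;
}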
> >
> > Signed-off-by: Adam Litke <[EMAIL PROTECTED]>
> > ---
> >  hugetlbfs.h              |    1 +
> >  hugeutils.c              |   68 ++++++++++++++-
> >  tests/Makefile           |    3 +-
> >  tests/gethugepagesizes.c |  226 ++++++++++++++++++++++++++++++++++++++++++++++
> >  tests/run_tests.sh       |    1 +
> >  5 files changed, 297 insertions(+), 2 deletions(-)
> >  create mode 100644 tests/gethugepagesizes.c
> >
> > diff --git a/hugetlbfs.h b/hugetlbfs.h
> > index 2103515..54fa13b 100644
> > --- a/hugetlbfs.h
> > +++ b/hugetlbfs.h
> > @@ -22,6 +22,7 @@
> >  #define HUGETLBFS_MAGIC 0x958458f6
> >
> >  long gethugepagesize(void);
> > +int gethugepagesizes(long pagesizes[], int n_elem);
> >  int hugetlbfs_test_path(const char *mount);
> >  long hugetlbfs_test_pagesize(const char *mount);
> >  const char *hugetlbfs_find_path(void);
> > diff --git a/hugeutils.c b/hugeutils.c
> > index f47b897..3c30b41 100644
> > --- a/hugeutils.c
> > +++ b/hugeutils.c
> > @@ -30,6 +30,7 @@
> >  #include <string.h>
> >  #include <ctype.h>
> >  #include <signal.h>
> > +#include <dirent.h>
> >
> >  #include <unistd.h>
> >  #include <fcntl.h>
> > @@ -41,7 +42,6 @@
> >  #include <sys/uio.h>
> >  #include <sys/syscall.h>
> >  #include <linux/types.h>
> > -#include <linux/dirent.h>
> >  #include <linux/unistd.h>
> >
> >  #include "libhugetlbfs_internal.h"
> > @@ -60,6 +60,7 @@ static int hpage_sizes_default_idx = -1;
> >
> >  #define BUF_SZ 256
> >  #define MEMINFO_SIZE 2048
> > +#define SYSFS_HUGEPAGES_DIR "/sys/kernel/mm/hugepages/"
> >
> >  /*
> >   * Convert a quantity in a given unit to the next smallest unit by
> > @@ -470,6 +471,71 @@ long gethugepagesize(void)
> >  	return hpage_size;
> >  }
> >
> > +int gethugepagesizes(long pagesizes[], int n_elem)
> > +{
> > +	long default_size;
> > +	DIR *sysfs;
> > +	struct dirent *ent;
> > +	int nr_sizes = 0;
> > +
> > +	if (n_elem < 0) {
> > +		errno = EINVAL;
> > +		return -1;
> > +	}
> > +
> > +	if (n_elem > 0 && pagesizes == NULL) {
> > +		errno = EINVAL;
> > +		return -1;
> > +	}
> > +
> > +	errno = 0;
> > +
> > +	/* Get the system default size from /proc/meminfo */
> > +	default_size = read_meminfo("Hugepagesize:") * 1024;
> > +	if (default_size < 0)
> > +		return 0;
> > +
>
> Should it not be returning -1 for failure?
This isn't an error.  If the system has no huge page support, there are
zero huge page sizes available and thus we return 0.

> > +	if (n_elem && pagesizes)
> > +		pagesizes[nr_sizes] = default_size;
> > +	nr_sizes++;
> > +	if (n_elem && (nr_sizes == n_elem))
> > +		return nr_sizes;
>
> It seems somewhat redundant to make the check here when you do it below
> in the while loop anyway.

I am trying to avoid the opendir() call and the potential errors it could
trigger.

> > +
> > +	/*
> > +	 * Scan sysfs to look for other sizes.
> > +	 * Non-existing dir is not an error, we got one size from /proc/meminfo.
> > +	 */
> > +	sysfs = opendir(SYSFS_HUGEPAGES_DIR);
> > +	if (!sysfs) {
> > +		if (errno == ENOENT) {
> > +			errno = 0;
> > +			return nr_sizes;
> > +		} else
> > +			return -1;
> > +	}
> > +	while ((ent = readdir(sysfs)) &&
> > +			((n_elem == 0) || (nr_sizes < n_elem))) {
> > +		long size;
> > +
> > +		if (strncmp(ent->d_name, "hugepages-", 10))
> > +			continue;
> > +
> > +		size = strtol(ent->d_name + 10, NULL, 10);
> > +		if (size == LONG_MIN || size == LONG_MAX)
> > +			continue;
> > +		size *= 1024;	/* Convert from KB to Bytes */
> > +
> > +		if (size == default_size)
> > +			continue;
> > +		if (n_elem && pagesizes)
> > +			pagesizes[nr_sizes] = size;
> > +		nr_sizes++;
> > +	}
> > +	closedir(sysfs);
> > +
> > +	return nr_sizes;
> > +}
> > +
> >  int hugetlbfs_test_path(const char *mount)
> >  {
> >  	struct statfs64 sb;

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center
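P.S. On the mount-point question raised in the patch description: a caller
that only cares about sizes it can actually use through libhugetlbfs could
filter the result with hugetlbfs_find_path_for_size().  The sketch below
assumes that function takes the page size in bytes and returns a mount path
or NULL when none is configured; that prototype is not shown in this patch,
so adjust as needed.

#include <stdio.h>
#include <hugetlbfs.h>

int main(void)
{
	long sizes[16];	/* assume at most 16 distinct huge page sizes */
	int i, nr;

	nr = gethugepagesizes(sizes, 16);
	if (nr < 0) {
		perror("gethugepagesizes");
		return 1;
	}

	for (i = 0; i < nr; i++) {
		/* Assumed prototype: const char *hugetlbfs_find_path_for_size(long) */
		const char *path = hugetlbfs_find_path_for_size(sizes[i]);

		if (path)
			printf("%ld bytes: usable via %s\n", sizes[i], path);
		else
			printf("%ld bytes: reported by kernel, but no mount configured\n",
			       sizes[i]);
	}
	return 0;
}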