On 10.04.2008 [16:44:14 -0700], Nishanth Aravamudan wrote:
> On 10.04.2008 [16:39:20 -0700], Nishanth Aravamudan wrote:
> > On 10.04.2008 [15:55:08 -0700], Nishanth Aravamudan wrote:
> > > On 10.04.2008 [16:50:22 -0500], Jon Tollefson wrote:
> > > > This patch moves the check of the return value from gethugepagesize() 
> > > > to a common function, check_hugepagesize(),
> > > > rather than repeating it in each test.  gethugepagesize() is updated to 
> > > > set errno depending on the reason for the failure.
> > > > 
> > > > Only tests that were already calling gethugepagesize() are calling 
> > > > check_hugepagesize().  Attached also is the output of the test suite 
> > > > run with 16G pages on POWER before this change and after.
> > > > 
> > > > Changes since v4
> > > >         -save errno so that the same value can be returned on every 
> > > > call to gethugepagesize()
> > > >         -removed check_hugepagesize() call in linkhuge and linkshare 
> > > > since the return will be checked in morecore and elflink in part 2 of 
> > > > this patch
> > > >         -removed the call_gethugepagesize() function since linkhuge and 
> > > > linkshare no longer need it
> > > > 
> > > > Changes since v3
> > > >         -incorporated Mel Gorman's function that uses dlopen so that 
> > > > tests don't have to be linked against libhugetlbfs
> > > >         -used suggestions from Nishanth Aravamudan to use a static 
> > > > inline function and rename to check_hugepagesize()
> > > >         -added check_hugepagesize() call to linkhuge and linkshare tests
> > > > 
> > > > 
> > > > Signed-off-by: Jon Tollefson <[EMAIL PROTECTED]>
> > > <snip>
> > > > xB.linkhuge (32):       libhugetlbfs: ERROR: Hugepage size too large
> > > > Failed to map hugepage segment 0: 20000000-20000000 (errno=22)
> > > > ./run_tests.sh: line 34: 17526 Aborted                 
> > > > PATH="obj$BITS:$PATH" LD_LIBRARY_PATH="$LD_LIBRARY_PATH:../obj$BITS" 
> > > > $ENV "$@"
> > > 
> > > Hrm, the message from libhugetlbfs is good, but why are we still
> > > continuing with the mmap()? We should just fail out from there...
> > 
> > Oh, I see. Your second patch didn't fix up the error handling in the
> > other call to gethugepagesize() in elflink.c. I'll fix it locally.
> 
> Here is the version I applied. Look ok?

Had to make some modifications to this version, actually:

> commit a4771d1e3811322e225197c9067cc986c242be04
> Author: Jon Tollefson <[EMAIL PROTECTED]>
> Date:   Thu Apr 10 17:04:18 2008 -0500
> 
>     check gethugepagesize() return
>     
>     Check for a failure return from gethugepagesize().
>     
>     [Nishanth Aravamudan <[EMAIL PROTECTED]>: Consolidate error handling in
>     elflink.c to the constructor function, to abort early.]
>     
>     Signed-off-by: Jon Tollefson <[EMAIL PROTECTED]>
>     Signed-off-by: Nishanth Aravamudan <[EMAIL PROTECTED]>
> 
> diff --git a/elflink.c b/elflink.c
> index ce70eb2..321de51 100644
> --- a/elflink.c
> +++ b/elflink.c
> @@ -163,6 +163,7 @@ static int htlb_num_segs;
>  static int minimal_copy = 1;
>  static int sharing; /* =0 */
>  static unsigned long force_remap; /* =0 */
> +static long hpage_size;
>  
>  /**
>   * assemble_path - handy wrapper around snprintf() for building paths
> @@ -542,7 +543,7 @@ static unsigned long hugetlb_slice_start(unsigned long addr)
>  #elif defined(__powerpc__)
>       return ALIGN_DOWN(addr, SLICE_LOW_SIZE);
>  #else
> -     return ALIGN_DOWN(addr, gethugepagesize());
> +     return ALIGN_DOWN(addr, hpage_size);
>  #endif
>  }
>  
> @@ -556,7 +557,7 @@ static unsigned long hugetlb_slice_end(unsigned long addr)
>  #elif defined(__powerpc__)
>       return ALIGN_UP(addr, SLICE_LOW_SIZE) - 1;
>  #else
> -     return ALIGN_UP(addr, gethugepagesize()) - 1;
> +     return ALIGN_UP(addr, hpage_size) - 1;
>  #endif
>  }

These need to stay gethugepagesize() calls, as they are used in the
force_remap case, before hpage_size has been cached.

>  
> @@ -713,7 +714,6 @@ int parse_elf_partial(struct dl_phdr_info *info, size_t size, void *data)
>   */
>  static int prepare_segment(struct seg_info *seg)
>  {
> -     long hpage_size = gethugepagesize();
>       void *p;
>       unsigned long size;
>  
> @@ -921,7 +921,6 @@ static int obtain_prepared_file(struct seg_info *htlb_seg_info)
>  
>  static void remap_segments(struct seg_info *seg, int num)
>  {
> -     long hpage_size = gethugepagesize();
>       int i;
>       void *p;
>  
> @@ -1066,6 +1065,21 @@ void __hugetlbfs_setup_elflink(void)
>       if (check_env())
>               return;
>  
> +     /*
> +      * Do this calculation here, because we need the hugepagesize to
> +      * align the segments if FORCE_ELFMAP is set
> +      */
> +     hpage_size = gethugepagesize();
> +     if (hpage_size <= 0) {
> +             if (errno == ENOSYS)
> +                     ERROR("Hugepages unavailable\n");
> +             else if (errno == EOVERFLOW)
> +                     ERROR("Hugepage size too large\n");
> +             else
> +                     ERROR("Hugepage size (%s)\n", strerror(errno));
> +             return;
> +     }
> +
>       if (parse_elf())
>               return;

And this check should happen before the hpage_size check. Otherwise, any
32-bit user of any libhugetlbfs function with 16G pages will see an
ERROR about the hugepage size, even if no remapping is done (because
ELFMAP is presumed to be on if we are in the elflink constructor).

I think this is safe because either:

a) gethugepagesize()'s return is valid, so the values computed in
parse_elf_partial() will be OK, or

b) gethugepagesize()'s return is -1, so the values computed in
parse_elf_partial() will be ignored, because we'll bail out later in the
constructor.
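
To make that concrete, here is a compilable toy sketch (heavily
simplified, with stub definitions; it is not the real elflink.c, and
ALIGN_DOWN here is only an approximation) of the final ordering and of
why the slice helpers keep calling gethugepagesize() directly:

#include <stdio.h>

/* Simplified stand-in for the libhugetlbfs ALIGN_DOWN() macro. */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

static long hpage_size;	/* cached only after parse_elf() has run */

/* Stub: pretend the kernel reports a 16M hugepage size. */
static long gethugepagesize(void)
{
	return 16L * 1024 * 1024;
}

/*
 * Mirrors hugetlb_slice_start(): it cannot rely on hpage_size, which is
 * still 0 when parse_elf_partial() reaches it in the force_remap case.
 */
static unsigned long hugetlb_slice_start(unsigned long addr)
{
	return ALIGN_DOWN(addr, (unsigned long)gethugepagesize());
}

/* Stub standing in for parse_elf()/parse_elf_partial(). */
static int parse_elf(void)
{
	printf("slice start: 0x%lx\n", hugetlb_slice_start(0x10123456UL));
	return 0;
}

/* Stands in for __hugetlbfs_setup_elflink(). */
int main(void)
{
	if (parse_elf())
		return 1;

	hpage_size = gethugepagesize();
	if (hpage_size <= 0)
		return 1;	/* bail out before any remapping happens */

	printf("cached hpage_size: %ld\n", hpage_size);
	return 0;
}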

Patch is now:


commit b22579d338d7d351b8d3f829adc92f7081a4f100
Author: Jon Tollefson <[EMAIL PROTECTED]>
Date:   Thu Apr 10 17:04:18 2008 -0500

    check gethugepagesize() return
    
    Check for a failure return from gethugepagesize().
    
    [Nishanth Aravamudan <[EMAIL PROTECTED]>: Consolidate error handling in
    elflink.c to the constructor function, to abort early.]
    
    Signed-off-by: Jon Tollefson <[EMAIL PROTECTED]>
    Signed-off-by: Nishanth Aravamudan <[EMAIL PROTECTED]>

diff --git a/elflink.c b/elflink.c
index ce70eb2..24a1a50 100644
--- a/elflink.c
+++ b/elflink.c
@@ -163,6 +163,7 @@ static int htlb_num_segs;
 static int minimal_copy = 1;
 static int sharing; /* =0 */
 static unsigned long force_remap; /* =0 */
+static long hpage_size;
 
 /**
  * assemble_path - handy wrapper around snprintf() for building paths
@@ -713,7 +714,6 @@ int parse_elf_partial(struct dl_phdr_info *info, size_t size, void *data)
  */
 static int prepare_segment(struct seg_info *seg)
 {
-       long hpage_size = gethugepagesize();
        void *p;
        unsigned long size;
 
@@ -921,7 +921,6 @@ static int obtain_prepared_file(struct seg_info *htlb_seg_info)
 
 static void remap_segments(struct seg_info *seg, int num)
 {
-       long hpage_size = gethugepagesize();
        int i;
        void *p;
 
@@ -1069,6 +1068,17 @@ void __hugetlbfs_setup_elflink(void)
        if (parse_elf())
                return;
 
+       hpage_size = gethugepagesize();
+       if (hpage_size <= 0) {
+               if (errno == ENOSYS)
+                       ERROR("Hugepages unavailable\n");
+               else if (errno == EOVERFLOW)
+                       ERROR("Hugepage size too large\n");
+               else
+                       ERROR("Hugepage size (%s)\n", strerror(errno));
+               return;
+       }
+
        DEBUG("libhugetlbfs version: %s\n", VERSION);
 
        /* Do we need to find a share directory */
diff --git a/morecore.c b/morecore.c
index acf8a20..80c9898 100644
--- a/morecore.c
+++ b/morecore.c
@@ -231,7 +231,12 @@ void __hugetlbfs_setup_morecore(void)
 
        blocksize = gethugepagesize();
        if (blocksize <= 0) {
-               ERROR("Hugepages unavailable\n");
+               if (errno == ENOSYS)
+                       ERROR("Hugepages unavailable\n");
+               else if (errno == EOVERFLOW)
+                       ERROR("Hugepage size too large\n");
+               else
+                       ERROR("Hugepage size (%s)\n", strerror(errno));
                return;
        }
 

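For reference, here is a standalone sketch of how a caller could consume
the new errno-based reporting from gethugepagesize(), looked up via
dlopen() in the spirit of the changelog's note about not linking the
tests against libhugetlbfs. The library name, message strings, and
overall structure are my assumptions for illustration, not the actual
hugetests.h check_hugepagesize() code; build with something like
"gcc -Wall -o getsize getsize.c -ldl":

/* Illustrative only -- not code from the patch or from hugetests.h. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <dlfcn.h>

int main(void)
{
	long (*p_gethugepagesize)(void);
	void *handle;
	long hpage_size;

	/* Library name/path is an assumption; the test harness sets this up. */
	handle = dlopen("libhugetlbfs.so", RTLD_LAZY);
	if (!handle) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return EXIT_FAILURE;
	}

	p_gethugepagesize = (long (*)(void))dlsym(handle, "gethugepagesize");
	if (!p_gethugepagesize) {
		fprintf(stderr, "dlsym: %s\n", dlerror());
		dlclose(handle);
		return EXIT_FAILURE;
	}

	errno = 0;
	hpage_size = p_gethugepagesize();
	if (hpage_size <= 0) {
		/* Mirror the errno cases the patch distinguishes. */
		if (errno == ENOSYS)
			fprintf(stderr, "Hugepages unavailable\n");
		else if (errno == EOVERFLOW)
			fprintf(stderr, "Hugepage size too large\n");
		else
			fprintf(stderr, "Hugepage size (%s)\n", strerror(errno));
		dlclose(handle);
		return EXIT_FAILURE;
	}

	printf("hugepage size: %ld bytes\n", hpage_size);
	dlclose(handle);
	return EXIT_SUCCESS;
}
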
-- 
Nishanth Aravamudan <[EMAIL PROTECTED]>
IBM Linux Technology Center
