On December 27, 2011 at 10:59, Easley wrote:
>
> Now, I'm puzzled.
>
> I changed '#define ELEMENTS 1024*1024*64' to '#define ELEMENTS 1024*1024*20',
> but it looks like it doesn't work, and the program only runs for a short time.
>
> [root@ ~]# echo 1000 > /proc/sys/vm/nr_hugepages
> HugePages_Total:   433
> HugePages_Free:    433
> HugePages_Rsvd:      0
> Hugepagesize:     2048 kB
>
> I didn't mount hugetlbfs anywhere, in order to get some debug 
> messages.

Mounting hugetlbfs is a MUST.
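For reference, a minimal sequence would look something like this (mount point /hugetlb as in your earlier mail; run as root; the exact paths here are my assumptions, not taken from your boxes):

```shell
mkdir -p /hugetlb
mount -t hugetlbfs hugetlbfs /hugetlb       # give libhugetlbfs a file backing
grep Huge /proc/meminfo                     # confirm the pool is populated
HUGETLB_MORECORE=yes HUGETLB_VERBOSE=99 LD_PRELOAD=libhugetlbfs.so ./a.out
```

Also, as far as I know the variable libhugetlbfs recognizes is HUGETLB_MORECORE; your A/B machine transcripts below set HUGETLBFS_MORECORE, which would simply be ignored.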

> And thanks
> for Eric's help.
>
> I don't know how to check the libhugetlbfs version. I ran 'locate hugetlb', 
> and it output:
>
> A machine (rhel5.4)
> --------------------
> ......
> /usr/lib/libhugetlbfs.so
> /usr/lib64/libhugetlbfs.so
> /usr/share/doc/libhugetlbfs-1.3
> /usr/share/doc/libhugetlbfs-1.3/HOWTO
> /usr/share/doc/libhugetlbfs-1.3/LGPL-2.1
> /usr/share/doc/libhugetlbfs-1.3/NEWS
> /usr/share/doc/libhugetlbfs-1.3/README
> ......
>
> B machine (rhel6.0)
> --------------------
> ......
> /usr/lib64/libhugetlbfs.so
> /usr/share/libhugetlbfs
> /usr/share/doc/libhugetlbfs-2.8
> /usr/share/doc/libhugetlbfs-2.8/HOWTO
> /usr/share/doc/libhugetlbfs-2.8/LGPL-2.1
> /usr/share/doc/libhugetlbfs-2.8/NEWS
> /usr/share/doc/libhugetlbfs-2.8/README
> ......
>
> A machine (rhel5.4)
> -------------------
> [root@A ~]# HUGETLB_DEBUG=yes HUGETLB_VERBOSE=99 
> HUGETLBFS_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./a.out
> libhugetlbfs [first.dev.com:15824]: HUGETLB_SHARE=0, sharing disabled
> libhugetlbfs [first.dev.com:15824]: Couldn't locate 
> __executable_start, not attempting to remap segments
>
>
> B machine (rhel6.0)
> --------------------
> [root@B ~]#  HUGETLB_DEBUG=yes HUGETLB_VERBOSE=99 
> HUGETLBFS_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./a.out
> libhugetlbfs [nehalem.rhel6:4660]: INFO: Detected page sizes:
> libhugetlbfs [nehalem.rhel6:4660]: INFO:    Size: 2048 kB (default)  
> Mount: /hugetlb
> libhugetlbfs [nehalem.rhel6:4660]: INFO: Parsed kernel version: [2] . 
> [6] . [32]
> libhugetlbfs [nehalem.rhel6:4660]: INFO: Feature private_reservations 
> is present in this kernel
> libhugetlbfs [nehalem.rhel6:4660]: INFO: Kernel has MAP_PRIVATE 
> reservations.  Disabling heap prefaulting.
> libhugetlbfs [nehalem.rhel6:4660]: INFO: HUGETLB_SHARE=0, sharing disabled
> libhugetlbfs [nehalem.rhel6:4660]: INFO: No segments were appropriate 
> for remapping
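One thing worth noting: "No segments were appropriate for remapping" concerns only the executable's text/data segments; the heap can still come from huge pages via HUGETLB_MORECORE. A rough way to check is to compare the pool counters while a.out is still running (this is a sketch, not something I have run on your boxes):

```shell
grep -E 'HugePages_(Free|Rsvd)' /proc/meminfo    # counts before
HUGETLB_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./a.out &
sleep 1
grep -E 'HugePages_(Free|Rsvd)' /proc/meminfo    # Free should drop / Rsvd rise
wait
```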
>
> It looks like hugetlb doesn't work, although it did work before.
> Any help will be appreciated.
>
> --Easley
> ------------------ Original ------------------
> *From: * "bill4carson"<bill4car...@gmail.com>;
> *Date: * Mon, Dec 26, 2011 11:12 AM
> *To: * "Easley"<ugi...@gmail.com>;
> *Cc: * "libhugetlbfs-devel"<libhugetlbfs-devel@lists.sourceforge.net>;
> *Subject: * Re: [Libhugetlbfs-devel] The Hugetlb looks like doesn't 
> take effect.
>
>
> On December 23, 2011 at 20:29, Easley wrote:
> >
> > I changed #define ELEMENTS 1024*1024*64 to #define ELEMENTS 1024*1024*20,
> > and it took effect. Thank you.
> > -------
> > And there is another question, on another machine.
> > I took the following steps:
> > --------------
> >  [root@ ~]# echo 1000 > /proc/sys/vm/nr_hugepages
> What's the output of 'cat /proc/meminfo | grep Huge'?
>
>
> >  [root@ ~]# mkdir /hugetlb
> >  [root@ ~]# HUGETLB_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./a.out
> > ---------------
> > There should be some error messages output,
>   Why is that? And what's your point?
>
>
> >  but it outputs nothing.
> > (I did this to check whether huge pages take effect.)
> > I think there's something wrong with the hugetlbfs settings, but I don't know what.
> > My machine has 24 GB of total memory, and 434 huge pages.
> > Any help will be appreciated.
> >
> > --Easley
> >
> > ------------------ Original ------------------
> > *From: * "bill4carson"<bill4car...@gmail.com>;
> > *Date: * Fri, Dec 23, 2011 11:12 AM
> > *To: * "Easley"<ugi...@gmail.com>;
> > *Cc: * "libhugetlbfs-devel"<libhugetlbfs-devel@lists.sourceforge.net>;
> > *Subject: * Re: [Libhugetlbfs-devel] The Hugetlb looks like doesn't
> > take effect.
> >
> >
> > On December 23, 2011 at 10:37, Easley wrote:
> > >
> > > Hi,
> > >
> > >  I run my program as follows:
> > > ---------------------------------
> > > [root@ ~]# echo 200 > /proc/sys/vm/nr_hugepages
> > > [root@ ~]# mkdir /hugetlb
> > > [root@ ~]# mount -t hugetlbfs hugetlbfs /hugetlb
> > > [root@ ~]# HUGETLB_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./a.out
> > > [root@ ~]# grep Huge /proc/meminfo
> > > HugePages_Total:   200
> > > HugePages_Free:    200
> > > HugePages_Rsvd:      0
> > > Hugepagesize:     2048 kB
> > > ------------------------------------------------
> >
> > So you have 400 MB of memory backed by huge pages.
> >
> >
> > > And some machines output the following message:
> > > -----------
> > > libhugetlbfs: WARNING: New heap segment map at 0x40a00000 failed:
> > > Cannot allocate memory
> > > libhugetlbfs: WARNING: New heap segment map at 0x40a00000 failed:
> > > Cannot allocate memory
> > > ------------
> > > What can I do about that?
> > > Any help will be appreciated.
> > >
> > > --Easley
> > >
> > >
> > >
> > > My program code (the code is not mine):
> > > ----------------------
> > > #include <stdio.h>
> > > #include <stdlib.h>
> > >
> > > #define ELEMENTS 1024*1024*64
> > > static double  bss_array_from[ELEMENTS];
> > > static double  bss_array_to[ELEMENTS];
> > > static double *malloc_array_from;
> > > static double *malloc_array_to;
> > >
> > > int main(void) {
> > >     int i;
> > >     malloc_array_from = (double *)malloc(ELEMENTS*sizeof(double));
> > >     malloc_array_to   = (double *)malloc(ELEMENTS*sizeof(double));
> > >     if (malloc_array_from == NULL || malloc_array_to == NULL)
> > >         return 1;
> > >
> > I think you are allocating much more than the huge-page pool you set up.
> > Try a smaller size first.
> > Good luck :)
> >
> > >     /* initialize and touch all of the pages */
> > >     for (i = 0; i < ELEMENTS; i++) {
> > >        bss_array_to[i]      = 1.0;
> > >        bss_array_from[i]    = 2.0;
> > >        malloc_array_to[i]   = 3.0;
> > >        malloc_array_from[i] = 4.0;
> > >     }
> > >
> > >     /* copy "from" "to" */
> > >     for (i = 0; i < ELEMENTS; i++) {
> > >        bss_array_to[i]    = bss_array_from[i];
> > >        malloc_array_to[i] = malloc_array_from[i];
> > >     }
> > >     return 0;
> > > }
> > > ---------------------------------
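To make the size advice above concrete, here is the arithmetic for this program (four arrays of ELEMENTS doubles, with the 2048 kB page size shown in /proc/meminfo):

```shell
elements=$((1024*1024*64))                 # ELEMENTS from the program above
total_bytes=$((elements * 8 * 4))          # 8-byte doubles, 4 arrays
echo "total bytes: $total_bytes"           # prints 2147483648 (2 GiB)
echo "2MB pages needed: $((total_bytes / (2*1024*1024)))"   # prints 1024
```

Only the two malloc'd arrays go through MORECORE, but those alone are 512 MB each, i.e. 256 huge pages per array, so a 200-page (400 MB) pool cannot back even the first one. That matches the "New heap segment map ... Cannot allocate memory" warnings.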
> > >
> > >
> > >
> > >
> > >
> > >
> >
>

-- 
I am a slow learner
but I will keep trying to fight for my dreams!

--bill


_______________________________________________
Libhugetlbfs-devel mailing list
Libhugetlbfs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libhugetlbfs-devel
