On Mon, Feb 8, 2010 at 12:55 AM, shenghui <[email protected]> wrote:
>
>
> 2010/2/8 Rishikesh <[email protected]>
>>
>> On 02/08/2010 12:32 PM, shenghui wrote:
>>
>> 2010/2/8 Rishikesh <[email protected]>
>>>
>>> On 02/06/2010 09:04 PM, shenghui wrote:
>>>
>>> Hi all,
>>>
>>>        How do I run hugeshmget03? Does it need any arguments?
>>> I have run it several times and always got the error "ENOMEM" because it
>>> cannot get a hugetlb page, but "cat /proc/meminfo" shows that enough free
>>> hugetlb pages are available for its request.
>>>        Does anyone know how to handle this problem?
>>>
>>> Hi Shenghui,
>>>
>>> Have you run this testcase after setting nr_hugepages? You need to
>>> set the number of pages first and then run the testcase.
>>>
>>> e.g:
>>>
>>> echo 10 > /proc/sys/vm/nr_hugepages
>>>
>>> Please post your result if you are still seeing the issue.
>>>
>>> Thanks
>>> Rishi
>>>
>>>
>>> --
>>>
>>>
>>> Thanks and Best Regards,
>>> shenghui
>>>
>>>
>>
>> Hi Rishi,
>>
>>        Please see below for my error info. Would you please try it on
>> your workstation and send me your result?
>>
>> Hi Shenghui,
>>
>> Yes, I am also seeing this issue on my TP. Let me investigate it
>> further. Meanwhile, if you know the solution, could you please post your
>> thoughts on this problem?
>>
>> Thanks
>> Rishi
>>
>>
>>        Thanks!
>>
>>
>> ===============================================================================
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> ls
>> hugeshmget01    hugeshmget02    hugeshmget03    hugeshmget05    Makefile
>> hugeshmget01.c  hugeshmget02.c  hugeshmget03.c  hugeshmget05.c
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> echo 10 > /proc/sys/vm/nr_hugepages
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> cat /proc/meminfo
>> MemTotal:        4048440 kB
>> MemFree:         1576656 kB
>> Buffers:          122652 kB
>> Cached:          1487772 kB
>> SwapCached:            0 kB
>> Active:          1225860 kB
>> Inactive:        1043268 kB
>> Active(anon):     684228 kB
>> Inactive(anon):       16 kB
>> Active(file):     541632 kB
>> Inactive(file):  1043252 kB
>> Unevictable:           0 kB
>> Mlocked:               0 kB
>> HighTotal:       3211696 kB
>> HighFree:        1044060 kB
>> LowTotal:         836744 kB
>> LowFree:          532596 kB
>> SwapTotal:       3903752 kB
>> SwapFree:        3903752 kB
>> Dirty:               204 kB
>> Writeback:             0 kB
>> AnonPages:        658692 kB
>> Mapped:           239928 kB
>> Slab:             109460 kB
>> SReclaimable:      93676 kB
>> SUnreclaim:        15784 kB
>> PageTables:         8332 kB
>> NFS_Unstable:          0 kB
>> Bounce:                0 kB
>> WritebackTmp:          0 kB
>> CommitLimit:     5917732 kB
>> Committed_AS:    1389392 kB
>> VmallocTotal:     122880 kB
>> VmallocUsed:       44644 kB
>> VmallocChunk:      75956 kB
>> HugePages_Total:      10
>> HugePages_Free:       10
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:       2048 kB
>> DirectMap4k:        8184 kB
>> DirectMap2M:      903168 kB
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> ./hugeshmget03
>> hugeshmget03    0  TINFO  :  errno = 12 : Cannot allocate memory
>> hugeshmget03    1  TBROK  :  Didn't get ENOSPC in test setup
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> echo 32 > /proc/sys/vm/nr_hugepages
>>
>>
>> r...@crossover-laptop:/home/crossover/repository/ltp-intermediate-20100119/testcases/kernel/mem/hugetlb/hugeshmget#
>> ./hugeshmget03
>> hugeshmget03    0  TINFO  :  errno = 12 : Cannot allocate memory
>> hugeshmget03    1  TBROK  :  Didn't get ENOSPC in test setup
>
> Sure.  : )

    Some incorrect assumptions are being made, as the test is
running out of allocatable memory when calling shmget:

#define ENOMEM          12      /* Out of memory */

#define ENOSPC          28      /* No space left on device */

    From shmget(2):

       ENOMEM No memory could be allocated for segment overhead.

       ENOSPC All possible shared memory IDs have been taken (SHMMNI), or
              allocating a segment of the requested size would cause the
              system to exceed the system-wide limit on shared memory (SHMALL).
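
    To make the distinction concrete, here is a minimal sketch (my own
example, not code from hugeshmget03) of a single hugepage shmget call
and what each errno means when it fails:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000       /* from <linux/shm.h> */
#endif

int main(void)
{
        /* Request one 2048 kB hugepage-backed segment. */
        int id = shmget(IPC_PRIVATE, 2048 * 1024,
                        SHM_HUGETLB | IPC_CREAT | 0600);

        if (id == -1) {
                /* ENOMEM: no free hugepages to back the segment.
                 * ENOSPC: out of segment IDs (SHMMNI) or over SHMALL. */
                printf("shmget: errno = %d : %s\n", errno, strerror(errno));
                return 1;
        }
        shmctl(id, IPC_RMID, NULL);     /* clean up */
        return 0;
}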

    I think the root problem is this assumption being made about the
system:

/*
 * The MAXIDS value is somewhat arbitrary and may need to be increased
 * depending on the system being tested.
 */
#define MAXIDS  8192

    This means that 8192 * 2048 kB = 16GB worth of hugepages are
required in order for the testcase to succeed. That assumption clearly
fails on systems with less than 16GB of memory, and even on larger
systems it won't hold unless that much memory is actually free and
reserved as hugepages.
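
    In other words, the setup presumably does something like the loop
below (a hypothetical paraphrase, not the actual test source), and
with only 10 or 32 hugepages configured it hits ENOMEM long before it
runs out of segment IDs:

#include <errno.h>
#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000
#endif

#define MAXIDS  8192

static int shm_id_arr[MAXIDS];

/* Grab up to MAXIDS hugepage segments, expecting the loop to end in
 * ENOSPC; returns the errno that actually stopped it, or 0. */
static int exhaust_hugepage_ids(size_t hugepagesize)
{
        int num;

        for (num = 0; num < MAXIDS; num++) {
                shm_id_arr[num] = shmget(IPC_PRIVATE, hugepagesize,
                                         SHM_HUGETLB | IPC_CREAT | 0600);
                if (shm_id_arr[num] == -1)
                        return errno;   /* ENOMEM here, not ENOSPC */
        }
        return 0;
}
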
    This value needs to be derived from the number of free hugepages
reported in /proc/meminfo, e.g.

gcoo...@orangebox /scratch/ltp $ grep HugePages /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
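
    A sketch of how the test could pick that up at run time
(hypothetical helper, not an existing LTP function):

#include <stdio.h>

/* Parse HugePages_Free out of /proc/meminfo; returns -1 on error. */
static long hugepages_free(void)
{
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[128];
        long n = -1;

        if (fp == NULL)
                return -1;
        while (fgets(line, sizeof(line), fp) != NULL)
                if (sscanf(line, "HugePages_Free: %ld", &n) == 1)
                        break;
        fclose(fp);
        return n;
}

    The setup could then cap its loop at the smaller of MAXIDS and
hugepages_free(), and report TCONF instead of TBROK when no hugepages
are configured at all.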

    The testcase needs to be adjusted for this case, or at the very
least the maximum number of shared memory IDs should be scooped up
first, then one freed and reallocated as a hugepage shared memory
segment -- otherwise provoking ENOSPC is impossible on standard
systems.
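
    Something along these lines (my reading of that suggestion,
untested) would provoke ENOSPC without needing 16GB of hugepages:

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000
#endif

int main(void)
{
        long shmmni = 4096;     /* assumed default; better to read
                                 * /proc/sys/kernel/shmmni */
        int *ids = malloc(shmmni * sizeof(*ids));
        long n = 0;
        int hid, ok;

        if (ids == NULL)
                return 1;

        /* Scoop up every remaining segment ID with one-page segments. */
        while (n < shmmni) {
                ids[n] = shmget(IPC_PRIVATE, getpagesize(),
                                IPC_CREAT | 0600);
                if (ids[n] == -1)
                        break;          /* IDs exhausted */
                n++;
        }

        /* With no IDs left, a hugepage request must fail with ENOSPC
         * (SHMMNI), not ENOMEM, regardless of free hugepages. */
        hid = shmget(IPC_PRIVATE, 2048 * 1024,
                     SHM_HUGETLB | IPC_CREAT | 0600);
        ok = (hid == -1 && errno == ENOSPC);
        if (hid != -1)
                shmctl(hid, IPC_RMID, NULL);    /* IDs weren't exhausted */

        while (n--)
                shmctl(ids[n], IPC_RMID, NULL);
        free(ids);
        return ok ? 0 : 1;
}
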
HTH,
-Garrett
