2010/4/14 Mitani <[email protected]>:
>> -----Original Message-----
>> From: Garrett Cooper [mailto:[email protected]]
>> Sent: Wednesday, April 14, 2010 3:18 PM
>> To: Mitani
>> Cc: Randy Dunlap; [email protected]
>> Subject: Re: [LTP] [PATCH] fix "hugetlb" several tests
>>
>> On Tue, Apr 13, 2010 at 7:17 PM, Mitani <[email protected]> wrote:
>> > Hi Randy,
>> >
>> > I didn't notice the misspelling. Sorry.
>> > I decided to use "due to" per your advice.
>> >
>> > May I suggest a revised patch?
>> >
> [...]
>> >
>> > Thank you--
>> >
>> > -Tomonori Mitani
>> >
>> >
>> >> -----Original Message-----
>> >> From: Randy Dunlap [mailto:[email protected]]
>> >> Sent: Wednesday, April 14, 2010 12:04 AM
>> >> To: Mitani
>> >> Cc: [email protected]
>> >> Subject: Re: [LTP] [PATCH] fix "hugetlb" several tests
>> >>
>> >> On 04/12/10 23:58, Mitani wrote:
>> >> > ------------
>> >> >  a) All tests:
>> >> > "TBROK  :  Test cannot be continued owning to sufficient
>> >> > availability of Hugepages on the system"
>> >> >
>> >> >  b) 2), 3), 5), 6), 8), 10), 11) tests:
>> >> > "TWARN  :  tst_rmdir(): TESTDIR was NULL; no removal attempted"
>> >> > ------------
>> >> >
>> >> > Case a) and case b) have the same root cause.
>> >> >
>> >> > All of the case a) failures occurred at the following point (for
>> >> > example, in hugemmap04):
>> >> > ------------<hugemmap04.c - main()>
>> >> >         /* Check number of hugepages */
>> >> >         if (get_no_of_hugepages() <= 0 || hugepages_size() <= 0)
>> >> >                 tst_brkm(TBROK, cleanup, "Test cannot be continued owning to \
>> >> >                                 sufficient availability of Hugepages on the system");
>> >> > ------------
>> >> >
>> >> > I found out that the "HugePages_Total" parameter in the "/proc/meminfo"
>> >> > file is set to "0". This causes the TBROK failure above. It is an
>> >> > environment problem.
>> >> >
>> >> > But in this case, I think the tests should return TCONF, not TBROK.
>> >>
>> >> That makes sense to me.
>> >>
>> >> > And in case b), these tests try to delete the "TESTDIR" directory by
>> >> > calling the "tst_rmdir()" function in the "cleanup()" function.
>> >> > But "TESTDIR" is never set if the "tst_tmpdir()" function isn't called.
>> >> > I think the case b) tests must not call the cleanup() function.
>> >> >
>> >> >
>> >> > I want to suggest the following patch.
>> >> >
>> >> > Signed-off-by: Tomonori Mitani <[email protected]>
>> >> >
>> >> > ============
>> >> > --- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c       2010-04-01 15:23:09.000000000 +0900
>> >> > +++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c       2010-04-13 11:23:33.000000000 +0900
>> >> > @@ -127,7 +127,7 @@
>> >> >
>> >> >     /* Check number of hugepages */
>> >> >     if (get_no_of_hugepages() <= 0 || hugepages_size() <= 0)
>> >> > -           tst_brkm(TBROK, cleanup, "Test cannot be continued owning to \
>> >> > +           tst_brkm(TCONF, cleanup, "Test cannot be continued owning to \
>> >> >                             sufficient availability of Hugepages on the system");
>> >> >
>> >> >     /* Perform global setup for test */
>> >>
>> >> Not caused by your patch, but all of those "owning to" should be
>> >> "owing to", or even better, "due to".
>>
>>     Sorry... it might have been better to say (more succinctly): "Not
>> enough available Hugepages" ?
>> Thanks,
>> -Garrett
>
>
> Hi,
>
> I suggest a new patch.
>
> Signed-off-by: Tomonori Mitani <[email protected]>
>
> ============
> --- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c        2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c        2010-04-14 16:40:25.000000000 +0900
> @@ -127,8 +127,7 @@
>
>        /* Check number of hugepages */
>        if (get_no_of_hugepages() <= 0 || hugepages_size() <= 0)
> -               tst_brkm(TBROK, cleanup, "Test cannot be continued owning to \
> -                               sufficient availability of Hugepages on the system");
> +               tst_brkm(TCONF, cleanup, "Not enough available Hugepages");
>
>        /* Perform global setup for test */
>        setup();
> --- a/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat01.c      2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat01.c      2010-04-14 16:48:54.000000000 +0900
> @@ -105,7 +105,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat02.c      2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat02.c      2010-04-14 16:49:15.000000000 +0900
> @@ -102,7 +102,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat03.c      2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat03.c      2010-04-14 16:49:31.000000000 +0900
> @@ -86,7 +86,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, cleanup, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl01.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl01.c    2010-04-14 16:50:16.000000000 +0900
> @@ -130,7 +130,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl02.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl02.c    2010-04-14 16:50:28.000000000 +0900
> @@ -102,7 +102,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl03.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmctl/hugeshmctl03.c    2010-04-14 16:50:40.000000000 +0900
> @@ -105,7 +105,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, cleanup, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmdt/hugeshmdt01.c      2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmdt/hugeshmdt01.c      2010-04-14 16:51:11.000000000 +0900
> @@ -87,7 +87,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget01.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget01.c    2010-04-14 16:51:34.000000000 +0900
> @@ -82,7 +82,7 @@
>
>        /* The following loop checks looping state if -i option given */
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, cleanup, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget02.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget02.c    2010-04-14 16:51:45.000000000 +0900
> @@ -84,7 +84,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget03.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget03.c    2010-04-14 16:51:55.000000000 +0900
> @@ -85,7 +85,7 @@
>
>        /* The following loop checks looping state if -i option given */
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, tst_exit, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> --- a/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget05.c    2010-04-01 15:23:09.000000000 +0900
> +++ b/testcases/kernel/mem/hugetlb/hugeshmget/hugeshmget05.c    2010-04-14 16:52:10.000000000 +0900
> @@ -86,7 +86,7 @@
>        }
>
>         if ( get_no_of_hugepages() <= 0 || hugepages_size() <= 0 )
> -             tst_brkm(TBROK, cleanup, "Test cannot be continued owning to sufficient availability of Hugepages on the system");
> +             tst_brkm(TCONF, cleanup, "Not enough available Hugepages");
>         else
>              huge_pages_shm_to_be_allocated = ( get_no_of_hugepages() * hugepages_size() * 1024) / 2 ;
>
> ============
>
>
> Regards--
>
> -Tomonori Mitani
>
>
>
> ------------------------------------------------------------------------------
> Download Intel® Parallel Studio Eval
> Try the new software tools for yourself. Speed compiling, find bugs
> proactively, and fine-tune applications for parallel performance.
> See why Intel Parallel Studio got high marks during beta.
> http://p.sf.net/sfu/intel-sw-dev
> _______________________________________________
> Ltp-list mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/ltp-list
>

Hi,

       Have these patches been merged into the mainline?

thanks,
