----- Original Message -----
> From: "Li Wang" <liw...@redhat.com>
> To: "Jan Stancek" <jstan...@redhat.com>
> Cc: ltp-list@lists.sourceforge.net
> Sent: Sunday, 4 January, 2015 4:10:38 AM
> Subject: Re: [LTP] [PATCH] mem/hugeshmat: new case for hugepage leak 
> inspection
> 
> The regression has a long history, Patch in upstream 2.6.20-rc1
> GIT: 39dde65c9940c97fcd178a3d2b1c57ed8b7b68aa.
> https://lkml.org/lkml/2012/4/16/129

And fix is presumably this commit:

commit c5c99429fa57dcf6e05203ebe3676db1ec646793
Author: Larry Woodman <lwood...@redhat.com>
Date:   Thu Jan 24 05:49:25 2008 -0800
    fix hugepages leak due to pagetable page sharing

It'd be nice to include this information in the testcase or commit message.
More comments below:

> 
> Not long ago, Fujitsu asked us (Red Hat) to cover this regression
> test in RHEL7-next, and I found two Red Hat BZs clearly tracing this
> issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=428612
> https://bugzilla.redhat.com/show_bug.cgi?id=222753#c16
> So I ported the testcase to LTP.
> 
> --
> Regards,
> Li Wang
> Email: liw...@redhat.com
> 
> 
> 
> ----- Original Message -----
> > ----- Original Message -----
> > > From: "Li Wang" <liw...@redhat.com>
> > > To: ltp-list@lists.sourceforge.net
> > > Sent: Saturday, 27 December, 2014 8:13:06 AM
> > > Subject: [LTP] [PATCH] mem/hugeshmat: new case for hugepage leak
> > > inspection
> > > 
> > > Description of Problem:
> > >     When over 1GB of shared memory is allocated in huge pages, the
> > >     huge pages are not released even though the process finished.
> > 
> > This looks like a regression test for a kernel bug. Do you know which
> > kernel versions are affected and which commit fixes it?
> > 
> > Regards,
> > Jan
> > 
> > > 
> > > Signed-off-by: Li Wang <liw...@redhat.com>
> > > ---
> > >  runtest/hugetlb                                    |   1 +
> > >  testcases/kernel/mem/.gitignore                    |   1 +
> > >  .../kernel/mem/hugetlb/hugeshmat/hugeshmat04.c     | 185 +++++++++++++++++++++
> > >  3 files changed, 187 insertions(+)
> > >  create mode 100644 testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat04.c
> > > 
> > > diff --git a/runtest/hugetlb b/runtest/hugetlb
> > > index 3eaf14c..805141d 100644
> > > --- a/runtest/hugetlb
> > > +++ b/runtest/hugetlb
> > > @@ -10,6 +10,7 @@ hugemmap05_3 hugemmap05 -s -m
> > >  hugeshmat01 hugeshmat01 -i 5
> > >  hugeshmat02 hugeshmat02 -i 5
> > >  hugeshmat03 hugeshmat03 -i 5
> > > +hugeshmat04 hugeshmat04 -i 5
> > >  
> > >  hugeshmctl01 hugeshmctl01 -i 5
> > >  hugeshmctl02 hugeshmctl02 -i 5
> > > diff --git a/testcases/kernel/mem/.gitignore
> > > b/testcases/kernel/mem/.gitignore
> > > index f96964c..c531563 100644
> > > --- a/testcases/kernel/mem/.gitignore
> > > +++ b/testcases/kernel/mem/.gitignore
> > > @@ -6,6 +6,7 @@
> > >  /hugetlb/hugeshmat/hugeshmat01
> > >  /hugetlb/hugeshmat/hugeshmat02
> > >  /hugetlb/hugeshmat/hugeshmat03
> > > +/hugetlb/hugeshmat/hugeshmat04
> > >  /hugetlb/hugeshmctl/hugeshmctl01
> > >  /hugetlb/hugeshmctl/hugeshmctl02
> > >  /hugetlb/hugeshmctl/hugeshmctl03
> > > diff --git a/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat04.c
> > > b/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat04.c
> > > new file mode 100644
> > > index 0000000..515947b
> > > --- /dev/null
> > > +++ b/testcases/kernel/mem/hugetlb/hugeshmat/hugeshmat04.c
> > > @@ -0,0 +1,185 @@
> > > +/*
> > > + *   Copyright (c) Linux Test Project, 2014
> > > + *
> > > + *   This program is free software;  you can redistribute it and/or modify
> > > + *   it under the terms of the GNU General Public License as published by
> > > + *   the Free Software Foundation; either version 2 of the License, or
> > > + *   (at your option) any later version.
> > > + *
> > > + *   This program is distributed in the hope that it will be useful,
> > > + *   but WITHOUT ANY WARRANTY;  without even the implied warranty of
> > > + *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
> > > + *   the GNU General Public License for more details.
> > > + *
> > > + *   You should have received a copy of the GNU General Public License
> > > + *   along with this program;  if not, write to the Free Software
> > > + *   Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> > > + */
> > > +
> > > +/*
> > > + * NAME
> > > + *       hugeshmat04.c
> > > + *

I'm getting numerous whitespace errors with git-am. Can you run your patch
through checkpatch?

> > > + * DESCRIPTION
> > > + *       hugeshmat04 - test for hugepage leak inspection.
> > > + *
> > > + *      Description of Problem:
> > > + *          When over 1GB of shared memory is allocated in huge pages,
> > > + *          the huge pages are not released even though the process
> > > + *          finished.
> > > + *
> > > + *          You need more than 2GB of memory to run this test.
> > > + *
> > > + *       Return value:
> > > + *         0:     Test succeeded.  No regression found.
> > > + *         1:     Test failed.  Maybe memory shortage.  Please retry testing.

Can't we test for enough huge pages in setup, so we don't have to guess why
the test failed? If there aren't enough huge pages for the test, setup()
should end with TCONF.

> > > + *         2:     Test failed.  Regression detected.
> > > + *         other: Test failed.  Other problem detected.

I don't see how you could end up with a different value.

> > > + *
> > > + * HISTORY
> > > + *       05/2014 - Written by Fujitsu Corp.
> > > + *       12/2014 - Port to LTP by Li Wang.
> > > + *
> > > + * RESTRICTIONS
> > > + *       test must be run as root
> > > + */
> > > +
> > > +#include <stdio.h>
> > > +#include <stdlib.h>
> > > +#include <unistd.h>
> > > +#include <fcntl.h>
> > > +#include <string.h>
> > > +#include <sys/mman.h>
> > > +#include <sys/types.h>
> > > +#include <sys/shm.h>
> > > +#include <sys/wait.h>
> > > +
> > > +#include "test.h"
> > > +#include "mem.h"
> > > +
> > > +#define SIZE     (1024 * 1024 * 1024)
> > > +#define BOUNDARY (1024 * 1024 * 1024)
> > > +
> > > +char *TCID = "hugeshmat04";
> > > +int TST_TOTAL = 3;
> > > +
> > > +static int huge_total;
> > > +static int huge_free;
> > > +static int huge_free2;
> > > +static long hugepages;
> > > +static long orig_hugepages;
> > > +
> > > +static int shared_hugepage(void);
> > > +
> > > +
> > > +int main(int ac, char **av)
> > > +{
> > > + int lc, i, ret;
> > > + const char *msg;
> > > +
> > > + msg = parse_opts(ac,av, NULL, NULL);
> > > + if (msg != NULL)
> > > +         tst_brkm(TBROK, NULL, "OPTION PARSING ERROR - %s", msg);
> > > +
> > > + setup();
> > > +
> > > + for (lc = 0; TEST_LOOPING(lc); lc++) {
> > > +         tst_count = 0;
> > > +
> > > +         for (i = 0; i < TST_TOTAL; i++) {
> > > +                 huge_total = read_meminfo("HugePages_Total:");
> > > +                 huge_free = read_meminfo("HugePages_Free:");
> > > +
> > > +                 if (huge_total == hugepages && huge_free == hugepages) {

This condition assumes that all huge pages must be free when the test starts.
Isn't it enough to check that the number of free huge pages before and after
the test is the same?

> > > +                         ret = shared_hugepage();

shared_hugepage() always returns 0

> > > +
> > > +                         huge_free2 = read_meminfo("HugePages_Free:");
> > > +                         if (huge_free2 != huge_free)
> > > +                                 ret = 2;
> > > +                 } else {
> > > +                         ret = 1;
> > > +                 }
> > > +

The switch below seems unnecessary if you remove "ret" and put the
tst_resm/tst_brkm calls at the places where you initialize "ret".

> > > +                 switch (ret) {
> > > +                 case 0:
> > > +                         tst_resm(TPASS,
> > > +                                 "No regression found.");
> > > +                         break;
> > > +                 case 1:
> > > +                         tst_brkm(TFAIL, cleanup,
> > > +                                 "Test failed. Maybe memory shortage.");
> > > +                         break;
> > > +                 case 2:
> > > +                         tst_brkm(TFAIL, cleanup,
> > > +                                 "Test failed. Hugepage leak 
> > > inspection.");
> > > +                         break;
> > > +                 default:
> > > +                         tst_brkm(TFAIL, cleanup,
> > > +                                 "Test failed. Other problem detected.");
> > > +                         break;
> > > +                 }
> > > +
> > > +         }
> > > + }
> > > + cleanup();
> > > + tst_exit();
> > > +}
> > > +
> > > +int shared_hugepage(void)
> > > +{
> > > + pid_t pid;
> > > + int status, shmid;
> > > + size_t size = (size_t)SIZE;
> > > + void *buf;
> > > +
> > > + shmid = shmget(IPC_PRIVATE, size, SHM_HUGETLB | IPC_CREAT | 0777);
> > > + if (shmid < 0)
> > > +         tst_brkm(TBROK | TERRNO, cleanup, "shmget");
> > > +
> > > + buf = shmat(shmid, (void *)BOUNDARY, SHM_RND | 0777);

Does it make a difference where you attach the shared segment? I'm slightly
worried that the absolute address you picked may already be in use on some
distros/arches.

> > > + if ( buf == (void *)-1 ) {
> > > +         shmctl(shmid, IPC_RMID, NULL);
> > > +         tst_brkm(TBROK | TERRNO, cleanup, "shmat");
> > > + }
> > > +
> > > + memset(buf, 2, size);
> > > + sleep(3);

What is this sleep for?

> > > + pid = fork();
> > > +
> > > + if (pid == 0)
> > > +         exit(1);
> > > + else if (pid < 0)
> > > +         tst_brkm(TBROK | TERRNO, cleanup, "fork");
> > > +
> > > + wait(&status);
> > > + shmdt(buf);
> > > + shmctl(shmid, IPC_RMID, NULL);
> > > +
> > > + return 0;

The return value seems pointless as it always returns 0.

> > > +}
> > > +
> > > +void setup(void)
> > > +{
> > > + long mem_total, hpage_size;
> > > +
> > > + tst_require_root(NULL);
> > > +
> > > + mem_total = read_meminfo("MemTotal:");
> > > + if (mem_total < 2097152) {
> > > +         tst_resm(TINFO, "Total memory should be greater than 2GB.");
> > > +         tst_exit();
> > > + }
> > > +
> > > + orig_hugepages = get_sys_tune("nr_hugepages");
> > > + hpage_size = read_meminfo("Hugepagesize:");
> > > +
> > > + hugepages = (orig_hugepages * hpage_size + 1048576) / hpage_size;

Can you use the already defined "SIZE" instead of hardcoding the values
above? Also, if there are not enough huge pages for the test, setup() should
end with TCONF.

Regards,
Jan

> > > + set_sys_tune("nr_hugepages", hugepages, 1);
> > > +
> > > + TEST_PAUSE;
> > > +}
> > > +
> > > +void cleanup(void)
> > > +{
> > > + TEST_CLEANUP;
> > > + set_sys_tune("nr_hugepages", orig_hugepages, 0);
> > > +}
> > > --
> > > 1.8.3.1
> > > 
> > > 
> > > ------------------------------------------------------------------------------
> > > Dive into the World of Parallel Programming! The Go Parallel Website,
> > > sponsored by Intel and developed in partnership with Slashdot Media, is
> > > your
> > > hub for all things parallel software development, from weekly thought
> > > leadership blogs to news, videos, case studies, tutorials and more. Take
> > > a
> > > look and join the conversation now. http://goparallel.sourceforge.net
> > > _______________________________________________
> > > Ltp-list mailing list
> > > Ltp-list@lists.sourceforge.net
> > > https://lists.sourceforge.net/lists/listinfo/ltp-list
> > > 
> > 
> 
