On Wed, Apr 23, 2025 at 04:33:19PM +0530, Dev Jain wrote:
> 
> 
> On 23/04/25 4:06 pm, Feng Tang wrote:
> > When running the mm selftests to verify mm patches, the
> > 'compaction_test' case failed on an x86 server with 1TB of memory.
> > The root cause is that the machine has more free memory than the
> > test supports.
> > 
> > The test case tries to allocate 100000 huge pages, which is about
> > 200 GB on that x86 server, and when the allocation succeeds it
> > expects the number of pages it got to be larger than 1/3 of 80% of
> > the free memory in the system. This logic only works for platforms
> > with 750 GB ( 200 / (1/3) / 80% ) or less free memory, and may raise
> > a false alarm for others.
> > 
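[ For anyone checking the arithmetic above: a minimal standalone sketch
  (my illustration, not part of the patch) that reproduces the 750 GB
  figure. ]

#include <stdio.h>

int main(void)
{
	/* ~100000 x 2 MB huge pages, i.e. the old fixed request */
	double request_gb = 200.0;
	/* invert the "allocated > 1/3 of 80% of free" pass condition */
	double limit_gb = request_gb * 3 / 0.8;

	printf("a fixed %g GB request only passes below %g GB free\n",
	       request_gb, limit_gb);	/* prints 750 */
	return 0;
}
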
> > Fix it by replacing the fixed page count with a number scaled to
> > the actual amount of free memory.
> > 
> > Fixes: bd67d5c15cc19 ("Test compaction of mlocked memory")
> > Signed-off-by: Feng Tang <[email protected]>
> 
> Not sure if the Fixes tag is needed.

Yep, I wasn't very sure either :), and I'm fine with dropping the tag.

> 
> Acked-by: Dev Jain <[email protected]>
 
Many thanks for the review!

- Feng

> > ---
> >   tools/testing/selftests/mm/compaction_test.c | 19 ++++++++++++++-----
> >   1 file changed, 14 insertions(+), 5 deletions(-)
> > 
> > diff --git a/tools/testing/selftests/mm/compaction_test.c b/tools/testing/selftests/mm/compaction_test.c
> > index 2c3a0eb6b22d..9bc4591c7b16 100644
> > --- a/tools/testing/selftests/mm/compaction_test.c
> > +++ b/tools/testing/selftests/mm/compaction_test.c
> > @@ -90,6 +90,8 @@ int check_compaction(unsigned long mem_free, unsigned long hugepage_size,
> >     int compaction_index = 0;
> >     char nr_hugepages[20] = {0};
> >     char init_nr_hugepages[24] = {0};
> > +   char target_nr_hugepages[24] = {0};
> > +   int slen;
> >     snprintf(init_nr_hugepages, sizeof(init_nr_hugepages),
> >              "%lu", initial_nr_hugepages);
> > @@ -106,11 +108,18 @@ int check_compaction(unsigned long mem_free, unsigned long hugepage_size,
> >             goto out;
> >     }
> > -   /* Request a large number of huge pages. The Kernel will allocate
> > -      as much as it can */
> > -   if (write(fd, "100000", (6*sizeof(char))) != (6*sizeof(char))) {
> > -           ksft_print_msg("Failed to write 100000 to /proc/sys/vm/nr_hugepages: %s\n",
> > -                          strerror(errno));
> > +   /*
> > +    * Request huge pages for about half of the free memory. The kernel
> > +    * will allocate as much as it can, and we expect it will get at least 1/3
> > +    */
> > +   nr_hugepages_ul = mem_free / hugepage_size / 2;
> > +   snprintf(target_nr_hugepages, sizeof(target_nr_hugepages),
> > +            "%lu", nr_hugepages_ul);
> > +
> > +   slen = strlen(target_nr_hugepages);
> > +   if (write(fd, target_nr_hugepages, slen) != slen) {
> > +           ksft_print_msg("Failed to write %lu to /proc/sys/vm/nr_hugepages: %s\n",
> > +                          nr_hugepages_ul, strerror(errno));
> >             goto close_fd;
> >     }
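
[ To see the new sizing rule outside the selftest harness, here is a
  minimal user-space sketch of the same mem_free / hugepage_size / 2
  formula. My illustration, not from the patch; it assumes the
  "MemFree:" and "Hugepagesize:" fields of /proc/meminfo, both
  reported in kB. ]

#include <stdio.h>

int main(void)
{
	char line[128];
	unsigned long mem_free_kb = 0, hugepage_kb = 0;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* each sscanf() only matches its own meminfo line */
		sscanf(line, "MemFree: %lu kB", &mem_free_kb);
		sscanf(line, "Hugepagesize: %lu kB", &hugepage_kb);
	}
	fclose(f);

	if (hugepage_kb)
		printf("target nr_hugepages: %lu\n",
		       mem_free_kb / hugepage_kb / 2);
	return 0;
}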
