From: Yanfei Xu <[email protected]>

Check MMF_DISABLE_THP before iterating over all the vmas of an mm.
Otherwise, if an mm_struct contains a large number of vmas, many CPU
cycles are wasted walking a list we will never collapse anything in.

Also drop an unnecessary cond_resched(): another cond_resched() follows
shortly after it, with no time-consuming work between the two.
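
The shape of the optimization can be sketched in plain userspace C.
This is only an illustration: fake_mm, region, and scan_regions are
hypothetical stand-ins for mm_struct, vm_area_struct, and the scan
loop, not kernel code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for vm_area_struct and mm_struct. */
struct region { struct region *next; };
struct fake_mm {
	bool thp_disabled;	/* stands in for MMF_DISABLE_THP */
	struct region *regions;	/* stands in for the vma list */
};

static int scan_regions(struct fake_mm *mm)
{
	int scanned = 0;

	/* The early check this patch adds: bail out once, before
	 * walking the list, instead of paying per-vma cost for an
	 * mm that can never be collapsed. */
	if (mm->thp_disabled)
		return 0;

	for (struct region *r = mm->regions; r; r = r->next)
		scanned++;	/* per-vma scan work happens here */

	return scanned;
}
```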

Signed-off-by: Yanfei Xu <[email protected]>
---
 mm/khugepaged.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2efe1d0c92ed..c293ec4a94ea 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2094,6 +2094,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
         */
        if (unlikely(!mmap_read_trylock(mm)))
                goto breakouterloop_mmap_lock;
+       if (test_bit(MMF_DISABLE_THP, &mm->flags))
+               goto breakouterloop_mmap_lock;
        if (likely(!khugepaged_test_exit(mm)))
                vma = find_vma(mm, khugepaged_scan.address);
 
@@ -2101,7 +2103,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
        for (; vma; vma = vma->vm_next) {
                unsigned long hstart, hend;
 
-               cond_resched();
                if (unlikely(khugepaged_test_exit(mm))) {
                        progress++;
                        break;
-- 
2.27.0
