Since commit 682a3385e773 ("mm, page_alloc: inline the fast path of the
zonelist iterator") we replace a NULL nodemask with cpuset_current_mems_allowed
in the fast path, so that get_page_from_freelist() filters nodes allowed by the
cpuset via for_next_zone_zonelist_nodemask(). In that case it's pointless to
additionally check __cpuset_zone_allowed() in each iteration, which we can avoid
by not adding ALLOC_CPUSET to alloc_flags in that scenario.

This saves some cycles in the allocator fast path on systems with one or more
non-root cpusets configured. In the slow path, ALLOC_CPUSET is set again when
alloc_flags are recalculated in __alloc_pages_slowpath(). Without configured
cpusets, this whole code path is disabled by a static key.

Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
---
 mm/page_alloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 05068adf9007..407e5d89ad2e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3900,9 +3900,10 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
        if (cpusets_enabled()) {
                *alloc_mask |= __GFP_HARDWALL;
-               *alloc_flags |= ALLOC_CPUSET;
                if (!ac->nodemask)
                        ac->nodemask = &cpuset_current_mems_allowed;
+               else
+                       *alloc_flags |= ALLOC_CPUSET;
        }
 
        lockdep_trace_alloc(gfp_mask);
-- 
2.11.0