On Wed 14-06-17 17:06:47, Vlastimil Babka wrote:
> On 06/14/2017 03:42 PM, Michal Hocko wrote:
> > On Wed 14-06-17 15:18:26, Vlastimil Babka wrote:
> >> On 06/13/2017 11:00 AM, Michal Hocko wrote:
> > [...]
> >>> @@ -1717,13 +1640,22 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
> >>>  	page = dequeue_huge_page_node(h, nid);
> >>>  	spin_unlock(&hugetlb_lock);
> >>>  
> >>> - if (!page)
> >>> -         page = __alloc_buddy_huge_page_no_mpol(h, nid);
> >>> + if (!page) {
> >>> +         nodemask_t nmask;
> >>> +
> >>> +         if (nid != NUMA_NO_NODE) {
> >>> +                 nmask = NODE_MASK_NONE;
> >>> +                 node_set(nid, nmask);
> >>
> >> TBH I don't like this hack too much, and would rather see __GFP_THISNODE
> >> involved, which picks a different (short) zonelist. Also it's allocating
> >> nodemask on stack, which we generally avoid? Although the callers
> >> currently seem to be shallow.
> > 
> > Fair enough. That would require pulling gfp mask handling up the call
> > chain. This on top of this patch + refreshes for other patches later in
> > the series as they will conflict now?
> 
> For the orig patch + fold (squashed locally from your mmotm/... branch)
> 
> Acked-by: Vlastimil Babka <[email protected]>

Thanks!

> Please update the commit description which still mentions the nodemask
> emulation of __GFP_THISNODE.

Yes, I will do that when squashing them.

> Also I noticed that the goal of patch 2 is already partially achieved
> here, because alloc_huge_page_nodemask() will now allocate using
> zonelist. It won't dequeue that way yet, though.

Well, the primary point of the latter is to allow for the preferred node.
I didn't find a good way to split the two things apart and still keep a
reasonably comprehensible diff, so I've focused on the actual allocation
here and on the pools in the other patch. Hope that makes some sense.
-- 
Michal Hocko
SUSE Labs
