On 03/29/2016 11:52 AM, Vlastimil Babka wrote:
On 03/26/2016 06:31 AM, Xishi Qiu wrote:
On 2016/3/26 3:22, Andrew Morton wrote:

On Fri, 25 Mar 2016 14:56:04 +0800 Xishi Qiu <qiuxi...@huawei.com> wrote:

It is incorrect to use next_node() to find a target node here: it may
return MAX_NUMNODES or an offline node, which leads to a crash in
buddy system allocation.

...

--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -289,11 +289,11 @@ struct page *alloc_migrate_target(struct page *page, unsigned long private,
         * now as a simple work-around, we use the next node for destination.
         */
        if (PageHuge(page)) {
-               nodemask_t src = nodemask_of_node(page_to_nid(page));
-               nodemask_t dst;
-               nodes_complement(dst, src);
+               int node = next_online_node(page_to_nid(page));
+               if (node == MAX_NUMNODES)
+                       node = first_online_node;
                return alloc_huge_page_node(page_hstate(compound_head(page)),
-                                           next_node(page_to_nid(page), dst));
+                                           node);
        }

        if (PageHighMem(page))
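
For illustration, here is a minimal userspace model of the patched lookup.
MAX_NUMNODES, the online mask, and the helpers are toy stand-ins for the
kernel's nodemask API (first_online_node is really a macro in the kernel,
modelled here as a function), so treat this as a sketch of the behaviour,
not the implementation:

#include <stdio.h>

#define MAX_NUMNODES 4                  /* illustrative; derived from CONFIG_NODES_SHIFT in the kernel */
static unsigned long online = 0x5;      /* pretend nodes 0 and 2 are online */

/* Toy next_online_node(): first online node strictly after nid,
 * or MAX_NUMNODES when there is none. */
static int next_online_node(int nid)
{
        for (int n = nid + 1; n < MAX_NUMNODES; n++)
                if (online & (1UL << n))
                        return n;
        return MAX_NUMNODES;
}

/* Toy first_online_node. */
static int first_online_node(void)
{
        return next_online_node(-1);
}

int main(void)
{
        for (int src = 0; src < MAX_NUMNODES; src++) {
                /* The patched alloc_migrate_target() logic: */
                int node = next_online_node(src);
                if (node == MAX_NUMNODES)
                        node = first_online_node();
                printf("src %d -> dst %d (always an online node)\n", src, node);
        }
        return 0;
}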

Indeed.  Can you tell us more about the circumstances under which the
kernel will crash?  I need to decide which kernel version(s) need the
patch, but the changelog doesn't contain the info needed to make this
decision (it should).


Hi Andrew,

I read the v4.4 code and found the following path, which may trigger the bug.

alloc_migrate_target()
        alloc_huge_page_node()  // the node may be offline or MAX_NUMNODES
                __alloc_buddy_huge_page_no_mpol()
                        __alloc_buddy_huge_page()
                                __hugetlb_alloc_buddy_huge_page()

The code in these functions seems to come from commit 099730d67417d ("mm,
hugetlb: use memory policy when available") by Dave Hansen (adding to
CC), which was indeed merged in 4.4-rc1.

However, alloc_pages_node() is only called in the block guarded by:

if (!IS_ENABLED(CONFIG_NUMA) || !vma) {

The rather weird "!IS_ENABLED(CONFIG_NUMA)" part comes from the immediate
follow-up commit e0ec90ee7e6f ("mm, hugetlbfs: optimize when NUMA=n").

So I doubt the code path here can actually happen. But it's fragile and
confusing nevertheless.
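
To make that guard concrete, here is a toy truth-table model of the
condition; IS_ENABLED(CONFIG_NUMA) is modelled as a plain constant, and
this sketches only the branch logic, not the surrounding kernel code:

#include <stdbool.h>
#include <stdio.h>

#define NUMA_ENABLED 1   /* stands in for IS_ENABLED(CONFIG_NUMA) */

/* Models the guard in __hugetlb_alloc_buddy_huge_page(): the caller's
 * nid is handed to alloc_pages_node() only when this returns true. */
static bool nid_reaches_alloc_pages_node(bool have_vma)
{
        return !NUMA_ENABLED || !have_vma;
}

int main(void)
{
        printf("NUMA=y, vma present: %d\n", nid_reaches_alloc_pages_node(true));
        printf("NUMA=y, vma == NULL: %d\n", nid_reaches_alloc_pages_node(false));
        return 0;
}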

Ah, so there's actually a dangerous path:
alloc_huge_page_node()
    dequeue_huge_page_node()
        list_for_each_entry(page, &h->hugepage_freelists[nid], lru)

hugepage_freelists is MAX_NUMNODES-sized, so when nid is MAX_NUMNODES, we access past the end of the array.
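
A minimal sketch of that out-of-bounds access; MAX_NUMNODES and the
struct layout here are simplified stand-ins for the real struct hstate:

#include <stdio.h>

#define MAX_NUMNODES 4   /* illustrative */

struct list_head { struct list_head *next, *prev; };

/* Simplified stand-in for struct hstate: the per-node freelist array
 * has exactly MAX_NUMNODES entries; the trailing field just gives the
 * example some memory after the array. */
struct hstate {
        struct list_head hugepage_freelists[MAX_NUMNODES];
        unsigned long whatever_follows;
};

int main(void)
{
        struct hstate h = {0};
        int nid = MAX_NUMNODES;   /* the value the buggy next_node() can produce */

        /* dequeue_huge_page_node() starts its list walk at
         * &h->hugepage_freelists[nid]; with nid == MAX_NUMNODES that is
         * one element past the array, reinterpreting whatever memory
         * follows as a list head. */
        struct list_head *head = &h.hugepage_freelists[nid];
        printf("array ends at %p, list walk would start at %p\n",
               (void *)(h.hugepage_freelists + MAX_NUMNODES), (void *)head);
        return 0;
}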

However, look closer at how nid is obtained in alloc_migrate_target():

nodemask_t src = nodemask_of_node(page_to_nid(page));
nodemask_t dst;
nodes_complement(dst, src);

nid = next_node(page_to_nid(page), dst)

For nid to be MAX_NUMNODES, the original page has to be on node MAX_NUMNODES-1; otherwise the complement means the search hits the very next bit, which is set.

It's actually a rather obfuscated way of doing:

nid = page_to_nid(page) + 1;

In that case the problem is in commit c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage") from 3.12, and it will likely affect only people who tune down MAX_NUMNODES to match their machine.
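
A userspace model of the nodemask arithmetic shows both points: the
complement makes next_node() land on the very next bit, and the last node
falls off the end. The helpers mirror the kernel API names but are toy
single-word reimplementations:

#include <stdio.h>

#define MAX_NUMNODES 4   /* illustrative; the bug needs the page on node MAX_NUMNODES - 1 */

/* Toy single-word reimplementations of the nodemask helpers. */
static unsigned long nodemask_of_node(int nid)
{
        return 1UL << nid;
}

static unsigned long nodes_complement(unsigned long mask)
{
        return ~mask & ((1UL << MAX_NUMNODES) - 1);
}

static int next_node(int nid, unsigned long mask)
{
        for (int n = nid + 1; n < MAX_NUMNODES; n++)
                if (mask & (1UL << n))
                        return n;
        return MAX_NUMNODES;
}

int main(void)
{
        for (int src = 0; src < MAX_NUMNODES; src++) {
                unsigned long dst = nodes_complement(nodemask_of_node(src));
                int nid = next_node(src, dst);
                /* Every bit but src is set in dst, so the search always hits
                 * src + 1; for src == MAX_NUMNODES - 1 it returns MAX_NUMNODES. */
                printf("src %d -> nid %d%s\n", src, nid,
                       nid == MAX_NUMNODES ? "  <-- MAX_NUMNODES" : "");
        }
        return 0;
}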

                                        alloc_pages_node()
                                                __alloc_pages_node()
                                                        VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
                                                        VM_WARN_ON(!node_online(nid));

Thanks,
Xishi Qiu


