From: Jing Xiangfeng <jingxiangf...@huawei.com>

We can use the following command to dynamically allocate huge pages:
        echo NR_HUGEPAGES > /proc/sys/vm/nr_hugepages
The count in __nr_hugepages_store_common() is parsed from this user
input and can be as large as ULONG_MAX. In that case the per-node
adjustment 'count += h->nr_huge_pages - h->nr_huge_pages_node[nid]'
overflows and count ends up with a bogus value.

Check for the overflow and return -EINVAL to fix this.
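
For illustration only (not part of the patch), a minimal userspace
sketch of why the 'count < old_count' comparison catches the
wrap-around; the delta value here is hypothetical and merely stands in
for the per-node adjustment:

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned long count = ULONG_MAX;  /* value written by the user */
                unsigned long old_count = count;

                /* stands in for h->nr_huge_pages - h->nr_huge_pages_node[nid] */
                unsigned long delta = 8;

                count += delta;                   /* wraps around to 7 */
                if (count < old_count)
                        printf("overflow detected, reject with -EINVAL\n");
                return 0;
        }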

Signed-off-by: Jing Xiangfeng <jingxiangf...@huawei.com>
---
 mm/hugetlb.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index afef616..55173c3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2423,7 +2423,12 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
                 * per node hstate attribute: adjust count to global,
                 * but restrict alloc/free to the specified node.
                 */
+               unsigned long old_count = count;
                count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+               if (count < old_count) {
+                       err = -EINVAL;
+                       goto out;
+               }
                init_nodemask_of_node(nodes_allowed, nid);
        } else
                nodes_allowed = &node_states[N_MEMORY];
-- 
2.7.4
