On 2/22/19 6:01 AM, Jing Xiangfeng wrote:
Thanks, just a couple small changes.

> A user can change a node specific hugetlb count, i.e.
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB

Please make that,
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

> The calculated value of count is the total number of huge pages. It can
> overflow when a user enters a very large value. If so, the total number
> of huge pages could end up as a small value, which is not what the user
> expects. We can fix this by setting count to ULONG_MAX on overflow and
> continuing. This is more in line with the user's intention of allocating
> as many huge pages as possible.
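The wraparound is easy to demonstrate with made up numbers: if
h->nr_huge_pages is 100 and h->nr_huge_pages_node[nid] is 0, a request of
ULONG_MAX - 50 wraps count around to 49, shrinking the pool instead of
growing it. A standalone userspace sketch of the same arithmetic
(illustrative values only, not kernel code):

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
                /*
                 * Illustrative values: a huge user request against a
                 * hypothetical pool of 100 global / 0 node local pages.
                 */
                unsigned long count = ULONG_MAX - 50;
                unsigned long nr_huge_pages = 100;
                unsigned long nr_huge_pages_node = 0;

                count += nr_huge_pages - nr_huge_pages_node;
                /* unsigned wraparound: prints "count = 49" */
                printf("count = %lu\n", count);
                return 0;
        }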
> 
> Signed-off-by: Jing Xiangfeng <jingxiangf...@huawei.com>
> ---
>  mm/hugetlb.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index afef616..18fa7d7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2423,7 +2423,10 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
>                * per node hstate attribute: adjust count to global,
>                * but restrict alloc/free to the specified node.
>                */
> +             unsigned long old_count = count;
>               count += h->nr_huge_pages - h->nr_huge_pages_node[nid];

Also, adding a comment here about checking for overflow would help people
reading the code.  Something like,

                /*
                 * If user specified count causes overflow, set to
                 * largest possible value.
                 */

> +             if (count < old_count)
> +                     count = ULONG_MAX;
>               init_nodemask_of_node(nodes_allowed, nid);
>       } else
>               nodes_allowed = &node_states[N_MEMORY];
> 
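i.e. with the comment folded in, the relevant code would end up reading
something like this (sketch only):

        unsigned long old_count = count;

        count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
        /*
         * If user specified count causes overflow, set to
         * largest possible value.
         */
        if (count < old_count)
                count = ULONG_MAX;
        init_nodemask_of_node(nodes_allowed, nid);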

--
Mike Kravetz