The page_counter rounds limits down to page size values.  This makes
sense, except in the case of hugetlb_cgroup, where it is not possible
to charge partial hugepages.

Round the hugetlb_cgroup limit down to hugepage size.

Signed-off-by: David Rientjes <[email protected]>
---
 mm/hugetlb_cgroup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -288,6 +288,7 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
 
        switch (MEMFILE_ATTR(of_cft(of)->private)) {
        case RES_LIMIT:
+               nr_pages &= ~((1 << huge_page_order(&hstates[idx])) - 1);
                mutex_lock(&hugetlb_limit_mutex);
                ret = page_counter_limit(&h_cg->hugepage[idx], nr_pages);
                mutex_unlock(&hugetlb_limit_mutex);
