To avoid frequent TLB flushes, vmap_area structs are freed lazily, in batches, once enough of them have accumulated. As a result, accounting them to a memcg can pin the memcg itself, along with its kmem caches, for an indefinitely long time, which is not good. To avoid this, this patch makes allocations of vmap_area's go through the root cgroup. Since these objects are small and there cannot be many of them, skipping memcg accounting for them does not open any security holes.
Signed-off-by: Vladimir Davydov <[email protected]>
---
 mm/vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 06a44461ccd5..ac32dca89d4f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -357,7 +357,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(!is_power_of_2(align));
 
 	va = kmalloc_node(sizeof(struct vmap_area),
-			gfp_mask & GFP_RECLAIM_MASK, node);
+			(gfp_mask & GFP_RECLAIM_MASK) | __GFP_NOACCOUNT, node);
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
-- 
1.7.10.4

_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel
