On Wed, May 27, 2020 at 06:01:20PM +0200, Vlastimil Babka wrote:
> On 5/26/20 11:42 PM, Roman Gushchin wrote:
>
> > @@ -549,17 +503,14 @@ static __always_inline int charge_slab_page(struct page *page,
> >  					    gfp_t gfp, int order,
> >  					    struct kmem_cache *s)
> >  {
> > -#ifdef CONFIG_MEMCG_KMEM
This is a fairly big but mostly red patch, which makes all accounted
slab allocations use a single set of kmem_caches instead of creating
a separate set for each memory cgroup. Because the number of non-root
kmem_caches is now capped by the number of root kmem_caches, there is
no need to shrink or destroy them prematurely; they can be destroyed
together with their root counterparts.
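To make the accounting model concrete, here is a minimal userspace
sketch of the per-object charging scheme this approach implies. All
names in it (struct slab_page, charge_object(), the fields of the
obj_cgroup stand-in, etc.) are made up for illustration and are not
the kernel's actual API: instead of cloning a kmem_cache per memcg,
a single shared cache keeps one obj_cgroup pointer per object on each
slab page, so objects belonging to different cgroups can share a page
and each object is charged to its own owner.

/*
 * Userspace sketch of per-object slab accounting.  Illustrative only;
 * layout and names do not match the kernel's.
 */
#include <stdio.h>

#define OBJS_PER_SLAB 4

/* Stand-in for struct obj_cgroup: one accounting handle per cgroup. */
struct obj_cgroup {
	const char *name;
	long charged_bytes;
};

/*
 * Stand-in for a slab page of the single, shared kmem_cache.  Instead
 * of the whole page belonging to one memcg (the old scheme: one cache
 * clone per memcg), the page carries a vector with one obj_cgroup
 * pointer per object.
 */
struct slab_page {
	struct obj_cgroup *obj_cgroups[OBJS_PER_SLAB];
};

/* Charge the object at @idx on @page to @objcg. */
static void charge_object(struct slab_page *page, int idx,
			  struct obj_cgroup *objcg, long size)
{
	page->obj_cgroups[idx] = objcg;
	objcg->charged_bytes += size;
}

/* Uncharge on free: look the owner up in the page's vector. */
static void uncharge_object(struct slab_page *page, int idx, long size)
{
	struct obj_cgroup *objcg = page->obj_cgroups[idx];

	if (objcg) {
		objcg->charged_bytes -= size;
		page->obj_cgroups[idx] = NULL;
	}
}

int main(void)
{
	struct obj_cgroup cg_a = { .name = "A" };
	struct obj_cgroup cg_b = { .name = "B" };
	struct slab_page page = { 0 };

	/* Objects from two different cgroups share one slab page. */
	charge_object(&page, 0, &cg_a, 64);
	charge_object(&page, 1, &cg_b, 64);
	charge_object(&page, 2, &cg_a, 64);

	printf("A: %ld bytes, B: %ld bytes\n",
	       cg_a.charged_bytes, cg_b.charged_bytes);

	uncharge_object(&page, 0, 64);
	printf("A after free: %ld bytes\n", cg_a.charged_bytes);
	return 0;
}

The per-object owner vector is also why the old per-memcg shrinking
and destruction machinery becomes unnecessary in this model: a live
object no longer pins a memcg-specific cache, only its own charge.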