On Thu, Nov 06, 2014 at 09:01:52AM -0600, Christoph Lameter wrote:
> On Thu, 6 Nov 2014, Vladimir Davydov wrote:
>
> > I call memcg_kmem_recharge_slab only on the alloc path. The free path
> > isn't touched. The overhead added is one function call. The function
> > only reads and compares two pointers under RCU most of the time. [...]
On Thu, 6 Nov 2014, Vladimir Davydov wrote:
> I call memcg_kmem_recharge_slab only on the alloc path. The free path
> isn't touched. The overhead added is one function call. The function
> only reads and compares two pointers under RCU most of the time. This
> is comparable to the overhead introduced by memcg [...]
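Roughly, the fast path Vladimir describes would have this shape (a sketch
only; page_memcg(), cache_memcg() and __memcg_kmem_recharge_slab() are
illustrative names, not the actual patch):

	static inline int memcg_kmem_recharge_slab(struct kmem_cache *cachep,
						   void *objp)
	{
		struct page *page = virt_to_head_page(objp);
		bool match;

		/*
		 * Common case: two pointer reads and one compare under
		 * RCU, then we are done.
		 */
		rcu_read_lock();
		match = page_memcg(page) == cache_memcg(cachep);
		rcu_read_unlock();

		if (likely(match))
			return 0;

		/* Slow path: move the page's charge (outlined further down). */
		return __memcg_kmem_recharge_slab(cachep, page);
	}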
Hi Christoph,

On Wed, Nov 05, 2014 at 12:43:31PM -0600, Christoph Lameter wrote:
> On Mon, 3 Nov 2014, Vladimir Davydov wrote:
>
> > +static __always_inline void slab_free(struct kmem_cache *cachep, void *objp);
> > +
> > static __always_inline void *
> > slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> > 		   unsigned long caller)
[...]
On Mon, 3 Nov 2014, Vladimir Davydov wrote:
> +static __always_inline void slab_free(struct kmem_cache *cachep, void *objp);
> +
> static __always_inline void *
> slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> 		unsigned long caller)
> @@ -3185,6 +3187,10 @@
[...]
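The forward declaration of slab_free() suggests what the hunk around line
3185 is for: if the object just allocated sits on a slab that cannot be
recharged, the allocation has to be backed out. A guessed shape of that
hunk (illustration only, not the verbatim patch):

	static __always_inline void *
	slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
			unsigned long caller)
	{
		void *objp;

		/* ... existing allocation logic unchanged ... */

		/*
		 * New: make sure the slab backing objp is charged to a
		 * live cgroup; otherwise undo the allocation, which is
		 * why slab_free() now needs a forward declaration.
		 */
		if (objp && unlikely(memcg_kmem_recharge_slab(cachep, objp))) {
			slab_free(cachep, objp);
			objp = NULL;
		}
		return objp;
	}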
Since we now reuse per-cgroup kmem caches, the slab we allocate an object
from may be accounted to a dead memory cgroup. If we leave such a slab
accounted to a dead cgroup, we risk pinning the cgroup forever, so we
introduce a new function, memcg_kmem_recharge_slab, which is to be called
at the end [...]
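Conceptually, the slow path would then move the page's charge from the
old (possibly dead) cgroup to the cgroup that owns the cache now. A
hypothetical outline (every helper name here is an assumption, not a
real kernel API):

	static noinline int __memcg_kmem_recharge_slab(struct kmem_cache *cachep,
						       struct page *page)
	{
		struct mem_cgroup *from = page_memcg(page);	/* hypothetical */
		struct mem_cgroup *to = cache_memcg(cachep);	/* hypothetical */

		/*
		 * Charge the new owner first, so that a failure leaves
		 * the old charge intact.
		 */
		if (charge_slab_page(to, page))			/* hypothetical */
			return -ENOMEM;
		uncharge_slab_page(from, page);			/* hypothetical */
		set_page_memcg(page, to);			/* hypothetical */
		return 0;
	}

This ordering keeps accounting consistent on failure: the allocation
path above can simply free the object and return NULL.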