On 08/10/2012 09:27 PM, Kamezawa Hiroyuki wrote:
>> > +bool __memcg_kmem_new_page(gfp_t gfp, void *_handle, int order)
>> > +{
>> > +  struct mem_cgroup *memcg;
>> > +  struct mem_cgroup **handle = (struct mem_cgroup **)_handle;
>> > +  bool ret = true;
>> > +  size_t size;
>> > +  struct task_struct *p;
>> > +
>> > +  *handle = NULL;
>> > +  rcu_read_lock();
>> > +  p = rcu_dereference(current->mm->owner);
>> > +  memcg = mem_cgroup_from_task(p);
>> > +  if (!memcg_kmem_enabled(memcg))
>> > +          goto out;
>> > +
>> > +  mem_cgroup_get(memcg);
>> > +
> This mem_cgroup_get() will be a potential performance problem.
> Don't you have a good idea to avoid accessing an atomic counter here?
> I think some kind of percpu counter, or a feature to disable "move task",
> would be a help.
> 
> 

I have just sent out a proposal to deal with this. I tried the trick of
taking the reference only on the first charge and dropping it only on the
last uncharge, and it works quite well, at the cost of a bit test on most
calls to memcg_kmem_charge.

Please let me know what you think.

