Michal Hocko <mho...@suse.cz> wrote:

>On Tue 12-02-13 17:13:32, Michal Hocko wrote:
>> On Tue 12-02-13 16:43:30, Michal Hocko wrote:
>> [...]
>> The example was not complete:
>> 
>> > Wait a moment. But what prevents from the following race?
>> > 
>> > rcu_read_lock()
>> 
>> cgroup_next_descendant_pre
>> css_tryget(css);
>> memcg = mem_cgroup_from_css(css)             atomic_add(CSS_DEACT_BIAS,
>>                                                         &css->refcnt)
>> 
>> >                                            mem_cgroup_css_offline(memcg)
>> 
>> We should be safe if we did synchronize_rcu() before root->dead_count++,
>> no?
>> Because then we would have a guarantee that if css_tryget(memcg)
>> succeeded then we wouldn't race with the dead_count++ it triggered.
>> 
>> >                                            root->dead_count++
>> > iter->last_dead_count = root->dead_count
>> > iter->last_visited = memcg
>> >                                            // final
>> >                                            css_put(memcg);
>> > // last_visited is still valid
>> > rcu_read_unlock()
>> > [...]
>> > // next iteration
>> > rcu_read_lock()
>> > iter->last_dead_count == root->dead_count
>> > // KABOOM
>
>Ohh, I have missed that we took a reference on the current memcg which
>will be stored into last_visited. And then later, during the next
>iteration, it will still be alive until we are done because the previous
>patch moved css_put to the very end.
>So this race is not possible. I still need to think about parallel
>iteration and a race with removal.

I thought the whole point was to not have a reference in last_visited because
the iterator might be unused indefinitely :-)

We only store a pointer and validate it before use the next time around.  So I
think the race is still possible, but we can deal with it by not losing
concurrent dead_count changes, i.e. a single atomic read of the dead count in
the iterator function.
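
Roughly something like the sketch below.  This is a minimal user-space model
of that idea, not the actual patch: the struct layout and the helpers
memcg_tryget() and next_descendant() are stand-ins for css_tryget() and the
pre-order descendant walk.  The point is that dead_count is read exactly once
per iteration and the same snapshot is used both to validate last_visited and
to store back into last_dead_count, so a removal that bumps the counter
between the check and the store is never lost.

/*
 * Minimal user-space model of the "single dead_count read" idea.
 * Not kernel code: memcg_tryget() and next_descendant() are trivial
 * stand-ins so the sketch compiles standalone.
 */
#include <stdatomic.h>
#include <stddef.h>

struct memcg;				/* opaque stand-in for struct mem_cgroup */

struct reclaim_iter {
	struct memcg *last_visited;	/* cached position, no reference held */
	unsigned int last_dead_count;	/* dead_count snapshot at store time */
};

struct reclaim_root {
	atomic_uint dead_count;		/* bumped when a descendant goes away */
	struct reclaim_iter iter;
};

/* Trivial stubs standing in for css_tryget() and the descendant walk. */
static int memcg_tryget(struct memcg *m) { (void)m; return 1; }
static struct memcg *next_descendant(struct reclaim_root *root, struct memcg *prev)
{
	(void)root; (void)prev; return NULL;
}

struct memcg *iter_next(struct reclaim_root *root)
{
	struct reclaim_iter *iter = &root->iter;
	struct memcg *last = NULL, *next;
	unsigned int dead_count;

	/* The one atomic read; the same snapshot is reused below. */
	dead_count = atomic_load(&root->dead_count);

	/* Trust the cached position only if nothing died since it was stored. */
	if (dead_count == iter->last_dead_count) {
		last = iter->last_visited;
		if (last && !memcg_tryget(last))
			last = NULL;
	}

	next = next_descendant(root, last);

	/*
	 * Store the snapshot taken before the walk, not a fresh read.  If a
	 * removal raced with us, last_dead_count is already stale, so the
	 * next caller discards last_visited instead of dereferencing it.
	 */
	iter->last_visited = next;
	iter->last_dead_count = dead_count;

	return next;
}

The real iterator would of course also need the memory barriers pairing the
last_visited and last_dead_count accesses; they are left out of the sketch.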

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.