On Thu, Mar 18, 2010 at 8:00 PM, KAMEZAWA Hiroyuki 
<kamezawa.hir...@jp.fujitsu.com> wrote:
> On Fri, 19 Mar 2010 08:10:39 +0530
> Balbir Singh <bal...@linux.vnet.ibm.com> wrote:
>
>> * KAMEZAWA Hiroyuki <kamezawa.hir...@jp.fujitsu.com> [2010-03-19 10:23:32]:
>>
>> > On Thu, 18 Mar 2010 21:58:55 +0530
>> > Balbir Singh <bal...@linux.vnet.ibm.com> wrote:
>> >
>> > > * KAMEZAWA Hiroyuki <kamezawa.hir...@jp.fujitsu.com> [2010-03-18 
>> > > 13:35:27]:
>> >
>> > > > Then, no problem. It's OK to add mem_cgroup_update_stat() independent
>> > > > of mem_cgroup_update_file_mapped(). It may look messy, but that's not
>> > > > your fault. But please explain why the new function is added in the
>> > > > patch description.
>> > > >
>> > > > I'm sorry for wasting your time.
>> > >
>> > > Do we need to go down this route? We could check the stat and do the
>> > > correct thing: in the case of FILE_MAPPED, always grab page_cgroup_lock,
>> > > and for the others potentially use trylock. It is OK for different
>> > > stats to be protected by different locks.
>> > >
>> >
>> > I _don't_ want to see a mixture of spinlock and trylock in a function.
>> >
>>
>> A well-documented, well-written function can help. The other option is, of
>> course, to solve this correctly by introducing different locking around
>> the statistics. Are you suggesting the latter?
>>
>
> No. As I wrote:
>        - don't modify the code around FILE_MAPPED in this series.
>        - add new functions for the new statistics.
> Then,
>        - think about cleanup later, after we confirm everything works as
> expected.

I have ported Andrea Righi's memcg dirty page accounting patches to the latest
mmotm-2010-04-05-16-09.  In doing so I had to address this locking issue.  Does
the following look good?  I will (of course) submit the entire patch for review,
but I wanted to make sure I was aiming in the right direction.

void mem_cgroup_update_page_stat(struct page *page,
                        enum mem_cgroup_write_page_stat_item idx, bool charge)
{
        struct page_cgroup *pc;

        if (mem_cgroup_disabled())
                return;
        pc = lookup_page_cgroup(page);
        if (!pc || mem_cgroup_is_root(pc->mem_cgroup))
                return;

        /*
         * This routine does not disable irq when updating stats.  So it is
         * possible that a stat update from within interrupt routine, could
         * deadlock.  Use trylock_page_cgroup() to avoid such deadlock.  This
         * makes the memcg counters fuzzy.  More complicated, or lower
         * performing locking solutions avoid this fuzziness, but are not
         * currently needed.
         */
        if (irqs_disabled()) {
                if (!trylock_page_cgroup(pc))
                        return;
        } else {
                lock_page_cgroup(pc);
        }

        __mem_cgroup_update_page_stat(pc, idx, charge);
        unlock_page_cgroup(pc);
}
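
For context, here is a sketch (mine, not from the patch) of the deadlock the
trylock path avoids.  lock_page_cgroup() is not irq-safe, so an interrupt that
updates a stat on the same page_cgroup can spin on a lock that its own
interrupted process context already holds:

        /*
         * CPU0, process context        CPU0, interrupt handler
         * ---------------------        -----------------------
         * lock_page_cgroup(pc)
         *                              lock_page_cgroup(pc)  <- spins forever;
         *                                 irqs are off, so the lock holder
         *                                 never runs again on this CPU
         */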

__mem_cgroup_update_page_stat() has a switch statement that updates each of the
MEMCG_NR_FILE_{MAPPED,DIRTY,WRITEBACK,WRITEBACK_TEMP,UNSTABLE_NFS} counters
using the following form.  val enters the switch as 1 (charge) or -1 (uncharge)
and is zeroed when the per-page flag already matches the requested state, so
redundant updates do not change the counter:
        switch (idx) {
        case MEMCG_NR_FILE_MAPPED:
                if (charge) {
                        if (!PageCgroupFileMapped(pc))
                                SetPageCgroupFileMapped(pc);
                        else
                                val = 0;
                } else {
                        if (PageCgroupFileMapped(pc))
                                ClearPageCgroupFileMapped(pc);
                        else
                                val = 0;
                }
                idx = MEM_CGROUP_STAT_FILE_MAPPED;
                break;

                ...
        }

        /*
         * Preemption is already disabled. We can use __this_cpu_xxx
         */
        if (val > 0) {
                __this_cpu_inc(mem->stat->count[idx]);
        } else if (val < 0) {
                __this_cpu_dec(mem->stat->count[idx]);
        }
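
For illustration only, one of the elided cases might look like this (the
PageCgroupFileDirty flag helpers and the MEM_CGROUP_STAT_FILE_DIRTY name below
are my guesses, simply mirroring the FILE_MAPPED case above):

        case MEMCG_NR_FILE_DIRTY:
                /* hypothetical: same shape as the FILE_MAPPED case */
                if (charge) {
                        if (!PageCgroupFileDirty(pc))
                                SetPageCgroupFileDirty(pc);
                        else
                                val = 0;
                } else {
                        if (PageCgroupFileDirty(pc))
                                ClearPageCgroupFileDirty(pc);
                        else
                                val = 0;
                }
                idx = MEM_CGROUP_STAT_FILE_DIRTY;
                break;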

In my current tree, irqs are never saved/restored by the cgroup locking code.
To protect against interrupt reentrancy, trylock_page_cgroup() is used whenever
irqs are disabled.  As the comment above indicates, this makes the new counters
fuzzy: an update that fails to take the lock is silently dropped.
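
To make the intended usage concrete, here is a hypothetical call site.  The
placement inside account_page_dirtied() is an assumption on my part; only the
mem_cgroup_update_page_stat() line is the new hook:

void account_page_dirtied(struct page *page, struct address_space *mapping)
{
        if (mapping_cap_account_dirty(mapping)) {
                /* hypothetical placement of the new memcg hook */
                mem_cgroup_update_page_stat(page, MEMCG_NR_FILE_DIRTY, true);
                __inc_zone_page_state(page, NR_FILE_DIRTY);
                /* ... existing bdi and task accounting ... */
        }
}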

--
Greg