On Thu, 11 Mar 2010 16:50:20 +0900
Daisuke Nishimura <[email protected]> wrote:

> On Thu, 11 Mar 2010 15:15:11 +0900, KAMEZAWA Hiroyuki 
> <[email protected]> wrote:
> > On Thu, 11 Mar 2010 14:13:00 +0900
> > KAMEZAWA Hiroyuki <[email protected]> wrote:
> > 
> > > On Thu, 11 Mar 2010 13:58:47 +0900
> > > Daisuke Nishimura <[email protected]> wrote:
> > > > > I'll consider yet another fix for the race in account migration if I can.
> > > > > 
> > > > me too.
> > > > 
> > > 
> > > How about this? Assume that the race is very rare.
> > > 
> > >   1. use trylock when updating statistics.
> > >      If trylock fails, don't account it.
> > > 
> > >   2. add a PCG_xxx flag for each status, as:
> > > 
> > > + PCG_ACCT_FILE_MAPPED, /* page is accounted as file rss*/
> > > + PCG_ACCT_DIRTY, /* page is dirty */
> > > + PCG_ACCT_WRITEBACK, /* page is being written back to disk */
> > > + PCG_ACCT_WRITEBACK_TEMP, /* page is used as temporary buffer for FUSE */
> > > + PCG_ACCT_UNSTABLE_NFS, /* NFS page not yet committed to the server */
> > > 
> > >   3. When reducing a counter, check and clear the PCG_xxx flag with
> > >      TESTCLEARPCGFLAG().
> > > 
> > > This is similar to the method already used for LRU accounting. And we can
> > > expect this method's error range never to grow very large.
> > > 
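To make (2) and (3) concrete, here is a minimal sketch, reusing the flag
macros from include/linux/page_cgroup.h. (The MEM_CGROUP_STAT_DIRTY index
and mem_cgroup_dec_dirty() are hypothetical, just for illustration.)

	enum {
		/* ... existing flags: PCG_LOCK, PCG_CACHE, PCG_USED, ... */
		PCG_ACCT_DIRTY,		/* page is dirty */
	};

	SETPCGFLAG(AcctDirty, ACCT_DIRTY)
	TESTCLEARPCGFLAG(AcctDirty, ACCT_DIRTY)

	/*
	 * Decrement side: only a page whose flag is still set is accounted,
	 * so a trylock failure at set time can never drive the counter
	 * negative; the error stays bounded per page.
	 * Caller holds lock_page_cgroup(), so preemption is off and
	 * __this_cpu_dec() is safe.
	 */
	static void mem_cgroup_dec_dirty(struct page_cgroup *pc,
					 struct mem_cgroup *mem)
	{
		if (TestClearPageCgroupAcctDirty(pc))
			__this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_DIRTY]);
	}
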
> I agree with you. I've been thinking about whether we can remove the page
> cgroup lock in update_stat, as we do in the lru handling code.
> 
> > > I think this kind of fuzzy accounting is enough for writeback status.
> > > Does anyone need strict accounting?
> > > 
> > 
> IMHO, we don't need strict accounting.
> 
> > How does this look?
> I agree with this direction. One concern is that we re-introduce "trylock" again...
> 
Yes, it's my concern, too.


> Some comments are inlined.

> > +   switch (idx) {
> > +   case MEMCG_NR_FILE_MAPPED:
> > +           if (charge) {
> > +                   if (!PageCgroupFileMapped(pc))
> > +                           SetPageCgroupFileMapped(pc);
> > +                   else
> > +                           val = 0;
> > +           } else {
> > +                   if (PageCgroupFileMapped(pc))
> > +                           ClearPageCgroupFileMapped(pc);
> > +                   else
> > +                           val = 0;
> > +           }
> Would using !TestSetPageCgroupFileMapped(pc) or TestClearPageCgroupFileMapped(pc)
> be better?
> 

I used this style because we're under the lock. (IOW, to show that we're
guarded by the lock.)
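
For reference, the Test-and-set variant would look roughly like this. (It
assumes a TestSetPageCgroupFileMapped() helper generated by a new
TESTSETPCGFLAG() macro, analogous to the existing TESTCLEARPCGFLAG() one.)

	case MEMCG_NR_FILE_MAPPED:
		if (charge) {
			/* already accounted? then don't add again */
			if (TestSetPageCgroupFileMapped(pc))
				val = 0;
		} else {
			/* not accounted? then don't subtract */
			if (!TestClearPageCgroupFileMapped(pc))
				val = 0;
		}
		idx = MEM_CGROUP_STAT_FILE_MAPPED;
		break;

Both forms behave the same under lock_page_cgroup(); the open-coded
test-then-set just makes the "we hold the lock" assumption visible.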


> > +           idx = MEM_CGROUP_STAT_FILE_MAPPED;
> > +           break;
> > +   default:
> > +           BUG();
> > +           break;
> > +   }
> >     /*
> >      * Preemption is already disabled. We can use __this_cpu_xxx
> >      */
> > -   __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED], val);
> > +   __this_cpu_add(mem->stat->count[idx], val);
> > +}
> >  
> > -done:
> > -   unlock_page_cgroup(pc);
> > +void mem_cgroup_update_stat(struct page *page, int idx, bool charge)
> > +{
> > +   struct page_cgroup *pc;
> > +
> > +   pc = lookup_page_cgroup(page);
> > +   if (unlikely(!pc))
> > +           return;
> > +
> > +   if (trylock_page_cgroup(pc)) {
> > +           __mem_cgroup_update_stat(pc, idx, charge);
> > +           unlock_page_cgroup(pc);
> > +   }
> > +   return;
> > +}
> > +
> > +static void mem_cgroup_migrate_stat(struct page_cgroup *pc,
> > +   struct mem_cgroup *from, struct mem_cgroup *to)
> > +{
> > +   preempt_disable();
> > +   if (PageCgroupFileMapped(pc)) {
> > +           __this_cpu_dec(from->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
> > +           __this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
> > +   }
> > +   preempt_enable();
> > +}
> > +
> I think preemption is already disabled here, too (by lock_page_cgroup()).
> 
Ah, yes. 
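
For reference, lock_page_cgroup() is a bit spinlock, and bit_spin_lock()
calls preempt_disable() before taking the bit, so the explicit
preempt_disable()/preempt_enable() pair is redundant when the caller
already holds the lock:

	/* include/linux/page_cgroup.h */
	static inline void lock_page_cgroup(struct page_cgroup *pc)
	{
		/* bit_spin_lock() disables preemption first */
		bit_spin_lock(PCG_LOCK, &pc->flags);
	}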


> > +static void
> > +__mem_cgroup_stat_fixup(struct page_cgroup *pc, struct mem_cgroup *mem)
> > +{
> > +   /* We're in uncharge() and under lock_page_cgroup() */
> > +   if (PageCgroupFileMapped(pc)) {
> > +           __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
> > +           ClearPageCgroupFileMapped(pc);
> > +   }
> >  }
> >  
> ditto.
> 
ok.

> >  /*
> > @@ -1810,13 +1859,7 @@ static void __mem_cgroup_move_account(st
> >     VM_BUG_ON(pc->mem_cgroup != from);
> >  
> >     page = pc->page;
> > -   if (page_mapped(page) && !PageAnon(page)) {
> > -           /* Update mapped_file data for mem_cgroup */
> > -           preempt_disable();
> > -           __this_cpu_dec(from->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
> > -           __this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
> > -           preempt_enable();
> > -   }
> > +   mem_cgroup_migrate_stat(pc, from, to);
> >     mem_cgroup_charge_statistics(from, pc, false);
> >     if (uncharge)
> >             /* This is not "cancel", but cancel_charge does all we need. */
> I welcome this fixup. IIUC, we have a stat leak in the current implementation.
> 

If necessary, I'd like to prepare a fixed version as an independent patch for mmotm.

Thanks,
-Kame
