On Thu, 11 Mar 2010 09:39:13 +0900
KAMEZAWA Hiroyuki <kamezawa.hir...@jp.fujitsu.com> wrote:
> > The performance overhead is not huge in either solution, but the impact on
> > performance is even smaller with the more complicated solution...
> > 
> > Maybe we can go ahead with the simplest implementation for now and start
> > thinking about an alternative implementation of the page_cgroup locking and
> > charge/uncharge of pages.
> > 
> 
> Maybe. But over these 2 years, one of our biggest concerns has been performance.
> That's why we do some complex things in memcg. But complex locking is, yes, complex.
> Hmm.. I wouldn't bet that we can fix the locking scheme without something complex.
> 
But the overall patch set seems good (to me). And dirty_ratio and
dirty_background_ratio will give us more benefit (in performance) than we lose
to the small overheads.

IIUC, this series affects the trigger for background write-out.

Could you show some numbers for the benefit dirty_ratio gives us in cases such as
the following (see the rough test sketch after this list)?

        - copying a file in a memcg which hits its limit
          ex) copying a 100MB file under a 120MB limit, etc.

        - kernel make performance in a limited memcg
          ex) building a kernel under a 100MB limit (too large?)
    etc. (i.e. whenever an application does many writes and hits the memcg's limit.)
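For the first case, a minimal sketch of how such a test could be set up (this is
only an assumption about the measurement, not the exact test being asked for; the
/cgroup/memory mount point, the group name "dirty-test" and the file names are
placeholders):

    # Create a memcg with a 120MB limit (mount point is an assumption).
    mkdir -p /cgroup/memory/dirty-test
    echo 120M > /cgroup/memory/dirty-test/memory.limit_in_bytes

    # Make a 100MB source file before entering the limited group.
    dd if=/dev/zero of=/tmp/src.img bs=1M count=100

    # Move the current shell into the memcg, then time the copy plus sync.
    echo $$ > /cgroup/memory/dirty-test/tasks
    time sh -c 'cp /tmp/src.img /tmp/dst.img; sync'

The kernel-make case would presumably be the same setup with a 100MB limit and
"time make -jN" run from inside the group, measured with and without the series.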

But please get enough acks for the changes to the generic dirty_ratio code.

Thank you for your work.

Regards,
-Kame

