[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-18 Thread KAMEZAWA Hiroyuki
On Sun,  3 Oct 2010 23:57:55 -0700
Greg Thelen gthe...@google.com wrote:

 Greg Thelen (10):
   memcg: add page_cgroup flags for dirty page tracking
   memcg: document cgroup dirty memory interfaces
   memcg: create extensible page stat update routines
   memcg: disable local interrupts in lock_page_cgroup()
   memcg: add dirty page accounting infrastructure
   memcg: add kernel calls for memcg dirty page stats
   memcg: add dirty limits to mem_cgroup
   memcg: add cgroupfs interface to memcg dirty limits
   writeback: make determine_dirtyable_memory() static.
   memcg: check memcg dirty limits in page writeback

Greg, this is a patch on top of your set.

 mmotm-1014 
 - memcg-reduce-lock-hold-time-during-charge-moving.patch
   (I asked Andrew to drop this)
 + your 1,2,3,5,6,7,8,9,10 (dropped patch 4)

I'd be glad if you merge this into your set as a replacement for patch 4.
I'll prepare a performance improvement patch and post it if these dirty_limit
patches go to -mm.

Thank you for your work.

==
From: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com

Now, in supporting dirty limits, there is a deadlock problem in accounting.

 1. If pages are being migrated from a memcg, then updates to that
memcg's page statistics are protected by grabbing a bit spin lock
using lock_page_cgroup().  Recent changes to dirty page accounting
update memcg page accounting (specifically: the number of writeback
pages) from IRQ context (softirq).  Avoid a deadlocking nested
spin-lock attempt by an IRQ on the local processor when grabbing the
page_cgroup lock.

 2. The lock taken for update_stat is used only to avoid races with
move_account().  So IRQ awareness of lock_page_cgroup() itself is not a
problem; the problem is in update_stat() and move_account().

This patch therefore reworks the locking scheme of update_stat() and
move_account() by adding a new lock bit, PCG_MOVE_LOCK, which is always
taken with IRQs disabled.

Trade-off
  * Using lock_page_cgroup() + disabling IRQs has some performance impact,
and I think it's bad to disable IRQs when it's not necessary.
  * Adding a new lock makes move_account() slower.  Scores are below.

Performance impact: moving an 8G anon process.

Before:
	real	0m0.792s
	user	0m0.000s
	sys	0m0.780s

After:
	real	0m0.854s
	user	0m0.000s
	sys	0m0.842s

This score is bad, but planned optimization patches can reduce this impact.

Signed-off-by: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
---
 include/linux/page_cgroup.h |   31 ++++++++++++++++++++++++++++---
 mm/memcontrol.c             |    9 +++++++--
 2 files changed, 35 insertions(+), 5 deletions(-)

Index: dirty_limit_new/include/linux/page_cgroup.h
===================================================================
--- dirty_limit_new.orig/include/linux/page_cgroup.h
+++ dirty_limit_new/include/linux/page_cgroup.h
@@ -35,15 +35,18 @@ struct page_cgroup *lookup_page_cgroup(s
 
 enum {
/* flags for mem_cgroup */
-   PCG_LOCK,  /* page cgroup is locked */
+   PCG_LOCK,  /* Lock for pc->mem_cgroup and following bits. */
PCG_CACHE, /* charged as cache */
PCG_USED, /* this object is in use. */
-   PCG_ACCT_LRU, /* page has been accounted for */
+   PCG_MIGRATION, /* under page migration */
+   /* flags for mem_cgroup and file and I/O status */
+   PCG_MOVE_LOCK, /* For race between move_account vs. following bits */
PCG_FILE_MAPPED, /* page is accounted as mapped */
PCG_FILE_DIRTY, /* page is dirty */
PCG_FILE_WRITEBACK, /* page is under writeback */
PCG_FILE_UNSTABLE_NFS, /* page is NFS unstable */
-   PCG_MIGRATION, /* under page migration */
+   /* No lock in page_cgroup */
+   PCG_ACCT_LRU, /* page has been accounted for (under lru_lock) */
 };
 
 #define TESTPCGFLAG(uname, lname)  \
@@ -119,6 +122,10 @@ static inline enum zone_type page_cgroup
 
 static inline void lock_page_cgroup(struct page_cgroup *pc)
 {
+   /*
+* Don't take this lock in IRQ context.
+* This lock is for pc->mem_cgroup, USED, CACHE, MIGRATION
+*/
	bit_spin_lock(PCG_LOCK, &pc->flags);
 }
 
@@ -127,6 +134,24 @@ static inline void unlock_page_cgroup(st
	bit_spin_unlock(PCG_LOCK, &pc->flags);
 }
 
+static inline void move_lock_page_cgroup(struct page_cgroup *pc,
+   unsigned long *flags)
+{
+   /*
+* We know updates to pc->flags of page cache's stats can come from
+* either normal context or IRQ context. Disable IRQs to avoid deadlock.
+*/
+   local_irq_save(*flags);
+   bit_spin_lock(PCG_MOVE_LOCK, &pc->flags);
+}
+
+static inline void move_unlock_page_cgroup(struct page_cgroup *pc,
+   unsigned long *flags)
+{
+   bit_spin_unlock(PCG_MOVE_LOCK, &pc->flags);
+   local_irq_restore(*flags);
+}
+
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 struct page_cgroup;
 
Index: dirty_limit_new/mm/memcontrol.c
===================================================================
--- 
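
The mm/memcontrol.c hunk is cut off here.  As a minimal sketch of what the
update side looks like under this scheme (the function below is an assumption
reconstructed from the description above, not the patch's actual hunk):

	/*
	 * Sketch only: a stat update takes the IRQ-safe PCG_MOVE_LOCK, so it
	 * is safe against move_account() even when called from softirq
	 * context.  PCG_LOCK is deliberately not taken here because it is
	 * not IRQ-safe.
	 */
	static void mem_cgroup_update_page_stat(struct page_cgroup *pc,
						int idx, int val)
	{
		struct mem_cgroup *mem;
		unsigned long flags;

		move_lock_page_cgroup(pc, &flags);
		mem = pc->mem_cgroup;
		if (mem && PageCgroupUsed(pc))
			this_cpu_add(mem->stat->count[idx], val);
		move_unlock_page_cgroup(pc, &flags);
	}

move_account(), for its part, would take lock_page_cgroup() and then
move_lock_page_cgroup(), so both the PCG_FILE_* bits and pc->mem_cgroup stay
stable while statistics are transferred between groups.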

[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-18 Thread Ciju Rajan K
Greg Thelen wrote:
 Balbir Singh bal...@linux.vnet.ibm.com writes:
 * Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:

 This patch set provides the ability for each cgroup to have independent
 dirty page limits.

 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.
   
 Hi, Greg,

 I see a problem with "memcg: add dirty page accounting infrastructure".

 The reject is:

  enum mem_cgroup_write_page_stat_item {
 MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
 +   MEMCG_NR_FILE_DIRTY, /* # of dirty pages in page cache */
 +   MEMCG_NR_FILE_WRITEBACK, /* # of pages under writeback */
 +   MEMCG_NR_FILE_UNSTABLE_NFS, /* # of NFS unstable pages */
  };

 I don't see mem_cgroup_write_page_stat_item in memcontrol.h.  Is this
 based on top of Kame's cleanup?

 I am working off of mmotm 28 sept 2010 16:13.
 

 Balbir,

 All of the 10 memcg dirty limits patches should apply directly to mmotm
 28 sept 2010 16:13 without any other patches.  Any of Kame's cleanup
 patches that are not in mmotm are not needed by this memcg dirty limit
 series.

 The patch you refer to, [PATCH 05/10] memcg: add dirty page accounting
 infrastructure, depends on a change from an earlier patch in the series.
 Specifically, [PATCH 03/10] memcg: create extensible page stat update
 routines contains the addition of mem_cgroup_write_page_stat_item:

 --- a/include/linux/memcontrol.h
 +++ b/include/linux/memcontrol.h
 @@ -25,6 +25,11 @@ struct page_cgroup;
  struct page;
  struct mm_struct;

 +/* Stats that can be updated by kernel. */
 +enum mem_cgroup_write_page_stat_item {
 + MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
 +};
 +

 Do you have trouble applying patch 5 after applying patches 1-4?
   
I could apply all the patches cleanly on mmotm 28/09/2010. Kernel build 
also went through.



[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-18 Thread Greg Thelen
KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com writes:

 On Sun,  3 Oct 2010 23:57:55 -0700
 Greg Thelen gthe...@google.com wrote:

 Greg Thelen (10):
   memcg: add page_cgroup flags for dirty page tracking
   memcg: document cgroup dirty memory interfaces
   memcg: create extensible page stat update routines
   memcg: disable local interrupts in lock_page_cgroup()
   memcg: add dirty page accounting infrastructure
   memcg: add kernel calls for memcg dirty page stats
   memcg: add dirty limits to mem_cgroup
   memcg: add cgroupfs interface to memcg dirty limits
   writeback: make determine_dirtyable_memory() static.
   memcg: check memcg dirty limits in page writeback

 Greg, this is a patch on top of your set.

  mmotm-1014 
  - memcg-reduce-lock-hold-time-during-charge-moving.patch
(I asked Andrew to drop this)
  + your 1,2,3,5,6,7,8,9,10 (dropped patch 4)

 I'd be glad if you merge this into your set as a replacement for patch 4.
 I'll prepare a performance improvement patch and post it if these
 dirty_limit patches go to -mm.

Thanks for the patch.  I will merge your patch (below) as a replacement
of memcg dirty limits patch #4 and repost the entire series.

 Thank you for your work.

 ==
 From: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com

 Now, in supporting dirty limits, there is a deadlock problem in accounting.

  1. If pages are being migrated from a memcg, then updates to that
 memcg's page statistics are protected by grabbing a bit spin lock
 using lock_page_cgroup().  Recent changes to dirty page accounting
 update memcg page accounting (specifically: the number of writeback
 pages) from IRQ context (softirq).  Avoid a deadlocking nested
 spin-lock attempt by an IRQ on the local processor when grabbing the
 page_cgroup lock.

  2. The lock taken for update_stat is used only to avoid races with
 move_account().  So IRQ awareness of lock_page_cgroup() itself is not a
 problem; the problem is in update_stat() and move_account().

 This patch therefore reworks the locking scheme of update_stat() and
 move_account() by adding a new lock bit, PCG_MOVE_LOCK, which is always
 taken with IRQs disabled.

 Trade-off
   * Using lock_page_cgroup() + disabling IRQs has some performance impact,
 and I think it's bad to disable IRQs when it's not necessary.
   * Adding a new lock makes move_account() slower.  Scores are below.

 Performance impact: moving an 8G anon process.

 Before:
 	real	0m0.792s
 	user	0m0.000s
 	sys	0m0.780s

 After:
 	real	0m0.854s
 	user	0m0.000s
 	sys	0m0.842s

 This score is bad, but planned optimization patches can reduce this impact.

 Signed-off-by: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
 ---
  include/linux/page_cgroup.h |   31 ++++++++++++++++++++++++++++---
  mm/memcontrol.c             |    9 +++++++--
  2 files changed, 35 insertions(+), 5 deletions(-)

 Index: dirty_limit_new/include/linux/page_cgroup.h
 ===================================================================
 --- dirty_limit_new.orig/include/linux/page_cgroup.h
 +++ dirty_limit_new/include/linux/page_cgroup.h
 @@ -35,15 +35,18 @@ struct page_cgroup *lookup_page_cgroup(s
  
  enum {
   /* flags for mem_cgroup */
 - PCG_LOCK,  /* page cgroup is locked */
 + PCG_LOCK,  /* Lock for pc->mem_cgroup and following bits. */
   PCG_CACHE, /* charged as cache */
   PCG_USED, /* this object is in use. */
 - PCG_ACCT_LRU, /* page has been accounted for */
 + PCG_MIGRATION, /* under page migration */
 + /* flags for mem_cgroup and file and I/O status */
 + PCG_MOVE_LOCK, /* For race between move_account vs. following bits */
   PCG_FILE_MAPPED, /* page is accounted as mapped */
   PCG_FILE_DIRTY, /* page is dirty */
   PCG_FILE_WRITEBACK, /* page is under writeback */
   PCG_FILE_UNSTABLE_NFS, /* page is NFS unstable */
 - PCG_MIGRATION, /* under page migration */
 + /* No lock in page_cgroup */
 + PCG_ACCT_LRU, /* page has been accounted for (under lru_lock) */
  };
  
  #define TESTPCGFLAG(uname, lname)\
 @@ -119,6 +122,10 @@ static inline enum zone_type page_cgroup
  
  static inline void lock_page_cgroup(struct page_cgroup *pc)
  {
 + /*
 +  * Don't take this lock in IRQ context.
 +  * This lock is for pc->mem_cgroup, USED, CACHE, MIGRATION
 +  */
 	bit_spin_lock(PCG_LOCK, &pc->flags);
  }
  
 @@ -127,6 +134,24 @@ static inline void unlock_page_cgroup(st
 	bit_spin_unlock(PCG_LOCK, &pc->flags);
  }
  
 +static inline void move_lock_page_cgroup(struct page_cgroup *pc,
 + unsigned long *flags)
 +{
 + /*
 +  * We know updates to pc->flags of page cache's stats can come from
 +  * either normal context or IRQ context. Disable IRQs to avoid deadlock.
 +  */
 + local_irq_save(*flags);
 + bit_spin_lock(PCG_MOVE_LOCK, &pc->flags);
 +}
 +
 +static inline void move_unlock_page_cgroup(struct page_cgroup *pc,
 + unsigned long *flags)
 +{
 + bit_spin_unlock(PCG_MOVE_LOCK, &pc->flags);
 + 

[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-05 Thread Andrea Righi
On Sun, Oct 03, 2010 at 11:57:55PM -0700, Greg Thelen wrote:
 This patch set provides the ability for each cgroup to have independent dirty
 page limits.
 
 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.
 
 Overview:
 - Add page_cgroup flags to record when pages are dirty, in writeback, or nfs
   unstable.
 - Extend mem_cgroup to record the total number of pages in each of the 
   interesting dirty states (dirty, writeback, unstable_nfs).  
  - Add dirty parameters similar to the system-wide /proc/sys/vm/dirty_*
    limits to mem_cgroup.  The mem_cgroup dirty parameters are accessible
    via cgroupfs control files.
  - Consider both system and per-memcg dirty limits in page writeback when
    deciding to queue background writeback or block for foreground writeback
    (see the sketch after this list).
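
 A minimal sketch of the shape this takes, for concreteness; the struct and
 field names below are illustrative assumptions, not necessarily the series'
 actual declarations:

	/* Per-memcg analogues of the global /proc/sys/vm/dirty_* knobs. */
	struct vm_dirty_param {
		int dirty_ratio;                      /* percent-based limit */
		int dirty_background_ratio;
		unsigned long dirty_bytes;            /* absolute alternative */
		unsigned long dirty_background_bytes;
	};

	/*
	 * Writeback-side decision: a writer is throttled by whichever of
	 * the system-wide and per-memcg limits it crosses first.
	 */
	static int over_dirty_limit(unsigned long sys_dirty,
				    unsigned long sys_thresh,
				    unsigned long memcg_dirty,
				    unsigned long memcg_thresh)
	{
		return sys_dirty > sys_thresh || memcg_dirty > memcg_thresh;
	}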
 
 Known shortcomings:
 - When a cgroup dirty limit is exceeded, then bdi writeback is employed to
   writeback dirty inodes.  Bdi writeback considers inodes from any cgroup, not
   just inodes contributing dirty pages to the cgroup exceeding its limit.  
 
 Performance measurements:
 - kernel builds are unaffected unless run with a small dirty limit.
 - all data collected with CONFIG_CGROUP_MEM_RES_CTLR=y.
 - dd has three data points (in secs) for three data sizes (100M, 200M,
   and 1G).  As expected, dd slows when it exceeds its cgroup dirty limit.
 
               kernel_build   dd
 mmotm         2:37           0.18, 0.38, 1.65
   root_memcg

 mmotm         2:37           0.18, 0.35, 1.66
   non-root_memcg

 mmotm+patches 2:37           0.18, 0.35, 1.68
   root_memcg

 mmotm+patches 2:37           0.19, 0.35, 1.69
   non-root_memcg

 mmotm+patches 2:37           0.19, 2.34, 22.82
   non-root_memcg
   150 MiB memcg dirty limit

 mmotm+patches 3:58           1.71, 3.38, 17.33
   non-root_memcg
   1 MiB memcg dirty limit

Hi Greg,

the patchset seems to work fine on my box.

I also ran a pretty simple test to directly verify the effectiveness of
the dirty memory limit, using a dd running on a non-root memcg:

  dd if=/dev/zero of=tmpfile bs=1M count=512

and monitoring the maximum of the dirty value in cgroup/memory.stat.

Here are the results:
  dd in non-root memcg (  4 MiB memcg dirty limit): dirty max=4227072
  dd in non-root memcg (  8 MiB memcg dirty limit): dirty max=8454144
  dd in non-root memcg ( 16 MiB memcg dirty limit): dirty max=15179776
  dd in non-root memcg ( 32 MiB memcg dirty limit): dirty max=32235520
  dd in non-root memcg ( 64 MiB memcg dirty limit): dirty max=64245760
  dd in non-root memcg (128 MiB memcg dirty limit): dirty max=121028608
  dd in non-root memcg (256 MiB memcg dirty limit): dirty max=232865792
  dd in non-root memcg (512 MiB memcg dirty limit): dirty max=445194240
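
A small, self-contained sketch of such a monitor (the cgroup mount point,
the "dirty" key parsed from memory.stat, and the polling interval are
assumptions to adjust for your setup):

	/* monitor_dirty.c: poll memory.stat and report the max "dirty" seen. */
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	#define STAT_PATH "/cgroups/foo/memory.stat"	/* assumed mount point */

	int main(void)
	{
		unsigned long long val, max = 0;
		char key[64];
		FILE *f;

		for (;;) {
			f = fopen(STAT_PATH, "r");
			if (!f)
				return 1;
			/* memory.stat is "key value" pairs, one per line */
			while (fscanf(f, "%63s %llu", key, &val) == 2)
				if (!strcmp(key, "dirty") && val > max)
					max = val;
			fclose(f);
			printf("dirty max=%llu\n", max);
			usleep(100 * 1000);	/* sample every 100 ms */
		}
	}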

-Andrea


[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-05 Thread Balbir Singh
* Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:

 This patch set provides the ability for each cgroup to have independent dirty
 page limits.
 
 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.
 
 Overview:
 - Add page_cgroup flags to record when pages are dirty, in writeback, or nfs
   unstable.
 - Extend mem_cgroup to record the total number of pages in each of the 
   interesting dirty states (dirty, writeback, unstable_nfs).  
 - Add dirty parameters similar to the system-wide /proc/sys/vm/dirty_*
   limits to mem_cgroup.  The mem_cgroup dirty parameters are accessible
   via cgroupfs control files.
 - Consider both system and per-memcg dirty limits in page writeback when
   deciding to queue background writeback or block for foreground writeback.
 
 Known shortcomings:
 - When a cgroup dirty limit is exceeded, then bdi writeback is employed to
   writeback dirty inodes.  Bdi writeback considers inodes from any cgroup, not
   just inodes contributing dirty pages to the cgroup exceeding its limit.  
 
 Performance measurements:
 - kernel builds are unaffected unless run with a small dirty limit.
 - all data collected with CONFIG_CGROUP_MEM_RES_CTLR=y.
 - dd has three data points (in secs) for three data sizes (100M, 200M,
   and 1G).  As expected, dd slows when it exceeds its cgroup dirty limit.
 
               kernel_build   dd
 mmotm         2:37           0.18, 0.38, 1.65
   root_memcg

 mmotm         2:37           0.18, 0.35, 1.66
   non-root_memcg

 mmotm+patches 2:37           0.18, 0.35, 1.68
   root_memcg

 mmotm+patches 2:37           0.19, 0.35, 1.69
   non-root_memcg

 mmotm+patches 2:37           0.19, 2.34, 22.82
   non-root_memcg
   150 MiB memcg dirty limit

 mmotm+patches 3:58           1.71, 3.38, 17.33
   non-root_memcg
   1 MiB memcg dirty limit


Greg, could you please try the parallel page fault test?  Could you
look at commits 0c3e73e84fe3f64cf1c2e8bb4e91e8901cbcdc38 and
569b846df54ffb2827b83ce3244c5f032394cba4 for examples?

-- 
Three Cheers,
Balbir


[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-04 Thread Balbir Singh
* Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:

 This patch set provides the ability for each cgroup to have independent dirty
 page limits.
 
 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.
 
 Overview:
 - Add page_cgroup flags to record when pages are dirty, in writeback, or nfs
   unstable.
 - Extend mem_cgroup to record the total number of pages in each of the 
   interesting dirty states (dirty, writeback, unstable_nfs).  
 - Add dirty parameters similar to the system-wide /proc/sys/vm/dirty_*
   limits to mem_cgroup.  The mem_cgroup dirty parameters are accessible
   via cgroupfs control files.
 - Consider both system and per-memcg dirty limits in page writeback when
   deciding to queue background writeback or block for foreground writeback.
 
 Known shortcomings:
 - When a cgroup dirty limit is exceeded, then bdi writeback is employed to
   writeback dirty inodes.  Bdi writeback considers inodes from any cgroup, not
   just inodes contributing dirty pages to the cgroup exceeding its limit.  

I suspect this means that we'll need a bdi controller in the I/O
controller spectrum, or to make writeback cgroup-aware.

 
 Performance measurements:
 - kernel builds are unaffected unless run with a small dirty limit.
 - all data collected with CONFIG_CGROUP_MEM_RES_CTLR=y.
 - dd has three data points (in secs) for three data sizes (100M, 200M,
   and 1G).  As expected, dd slows when it exceeds its cgroup dirty limit.
 
               kernel_build   dd
 mmotm         2:37           0.18, 0.38, 1.65
   root_memcg

 mmotm         2:37           0.18, 0.35, 1.66
   non-root_memcg

 mmotm+patches 2:37           0.18, 0.35, 1.68
   root_memcg

 mmotm+patches 2:37           0.19, 0.35, 1.69
   non-root_memcg

 mmotm+patches 2:37           0.19, 2.34, 22.82
   non-root_memcg
   150 MiB memcg dirty limit

 mmotm+patches 3:58           1.71, 3.38, 17.33
   non-root_memcg
   1 MiB memcg dirty limit
 
 Greg Thelen (10):
   memcg: add page_cgroup flags for dirty page tracking
   memcg: document cgroup dirty memory interfaces
   memcg: create extensible page stat update routines
   memcg: disable local interrupts in lock_page_cgroup()
   memcg: add dirty page accounting infrastructure
   memcg: add kernel calls for memcg dirty page stats
   memcg: add dirty limits to mem_cgroup
   memcg: add cgroupfs interface to memcg dirty limits
   writeback: make determine_dirtyable_memory() static.
   memcg: check memcg dirty limits in page writeback
 
  Documentation/cgroups/memory.txt |   37 
  fs/nfs/write.c   |4 +
  include/linux/memcontrol.h   |   78 +++-
  include/linux/page_cgroup.h  |   31 +++-
  include/linux/writeback.h|2 -
  mm/filemap.c |1 +
  mm/memcontrol.c  |  426 ++
  mm/page-writeback.c  |  211 ---
  mm/rmap.c|4 +-
  mm/truncate.c|1 +
  10 files changed, 672 insertions(+), 123 deletions(-)
 

-- 
Three Cheers,
Balbir


[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-04 Thread Balbir Singh
* Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:

 This patch set provides the ability for each cgroup to have independent dirty
 page limits.
 
 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.

Hi, Greg,

I see a problem with "memcg: add dirty page accounting infrastructure".

The reject is:

 enum mem_cgroup_write_page_stat_item {
MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
+   MEMCG_NR_FILE_DIRTY, /* # of dirty pages in page cache */
+   MEMCG_NR_FILE_WRITEBACK, /* # of pages under writeback */
+   MEMCG_NR_FILE_UNSTABLE_NFS, /* # of NFS unstable pages */
 };

I don't see mem_cgroup_write_page_stat_item in memcontrol.h.  Is this
based on top of Kame's cleanup?

I am working off of mmotm 28 sept 2010 16:13.


-- 
Three Cheers,
Balbir


[Devel] Re: [PATCH 00/10] memcg: per cgroup dirty page accounting

2010-10-04 Thread Greg Thelen
Balbir Singh bal...@linux.vnet.ibm.com writes:

 * Greg Thelen gthe...@google.com [2010-10-03 23:57:55]:

 This patch set provides the ability for each cgroup to have independent dirty
 page limits.
 
 Limiting dirty memory is like fixing the max amount of dirty (hard to
 reclaim) page cache used by a cgroup.  So, in case of multiple cgroup
 writers, they will not be able to consume more than their designated share
 of dirty pages and will be forced to perform write-out if they cross that
 limit.

 These patches were developed and tested on mmotm 2010-09-28-16-13.  The
 patches are based on a series proposed by Andrea Righi in Mar 2010.

 Hi, Greg,

 I see a problem with "memcg: add dirty page accounting infrastructure".

 The reject is:

  enum mem_cgroup_write_page_stat_item {
 MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
 +   MEMCG_NR_FILE_DIRTY, /* # of dirty pages in page cache */
 +   MEMCG_NR_FILE_WRITEBACK, /* # of pages under writeback */
 +   MEMCG_NR_FILE_UNSTABLE_NFS, /* # of NFS unstable pages */
  };

 I don't see mem_cgroup_write_page_stat_item in memcontrol.h.  Is this
 based on top of Kame's cleanup?

 I am working off of mmotm 28 sept 2010 16:13.

Balbir,

All of the 10 memcg dirty limits patches should apply directly to mmotm
28 sept 2010 16:13 without any other patches.  Any of Kame's cleanup
patches that are not in mmotm are not needed by this memcg dirty limit
series.

The patch you refer to, [PATCH 05/10] memcg: add dirty page accounting
infrastructure, depends on a change from an earlier patch in the series.
Specifically, [PATCH 03/10] memcg: create extensible page stat update
routines contains the addition of mem_cgroup_write_page_stat_item:

--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -25,6 +25,11 @@ struct page_cgroup;
 struct page;
 struct mm_struct;
 
+/* Stats that can be updated by kernel. */
+enum mem_cgroup_write_page_stat_item {
+ MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
+};
+

Do you have trouble applying patch 5 after applying patches 1-4?
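
For reference, combining the hunk above from patch 3 with the patch 5
extension quoted earlier in this thread, the enum after both patches apply
reads:

	/* Stats that can be updated by kernel. */
	enum mem_cgroup_write_page_stat_item {
		MEMCG_NR_FILE_MAPPED, /* # of pages charged as file rss */
		MEMCG_NR_FILE_DIRTY, /* # of dirty pages in page cache */
		MEMCG_NR_FILE_WRITEBACK, /* # of pages under writeback */
		MEMCG_NR_FILE_UNSTABLE_NFS, /* # of NFS unstable pages */
	};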